Research Groups and Themes
The Northumbria Cybersecurity Research Group (NCSRG) conducts work primarily across two departments (CIS – Computer and Information Sciences – and Psychology), grouped into the three major areas detailed below, with CyBOK areas flagged in italics.
- Usable, Human-centred Sociotechnical Security – This work links colleagues in the Department of Psychology and in CIS, and the focus is on a social/psychological approach to human-centred, usable security (Human Factors). The aim is to model cybersecurity behaviour across a number of contexts, apply psychological models of behaviour change, assess the psychological correlates of cyber risk and cyber insurance uptake (Risk Management and Governance), and take an inclusive approach to cyber security, addressing the issues facing marginalised communities. An additional focus of a recent project (Combatting Criminals in the Cloud, EP/M020576/1) was an improved understanding of cybercriminal behaviours (Adversarial Behaviours). This group has been closely involved with the Research Institute in Cyber Security (RISCS) since its formation. Ongoing work is linked to the EPSRC-funded Centre for Digital Citizens (EP/T022582/1), a joint award (2020–25) to Newcastle and Northumbria Universities, where the Northumbria team leads on the ‘Safe Citizen’ challenge area, investigating means to reduce social engineering attacks, improve domestic and workplace cyber-resilience (Human Factors), improve security culture, and better understand algorithmic governance and censorship in social media (Risk Management and Governance; Privacy and Security). Work in this group also addresses the trustworthy use of data and AI, in collaboration with the Alan Turing Institute in support of the Safe and Ethical AI Programme, focused in particular on criminal justice and national security (Privacy and Security).
- Intrusion detection, Hate speech/Cyberbullying detection and Synthetic Data Detection – This work links colleagues in CIS, and the focus is on the technical aspects of cyber security. It addresses intrusion detection and response, malware/botnet detection, penetration testing, phishing detection, hate speech/cyberbullying detection, fake news and synthetic data detection, and web security issues (Malware & Attack Technologies; Software Security; Network Security; Security Operations and Incident Management), as well as new work on distributed security and identity management via blockchain and related technologies (Distributed Systems Security; Authentication, Authorisation and Accountability). The team works closely with collaborative partners across the UK and internationally, including Lockheed Martin, the Royal Thai Air Force, Mae Fah Luang University (Thailand), the Defence School of Communications and Information Systems of the UK Ministry of Defence, the Defence Technology Institute of the Thai Ministry of Defence, and T-Net Co. Ltd (Thailand). Examples of recent work include: (1) a collaboration with Lockheed Martin to develop prototype software that automatically produces an industry-standard penetration-testing report after a network scan and alerts on any network intrusion traces, via either a standalone application or a web interface; (2) industry funding for a project investigating IoT Botnet and Malware Detection in Smart Buildings through artificial intelligence, addressing the challenges of applying an AI/ML solution to IoT devices, which can be weak security links in a network; (3) funding through the REPHRAIN programme for an interdisciplinary project investigating the role of adaptive hate speech awareness mechanisms in reducing the production of hate speech across online social platforms; (4) funding from the OfS (pro:NE) to investigate AI-based Intelligent Multi-stage and User-centric Ransomware Attack Detection; and (5) funding through the Alan Turing Institute to work on Data Security for a UK Defence and Security project.
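As a toy illustration of the intrusion-detection idea above (not the group's actual AI/ML systems), the sketch below flags time buckets whose connection counts deviate sharply from a rolling baseline; the traffic figures, window size, and threshold are invented for illustration:

```python
import statistics

def flag_anomalies(counts, window=10, k=3.0):
    """Flag time buckets whose connection count deviates more than
    k standard deviations from the mean of the preceding window."""
    flagged = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # avoid divide-by-zero
        if abs(counts[i] - mean) / stdev > k:
            flagged.append(i)
    return flagged

# Steady traffic of ~50 connections/min, with a burst at index 15.
traffic = [50, 52, 49, 51, 50, 48, 53, 50, 51, 49,
           50, 52, 51, 49, 50, 400, 51]
print(flag_anomalies(traffic))  # → [15]
```

Real intrusion-detection pipelines replace the simple connection count with many traffic features and the threshold rule with a trained model, but the baseline-and-deviation structure is the same.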
- Biometric Identification and Digital Forensics – This work links colleagues in CIS, and the focus is on the technical aspects of cyber security. The team conducts work on biometric encryption (Cryptography), digital forensics (Forensics), biometric recognition, including face and activity recognition, machine learning for media security, image/video authentication and watermarking, and secure and trusted identity and access management (Authentication, Authorisation and Accountability). Research grants secured include an EPSRC project, “Temporal forensic analysis of digital camera sensor imperfections for picture dating”, which sought to establish, for any given digital camera, a model that allows an analyst to estimate the acquisition date of digital pictures. Such analysis can assist forensic investigators in high-profile cases that require the extraction of evidential information for courtroom purposes, helping them analyse incidents and link different events. The team has also explored the utility of real-time sonification for monitoring computer networks, supporting the situational awareness of network administrators by letting them visualise or listen to network data and recognise deviations from its normal patterns. One of the team members led the Institute of Coding (IoC) project at Northumbria University (2019–2021), funded by the Higher Education Funding Council for England (HEFCE), which in turn funded the development of the Cyber Clinic, a weekly extra-curricular series of activities supporting students in acquiring practical digital forensics and cyber security skills along with industry knowledge. A recent award to this team is UKRI funding (2022–23) for the “Development of a child sexual abuse conversation (CSAC) dataset” through REPHRAIN, the National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online. This project will lead to advances in the understanding of how perpetrators of child sex grooming engage online with young people through computer-mediated communication tools and platforms.
This work will consist of identifying, acquiring, sanitising, and anonymising data to build the dataset, and conducting an initial analysis across it. The dataset will, for the first time, provide researchers with access to real-world grooming conversations, laying the foundations for work on reactive and proactive mechanisms for limiting this behaviour across platforms.
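The sensor-imperfection idea behind the picture-dating project above rests on each camera leaving a weak, fixed noise fingerprint in its images. The sketch below is a heavily simplified illustration of fingerprint matching, using a crude row-mean residual in place of the proper denoising and PRNU estimation used in real forensic pipelines; all images here are synthetic:

```python
import random
import statistics

def noise_residual(pixels):
    """Very crude noise estimate: each pixel minus its row mean.
    Real pipelines use a proper denoising filter instead."""
    return [[p - statistics.mean(row) for p in row] for row in pixels]

def correlation(a, b):
    """Normalised correlation between two equally sized residuals."""
    xs = [v for row in a for v in row]
    ys = [v for row in b for v in row]
    num = sum(x * y for x, y in zip(xs, ys))
    den = (sum(x * x for x in xs) * sum(y * y for y in ys)) ** 0.5
    return num / den if den else 0.0

random.seed(1)
size = 16
# Fixed noise fingerprint for "camera A"; two flat scenes shot with it.
fingerprint = [[random.uniform(-2, 2) for _ in range(size)] for _ in range(size)]
img_ref = [[100 + fingerprint[i][j] for j in range(size)] for i in range(size)]
img_same = [[180 + fingerprint[i][j] for j in range(size)] for i in range(size)]
# The same scene from a different camera: independent noise.
img_other = [[180 + random.uniform(-2, 2) for _ in range(size)] for _ in range(size)]

ref = noise_residual(img_ref)
corr_same = correlation(ref, noise_residual(img_same))
corr_other = correlation(ref, noise_residual(img_other))
print(corr_same > corr_other)  # → True: same-camera image correlates more strongly
```

The picture-dating project goes a step further than this matching idea, modelling how such fingerprints drift over time so the acquisition date can be estimated.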
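For the dataset-building steps just described, sanitisation and anonymisation might, at their simplest, replace direct identifiers in each message with placeholder tags. A minimal sketch, with invented patterns and an invented example message (the project's real pipeline would be far more extensive and expert-reviewed):

```python
import re

# Hypothetical redaction patterns for illustration only.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\+?\d[\d\s-]{8,}\d"), "<PHONE>"),
    (re.compile(r"@\w+"), "<HANDLE>"),
]

def redact(message):
    """Replace direct identifiers in a chat message with placeholder tags."""
    for pattern, tag in PATTERNS:
        message = pattern.sub(tag, message)
    return message

print(redact("ping me on alice99@example.com or @alice, 0191 123 4567"))
# → "ping me on <EMAIL> or <HANDLE>, <PHONE>"
```

Pattern order matters: the email rule runs before the handle rule so that the `@` inside an address is not mistaken for a username mention.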