Seven million people in the UK alone use AI assistants every day. While these assistants are useful and convenient, organisations and individuals do not fully understand the risks of using this software in a constantly changing world.
Humley has joined a partnership of leading academics and industry practitioners to support pioneering research into the security of AI assistants and the prevention of cyber security attacks. The SAIS (Secure AI Assistants) project, led by King’s College London in collaboration with Imperial College and industry partners, has received £1.5 million in funding to support this forward-thinking initiative.
The SAIS project will draw upon the expertise of industry leaders including Microsoft and innovators such as Humley and Hospify to understand the risks of AI-based devices. It will also use machine learning and AI techniques to better understand and mitigate those risks, and will bring together policy and regulatory experts alongside members of the general public. The project is critical as organisations and individuals adapt to changing behaviours in the wake of the coronavirus pandemic, with more people relying on AI assistants – from enterprise chatbots and conversational AI platforms to in-home devices such as Google Home and Alexa – to support them in a digitally reliant world.
AI and machine learning are complex concepts, and their application to security is relatively new. It is therefore understandable that organisations and individuals do not yet fully understand all the potential risks of using this software. Added to this are the challenges presented by the constantly evolving nature of AI and its environment, which can create new vulnerabilities that hackers are all too happy to exploit – gaining access to personal information if the right security measures are not deployed.

With the security of AI assistants a hot topic, and adoption having risen significantly over the past year to seven million daily users in the UK alone, the SAIS project seeks to deepen understanding of how AI models interact with each other and with the wider ecosystems in which they are embedded, in order to protect both business and individual users – now and in the future.
As an innovative provider of Conversational AI Assistants for Enterprise, Humley has in-depth knowledge of the application of Machine Learning and Natural Language Processing technology within enterprises, and of the security mechanisms required to implement it. Humley will lend to the discussions its extensive experience of deploying solutions to customers across a wide variety of industries and use cases, such as HR, IT, Sales and Customer Services.
Humley’s CEO, Adam Harrold, commented: “At Humley the security of our customers’ data is essential and is a primary focus. Our involvement with the council not only enables us to contribute to and be a part of the exciting future of AI, but also contributes to our proactive approach to the development of our own technology and the protection we provide to our customers.”
Commenting on the impact of this research, Dr Jose Such, Reader at King’s Department of Informatics, Director of the King’s Cybersecurity Centre at King’s College London and Principal Investigator of the collaboration, stated: “We are delighted to start work on the Secure AI assistants (SAIS) project funded by EPSRC. This project is of crucial importance to create secure AI Assistants that we can trust and make the most of the exciting functionalities and convenience they bring with them.”
To find out more about the project, visit https://www.kcl.ac.uk/news/epsrc-awards-15m-funding-to-kings-led-secure-ai-assistants-research