Summary
Artificial intelligence can be a useful tool in many areas. Generative AI can help CPTED practitioners and Security Risk Management professionals become more efficient and effective. Generative AI models such as ChatGPT, Copilot or Bard can be used as effective tools for editing, brainstorming, summarising, data analysis and image generation. They can also be a source of copyright-free material, although this is currently being challenged. The strengths and limitations of GPTs are discussed further below.
Introduction
AI, or Artificial Intelligence, is not a new phenomenon. The origins of AI in the modern era date back to at least the 1940s, with Claude Shannon's paper “A Mathematical Theory of Communication”, which focuses on n-grams, and certainly to the 1950s, with Turing's seminal paper “Computing Machinery and Intelligence”, which introduces the question of whether machines can think and proposes the Turing test, which has become a mainstay of AI. The birth of AI as a field is attributed to the Dartmouth Summer Research Project on Artificial Intelligence in 1956, which brought together researchers from computer science, mathematics, linguistics, and philosophy with the stated aim of creating machines that can think.
When we consider the recent frenzy about AI, the term has become a bit of a catch-all, and it is now the “in thing” to say that your company or organisation is using AI. “AI-enabled” or “powered by AI” is used by many companies across a wide range of sectors. The most recent phenomenon to propel AI to the forefront of popular thought and imagination relates to a specific subset: Large Language Models (LLMs) and Generative Pre-trained Transformer (GPT) models. These will be referred to as Generative AI throughout the remainder of this article.
Different types of AI
The most well-known AI model is ChatGPT by OpenAI. Its release in November 2022, built on GPT-3.5, went viral, clocking up a million users in five days, with reactions ranging from hysterical “end of days” scenarios echoing every movie ever made about AI, to others hailing it as the panacea to all our problems.
While this article references ChatGPT and DALL-E, created by OpenAI, it is worth noting that these are by no means the only products on the market. Claude 2, Bard, Copilot, Perplexity, Pi and GitHub Copilot X are other products offering general Generative Pre-trained Transformer models, while there is a rapidly evolving subset of industry-, field- and even company-specific models available for use.
What are GPTs?
Generative Pre-trained Transformer (GPT) models are Large Language Models that use a Transformer architecture, employing several layers of self-attention mechanisms and feed-forward neural networks – essentially machine learning algorithms organised as layers of interconnected nodes – to recognise patterns and make decisions.
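For readers who want to see the idea rather than just the terminology, the self-attention mechanism can be sketched in a few lines of Python. This is a deliberately simplified toy, not the implementation of any real model: production models add learned projection matrices, multiple heads, and many stacked layers.

```python
import numpy as np

def self_attention(x):
    """Toy single-head self-attention: every position attends to every other.
    x has shape (sequence_length, embedding_dim). Real models project x into
    separate query, key and value matrices; here we use x directly."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)  # similarity between every pair of tokens
    # Softmax each row so the attention weights for a token sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ x             # each output row is a weighted mix of all positions

tokens = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 tokens, 2-dim embeddings
out = self_attention(tokens)
print(out.shape)  # (3, 2): same shape as the input, but each row now blends context
```

The key point for a lay reader is the last line of the function: every output position is a blend of all input positions, which is how the model lets each word “see” the rest of the sentence.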
These models ingest and learn from, ideally, a vast and diverse dataset. Users are then able to pose natural language queries or questions, to which the GPT model generates a natural language response.
Two attributes of GPT models are worth noting at this point. Firstly, they are predicated on Natural Language Processing (NLP) and designed to generate responses in natural language. Secondly, they are autoregressive, meaning that they generate one token at a time, where each token is a prediction conditioned on the previously generated tokens and the input query.
More technically, GPTs employ a process involving four steps: tokenisation, encoding, decoding, and post-processing. It is necessary to understand how GPTs work because without doing so, we will not be able to understand how best to use them and, importantly, when not to use or rely on them. In addition, critical to any GPT model is the training process, including the training data and the level of fine-tuning conducted during training to optimise the model for its stated purpose.
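The pipeline above, and the autoregressive behaviour noted earlier, can be illustrated with a toy sketch. Everything here is invented for illustration – the whole-word tokeniser and the “model” (a simple bigram lookup table) stand in for the sub-word tokenisers and billion-parameter networks real GPTs use – but the shape of the loop is the same: tokenise the prompt, then predict one token at a time, each conditioned on what came before.

```python
# Invented bigram "model": maps each token to a predicted next token.
bigram_model = {
    "crime": "prevention",
    "prevention": "through",
    "through": "environmental",
    "environmental": "design",
}

def tokenise(text):
    return text.lower().split()  # real tokenisers use sub-word units, not whole words

def generate(prompt, max_tokens=4):
    tokens = tokenise(prompt)    # step 1: tokenisation (encoding is implicit here)
    for _ in range(max_tokens):  # steps 2-3: predict the next token from the context
        next_token = bigram_model.get(tokens[-1])
        if next_token is None:   # no prediction available: stop generating
            break
        tokens.append(next_token)
    return " ".join(tokens)      # step 4: post-processing back into natural language

print(generate("Crime"))  # crime prevention through environmental design
```

Note that the model never “knows” whether the sentence is true; it only ever predicts a plausible next token, which is exactly the limitation discussed below.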
When considering the training dataset, we must ask: how extensive is the dataset, how diverse is the training information, what is the cut-off date for the data, and are there any copyright issues? This last point of copyright is fast becoming an issue because many GPT models use the internet as their training corpus, effectively scraping data from websites both open and behind paywalls, sometimes in breach of copyright.
Additionally, the training data can be biased, corrupted, or deliberately sabotaged in a malicious attack. Biases come from several sources, including over-representation of English-speaking viewpoints, over-representation of accessible and open-source information, and an absence of data for rare cases.
It is also possible that training data is deliberately corrupted to further certain agendas or maliciously corrupted using cyber-attacks.
It is also important to note here that Generative AI is not sentient intelligence. It is quintessentially a massive prediction engine, which predicts each token based on the previous tokens and the input query. This is why Generative AI can sometimes sound very authoritative while generating an answer that an expert in the field would dismiss as utter nonsense – a phenomenon termed hallucination. Generative AI is not a fact engine. It does not speak truth; in fact, it has no concept of truth. To sum up, Generative AI is a computing algorithm that predicts an answer in natural human language based on the data it has been trained on and the question it has been asked. So, when we begin to use Generative AI, it is critical to understand these innate limitations.
Having mentioned these limitations, it is also worth pointing out the strengths of Generative AI. One of the most impressive is the speed and scale at which it can generate an answer. Addressing a similar question may take you a few hours of research and then another few hours of writing, while Generative AI can generate an answer within seconds. This speed is truly astounding and something that human beings simply cannot match.
Also, the answer a Generative AI generates is well-structured natural language with polished grammar, syntax, punctuation, and structure. We humans, by contrast, think in a non-linear manner, and our writing and speech are often unstructured and disjointed – at least in first draft.
Generative AI for CPTED practitioners
Figure 1. Uses of these AI models
Let us peel back the layers of this new technology, taking a lay person's perspective, to dispel a few misconceptions, understand how it works, and explore ways in which Generative AI can help us as CPTED practitioners and Security Risk Management professionals become more efficient and effective.
There are many ways in which Generative AI can be useful. These include:
- Editing
- Brainstorming, for example:
  - Questions to ask stakeholders during a security workshop
  - Checklists to use when conducting a site security audit
  - Risks to consider in a crime risk review
  - Mitigation measures to employ to address security risks
- Summarising
- Data analysis
- Image generation
Generative AI is an excellent editor. You can provide it with your writing as an input and ask it to edit this for you. It is also very good at producing summaries, writing introductions, and writing conclusions.
Beyond editing, if you are brainstorming, Generative AI is a very powerful tool to use for this process as it completes your brainstorming for you in a few seconds. Having said that, the answers you get are only as good as the questions you ask and of course the integrity, completeness, and diversity of the training data set.
For example, you can use it to brainstorm lists such as those outlined above.
The answers it generates are truly impressive, and the speed is the critical factor here. As experts in our field, we must of course review the answers it gives us, as we would review any report generated by our student interns. Think of Generative AI as a smart, fast student intern who can sometimes get things terribly wrong.
The next area where Generative AI is powerful is digesting information and extracting insights, whether that information is an academic paper or a compiled set of statistics. Again, if efficiency and speed are your thing, there is nothing like it. The summaries produced are generally very accurate, and this can save you a lot of research and analysis time. Having said that, a word of caution if you are tempted to cut and paste anything generated by Generative AI: it can sometimes get things very wrong. You need to do your due diligence and check its work.
The next area where Generative AI can be very useful is the analysis of data. This is a feature currently available as a beta on GPT-4. The idea is to upload your data as a file; the Generative AI examines the data, generates a written explanation, and extracts relevant insights. It can also represent this information in graphical form. This ability leverages another strength of Generative AI: its ability to generate computer code. This crosses over into data processing, but it is quite relevant to CPTED and security risk management practitioners, as we often have access to publicly available crime and incident statistics. Being able to rapidly ingest these and extract insights is immensely valuable.
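To make this concrete, the kind of analysis such a feature performs behind the scenes is ordinary data-processing code. The sketch below uses a small invented sample of incident data – the column names and values are hypothetical, not drawn from any real crime dataset – to show the sort of insight (counts by category, busiest month) a data-analysis tool would extract and then explain in plain language.

```python
import pandas as pd

# A small invented sample of the kind of public incident statistics a
# practitioner might upload; the columns and values are hypothetical.
incidents = pd.DataFrame({
    "month":    ["Jan", "Jan", "Feb", "Feb", "Feb", "Mar"],
    "category": ["theft", "vandalism", "theft", "theft", "assault", "vandalism"],
})

# The sort of insight a data-analysis feature extracts before writing it up:
# incident counts by category, and the month with the most incidents.
by_category = incidents["category"].value_counts()
busiest_month = incidents["month"].value_counts().idxmax()

print(by_category.to_dict())  # {'theft': 3, 'vandalism': 2, 'assault': 1}
print(busiest_month)          # Feb
```

The same results could be charted with a plotting library, which is essentially what the tool does when it offers a graphical representation – the practitioner's job remains checking that the generated analysis matches the underlying data.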
Another area, one that pre-dates ChatGPT 3.5, is the use of AI to generate images from natural language descriptions. For us, this enables the production of images to convey concepts or to use as visual aids in our reports and presentations. The useful thing about these images is that they are copyright-free – at least for the moment.
Generative AI is a tool that can hugely boost our efficiency and effectiveness as CPTED practitioners. As of now, it does not replace a security consultant or a CPTED practitioner. Could a report produced by a Generative AI be better than one produced by a CPTED expert? That would be a real problem for us if it were true. What Generative AI lacks is the real-world context of the specific project or site and the requisite stakeholder engagement that is an integral part of our risk management process. Will there be a future when drones and chatbots can provide this real-world context and stakeholder engagement? I don't know. I am sure people will try. But with the security consultant's chatbot talking to the client's chatbot, one wonders how much stakeholder engagement will really be taking place. Much of what we bring – our strength – is what makes us human, and that, in my view, is difficult to emulate.
Things to consider when using Generative AI
Now, having examined all of this and the wonders of Generative AI, there are a few important considerations that should temper its usage.
Should we be uploading our intellectual property to ChatGPT? Effectively, we would be training it and providing more content for its dataset – content that then sits with OpenAI, a profit-making company. Is it possible that, over time, OpenAI creates a Security Consultant GPT?
There are broader ethical and even environmental considerations that also come into this debate, and a lot is happening in the regulatory space when it comes to AI. The European Union reached a deal in December 2023 on the EU AI Act, a comprehensive set of rules for trustworthy AI. In July 2023, China released a set of guidelines for Generative AI. Many academics, thinkers and industry leaders are calling for increased regulation and management of the risks of AI. This is outside the scope of this article; however, I felt it important to at least flag it as a consideration. Regulation, it seems, will always be in catch-up mode now that this technology has been unleashed.
Conclusion
In conclusion, as CPTED practitioners, security stakeholders and security risk management professionals, we need to take a pragmatic approach to the use of Generative AI, tempered by our understanding of how it works and its limitations. Anything produced by a Generative AI needs to be fact-checked. We must resist the temptation to simply cut and paste what it produces, or to view it as something that will do all the work for us. It can certainly provide a massive boost to our efficiency and effectiveness and, in doing so, enable us to be more thorough in our research, analysis, and reporting, while delivering better security outcomes and safer cities. In embracing this technology, we must also walk the ethical tightrope and be conscious of any regulations that come into effect.
And finally, I want to end with a maxim that can be employed in our everyday interactions and certainly with Generative AI: in seeking better answers, we should learn to ask better questions.