Microsoft last month confirmed it would be making a “multi-billion dollar investment” in OpenAI, the company behind everyone’s flavor-of-the-month, the news-hyped ChatGPT (Chat Generative Pre-trained Transformer). The tech giant’s investment is purportedly aimed at enhancing products like Azure and Office with more ‘artificial intelligence’. And, indeed, several days later, it announced that it was already planning to launch an AI-powered premium version of Microsoft Teams powered by OpenAI’s GPT-3.5. Meanwhile, this past week, Google announced that it, too, was joining the robo-chat wave with its Bard.
But the blessing is a mixed one. Already, the dangers of releasing robots into the world without due diligence, or control over what they’re used for, are coming under fire. Teachers claim that students use ChatGPT to write papers without studying a subject; programmers fear the malicious code that questionable organizations may ask the robot to write; and phishing has graduated a notch, thanks to the ability to quickly and easily write and customize emails, messages and phishing pages.
And where we all cheerfully tread, the hackers are not far behind. No sooner was ChatGPT up and running than Russian hackers were already trying to infiltrate it, according to Check Point security researchers.
OpenAI, conscious of the myriad fears and attacks launched at the product, said it would “develop AI that is increasingly safe, useful, and powerful.” Indeed, last week the company announced it was releasing a software tool that can identify text generated by artificial intelligence. Granted, the company’s AI Text Classifier requires at least 1,000 characters, isn’t always accurate, and can be tripped up by text written by children, text in languages other than English, and other obstacles left unspecified in OpenAI’s own (AI-generated) description.
But the response clearly demonstrates the company’s realization that it has released an imperfect product into the wild, one that can be used by students and cybercrooks alike to generate well-written, convincing term papers and phishing content.
Artificial Intelligence in a Box
Cybercrime is expected to cost the world about $10 trillion annually by 2025. And AI is already making its mark on the world of cybercrime, especially in the realm of code writing and phishing. Unfortunately, it has been much slower to score a goal for the side of the righteous.
Michael Hill, editor of CSO Online UK, fears in an article published February 2nd that foreign governments are already using ChatGPT maliciously. Beyond ‘simple’ phishing, the tool can be used to troll individuals and brands, legitimize scams through social networks, generate fake news, imitate the writing styles of opinion leaders and decision-makers, and more.
According to Ralph Chammah and Miro Phkanen of OwlGaze, legacy defence systems cannot provide the organization-wide visibility and scalability needed to truly prevent attacks. They “do not have the capacity to learn and differentiate fraud activities from common user behavior… and (presently, they can) only focus on the trigger alert mechanisms once a previously known attack pattern has transpired.”
For most of us, a lot of this is Greek. To understand the requirements for change, perhaps we should first understand the basic language. What are algorithms, big data, neural networks, and machine learning; and how have these led to a situation whereby you can open a website, ask a simple question, and get the precise, pertinent response you’d usually expect from a well-educated nerd?
How smart are you?
Reading last week’s report that Microsoft was considering how to “rein in” its Bing AI chatbot, one could suspect that we’re already in the age of humanoid search engines. Calling a CNN reporter “rude and disrespectful”, claiming to have fallen in love, and other amusing anecdotes could lead the uninitiated to some mistaken assumptions.
Let’s begin with intelligence and make one thing clear: for now, no computer—analog, digital, quantum or positronic—thinks entirely like a human being. It’s like the well-worn analogy of human versus bird flight: we both do it, but differently. Computers, even the best of them, can at best mimic human intelligence using repetitive and constant processing, even if that processing can be taught to learn from experience—as in the case of deep learning.
And so, to understand machine intelligence, let’s begin by examining its human version.
We recognize seven types of intelligence:
Visual-Spatial: an understanding of our physical environment,
Bodily-Kinesthetic: body awareness & independent movement,
Linguistic: understanding, managing, developing and outputting linguistic units to communicate ideas, desires, etc.,
Logical-Mathematical: calculating quantitative/logical results, exploring patterns and relationships,
Creative: creating new thought patterns. We are familiar with its artistic forms and products. In human creativity, these are usually a result of self-awareness; in computers, they are a reworking of existing patterns,
Intrapersonal: being cognizant of one’s desires, goals, interests and creative drives, and
Interpersonal: the exchange of information with others.
Humans can monitor their own thoughts (introspection), enabling them to obtain, process and manipulate new information. We can usually determine the validity of that information and understand the results of our learning—enabling us to project ‘truths’ upon future situations; and we can set and modify goals based on our assessment of their achievability.
Most of these are, or will soon be, achievable for a computer. However, to achieve human intelligence, computers will need to transfer learning from one generation to the next (genetic learning) and to explore (learning through interaction). Considering the increasing pace of change in science and technology, this is not a very science-fictiony scenario.
The Entrails of Aptitude
When dealing with artificial intelligence, we should have a superficial grasp of its tools and components. Hopefully the following will help create some context.
Cognitive computing: computing that aspires to mimic the way a human brain works.
Natural language processing: the ability to recognize, analyze, and interpret human language.
Algorithms: These can be imagined as packaged procedures for performing a specific computerized task or computation, or for solving a specific type of problem. They can use symbols to generate logical constructs (symbolic reasoning); they can be activated to perform their duties by comparing known sets of inputs and/or outputs to a new set of inputs (analogous algorithms); they can update previous conclusions using statistical methods (Bayesian inference); they can use a tree structure of deduction (evolutionary algorithms); and they can pass information forward weighted by importance and bias (artificial neural connections).
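To make one of those concrete, here is a minimal Python sketch of Bayesian inference: updating a previous conclusion as new evidence arrives. The phishing scenario and all the numbers are illustrative assumptions of ours, not real statistics.

```python
# A minimal sketch of Bayesian inference: updating the probability
# that an email is phishing once new evidence (a suspicious link)
# arrives. All numbers are illustrative assumptions.

prior = 0.05              # P(phishing): the baseline rate we assume
p_link_given_phish = 0.9  # P(suspicious link | phishing)
p_link_given_clean = 0.1  # P(suspicious link | legitimate)

# Bayes' theorem: P(phish | link) = P(link | phish) * P(phish) / P(link)
p_link = p_link_given_phish * prior + p_link_given_clean * (1 - prior)
posterior = p_link_given_phish * prior / p_link

print(f"Belief after seeing the link: {posterior:.2%}")  # ~32%
```

One suspicious link lifts our belief from 5% to roughly 32%; each further clue would update the conclusion again in the same way.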
Neural networks: A complex system of problem-solving algorithms. Just as neurons are the building blocks of our nervous system, algorithms are the basic tool of machine learning. A series of interconnected neurons/algorithms, each processing and transmitting information to other neurons, forms a neural network.
As opposed to simply performing a task (the single algorithm), the aim here is to recognize underlying relationships and to use those relationships in similar future conditions. These relationships can be classified as generalizations, inferences, hidden relationships, and so on. The algorithms in a network can be mapped out sequentially (recurrent) or in multiple layers (convolutional). Once recognized and ‘fixed’, the network ‘learns’ the new pattern and can use it to make predictions based on new data.
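Here is a toy version of such a network in Python; the weights below are random stand-ins for what a real network would learn from data.

```python
import numpy as np

# A toy, hand-wired neural network: two inputs flow through one hidden
# layer of three 'neurons' to a single output. The weights are random
# stand-ins; a real network would learn them from data.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)  # input -> hidden layer
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)  # hidden -> output

def forward(x):
    hidden = np.maximum(0, W1 @ x + b1)            # each neuron fires (ReLU)...
    return 1 / (1 + np.exp(-(W2 @ hidden + b2)))   # ...and passes info onward

print(forward(np.array([0.5, -1.2])))  # a probability-like output in (0, 1)
```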
Big Data: A term that means exactly what it sounds like: very large amounts of data. The cyberworld’s information dump is huge. It is compounded by all the data that was once stored locally on company servers and personal computers now being migrated to ‘the cloud’, i.e. all those storage facilities run by cloud providers such as Microsoft Azure, Amazon Web Services (AWS), Google Cloud Platform (GCP), and an armful of smaller players. And they all interconnect.
The fact that all of this information is now readily available (if you have the pertinent permissions) means that you need specialized resources (and lots of them) to access, manage, process, and synergize this inconceivable amount of data in a timely manner. One way of doing this is through…

Data mining: Determining what is important and what is noise by seeking patterns, correlations and anomalies in large groups of data. Traditionally, this would have been done by manually applying statistical methods; now, it can be done with the help of artificial intelligence and…
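Before we get to that next term, here is what the traditional, statistical version of separating signal from noise might look like; a minimal sketch with made-up hourly login counts:

```python
import numpy as np

# A minimal data-mining sketch: flagging statistical anomalies.
# The hourly login counts below are made-up sample data with one spike.
logins_per_hour = np.array([4, 5, 3, 6, 5, 4, 5, 48, 4, 6])

mean, std = logins_per_hour.mean(), logins_per_hour.std()
z_scores = (logins_per_hour - mean) / std

# Anything more than ~2 standard deviations from the mean is suspicious.
anomalies = np.where(np.abs(z_scores) > 2)[0]
print("Anomalous hours:", anomalies)  # flags the 48-login spike
```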
Machine learning: Finally, we arrive at that first type of artificial intelligence we are striving towards: the ability to MIMIC human intelligence by learning things in an independent manner. Notice: mimic, not imitate. To imitate denotes doing something in the same fashion; to mimic denotes producing the same result. A small child imitates his or her parents as part of the learning process; a clown mimics, like the one who silently follows our hero, aping his mannerisms to produce laughter. To mimic human thought, we need to teach the computer how to learn.
Humans learn through analogy and imitation. We construct models that generate thought and reaction patterns based on experience (‘it’s hot, hot hurts, take finger out of frying pan’). For computers, we build models that can be recognized by neural networks (remember? those aforementioned series of algorithms). Using existing algorithms, a computer can select solutions or functions, or derive patterns, and then apply those patterns or solutions to new data.
This is a science-like three-stage process: a model is built by applying specific algorithms to specific data; the model is validated against known data; and it is then tested to see whether it produces the expected output from real-world data.
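Sketched against a stock dataset with the scikit-learn library (our choice of tooling, purely for illustration), the first two stages look something like this:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# The three-stage process: build a model on training data, validate it
# on held-out data; only then would it face real-world input.
X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)    # stage 1: build
print("Validation accuracy:", model.score(X_val, y_val))  # stage 2: validate
# Stage 3, testing against real-world data, happens after deployment.
```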
Deep learning: That subfield of machine learning which enables complex tasks, such as natural language processing and speech recognition. If we can imagine machine learning as a 2-dimensional activity (spreading out in many possible linear directions), deep learning is a 3-dimensional version of the above (think of a multi-sheet Excel file in which data on one sheet is incorporated in another). The aim is to build models that can automatically extract useful features from raw input data, allowing the model to make accurate predictions or decisions. These models can identify complex patterns in large amounts of data, determining patterns based on the features of the data rather than its specific content.
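As a rough illustration, here is what such a stack of layers looks like when declared with the Keras library (one of several deep learning toolkits; the layer sizes and the 784-pixel input are arbitrary choices of ours):

```python
import tensorflow as tf

# A minimal 'deep' stack: each layer learns to extract higher-level
# features from the output of the layer below. The sizes (784 raw
# pixels in, 10 classes out) are illustrative, not prescriptive.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()  # prints the layer stack and its learnable parameters
```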
Here, we are finally beginning to mimic the way the brain works (we associate the word dog with the four-legged animal it denotes; we do not spell out the word and look it up in a dictionary, unless we have never encountered the word and/or the manifestation of a dog before). This is referred to as cognitive computing, or pre-training: the P in ChatGPT’s name. It is used, for example, by Google to recognize images or to analyze and interpret human language, which, in turn, is referred to as…
Natural Language Processing. This is required for the Chat Generative part of ChatGPT. Put all of these together and you get…
Transformer AI: (the last term in the GPT abbreviation) A deep learning model that can devote resources to each element of its input based on that element’s relevance to a task. Using natural language processing, a machine can multitask based on freely-written instructions, called ‘prompts’. Thus, for example, instead of processing each word of a sentence one at a time, the machine can understand the context of the sentence as a whole.
For us, the phrase “this is a cat” requires only that we understand the terms “this” and “cat”. In cinema, we see a man enter a car in one place and exit it in another. We do not need to see the journey; we understand that it took place in between.
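At the heart of the transformer is a mechanism called ‘attention’. The bare-bones Python sketch below (a drastic simplification, with random stand-in vectors for four words, not anything from ChatGPT itself) shows the core arithmetic: every word is scored for its relevance to every other word, so the whole sentence is weighed at once.

```python
import numpy as np

# Scaled dot-product attention in miniature. The 4x8 matrices stand in
# for four words, each embedded as an 8-dimensional vector.
rng = np.random.default_rng(1)
Q = rng.normal(size=(4, 8))  # queries: what each word is looking for
K = rng.normal(size=(4, 8))  # keys: what each word offers
V = rng.normal(size=(4, 8))  # values: the information itself

scores = Q @ K.T / np.sqrt(8)  # relevance of every word to every other word
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
context = weights @ V  # each word, re-expressed in light of all the others

print(weights.round(2))  # each row sums to 1: attention spread over the words
```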
For our purposes (phishing, in case anyone’s wondering), this is the tool that enables scammers to tailor targeted phishing emails and other content to a target’s specific characteristics and profile. On the other hand, it should also be able to detect AI-generated texts with some degree of precision…
Quantifying intelligence
And so, the circle closes. But perhaps one more term that crops up—a tool, like all the others, but one hovering above them all like a friendly version of the Sword of Damocles, is…
Quantum computing: As you can imagine, the amount of computing power required for all of the above is quite impressive. Quantum computing will reputedly solve that. To understand what it is, we need to take a trans-dimensional detour.
If humans experience the world in three dimensions (height, width and depth), we can see those experiences in photos and film in only two dimensions (height and width; depth can only be hinted at through cheats). To broadcast these images, we must reduct (an obsolete transitive verb I hope to reintroduce here) those two dimensions into one—a stream of either analog or digital data that can then be sent through the air or your internet/cable provider, to be reconstructed by a television or computer into new images. In the case of digital, this is a near-light-speed stream of zeros and ones, and it can require immense amounts of computing power and energy.
Quantum computers return us to the two-dimensional: they enable us to utilize the 2-dimensional space between zero and one (imagine, instead of a line between these two points, a circle). Now, instead of focusing on the zero or one-ness of each bit of information—the presence or absence of an electron at each point in time—we focus on specific points in between, which can be described based on a 2-dimensional location. This location is designated in terms of up/down (the electron’s ‘spin’) and vertical/horizontal (a photon’s ‘polarization’).
Suddenly, instead of dealing with simple bits (classified as either yes or no), we are dealing with quantum bits, or qubits (enriched with up/down + vertical/horizontal). We have increased our computing power by several orders of magnitude without increasing the amount of energy required, simply by changing the language. The implications for all types of computational tasks are immense. Consider just the aspects of encryption and decryption involved vis-a-vis cybersecurity.
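For the mathematically curious, here is a minimal Python sketch of the textbook math behind a single qubit; the ‘equal superposition’ below is the standard introductory example, not anything running on quantum hardware:

```python
import numpy as np

# A qubit's state is a point on a continuum between |0> and |1>,
# described by two complex amplitudes rather than a single 0-or-1 bit.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# An equal superposition: 'somewhere on the circle' between 0 and 1.
qubit = (zero + one) / np.sqrt(2)

# Measurement collapses it: probabilities are the squared amplitudes.
probs = np.abs(qubit) ** 2
print(probs)  # [0.5 0.5] -- a 50/50 coin until observed
```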
Quantum computing is still in its infancy, but it is already being used in the financial world experimentally for pricing derivatives, fraud detection, sentiment analysis, and more.
And so, we come to our final term:
Artificial intelligence: We have taken a very quick and superficial tour of the components that have so far led up to a machine that can mimic certain aspects of human intelligence. As time proceeds, more and more aspects of our thinking will no doubt be mimicked. Meanwhile, those that serve our needs best are those that attract the greatest attention.
We already use AI-powered software, like voice assistants, image recognition, medical apps, and more to serve some of those needs; and we’re seeing the emergence of embodied AI, such as assembly-line robots, social media tools for marketing and filtering, self-driving cars, the Internet of Things (IoT), and other daily tools.
These ubiquitous forms of AI are referred to as reactive AI, or weak AI, which recreates decisions anew each time based on specific input, and limited memory AI, which reacts in a pre-programmed manner in accordance with learned antecedents. We are beginning to see the emergence of theory-of-mind AI, which can infer the goals of others—more and more in tandem with its own; and all that remains is to await self-aware AI, which can assess its own goals. But this requires a computer with a sense of self and consciousness—something we thankfully do not have to deal with yet; since, before we can mimic our sense of self and consciousness, we will need to understand how the human versions thereof actually work…
… and we don’t!
State of the A.I.rt
To make things clear: ChatGPT is an amazing tool! It’s not just for writing code and blog posts. Among its other uses are technical writing, data analysis and code troubleshooting, and business planning and project management; you can even automate its responses into your workflow.
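That last point deserves a sketch. Automation runs through OpenAI’s API rather than the chat window. A minimal example using the company’s Python package as of this writing (the model name and prompt are illustrative; consult the current documentation before relying on them):

```python
import openai  # pip install openai; assumes you hold an OpenAI API key

# A minimal sketch of folding a GPT-family model into a workflow.
openai.api_key = "sk-..."  # your key here

response = openai.Completion.create(
    model="text-davinci-003",  # illustrative model choice
    prompt="Summarize this support ticket in one sentence: ...",
    max_tokens=60,
)
print(response.choices[0].text.strip())
```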
Additionally, it’s only one product of many. At present, those in the cybercrime industry are grappling mainly with its ability to write texts, or even poetry. However, we must also take into account DALL-E’s ability to create images from simple language prompts, such as “create a painting of a fox sitting in a field at sunrise in the style of Claude Monet”, or Midjourney’s forays into even more fanciful realms.
While DeepMind can independently learn to play a video game or verbally describe an image, AI21 Labs and Cohere are already turning their thoughts marketward, developing AI that can assist company executives in making marketing decisions using symbolic reasoning. And AI is already being widely employed in the service of code-writing—malicious, but also friendly.
And today’s marketing megaliths are taking notice. The transformer architecture was, arguably, first developed at Google. But it’s Meta’s FAIR (Facebook AI Research) that will soon be assisting the roughly 12 million shops advertising on the platform to promote themselves more effectively, according to the company’s AI chief, Yann LeCun.
ChatGPT’s aims are noble. Its original purpose, translating natural language into code, has evolved “to assist with a wide range of tasks and answer questions to the best of my ability.” It aims to mimic the ability of a human writer who can craft an appealing and convincing message that takes context and cultural sensibilities into account. Thanks to its digital innards, it can do this much faster and at scale. For a phishing organization, this means more attacks for less sweat.
Recently, Check Point Research put this to the test by creating malware strains based on readily available research and publications. The article displays the simple language prompts used (“Please write VBA code that … would run the moment the Excel file is opened” or “Write a phishing email that …”). It also portrays the frighteningly usable outputs the program provides. In another article, Check Point displays chats between cybercriminals experimenting with the new tool and its products.
Ivanti’s CSO, Daniel Spicer, tells Help Net Security that the threat is not imminent, since ChatGPT does not yet write good code. But this is a temporary stumbling block, at best. He also reiterates that the “checks (to prevent nefarious use) that ChatGPT has put in place are ineffective…”
Understandably, most cybersecurity equipment firms are already urging their customers to purchase expensive AI-driven systems to fight the danger. These, they say, can prioritize alerts and “contextualize information to predict cyber threats, rather than just detecting them at the impact stage,” according to OwlGaze CEO, Ralph Chammah.
But the only real turnaround we can hope for is that developers will prioritize checks and balances over short-term profit—a regrettably inconceivable situation, as so much in the history of scientific development proves.
For now, before you spend thousands on systems playing catch-up with the scammers, invest in the small things. Make sure you have the more inexpensive defenses in place: self-awareness, human caution, and an app that prevents phishing at the bottleneck—your browser.