Published on: May 11, 2023  Updated on: January 24, 2024

CBT AI: Artificial Intelligence In Cognitive Behavioral Therapy

Author: Lucie Baxter

A man and a woman sit in a therapy session. The woman wears wireless VR glasses as she speaks; the man, apparently her therapist, takes notes.

Psychology is a field of science that has brought us many fascinating breakthroughs. The study of the mind has taught us about our attention spans, behaviors, feelings, and dreams. We learn more all of the time, and much of this progress is owed to technology. 

Take artificial intelligence (AI), for example: the simulation of human intelligence by machines. Systems built with the capacity to recognize feelings, interpret emotions, and mimic human cognition have pushed our knowledge strides ahead. 

And as mental health conditions worsen in the US, this technology holds a lot of promise for professionals and patients alike. Not everyone shares that optimism, though: in a 2023 survey, 60% of adults worldwide said they were uncomfortable with the use of artificial intelligence in their personal healthcare. 

To truly understand if people’s concerns are justified, we must look at the opportunities AI can bring when coupled with psychology. So, let’s explore the method of cognitive behavioral therapy and some of the most promising applications of technology within it. 

What is cognitive behavioral therapy?

The study of psychology has proven that we are inherently complicated beings. Given how much of the human brain's function remains poorly understood, it makes sense that some people need outside guidance to process their thoughts and feelings. 

The heading "Disorders That Could Benefit From Cognitive Behavioural Therapy" sits above 7 icons. Each one illustrates a disorder, which include "Anxiety", "Depression", "Panic Attacks", "Addictions", "Eating Disorders", "Anger", and "Phobias".

Source

This is where cognitive behavioral therapy (CBT), a form of psychological talking treatment, is introduced. It is a very common method used for a range of mental health problems because it focuses on the present instead of the past, encourages positive behavior, is easily applicable in real-life scenarios, and generally boasts fast results.

In short, it aims to make it easier for a patient to actively manage their condition. The techniques within the treatment include, but are not limited to: 

  • Exposure therapy 
  • Journaling 
  • Goal-setting
  • Trying new activities
  • Problem-solving by brainstorming solutions
  • Reframing unhelpful and negative thoughts

As we mentioned earlier, there are significant obstacles for those in charge of delivering these treatments. There is an ever-present social stigma, the need to tackle inequalities, and not enough trained staff or resources to handle the growing number of patients. 

AI can be an effective way to ease these burdens when introduced as a tool in healthcare. Let’s evaluate where implementation is most useful, where it is already being applied, and where it has the potential to take us. 

Artificial intelligence in healthcare

CBT revolves around talking. That means, for treatment to be successful, it needs to be built on the relationship, trust, and interactions between patient and healthcare provider. This factor alone means that technology is unlikely to ever replace humans entirely. 

That doesn't mean there aren't real benefits to applying AI within this method. It has the potential to improve areas such as privacy, affordability, and efficiency, making healthcare fairer and more accessible. 

The future looks bright, thanks to the work of those in the present. The real-world adoption of AI is becoming more normalized, especially since mental health technology is such a well-funded space. Examples of improved care and management are abundant. 

We will provide some of those below. 

  • Methods such as machine learning and deep learning make it easier to diagnose patients with greater accuracy.
  • AI, and neural networks in particular, can make it easier to identify patterns in data, including behavior, reactions to previous treatments, and risk factors (see the sketch after this list). This makes it easier to assign the right healthcare professional, make a plan of action, and make individualized decisions.
  • Expert systems make it possible to forecast a patient's treatment outcome, their response to certain methods, and when interventions might be needed. 
  • Big data can be found everywhere, from patient records to treatment history. AI can be used to manage this information, which is normally a labor-intensive chore. Research has shown it is possible to automate 70% of processing work and 64% of collection tasks. 
  • More people than ever require treatment, but most providers' budgets and staffing levels do not accommodate this. Predictive AI is an invaluable aid in decision-making, planning, strategizing, and allocating funds.
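
To make the pattern-identification point above a little more concrete, here is a minimal sketch, assuming scikit-learn and entirely synthetic data, of how a small neural network could learn to flag which patients are likely to respond well to a given treatment. The feature names, labels, and numbers are hypothetical illustrations, not a clinical model.

```python
# Minimal sketch: a small neural network spotting patterns in (synthetic)
# patient data. Features, labels, and thresholds are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Synthetic features: [sessions attended, prior treatments, self-reported mood]
X = rng.normal(size=(500, 3))
# Synthetic label: 1 = responded well to a previous treatment, 0 = did not
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# Probability of a good treatment response for each held-out patient
probabilities = model.predict_proba(X_test)[:, 1]
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
print("First five predicted response probabilities:", probabilities[:5].round(2))
```

In practice, output like this would only ever be one input among many for a clinician, not a decision-maker in its own right.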

The application of artificial intelligence in cognitive behavioral therapy

Our connection to intelligent machines is growing with each breakthrough. Computer science has allowed us to teach them patterns, and now we understand each other better than ever. There couldn’t be a better time for AI to find its place within CBT.

In every single reported treatment area, demand has increased since the pandemic started. For example, the demand for treatment of an anxiety disorder has increased from 72% to 82%, and for a depressive disorder it has increased from 58% to 70%.

Source

The advancements mentioned above can help both the person delivering the therapy and the person who is undergoing it. To shed some light on how exactly AI can do that, let’s examine some of the most interesting and promising uses. 

Virtual intelligence 

Some people consider “virtual reality” (VR) and “artificial intelligence” to be two sides of the same coin. While they have their similarities, there are stark differences. For example, VR immerses its users in a simulated world, while AI works on data from the real one. 

There are places where the two overlap, an area known as virtual intelligence (VI). If, for example, you put on a headset and ventured into space, AI algorithms would detect your movements in real time through motion-tracking data, and the crafted world would respond accordingly, creating a seamless experience. 
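
As a rough illustration of that loop, here is a minimal sketch that stands in made-up head-tracking numbers for a real headset SDK and shows how tracked movement might drive a response from the virtual scene.

```python
# Minimal sketch: reacting to (synthetic) head-tracking data in a render loop.
# A real VR system would read these values from the headset's SDK instead.
import random

def scene_response(yaw_degrees: float) -> str:
    """Decide how the virtual scene reacts to where the user is looking."""
    if abs(yaw_degrees) < 15:
        return "highlight the object straight ahead"
    if yaw_degrees > 0:
        return "pan ambient audio to the right"
    return "pan ambient audio to the left"

# Simulated stream of head-yaw readings (in degrees), one per frame
for frame in range(5):
    yaw = random.uniform(-60, 60)
    print(f"frame {frame}: yaw={yaw:6.1f} deg -> {scene_response(yaw)}")
```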

Virtual intelligence is not yet as widely used as AI on its own and has yet to become a mainstream option for therapy. However, below are just a few of the budding opportunities. 

  • There are many reasons why patients struggle to access treatment, such as fear of diagnosis, disorders that prevent sufferers from going outside, children that need to be taken care of, or the cost of traveling to an office or hospital. Delivered this way, psychotherapy becomes highly accessible because, with the right technology, it can reach patients through a mobile phone, laptop, or other smart device. 
  • In psychiatric care, it is imperative that patients have a safe space to speak, one that supports all kinds of needs. Some people need a physical barrier because of boundary requirements, others have symptoms of paranoia or aggression that require distance, and others might have their progress stunted by constant panic attacks. VR can place individuals somewhere they feel comfortable, which makes them more likely to engage.
  • Case studies have shown that VR can be a useful aid in exposure therapy. Combining this method with AI can help people who struggle to imagine or visualize their fears, can provide realism without creating dangerous situations, and can be a controllable space to put coping strategies into practice.

Chatbots

When you visit a business’s website, there is often a box that pops up offering further assistance. This is an encounter with a chatbot. These AI-powered resources can answer user questions instantly and intelligently. But they aren’t just useful as a business strategy. 

Chatbots have already proven their worth in CBT. For example, Wysa is an emotionally intelligent model that responds to the feelings and behaviors its users express. It has been used to provide support in an NHS trial in the UK, and because such a reputable, well-known organization only adopts methods it considers feasible, that trial says a lot about the approach's credibility. 

The three images show how Wysa communicates with users and offers different options depending on mood, such as "Find a solution" or "Let's just talk." On the right are self-help documentation and a friendly penguin that serves as the face of the application.

Source

What is it about this technology that draws in huge companies, innovative brands, and people trying to make a difference? Below are just a few answers to that question. 

  • Human-computer interactions have progressed past generic, robotic conversations. Assistants built on natural language processing models, such as Alexa, feel much more realistic; Amazon's voice assistant has over 70 million users, which shows the approach works. Through chatbots, users can have their own virtual companion that is accessible 24/7, taking pressure off healthcare professionals who struggle to provide that level of availability (a toy example follows this list). 
  • Computational models and assistants can be taught to understand less common languages through tools like speech recognition APIs. Before these advancements in AI, certain patients, such as migrants, would have been cut off from CBT. The Massachusetts Institute of Technology (MIT), for example, developed a machine learning method called PARP that can be applied to languages like Wolof, spoken by 5 million people in West Africa, so its speakers can access talking therapies too. 
  • There are many circumstances in which a patient might deny themselves treatment. Some may be too anxious to speak on the phone, and others might worry about the cleanliness of a waiting room, for instance. Because chatbots operate entirely online, these worries can be avoided.
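
As a toy illustration of the basic idea, and not a description of how Wysa or any other product actually works, here is a minimal sketch of a chatbot turn that chooses between offering an exercise and simply listening, using crude keyword matching where a production system would use a trained natural language processing model.

```python
# Toy sketch of a CBT-style chatbot turn: choose a response strategy based on
# crude keyword matching. Real systems use trained NLP models instead.
NEGATIVE_CUES = {"anxious", "panic", "hopeless", "worried", "scared", "overwhelmed"}

def choose_reply(message: str) -> str:
    words = set(message.lower().split())
    if words & NEGATIVE_CUES:
        # Offer a structured exercise when distress cues are detected
        return ("That sounds really difficult. Would you like to try a short "
                "breathing exercise, or would you rather just talk?")
    # Default to open-ended reflection, mirroring the "let's just talk" option
    return "Thanks for sharing that. Tell me a bit more about how your day went."

if __name__ == "__main__":
    print(choose_reply("I feel anxious about tomorrow"))
    print(choose_reply("Today was actually okay"))
```

The design point is the same one Wysa's screenshots hint at: the bot decides between structured help and open conversation based on what the user expresses.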

Facial expression analysis

Online CBT sessions are beneficial for a number of reasons. They’re more accessible, can be recorded and watched back, and can be a good solution for patients who live in remote locations. Though research shows that internet-based therapy can be as effective as in-person, this is partly dependent on the therapist’s ability to read facial expressions.

When a patient is hidden behind a screen, that becomes much harder to do. There are more distractions, connection and video quality might be poor, and the patient can feel more distant. It's no surprise that the market for emotion detection was already worth over $20 billion in 2021. 

The graph shows that the emotion detection market was worth $20.26 billion in 2021 and is projected to reach $60.86 billion by 2030.

Source

The search for a solution brought the mental healthcare industry to emotion AI, which has the ability to interpret expressions through deep learning. Below are some of the other reasons why therapy providers began to invest in this type of facial analysis technology. 

  • One of the main reasons people give up on talking therapy is that they aren't getting enough feedback and feel as though they aren't being listened to. Therapists can use information from AI analysis to shape their responses, for example, whether to offer empathy or advice at a particular moment (a simplified sketch follows this list).
  • No matter how many precautions you take, there is always a risk of bias or misjudgment. AI data can help professionals triage users and judge who needs more urgent care. 
  • Some people in the healthcare industry face unique professional challenges. For example, therapists with autism may find it harder to read their clients' expressions or may process information differently. AI tools could provide them with insights they wouldn't otherwise have. 
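
To ground the idea, here is a minimal sketch, assuming scikit-learn and entirely synthetic facial-landmark measurements, of how per-frame emotion predictions might be summarized for a therapist after an online session. Real emotion AI applies deep learning to video frames; the features, labels, and data here are hypothetical.

```python
# Minimal sketch: classify (synthetic) facial-landmark measurements into broad
# emotion labels and summarize them for one session. Everything is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
EMOTIONS = ["neutral", "distressed", "engaged"]

# Synthetic training data: [brow furrow, mouth curvature, gaze steadiness]
X_train = rng.normal(size=(300, 3))
y_train = rng.integers(0, 3, size=300)  # stand-in labels for the sketch

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Pretend these rows are per-frame measurements from one online session
session_frames = rng.normal(size=(120, 3))
predictions = clf.predict(session_frames)

# Summary a therapist might glance at after the call
for label_index, name in enumerate(EMOTIONS):
    share = np.mean(predictions == label_index)
    print(f"{name:>10}: {share:.0%} of frames")
```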

The ethics of artificial intelligence and its use in treatment

AI is now an almost normal part of our lives, and with that come new everyday uses. For instance, social media platforms can use machine learning algorithms to create a friendly, useful chatbot; Snapchat recently released its own version, known as My AI. 

It is easy to get wrapped up in the excitement of breakthroughs and new products, but innovation is inseparable from ethics. As we move into a technology-driven future, we need a system of principles that promotes truth, the avoidance of error, and the use of knowledge for good. 

This is even more paramount in a healthcare context, where a person's well-being, emotions, and life are on the line. We asked our new friend for their thoughts on the matter. 

A screenshot of a conversation on Snapchat where the writer is communicating with the platform's AI chatbot. The chatbot responds to the question "do you think there are ethical considerations to be made when implementing AI into healthcare?" with a list of those considerations, such as data protection, privacy protection, and no biases.

Source

While that response is promising, Snapchat users are still raising concerns. Parents worry that the wrong response might make their children anxious, especially if the children don't fully understand the difference between computer and human interactions. Other users are frustrated that the chatbot cannot be removed from their accounts. 

There are a lot of ways AI can have a significant impact on mental health, and this example is only a very small piece of the puzzle. When the technology has the potential to cause feelings of stress, fear, and uncertainty, should it be utilized in the treatment of patients who already experience these emotions?

To find an answer to that question, let’s start by examining some of the points that My AI touched upon earlier. 

  • Transparency. It is often the case that AI systems are black boxes. This means that their operations aren’t visible, and they will arrive at a conclusion without giving a reason why. If everything is open to interpretation, who’s to say that the results are responsible, appropriate, or even trustworthy?
  • Unique needs. Those seeking help with their mental health have individual needs that a learned algorithm cannot always meet. A therapy chatbot, for example, would run into problems: some patients have trigger words that need to be avoided, some require physical touch for comfort, and others might not realize they are dealing with machine intelligence rather than a real person.
  • Responsibility. AI technology is a remarkable feat of science, and developing a model and making it fit for market involves many, many people. But when a misdiagnosis occurs, or the wrong treatment is recommended, it will be very difficult to assign accountability. 
  • Bias. AI is susceptible to discrimination, just as the human mind is. These biases stem from how systems are programmed and from the datasets they are trained on. Even OpenAI's revolutionary product ChatGPT has been called out for its rampant social biases. This could hugely affect the delivery of care and diagnosis, especially if less data is collected on minority groups (a small audit sketch follows the figure below). 

Research found that generic prompts were neutral for gender bias, slightly racially biased, and severely age biased, while more detailed prompts were even worse, with slight masculine bias, moderate racial bias, and severe age bias.

Source
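
One practical, if partial, response to that risk is simply to measure it. Below is a minimal sketch, using a handful of synthetic records and a made-up group column, of how a team might check whether a dataset under-represents certain groups before training a model on it.

```python
# Minimal sketch: check how well each demographic group is represented in a
# (synthetic) training dataset before using it. Group names are made up.
from collections import Counter

records = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 0},
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 1},
    {"group": "C", "outcome": 1},
]

counts = Counter(record["group"] for record in records)
total = len(records)

for group, count in counts.most_common():
    share = count / total
    flag = "  <- under-represented?" if share < 0.2 else ""
    print(f"group {group}: {count} records ({share:.0%}){flag}")
```

A real audit would go further, for example comparing model accuracy per group, but even a simple count like this can surface the gaps described above.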

What can be done to keep artificial intelligence usage ethical?

It is the government's responsibility to bring order to the chaos. While there is still a long way to go before the technology is fully under control, plenty of action has already been taken to make AI systems as ethical as possible. Here are some of the ways healthcare organizations specifically can commit to this. 

  • Only working with reliable third-party vendors who strictly follow legislation
  • Using patient feedback and monitoring to find and eliminate biases
  • Taking additional measures to ensure data protection
  • Using the most up-to-date, tested, and well-reviewed AI systems
  • Understanding that AI can make errors and remembering this in decision-making
  • Always putting the individual and their specific needs before anything else

The perfect combination

Anyone who has undergone therapy for any reason will understand that it is a highly personal process. For that reason, it’s natural for us to prefer a human touch over a machine’s screen. When we pour out our hearts and share our deepest thoughts, we want the listener to not only comprehend what we’re saying but empathize with it too. 

Emotions will forever set AI apart from psychiatrists, nurses, therapists, and doctors. We can safely assume that computers won’t take over any of these jobs, or the study of human psychology for that matter, in our lifetimes. But that isn’t the point. 

The technology can be an invaluable assistant for anyone in charge of delivering care, especially mental health practitioners. When used as an enhancement to existing treatments or as a solution to ongoing problems, great things are bound to happen. 

AI does much more than improve the performance of clinicians. There are apps to serve the needs of content creators, business owners, and students too. Find them at Top Apps.


Lucie Baxter

Lucie is a keen content writer who loves diving into everything tech and AI-related. Since graduating from university, she has been working for a range of diverse companies to continue broadening her writing opportunities.
