As COVID-19 took the world by storm, the healthcare industry had to adapt and transform to meet the demands of a worldwide pandemic. We were overwhelmed by the demand for vaccines, medicine, medical services, and more.
Since then, the relationship between users and the industry has been changing at a rapid pace. The pandemic accelerated that shift, but digital acceleration is also playing its part in pushing a new model for healthcare. This raises the question: how can we get users and clients on board?
Pushing for change
The adoption of AI and robotics in the healthcare industry is not a matter of if but of when. Remote surgeries, automated diagnoses, VR training: it’s all part of Industry 4.0. But while sci-fi fans might be rooting for a cybernetic reality, plenty of people are worried about the implications.
Let’s be honest: we live in a world where people unironically burned mobile towers to prevent the propagation of the coronavirus. Many are apprehensive about AI, algorithms, and robotics. Where some see growth and progress, others see the dystopian world of the Terminator.
So how can we ease the transition? In this case, I would like to point to “Codename Optimus,” the robot Tesla’s Elon Musk announced in 2021. The announcement opened with a dancer dressed as a prototype robot, moving to electronic music.
For critics, it was a silly presentation at best and a joke at worst. But to me, it was a stroke of genius from the marketing department. Allow me to explain:
One of the most difficult aspects of creating human-like robots is overcoming the uncanny valley: the unsettling feeling that something that looks like us is not quite right. And I think this kind of stunt is an excellent example of how to approach the problem with a bit of psychology.
There is this little thing called the laws of association, which, in summary, say that things presented together tend to become associated, like pain and the sound of a dentist’s drill. By opening with a goofy presentation of a dancer donning a robot costume, Tesla is laying the groundwork to present a contentious piece of hardware in a less threatening light.
And that’s where the behavioral model comes in
To put it in simple terms, a behavioral model is an approximation of how living beings behave, and why they behave in specific ways. I always like to explain the basics with B. F. Skinner’s model, since it’s the foundation for almost every other behavioral model out there.
Skinner saw human action as the consequence of previous actions. More specifically, he believed that contingencies between the environment and how we react are learned by strengthening that relation through either reinforcement or punishment.
Let’s look at a few examples:
- You smell freshly baked cookies and go check the kitchen. Your partner has prepared a fresh batch and gives you one. That’s positive reinforcement: the next time you smell cookies, odds are you’ll head to the kitchen to get a reward (the cookie).
- You start driving and an alarm goes off: a terrible beeping sound that won’t stop until you fasten the seatbelt. That’s negative reinforcement: something undesirable is removed, and you learn to fasten the belt to avoid it in the future.
- You take a sip of freshly brewed coffee, but the cup is too hot and you burn your mouth. That’s a punishment. You learn that drinking from a hot cup causes pain, so you avoid it in the future.
Remember, the process always runs stimulus -> response -> consequence (reinforcement or punishment). Of course, that’s one of the simplest models out there. Other researchers and behavior engineers have developed models that introduce cognitive processes as mediators between stimulus and response, but that’s another subject.
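To make the loop concrete, here is a toy sketch in Python. The class name, the starting strength of 0.5, and the 0.1 step are all my own illustrative choices, not a real psychological model: consequences simply nudge the strength of a stimulus–response association up or down.

```python
class Learner:
    """Toy operant-conditioning sketch: a consequence (reinforcement or
    punishment) adjusts the strength of a stimulus-response association."""

    def __init__(self, step=0.1):
        self.step = step
        self.strength = {}  # (stimulus, response) -> association strength in [0, 1]

    def consequence(self, stimulus, response, reinforced):
        key = (stimulus, response)
        current = self.strength.get(key, 0.5)  # start neutral
        delta = self.step if reinforced else -self.step
        # Clamp to [0, 1] so repeated consequences never overflow the range.
        self.strength[key] = min(1.0, max(0.0, current + delta))
        return self.strength[key]

learner = Learner()
# Three batches of cookies, three rewards: the association grows stronger.
for _ in range(3):
    learner.consequence("smell of cookies", "go to kitchen", reinforced=True)
# One burned mouth: the association weakens.
learner.consequence("hot cup", "take a sip", reinforced=False)
```

Reinforced responses drift toward 1 (likely to repeat), punished ones toward 0 (likely to be avoided), mirroring the three bullet examples above.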
With behavioral design, we can create technology based on the principles of a behavioral model. In other words, we use psychology to create a system that considers how the user will behave in certain situations.
One quick example is how the mobile gaming industry uses surprise-based mechanics to entice players to spend money. “Loot boxes” explode in a rain of colors and lights, a spectacle that triggers every dopamine receptor in the brain.
In behavioral design, you first decide what kind of behavior you want to elicit from the user, then design feedback loops that reinforce that behavior and penalize unintended or undesired ones.
Going back to the mobile game example, when a player runs out of lives they can either pay a small fee and play again immediately or wait a few hours until they get another chance for free. I’m sure you can see which one is the reward and which one is the punishment.
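That reward/punishment fork can be sketched in a few lines. The game, the fee, and the lockout period are hypothetical numbers of my own, not any real title’s economy:

```python
from dataclasses import dataclass

REVIVE_COST = 2   # small fee to keep playing (the reward path)
WAIT_HOURS = 4    # forced time-out (the punishment path)

@dataclass
class Player:
    lives: int = 3
    coins: int = 10

def on_lives_depleted(player: Player, pays: bool) -> str:
    """Apply the feedback loop: paying is reinforced with instant play,
    not paying is punished with a waiting period."""
    if pays and player.coins >= REVIVE_COST:
        player.coins -= REVIVE_COST
        player.lives = 3
        return "play resumes immediately"            # reward
    return f"locked out for {WAIT_HOURS} hours"      # punishment

p = Player()
print(on_lives_depleted(p, pays=True))   # play resumes immediately
```

The asymmetry is the whole design: the paying branch removes all friction instantly, while the free branch makes the player sit with the loss.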
With behavioral design, you can start creating systems that help you ease the transitions for your users. Imagine something as simple as using an AI to schedule medical appointments. Patients can still call and make an appointment with a secretary, or they could use your AI and get a 10% discount.
Now, it’s one thing to use an app to make an appointment and quite another to let an AI operate on you, right? Yes, of course. But remember those feedback loops? You can use an incremental design to increase the level of exposure.
First is an appointment, then an automated blood pressure monitor, then an AI-based diagnosis. By slowly introducing more refined technologies you can help people realize that there is nothing to fear.
This draws on another concept from B. F. Skinner’s operant conditioning: successive approximations. The more we are exposed to something, the more normal it seems down the line. We desensitize people, or normalize a stimulus, by exposing the user to it in incremental amounts.
Quick example: let’s say you have a kid who doesn’t like the taste of medicine. Instead of forcing them to take it all at once, you mix a few droplets into a glass of water. Little by little, as they grow used to the taste, you increase the amount until they can take the medicine without any water at all.
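The incremental-exposure idea can be sketched as a simple ladder, where each rung unlocks only after the previous one is accepted. The steps and names below are illustrative, not a real product’s onboarding flow:

```python
# Hypothetical exposure ladder, from least to most intimidating technology.
EXPOSURE_LADDER = [
    "AI-scheduled appointment",
    "automated blood-pressure monitor",
    "AI-assisted diagnosis",
]

def next_step(accepted):
    """Return the next, slightly bigger step for this user; never skip ahead.
    `accepted` is the set of steps the user is already comfortable with."""
    for step in EXPOSURE_LADDER:
        if step not in accepted:
            return step
    return None  # the user is fully on board

# A brand-new user always starts at the least threatening rung.
print(next_step(set()))  # AI-scheduled appointment
```

The point of the design is the ordering constraint: the user never faces a rung more than one step beyond what they have already normalized.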
Is Behavioral Design Ethical?
When we put it like this, it sounds kind of devious. So, is it ethical to induce changes in behavior with your designs? That’s not an easy question to answer. A few of the examples I’ve used so far (the ones relating to games) are very similar to gambling and can trigger addiction in some people.
On the other hand, the US government has been using behaviorist principles to enact public policies for over a decade, so there is a well-documented precedent for how this kind of approach can help.
I would argue that if you are using behavioral design to help people overcome irrational fears and to facilitate the adoption of technology that is going to save lives, then there is nothing to fear. Once again, this isn’t brainwashing; it’s helping others see the world in a different light.