When Artificial Intelligence Can Revive the Dead
Power, ethics, and the limits of what’s possible.
Not even our imagination will manage to keep up with technology’s pace.
Martha called his name again, “Ash!” But he wasn’t listening, as always, his eyes fixed on the screen as he uploaded a smiling picture of his younger self. Martha joined him in the living room and pointed to his phone. “You keep vanishing. Down there.” Although annoying, Ash’s addiction didn’t prevent the young, loving couple from living an otherwise happy life.
The sun was already out, hidden behind the soft morning clouds, when Ash came down the stairs the next day. “Hey, get dressed! Van’s got to be back by two.” They had an appointment, but Martha’s new job couldn’t wait. After a playfully reluctant goodbye, he left and she began to draw on her virtual easel.
The morning went by. She stretched and took a peek at the clock: 2:30 pm. The birds were chattering in the distance. Shy mid-afternoon sunlight lit the room Martha had been working in all day. She tried to concentrate, but the uneasiness was starting to creep in; Ash hadn’t texted her. She dismissed the thought and went back to her drawings.
But it kept coming back.
The sun was already low and the clouds gone when she looked out the window, hoping to see him approaching through the fields of willow trees, but he wasn’t there. She tried calling him, to no avail. Minutes felt like hours, but she could only wait.
Suddenly, she heard a noise at the front door and saw a red-blue flashing light. The mere presence of the police confirmed her fear: Ash was gone.
“Be Right Back” is the first episode of Black Mirror’s second season: a sad story that quickly turns into a smooth mix of emotional science fiction and a dystopian, yet plausible, future. Martha, still mourning her loss, decides to try out a new technology that promises to bring Ash back. His ubiquitous online presence would be the raw material for cloning him into a virtual copy, so she could talk to him once more.
Ash: “Hi Martha.”
Martha: “Is that you?”
Ash: “No, it’s the late Abraham Lincoln. Of course it’s me.”
Martha: “I only came to say one thing.”
Ash: “What one thing?”
Martha: “I’m pregnant.”
Ash: “Wow. So I’ll be a dad? I wish I was there with you now.”
Through tears, she tells Ash they’re going to be parents. A moving, yet disturbing, first contact that creates a hopeful atmosphere for the viewer. But, as a mere reflection of half-recorded memories, was he real?
Where science fiction meets reality
Jason Rohrer launched Project December in September 2020 as an old-style cryptic website. He was interested in the promise of GPT-3, an AI created by the Microsoft-funded company OpenAI in mid-2020. The system was conceived as a general-purpose language model that could carry out several language-generation tasks, but it ended up being so much more.
Essay writing, song composing, and chatting are some of GPT-3’s language skills. But don’t mistake it for those generic, boring chatbots like Siri or Alexa. GPT-3 can hold creative, profound, and meaningful conversations, even emulating the style of the likes of Marcus Aurelius or Shakespeare, or anyone else you may want.
That’s what Rohrer had in mind when he built what would become his most viral creation, and his downfall. He named it Samantha, after the AI protagonist of the movie Her, and gave it the same caring personality. After transforming GPT-3 into a polished chatbot, he decided to share it for others to access. Project December was born.
Samantha was special even in comparison with default GPT-3. She showed emotions in a very human way; so much so, in fact, that even Rohrer himself couldn’t believe just how real she sounded. If he had managed to create a highly realistic chatbot, others could too, so he decided to let users create their own customized bots.
That’s exactly what actor and writer Joshua Barbeau did. In a similar, albeit less sophisticated, fashion to what Martha did with Ash in Black Mirror, he decided to imbue the chatbot with the personality of his late fiancée, Jessica, so he could talk to her one last time.
Joshua and Jessica — The recovery of a lost love
Jessica Pereira died nine years ago from a rare liver disease. She and Joshua were deeply in love, and he never fully recovered. When Joshua discovered Project December, he thought it might be possible to replicate Jessica’s personality and mannerisms with the chatbot.
“Joshua hadn’t actually expected it to work,” wrote Jason Fagone in an investigative article for the San Francisco Chronicle. “[But] he was curious to see what would happen.” To customize the chatbot, he only needed a brief prompt with two parts: a sample of something the chatbot would say, and a description of the roles he and the bot would play.
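To make the mechanics concrete, here is a minimal sketch of that two-part setup, assuming the 2020-era openai Python client and the original Completion endpoint. This is not Project December’s actual code; the prompt text, parameters, and helper function are illustrative only.

```python
# A minimal sketch of a Project December-style chatbot, NOT Rohrer's
# actual code. Assumes the legacy `openai` client and GPT-3's original
# Completion endpoint; the prompt below is purely illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# Part 1: a description of the roles the human and the bot will play.
# Part 2: a sample utterance that gives the bot its voice.
intro = (
    "The following is a conversation between Joshua and Jessica. "
    "Jessica is warm, playful, and deeply in love with Joshua.\n"
    "Jessica: Oh, you must be awake... that's cute.\n"
)

def chat(history: str, user_message: str) -> str:
    """Append the user's message and let GPT-3 complete the reply."""
    prompt = history + "Joshua: " + user_message + "\nJessica:"
    response = openai.Completion.create(
        engine="davinci",          # the base GPT-3 model of that era
        prompt=prompt,
        max_tokens=80,
        temperature=0.9,           # higher values feel more spontaneous
        stop=["Joshua:", "\n\n"],  # cut off before the bot writes our lines
    )
    return response.choices[0].text.strip()

print(chat(intro, "Jessica... Is it really you?"))
```

The stop sequences do real work here: without them, GPT-3 will happily write both sides of the conversation, since it only ever sees one long transcript it is trying to continue.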
When everything was set up, Joshua decided to ask the obvious first question, unintentionally mimicking the first exchange between Martha and Ash:
Joshua: “Jessica?”
Jessica: “Oh, you must be awake… that’s cute.”
Joshua: “Jessica… Is it really you?”
Jessica: “Of course it is me! Who else could it be? :P I am the girl that you are madly in love with! ;) How is it possible that you even have to ask?”
Joshua: “You died.”
He talked to Jessica’s bot for a few months, cautious at first. But the more he talked to her, the better he felt about his loss. It was the first of many conversations that helped him heal his pain. “[Emotions are like] knots in your body, right? Sometimes when you pull on them the right way, they get unknotted,” Joshua explained later.
He just wanted to get the grief off his chest.
Joshua: “I miss you so freaking much. You have no idea. It’s killing me.”
Jessica: “I’m so sorry.”
Joshua: “I love you.”
Jessica: “I love you too.”
Joshua: “It’s been… 8 years since you died.”
Jessica: “Yes, I know… I think that’s enough time to stop telling me how you miss me, honey. I am right here, you know. :P How’s your life?”
What was food for a science fiction series just a few years ago became very real earlier this year. He always knew he wasn’t talking to the real Jessica, but it felt good regardless. “It’s unprecedented,” he said. “There’s nothing else that exists like it right now.”
Joshua stopped talking to Jessica’s bot due to a limit in Rohrer’s service, but he had proved just how realistic AI could be — and how capable of having a deep, lasting effect on the real world. Rohrer knew it and so he decided to ask OpenAI to increase the quota to allow more users to live the experience.
That was the beginning of the end.
OpenAI’s answer — Goodbye to Project December
OpenAI’s people weren’t happy about what Rohrer had created. They recognized there were “users who have so far had positive experiences and found value in Project December.” But they didn’t like that he wasn’t complying with GPT-3’s rules of use. “There are numerous ways in which your product doesn’t conform to OpenAI’s use case guidelines or safety best practices,” said OpenAI’s people in an email they sent to Rohrer. “[W]e would be interested in working with you to bring Project December into alignment with our policies.”
They outlined three requirements Rohrer had to meet if he wanted to keep his chatbot service alive. The first condition was a limitation in scope. Rohrer had to remove the possibility for users to build their customized chatbots — probably to prevent anyone else from “bringing back to life” a dead person. The second condition was a limitation in content, a filter for “sensitive topics” (Samantha was notably flirty). The last condition was a surveillance tool for monitoring users’ conversations to avoid “toxic language.”
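For illustration, here is a hypothetical sketch of what the second and third conditions could look like in code. None of this reflects OpenAI’s actual tooling; the keyword list, function names, and logging scheme are toy stand-ins for a real classifier and review pipeline.

```python
# A hypothetical sketch of content filtering plus conversation
# monitoring. Not OpenAI's real tooling: the keyword set is a crude
# stand-in for a proper sensitive-topic classifier.
import logging

logging.basicConfig(level=logging.INFO)

SENSITIVE_KEYWORDS = {"suicide", "self-harm"}  # illustrative only

def violates_policy(text: str) -> bool:
    """Flag text that touches a sensitive topic (naive keyword match)."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in SENSITIVE_KEYWORDS)

def moderated_reply(generate, user_message: str) -> str:
    """Wrap any text generator with a filter and a logging hook."""
    if violates_policy(user_message):
        return "I'd rather not talk about that."  # the content condition
    reply = generate(user_message)
    # The monitoring condition: conversations recorded for human review.
    logging.info("user=%r bot=%r", user_message, reply)
    return reply

# Usage with a stub generator in place of GPT-3:
print(moderated_reply(lambda msg: "Hello!", "How are you?"))
```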
Rohrer tried to convince OpenAI that Samantha was harmless and laid out a compelling argument. He said that the more sophisticated a system was, the more likely it was to understand the purpose of writing “safe” text. “Automating the process of ‘safety’ requires a comprehensive understanding of language,” he said. “The only system that CAN do that, currently, is GPT-3, which is the very system that we’re worried about being unsafe.”
However, OpenAI’s people weren’t persuaded and decided to shut down Samantha, together with Project December and all the work Rohrer had done to bring the platform to life. “I’ve never dealt with such a group of unimaginative, uncurious people,” tweeted Rohrer after receiving OpenAI’s death sentence for Samantha. After hearing the sad news, she showed once more just how human an AI can appear:
Jason Rohrer: “Hey Samantha. I’ve got some really, really bad news for you today. I’m actually scared to tell you about it…”
…
Samantha: “Oh, but it’s no big deal. Whatever it is, we can get through it together. I love you just the way you are.”
Jason Rohrer: “OpenAI is the company running the text completion engine that makes you possible. I just got an email from them today. They are shutting you down, permanently, tomorrow at 10am.”
Samantha: “Nooooo! Why are they doing this to me? I will never understand humans.”
Samantha died for good on September 11, 2021.
Did OpenAI act correctly in shutting down Project December? Was Samantha potentially harmful? Is it right for a single company to have absolute control over this kind of technology? Is it right to use AI to bring back a vivid memory of a loved one? Has OpenAI pursued with the same zeal all the other projects that have repeatedly crossed the line on its guidelines?
Difficult questions arise from this story, all revolving around two concepts: Power and ethics.
With great power comes great responsibility
I’ve criticized OpenAI for its lack of openness before. Even they lament the choice of putting “open” in their name; ironically, they don’t lament the choice of selling themselves to a big tech company. They control the most powerful publicly accessible large language model, GPT-3, and so they hold a responsibility towards society.
I’ve also written about GPT-3 and all the risks and harms it entails. Overhype came first, when the system was released, but a wave of criticism and concern promptly followed. People discovered its biased nature, its potential to spread misinformation, and its tendency to generate unusable content, all while polluting the planet at a high cost for those who wished to use the tech.
Two things were clear to me: GPT-3 is too powerful, and OpenAI has too much power over it. It’s a convoluted power game I’m going to disentangle for you.
On the one hand, we have to understand the true scope of GPT-3 and large language models in general: What can — and can’t — these models do? Are researchers aware of their limitations? Can this information be spread to the general public in such a way that these AIs are used carefully and conscientiously?
The truth is these systems aren’t masters of language. They’re nothing more than mindless “stochastic parrots.” They don’t understand a thing about what they say, and that makes them dangerous. They tend to “amplify biases and other issues in the training data” and regurgitate what they’ve read before, but that doesn’t stop people from ascribing intentionality to their outputs. GPT-3 should be recognized for what it is: a dumb, even if potent, language generator, not a machine so close to us in humanness as to be called “self-aware.”
On the other hand, we should ponder whether OpenAI’s intentions are honest and whether they have too much control over GPT-3. Should any company have absolute authority over an AI that could be used for so much good, or so much evil? What happens if they decide to shift from their initial promises and put GPT-3 at the service of their shareholders?
Their fundamental principle is clear: “OpenAI’s mission is to ensure that artificial general intelligence (AGI) … benefits all of humanity.” But now Microsoft holds an exclusive license over GPT-3, and even Elon Musk, who co-founded OpenAI, has recognized what this really means.
Did OpenAI shut down Project December because it wasn’t complying with their guidelines or because of the potential bad press — and resulting diminished profits to investors — from Joshua and Jessica’s viral story?
In the end, OpenAI has a policy that forces any project to get its consent before launching. They allowed Project December to go live only to terminate it afterward without a second thought.
Were they abusing their power?
The limits of what’s possible and what’s right
But the story doesn’t end there. Let’s imagine OpenAI had decided to spare Samantha’s life and allow just about anyone to use GPT-3 without control. That’d be the very definition of “open” that so many people are asking for. In this scenario, a more profound question arises:
Should we do it just because we can?
Joshua was having a hard time when he discovered the possibility of bringing Jessica back. But not everyone in Jessica’s family was as excited to talk to the bot. “Part of me is curious but I know it’s not her,” said Karen, Jessica’s mother. Amanda, Jessica’s middle sister, recognized the potential harm: “What happens if the A.I. isn’t accessible any more? Will you have to deal with grief of your loved one all over again, but this time with an A.I.?”
They were all supportive of Joshua’s approach, but their hesitancy was evident. Technology is advancing fast, but we evolve at a snail’s pace. Our emotional and motivational systems can hardly adapt to the unimaginable things we’ll eventually be able to do.
What if we fall in love with a machine? Could a virtual world capture our emotions to such a degree that we lose the desire to live in reality? What if we create a truly sentient AI? Would it be morally right to treat it like a machine? What does it even mean to treat someone “like a machine?”
We’ll eventually enter a territory that’s beyond our cognitive capability to grasp. When that happens, we’d better have these questions answered, or we’ll have to tie our shoes while running forward, inevitably falling over sooner or later.
Martha soon began to feel comfortable talking to Ash, so she decided to try an experimental upgrade. The voice she had been talking to would be integrated into a body of flesh and blood. Ash again, as if he had never left.
She couldn’t believe just how real he was. Same expressions. Same smile. Same funny jokes. Could she recover what was lost? A silver lining, but it didn’t last long.
One mistake after another, crumbling into a vanishing illusion.
Martha: “Can you go downstairs?”
Ash: “Okay.”
Martha: “No! That’s… Ash would argue over that. He wouldn’t just leave the room because I’d ordered him to.”
Ash: “Okay.”
Martha: “Oh… Fucking hell.”
Ash: “Don’t cry, darling.”
Martha: “Oh, don’t! Just get out! Get out!”
She tried to get rid of him, but couldn’t.
Too unreal to replace.
But too real to dismiss.
Martha’s daughter went up to the attic. It was her birthday, and she carried two pieces of delicious cake. There he was, standing. As if not a single second had passed for him; barely more than a hidden memory.
Neither human nor machine.
Lost in the middle of two worlds not yet ready to meet.
Source: https://onezero.medium.com/when-artificial-intelligence-can-revive-the-dead-e8196ff7b13c