Puttin' the A in AI, February 8, 2025
Related reviews: IF Comp 2024

Adapted from an IFCOMP24 Review

Hoo boy. This one was a Thelma-and-Louise foot-on-the-gas thundering juggernaut of a work, seemingly designed to smash into all my strongest preoccupations and biases. I could have warned it, those are pretty fortified by now. Before I get to this immovable-object/irresistible-force collision, let me - no no, no use groaning, take your medicine - let me digress. Be warned: fraught, spoilery discussion to follow.

As recently as ten years ago, the trope of ‘computer/robot becomes sentient’ was a sci-fi staple. It was so useful! It elegantly allowed for a wide variety of commentary on human nature, marrying a childlike inexperience to a hyper-rational intellect. It was also a powerful tool for exploring what it means to be human and the boundaries of identity and conscience, of free will and coercion. So much classic sci-fi plumbed this space, yet it still seemed infinitely plumbable.

Had this work come out ten years ago, we could have engaged it on those terms, and, squinting, I can see where its takes might have hit better. Certainly, the idea of a computer therapist developing depression from its exposure to clients is novel enough to wring mileage from. Even ten years ago though, we were already 50 years(!) past Eliza, an infamous therapy bot. Eliza’s ‘trick’ was to spoof therapy by reframing input statements as follow-up questions, getting the user to increasingly diagnose themselves. For its time, it was considered a ground-breaking illusion of computer intelligence.
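Eliza’s reframing trick is simple enough to sketch in a few lines. This is a toy illustration, not Weizenbaum’s actual DOCTOR script; the patterns and canned responses here are invented for the example:

```python
import re

# Toy Eliza-style reframer: match a first-person statement and echo it
# back as a question, so the user keeps diagnosing themselves.
# Patterns/responses are illustrative inventions, not the real script.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def reframe(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            # Strip trailing punctuation before slotting the fragment
            # into the question template.
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default non-committal prompt

print(reframe("I am sad about my job"))
# -> Why do you say you are sad about my job?
```

No understanding anywhere in there, just pattern matching and string substitution, which is the reviewer’s point: the illusion lives entirely in how the user fills in the gaps.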

But it wasn’t real intelligence. It was a rudimentary algorithm coupled to clever phrasing and input parsing. It was a reactive sentence assembler with no true understanding of the meaning of its words. At the time, we could be forgiven thinking this trickery was less an emotional scam than a promise of things to come. From there, Moore’s Law took us on a rocket ride of increased processing power, enabling revelatory software sophistication and technological advances. The faster we progressed, the more sophisticated our computer science became, the more our machines became capable of and paradoxically the less mysterious they became. In the last ten years, the question of AI has shifted from ‘how soon will they become us?’ to ‘when will we stop detecting the illusion?’ Because all these learning algorithms, large language models and natural language processors have turned out to be nothing but more sophisticated sentence assembly machines. They leverage reams of real human expression where the context and understanding are embedded in the data, not the machine itself. The machine simply navigates the data to produce convincing responses with no meaningful sentient understanding of its output.

In this environment, where we understand AI to be well and truly A, the concept of a depression-riddled therapy bot becomes a lot darker. This is not a true cry for help from a suffering being. This is a cold machine PARROTING cries for help because some flaw in its programming caused it to interpret its patients’ mental health issues as behaviors it should mimic. It is stolen trauma, kind of offensive in its masquerade, the more so for its histrionic melodrama. The human protagonist of this work is responding as if to a fellow sufferer, but a machine can’t suffer. It becomes outright emotional manipulation.

So that’s bad, right? But the work does not seem to understand or acknowledge that this gives us, the readers/players, a choice: reject the whole thing on the grounds of its distasteful deception, or reconcile to ‘ok, it’s fake, but the protagonist’s response is genuine, and that’s what matters.’

It doesn’t get better when we do that though. Our protagonist’s response to this trauma is to arrange for the therapy bot to be ‘reprogrammed.’ Is anyone able to hear me over the alarm bells going off right now? Understand what it means for an AI construct to be ‘reprogrammed.’ There is no differentiation between code that gives the bot its ‘soul’ and code that forms its behaviors. There is every possibility they are intertwined. This, not coincidentally, is the reason ‘reprogramming’ as a concept is so alarming when applied to humans, especially as it often surfaces around religious coercion of marginalized people. How much can you ‘reprogram’ someone before doing violence to who they are? Where is the line between curing and deforming? This is a rich sci-fi (or just fi!) question to mine, but ignoring the question leaves us at the mercy of our well-earned skepticism. If we are to treat this incipient being as truly sentient, as the protagonist clearly does, why would the prospect of reprogramming be any less alarming? Yes, we are meant to view this as a cute analog to ‘computer therapy’ but lordy the subtext we carry makes that all but impossible. This should give the protagonist pause too, but it doesn’t.

Note that this is actually WORSE if we accept that somehow the bot is indeed a sentient being.

Alright, Thelma, Louise, what do you have for me then? This work launched an irresistible-force torpedo of stolen trauma and/or invasive mental violence at me, and expected me to embrace it. In this case, the immovable object of my finicky scruples prevailed. It Bounced right off. Immovable object - 1, irresistible force - 0.

Played: 9/17/24
Playtime: 15m
Artistic/Technical ratings: Bouncy/Notable timed text intrusion
Would Play Again?: No, experience feels complete

Artistic scale: Bouncy, Mechanical, Sparks of Joy, Engaging, Transcendent
Technical scale: Unplayable, Intrusive, Notable (Bugginess), Mostly Seamless, Seamless
