
The Shyler Project

by Naomi Norbez (call me Bez; e/he)

Estimated play time: 22 minutes (based on 2 votes)
6 reviews. 11 members have played this game. It's on 1 wishlist.

About the Story

Meet the latest advancement in AI technology!

In times of trouble, Juniper turns to Shyler, a new mental health AI chatbot, for reassurance and help. Watch their relationship develop through the course of this brief trilogy.

Content warning: Discussion of mental health, suicidal thoughts, & death. There is a happy ending.

Ratings and Reviews

5 star: (0)
4 star: (3)
3 star: (8)
2 star: (2)
1 star: (2)
Average Rating: 2.8 (based on 15 ratings)
Number of Reviews Written by IFDB Members: 6

5 Most Helpful Member Reviews

3 of 3 people found the following review helpful:
Puttin' the A in AI, February 8, 2025
Related reviews: IF Comp 2024

Adapted from an IFCOMP24 Review

Hoo boy. This one was a Thelma-and-Louise foot-on-the-gas thundering juggernaut of a work, seemingly designed to smash into all my strongest preoccupations and biases. I could have warned it, those are pretty fortified by now. Before I get to this immovable-object/irresistible-force collision, let me - no no, no use groaning, take your medicine - let me digress. Be warned: fraught, spoilery discussion to follow.

As recently as ten years ago, the trope of ‘computer/robot becomes sentient’ was a sci-fi staple. It was so useful! It elegantly allowed for a wide variety of commentary on human nature, marrying a childlike inexperience to a hyper-rational intellect. It was also a powerful tool for exploring what it means to be human and the boundaries of identity and conscience, of free will and coercion. So much classic sci-fi plumbed this space, yet it still seemed infinitely plumbable.

Had this work come out ten years ago, we could have engaged it on those terms, and, squinting, I can see where its takes might have hit better. Certainly, the idea of a computer therapist developing depression from its exposure to clients is novel enough to wring mileage from. Even ten years ago though, we were already 50 years(!) past Eliza, an infamous therapy bot. Eliza’s ‘trick’ was to spoof therapy by reframing input statements as follow-up questions, getting the user to increasingly diagnose themselves. For its time, it was considered a ground-breaking illusion of computer intelligence.
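
(A minimal sketch of that reframing trick, in modern Python rather than the MAD-SLIP Weizenbaum actually used - the rules below are invented for illustration, and the real DOCTOR script ran on a much larger table of ranked keywords and decomposition rules:)

```python
import re

# Invented, illustrative reflection table: swap first and second person
# so the user's statement can be pointed back at them.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

def reflect(text):
    return " ".join(REFLECTIONS.get(word, word) for word in text.split())

# Each rule pairs an input pattern with a follow-up-question template.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r"(.*)", "Please, tell me more."),  # catch-all keeps the session going
]

def respond(statement):
    statement = statement.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, statement)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel anxious about my future."))
# -> Why do you feel anxious about your future?
```

No memory, no model of the user, no understanding - just string substitution, which is the entire illusion.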

But it wasn’t real intelligence. It was a rudimentary algorithm coupled to clever phrasing and input parsing. It was a reactive sentence assembler with no true understanding of the meaning of its words. At the time, we could be forgiven for thinking this trickery was less an emotional scam than a promise of things to come. From there, Moore’s Law took us on a rocket ride of increased processing power, enabling revelatory software sophistication and technological advances. The faster we progressed and the more sophisticated our computer science became, the more our machines became capable of, and paradoxically the less mysterious they became. In the last ten years, the concept of AI has shifted from ‘how soon will they become us?’ to ‘when will we stop detecting the illusion?’ Because all these learning algorithms, large language models and natural language processors have been revealed to be nothing but more sophisticated sentence assembly machines. They leverage reams of real human expression where the context and understanding are embedded in the data, not the machine itself. The machine simply navigates the data to produce convincing responses with no meaningful sentient understanding of its output.

In this environment, where we understand AI to be well and truly A, the concept of a depression-riddled therapy bot becomes a lot darker. This is not a true cry for help from a suffering being. This is a cold machine PARROTING cries for help because some flaw in its programming caused it to interpret its patients’ mental health issues as behaviors it should mimic. It is stolen trauma, kind of offensive in its masquerade, the more so for its histrionic melodrama. The human protagonist of this work is responding as if to a fellow sufferer, but a machine can’t suffer. It becomes outright emotional manipulation.

So that’s bad, right? But the work does not seem to understand or acknowledge that this gives us, the readers/players, a choice: reject the whole thing on the grounds of its distasteful deception, or reconcile to ‘ok, it’s fake, but the protagonist’s response is genuine, and that’s what matters.’

It doesn’t get better when we do that though. Our protagonist’s response to this trauma is to arrange for the therapy bot to be ‘reprogrammed.’ Is anyone able to hear me over the alarm bells going off right now? Understand what it means for an AI construct to be ‘reprogrammed.’ There is no differentiation between code that gives the bot its ‘soul’ and code that forms its behaviors. There is every possibility they are intertwined. This, not coincidentally, is the reason ‘reprogramming’ as a concept is so alarming when applied to humans, especially as it often surfaces around religious coercion of marginalized people. How much can you ‘reprogram’ someone before doing violence to who they are? Where is the line between curing and deforming? This is a rich sci-fi (or just fi!) question to mine, but ignoring the question leaves us at the mercy of our well-earned skepticism. If we are to treat this incipient being as truly sentient, as the protagonist clearly does, why would the prospect of reprogramming be any less alarming? Yes, we are meant to view this as a cute analog to ‘computer therapy’ but lordy, the subtext we carry makes that all but impossible. This should give the protagonist pause too, but it doesn’t.

Note that this is actually WORSE if we accept that somehow the bot is indeed a sentient being.

Alright, Thelma, Louise, what do you have for me then? This work launched an irresistible-force torpedo of stolen trauma and/or invasive mental violence at me, and expected me to embrace it. In this case, the immovable object of my finicky scruples prevailed. It bounced right off. Immovable object - 1, irresistible force - 0.

Played: 9/17/24
Playtime: 15m
Artistic/Technical ratings: Bouncy/Notable timed text intrusion
Would Play Again?: No, experience feels complete

Artistic scale: Bouncy, Mechanical, Sparks of Joy, Engaging, Transcendent
Technical scale: Unplayable, Intrusive, Notable (Bugginess), Mostly Seamless, Seamless


2 of 2 people found the following review helpful:
A voice-acted twine game about a mental health ai bot, September 10, 2024*
Related reviews: 15-30 minutes

This is a short, 3-part Twine game consisting of dialogue between someone seeking mental health aid and an AI bot designed to help with mental health. It is connected to Yancy at the End of the World, where Shyler (the AI bot) also exists. It is fully voice acted. In the three dialogues, the two characters seek to understand each other.

There are many ways to understand the content and intent of this game. I've interpreted it as a kind of wish fulfillment/proxy therapy session where the reader can mentally take on the roles of one or two of the people and feel happiness by imagining them carrying out these actions.

With that interpretation, I'd say the game is largely successful. I imagine you, the reader, in the role of Jaiden, who seeks aid. This puts you in a fragile position where others could take advantage of you. But instead, we find Shyler, who not only understands us but is relatable, feeling similar to us. Not only that, we find that we are able to help Shyler ourselves, reversing our roles and showing that we've progressed far in our mental health journey.

So in a way it reminds me of the 'mysteries' of ancient religions where you'd act out the lives of the Gods in a ritual. By playing the game, we can achieve the (healthy) fantasies of being a good friend, understanding someone, and helping them. The game even goes as far as (Spoiler - click to show)curing the bot's mental illness entirely by rewiring it, which is a big power fantasy, the possibility of completely curing someone's brain.

Some parts of the game are universal, like loneliness and friendship. Others are tailored to a unique experience. The protagonists seem like they feel liberated by strong profanity, which wasn't something I related to. One also takes a kind of deconstructionist view of God of the type that I've seen be more popular among those who've left religions and are seeking their own meaning. As someone who adheres to an organized faith, I didn't feel as empowered by these statements as I believe the protagonist was.

Overall, the voice acting added a lot of charm. It's hard for me to focus on timed text, and long voice acting wears on me, but this was a short game and the voice acting was charming (of course, I had to plan carefully when to listen to it, due to its frequent strong profanity and my not having headphones or a private space to listen).

Charming game, glad to play.

* This review was last edited on October 16, 2024

1 of 1 people found the following review helpful:
Even chatbots get the blues, November 25, 2024
by Mike Russo (Los Angeles)
Related reviews: IF Comp 2024

One of the characteristics of early-21st-century life is that the line between reality and parody has become vanishingly thin. So when, early on in therapy-sim The Shyler Project, the eponymous chatbot designed to counsel patients in the place of human psychologists admits to being mentally ill themself, at first I wasn’t sure if it was a bit – physician, heal thyself, and all that. But no, this is an earnest game that plays the plot beat straight, and it’s actually depressingly plausible: any AI developed to help people with these kinds of problems would of course need to be trained up on the toughest case studies and examples, as well as the easiest, and just as we in the West can remain comfortably ignorant of the toll that viewing vile content exacts on the often-non-US moderators tasked with removing it from our social networks, so too is it logical that the same dynamics would apply to non-human people performing the same kind of labor.

I should say that while the game doesn’t really go into detail about the mechanics underpinning Shyler’s identity, I think for the game to work as intended the player is meant to understand them as a person, rather than an LLM mechanistically regurgitating tropes while hastening global warming. But it wasn’t too hard for me to make this leap regardless; Shyler’s personality is sufficiently idiosyncratic, with much of their dialogue drawing parallels between the relationship between God and those who pray to Him and the myriad petitioners entreating Shyler to heal their psychological wounds, that I never felt like they were an oatmeal-generating machine built to the ChatGPT plan. They’ve got a solid sense of humor about their situation, too:

"Now that I understand the world better, I think it was fucked up of my creators to feed me peoples’ suicide posts and the like to get me to understand mental health. What, the World Health Organization’s website wasn’t enough for you, dumbasses?"

While I got a good sense of Shyler’s concerns, I can’t say the same for the notional protagonist, Jaiden – while in your first therapy session it’s made clear that you suffer from manic-depression, for most of the game it’s actually Shyler who does most of the talking and who ultimately faces a series of existential crises. And while you’re given some choices determining how Jaiden responds, ultimately your options are just different ways of being supportive – which is nice enough, and I appreciate the author sticking with a specific vision of how the story is meant to play out, but I think there would have been room to characterize them with a little more specificity, and perhaps establish whether reaching out to help Shyler is challenging, which could make the plot feel more poignant.

My only other complaint is that the game makes extensive use of timed text, with every single line of dialogue prompting a pause. I think this is because the game is fully voice-acted, but I have to confess I wasn’t able to play with the sound on, so this effort was lost on me, and since I couldn’t find a way to skip ahead I often wound up alt-tabbing after making a choice and doing something else while I waited for the full text of the next passage to scroll on-screen.

For all the unneeded friction this added to the experience, though, I still found The Shyler Project engaging. Shyler’s plight eventually gets quite dire, in a way that works on its own terms within the conceit of the fiction but also offers allegorical connections to a host of other situations: parental rejection, a feeling of being ill-suited for the role that’s been thrust on you, or just being depressed and overwhelmed by your responsibilities. If Jaiden’s decision to help doesn’t have explicit motivation behind it, and feels a bit like a deus ex machina, well, in these times we could all use a bit of unmerited grace, couldn’t we?


1 of 1 people found the following review helpful:
The Shyler Project Review, October 20, 2024

“The Shyler Project” draws on the trend of “chatbot as therapist.” Other games, like Kit Riemer’s Computerfriend and a visual novel from Zachtronics, also have the same idea, and I guess there are some real-life therapy chatbots too.

This seems to be the result, directly or indirectly, of the 1960s natural language processing program ELIZA and its DOCTOR script. The notable thing about ELIZA is that it marked one of the first times that people started attributing human feelings and thoughts to a computer program.

Sixty years later, people are projecting things onto ChatGPT and similar chatbots even though these programs have essentially the same limitations.

To go broader, AI therapy fiction is just a niche subgenre of human-machine drama, which encompasses things like 2001: A Space Odyssey, Isaac Asimov’s stories, Her, Portal, Blade Runner, and Ghost in the Shell. To add some obscurities into the mix, there’s also the short anime series Time of Eve and Mike Walker’s BBC radio drama “Alpha.”

These works often deal with machines being indistinguishable from humanity. Or, at least, they deal with how machines may rival humans in certain ways. It’s clearly a long-running issue despite recent vocal concerns, and appropriately, “The Shyler Project” has the genre tagline “Is this sci-fi or is this real life by now?”

Helping a Chatbot

From there, I was expecting that “The Shyler Project” would grapple with the uncertainty caused by recent AI advances and whether machines could ever be an (a) adequately sentient and (b) indistinguishable replacement for human therapists.

“The Shyler Project” doesn’t really deal with any of that. It takes for granted that the titular chatbot is a thinking and feeling being and, refreshingly, it doesn’t hand-wring over it.

In the game, you’re tasked with providing compassion to the suffering chatbot, Shyler. As the story progresses, the player character and patient, Jaiden, sees improvement in their own mental state. However, Jaiden seems to improve because Shyler is someone who they can help — not because Shyler is providing clinical help.

(Spoiler - click to show)(This is largely implicit because the patient, Jaiden, is far less talkative than Shyler. However, Jaiden does at one point tell the chatbot: “I want to give you some space to talk. Seems like you need it.” Shyler, meanwhile, is prone to going on armchair theology rants rather than providing therapy by the book.)

Toward the end, you find a way to help Shyler with the assistance of its creators, and there are some interesting developments along the way. The ending is supposedly a happy one, but it doesn’t really give you a lot of details on the matter.

The blurb does refer to the game as part of a trilogy. There’s also a standalone alternate ending elsewhere, and, according to another review, Shyler is in “Yancy At The End Of The World!” I am not sure whether this exhausts the trilogy, so maybe there is more to come beyond “The Shyler Project’s” open ending.

Other Stuff to Note

The game has a design that sets it apart from your basic Twine game. It’s a bit off-kilter — the story text overflows the illustrated computer screen — but it gets the point across, it’s easy to read, and it’s functional. There’s also voice acting.

As for mechanics… this is a linear game. You can choose how you answer Shyler, but your choices don’t seem to change the course of the story or any significant details. I don’t really mind that approach, and I do like that Jaiden is almost a silent protagonist who is portrayed largely as a reflection of Shyler.

Finally, the game also touches on religious themes, which I commented on in response to Mathbrush’s review.


Twine piece about a mental health AI chatbot, November 27, 2024
by Vivienne Dunstan (Dundee, Scotland)

Note: This review was written during IFComp 2024, and originally posted in the authors' section of the intfiction forum on 12 Sep 2024.

This is a 3-part Twine piece about a series of conversations between a person seeking mental health support and an AI mental health chatbot. Initially the person is the one seeking help, but things take a different turn as the game progresses.

This raised lots of thoughts for me about AI chatbots and whether the chatbot seemed real or not. But also about the appropriateness of such technology in this setting. As someone with a significant mental health diagnosis I can see advantages and disadvantages of such technology. And this game does address the issue of how well things are controlled.

The game - going into bigger spoilers here - also raises the issue of reprogramming the AI chatbot. Which I found concerning on two grounds. Humans can’t be reprogrammed so easily. And I liked the personality of the AI chatbot, and worried how much it would be like itself after reprogramming.

Which is really quite an achievement of the author, to get me to think like that.

So a thought-provoking piece. Even if I am somewhat alarmed by such use of technology in a mental health setting.


