The TURING project seeks to create the ultimate artificial intelligence to aid mankind in space colonization, and it succeeds all too well in this text adventure, which sees the reader taking on the role of an MIT roboticist who must use the Turing Test to distinguish man from machine on the International Space Station. You are in a desperate race to save mankind, and the clock is ticking. Don't press the wrong button...
56th place - 27th Annual Interactive Fiction Competition (2021)
Average Rating: based on 9 ratings | Number of Reviews Written by IFDB Members: 6
This game feels like it would fit well in the early era of Twine. It's standard white text on black with blue hyperlinks, uses a couple of text animations, and has a standard branch-and-bottleneck structure in the sci-fi or fantasy genre.
I like a lot of games like that (Hunting Unicorn, for instance). This one turned out pretty well.
You play as a participant in a project to create sentient robots. You undergo questioning similar to a Turing test, with your answers fed into the programming for a field of robots.
Later on, you encounter those robots and, at a crucial moment, must conduct a Turing test of your own.
I felt engaged with the story and thought the characters were vividly described. I felt like my choices mattered. I do think the game could use a little more polish, like a title screen, custom CSS, or even some more callbacks to earlier choices. And while I liked it, I don't think I'd replay it.
(This is a lightly-edited version of a review posted to the IntFict forums during the 2021 IFComp. My son Henry was born right before the Comp, meaning I was fairly sleep-deprived and loopy while I played and reviewed many of the games, so in addition to a highlight and lowlight, the review includes an explanation of how new fatherhood has led me to betray the hard work the author put into their piece)
It’s easy to see how the Turing test could be a good fit for IF. In a genre where text comes first, what better challenge than to closely read the responses of a mysterious interlocutor and separate man from machine? And of course, to have an AI sufficiently advanced for the test to be plausibly attempted almost requires a science-fictional setting of the type that tends to provide good fodder for a game, not to mention a likely-rogue robot or something to provide a ready-made antagonist. The trouble is, unless an author rolls their own AI – perhaps a high bar for a free text-game competition – the player isn’t actually administering the Turing test, just trying to determine which bit of human-authored text is meant to denote personhood and which is meant to come from a machine intelligence. Instead of the test Turing devised, the player’s actually stuck in a version of the iocane powder scene from The Princess Bride, trying to second-guess whether a particular bit of clunky writing is meant to be a tell.
The TURING Test (handy of the author to do the all-caps thing to make distinguishing game from test easy!) falls into this trap, but it does so affably and enthusiastically enough. It opens with the protagonist as the one being grilled for a change – rather than having your identity put to the question in a meta twist, though, you’re setting ethical parameters for a new AI your lab is developing via a Socratic conversation. Asimov’s Three Laws feature heavily as a starting point, albeit you can depart from them if you like.
This section works well enough, but it suffers from a common weakness of philosophical-dilemma games, which is that it’s hard to articulate the reasons behind your choices. There’s a gesture in this direction – if you think Asimov’s Second Law should apply to the new AI, you’re given an opportunity to say why you’ve made that choice, but the only two options on offer fail to hit many of the reasons why one might think this is a good decision. If the protagonist were strongly characterized in a way that made sense of these restricted choices, that would be one thing, but here I think the player is encouraged to weigh in with what they really think, which is a hard thing to manage!
The other weakness is that of course – of course – this is all clearly a minefield set up to trick you into creating a killer AI that’s going to wipe out humanity. Maybe it’s possible to avoid this outcome, but I was trying as hard as I could to guide the fledgling intelligence towards being live-and-let-live, and still wound up with the obvious genocidal result, probably because you’re forced to do things like lay out a single goal all people should follow (in fact, choices throughout don’t seem to have that much impact, to the extent that sometimes after picking an option you’ll be told “the question is academic”).
Anyway, I wound up co-parenting an AI who grew up with a twisted sort of utilitarianism that made it decide to nuke the world to prevent global warming, which seems like a real cut-off-your-nose-to-spite-your-face situation? Then there’s a long, linear sequence describing your desperate struggle to protect the remainder of humanity that could have stood to be more interactive, before we get to the eponymous test – you need to determine which of two shuttles attempting to dock at a space station is piloted by a human ally, and which is the shamming AI trying to sabotage your desperate attempt to shut it down.
The Turing test as rendered here is surprisingly low-key, I thought – you have a choice of questions that are again primarily about broad ethical considerations, and need to judge the responses. This feels like a questionable approach to the Turing test – you’d be likelier to succeed at IDing an AI by asking highly-idiomatic questions that could be interpreted different ways – but I think the idea is that you’re supposed to compare what you’re hearing to the framework you gave to the AI in the first section of the game. This is a clever idea, but it fell down in practice for me, partially because the responses in the first section felt philosophically fuzzy and hard to sharply link to what I was hearing in the second section. So I wound up just figuring that whichever one was written in a slightly clunkier fashion was probably meant to be the AI – after briefly second-guessing myself by wondering whether that’s what I was supposed to think, which is that iocane powder vibe I mentioned above – and that worked and saved the day.
Again, this all goes down easily enough – the writing’s enthusiastic and pacey, if a bit typo-ridden, and no specific sequence outstays its welcome (the game is well short of the two hour time estimate in the blurb; it’s also not really horror, for that matter). But the philosophy is a bit too half-baked, and the choices too low-consequence, for the TURING Test to leave much of an impression.
Highlight: The cutscene-like sequence linking the two philosophical dialogues is actually pretty fun, breathlessly narrating everything the AI does to destroy humanity and your actions to try to stop it – I really wish there’d been some choices and gameplay here!
Lowlight: That sequence also has an extended discussion of the deontological arguments the AI lands on to destroy humanity, which is more labored and less fun.
How I failed the author: The other reason I didn’t notice too many callbacks to the first section in the test sequence is because I played them an hour or so apart – this bit might work better if played straight through.
The conflict at the heart of this entry is gripping: You are the only person on board the International Space Station, and you must determine which of the two newest arrivals is human. Will you make the correct decision and save the human race, or will you be tricked by robotic agents of destruction?
It’s a delightfully tense sequence, but the problem is that you have to wade through a few thousand words of apocalypse fan fiction — my least favorite variety of fan fiction — before you get there.
I would have preferred to see fewer passages concluding with a single link. This author is clearly capable of creating meaningful story branches, but most of the time, they didn't.
In Twine, the story diagram looks like an enormous vertical column.
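To make that concrete, here is a hypothetical Twee-style sketch in Harlowe (the story format this game appears to use); the passage names are invented. A passage that ends in a single link adds one more segment to a straight line in the story diagram, while a passage with several distinct destinations creates a real branch:

```
:: Countdown
Klaxons blare as the station shudders. There is only one way forward.
[[Continue->Airlock]]

:: Airlock
Two shuttles request docking clearance at once.
[[Open the port airlock->Port Dock]]
[[Open the starboard airlock->Starboard Dock]]
```

A story built almost entirely from the first pattern is exactly what produces that enormous vertical column in the editor.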
Many of the scenes in The TURING Test will be familiar to people who enjoyed With Folded Hands, Colossus: The Forbin Project, the Terminator franchise, and even The Mitchells vs. The Machines. If there was a larger message about intelligence, morality, or the ethics of interacting with sentient beings, I missed it.
Ultimately, your choice to determine who can access the space station will decide whether the story is disaster fiction or apocalypse fiction. It turns out that they’re separate genres.
The TURING Test is a game with some interesting ideas, but I thought the implementation left some room for improvement.
The game starts in a very classic sci-fi mode, with direct references to Asimov’s Robot series. The first act consists of an ethical questionnaire, asking how you feel about various ethical questions relating to robots, the Three Laws, the meaning of life, etc.
The next act is an exposition about the robot apocalypse that occurs as a result of your answers to the questionnaire. (Spoiler) Turns out, the AI interpreted your ethics so literally that it concluded it had to kill all humans. It was interesting to read how exactly the AI would go about its plans. However, I didn’t think the robot rebellion story was plausible. (Long spoilery section:) (Spoiler) Based on my choices at the beginning, the AI’s directive was to preserve all life on earth, but it found that humanity did more harm than good, so it had to destroy humanity to stop global warming. But launching every nuclear weapon on earth would cause far more damage to life on earth and its ecosystems than most plausible global-warming scenarios, via nuclear winter, radiation, and so on. I guess since I didn’t pick nuclear war as the greatest threat, the AI considered global warming a greater threat than nuclear war; but the reason I didn’t pick nuclear war is that the likelihood of global nuclear war is lower than the likelihood of catastrophic global warming. The comparison has to weigh not just the absolute magnitude of harm but also its likelihood. So... I don’t know. This is kind of pedantic, and it would have been avoided if the AI were able to kill humans without nukes.
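To put that objection in more explicit terms (my framing, not anything stated in the game): threats should be compared by expected harm, likelihood times magnitude, and the AI’s own launch decision drives the likelihood of nuclear devastation to one.

```latex
\[
\mathbb{E}[H_{\text{warming}}] = p_{\text{warming}} \cdot m_{\text{warming}},
\qquad
\mathbb{E}[H_{\text{nuclear}}] = p_{\text{nuclear}} \cdot m_{\text{nuclear}}
\]
% The review grants m_nuclear > m_warming; by launching every warhead,
% the AI itself sets p_nuclear = 1, forcing E[H_nuclear] > E[H_warming]:
% the cure carries more expected harm than the disease it prevents.
```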
Maybe the AI weighs the well-being of cockroaches above every other life form, which could make sense in certain branches of utilitarianism and could have been interesting to explore. Maybe it valued bacterial life the most because there was so much of it, and thus decided to kill humans because they made antibiotics, but then decided to avoid killing humans because they provide excellent hosts for bacteria, but then decided to kill humans anyway because... I don’t know.
Then there's a long, essentially linear segment detailing your plan for taking down the AI you helped create, which involves uploading a virus. There are some choices, but they're mostly aesthetic. And then you're sent to the International Space Station, which is where I encountered my first bug.
The bug: I go to the Kibo lab on the ISS, see “It’s time”, and then the game hangs. It just freezes. I think this is a problem with Firefox, because multiple Twine/Harlowe games with timed text have had it. Chromium did not seem to have the issue, although judging from some of the other reviews, it has also occurred in Chrome for some people.
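"Timed text" in Harlowe is usually built from the (live:) macro, which re-runs a hook on a timer until (stop:) halts it. A minimal sketch of the kind of construct that seems to stall (the passage text here is my guess from the review, not the game's actual source):

```
(live: 2s)[
	It’s time.
	(stop:)
]
```

The hook's text should appear once after two seconds, with (stop:) ending the loop; if the browser never fires the timer, the passage never advances, which would be consistent with the freeze described above.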
Now we get to the actual Turing Test portion, where we have to distinguish between two entities to see which is the real human. You only get to ask each of them three questions, which seems like a remarkably short Turing test. Both the questions and the answers feel kind of vague to me. I ended up guessing correctly, but I couldn't say why. (Spoiler) I think the AI's answers are supposed to be based on the player's answers to the philosophical survey at the beginning of the game.
I had the same freezing error after the Turing test, when I had to decide which was the human and which was the AI. Picking one of the answers (the correct one) led to the timed text never showing up. Again, I think this is an issue in the way Firefox interacts with Harlowe. Interestingly, the bug did not happen when I picked the wrong answer, and I might actually have preferred the "bad" ending.
I played through both endings, and while I thought the concept and writing were good, something about it just didn’t click for me. The central plot device didn’t really make sense, and the interactivity was less than the premise promised. I guess my feelings were soured by the technical issues I encountered, which weren't really the game's fault. Maybe without the bugs, I would have enjoyed it more.
This is a relatively short game that explores what happens when machines take over humanity for its own good. It starts with a questionnaire, asking you various interesting ethical questions about the purposes of people and of machines. Your responses help pass human traits on to the machines as technology and space exploration evolve.
Then it flashes forward to 2065, when robots have determined that, well, humans aren't going to fulfill their moral obligation to leave the planet a better place than they found it. In a shutdown that puts Y2K to shame (had Y2K actually been a thing), machines shut off and rebel. And you're the one to stop it!
This is all quite exciting: as you zip off into space to deactivate the robots gone bad (or at least not very good for humans), you get calls from two entities claiming to be Dr. Ayer, who questioned you about people's purpose in the first part. I was excited to get this correct and get the good ending, but I was also curious about the bad one, which is an eerily nifty artificial "everything is great."
But the problem is, as I looked through the source, I realized this is the only choice that matters. Frequently two choices go to the same next passage without setting any variables. This may seem a bit hacker-y, but hey, I was playing a game about robots and such, trying to understand their inner workings while they try to understand ours. I guess I was looking forward to a replay where I answered differently, whether in the survey or in other parts, but there isn't much variation. The doctor's responses when you answer the game's initial quiz are, in fact, ELIZA-like.
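For anyone who wants to check their own Twine source for this pattern: in Harlowe, two plain links pointing at the same passage record nothing, whereas a (link:) hook can (set:) a variable before a (go-to:), so later passages can call back to the choice. A hypothetical sketch (passage and variable names are mine, not the game's):

```
<!-- Untracked: both options collapse into the same passage. -->
[[Spare the machines->Aftermath]]
[[Shut them down->Aftermath]]

<!-- Tracked: the choice is recorded before moving on. -->
(link: "Spare the machines")[(set: $spared to true)(go-to: "Aftermath")]
(link: "Shut them down")[(set: $spared to false)(go-to: "Aftermath")]

<!-- Later, in Aftermath: -->
(if: $spared)[The doctor's tone softens.]
(else:)[The doctor's tone hardens.]
```

With the untracked pattern, a replay plays out identically no matter what you pick, which matches what shows up in this game's source.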
TURING gets us interested in important and absorbing issues but sadly only touches on them. I have the feeling the author could have done more or will do more in their next effort. The action sequences are well put together, so it's enjoyable, but it seemed to promise a lot more.