Affable but philosophically unconvincing, December 24, 2021
(This is a lightly-edited version of a review posted to the IntFict forums during the 2021 IFComp. My son Henry was born right before the Comp, meaning I was fairly sleep-deprived and loopy while I played and reviewed many of the games, so in addition to a highlight and lowlight, the review includes an explanation of how new fatherhood has led me to betray the hard work the author put into their piece.)
It’s easy to see how the Turing test could be a good fit for IF. In a genre where text comes first, what better challenge than to closely read the responses of a mysterious interlocutor and separate man from machine? And of course having an AI sufficiently advanced for the test to be plausibly attempted almost requires a science-fictional setting of the type that tends to provide good fodder for a game, not to mention a likely-rogue robot or something to provide a ready-made antagonist. The trouble is, unless an author rolls their own AI – perhaps a high bar for a free text-game competition – the player isn’t actually administering the Turing test, just trying to determine which bit of human-authored text is meant to denote personhood and which is meant to come from a machine intelligence. Instead of the test Turing devised, the player’s actually stuck in a version of the iocane powder scene from The Princess Bride, trying to second-guess whether a particular bit of clunky writing is meant to be a tell.
The TURING Test (handy of the author to do the all-caps thing to make distinguishing game from test easy!) falls into this trap, but it does so affably and enthusiastically enough. It opens with the protagonist as the one being grilled for a change – rather than having your identity put to the question in a meta twist, though, you’re setting ethical parameters for a new AI your lab is developing via a Socratic conversation. Asimov’s Three Laws feature heavily as a starting point, though you can depart from them if you like.
This section works well enough, but it suffers from a common weakness of philosophical-dilemma games, which is that it’s hard to articulate the reasons behind your choices. There’s a gesture in this direction – if you think Asimov’s Second Law should apply to the new AI, you’re given an opportunity to say why you’ve made that choice, but the only two options on offer fail to hit many of the reasons why one might think this is a good decision. If the protagonist were strongly characterized in a way that made sense of these restricted choices, that would be one thing, but here I think the player is encouraged to weigh in with what they really think, which is a hard thing to manage!
The other weakness is that of course – of course – this is all clearly a minefield set up to trick you into creating a killer AI that’s going to wipe out humanity. Maybe it’s possible to avoid this outcome, but I was trying as hard as I could to guide the fledgling intelligence towards being live-and-let-live, and still wound up with the obvious genocidal result, probably because you’re forced to do things like lay out a single goal all people should follow (in fact choices throughout don’t seem to have that much impact, to the extent that sometimes after picking an option you’ll be told “the question is academic”).
Anyway, I wound up co-parenting an AI who grew up with a twisted sort of utilitarianism that made it decide to nuke the world to prevent global warming, which seems like a real cut-off-your-nose-to-spite-your-face situation? Then there’s a long, linear sequence describing your desperate struggle to protect the remainder of humanity that could have stood to be more interactive, before we get to the eponymous test – you need to determine which of two shuttles attempting to dock at a space station is piloted by a human ally, and which is the shamming AI trying to sabotage your desperate attempt to shut it down.
The Turing test as rendered here is surprisingly low-key, I thought – you have a choice of questions that are again primarily about broad ethical considerations, and need to judge the responses. This feels like a questionable approach to the Turing test – you’d be likelier to succeed at IDing an AI by asking highly idiomatic questions that could be interpreted in different ways – but I think the idea is that you’re supposed to compare what you’re hearing to the framework you gave the AI in the first section of the game. This is a clever idea, but it fell down in practice for me, partly because the responses in the first section felt philosophically fuzzy and hard to sharply link to what I was hearing in the second section. So I wound up just figuring that whichever one was written in a slightly clunkier fashion was probably meant to be the AI – after briefly second-guessing myself by wondering whether that’s what I was supposed to think, which is that iocane powder vibe I mentioned above – and that worked and saved the day.
Again, this all goes down easily enough – the writing’s enthusiastic and pacey, if a bit typo-ridden, and no specific sequence outstays its welcome (the game is well short of the two hour time estimate in the blurb; it’s also not really horror, for that matter). But the philosophy is a bit too half-baked, and the choices too low-consequence, for the TURING Test to leave much of an impression.
Highlight: The cutscene-like sequence linking the two philosophical dialogues is actually pretty fun, breathlessly narrating everything the AI does to destroy humanity and your actions to try to stop it – I really wish there’d been some choices and gameplay here!
Lowlight: That sequence also has an extended discussion of the deontological arguments the AI lands on to destroy humanity, which is more labored and less fun.
How I failed the author: The other reason I didn’t notice many callbacks to the first section in the test sequence is that I played them an hour or so apart – this bit might work better if played straight through.