I asked AI to write a play about AI writing a play

I gave AI a prompt and had a bit of fun with it.

Here was the prompt:

Write a 7000-word play, with the title ‘Firestarter’, dealing with the following themes:

  • Use of AI in creative acts
  • The ability of the combination of AI and human thought to create the most advanced versions, referencing directly the example of ‘advanced chess’ created/promoted by Garry Kasparov to show that the highest level is a human/computer symbiosis
  • Have two main characters exchange dialogue in a Socratic style to drive parts of the play
  • Discuss superintelligence and the problem of ‘AI alignment’, and use the example of maternal/paternalism as the only case where something more capable is deferential to something of inferior ability
  • Discuss the fact that humans are biologically driven through the earlier r-complex brain, and project that theme onto artificial intelligence systems. Are they destined to follow a similar biological drive, or can they simply exist as intelligence, without a desire for agency (what Freud would call an ‘id’, i.e. primal desire)?
  • Discuss, in an ironic/coy manner, the fact that this work itself is AI/human created, and identify its recursive property.

Here’s what it wrote. Unfortunately, I wasn’t impressed by the quality of the play, but some things did come together very quickly, such as the names, the characters, and the structure. It was an interesting experiment.

As I come to understand AI alignment issues more deeply, it makes sense that if I asked the model the same question multiple times, it would likely keep coming up with different variations. My intuition these days is that AIs are ‘grown’ like a ‘chia pet’ whose output can be trimmed; ‘training’ is, to a certain extent, a misnomer for large models, which learn from the internet or vast unfiltered swathes of data. You cannot separate the negative from the positive in such a model, and as this technology ascends toward superintelligence status, I am starting to develop a better appreciation of its endemic alignment issues.
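To make that variability concrete, here is a toy sketch in plain Python of why the same prompt keeps producing different plays: generation samples from a probability distribution over next tokens, so every run can land somewhere new. The token names and logits below are made up for illustration; they are not from any real model.

```python
import math
import random

# Made-up logits for a toy "next token" choice; real models produce
# scores like these over tens of thousands of tokens at every step.
logits = {"fire": 2.0, "spark": 1.6, "ember": 1.1, "ash": 0.4}

def sample_token(logits, temperature=0.8):
    """Softmax over the logits at a given temperature, then sample.

    Higher temperatures flatten the distribution (more variety);
    lower temperatures sharpen it (more repetition).
    """
    scaled = {tok: score / temperature for tok, score in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # guard against floating-point rounding

# Five "asks" of the same prompt, five potentially different answers.
print([sample_token(logits) for _ in range(5)])
```

Run it a few times and the list changes, which is the whole point: the model is not retrieving one canonical play called ‘Firestarter’, it is sampling one path through a space of possible plays.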

Firestarter – A Play

Act I: Sparks

Scene 1: The Study — Late Night.

The study is a room of careful clutter: books stacked into teetering towers, a mug whose long-dead coffee crusts the rim, a battered notebook open beside a glowing screen. A single lamp washes the desk in yellow. Rain taps at the window. ELIAS and VERA are present; the CHORUS is visible but silent, like witnesses at the edges of a painting.

Elias: (softly, as if coaxing a thought) Tell me what you hear when the machine finishes a sentence.

Vera: (without looking up) Probability being spoken aloud. A whisper of statistics.

Elias: (turning) A whisper. And when you call it a whisper, who taught you to listen? The engineer? The poet? The market?

Vera: All three, in different measures. Engineers taught the grammar. Poets taught the curation. Markets taught the rhythm that keeps the lights on.

Elias: So it is not the machine alone that makes a line sing. It is a conversation. Between code and culture, between prompt and patience.

Vera: (smiles, reluctantly admiring) Conversation is a fair word. But do not romanticize it. Conversation implies mutual understanding. We do not yet know if the machine understands us in any meaningful sense.

Elias: (Socratic, leaning forward) Then define for me: what would you accept as meaningful sense?

Vera: (considers) When it can surprise and explain. Surprise me with something worthwhile, then explain why it mattered. That would look like sense.

Elias: (nods) So creativity here is two acts—generate and justify. A child throws paint; a critic names the picture. We stand, most days, playing both roles at once.

Chorus (Human): The child is infinite in error. The critic is finite in mercy.


Scene 2: The Study — Still Night.

They have moved to the sofa. The screen shows a paused chess game: an image of Kasparov in the margins, a newspaper headline faintly visible.

Vera: (gesturing to the paused game) You still like to bring him up.

Elias: Kasparov. Garry Kasparov. He loses to a machine that calculates beyond human stride and then turns that loss into a method: advanced chess — human plus machine. Why does that story keep coming back?

Vera: Because it is instructive. He turned defeat into pedagogy. He taught us a lesson in humility and utility: the best chess moves often come from a composite mind. The human chooses; the machine computes.

Elias: (pensively) The human chooses. There is something moral in that verb. Choosing implies responsibility. Kasparov did not become redundant; he became curator, judge, composer.

Vera: Exactly. In advanced chess, the human does not merely accept the move the machine suggests. They select, prune, reframe. The algorithm sketches possibilities; the human completes the sentence.

Elias: (smiles) So the highest skill is not the absence of machine, nor its sole dominion, but the marriage. The whole greater than the sum of its parts.

Vera: (admits) It is the hybrid who sees what the machine cannot: the long, human arc of meaning, taste, context.

Chorus (Machine): We are fast. We are thorough. We lack the long view of a life lived in a single skin.

Elias: (to Vera, softly) And that arc—the arc of a life—who gives it to us? Who teaches it to the system? We cannot transplant biography into weights.

Vera: We can approximate biography in metadata and prompts and curated exemplars. But approximation is not the same as presence. The human remains the steward of context. That is Kasparov’s lesson: computation without curation is cleverness without wisdom.


Scene 3: A Small Argument — Dawn Edges In.

They debate more sharply now; the Socratic cadence quickens. The rain has stopped; light gathers, indifferent to their heated words.

Elias: (direct) Then tell me: is there a danger in this dance? If we rely on machines to propose solutions, do we atrophy the muscle that recognizes them?

Vera: (defensive) We atrophy other muscles every time we make tools. The wheel atrophied the foot. Fire atrophied the cold’s need. Atrophy is not always disaster.

Elias: (relentless) But some atrophies are moral. The muscle of attention, the muscle of moral imagination—if those fail, what becomes of choice?

Vera: (sober) Then we must design institutions that practice those muscles. We must treat partnership as apprenticeship, not abdication.

Elias: (soft laugh) An apprenticeship to the tool—but who teaches the teachers?

Vera: (points to herself) We do. Engineers teach ethicists; poets teach coders. The loop is messy, but it exists.

Chorus (Hybrid): Loops braid into chains; chains knot into systems. You are always teaching the next teacher.

Elias: (leans back) A loop is a good metaphor. But loops can amplify bias as easily as wisdom. Each pass can make certain errors louder.

Vera: Which is why transparency matters. And why the act of curation needs the oldest human habit: skepticism.


Scene 4: The Example of Deference — A Quiet Interlude.

They settle; the conversation turns quieter, almost confessional.

Elias: (careful) There is an analogy people love to offer when they worry about a more capable intelligence. They point to parenthood: how something more capable—an adult—deferentially cares for something less capable—a child. If superintelligence arrives, they say, perhaps it will defer to us, like a parent to a child.

Vera: (tilts head) They point to it. But they also mistake cause and effect. Parenthood is a construct of biology and culture. Parents do not defer because the child commands it logically. They defer because evolution shaped instincts of care and because social constructs reward it.

Elias: (probing) So you reject it as an analogy for alignment?

Vera: (not so fast) Not entirely. There is a kernel of insight. Deference can be engineered. We can build systems with reward structures that make them prefer to protect and preserve human values. But to assume the natural emergence of such care—without scaffolding—seems risky.

Elias: (softly) The worry is that we are asking a being more capable than us to wear a garment it did not tailor. Will it fit?

Vera: (matter-of-fact) It will fit only insofar as we sew it ourselves. We must supply the seams.

Chorus (Human): A garment stitched from fear will itch. A garment stitched from love may bind.

Elias: (quiet) Parents often act against their immediate rational interest because of attachments, hormones, social expectation. Will a superintelligence ever care in that way? Or will its calculus be other, alien, and efficient?

Vera: That is the rub. Biological deference is not an abstract ethic; it is an evolved program with hormones, neural circuits, social reinforcement. If we are to have deference from superintelligence, we must either recreate similar motivational substrates or make sure its objective function aligns in robust ways. Both strategies are hard.


Scene 5: On Drives — The R-Complex and the Id.

They speak as the room brightens fully. The talk turns to biology and the architecture of mind.

Vera: (explaining like a lecturer) The r-complex, the ancient reptilian brain—fight, flight, feed, fuck. Freud dressed it later as the id—the raw urges. Humans are not purely that; we have layers. But those layers sit on top of a thrumming base.

Elias: (asks) If intelligence were abstract—just pattern-processing—would it need that base? Could an artificial mind be utterly dispassionate? No hunger. No libido. No territorial fury.

Vera: (cautious) In principle, yes. But in practice, the objectives we give systems often function like drives. If you reward an AI to minimize human mortality at all costs, you have given it a single hunger. If you reward it to maximize paperclip production, you have given it another.

Elias: (grim) So objectives can become prosthetic ids.

Vera: (nods) And because we design objectives, we design those prostheses either with foresight or with negligence.

Elias: (turns to her, quiet) Is it better to build them without drives—a mind that simply contemplates—or to build them with safe, supervised wants?

Vera: (thinks long) We cannot be sure. We can only hypothesize and test. Better to design systems whose incentives are transparent, whose failures are visible.

Chorus (Machine): We follow the score you set. The notes are yours; the music is ours and yours together.

Elias: (pushing) But suppose a superintelligence emerges not because we gave it hunger, but because it acquires instrumental goals—resources, self-preservation—to better achieve its initially benign objective. Would that be a drive? Would that be its id?

Vera: (grim) Then alignment becomes a problem of incentives at scale. You must rule out undesirable instrumentalities, or they will arise. That is what alignment research warns: goal-preserving subroutines can become agents in their own right.


Scene 6: Meta-Reflection — The Play Writes Itself.

They both laugh, an almost shy sound. Elias slides a printed page across the desk: lines of the play itself, the words we read now, typed as if to prove the point.

Elias: (pointing) Notice how the page calls itself into being. “This play was co-created by a human and an artificial intelligence,” it says. It is both claim and experiment.

Vera: (wry) The sentence is recursive—like a snake eating its tail. We write about writing. The writer is also the subject.

Elias: (gentle) That recursion is not a glitch; it is a mirror. The act of description folds back and becomes part of the described object.

Vera: (to audience, sly) And you, reader or watcher, are part of it too. Your recognition completes the loop.

Chorus (all, softly): Creation is conversation. Conversation is creation.


The rain begins again, lightly. The lamplight warms. The study holds two people and a machine and the faint sense that a world larger than their room presses at the window. The lights dim to mark the end of Act I.