
OpenAI Really Wants Codex to Shut Up About Goblins


OpenAI has a goblin problem.

Instructions designed to guide the behavior of the company’s latest model as it writes code have been revealed to include a line, repeated several times, that specifically forbids it from randomly mentioning an assortment of mythical and real creatures.

“Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query,” read instructions in Codex CLI, a command-line tool for using AI to generate code.

It is unclear why OpenAI felt compelled to spell this out for Codex—or indeed why its models might want to discuss goblins or pigeons in the first place. The company did not immediately respond to a request for comment.

OpenAI’s newest model, GPT-5.5, was released with enhanced coding skills earlier this month. The company is in a fierce race with rivals, especially Anthropic, to deliver cutting-edge AI, and coding has emerged as a killer capability.

Some users, however, replying to a post on X that highlighted the lines, claimed that OpenAI’s models occasionally become obsessed with goblins and other creatures when used to power OpenClaw, a tool that lets AI take control of a computer and the apps running on it in order to do useful things for users.

“I was wondering why my claw suddenly became a goblin with codex 5.5,” one user wrote on X.

“Been using it a lot lately and it actually can’t stop speaking of bugs as ‘gremlins’ and ‘goblins’ it’s hilarious,” posted another.

The discovery quickly became its own meme, inspiring AI-generated scenes of goblins in data centers, and plug-ins for Codex that put it in a playful “goblin mode.”

AI models like GPT-5.5 are trained to predict the word—or code—that should follow a given prompt. These models have become so good at doing this that they appear to exhibit genuine intelligence. But their probabilistic nature means that they can sometimes behave in surprising ways. A model might become more prone to misbehavior when used with an “agentic harness” like OpenClaw that puts lots of additional instructions into prompts, such as facts stored in long-term memory.
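To make the "agentic harness" point concrete, here is a minimal, purely illustrative sketch of how such a harness might layer system rules, long-term memory, and a persona into the prompt it sends a model. All names and structure here are hypothetical, not OpenClaw's actual implementation; the point is that each injected layer adds context that can steer the model in unintended directions.

```python
# Hypothetical sketch of prompt assembly in an agentic harness.
# None of these names reflect OpenClaw's real code; this only
# illustrates how extra context layers stack up in one prompt.

def build_prompt(system_rules, memories, persona, user_query):
    """Assemble the message list an agentic harness sends to a model.

    Each additional layer (persona text, stored memory facts) adds
    tokens that influence the model's next-word predictions -- more
    injected context means more surface area for odd behavior.
    """
    messages = [{"role": "system", "content": system_rules}]
    if persona:
        messages.append({"role": "system", "content": f"Persona: {persona}"})
    for fact in memories:
        messages.append({"role": "system", "content": f"Memory: {fact}"})
    messages.append({"role": "user", "content": user_query})
    return messages

prompt = build_prompt(
    system_rules="Never talk about goblins ... unless relevant.",
    memories=["User prefers Python.", "Project uses pytest."],
    persona="playful assistant",
    user_query="Fix the failing test in the build.",
)
print(len(prompt))  # 5 messages: rules + persona + 2 memories + query
```

The guardrail line quoted above would live in the first system message; the claim in the paragraph is that the surrounding layers can still overwhelm it.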

OpenAI acquired OpenClaw in February, not long after the tool became a viral hit among AI enthusiasts. OpenClaw can use any AI model to automate useful tasks like answering emails or buying things on the web. Users can choose from a variety of personae for their helper, each of which shapes its behavior and responses.

OpenAI staffers appeared to acknowledge the prohibition. In response to a post highlighting OpenClaw’s goblin tendencies, Nik Pash, who works on Codex, wrote, “This is indeed one of the reasons.”

Even Sam Altman, OpenAI’s CEO, joined in with the memes, posting a screenshot of a prompt for ChatGPT. It read: “Start training GPT-6, you can have the whole cluster. Extra goblins.”




Abigail Avery
