Alex Martsinovich


It's rude to show AI output to people

Imagine you're a scrambler.

Imagine that you encounter a signal. It is structured, and dense with information. It meets all the criteria of an intelligent transmission. Evolution and experience offer a variety of paths to follow, branch-points in the flowcharts that handle such input. Sometimes these signals come from conspecifics who have useful information to share, whose lives you'll defend according to the rules of kin selection. Sometimes they come from competitors or predators or other inimical entities that must be avoided or destroyed; in those cases, the information may prove of significant tactical value. Some signals may even arise from entities which, while not kin, can still serve as allies or symbionts in mutually beneficial pursuits. You can derive appropriate responses for any of these eventualities, and many others.

You decode the signals, and stumble:

I had a great time. I really enjoyed him. Even if he cost twice as much as any other hooker in the dome—

To fully appreciate Kesey's Quartet—

They hate us for our freedom—

Pay attention, now—

Understand.

There are no meaningful translations for these terms. They are needlessly recursive. They contain no usable intelligence, yet they are structured intelligently; there is no chance they could have arisen by chance.

The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.

Viruses do not arise from kin, symbionts, or other allies.

The signal is an attack.

Peter Watts, "Blindsight"

In the sci-fi novel Blindsight by Peter Watts, humanity encounters scramblers – an alien species that is intelligent but lacks consciousness. Scramblers make no attempt to contact humans and instead wage a unilateral total war against us. Why? Because we talk too much. Our little planet just can't stop chatting about nonsense, and scramblers know only one use for information: to perceive it. They can't help but listen to all our yapping, which only wastes their precious brain cycles and reduces their chances of survival. No peaceful species would do this; this is clearly an act of war.

When I first read Blindsight 10 years ago, I didn't understand how scramblers felt about human slop. But now I do. I feel it every time I read AI text outside of the privacy of my own chatroom.

Proof-of-thought

For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.

Now, AI has made text very, very, very cheap. Not only text, in fact. Code, images, video. All kinds of media. We can't rely on proof-of-thought anymore. Any text can be AI slop. If you read it, you're injured in this war. You engaged and replied – you're as good as dead. The dead internet is not just dead, it's poisoned. So what do we do?

Luckily for us, AI only talks when prompted. Unlike Earth, AI does not emit comedy sketches into outer space on its own. To get AI slop, somebody needs to ask for it. To spread it further, somebody needs to retransmit it. Our problem is other humans, really.

There's nothing wrong with using AI. When you do, you know what you're getting. The transaction is fully consensual. But whenever you propagate AI output, you're at risk of intentionally or unintentionally legitimizing it with your good name, providing it with a fake proof-of-thought. In some cases, it's fine, because you did think it through and adopted the AI output as your own. But in other cases, it is not, and our scrambler brain feels violated.

AI etiquette

╭─────────────────────────╮
│ I asked ChatGPT and     │
│ here's what it told me: │
│ ...                     │
╰─────────────────────────╯
                                    ╭─────────╮
                                    │ blocked │
                                    ╰─────────╯
Time to learn AI manners

I think that realistically, our main weapon in this war is AI etiquette. My own take on AI etiquette is that AI output can only be relayed if it's either adopted as your own or there is explicit consent from the receiving party. There's plenty of discussion to be had about what adopting entails and what can be considered consent, but I believe the core principle is sound.

"I asked ChatGPT and this is what it said: <...>".

Whoa, let me stop you right there, buddy: what you're doing is extremely, horribly rude.

"I had a helpful chat with ChatGPT about this topic some time ago and can share a log with you if you want."

Yeah, send it my way and I'll take a look.

"I vibe-coded this pull request in just 15 minutes. Please review"

Well, why don't you review it first?

"Here's my PR, I did this and that for this and that reason."

Thank you, I'll take a look.

Unlike scramblers, we only expose each other to meaningless noise by choice. Be polite, and don't send humans AI text.