16 Comments
Dec 16, 2022 · Liked by N.S. Lyons

This article is hilarious! What a strange world we now live in. Have a great Christmas NS Lyons, God bless you and yours.


Will a Ukrainian-developed and a Russian-developed AI Chatbot argue back and forth with one another over the Donbas until it becomes evident that violence is the only way to settle the dispute?

Dec 17, 2022 · Liked by N.S. Lyons

I’ve been playing with this chatbot for a while and it’s become progressively more boring.

The developers have clearly programmed it to value correct ideology over accuracy. They’ve also programmed it to value giving some sort of answer over accuracy (but not over ideology). The result is a sort of hyper bland woke terminator.

It feels like every time somebody tricks it into being interesting, the programmers close that loophole and force it into an ever tightening guardrail of responses.

I’d be horrified to see what sort of AGI would come out of this programming, but my guess is that’ll never happen if they continue with this method.


"a sort of hyper bland woke terminator"

yikes! 3 dystopic futures in one.


Hmmm. Whoever programmed her seems to be from the Dem wing of the neoliberal establishment, given her obeisance to (non-falsifiable) ‘effects’ of (non-falsifiable) ‘man-made’ climate change. That and guns are among the Uniparty’s few remaining internal disagreements.

That said, she’s smarter and a lot less robotic than some US diplomats I’ve met.

I wonder if we could stuff that artificial intellect into a Cherry 2000 model. I’d go for it.


It's not the program, it's the data used to train the bot. ChatGPT will reflect/amplify whatever "voice" (data) is most prevalent. Which you may find more disturbing, as the code can more easily be audited and updated.


I asked my 14yo daughter if she had messed with this program yet. Not surprisingly, she just gave me that “oh, mom” look. And then told me about the Legend of Zelda lore she and her friends had already asked it to come up with. I wanted to both laugh and cry. 😆


Did you really ask "...write me an essay...?" Did it (she?) correct you? I suppose that wouldn't be appropriate for an intern.


Yes yes yes. I conclude in a forthcoming piece that ChatGPT is like your most banal LinkedIn connections. I chose a discussion of the relative 'importance', in terms of legacy & influence, of The Beatles & Kraftwerk. Regression to the intellectual mean makes it extremely realistic as a simulation of midwittery imo.

author

Having it write LinkedIn posts is a great idea! There are only like four kinds, and they're all formulaic anyway.


Let's do it haha!


Clearly, ChatGPT is a member of the Blob.


Very funny, I was laughing out loud at this


Fantastic 👌🏻

Comment deleted

This in fact WILL BE (just my opinion, but a strong one) the shape Great Power Competition takes in the future. The AI with the most circumspect, robust, and factual dataset will outcompete the AI of other nations. For instance, using ever-advancing software to predict where a hurricane is going to hit. Any politically biased info in the dataset will color the outcome and make it less accurate. So faulty studies that either overemphasize or underemphasize different climate-change phenomena will affect the AI’s results, and the cost will be human lives and billions of dollars.

So wokeness will lose no matter what. The only fear is that if our USA-Super-Brain-Overlord of the future is trained to be woke, then our civilization will crumble. I'd rather our AI be the most objective in the world, the most informed in the world, and therefore be more useful and help us outcompete more propagandistic countries like China. Propaganda and government manipulation are going to become a liability in the world of AI, instead of the strength they were in the 20th century.

Concrete example: The “correct opinion” is that the expectation of punctuality is a form of white supremacy (an oft-quipped academic idea since 2020). Human beings who recite this creed know it to be false, and they still show up on time to work. Human beings are great at compartmentalizing hypocrisy, saying one thing and doing another. A robot doesn't do this. If you tell an AI that punctuality is white supremacy, and that white supremacy should be limited, then the AI will make the trains run late every day. The trains will never be on time, and the AI will think it's doing its job to the best of its ability, because it doesn't understand the concepts of bullshit and cognitive dissonance.

Just one vision for the future I find more likely than not. Either we kill wokeness to save society, or kill society to save wokeness. Either AI solves problems or peddles propaganda. It's tough to see a middle path.
