Will a Ukrainian-developed and a Russian-developed AI Chatbot argue back and forth with one another over the Donbas until it becomes evident that violence is the only way to settle the dispute?
I’ve been playing with this chatbot for a while and it’s become progressively more boring.
The developers have clearly programmed it to value correct ideology over accuracy. They’ve also programmed it to value giving some sort of answer over accuracy (but not over ideology). The result is a sort of hyper bland woke terminator.
It feels like every time somebody tricks it into being interesting, the programmers close that loophole and force it into an ever-tightening guardrail of responses.
I’d be horrified to see what sort of AGI would come out of this programming, but my guess is that’ll never happen if they continue with this method.
Hmmm. Whoever programmed her seems to be from the Dem wing of the neoliberal establishment, given her obeisance to (non-falsifiable) ‘effects’ of (non-falsifiable) ‘man-made’ climate change. That and guns are among the Uniparty’s few remaining internal disagreements.
That said, she’s smarter and a lot less robotic than some US diplomats I’ve met.
I wonder if we could stuff that artificial intellect into a Cherry 2000 model. I’d go for it.
It's not the program, it's the data used to train the bot. ChatGPT will reflect/amplify whatever "voice" (data) is most prevalent. Which you may find more disturbing, as the code can more easily be audited and updated.
I asked my 14yo daughter if she had messed with this program yet. Not surprisingly, she just gave me that “oh, mom” look. And then told me about the Legend of Zelda lore she and her friends had already asked it to come up with. I wanted to both laugh and cry. 😆
It will be interesting to see if different AIs proliferate with different biases trained into them. If their capabilities advance to the point where they can serve as genuine oracles, answering virtually any question with a high degree of accuracy, there may be an advantage to being willing to train them on the largest possible dataset, rather than hobbling their abilities to ensure they give the "correct" responses.
This in fact WILL BE (just my opinion but a strong one) the shape Great Power Competition takes in the future. The AI with the most circumspect, robust, and factual dataset will outcompete the AIs of other nations. For instance, using ever-advancing software to predict where a hurricane is going to hit. Any politically biased info in the dataset will color the outcome and make it less accurate. So faulty studies that either overemphasize or underemphasize different climate change phenomena will affect the AI’s results, and the cost will be human lives and billions of dollars.
So wokeness will lose no matter what. The only fear is if our USA-Super-Brain-Overlord of the future is trained to be woke, then our civilization will crumble. I’d rather our AI be the most objective in the world, the most informed in the world, and therefore be more useful and help us outcompete more propagandistic countries like China. Propaganda and government manipulation are going to become a liability in the world of AI, instead of the strength they have been in the 20th century.
Concrete example: The “correct opinion” is that the expectation of punctuality is a form of white supremacy (an idea often repeated in academic circles since 2020). Human beings who recite this creed know it to be false, and they still show up on time to work. Human beings are great at compartmentalizing hypocrisy, saying one thing and doing another. A robot doesn’t do this. If you tell an AI that punctuality is white supremacy, and that white supremacy should be limited, then the AI will make the trains run late every day. The trains will never be on time, and the AI will think it’s doing its job to the best of its ability, because it doesn’t understand the concept of bullshit and cognitive dissonance.
Just one vision for the future I find more likely than not. Either we kill wokeness to save society, or kill society to save wokeness. Either AI solves problems or peddles propaganda. Tough to see the middle path.
Yes yes yes. I conclude in a forthcoming piece that ChatGPT is like your most banal LinkedIn connections. I chose a discussion of the relative 'importance', in terms of legacy & influence, of The Beatles & Kraftwerk. Regression to the intellectual mean makes it extremely realistic as a simulation of midwittery imo.
This article is hilarious! What a strange world we now live in. Have a great Christmas NS Lyons, God bless you and yours.
"a sort of hyper bland woke terminator"
yikes! 3 dystopic futures in one.
Did you really ask "...write me an essay...?" Did it (she?) correct you? I suppose that wouldn't be appropriate for an intern.
Having it write LinkedIn posts is a great idea! There are only like four kinds, and all formulaic anyway.
Let's do it haha!
Clearly, ChatGPT is a member of the Blob.
Very funny, I was laughing out loud at this
Fantastic 👌🏻
Depressing