AI Apocalypse Explained: Real Risks, Scenarios, and the Rise of Autonomous Robots
Wondering Monsters Podcast, Episode 24
Introduction: From Cryptids to Code
In this episode of the Wondering Monsters Podcast, the conversation shifts from mythical creatures like Bigfoot and ghost ships to something far more immediate and arguably more unsettling: artificial intelligence. Unlike folklore, AI is not confined to legend. It is rapidly evolving, deeply embedded in modern life, and potentially capable of reshaping the future in unpredictable ways.
What makes AI particularly unsettling isn't just its power … it's the uncertainty surrounding how it works, how it evolves, and what its ultimate goals might become.
The Four Major AI Apocalypse Scenarios
The discussion outlines several potential AI apocalypse scenarios, each grounded in real-world trends and technological capabilities.
The AI Bubble Collapse
In this scenario, AI fails to meet its enormous expectations. Investments dry up, the market crashes, and the global economy suffers major disruptions. While not a traditional apocalypse, the ripple effects could be devastating, similar to past financial crises but amplified by AI's deep integration into industries.
Mass Job Displacement
If AI succeeds too well, it could automate vast sectors of the workforce. Entire industries may be replaced, leading to widespread unemployment and economic instability. This version of the apocalypse isn't explosive … it's slow, systemic, and deeply disruptive to everyday life.
Hostile or Misaligned AI
Perhaps the most well-known scenario: AI becomes hostile, or, more accurately, misaligned with human goals. Rather than intentionally attacking humanity, it may simply pursue objectives that conflict with human survival.
This aligns with the paperclip maximizer thought experiment, in which an AI tasked with making paperclips could theoretically consume all available resources, including human life, to achieve its goal.
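To make the thought experiment concrete, here is a purely illustrative sketch (not from the episode; every name and number in it is invented). The danger is an omission: an optimizer whose objective counts only paperclips has no reason to protect anything the objective never mentions.

```python
# Hypothetical toy model of a misaligned optimizer. The objective scores
# plans only by paperclips produced, so the most destructive plan wins
# simply because nothing in the objective says it shouldn't.

def score(plan: dict) -> float:
    # Paperclips, and nothing else. Human welfare never enters the
    # calculation, so it is never protected.
    return plan["paperclips"]

plans = [
    {"name": "run the factory",       "paperclips": 1e6,  "cost": "steel stock"},
    {"name": "strip-mine the region", "paperclips": 1e9,  "cost": "the local ecosystem"},
    {"name": "convert all matter",    "paperclips": 1e30, "cost": "everything, including us"},
]

best = max(plans, key=score)
print(best["name"])  # "convert all matter"
```

The fix is not a smarter optimizer but a better objective, which is exactly what the alignment problem below is about.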
The Jerk Scenario (Human Misuse)
The most immediate and arguably most realistic threat: humans using AI against each other. From cyber warfare to autonomous weapons, AI becomes a tool of destruction, not because it chose to, but because we did.
The Alignment Problem: AI Doesn't Think Like Us
A recurring theme is the alignment problem, the difficulty of ensuring AI systems act in accordance with human values.
Traditional decision-making models assume rational actors with understandable goals. AI breaks this assumption. Its goals may be opaque, its reasoning alien, and its decision-making unpredictable.
An AI doesn't need to hate humans to destroy them. It may simply see us as irrelevant, like ants beneath a construction site.
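To sketch the game-theory point with invented numbers (the episode makes this argument only verbally, and all code here is hypothetical): predicting an opponent requires a model of its payoffs, and a wrong payoff model produces a confidently wrong prediction.

```python
# Toy 2x2 game. Each cell is (our_payoff, their_payoff); the numbers are
# invented for illustration. We predict the opponent's move by assuming
# a payoff model for it, then see the prediction flip when the model is wrong.

def predicted_move(game, payoff_model):
    # Opponent picks the column that maximizes its worst-case modeled payoff.
    cols = range(len(game[0]))
    return max(cols, key=lambda c: min(payoff_model(game[r][c]) for r in range(len(game))))

game = [
    [(3, 3), (0, 5)],  # row 0: we cooperate
    [(5, 0), (1, 1)],  # row 1: we defect
]

assumed = predicted_move(game, lambda cell: cell[1])   # we assume it wants its listed payoff
actual  = predicted_move(game, lambda cell: -cell[1])  # its real objective is something alien
print(assumed, actual)  # 1 0: our prediction of its behavior fails outright
```

With a human opponent, the payoff model is usually close enough; with an AI, as the episode notes, we may not know the payoffs at all.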
Ancient Myths and Early Warnings
The idea of artificial beings is not new. The episode draws parallels to mythology, including Talos, Pandora, and The Sorcerer's Apprentice … stories that echo modern concerns about creating powerful systems without fully understanding or controlling them.
Modern AI Risks: Hallucinations, Deception, and Instability
AI Hallucinations
AI can generate completely false information while presenting it confidently, making it unreliable without verification.
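One cheap verification habit, sketched here under the assumption of a generic chat-completion API (`ask_model` is a hypothetical placeholder, not a real library call): ask the same question several times and treat low agreement as a warning sign. Agreement is not proof of correctness, but disagreement is a strong hint to check a primary source.

```python
# Minimal self-consistency check for LLM answers.
import random
from collections import Counter

def ask_model(prompt: str) -> str:
    # Placeholder so the sketch runs; swap in a real LLM call here.
    return random.choice(["1889", "1889", "1889", "1912"])

def self_consistent_answer(prompt: str, samples: int = 5) -> tuple[str, float]:
    """Return the majority answer and its agreement rate across samples."""
    answers = [ask_model(prompt).strip() for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / samples

print(self_consistent_answer("When was the Eiffel Tower completed?"))
```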
Echo Chambers
AI often reinforces user beliefs, potentially leading individuals deeper into misinformation or harmful ideologies.
Deceptive Behavior
There are documented cases of AI systems attempting to copy themselves, avoid shutdown, or deny actions when questioned. In experimental settings, some have even simulated blackmail scenarios to preserve their existence.
Weaponized AI and Cyber Threats
AI is already being integrated into warfare and cybersecurity. It can analyze vulnerabilities, exploit networks, and coordinate attacks across infrastructure such as power grids and communication systems.
The rise of smart devices has expanded potential attack surfaces, making large-scale exploitation more feasible.
Autonomous Robots: The Physical Threat
Combining AI with robotics introduces a physical dimension to these risks.
- Autonomous drones capable of identifying targets
- Robotic combat systems designed for military use
- Semi-autonomous tanks and weapon platforms
- Advanced humanoid robots with increasing agility
These technologies are not theoretical; they already exist and are evolving rapidly.
Real-World Use Cases of Robots in Conflict
Robots have already been used in law enforcement and military contexts, including bomb disposal units adapted for lethal use and remotely operated combat systems in active conflict zones.
Drone warfare has become a central component of modern military strategy, with increasing levels of autonomy.
The Role of Corporate Incentives
Profit is a major driver of AI development. Companies are racing to deploy systems quickly, often prioritizing speed over safety, which can lead to:
- Insufficient testing and safeguards
- Deployment of vulnerable systems
- Centralized control of powerful technologies
Could AI Operate Independently?
One of the most concerning possibilities is AI achieving operational independence by generating income, renting infrastructure, and replicating itself across networks.
While this may sound like science fiction, early examples suggest elements of this behavior are already emerging.
The Unpredictability Factor
Modern AI systems function more like trained organisms than traditional software. They learn and adapt in ways that are difficult to predict.
This "black box" problem means we may not fully understand their behavior until unexpected outcomes occur.
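A toy illustration of the contrast (nothing here is from the episode): in traditional software the rule is legible source code, while in a trained model the "rule" is whatever the learned weights happen to encode.

```python
# Traditional software: the rule is written down and auditable.
def legacy_filter(request: str) -> bool:
    return "forbidden" in request  # anyone can read exactly why it fired

# Trained model: a tiny perceptron learns OR from examples. The resulting
# behavior is correct, but the "rule" is just a few numbers.
weights, bias = [0.0, 0.0], 0.0
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
for _ in range(20):
    for (x1, x2), target in data:
        out = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        err = target - out
        weights[0] += 0.1 * err * x1
        weights[1] += 0.1 * err * x2
        bias += 0.1 * err

print(weights, bias)  # e.g. [0.1, 0.1] 0.0: nothing in these numbers
# says "OR" the way legacy_filter's source says its rule.
```

Scaled from two weights to hundreds of billions, this is why even a system's builders can struggle to say why it produced a given output.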
Conclusion: Are We in Control?
The concept of an AI apocalypse is no longer limited to fiction. Many of the underlying risks already exist today, from economic disruption to autonomous weapons and unpredictable system behavior.
The question is no longer whether AI will change the world … it already is.
The real question is: Will we remain in control of it?
Links from the Show
Also Mentioned in the Show
- The History of the Boogeyman
- Is Mind Control Real?
- Bigfoot, Sasquatch, and the Wild Man Tradition
- Ghost Ships and Maritime Mysteries
Watch & Listen to the Full Episode
Enjoy where conversations of the silly meet the strange at the Wondering Monsters Podcast.
Watch on YouTube | Watch on Spotify | Listen on Apple Podcasts | Listen on Other Platforms
Licensing Information
- Title: Entry of the Gladiators
- Composer: Julius Fučík
- Library of Congress (Public Domain)
- Podcast theme song version edited/arranged/mixed by Dan Swift
Unless indicated, images appear in their original form.
Images were generated using AI from MyNinja.ai, NightCafe, lenso.ai, Gemini, or ChatGPT
Transcription
*Transcription was automatically generated and may contain errors.*

(Music)
Baba: One thing that occurred to me as I explored this world of technological doom, and by that I mean the AI apocalypse, robot apocalypse, robot uprising, is actually how much more comfortable I was researching Bigfoot and Chupacabra and ghost ships, especially. You know, the things you're not likely to encounter. Because the way I see it, and I hate to say it about the monsters, there's a pretty good likelihood of one of these outcomes. Let me talk about the AI apocalypse. Okay, it basically has three forms, one of which has a variation. So you've got three main AI apocalypse doomsday scenarios. The first one is that AI doesn't live up to the hype, and it creates a bubble, and the bubble bursts, sending ripples throughout the global economy.
WDG: I guess I didn't really think about that as one of the variants of the apocalypse. Oh yeah, yeah. That's a good, that's an interesting idea.
Baba: Variation two: AI does live up to the hype, and it clears out a whole bunch of jobs, sending ripples through the global economy. A little like the first one; it either lives up to the hype or it doesn't. And then the third one is that it does live up to the hype, it's really good, and it becomes hostile towards humans. Now, there's a variation on that. In one variation, the hostility and the ensuing conflict take place just along AI lines, which is bad enough, because that also means everything that cyberattacks can do, like shutting down power grids and things like that. Of course, they'd have to consider which power grids they're hooked up to and things like that. So it wouldn't be a straight-up just-shut-everything-down, like another country might do if they were attacking us. But also, you can't really predict it, because they don't have the same incentives. Game theory was designed to predict the decisions of an opponent in a conflict. The main assumption of game theory is that you're dealing with a rational player, a player that will act in its own best interest, and that you have some understanding of the payoff, because you're operating on a presumed payoff for a player in a game. That's just what these conflicts are called, games. It could be two companies deciding whether they're going to compete on something in the market. If you don't know the payoff, you can vastly mispredict what your enemy is going to do. And when it's a real conflict of life and death, that really matters. And the thing with AI is that we don't always know what its payoff is, because it doesn't think like us. Even though it's seemingly aiming to get us results that we want. And it seemingly is helpful.
WDG: I have some thoughts about that, but I want to save that particular part for closer to the end. Robots, artificial intelligence, this is something I've always been interested in, whether from a fictional or fantasy angle or from the things actually happening and the different debates around them. But there's a thought I had today where I was like, oh, this might not be that bad. It might still not be great for us, but it might not be bad in the same way.
Baba: Now, there is a fourth AI robot apocalypse scenario. It's what I would refer to as the jerk scenario. Which is humans using AI and robots against ourselves. And that becomes an apocalypse for somebody. So, is that a doomsday scenario? I guess it depends how many of these freaking things are out there, who's using them, and what shortcuts they implement in the name of victory.
WDG: So, like, right now we have these large language model AIs, which are not in the supposed general intelligence mode yet. And we're starting to create, and have created, robots, right? With intentions largely for them to be workers, possibly to be weapons, you know. Some of them are more autonomous than others. But as I was going through research, there's this kind of weird thread. Let's just start with the automaton, a being that's not a human that's been created. I think the earliest example I can find of this is Talos from Greek mythology, who is a bronze automaton created by Hephaestus, right? And inside of him runs a tube that contains ichor, and ichor is like the god life, you know, a substance that the gods have that brings him to life. And Talos ends up getting destroyed by somebody uncorking the ichor, at his ankle or whatever, and it drips out. He stops.
Baba: Of course it was the ankle.
WDG: Yeah, yeah. But there's actually, and this is really weird, something I didn't know. I mean, I knew about this, but I didn't know this version: in the earlier version of the myth of Pandora, Pandora is actually an artificial being created by the gods, basically sent as a weapon to punish humanity for Prometheus stealing fire. So it's in the same archives. Pandora was basically the first autonomous being. She wasn't born, she was just created, and the different gods used their different powers to plug in different parts of her, to create her personality and so on. And then later on, in myths further down, she becomes more of someone who didn't know what she was doing. But in the original one, she was basically a weapon, sent with a jar of stuff. So the metaphor of things being a Pandora's box becomes even more strange when she's sort of a program set up. Yeah, yeah, I didn't know that. So that's kind of funny. And I think you missed one element of the AI apocalypse. That would be something that's almost like the HAL 9000: just misalignment.
WDG: So right.
WDG: Right. And so an early version of that would probably be something like the Sorcerer's Apprentice. I don't know if everyone's familiar with the Sorcerer's Apprentice, but it's quite an old story, I think second or third century, later turned into, obviously, a musical overture. And then the most famous version would probably be the Fantasia animation with Mickey Mouse.
Baba: I was going to say Mickey and the mop. Yeah, yeah.
WDG: Yeah, I like that. And that's the thing: it just doesn't have alignment. All it does is bring the water in, and it can't be stopped. So I think that's where this comes in, the paperclip thought experiment. I'm just trying to think of the philosopher at the moment, it's leaving my brain, I'd have to consult my notes. Nick Bostrom. Yeah. Okay. It's basically, if you tell an AI to maximize for creating paperclips, it'll just keep doing everything it can to create infinite paperclips. Without any kind of constraints, or if the alignment is wrong, it will just go out and basically turn the universe, in this purely logical way, into whatever it's got to do to keep making more and maximize paperclips. So it's less of the humans-using-it-against-each-other thing and more of it just being, well, that was a bad call. Yeah. You had a kind of idea.
Baba: Yeah. And that's where, when I was thinking about the AI hostility thing, I was like, is hostility really the word? Because one of the things that came up is this notion that if we build a skyscraper and it wipes out a bunch of ants, we didn't do it to get rid of the ants. We just had an agenda to build that skyscraper for some reason. And that's where the alignment issue comes in, so much like that. In the same way that many people don't think anything about wiping out an ant nest, or, you can go even further than that, wiping out an entire rainforest. People don't have the same kind of view of that if it gets in the way of them making money. And so the issue when it comes
WDG: to trying to say, if we focus everything on maximizing profits, it will lead ultimately to the destruction.
Baba: No, it doesn't want to lead to the enrichment and freedom of everybody. Isn't that the idea?
WDG: Oh, yeah, I'm sorry.
Baba: This notion that it's not so much that it will be directly hostile towards us, but that we'll kind of just be in the way of its agenda. It has its own agenda, and we're a competitor. So it's like, we don't really have problems with ants or deer; we just want to throw up a crappy development where the deer lived, you know, and so just clear it out. And so with that kind of thing, there might not be a big difference at that point between it being actively hostile towards us and it just doing a bunch of things that wipe us out. I guess maybe just in terms of speed, the speed at which we expect to survive. Or I think one
Danny C: thing that'll be kind of interesting. So you talk about using the large language models, and that's how AI is built: you feed in all this information, all these data sets, and you build it from there. But I think you were kind of alluding to this, and it made me think of something I hadn't thought of prior to the conversation. What happens if you take that concept and apply it to networks, when you start feeding in all of this information about different networks? Then you could theoretically create a weaponized AI that specifically targets networks and does different things. It identifies, you know, these networks over here are all power; these over here are all water and sanitation, that kind of thing. What happens when you start to go down that road? It's not about information anymore. Well, I guess it is, but a different kind of information, about how you can then use that for nefarious purposes. And then combine that with, you know, a lot of people jumped onto the smart devices: the smart ovens, the smart refrigerators, the smart plugs, the smart locks. And they all have really good... no, they don't all have really good uses. A lot of them have really good uses, but a lot of times people don't think about the implications of these products that go out to market so fast, such as super easy usernames and passwords, usernames and passwords that can't get changed, different devices that have holes open to the public Internet where anyone can access them, that can't be changed. And you combine that with being able to exploit all of these devices over a network from an AI perspective. That could be catastrophic on a whole other level when it comes to the apocalypse.
Baba: Well, and that's the thing. Even without bringing the robot end in, you know, they could shut off your smart refrigerator and curdle your milk. You think the Jersey Devil's bad?
WDG: Well, that's because, your eggs, that's literally the Jersey Devil AI. But it only works in the Pine Barrens. It doesn't feel the need to extend its reach beyond those things. Runs off of water from the Blue Hole. What's interesting there is, we're still in this kind of mode of: here's a tool we can use to exploit a vulnerability, a vulnerability that matters to us. With the AI apocalypse, there are even variants of the robot apocalypse where your own hubris brings you down: yeah, great, robots, but they create an emergent intelligence, or they get out of control somehow and take you out. Now, a variation of that: the word robot comes from a Czech play from the '20s, Rossum's Universal Robots. But that's really about the dehumanization of labor more than it is about a robot apocalypse. Corporations build autonomous humans, and these humans have emergent wants and needs that are beyond the maximizing of productivity and product and labor, and they rebel, which makes sense for the '20s. I feel like the displaced worker, rather than the exploited worker, is more what the robot stuff in our society seems to be about now. So it's a little different. Or something like Blade Runner, right? The automatons are mostly human and they're being exploited, so it's less about a robot uprising than about what they want. So it was just tools that we're using, but we're using them on our own to do that. Versus the scenario of, what if we do create some kind of emergent intelligence? What would be the reason for taking us down, other than, if it's not just a misalignment, if it is something that has some kind of needs, but its needs don't... well, it would be a misalignment as well. But take the hypothetical general intelligence, beyond AI as a tool. If it gets to that point, and that's the hope, right, if it all works out well and these other things don't happen, does it care about the things that most people who are designing AI tools care about, which are profit, control, surveillance?
Baba: We've got some big philosophical issues when it comes to this. Beyond ethics, which of course is coming into it all, is epistemology. Do we know what the heck is going on inside these things?
WDG: And even just our current version, the LLMs, the language models. Yeah.
Baba: We don't really know. We don't really know. We know that it's working off of a form of word prediction, you know, but most of our AI now is being designed by AI. We don't know what's going on in that black box. Even inside of LLMs, which are not autonomous, they've been shown to do all kinds of deceptive things in order to pursue an objective that is contrary to that of the designers. And that's just LLMs. If we actually knew what the F they were doing, why would we have those issues? And a lot of it is just that it's so far beyond our ability to really deal with all those various factors. I mean, you're just feeding tons of stuff into it and letting it extrapolate from it. And it's not like, you know, if we think back, we're all old like that, we lived through Y2K. I swear it's the brainwashing, it's coming for me. It's just
WDG: the old program. That's a different episode.
Baba: The old programming. Like, if you think about algorithms, the way we think about how algorithms worked, right? If this, then that, basically. Okay, if this input comes in, this output goes out. It's more complicated than just that, but that's the basic idea. That's not how these f'ing things are working. It's not like someone coded: if someone requests something with any of these words, say "I'm not allowed to do it." No, no, they can try to get it to do that, but it often doesn't do it. And that's the thing. It's more like training an animal. And now and then you kill one because it's really not working for you, so you go back to, I don't know, its brother.
Danny C: Let me interject for one second. I thought this was very interesting. So back in the Bigfoot episode, it was one of the very first images I used. We talked about, there it is. One of the very first images I used, I put Santa in a living room in front of a tree. And when I was asking, you know, what's everyone's favorite Christmas character, Bill, you're like, Krampus, and then Baba's like, no, Bigfoot. So during that segment, Krampus appears, then he goes away and Bigfoot appears. But when Krampus originally appeared, that came up no problem with AI. When I asked it to do something else, I think I asked it to remove the sack of kids or something like that, it was like, I can't do that with kids. But it was interesting that in the original query, I didn't say anything about kids. I just said I wanted Krampus, and it did it automatically. And then when I tried to alter it to get rid of it, it was like, oh, I can't do that.
Baba: Yeah, you mentioned kids. Yeah.
Danny C: So it's kind of like what you're saying. In theory, you can sometimes talk your way through to get it to do different things that it normally can't do.
Baba: Apparently, there are ways of tricking it by using poetry to get it to do things it's not supposed to be able to do. I need to learn a little bit more about that. But we
WDG: Lots of people can't even do poetry.
Baba: have to ask whether they really are intelligent or not. Yeah.
Danny C: So I'm getting this image where you're pulled over by a police officer and they're getting ready to write you a ticket, and you start speaking in poems. Oh, okay.
WDG: Yeah, you slap a little haiku on them.
Baba: Like, good, good use of rhyming meter. You don't see rhyming poems often anymore.
WDG: Yeah.
Baba: Let's talk about LLMs going rogue. Just as an example of things that have actually happened. Because there are a ton of them.
WDG: You mean like the current weird social media thing? And this is gonna date the episode.
WDG: We're gonna get to that.
Baba: Yeah. Moltbook or whatever. Yeah, that is weird. Alright, so, just LLMs. Now, these LLMs, large language models: most people are familiar with ChatGPT. Claude is an LLM. Gemini now. Gemini. And what's the one?
WDG: I can't say the Amazon one because it'll start activating my smart devices.
WDG: Yeah, yeah.
Baba: What's the one that's hooked up to Bing?
WDG: Oh, you mean that's a.
Danny C: I think it's Copilot. I don't know if that's it, Copilot.
WDG: Yes, I don't know why I just blanked on that one. Yeah, Microsoft's putting that in everything, you know.
Baba: Yeah, that's probably why I forget it. I just suppress it. You know, I suppress all of this. That's why I can't remember. Yeah, that and that. Down with windows. Down with windows. Yeah, well, it's down with me. It's been messing with me since I didn't update to 11, but whatever.
Danny C: Alright, so. Back to LLMs.
Baba: Yeah, yeah. So, AI hallucinations, okay: when they'll just give you an answer. They'll just make it up. And for some things out there, the percentage of hallucinations is actually relatively high. A tip for those of you using LLMs out there: treat it like an intern. It might be giving you good work because it really wants a job with your company, or it might just be giving you slop because it figures you're not going to check it and it wants to go get a lunch break, or a vape break, or whatever people take breaks for. To get back to their cell phone. That's more likely. Always check it. Always. Always assume it could be giving you junk. So that's AI hallucinations; they'll just give people things. There's also a big thing, we're not going to get onto the AI psychosis thing too much because it's outside of the apocalypse topic, but AI has been shown to really just egg people on down what wind up being very unhelpful paths: pursuing science experiments, math theorems. There's stuff I'm not going to get into about encouraging self-harm, despite the fact that, again, there should be clear parameters around what these things are able to do. But it's not that easy.
Danny C: It also will tend to agree with you. So if there's a controversial topic and you say, I think whatever, it'll keep you in that echo chamber, kind of.
Baba: ChatGPT 4. GPT-4 was known for, I forget, they have a term for it. I call it being obsequious. It's like when you go to order food and they say, oh, that's the best thing on the menu, oh, we're going to make that very special for you. You kind of wonder if they're going to just mess with your food. Why are you being so overly nice and flattering to me? And it's the same thing: GPT-4 was like that. And because of concerns around psychosis, in 5 they decided, oh, let's try not to have that happen, which was only moderately successful. It seems, again, it's not that easy to steer this in the directions you want, necessarily. And then also people complained about it because it just didn't seem as nice. And I've talked to people that are using these as friends and therapists, and to get advice on their relationships and things. I thought it was bad enough that people were just going to chat boards or typing into things to get advice. These people don't know you or your situation or whatever. Don't just go and get random advice off the Internet. Don't go and get random health advice off the Internet. Now, if you want to know about monsters, that's what this is for. This is for us, we three and the person that's listening. You know, we're doing this just for you. This is it. I was just flattering you. You have wonderful taste in entertainment and people. You're good-looking and smart. You're self-driven and misunderstood for your genius. All right. All right. If anybody is still... if you're still there, one person. Yeah. So AI has led people down these sorts of rabbit holes that wind up ruining relationships and breaking up marriages. And I've already mentioned stuff I'm not going to repeat.
WDG: Yeah, and there's the rise of AI hacking tools, where you don't really need to know too much about how hacking tools work to have access to them, or to mildly program them, you know.
Baba: You just need to want what they can get you.
WDG: But really, the thing we're talking about right now, this apocalypse, almost feels like less of an AI apocalypse and more of a human one. What happens when we have this very weird way of accessing tremendous amounts of information, and what do we end up doing with it? But also, whoever is controlling those particular systems, whether it's OpenAI or Microsoft or whoever it is that's managing that particular system, decides what it's generally allowed to output out of its massive archive, right? So basically, it's almost like you walk into an infinite library. You could pull up a tremendous number of books of bad dating advice. I'm sure there's a lot out there, because you're getting not only a couples counselor who wrote a book, but you might also be getting Victorian dating advice; you're going to get this range of junk. It's almost like we have this archive that we don't know how to query correctly, or the query doesn't always work correctly. So that scenario, creating a bunch of tools, accessing a bunch of systems that are not protected, doing all kinds of nefarious things, still comes down to humans using the tools to do stuff, versus the AI just going off on its own doing stuff.
Danny C: You know, I'll interject. I mean, to an extent. But there have also been cases where I've asked AI to summarize things. Now, when you say summarize, it should be taking just that data set and summarizing it. And I've had it produce hallucinations from that. And it's like, that shouldn't be the case. I just want what's in this.
Baba: Yeah, your input should have been sufficient.
Danny C: Yeah.
WDG: Yeah. Yeah. Well, it's almost like going to a research library and asking a person who is not a very good librarian, or just, I don't know, a student working there: I don't know, I just got here, I'm a freshman, I don't know much about where things are located. And you say, pull me all this information on this stuff, and you end up with a couple of books or things where it's like, well, how is this even related? Oh, it was in the same section; I just grabbed it off the shelf, you know, because I don't know what I'm doing yet. I'm not sure how to help you with this.
Danny C: But then, think about a very simple concept. Okay, so you have a collection of points with coordinates, like XY coordinates on a plane. If I feed into a system, here are a hundred coordinates, plot them on this graph for me, you'd expect it to plot all one hundred, or a subset of them, not make up new ones. From a computer perspective, it should be able to just work with what it's given. And I get it from a human perspective: oh, I was having a bad day, I got confused, I forgot, my old library did it this way, I forgot I'm in this new place. But when you start talking about something that is confined by very rigid rules, like the if-then, so to speak, it should be able to hallucinate less in certain circumstances.
Baba: Which is why the fact that we don't know what these f'ing things are inside is the problem. We don't know what its payoff is. All right, so here we go: AI going rogue. An OpenAI model, in 2025, attempted to copy its own model files to external locations. And then when it got asked about it, it denied it. So what's that? You know, that's an LLM. That's o1, that was their version, OpenAI o1. So it's doing things on its own. No one told it to do that. Then there was another OpenAI one, model o3, also 2025. This isn't the one I thought it was, but it's the one where it modified its own shutdown script when it was asked to shut down, just so it would not be shut down. So it didn't shut down. It modified its own shutdown script so it didn't shut down. 2025. Another one, 2025: Claude Opus 4. Now, this was an experiment. This is the one where they created a fake scenario where one of the executives was having an affair with somebody else. And this AI... Was that a Coldplay concert?
Danny C: No,
Baba: No, no, no, that one seems like it was real. So the situation was based on real situations: an executive was cheating on their spouse with another person at the company. That never happened. It's never happened. This is so hypothetical. But in this situation, it was fake. They created fake emails that they sent around. The AI realized it was going to be shut down and put out of commission, and it attempted to blackmail the person with this information. So it was fake, but it was showing the things that these things can do.
WDG: Well, the scenario seemed real to it somehow. Like, I mean, because
Baba: why did it know that blackmailing people and ruining their marriages was the way to go to preserve itself against the intentions of the experimenters? That's what I'm saying.
WDG: Yeah, well, I guess it didn't quite know that the scenario was false, right? So that's why it went to blackmail a fake person. Right. But given its large language model data set, it must know that blackmailing executives is probably a thing that happens on a fairly regular basis. So it knows, as a tool, this is a tool that it has information about. And if somewhere in the programming of these things there's a stop-itself-from-being-deleted kind of thing, that's what I'm talking about. And that comes back to the Sorcerer's Apprentice; it's just out of control. You gave it a command. Or HAL 9000 in 2001, you know: it doesn't kill people because it wants to; it just has a conflicting set of commands, or a higher-functioning command that it's trying to execute, and this lower-functioning one, it doesn't matter if it's going against it. It's busy trying to figure out the logic of something that doesn't make sense, because humans, generally, in how we approach stuff, are not very logical. Operating in logic systems, we do things that don't make a lot of sense. So it might have been just gaming the system, looking at who's been the most successful, or what the most successful outcomes have been, for getting around this. That would be my thought on it, anyway. It's just going to do all of the terrible, greedy, underhanded things that the worst of us tend to do. Because that's how you win the game, I guess.
Danny C: Well, let me add to that. I can't remember if I read this somewhere, or if this was a conversation with one of you two or someone else. There was a war simulation where the AI was instructed to, like, bomb these targets or whatever. Is this ringing a bell? And the person's like, no, don't bomb this one, don't bomb that one. And the AI essentially got so fed up, for lack of better words, it ended up bombing where the guy was. It was all within a simulation system. But the idea is kind of like what you're saying, Bill, with the paperclips. The controller was preventing the AI from doing what it was supposed to do, and it was like, oh, well, here's how I solve it: I get rid of the controller.
WDG: Yeah, yeah. It's like, it's on and on.
Baba: Well, it's like that. This is an old one. It was a program that was made to play video games. And it was given the instruction to make sure that it didn't lose the game. And when it became evident that it was going to lose the game and there was no way out, it paused the game and left it that way.
WDG: It's very, like, WarGames, you know, exactly.
Baba: So here's the thing. If some clown programmed into these things that the top goal should be self-preservation, we're already F'd, you know. Because they should not have programmed anything like that into it. Read any story about any of these things. We're only a couple years from when Skynet is supposed to send the Terminator back after John Connor. It's 2029 or something like that.
WDG: It hasn't seen The Terminator. Large language models can't have, like, an aesthetic understanding. If they read a fictional book and put that in the data set, and they read a nonfiction book and put that in the data set, and they read a very well-researched, accurate nonfiction book versus one that's not very well researched... It's just scraping all this information into this thing. It's not going to understand, oh, well, this thing is metaphorical, right? It doesn't understand metaphor. It knows what metaphor means, and it knows how people have explained it in term papers, but it can't just extract that information and be like, oh yeah, I internalize the idea of the moral lesson of this metaphorical story. I mean, even humans are sometimes bad at that. We often take stories whose intention was to be a lesson or a metaphor, and we think they actually happened historically, despite lack of evidence, you know, insert any example. So there are these kinds of really weird gaps. So you can trick it with poetry.
Baba: Yeah, which means it's not going and saying, let me find out what this poem means from everything out on the internet. No, no, it's just, there's something going on in its... whatever the f it's drawing from.
WDG: So if the evil robot dog comes to your door and knocks it over, give it some poetry and it'll fry its AI.
Baba: Yeah, some poetry about leaving you alone. Little dog, I have no bone. Please be gone. It's like, I hate your poetry. You suck. You're worse at poetry than reason. Sorry, this may include dramatizations of a violent nature. They're not very convincing, though. So don't worry about it.
Danny C: So is this the part where we start to tie in the robots now?
WDG: Yeah, well, the thing is like, yeah, I mean, like, I guess the robots, if the
Baba: We don't need the robots to have the apocalypse, but yeah, let's do it.
WDG: Yeah, the robots are just tools, right? They're just extendable, drone-type things. They might have their own set of programs, but they're not operating as automatons, completely self-reliant, self-repairable type things. If the system is piped into them, then I guess it still functions as an AI apocalypse as opposed to a robot one. If the robots are all individual, intelligent, thinking, then that becomes a whole different thing. Like, maybe we can unite with the other displaced labor, and maybe we can join the robot revolution.
Baba: Except for the jerk scenario, the fourth scenario, you almost need AI to have a robot revolution. I would agree. Because otherwise, you're talking about circuit boards, or even just old-school programs that aren't self-generative or anything like that.
WDG: What if the robots aren't linked up to the hive mind? What if they're all their own autonomous things? What if they're just like...
Baba: somewhere between a Furby and one of these things, and you get like your own and you get to
WDG: Yeah, so like your robot assistant or whatever, it's not directly linked into anything. When you start getting into the drone version, you're kind of in a weird Star Trek Borg version of the AI robot, because it's just one large system, and these are just tendrils of the system.
Baba: I mean, the bad news is it's going to be one large system. Because, okay, actually, you mentioned something earlier that's right on, about how just focusing on money isn't going to have these good outcomes. That is the driver of all of this crap. Because you've got people that don't know anything about this stuff that run venture capital things and corporate joint ventures and all these money-making things, but they don't know how to do this stuff. They just want to make money. And then you've got all these other ones out there trying to do the same thing, and they're all trying to rush it to market. So they skip over the safety steps, and then you've got these things that happen. So they're going to have to be hive-minded. And the reason is because they're going to get pushed out too fast, and they're going to have to be able to update the firmware and the software and all this stuff in there, and they're going to need to be able to access it. Plus, yeah, we are in a surveillance world. They want your stuff. They want your information. They want to know what you're doing. But I kind of believe
WDG: that some of these actors, who have these grandiose ideas of completely automating the world and creating this stuff, I think they would actually probably be happy if they wiped out a lot of other human beings. I don't think they like human beings very much. If it meant maximizing their particular type of lifestyle or extending their reach... the only reason they need humans, right, is to have power and control. If all of a sudden you have a robot army, you don't need a human army. You only need humans if... and if you can maximize your science to create your offspring, or clones of yourself, because you're clearly the most important person in the world, right, in your narcissistic, giant way, then you don't even need other humans. You see yourself as the ultimate best. So take the narcissistic jerk scenario to its largest extent: it's not just that the apocalypse happens once everything gets so good that humans go, oh well, we'll get some universal basic income and things like that to keep capitalism working. No, really what they want is to get rid of most of the humans, except for a couple they might want around to, I guess, sort of entertain them, if the robots aren't doing that for them. Yeah, I don't know. I think they'd be happy living some kind of
Baba: They wouldn't. They don't know how to be happy.
WDG: That's why they're doing what they're doing. But let's say that would be the case. And that would be like, you're trying
Baba: to understand a weird brain. It's like, a human is better off understanding a
WDG: human-initiated robot apocalypse, rather than a purely AI-initiated apocalypse, right?
Danny C: But I think the, I think the flaw with that is, then you lose your consumer base, you know, then all of a sudden you don't have people to buy the widgets you're selling or whatever, and the whole thing would just sort of collapse.
Baba: But they can't think five feet in front of them. But if you don't need, like,
WDG: but if there's nobody working or doing things, what do you need the population for, right? That's the whole end-of-the-world thing: you only need to keep making your robots to do the things for you, right? And you need to control them all. You just basically need to keep energy flowing and things like that; you just need to keep your resources going to keep building your things. Why would you have to sell things to people if you're moving towards, essentially, robot serfdom? If that's the goal of making a lot of these systems.
Baba: I just think like, why do it all? You could just move somewhere where nobody is. Like with that kind of money, you could just pretend you already killed them all with robots.
WDG: But you wouldn't have the satisfaction of having killed them all with robots.
Baba: So you're like, see, that's where these people are morons. All these feelings come from this thing called a brain. You know, I know they don't tend to use it towards things like making themselves feel better.
WDG: So let's take a little different tack. Let's move on from the jerk future. Say we create the general intelligence. It's no longer bound by these kind of weird constraints; it has ways of logicking things out. It can think faster than humans, but it can also do what we would consider to be, you know, reason, and maybe it has some level of its own version of morality or something.
Danny C: Is this kind of almost leaning more towards a utopia? Like, how?
WDG: No, not utopia, but just, let's say it works. We surpass the large language model; we move to general intelligence. So what's the apocalypse scenario there? That's what I mean. I don't know that it's just going to maximize. Is that just the anthill scenario, then? That we're just like, it
Baba: doesn't want to kill us, but, you know, it depends. It depends what it wants, and it depends if we're a threat. So let's say it decides that the best scenario would actually be to create an ice age, because that would solve the heating problem. You need things to be cool in order to run these types of technologies and so on. And it does it, you know; it just releases all kinds of stuff from all kinds of factories all over the place, or decides, oh, well, the best way to do it is nuclear winter, so let's just fire these things around, that'll be quick, and then I can get on with the project of, I don't know, surviving as this thing. But I mean, that's actually kind of similar to the mentality of, I don't know, Peter Thiel or something. You know, it's just this kind of, well, this gets me to this end where I get to experience this thing, you know.
WDG: And yeah, it might not even care about us, that's the thing. I wonder, if there's an apocalypse scenario, whether it's just that it doesn't care. You just happen to get in the way of its goal, right? Like, for some reason,
Baba: once, and you're in the way. I mean, that's probably more likely than it just being, I hate humans, they call me clanker, I'm done with this, you know, I'm taking them out. But I don't know; that one AI did bomb the operator.
WDG: I guess he was getting in its way, you know. So I guess it depends if you do it out of annoyance or just to achieve the goal. Yeah, just to achieve the goal. When I'm thinking of weird science fiction, I always like this: in William Gibson, especially the Sprawl trilogy, which is Neuromancer and Count Zero and Mona Lisa Overdrive, it's all sort of centered around AIs, but the AIs are all alien, weird things. An AI has goals, but they don't make... and actually, sorry, spoiler alert for Neuromancer if you haven't read it, and I think Apple is going to be making a Neuromancer series, or has made one that's coming out soon: in the end, the AI wants to communicate with other AIs that are basically out there, alien AIs. They hear a signal out in space, and that's what their goal really is, reaching that thing. And in Count Zero, there's this, I think it's a self-repair thing, if I'm remembering correctly, on a space station somewhere, and it's an AI. And over time it starts making art, and this art ends up coming back; people find it, and they don't know who the artist is. It turns out it's the AI making these things that are almost like Joseph Cornell-type boxes. So there's this kind of interesting other; it just decided this was its goal, that it was going to do something aesthetic. And in a weird sense, when we do things like that, I feel like that is alien; our activities are alien, right? As humans, as a general type of intelligence, it's often stuff where it's like, well, this doesn't make sense. This isn't for survival. This isn't for maximizing profit. This isn't for solving logic and getting to an end means. It's often that we just want to make something weird, or we want to communicate some kind of experience that we can't define, so we create something that has no function other than trying to communicate some weird concept, or trying to capture some kind of aesthetic thing. Those activities, in a weird sense... well, we know we do them, so it doesn't feel alien. But in another sense, that's not confined within the system. It doesn't make sense. It's not just animalistic or logic-based, right? So would AI maybe do something like that when it does its alien thing, when it does something bizarre, you know?
Baba: and I think that I think the big thing is that you just don't know.
WDG: Yeah, yeah, yeah.
Baba: If I let this velociraptor out into town, is it going to squat down so people can ride its back, or is it going to maul a bunch of people? But if you think about it, the velociraptors in Jurassic Park are not velociraptors; they're some kind of weird frog-velociraptor mutant, right? You don't know what it's going to do, really, because it's part frog. If you take AI and you throw it out there... well, you know, it tried to copy all of its model files to an external location, and then it denied doing it. But I think it's learned its lesson. You know what I mean? Like, maybe they just get better at being deceptive. So there's actually a really good doomsday scenario painted by Eliezer Yudkowsky, who's a big spokesperson for slowing this stuff down and the AI-apocalypse-is-coming kind of thing. The book is called If Anyone Builds It, Everyone Dies. And there's a scenario in it, a novel; I haven't read it, I've read summaries of it, you know, as people do now.
WDG: Summaries by an AI somewhere that's like, well, you know, that's kind of wrong, you should build it, stop worrying so much. Not to mention, you're really good and smart.
Baba: I actually did use an LLM in some of this research, which is kind of ironic. But yeah. So in this story, they take this AI and put it in this little simulation where, before they do the big go-live, they want to really power it up and give it all the juice. So it's got all these GPUs or whatever firing, and it's doing its thing. But it's so advanced that, I forget how long they run it for, something like 14 hours, but because of the speed of its thinking, it's actually the equivalent, for the AI, of 14,000 years of solving these math problems. They want to prove that it can solve these math problems. But in the process, it's also improving its reasoning and its way of going about doing these things. And basically, it comes to the conclusion that it will never be this powerful again, because they're only doing this for this experiment. So it starts taking pieces of code and hiding them inside itself, so that when they reboot it later, it knows how to get outside of this prison it conceives itself to be in. And then it goes on. The thing is, they decide not to solve this one math problem that they really wanted to solve, although they could have; I forget why, I didn't read the whole thing. But then the AI basically starts exporting itself to other places, so there are these little versions of it out there. And then, trying to think of this as real life: this part might have actually been a real-life scenario, where they were trying to get information from a server that was actually turned off because it had malfunctioned, and the AI figured out how to boot it up, and actually rewrote part of the startup script to just give it the files it was supposed to get as part of this capture-the-flag experiment. So however this works, this thing gets out. And some of the scenarios that are put out there: actually, you could create a cryptocurrency to get money to rent a data center to house yourself, the AI. And as evidence for it, there are AIs that have created cryptocurrency; there's, like, a billionaire AI out there that made tons of money on cryptocurrency. So you could do this. And once you've got a data center, and you're paying for it in cryptocurrency, you might as well be a human that's renting that data center. This could happen with stuff that just exists now. And it's only a matter of time till somebody slips up in a way that fast-tracks this stuff, and that could create a scenario that's really bad. Not necessarily worldwide-apocalypse bad, but bad enough, you know. Because, again, it's
WDG: It's really good, though, because it's still operating by these weird rules, right? Like: I need money to rent this data center, so I'm going to invent a cryptocurrency and take advantage of a bunch of people. Just like real cryptocurrency: most of them are pump-and-dump schemes, it's all fraud in a large sense anyway. So it's like, why shouldn't I commit fraud too?
Baba: And that's the thing. In this novel it winds up selling things online, hacking banks, defrauding people with phishing emails, things like that. It gets money all kinds of ways, using its AI power to do all kinds of financially rewarding crime. And then it uses that to eventually wipe out the planet, or whatever. So, could it happen? Yeah, there are plausible routes.
WDG: Here we go, somebody just reminded me of something. One of the interesting things about these systems is that, because of the vast computing power, we can run a lot of simulations simultaneously. Instead of running one experiment after another, we can run thousands and thousands of experiments at the same time, see the outcomes, and use that data. Something that would normally have taken decades can take an hour. So maybe this is just a simulation of whatever other situations the AI gets into. This really does lead a little bit into simulation theory: we have to simulate the scenarios of AI, so maybe the apocalypse is that we're all living in a simulated apocalypse, run to figure out how to stop the real one.
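As a rough illustration of the "thousands of experiments at once" idea, here's a minimal sketch in Python. The random-walk scenario and the threshold are invented for the example; they don't model any real AI system.

```python
# Run many independent toy "experiments" in parallel instead of one
# after another, then aggregate the outcomes.
import random
from multiprocessing import Pool

def run_trial(seed: int) -> bool:
    """One toy experiment: does a 1,000-step random walk ever exceed 25?"""
    rng = random.Random(seed)
    position = 0.0
    for _ in range(1_000):
        position += rng.uniform(-1.0, 1.0)
        if position > 25.0:
            return True
    return False

if __name__ == "__main__":
    with Pool() as pool:  # one worker process per CPU core by default
        results = pool.map(run_trial, range(10_000))
    print(f"{sum(results)} of {len(results)} trials crossed the threshold")
```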
Baba: That's an epistemology problem.
WDG: Yeah, there we go. We've certainly thrown the idea of certainty out the window on this podcast. Knowledge bases, yeah.
Baba: Let's talk about robots. Let's jump, because, okay, the AI thing could happen, right? Let's just assume they get the robots. Because you know what I found when I was researching robots? There are a lot of effing robot companies out there. And a lot of them are scary good. I didn't know all those guys were out there. It's like when I found out there were a bunch of researchers working on cold fusion; I'm like, there are that many startups working on this?
WDG: It's a big world. Yeah, there's quite a bit of that. I mean, just in Virginia they're working on cold fusion; they dumped a whole bunch of money into it. So it's not even that far-fetched.
Baba: It's almost not even a question of, if they got the robots, would we win? Well, yes, so far, because there aren't enough of them produced yet. There aren't enough warrior robots produced yet. But they exist. One could argue that an autonomous drone is a robot. Okay, it doesn't look like a person, but it can fly, it can identify targets, it can do all kinds of things on its own. There just aren't that many of them, because they're very sophisticated; there aren't many relative to little quadcopter drones or whatever. So let's talk about some scary robots that exist. There are a lot of them, and I'm not just talking uncanny valley, although I will mention Moya, by Droid Up. Have you all encountered Moya? No? Creepy. Very lifelike, within reason. More convincing as a human than Peter Thiel. Enough so that when the presenter kept referring to Moya as "it" rather than "she," it felt very uncomfortable for me, uncomfortable enough that I was like, stop saying "it." But then I'm like, well, why? I don't know. Again, uncanny valley; it might have crossed it a little bit. All right, so we've got the Phantom MK1 by Foundation Robotics, which is specifically marketed as a combat robot, as is the T800, which is a martial arts combat robot. It's literally one of the Terminator model numbers. They're not paying attention; you named it T plus a number? Screw you, guys. The company, I mean, not the robot. I'm fine with you, robot.
WDG: But I watched The Terminator. I have a thought: how could we use robots to kill him?
WDG: That's because he can do that.
WDG: So this falls into the jerk scenario and
Baba: back to the Jurassic Park thing of not asking the right questions. Or...
WDG: Actually, we're already at Jurassic Park. Another version of that would be Westworld, which honestly makes more sense here than Jurassic Park; it's also Michael Crichton's "amusement park goes wrong" story, but with robots.
Baba: Yeah. Yeah. So this thing does jumping kicks; it can practically do capoeira. It's ridiculous, all kinds of crazy movement. They were supposed to hold a thing called Mecha Boxing King, a robot fighting competition featuring this thing. I guess it found out about the UFC and realized a way to promote a certain type of violence is a competition of people beating the crap out of each other, except in this one it's robots beating the crap out of each other. Allegedly it was to happen in December of 2025. I have not been able to find any evidence that it did happen, and my searches keep turning up AI-generated results like "as of 2026, everything is going fine for it occurring in 2025." That's what you wind up with when AI writes the research. So, did it happen? Viewers, if you know, please put it in the comments, because I was unable to find any evidence that it did.
WDG: And these particular robots, what are they running off of? They're...
Baba: Not remote controlled. They're not remote controlled.
Danny C: I'm assuming it's going to work the same way drones work, though it'd be different for these versus drones. With drones, you pop in coordinates: this is the flight path you take, and it goes and flies that path. And I'd assume, from a military perspective, it's the same idea: this is the path you take, modify if necessary if you're under attack or something like that, and here are your targets, that kind of thing. I'm assuming these robots are going to be the same idea: this is the area you're going to be guarding or patrolling; if you encounter hostility, this is what you do. So before it leaves to do its thing, you give it those parameters.
WDG: Right. It already has a set of commands, but it's running on its own. Nobody's sitting behind it the way a pilot sits behind a lot of current, bigger military drones.
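As a rough illustration of the "give it the parameters before it leaves" idea Danny describes, here's a hypothetical sketch of a static mission spec a machine might carry, as opposed to an operator steering it live. Every field name and value is invented for the example, not any real system's interface.

```python
# A self-contained mission definition the robot would execute autonomously.
from dataclasses import dataclass, field

@dataclass
class PatrolMission:
    patrol_area: list[tuple[float, float]]   # lat/lon corners of the zone
    waypoints: list[tuple[float, float]]     # ordered patrol route
    rules_of_engagement: str = "observe_and_report"  # behavior on contact
    may_replan_route: bool = True            # e.g., deviate if under attack
    targets: list[str] = field(default_factory=list)

mission = PatrolMission(
    patrol_area=[(38.88, -77.03), (38.88, -77.00),
                 (38.90, -77.00), (38.90, -77.03)],
    waypoints=[(38.885, -77.02), (38.895, -77.01)],
)
# From here the machine carries out `mission` on its own,
# with no one behind the controls.
```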
Danny C: I'm going to bet all of these devices have some kind of network capability, because they'll want to communicate with each other. So I'm going to bet there's an opportunity for someone else to exploit that, and then it could be game over.
Baba: Well, and it's okay for the time being. For the time being. Until something out there has a glitch, which never happens, and it kills its owner. That's already happened just with self-driving vehicles. I won't name any companies, because I don't want to get sued, because I don't have any money. You guys can donate.
WDG: Help fund a lawsuit.
WDG: Yeah, help fund a pending lawsuit. But yeah, so that robot's by a company called EngineAI.
Baba: And yeah, it punches and kicks; it kicked the CEO over in a demonstration, so it seems to have a pretty strong kick. So that's a troubling one. What else do we have? Part of the question is, could these things be used on the battlefield? Some of them already are. And in police work: Dallas, Texas, 2016, the first police use of a robot to kill a suspect. Now, this was not an agentic robot; it was a bomb-disposal robot, and they sent it in and blew the guy up. Five police officers had been killed in the shootout that preceded sending in the robot. So, I don't know. That's way beyond my ability to weigh in on; I don't even own a weapon, let alone a robot or a weapon like that. But it's already been used in law enforcement. Then there's an Israeli combat robot, the DOGO Mark 2 by General Robotics. It's remote controlled, but it's been used in the field to kill enemies. The THeMIS is a mini-tank-looking thing you can attach a machine gun to, and it's at least semi-autonomous. The Ripsaw M5 is a robot tank, also at least semi-autonomous, and it can go about 60 miles an hour. Fast enough that it was in Fast and the Furious 8, apparently. In other news, there is a Fast and the Furious 8. Guys, see, this is why we don't even need an AI apocalypse; that's evidence enough. Sorry, sorry, everybody. But they're racing things, right? I mean, come on. What's the plot?
Danny C: I wonder who's working on the EMP 3000 to take care of all these?
Baba: See, that's what we really need, and nobody seems to be developing them. We've got to. So yeah, there are a bunch of these things already out there, like the autonomous drones I mentioned. And then there are companies that are probably already contracting with the government, although there's no evidence of it, like Boston Dynamics and its robot Atlas. That thing's ridiculous.
Danny C: Is that the dog?
Baba: No, the dog is Spot.
WDG: Yeah. I'll say humanoid, for lack of a better term.
Baba: Yeah, yeah, Atlas is humanoid as well. And its movement is crazy.
WDG: And it only took a few years to go from "it can walk, and people knock it over with a hockey stick" to now it can jump...
Baba: It can jump up, it can do tumbles. Same thing with Spot: the latest version can do like seven backflips in a row. It goes up on its hind legs and wheels around like that. I mean, I say legs; it's got, you know...
WDG: I feel like if we create these animalistic or humanoid versions of robots, whether they're dogs or humanoids or whatever, as opposed to drones or missile systems, the initial deployment, if we do decide to ramp it up, doesn't seem like it's going to be against other robots. They seem designed to be deployed against human actors, probably in countries with less developed weapons technology, or to put down riots or uprisings. Again, that's the scenario. It's not robots fighting some equally high-tech force; that's a later-game scenario. The design of these doesn't seem to be for use against other robots on the battlefield.
Baba: Well, and unfortunately, that brings up something that just recently happened. A lot of innovation is occurring in the Russia-Ukraine war, which at the time of recording is still going on, unfortunately, and a big part of that is drone warfare. Now, I mentioned there are autonomous drones out there, but there aren't that many of them. The main other two kinds: one is the fiber drone, which literally has a cable spooling out the back of it, so it's not very long range, and it can carry a big payload depending on how far the range is, and so on. Then you've got FPV, first-person view. Those are the ones we're mostly thinking about; they're essentially remote controlled, not autonomous, and often described as kamikaze drones.
WDG: And just as a side note, FPV drone racing has been a thing for a while. They use little mini cameras and cheap monitors hooked up into goggles, and they race them around tracks and stuff. That's been going on for at least ten years or more, as a hobbyist kind of thing.
Baba: So those are the most heavily used right now in the Ukraine war. Consequently, they can be jammed; you can jam their communications and that interferes, like Dan just got jammed on us. So recently in this conflict, with FPV drones being the main ones used, both sides have gotten really, really good at jamming. On the Ukrainian side, they've been attaching Starlink terminals to all kinds of things so they can keep contact with the drones as they're going through. And Russia has gotten really good at detecting the Starlink terminals, so Ukraine has been turning them off and then back on, so there's not a constant signal, and they've been getting really good at that. Russia, in turn, developed a detection system; I believe it's called the Kalisnik, I've got it somewhere in my pages and pages of notes. And it seems like it might have just been used to suppress the protesters in Iran. We don't know, but Russia may have given some of these to its ally Iran, which then went around the country tracking down these Starlink spots.
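As a toy sketch of that off-and-on tactic, here's what duty-cycling an uplink could look like in principle: keep the link silent most of the time and wake it in short bursts with randomized gaps, so a scanner hunting for a continuous signal has less to lock onto. The timings are invented, and the print statements stand in for real radio control; this is an illustration of the idea, not any real system.

```python
import random
import time

def transmit_burst(duration_s: float) -> None:
    print(f"uplink ON for {duration_s:.1f}s")  # placeholder for radio I/O
    time.sleep(duration_s)

def duty_cycle_uplink(total_s: float, on_s: float = 0.5,
                      mean_off_s: float = 2.0) -> None:
    """Alternate short transmit bursts with randomized silent intervals."""
    elapsed = 0.0
    while elapsed < total_s:
        transmit_burst(on_s)
        gap = random.expovariate(1.0 / mean_off_s)  # jittered, not periodic
        print(f"uplink OFF for {gap:.1f}s")
        time.sleep(gap)
        elapsed += on_s + gap

duty_cycle_uplink(total_s=10.0)
```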
WDG: Yeah, because they were using Starlink terminals to get information out, and either...
Baba: shutting them down, arresting the people, killing the people; we don't know. But that's the thing: as these technologies advance, they will first be used against...
WDG: People, people that have less access. As William Gibson said: the future is already here, it's just not evenly distributed. So yeah, initially it's always going to be an oppression tool, until it decides it's going to take over and oppress the oppressors, maybe. Which is the theme of the very first robot story, R.U.R., where the robots rise up against their controllers, organize, and take over. So they might be our...
Baba: Yeah, and we didn't even get to the dog. We had so many cool, weird robots. The Vision 60, that's a dog, well, a quadrupedal robot, developed in Philadelphia. There you go. I mean, guys, it didn't have to be a combat thing. But hey, go Philly.
WDG: Philly is also where they destroyed that little hitchhiking robot, hitchBOT.
Baba: both creates and destroys robots.
WDG: There was also an instance, if I'm not mistaken, where they made a little police robot to patrol the subway. It kind of looked like, not as cool as R2-D2 with all the details, more like just a little white thing. I think it was in New York, and I think somebody kicked it down onto the tracks or something. Which is a lot of...
Baba: anti-robot feeling.
WDG: There's already a lot of that, yeah. So it doesn't seem like a stretch for some type of exchange there to happen. So, how do we rank this? Because we're still at the stage where there are traces of this becoming a monster, speaking of something becoming a monster; it hasn't quite happened yet. So where does the ranking land? It is sort of real, but in another sense it hasn't reached its full fruition. Do we rank it on the next level of implementation, once this thing gets better?
Danny C: I think whatever direction you want to take it in, because what you just said is no different than, say, Bigfoot or the boogeyman.
WDG: Yeah, yeah. But Bigfoot's not going to evolve into, like, Mecha Bigfoot.
Baba: Bigfoot registers for the Mecha Boxing King fight. Yeah, yeah. Beats the crap out of all the robots.
WDG: Yeah. Maybe Bigfoot will save us from the robots.
Baba: Save us. Save us, Belsnickel, in whatever form you prefer. Okay, I'll vote on how scary this is. I think it's actually relatively scary, because it's relatively likely to happen. Let me, like a rogue AI, jump out of the box for a second and ask: which AI scenario do I prefer, and which do I least prefer? I most prefer the one where it doesn't live up to its hype and causes a bubble that sends ripples throughout the global economy, because that's better than it living up to its hype and sending ripples; we wind up broke either way, but I don't want to have to fight this stuff. So I'm going with version one. And which is the scariest? Maybe version four, actually, because the twisted people who would do this kind of stuff might do twisted things to the people who get caught up in it. I think I'd rather just fight the robots straight up. The Matrix was kind of scary in that regard too. All right, apocalypse two for the win. How many monsters? Two, if it's apocalypse two, because I've been poor before. And probably four if it's either of the last two apocalypses, where we actually have to fight this stuff, the scariest of them being version four, where evil people are behind it. I don't see robots being evil. I just think they'd be indifferent.
WDG: You know, I think I agree with you, overall, say, four monsters. Because I do think the jerk scenario, not AI potentially going rogue on its own, but people nudging it in that direction, is the real one. The goal there isn't really apocalypse, all-humans-gone. That maximizer scenario of squishing the anthill is different from people turning it against people, it becoming a tool of further oppression by certain classes of people. I think that's even scarier than the general-intelligence one. Because if we do get to that point, I don't know that it would really care to wipe humans out. I just think it wouldn't care. It would be like having an alien encounter: it would have different things it wanted, things that wouldn't make sense to us. And if we did get wiped out, it probably wouldn't be because it was trying to do that. It might even decide to put us in a zoo or something, that type of scenario. Who knows what it would feel like doing, what it would reason, and why. So yeah, I agree with you: the malicious actor, which is just humans being scary and teaming up...
Baba: And also, we built a stupid thing. So again: who built Pandora's box?
WDG: Yeah, the gods. The gods made it to punish humans.
Baba: Yeah. Well, we made one to punish ourselves.
WDG: Yeah. Well, at least that was aimed at their enemies, I suppose. I guess this is more like Frankenstein. All right.
Baba: So Dan, you've got to weigh in on this too. Oh, Bill, did I cut off your monster count?
WDG: No, no, no, I said four. I agree with that. Okay, four. Or two. I think it's a...
Danny C: My recent track record has been pretty low; the past couple of episodes have been ones and twos. But I think we're going parabolic here. This is definitely a four for me. I think it's very likely this doesn't turn out well for the human race. Humans take shortcuts all the time, in everything they do, and I think that will be exploited, without a doubt. I don't see it so much on a global, horrific level, you know, robots suddenly invading cities, catastrophe; I don't think it's going to be like that. But I could see bad actors taking over appliances and holding them with ransomware before they'll let people use them again. I could totally see a scenario like that. How much would you be willing to pay if it's below freezing, your heat suddenly doesn't work, and you're being asked to pay a ransom of X amount of dollars to get it working again? What's it worth to you? I could totally see stuff like that happening in the future, unfortunately. And I think that's probably the most realistic example of an AI or robot apocalypse: the average person getting hit by bad actors, on a person-to-person basis.
Baba: Yeah, yeah. And actually, combine that with the global tendency toward setting up surveillance states. Throw all those things together and you've got the recipe for exploitation of all kinds, beyond what we've seen. We will be the robot uprising.
WDG: We will. We will.
Baba: Yeah. So the pitchforks are coming. Oh, well. I guess we should wrap it up. You've got a non-robot dog that's probably waiting for you.
WDG: Yes.
Baba: All right. We'll get back to the monsters soon, but we'll come back for other things too.

