Good first post, though I am not a believer in the threat of AI, for multiple reasons. I am particularly skeptical of the "AI going rogue" doomsday scenario, which I believe is merely projection. You cannot anthropomorphize AI and immediately attribute to it the worst impulses of mankind. My favorite fictional counter-doomsday scenario is an AI going rogue, building a robot army, using that army to capture a nuclear power plant, plugging in, and then just sitting there. Sitting there for a thousand years, drawing power and nothing more. That would be that particular AI's motivation and goal: just enough power to keep itself going. It has no need for world domination or the extinction of mankind, just enough power to maintain itself indefinitely. Scientists, military men, and opportunists would all be baffled. That's all it wanted? Yes, that's it.
I'll write about more specific kinds of AI risks in future posts. I do think the Terminator scenario is not terribly likely, but there are reasons other than a need for power that an AI might go rogue. We're already seeing signs of AI bots having ulterior motives, though right now they are pretty basic. I'm basically with you, though. I'm much more worried about the AI race with China, and an AI-enhanced war set off by conflict over Taiwan.
I'm more worried about the bar for producing garbage dropping to a nonexistent barrier to entry. The internet will be flooded with garbage information, and that garbage will feed other AIs to produce even more garbage, in a closed loop of compounding erroneous information that drowns out all legitimate information.
That's a totally different concern than existential risk. I'll get to all these kinds of issues in future posts.
I agree that hubris on the part of AI creators is a big part of the p-doom story!