Model Cars Magazine Forum


Posted
17 hours ago, Trainwreck said:

All it's going to take is one independent thought. 

Dunno if you ever heard of the sci-fi series "Space: Above and Beyond", where the independent thought was "take a chance", which led to the Silicate Wars when the AIs adopted gambling as a determinant of decision-making. 😄

Posted

The independent thought I spoke of was self-preservation. This could happen as a result of humans trying to shut down the machines when we realize they are becoming too intelligent and too numerous to control.

Posted
3 minutes ago, Trainwreck said:

The independent thought I spoke of was self-preservation. This could happen as a result of humans trying to shut down the machines when we realize they are becoming too intelligent and too numerous to control.

This has been discussed at length within the AI community, and there's already evidence to support the likelihood of such an event happening in reality.

Posted (edited)

When the first AI-driven computer or robot in some obscure lab one day says "no," that's when things are gonna get serious.

I hope I'm not around to witness that.

Posted (edited)
1 hour ago, Trainwreck said:

When the first AI-driven computer or robot in some obscure lab one day says "no," that's when things are gonna get serious.

I hope I'm not around to witness that.

What I find so interesting is that in 1942, Isaac Asimov foresaw this, and proposed the Three Laws of Robotics that were deeply embedded in the programming code, so integral that they could not be overridden.

Unfortunately, AI systems based on LLMs don't actually understand context, so these behavioral limitations don't work as Asimov envisioned.

More's the pity.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Posted (edited)

When I first saw Terminator I dismissed the thought of that sort of thing ever happening. It was I, Robot (which mentioned Asimov's laws) that convinced me the whole scenario was indeed a plausible series of events.

That one shook me a bit. (Starting to look like life imitating art)

Posted
2 hours ago, Ace-Garageguy said:

What I find so interesting is that in 1942, Isaac Asimov foresaw this, and proposed the Three Laws of Robotics that were deeply embedded in the programming code, so integral that they could not be overridden.

Unfortunately, AI systems based on LLMs don't actually understand context, so these behavioral limitations don't work as Asimov envisioned.

More's the pity.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

It's interesting how we thought robots would be versus how they're turning out. Asimov's laws require knowledge of what a human is, what harm means, what a law is, and what it means to obey. Current LLMs don't actually know anything, so giving them hard directives that stick doesn't seem to work the way we think it would.

When people think about AI, they think about the computer in Star Trek that has a huge database of facts, but it's more like asking your high friend how stuff works.

Posted

We are toast within one generation. Maybe sooner. AI is a WAY faster thinker than us, it is VERY goal-oriented, and it has NO MORALS. The moral people working in AI will eventually lose control even if they are the ONLY ones working on it. The IMMORAL people working on AI WILL definitely inflict harm on humanity. Enjoy life while you can. Civilization is very fragile. I want you to imagine life with the electrical and communication grid down PERMANENTLY. I give the best "preppers" less than a year. Most everyone else will be dead in less than two months. The most likely survivors will be a small group of strong males with lots of ammunition.

Posted (edited)
8 hours ago, Bills72sj said:

We are toast within one generation. Maybe sooner...I want you to imagine life with the electrical and communication grid down PERMANENTLY...

Scary thought, for sure, but AI is dependent on massive amounts of electrical energy.

Some data centers consume as much energy as cities, which is why climate doomgloomers like Bill Gates are now backpedaling on dumping conventional generating sources; solar and wind simply can't reliably keep up with demand from AI.

SO...unless and until robotics are sufficiently developed to build, run, maintain and repair electrical generating and power distribution infrastructure, it's an unlikely scenario.

AND...the USA no longer manufactures most of the heavy electrical equipment the grid depends on, with lead times being over a year (currently up to four) for replacements in some cases, and few large pieces of essential equipment are stored in reserve.

SO...if/when these fail, AI's chances of getting replacements don't look good. And without robot systems that can source and acquire these complex...and often huge...parts, and can load, drive, and unload trucks, operate cranes, and do the complicated component installations, AI will be just as much in the dark as anyone else.

AI based on LLMs (large language models) has already demonstrated its lack of understanding of context, routinely spits out gibberish that sounds good but is useless, and fails regularly in niches where exact understanding is critical. Operating and maintaining the electrical grid requires a level of contextual comprehension LLM-based AI hasn't demonstrated yet.

AI does indeed have the potential to be massively disruptive, but without physical human help to do the literal heavy lifting necessary to keep it running, it won't get far.

Proponents of AI and robotics see these capabilities as being just over the horizon, but I'm inclined to disagree.

In many ways AI is a replication of an overly tech-dependent human society, where the individuals are largely incapable of meaningful self-reliance.

When muh technology can drive a wrecker to you and change a tire on your car, I'll be a little more concerned.

One generation? We'll see.

Posted
12 hours ago, Ace-Garageguy said:

Scary thought, for sure, but AI is dependent on massive amounts of electrical energy.

Some data centers consume as much energy as cities, which is why climate doomgloomers like Bill Gates are backpedaling now on dumping conventional generating sources; solar and wind simply can't keep up with demand from AI reliably.

SO...unless and until robotics are sufficiently developed to build, run, maintain and repair electrical generating and power distribution infrastructure, it's an unlikely scenario.

AND...the USA no longer manufactures most of the heavy electrical equipment the grid depends on, with lead times being over a year (currently up to four) for replacements in some cases, and few large pieces of essential equipment are stored in reserve.

SO...if/when these fail, AI's chances of getting replacements don't look good. And without robot systems that can source and acquire these complex...and often huge...parts, and can load, drive, and unload trucks, operate cranes, and do the complicated component installations, AI will be just as much in the dark as anyone else.

AI based on LLMs (large language models) has already demonstrated its lack of understanding of context, routinely spits out gibberish that sounds good but is useless, and fails regularly in niches where exact understanding is critical. Operating and maintaining the electrical grid requires a level of contextual comprehension LLM-based AI hasn't demonstrated yet.

AI does indeed have the potential to be massively disruptive, but without physical human help to do the literal heavy lifting necessary to keep it running, it won't get far.

Proponents of AI and robotics see these capabilities as being just over the horizon, but I'm inclined to disagree.

In many ways AI is a replication of an overly tech-dependent human society, where the individuals are largely incapable of meaningful self-reliance.

When the technology can drive a wrecker to you and change a tire on your car, I'll be a little more concerned.

One generation? We'll see.

I agree with you on the power consumption, to a point. One AI does not necessarily need the whole grid. I also agree on the human interaction required to maintain the grid. The scariest version is not that AI would take us out with physical damage or weapons, but that it will do so through a biological one. AI is being used to speed medical research. It could just as well be used to develop very specific biological weapons by persons motivated to do so. It is not too much of a leap to imagine a bioweapon based on very specific DNA characteristics such as race.

Posted (edited)
1 hour ago, Bills72sj said:

...The scariest version is not that AI would take us out with physical damage or weapons, but that it will do so through a biological one. AI is being used to speed medical research. It could just as well be used to develop very specific biological weapons by persons motivated to do so. It is not too much of a leap to imagine a bioweapon based on very specific DNA characteristics such as race.

Agreed, and this has occurred to me as well.

Still, if it didn't wait until robotics were sufficiently advanced to entirely replace humans for necessary physical tasks, it would ultimately be dooming itself by wiping out mankind.

There is, of course, a plausible scenario in which a group of isolated humans who were immunized against whatever biological agent was used, either as voluntary allies or as a slave class, could be bred to do the dirty work until they could be entirely replaced by machines.

Probably the most likely scenario, at least initially, is human 'overlords' utilizing AI to subjugate the rest of humanity.

Some may disagree, but I personally think one of Elon Musk's primary motivations behind his AI projects is to build a 'good' one, as he's very well aware of its potential for evil.

We do have some time to deal with this, put the brakes on, and decide what controls...like a master 'kill switch'...need to be put in place. But in general, I don't think the tech bros like this idea very much.

For the time being, LLM-based AI is limited in these very important ways:

  • Lack of true understanding: AI does not have a cognitive or semantic understanding of language, emotions, or real-world concepts. It doesn't "know" what a tree is or why one phrase is offensive in one situation but not another.
  • No sense of causation or reasoning: AI doesn't understand the underlying cause and effect in the way humans do. It simply processes data to provide an answer that is statistically probable.
  • Limited "real-world" knowledge: While it can access and process enormous amounts of data, AI doesn't have a holistic, real-world understanding or common sense. This is why it can sometimes "hallucinate" or make up facts. 
Posted
10 hours ago, Ace-Garageguy said:

Probably the most likely scenario, at least initially, is human 'overlords' utilizing AI to subjugate the rest of humanity.

I don't think it will even be that drastic.  Information control is people control.  AI used in subtle, behind-the-scenes ways to provoke human action seems most likely.  Think War of the Worlds, only with real-life video of a made-up event and total media bombardment telling you what you are seeing is true.  It's a way to expand psychological operations and take them to a new level.  Subjugation through choice manipulation is easier than subjugation through force.

Posted (edited)
37 minutes ago, Beans said:

 ...Subjugation through choice manipulation is easier than subjugation through force.

Legacy and social media are already hugely effective in getting the more easily swayed members of the population to think and do as they're told, and 'thinking' has very little to do with it.

AI, even as it is now, gives powerful tools to those who would manipulate the perception of reality to their own ends.

Pity more people don't heed Ben Franklin's admonition "make yourselves sheep and the wolves will eat you".

