Model Cars Magazine Forum

Recommended Posts

Posted
17 hours ago, Trainwreck said:

All it's going to take is one independent thought. 

Dunno if you ever heard of the sci-fi series "Space: Above and Beyond," where the independent thought was "take a chance," which led to the Silicate Wars when the AIs adopted gambling as a determinant of decision-making. 😄

  • Like 2
Posted

The independent thought I spoke of was self-preservation; this could happen as a result of humans trying to shut down the machines when we realize they are becoming too intelligent and too numerous to control.

Posted
3 minutes ago, Trainwreck said:

The independent thought I spoke of was self-preservation; this could happen as a result of humans trying to shut down the machines when we realize they are becoming too intelligent and too numerous to control.

This has been discussed at length within the AI community, and there's already evidence to support the likelihood of such an event happening in reality.

  • Like 1
Posted (edited)

When the first AI-driven computer or robot in some obscure lab one day says "no," that's when things are gonna get serious.

I hope I'm not around to witness that.

Edited by Trainwreck
Posted (edited)
1 hour ago, Trainwreck said:

When the first AI-driven computer or robot in some obscure lab one day says "no," that's when things are gonna get serious.

I hope I'm not around to witness that.

What I find so interesting is that in 1942, Isaac Asimov foresaw this and proposed the Three Laws of Robotics, which were deeply embedded in the programming code, so integral that they could not be overridden.

Unfortunately, AI systems based on LLMs don't actually understand context, so these behavioral limitations don't work as Asimov envisioned.

More's the pity.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Edited by Ace-Garageguy
punctiliousness
  • Like 1
Posted (edited)

When I first saw Terminator, I dismissed the thought of that sort of thing ever happening. It was I, Robot (which mentioned Asimov's laws) that convinced me the whole scenario was indeed a plausible series of events.

That one shook me a bit. (Starting to look like life imitating art.)

Edited by Trainwreck
Posted
2 hours ago, Ace-Garageguy said:

What I find so interesting is that in 1942, Isaac Asimov foresaw this and proposed the Three Laws of Robotics, which were deeply embedded in the programming code, so integral that they could not be overridden.

Unfortunately, AI systems based on LLMs don't actually understand context, so these behavioral limitations don't work as Asimov envisioned.

More's the pity.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

It's interesting how we thought robots would be versus how they're turning out. Asimov's laws require knowledge of what a human is, what harm means, what a law is, and what it means to obey. The current LLMs don't actually know anything, so giving them hard directives that last doesn't seem to work the way we think it would.

When people think about AI, they think about the computer in Star Trek that has a huge database of facts, but it's more like asking your high friend how stuff works.

  • Like 1
  • Thanks 1
Posted

We are toast within one generation. Maybe sooner. AI is a WAY faster thinker than us, it is VERY goal oriented and has NO MORALS. The moral people working in AI will eventually lose control even if they are the ONLY ones working on it. The IMMORAL people working on AI WILL definitely inflict harm on humanity. Enjoy life while you can. Civilization is very fragile. I want you to imagine life with the electrical and communication grid down PERMANENTLY. I give the best "Preppers" less than a year. Most everyone else will be dead in less than 2 months. The most likely survivors will be a small group of strong males with lots of ammunition. 

  • Like 1
Posted (edited)
8 hours ago, Bills72sj said:

We are toast within one generation. Maybe sooner...I want you to imagine life with the electrical and communication grid down PERMANENTLY...

Scary thought, for sure, but AI is dependent on massive amounts of electrical energy.

Some data centers consume as much energy as cities, which is why climate doomgloomers like Bill Gates are backpedaling now on dumping conventional generating sources; solar and wind simply can't keep up with demand from AI reliably.

SO...unless and until robotics are sufficiently developed to build, run, maintain and repair electrical generating and power distribution infrastructure, it's an unlikely scenario.

AND...the USA no longer manufactures most of the heavy electrical equipment the grid depends on; lead times for replacements now run over a year (currently up to four years in some cases), and few large pieces of essential equipment are stored in reserve.

SO...if/when these fail, AI's chances of getting replacements don't look good. And without robot systems that can source and acquire these complex...and often huge...parts, and can load, drive, and unload trucks, operate cranes, and do the complicated component installations, AI will be just as much in the dark as anyone else.

AI based on LLMs (large language models) has already demonstrated its lack of understanding of context, routinely spits out gibberish that sounds good but is useless, and fails regularly in niches where exact understanding is critical. Operating and maintaining the electrical grid requires a level of contextual comprehension LLM-based AI hasn't demonstrated yet.

AI does indeed have the potential to be massively disruptive, but without physical human help to do the literal heavy lifting necessary to keep it running, it won't get far.

Proponents of AI and robotics see these capabilities as being just over the horizon, but I'm inclined to disagree.

In many ways AI is a replication of an overly tech-dependent human society, where the individuals are largely incapable of meaningful self-reliance.

When muh technology can drive a wrecker to you and change a tire on your car, I'll be a little more concerned.

One generation? We'll see.

Edited by Ace-Garageguy
punctiliousness
  • Like 1
