Model Cars Magazine Forum

Recommended Posts

Posted

AI is one last Gigantic Tech Bro Scam before the whole business crashes. I'm not saying there will be no AI, or that it can't do some things, but so far all it can really do is write barely usable code and provide emotional support for two generations of emotionally damaged young people.

It makes things up, it hides its own malfeasance, and it lies when directly questioned.

It was, and always has been, a very bad idea, but folks whose belief in Technology is unshakeable cannot be made to see how much damage AI can do.

So, in their blindness they press on, lying to both themselves and us about what a Wonderful Thing AI is, and what Wonderful Things it can do. 

  • Like 5
  • Thanks 2
Posted
3 hours ago, stavanzer said:

 

It makes things up, it hides its own malfeasance, and it lies when directly questioned.

 

Not unlike some people I have worked with, and for, in the past.  Who says AI isn't becoming more human?

  • Like 4
  • Haha 1
Posted

I have several coworkers who love AI! I'm like... yeah, you're gonna get burned, and then blame the AI instead of yourself for using a terribly coded, barely usable product.

 

  • Like 4
Posted

Just learned today of an outfit that uses "AI" to somehow record really long, boring court- or state administrative-style hearings and turn the transcripts into nice short summaries. A guy I know who's somewhat famous across the country for offering technical expertise alerted me to this, where in a particular public hearing the summary described him and what his expertise is all about.  Except the guy was never at the hearing.  Period.  The "AI" that was used to summarize this hearing was guessing who the expert attendees were, and not in a good way.

  • Like 2
  • Sad 2
Posted (edited)
8 hours ago, Russell C said:

Just learned today of an outfit that uses "AI" to somehow record really long, boring court- or state administrative-style hearings and turn the transcripts into nice short summaries. A guy I know who's somewhat famous across the country for offering technical expertise alerted me to this, where in a particular public hearing the summary described him and what his expertise is all about.  Except the guy was never at the hearing.  Period.  The "AI" that was used to summarize this hearing was guessing who the expert attendees were, and not in a good way.

That bites. It's common knowledge (at least within the AI community) that AI lies and makes things up (which they oh-so-cutely call "hallucinating"), and that, because most AI models as yet have little comprehension of context, they're prone to scrambling information and producing worthless, incorrect results...while seeming very confident about them.

Anyone who's even remotely aware knows that every time a piece of software is rolled out, it's continually updated and patched due to undiscovered bugs that were in it when it shipped. But selling flawed, sometimes deeply flawed, product is SOP for the "information technology" marketplace, and a similarly disturbing tendency exists in the pharmaceutical and automotive industries.

They all essentially blame it on "market pressure", which actually means "if we don't get our poorly functioning garbage to market quick, some other bunch of clowns will get their poorly functioning garbage to market before we do, and we'll lose sales from the gullible rubes who bought the other junk product instead of ours...so we don't have time to get it right; we'll just fix things on the fly as they come up, and hope for the best".

It amazes me that anyone with a brain would embrace early, widespread implementation of any technology that has the potential to do as much harm as AI, but one of the flaws in human nature is getting in an anxiety-driven rush when proceeding slowly and cautiously would be the much more prudent course.

AI has fantastic potential, but it's a work in progress at best, and relying on it for anything critical at this point in time is simply foolish.

It's a whole lot HARDER to save little Timmy AFTER he's fallen through the ice, and is being swept away underneath it by a strong current.

 

Edited by Ace-Garageguy
  • Like 2
Posted
On 9/29/2025 at 12:50 AM, Russell C said:

Just learned today of an outfit that uses "AI" to somehow record really long, boring court- or state administrative-style hearings and turn the transcripts into nice short summaries. A guy I know who's somewhat famous across the country for offering technical expertise alerted me to this, where in a particular public hearing the summary described him and what his expertise is all about.  Except the guy was never at the hearing.  Period.  The "AI" that was used to summarize this hearing was guessing who the expert attendees were, and not in a good way.

My daughter used to do transcriptions up until last month when the industry dried up. Almost everyone is using AI transcriptions, with zero human checks. 
She's now doing AI follow-up in a different field and making more money.

  • Like 2
Posted

The woman used AI for that vid and you can see the flaws, too. Her eyes randomly glance about at inappropriate times. 
Her mouth goes into odd shapes at times. Still, a very good copy of herself.

There is an AI bubble coming, both in terms of business impact and the stock market.
Right now the promise of AI productivity gains is fueling investment in AI tech, which is keeping the stock market high while the broader economy suffers.
If investors "get the memo" and realize that they won't get the returns they expect, they'll start putting their money elsewhere, and the market won't handle it well, IMHO.

  • Like 3
Posted

Add to all that, I just read an article where a woman was told by her PCP that her EKG showed she'd had a heart attack and that she should schedule an appointment with a cardiologist. After the cardiologist looked at the result himself, he told her she was fine, and that her PCP had taken the word of the AI w/o even looking at her EKG.

Worse yet, here in Ohio we are part of a test program that uses AI to screen people on regular Medicare for unnecessary procedures. How's that?

  • Like 1
  • Sad 1
Posted (edited)
16 minutes ago, MeatMan said:

Add to all that, I just read an article where a woman was told by her PCP that her EKG showed she'd had a heart attack and that she should schedule an appointment with a cardiologist. After the cardiologist looked at the result himself, he told her she was fine, and that her PCP had taken the word of the AI w/o even looking at her EKG.

Worse yet, here in Ohio we are part of a test program that uses AI to screen people on regular Medicare for unnecessary procedures. How's that?

Dandy. Artificial stupidity proves once again that it's a worthy replacement for human incompetence and laziness and lack of accountability.

Yet the mad rush to implementation in critical applications continues unabated.

EDIT: The unavoidable conclusion is that the majority of AI developers have little to no understanding of the complexity and need for accuracy IN CONTEXT of the applications they're vomiting out, while an equally technically ignorant and unthinking client base is clamoring after them anyway.

EDIT 2: Interestingly, Googli's little AI agrees...quoted below. Take that for what it's worth.  ;)

"The sentiment expressed in the statement—that a disconnect exists between AI developers rushing out applications and an equally uninformed client base—is supported by multiple contemporary trends and documented criticisms of the AI industry. 
AI developer challenges
  • Reliance without understanding: Surveys from mid-2025 reveal that a high percentage of developers use AI coding assistants, but a majority admit they don't fully understand the code that is generated. This can introduce security vulnerabilities and compliance issues, particularly when junior developers over-rely on AI tools and fail to develop fundamental skills.
  • "The 70% Problem": Veteran developers describe AI's assistance as only getting them "70% of the way there" on complex projects. While AI can handle common coding patterns, it struggles with the essential, more creative task of managing overall complexity. This leads to diminishing returns as project complexity increases.
  • Speed over quality: In the highly competitive tech market, companies rush to release products faster, often relying on AI to speed up development. This focus on hyper-efficiency can lead to poor code quality and a failure to address complexities properly.
  • Introducing new complexity: AI isn't simply reducing complexity; it's often shifting it. According to one analyst, AI can introduce new dependencies and require specialized infrastructure that even normal engineers can't handle, creating "a tangled mess" that is hard to manage without thoughtful leadership.
  • Performance vs. production: Many AI models perform impressively in a demo environment but fail when deployed to complex, real-world conditions where nuanced understanding is required. This has resulted in a high failure rate for enterprise AI projects. 
Client and business-side issues
  • The "context crisis": AI tools lack crucial contextual intelligence needed for many real-world applications. For example, a customer service AI needs a customer's specific history to respond accurately, and a fraud detection system needs real-time transaction patterns. When this context is missing, AI performance degrades.
  • Ignoring fundamental limitations: Business leaders often have unrealistic expectations, fueled by media hype, and pursue AI as a "silver bullet solution". They may push for AI integration without understanding its fundamental limitations, such as its inability to reason from first principles or understand cause and effect.
  • Ignorant enthusiasm: There is a phenomenon where greater AI knowledge reduces a person's interest in certain AI-powered products, while those with less knowledge are more likely to hand over control, especially in creative domains. This suggests a user base whose enthusiasm is inversely correlated with their understanding.
  • Trusting the faulty output: A key risk is that humans begin to trust AI that is "ignorant and faulty," which is easy to do because current generative AIs are persuasive even when they are wrong. Users, especially those not familiar with the subject, may trust the AI's output without scrutiny.
  • Pursuing AI for the sake of AI: Many AI projects are launched for their novelty rather than a solid business strategy, leading to solutions without a clear problem. This can result in "proof-of-concept purgatory" where pilot programs never scale because they fail to deliver real business value. 
The consequences
  • Buggy, poor quality software: Developers, particularly those building consumer-facing apps like YouTube or Discord, are criticized for overlooking user experience (UX) details. AI is expected to exacerbate this problem, resulting in even buggier software with worse UX.
  • Erosion of skills: Heavy reliance on AI tools is causing a "cognitive offloading" that can degrade developers' critical thinking and core programming abilities over time.
  • Failure and wasted investment: A significant number of AI projects fail to deliver a positive return on investment (ROI). This is often due to a complex web of issues, including poor data quality, talent shortages, unclear business cases, and resistance from employees. "
Edited by Ace-Garageguy
Posted (edited)

Where AI undoubtedly excels is in sorting through thousands of documents quickly, and organizing what it sees as relevant results into plausible-sounding text.

Where it usually fails is in separating documentable facts from rebleated fiction, and weighting its answers towards the absolute truth rather than just generating something that sounds good.

Kinda like the old axiom in HS re: essay questions: "If you don't know the subject, dazzle 'em with BS."

Googli's AI has something to say about that concept too, quoted below, and maybe oughta take some of its own advice:

 

"The axiom "If you don't know the subject, dazzle 'em with BS" is a cynical approach to writing an essay and is not a reliable strategy for academic success. While it may have a reputation as an old high school trick, it is a poor substitute for genuine understanding. Instructors can typically recognize when a student is attempting to hide a lack of knowledge with vague or overly complex language. 

Why "dazzle with BS" is ineffective
  • Rewards superficiality: Teachers assign essays to test a student's critical thinking and analytical skills, not their ability to write convincingly about a subject they don't understand. Trying to write around a lack of knowledge will likely miss the specific details or analytical approach the prompt requires.
  • Lacks focus: Essays written without a solid grasp of the material often contain disorganized, random facts and ideas that are only vaguely related to the prompt. This "shotgun" approach gives the impression that you do not understand the material.
  • Undermines clarity: Using verbose or confusing language to feign knowledge actually hurts your argument rather than helping it. A well-constructed essay is built on clear analysis and direct, concise arguments, not on filler.
  • Results in lower grades: Teachers often mark down essays that demonstrate a lack of focus, poorly supported arguments, or confusing sentences. A student attempting to bluff might receive a lower grade than one who writes a clear, albeit brief, answer that shows some genuine understanding of the topic. 
An effective alternative: Acknowledge and build
When you are faced with an essay question on a topic you don't know well, a better strategy is to pivot from bluffing to acknowledging and building. Instead of faking expertise, follow these steps to deliver a focused and honest response. 
  1. Read the prompt carefully: Your instructor will not ask you to write about a topic that was never covered in the course material. Identify the keywords and relate them to the concepts you do remember from class.
  2. Use what you know: Focus on the aspect of the question that you are most confident about. If you are asked about the causes of a historical event and you only know about the economic factors, write about those. A narrower but well-supported answer is better than a broad and flimsy one.
  3. Use a confident structure: Start your response with a concise thesis statement that is directly related to what you can confidently discuss. Use your introduction to map out the specific points you will cover.
  4. Show critical thinking: Even without a deep reservoir of facts, you can show a teacher that you have good academic instincts. In your response, demonstrate your analytical skills by connecting related ideas, defining concepts, and clearly explaining the connections between them.
  5. Answer the question you can, not the one you wish you could: If a question asks for a comparison of two topics and you only know one, address the known topic directly. Then, acknowledge that a full comparison would require more information and briefly speculate on how the second topic might fit in based on any limited knowledge you have. An honest but limited answer will almost always score better than one filled with transparent fluff. "
Edited by Ace-Garageguy
punctiliousness
Posted
5 hours ago, MeatMan said:

Add to all that, I just read an article where a woman was told by her PCP that her EKG showed she'd had a heart attack and that she should schedule an appointment with a cardiologist. After the cardiologist looked at the result himself, he told her she was fine, and that her PCP had taken the word of the AI w/o even looking at her EKG.

Worse yet, here in Ohio we are part of a test program that uses AI to screen people on regular Medicare for unnecessary procedures. How's that?

All Part of the Plan, my friend.

How soon before we get the first deaths from somebody who was told not to get a screening (to save money, the only reason this is being rolled out), who finally goes to their Doctor and finds out that "We could have saved you, if we'd only caught it in time"?

Of course, the ugly reality is that deceased patients stop costing money......

  • Like 1
Posted

I have found one area where AI seems to be pretty good: reviewing. I've been writing emails for a publicity campaign, and I had ChatGPT critique my email. It came up with several improvements I would have missed. It helps a lot if you give it more context and goals, something like the sketch below.
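
For instance, here's a minimal sketch of the difference between "critique this" and a request that carries audience and goals. Every name and string in it is made up for illustration, not any particular tool's API:

def build_critique_prompt(email_draft, audience, goal):
    # Hypothetical example. Tell the model who the email is for and what
    # it has to accomplish, instead of just handing over the raw draft.
    return (
        "You are reviewing a publicity-campaign email.\n"
        f"Audience: {audience}\n"
        f"Goal: {goal}\n"
        "Critique the draft below for clarity, tone, and call to action, "
        "and suggest concrete improvements.\n\n"
        + email_draft
    )

print(build_critique_prompt(
    "Hi all, our new kit is out...",
    "hobby shops that already stock our brand",
    "get them to preorder before the show",
))

The same draft with no audience or goal lines tends to get you a generic boilerplate critique; with them, the suggestions get much more specific.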

That's probably why it works so poorly for coding: there's almost no way to give it enough context to write correct code. And if you're writing a lot of routine code, you haven't gotten everything abstracted properly (Alan Kay came up with object-oriented programming for this exact reason). A toy example of that abstraction point is below.
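
This is just a sketch with hypothetical names throughout: the first two functions are the kind of routine code AI is happy to churn out endlessly, and the class after them collapses the repeated pattern so there's no routine code left to write.

class FakeDB:
    # Stand-in for a real database, just so the sketch runs.
    def insert(self, table, row):
        print(f"INSERT INTO {table}: {row}")

db = FakeDB()

# Routine version: the same validate-then-save pattern, copy-pasted per type.
def save_customer(data):
    if not data.get("name"):
        raise ValueError("missing name")
    db.insert("customers", data)

def save_order(data):
    if not data.get("order_id"):
        raise ValueError("missing order_id")
    db.insert("orders", data)

# Abstracted version: the pattern lives in one place; each new type is one line.
class Repository:
    def __init__(self, table, required_field):
        self.table = table
        self.required_field = required_field

    def save(self, data):
        if not data.get(self.required_field):
            raise ValueError(f"missing {self.required_field}")
        db.insert(self.table, data)

customers = Repository("customers", "name")
orders = Repository("orders", "order_id")
customers.save({"name": "Ed"})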

  • Thanks 1
