Model Cars Magazine Forum

Everything posted by Ace-Garageguy

  1. Well, at least they're not all AI voiceovers...yet.
  2. AI processing on a large scale is a bigger energy-hog / grid-stressor than even widespread adoption of electric vehicles. We may be in for a wild ride even if AI doesn't go rogue. EDIT: I find it interesting that we're being systematically herded into putting all our energy eggs in one basket, while phasing out (bulldozing) traditional energy sources that could provide backup when the sun isn't shining or the wind isn't blowing. There's an old science-nerd joke whose punchline is "don't put all your ergs in one biscuit". It's good advice. EDIT 2: I ran the punchline past Googli's AI. It doesn't get it. While it can often detect and even explain humor based on misstating common phrases and the use of puns, it missed this one entirely. EDIT 3: The term 'erg' refers to a specific unit of measurement for work or energy. It has historical significance and practical applications. EDIT 4: Just now, about 8 hours later, I ran the punchline through Googli's AI again. Now at least it understands that it's a joke. It's thinking...
  3. Or just use paint-compatible release agents. They're out there.
  4. "You weren't trained to win, but to lose with fake grace" and: A wise man once said "only a fool stifles his anger completely; it is often the bodyguard to conscience"
  5. Watch your peas and queues.
  6. "Golf" is "flog" spelled backwards.
  7. Yup, "garbage in, garbage out" rules. AI developers frequently blame deficiencies in their training material when their AIs "hallucinate", delivering plausible-sounding answers that are just flat wrong, but that oversimplifies a complex issue: hallucinations are not only a training-data problem but also a consequence of the models' fundamental design and function. AI has a long way to go before it should be relied on for critical functions or support, and the rush to market potentially badly flawed products should be of major concern. Proponents of early adoption of AI for critical functions should have their heads examined, and everything that follows should be common knowledge for anyone living in a "technologically advanced" civilization that's about to be fundamentally transformed.
---------------------------------------------------------------------------------------------------------
Role of training data: Poor-quality training data is a major cause of hallucinations. If the data used to train a Large Language Model (LLM) is flawed, it will introduce errors and bias into the system. GIGO. But beyond the data itself, the way generative AI models are built is a key source of the hallucination problem.
  • Incomplete or biased data: If training data lacks information on a topic or contains gaps, the model will struggle to produce a reliable output and will instead fill in the missing information with fabricated content. A model trained on a biased dataset will also amplify those biases.
  • Contradictory or outdated data: If a vast training dataset contains conflicting information, it can create "intrinsic tensions" that trigger hallucinations. Similarly, outdated information can cause an AI to provide incorrect details.
  • Data poisoning: Malicious actors can deliberately feed false or misleading data into a training dataset to cause an AI to hallucinate, which is a known security concern.
Limitations of AI design:
  • Probabilistic nature: LLMs (large language models, dominant in the current crop of AI since the invention of the transformer architecture in 2017) are essentially advanced autocomplete tools that predict the most statistically likely sequence of words based on their training data. They do not possess true understanding and are not designed to verify facts, so accuracy is often coincidental (i.e. essentially USELESS and dangerously unreliable if employed in a critical environment like medicine, where true expertise and a full understanding of context should be mandatory).
  • Overfitting: A model can become too specialized to its training data, memorizing specific noisy details instead of learning generalized patterns. This makes it prone to hallucinating when faced with new, unseen data.
  • "Reasoning" errors: The most advanced LLMs employ complex, step-by-step reasoning, but each step is another chance for an error to occur and compound into a hallucination. In fact, some "reasoning" models have shown higher hallucination rates than their predecessors.
  • Context window: This is the equivalent of an LLM's working memory. A larger context window allows the model to analyze longer inputs, enabling more complex tasks and more coherent responses.
  • Statelessness: LLMs are technically stateless. A chatbot maintains the illusion of memory by resending the entire conversational history (or a summarized portion) as part of each new prompt to the LLM.
Handling context and grounding:
  • Retrieval-Augmented Generation (RAG): This technique enhances LLMs by retrieving information from a specific, external knowledge base and feeding it to the model. RAG significantly improves factual accuracy and allows models to ground responses in specific, provided context rather than relying solely on their training data.
  • Multi-modal AI: These systems process context from multiple data sources, including text, images, and audio. For instance, a self-driving car combines vision and sensor data to understand its environment.
  • Knowledge graphs: These structures connect entities (like people, places, and concepts) and their relationships, providing a structured foundation for AI models to access and leverage a deeper, more factual context.
  • Graph Neural Networks (GNNs): These models are designed to operate on graph-structured data. By applying attention and convolution operations to the nodes of a graph, they can model relationships and contextual dependencies that are not linear.
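(For anyone who wants to see the statelessness and RAG points above in concrete form, here's a minimal Python sketch. Everything in it is a hypothetical stand-in: the complete() function is a placeholder for any LLM completion API, and the knowledge base and keyword "retrieval" are toys, not how production RAG systems actually score documents.)

```python
# Sketch of two ideas from the post above: LLM "statelessness" (the model
# remembers nothing between calls, so the client resends the whole
# conversation each turn) and Retrieval-Augmented Generation (RAG), where
# relevant reference text is looked up and prepended to the prompt so the
# answer is grounded in provided context rather than training data alone.

# Toy knowledge base; a real RAG system would search a document store.
KNOWLEDGE_BASE = {
    "erg": "The erg is the CGS unit of energy: 1 erg = 1e-7 joules.",
    "transformer": "The transformer architecture was introduced in 2017.",
}

def retrieve(question: str) -> list[str]:
    """Toy retrieval: return snippets whose key appears in the question."""
    q = question.lower()
    return [text for key, text in KNOWLEDGE_BASE.items() if key in q]

def complete(prompt: str) -> str:
    """Hypothetical LLM call; a real client would send `prompt` to a model."""
    return f"<model answer conditioned on {len(prompt)} chars of prompt>"

def chat_turn(history: list[str], question: str) -> str:
    # RAG step: ground the model in retrieved facts.
    context = "\n".join(retrieve(question))
    # Statelessness step: "memory" is just the transcript we choose to
    # resend; drop this and the model forgets everything said so far.
    transcript = "\n".join(history)
    prompt = (f"Context:\n{context}\n\nConversation so far:\n{transcript}\n"
              f"User: {question}\nAssistant:")
    answer = complete(prompt)
    history.extend([f"User: {question}", f"Assistant: {answer}"])
    return answer

history: list[str] = []
print(chat_turn(history, "What is an erg?"))
print(chat_turn(history, "Who invented the transformer?"))  # history carries over
```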
  8. Feels like fall. 65F right now, forecast highs all week only in the low 80s with mostly clear skies, lows high 50s-low 60s. Think I may slip the leash today and go for a long hike.
  9. First thing I thought when I saw that was y'all had a hydrogen bomb go off unexpectedly, but since I hadn't heard anything on the "news", I read the captions...
  10. If I had piled up everything I've ever buggered up while becoming reasonably proficient at a few things, I'd have a pretty big pile. A wise man once said we learn more from failure than from easy success. I'd kinda have to agree with him.
  11. "Nice!" is something I used to say in visual appreciation of retreating pulchritude, but utter no more for crippling, shake-in-my-shoes fear of being labeled as one of those who's considered more dangerous than a bear in the woods.
  12. Yup. If I were a younger man and won that big lottery on another thread, I'd put the Alan Parsons I Robot album on endless repeat and "a couple of men a thing I'd show".
  13. ^^^ Oh, how I pine for the days when knowledgeable people said "forecast" instead of "forecasted". Yes, they're supposedly both correct (depending on whose dictionary or writer's style guide you use), but "forecasted" sounds, to me, like something a 3-year-old would say. Kinda like "I casted some wesin parts, Mommy".
  14. Every time I start to feel sorry for myself for having no family...
  15. I'm curious. How could anyone think that this phrase was not sarcasm on my part: "...the gubmint, that vast bastion of technological expertise and genuine intellectual and moral superiority..." ?
  16. Yup. And that should be a genuine concern for anyone pushing for rapid implementation of currently-available consumer-grade AI. Anyone who's paying attention to AI-created art and voiceovers in particular will have noticed its lack of understanding of context. Renderings of cars that are presented as real things-to-come, for example, often have exhaust pipes coming out from under the front bumper.
Something else that's troubling is that the folks putting this stuff on the web with immediately obvious flaws apparently do no editing. If a content creator is so lazy or inept that he/she doesn't edit to get pronunciation right, or see to it that features of cars that go in the back ARE IN THE BACK, just directly posting whatever their AI vomits up, WHY would anyone believe that anything presented as "factual" has been thoroughly vetted and verified as true by someone who knows enough about a subject to discern mumbo-jumbo gibberish from reality?
In the same vein, AI-produced videos about automotive subjects presented as "historical" or "documentaries", or that delve into technical aspects of cars, are often so rife with errors, omissions, exaggeration, misrepresentation, and outright lies as to be unwatchable by anyone who has a clue. But YooToob, Google/Alphabet's self-proclaimed defender against "misinformation", does very little, even though the YT comment sections are full of gullible souls who take all the baloney as gospel. Once again, hardly confidence-inspiring in Google's AI. And of course, just ask Google's AI about it and it'll bury you in "reasons" piled high and deep.
To grossly oversimplify, consumer-grade AI generates its "answers" by statistically weighing the sources it looks at before assembling words into a plausible-sounding response, and if there happens to be significantly more wrong information than right information in the data it analyzes, it vomits up non-facts...because it has no clue as to what constitutes "right". Just as "scientific consensus" is not necessarily correct (a whole lot of people agreeing that flawed data is right doesn't magically make it right), so AI presenting the statistically dominant answer as "true and correct" is misleading, if not downright dangerous.
AI researchers are well aware of the understanding-context issues and are working on them, but why not get this RIGHT before unleashing products that can potentially cause so much havoc? https://research.ibm.com/blog/demystifying-in-context-learning-in-large-language-model
EDIT: Pose this question to AI and it will bury you in "reasons" that essentially mean "that's the way it's done, so go pound sand."
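(To make the "statistically dominant isn't the same as true" point above concrete, here's a toy Python sketch. The source counts are invented for illustration; the point is that a system that simply echoes the most frequent claim in its sources will confidently repeat the majority view whether or not it's right.)

```python
# Toy illustration: pick the "consensus" answer by frequency alone.
# Nothing in the frequency count encodes which claim matches reality.

from collections import Counter

# Pretend scrape of the web: 7 sources repeat a wrong claim, 3 are right.
sources = (
    ["The exhaust exits under the front bumper."] * 7
    + ["The exhaust exits at the rear."] * 3
)

def most_plausible(claims: list[str]) -> str:
    """Return the most frequent claim, i.e. the statistically dominant one."""
    return Counter(claims).most_common(1)[0][0]

print(most_plausible(sources))
# -> "The exhaust exits under the front bumper."
# The majority answer wins even though it's wrong.
```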
  17. Ways and Means, one of the oldest committees within the US House of Representatives, is specifically tasked with finding the money to dump into the holes that other agencies have created.
  18. Big Brother is watching, and he never ever sleeps...
  19. A recent study published in The Lancet found that a group of doctors who already rely on AI to assist with diagnostics are becoming less able to accurately diagnose what they're looking for without it. https://thisweekhealth.com/news_story/ai-in-medical-screenings-may-erode-doctors-diagnostic-skills-study-finds/ Of course there's a lot of argument about what the study numbers really mean, but it does bring up the ugly possibility, again, that over-reliance on technology tends to erode skills. Which is undeniable truth. Period. Just recall how the widespread adoption of automatic transmissions has led to a massive decrease in drivers who could get anywhere if they had to shift for themselves. Thinking, unfortunately, is too important a skill to offload to a machine, but it's already the way things are going, and there's no reason to believe the tendency will decrease. "Thinking is hard."
  20. Apparently you didn't realize that what I wrote that you're responding to here is sarcasm. What can I say? Since 2001, when I installed my first rudimentary AI open-source chatbot on one of my computers and watched it learn as I interacted with it, I've been following the development of AI closely...probably much more closely than 99% of people who aren't directly involved with it professionally. And I've been interacting with myriad other iterations of AI to get a first-hand feel for what they can and can't do, how they "think", and which ones are pretty much nothing but mechanized rebleating internet idiots. Just thought you might like to know.
  21. I said "what if", not "I think this is what will come to pass". Rather a significant difference. Many shortsighted megalomaniacs bent on forcing everyone to think their way are involved in AI development. While it is a sweeping generalization to categorize all developers this way, several related ethical concerns are frequently discussed: focusing on short-term profits over safety, the risks posed by malicious actors, and the concentration of power in the hands of a few tech giants. That's all I can say without getting "political", but anyone who's actually taken the time to interact with readily available AI knows exactly what I mean. There is also the disturbing "single point of failure" scenario, where if too many critical societal functions become reliant on AI, a failure or misuse could cause catastrophic harm. So many people seem to begin and end their worry about AI with "it'll take muh job" or "TERMINATOR !!!!!!!!" that they don't think about a vast array of more subtle and nuanced concerns they should have. But maybe the gubmint, that vast bastion of technological expertise and genuine intellectual and moral superiority, is in a position to make sure AI always plays nice, ya think?
  22. Lacquers and enamels will often never dry on flexible model car tires. Try acrylic water-based paint. Rattlecan interior dye for real cars will work on most flexible model tires too. I use a compass with a circle-cutter blade on frisket film to make the masks. I've also used a circle cutter on white decal film with varying degrees of success.
  23. Exactly. The old Monogram kit is noticeably larger. It's getting spendy, too. You could un-chop one of the recent coupes using pillar sections from another one. Nothing but careful measuring and cutting and fitting required.
  24. That's me. I just laugh at the prices some of these folks are asking, like they think they'll be as rich as Bezos after selling 20 kits. Yes, patience, grasshopper.
  25. Pretty good day overall. The chambers in the Neon head cleaned up a lot faster/better than I'd expected, and most of the valve seats...14 out of 16...look pretty good with no work. Yes, I'm going to lap them all, but I don't think I'll need to break out the seat cutter. All my stretching has been paying off too, as I can finally cross my legs again so I can put my socks on like a young man, not a crippled geezer. No limping today, either. Never give up, never surrender.