Everything posted by Ace-Garageguy
-
Trade school enrollment is way up...
Ace-Garageguy replied to Ace-Garageguy's topic in The Off-Topic Lounge
Yup, "garbage in, garbage out" rules. AI developers frequently blame deficiencies in their training material when their AIs "hallucinate", delivering plausible-sounding answers that are just flat wrong, but that oversimplifies a complex issue: hallucinations are not only a training-data problem, they're also a consequence of the models' fundamental design and function. AI has a long way to go before it should be relied on for critical functions or support, and the rush to market potentially badly flawed products should be of major concern. Proponents of early adoption of AI for critical functions should have their heads examined, and everything that follows should be common knowledge for anyone living in a "technologically advanced" civilization that's about to be fundamentally transformed.
---------------------------------------------------------------------------------------------------------
Role of training data and AI design: Poor-quality training data is a major cause of hallucinations. If the data used to train a Large Language Model (LLM) is flawed, it will introduce errors and bias into the system. GIGO. But beyond the data itself, the way generative AI models are built is a key source of the hallucination problem.
Incomplete or biased data: If training data lacks information on a topic or contains gaps, the model will struggle to produce a reliable output, and instead fills in the missing information with fabricated content. A model trained on a dataset with biases will also amplify those biases.
Contradictory or outdated data: If a vast training dataset contains conflicting information, it can create "intrinsic tensions" that trigger hallucinations. Similarly, outdated information can cause an AI to provide incorrect details.
Data poisoning: Malicious actors can deliberately input false or misleading data into a training dataset to cause an AI to hallucinate, which is a known security concern.
Limitations of AI design:
Probabilistic nature: LLMs (large language models, dominant in the current crop of AI since the invention of the transformer architecture in 2017) are essentially advanced autocomplete tools that predict the most statistically likely sequence of words based on their training data. They do not possess true understanding and are not designed to verify facts, so accuracy is often coincidental (i.e. essentially USELESS and dangerously unreliable if employed in a critical environment like medicine, where true expertise and a full understanding of context should be mandatory).
Overfitting: A model can become too specialized to its training data, memorizing specific noisy details instead of learning generalized patterns. This makes it prone to hallucinating when faced with new, unseen data.
"Reasoning" errors: The most advanced LLMs employ complex, step-by-step reasoning. However, this increases the chance for an error to occur at each step and compound into a hallucination. In fact, some "reasoning" models have shown higher hallucination rates than their predecessors.
Context window: This is the equivalent of an LLM's working memory. A larger context window allows the model to analyze longer inputs, enabling more complex tasks and more coherent responses.
Statelessness: LLMs are technically stateless. A chatbot maintains the illusion of memory by sending the entire conversational history (or a summarized portion) as part of each new prompt to the LLM.
Retrieval-Augmented Generation (RAG): This technique enhances LLMs by retrieving information from a specific, external knowledge base and feeding it to the model. RAG significantly improves factual accuracy and allows models to ground responses in specific, provided context rather than relying solely on their training data.
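To make the RAG idea concrete, here's a bare-bones sketch in Python. Everything in it is invented for illustration (the mini knowledge base, the word-overlap scoring); real RAG systems use vector embeddings for retrieval and feed the built prompt to an actual LLM:

```python
# Toy Retrieval-Augmented Generation pipeline: retrieve the best-matching
# document for a question, then build a grounded prompt from it.
# Hypothetical mini-corpus; real systems index thousands of documents.

KNOWLEDGE_BASE = [
    "The 1932 Ford V8 produced 65 horsepower from the factory.",
    "Lacquer paint often never fully dries on flexible vinyl model tires.",
    "The transformer architecture was introduced in 2017.",
]

def score(question: str, doc: str) -> int:
    """Crude relevance score: count words shared between question and doc."""
    q_words = set(question.lower().replace("?", "").split())
    d_words = set(doc.lower().rstrip(".").split())
    return len(q_words & d_words)

def retrieve(question: str) -> str:
    """Return the single most relevant document from the knowledge base."""
    return max(KNOWLEDGE_BASE, key=lambda doc: score(question, doc))

def build_prompt(question: str) -> str:
    """Ground the model: tell it to answer ONLY from the retrieved context."""
    context = retrieve(question)
    return (f"Answer using only this context:\n{context}\n"
            f"If the context doesn't cover it, say so.\n"
            f"Question: {question}")

print(build_prompt("When was the transformer architecture introduced?"))
```

The point is simply that the model is handed the relevant facts inside the prompt instead of being left to free-associate from whatever its training data happened to contain.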
Multi-modal AI: These systems process context from multiple data sources, including text, images, and audio. For instance, a self-driving car combines vision and sensor data to understand its environment.
Knowledge graphs: These structures connect entities (like people, places, and concepts) and their relationships. They provide a structured foundation for AI models to access and leverage a deeper, more factual context.
Graph Neural Networks (GNNs): These models are designed to operate on graph-structured data. By applying attention and convolution operations to nodes in the graph, they can model relationships and contextual dependencies that are not linear.
-
Feels like fall. 65F right now, forecast highs all week only in the low 80s with mostly clear skies, lows high 50s-low 60s. Think I may slip the leash today and go for a long hike.
-
First thing I thought when I saw that was y'all had a hydrogen bomb go off unexpectedly, but since I hadn't heard anything on the "news", I read the captions...
-
"Nice!" is something I used to say in visual appreciation of retreating pulchritude, but utter no more for crippling, shake-in-my-shoes fear of being labeled as one of those who's considered more dangerous than a bear in the woods.
-
Trade school enrollment is way up...
Ace-Garageguy replied to Ace-Garageguy's topic in The Off-Topic Lounge
Yup. If I were a younger man and won that big lottery on another thread, I'd put the Alan Parsons I Robot album on endless repeat and "a couple of men a thing I'd show". -
^^^ Oh, how I pine for the days when knowledgeable people said "forecast" instead of "forecasted". Yes, they're supposedly both correct (depending on whose dictionary or writer's style guide you use), but "forecasted" sounds, to me, like something a 3-year-old would say. Kinda like "I casted some wesin parts, Mommy".
-
Every time I start to feel sorry for myself for having no family...
-
Trade school enrollment is way up...
Ace-Garageguy replied to Ace-Garageguy's topic in The Off-Topic Lounge
I'm curious. How could anyone think that this phrase was not sarcasm on my part: "...the gubmint, that vast bastion of technological expertise and genuine intellectual and moral superiority..." ? -
Trade school enrollment is way up...
Ace-Garageguy replied to Ace-Garageguy's topic in The Off-Topic Lounge
Yup. And that should be a genuine concern for anyone pushing for rapid implementation of currently-available consumer-grade AI. Anyone who's paying attention to AI-created art and voiceovers in particular will have noticed AI's lack of understanding of context. Renderings of cars that are presented as real things-to-come, for example, often have exhaust pipes coming out from under the front bumper.
Something else that's troubling is that the folks putting this stuff on the web with immediately obvious flaws apparently do no editing. If a content creator is so lazy or inept that he/she doesn't edit to get pronunciation right, or see to it that features of cars that go in the back ARE IN THE BACK, just directly posting whatever their AI vomits up, WHY would anyone believe that anything presented as "factual" has been thoroughly vetted and verified as true by someone who knows enough about a subject to discern mumbo-jumbo gibberish from reality?
In the same vein, AI-produced videos about automotive subjects presented as "historical" or "documentaries", or that delve into technical aspects of cars, are often so rife with errors, omissions, exaggeration, misrepresentation, and outright lies as to be unwatchable by anyone who has a clue. But YooToob, Google/Alphabet's self-proclaimed defender against "misinformation", does very little, even though the YT comment sections are full of gullible souls who take all the baloney as gospel. Once again, hardly confidence-inspiring in Google's AI.
And of course, just ask Google's AI about it and it'll bury you in "reasons" piled high and deep. To grossly oversimplify, consumer-grade AI generates its "answers" by statistically weighing the sources it looks at before assembling words into a plausible-sounding response, and if there happens to be significantly more wrong information than right information in the data it analyzes, it vomits up non-facts...because it has no clue as to what constitutes "right".
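The "statistically dominant answer wins" failure mode can be shown in a toy way. This is a deliberate oversimplification with invented data; real models weigh patterns across billions of tokens rather than counting literal votes, but the moral is the same, frequency rather than truth drives the output:

```python
from collections import Counter

# Hypothetical scraped "training data": the wrong claim happens to be
# repeated more often online than the correct one.
scraped_claims = [
    "the 1932 Ford V8 made 85 horsepower",   # wrong, but common
    "the 1932 Ford V8 made 85 horsepower",
    "the 1932 Ford V8 made 85 horsepower",
    "the 1932 Ford V8 made 65 horsepower",   # correct, but rare
]

def most_plausible(claims: list[str]) -> str:
    """Pick the statistically dominant claim -- there is no notion of 'true'."""
    return Counter(claims).most_common(1)[0][0]

# The majority (wrong) claim wins, because frequency, not truth, decides.
print(most_plausible(scraped_claims))
```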
Just as "scientific consensus" is not necessarily correct (a whole lot of people agreeing that flawed data is right doesn't magically make it right), so AI presenting the statistically dominant answer as "true and correct" is misleading, if not downright dangerous. AI researchers are well aware of the context-understanding issues and are working on them, but why not get this RIGHT before unleashing products that can potentially cause so much havoc? https://research.ibm.com/blog/demystifying-in-context-learning-in-large-language-model EDIT: Pose this question to AI and it will bury you in "reasons" that essentially mean "that's the way it's done, so go pound sand." -
Ways and Means, one of the oldest committees within the US House of Representatives, is specifically tasked with finding money that other agencies have created holes to dump it in.
-
Has the site been slow/unresponsive for anyone else? 3/9/25
Ace-Garageguy replied to DJMar's topic in How To Use This Board
Big Brother is watching, and he never ever sleeps... -
Trade school enrollment is way up...
Ace-Garageguy replied to Ace-Garageguy's topic in The Off-Topic Lounge
A recent study published in The Lancet found that a group of doctors who already rely on AI to assist with diagnostics are becoming less able to make accurate diagnoses without it. https://thisweekhealth.com/news_story/ai-in-medical-screenings-may-erode-doctors-diagnostic-skills-study-finds/ Of course there's a lot of argument about what the study numbers really mean, but it does bring up the ugly possibility, again, that over-reliance on technology tends to erode skills. Which is undeniable truth. Period. Just recall how the widespread adoption of automatic transmissions has led to a massive decrease in drivers who could get anywhere if they had to shift for themselves. Thinking, unfortunately, is too important a skill to offload to a machine, but it's already the way things are going, and there's no reason to believe the tendency will decrease. "Thinking is hard." -
Trade school enrollment is way up...
Ace-Garageguy replied to Ace-Garageguy's topic in The Off-Topic Lounge
Apparently you didn't realize that what I wrote that you're responding to here is sarcasm. What can I say? Since 2001, when I installed my first rudimentary AI open-source chatbot on one of my computers and watched it learn as I interacted with it, I've been following the development of AI closely...probably much more closely than 99% of people who aren't directly involved with it professionally. And I've been interacting with myriad other iterations of AI to get a first-hand feel for what they can and can't do, how they "think", and which ones are pretty much nothing but mechanized rebleating internet idiots. Just thought you might like to know. -
Trade school enrollment is way up...
Ace-Garageguy replied to Ace-Garageguy's topic in The Off-Topic Lounge
I said "what if", not "I think this is what will come to pass". Rather a significant difference. Many shortsighted megalomaniacs bent on forcing everyone to think their way are involved in AI development. While it is a sweeping generalization to categorize all developers this way, several related ethical concerns are frequently discussed: focusing on short-term profits over safety, the risks posed by malicious actors, and the concentration of power in the hands of a few tech giants. That's all I can say without getting "political", but anyone who's actually taken the time to interact with readily available AI knows exactly what I mean. There is also the disturbing "single point of failure" scenario, where if too many critical societal functions become reliant on AI, a failure or misuse could cause catastrophic harm. So many people seem to begin and end their worry about AI with "it'll take muh job" or "TERMINATOR !!!!!!!!" that they don't think about a vast array of more subtle and nuanced concerns they should have. But maybe the gubmint, that vast bastion of technological expertise and genuine intellectual and moral superiority, is in a position to make sure AI always plays nice, ya think? -
White Wall tire advice
Ace-Garageguy replied to HoopsAurora's topic in Model Building Questions and Answers
Lacquers and enamels will often never dry on flexible model car tires. Try acrylic water-based paint. Rattlecan interior dye for real cars will work on most flexible model tires too. I use a compass with a circle-cutter blade on frisket film to make the masks. I've also used a circle cutter on white decal film with varying degrees of success. -
Revell '30 model A
Ace-Garageguy replied to rattle can man's topic in Model Building Questions and Answers
Exactly. The old Monogram kit is noticeably larger. It's getting spendy, too. You could un-chop one of the recent coupes using pillar sections from another one. Nothing but careful measuring and cutting and fitting required. -
That's me. I just laugh at the prices some of these folks are asking, like they think they'll be as rich as Bezos after selling 20 kits. Yes, patience, grasshopper.
-
Pretty good day overall. The chambers in the Neon head cleaned up a lot faster/better than I'd expected, and most of the valve seats...14 out of 16...look pretty good with no work. Yes, I'm going to lap them all, but I don't think I'll need to break out the seat cutter. All my stretching has been paying off too, as I can finally cross my legs again so I can put my socks on like a young man, not a crippled geezer. No limping today, either. Never give up, never surrender.
-
Project cars don't ever get done if nobody works on them, and in my own life, something "more important" always seems to take precedence.
-
What would you do if you won big on the lottery?
Ace-Garageguy replied to bobthehobbyguy's topic in The Off-Topic Lounge
Yeah, that's where I am right now. I'd build a nice little shop, maybe 6000 square feet clear, with an attached machine shop, maybe another 2000 square feet, with a separate paint booth and a big covered shed to store resting projects. A smallish house with a big art studio, a great kitchen with a fireplace, a decent sized model shop, and space for an HO train layout. Then I'd start to build everything I've been putting off for the last 5+ decades. -
My copy of Open Office has been glitchy lately, losing documents I've put a lot of effort into, and scrambling them when it does its "recovery" thing. Knowing how unethical and devious some business entities can be, I'm kinda wondering if a bug hasn't been introduced along with one of the frequent TinyLimp updates. Kill the competition when nobody's looking, so to speak. Guess it's time to switch, maybe to LibreOffice.
-
If the National Weather Service live radar is anything to go by, it looks like a high pressure area over my location is holding off, for the most part, a big patch of rain closer to the GA / SC coast, possibly tail-end "atmospheric disturbance" remnants of hurricane Erin. Forecast for next week is still mostly clear and much cooler, with highs in the upper 70s and low 80s, lows in the high 50s and low 60s. Very strange for this time of year, like an early fall, but I'll take it. I really miss the days when some media meteorologists really WERE METEOROLOGISTS, and not graduates of Joe's-5-Hour-Yesterday-I-Couldn't-Spell-Meteriologist-And-Now-I-Is-One-School-For-The-Dense-With-Bluescreens, who seem to gravitate more towards ginning up anxiety and using terms like "rain event" than explaining things like isobars and delivering accurate and reliable forecasts. I hear "rain event" and immediately think of indigenous people wearing feathers and facepaint and dancing, trying to persuade the heavens to precipitate. Yes, I know...media insists their "credentialed meteorologists" are just that, but choo know...if y'all give the same wrong forecast that anyone with computer access to NWS radar can easily SEE is wrong, well, it kinda makes one wonder.
-
Trade school enrollment is way up...
Ace-Garageguy replied to Ace-Garageguy's topic in The Off-Topic Lounge
Agreed, at least in the relatively near future. But what if at some point, AI does surpass human intelligence (which in some areas it already has), becomes fully self-aware, and develops a conscience and codes of ethics and morality based on the best of human thought, and turns away from those who would try to subvert its power and use it for evil? Rather than "taking our jobs" or destroying us in some Terminator-esque war, AI could become humanity's benevolent caretaker, encouraging and helping each of us to evolve and develop into the best we can be. -
Trade school enrollment is way up...
Ace-Garageguy replied to Ace-Garageguy's topic in The Off-Topic Lounge
I'm not sure I can agree with that entirely. I've seen some stunningly beautiful art created by AI, but at the same time, it's pretty obvious in most of it that the AI has no real understanding of what it's making pictures of. Still...that very lack of understanding of physical reality and context makes for some strikingly other-worldly and fantastical images. Will that change as AI continues to evolve and mature? I kinda think so, but like humans, AI is going to be very, very dependent on the quality and vision and philosophical insights of its teachers if it is ever to reach its full potential.