Dave Van Posted Sunday at 05:55 PM
I have a granddaughter (14) going to welding school. Zero debt, good job... smart.
peteski Posted Sunday at 09:30 PM (edited)
5 hours ago, Ace-Garageguy said: Apparently you didn't realize that what I wrote that you're responding to here is sarcasm.
What can I say? Nothing to say, Bill. Your sarcasm went way over my head. I guess I'm dumb enough not to be able to tell when you're serious and when you're sarcastic. No worries. Looking at a couple of other responses to your post, it seems I wasn't the only one who took it seriously.
Edited Sunday at 09:34 PM by peteski
Ace-Garageguy (Author) Posted Sunday at 09:34 PM (edited)
7 hours ago, bobthehobbyguy said: ...One thing: most of the AI used to vocalize text has a hard time distinguishing pronunciation by context. For example, the word "read" can be pronounced as "red" or "reed." Which makes me skeptical of trusting AI to do something as important as medical diagnosis or any other critical diagnosis.

Yup. And that should be a genuine concern for anyone pushing for rapid implementation of currently-available consumer-grade AI.

Anyone who's paying attention to AI-created art and voiceovers in particular will have noticed AI's lack of understanding of context. Renderings of cars presented as real things-to-come, for example, often have exhaust pipes coming out from under the front bumper.

Something else that's troubling: the folks putting this stuff on the web with immediately obvious flaws apparently do no editing. If a content creator is so lazy or inept that he/she doesn't edit to get pronunciation right, or see to it that features of cars that go in the back ARE IN THE BACK, and just directly posts whatever the AI vomits up, WHY would anyone believe that anything presented as "factual" has been thoroughly vetted and verified by someone who knows enough about the subject to discern mumbo-jumbo gibberish from reality?

In the same vein, AI-produced videos about automotive subjects presented as "historical" or "documentaries," or that delve into the technical aspects of cars, are often so rife with errors, omissions, exaggerations, misrepresentations, and outright lies as to be unwatchable by anyone who has a clue. Yet YooToob, Google/Alphabet's self-proclaimed defender against "misinformation," does very little, even though the YT comment sections are full of gullible souls who take all the baloney as gospel. Once again, hardly confidence-inspiring where Google's AI is concerned. And of course, just ask Google's AI about it and it'll bury you in "reasons" piled high and deep.

To grossly oversimplify, consumer-grade AI generates its "answers" by statistically weighing the sources it looks at before assembling words into a plausible-sounding response. If there happens to be significantly more wrong information than right information in the data it analyzes, it vomits up non-facts... because it has no clue as to what constitutes "right." Just as "scientific consensus" is not necessarily correct (a whole lot of people agreeing that flawed data is right doesn't magically make it right), so AI presenting the statistically dominant answer as "true and correct" is misleading, if not downright dangerous.

AI researchers are well aware of the understanding-context issues and are working on them, but why not get this RIGHT before unleashing products that can potentially cause so much havoc? https://research.ibm.com/blog/demystifying-in-context-learning-in-large-language-model

EDIT: Pose this question to AI and it will bury you in "reasons" that essentially mean "that's the way it's done, so go pound sand."
Edited Monday at 12:33 AM by Ace-Garageguy
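To put the "statistical weighing" point above in concrete terms, here is a deliberately toy sketch: a pretend "model" that answers by frequency-voting over invented sources. Real LLMs predict tokens rather than whole sentences, and nothing here reflects any actual implementation, but the failure mode it shows (the dominant claim wins regardless of truth) is the one being described.

```python
from collections import Counter

# Hypothetical "sources" a toy model might weigh. Three repeat an error;
# one states the fact correctly. All of these strings are invented.
corpus = [
    "the '57 Chevy's tailfins were on the front",   # wrong, but common
    "the '57 Chevy's tailfins were on the front",   # wrong, repeated
    "the '57 Chevy's tailfins were on the front",   # wrong, repeated
    "the '57 Chevy's tailfins were on the back",    # right, but rare
]

def most_likely_answer(sources):
    # Weigh claims purely by frequency, with no notion of "right":
    # whichever claim dominates statistically wins.
    answer, _count = Counter(sources).most_common(1)[0]
    return answer

print(most_likely_answer(corpus))
# -> "the '57 Chevy's tailfins were on the front"
# The statistically dominant (wrong) claim is served up as the "answer".
```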
Ace-Garageguy (Author) Posted Sunday at 10:04 PM (edited)
35 minutes ago, peteski said: Nothing to say, Bill. Your sarcasm went way over my head. I guess I'm dumb enough not to be able to tell when you're serious and when you're sarcastic. No worries. Looking at a couple of other responses to your post, it seems I wasn't the only one who took it seriously.
I'm curious. How could anyone think that this phrase was not sarcasm on my part: "...the gubmint, that vast bastion of technological expertise and genuine intellectual and moral superiority..."?
Edited Sunday at 10:07 PM by Ace-Garageguy (punctiliousness)
89AKurt Posted Monday at 12:09 AM
For what my two cents is worth, I am a product of a trade school. But it's complicated. Right after high school I entered a Tucson trade school, but when I told Dad that I was helping to teach other students how to draw, he let me go to one in Phoenix. Only one year; they had a second year they tried to rope everyone into. Years later I was told to repay a student loan for that second year, which I never attended. Lucky I kept records, and come to find out, the school was out of business. After moving to Prescott, I attended the local college for an AA degree in construction, so that is the complication for this subject. They are experimenting with 3D-printed houses, which is really just the walls. I never had a problem paying back my meager student loans; what universities charge is a bloody crime. In the construction industry here, skilled trade workers are in slim supply. Try to find an architect and/or engineer to do anything in a reasonable time. I could have kept working, but I would have needed to get addicted to Prozac. IMO, just because you have a computer doesn't mean you know how to design a good structure. So now A.I. will do it for us. I can hardly wait /sarc off the charts/.
Dragline Posted Monday at 12:26 AM
I can currently spot AI voice-overs. There will come a time when that will probably become impossible. Inflection is one of the things they are getting right, but others are still quite a way off. I watch a lot of 40K lore videos that run long, and the use of AI is prevalent in this arena. It has a serviceable quality, but also a comedic one. At least to me. I have a degree or two in English, so I tend to hear and see written words with a critical bent. It's innate in me at this point. Be that as it may, I still see AI as THE burgeoning field, and if I were a younger man I would seriously pursue it as a career path.
peteski Posted Monday at 12:39 AM
2 hours ago, Ace-Garageguy said: I'm curious. How could anyone think that this phrase was not sarcasm on my part: "...the gubmint, that vast bastion of technological expertise and genuine intellectual and moral superiority..."?
I guess I'm dumb and have some sort of undiagnosed mental deficiency. What can I say? It makes sense now, but it didn't when I first read it.
Ace-Garageguy (Author) Posted Monday at 01:09 AM (edited)
1 hour ago, Dragline said: ...I still see AI as THE burgeoning field, and if I were a younger man I would seriously pursue it as a career path.
Yup. If I were a younger man and won that big lottery on another thread, I'd put the Alan Parsons I Robot album on endless repeat and "a couple of men a thing I'd show."
Edited Monday at 01:56 AM by Ace-Garageguy
bobthehobbyguy Posted Monday at 02:30 PM
16 hours ago, Ace-Garageguy said: To grossly oversimplify, consumer-grade AI generates its "answers" by statistically weighing the sources it looks at before assembling words into a plausible-sounding response. If there happens to be significantly more wrong information than right information in the data it analyzes, it vomits up non-facts... because it has no clue as to what constitutes "right."
A common phrase in computing is "garbage in, garbage out." This is a perfect example of that.
Ace-Garageguy (Author) Posted Monday at 02:39 PM (edited)
4 hours ago, bobthehobbyguy said: A common phrase in computing is "garbage in, garbage out." This is a perfect example of that.

Yup, "garbage in, garbage out" rules. And AI developers frequently blame deficiencies in their training material when their AIs "hallucinate," delivering plausible-sounding answers that are just flat wrong, but this oversimplifies a complex issue. Hallucinations (when an AI delivers plausible-sounding but factually incorrect information) are not only a training-data problem but also a consequence of the models' fundamental design and function.

AI has a long way to go before it should be relied on for critical functions or support, and the rush to market potentially badly flawed products should be of major concern. Proponents of early adoption of AI for critical functions should have their heads examined, and everything that follows should be common knowledge for anyone living in a "technologically advanced" civilization that's about to be fundamentally transformed.

---------------------------------------------------------------------------------------------------------

Role of training data and AI design: Poor-quality training data is a major cause of hallucinations. If the data used to train a Large Language Model (LLM) is flawed, it will introduce errors and bias into the system. GIGO. But beyond the data itself, the way generative AI models are built is a key source of the hallucination problem.

Incomplete or biased data: If training data lacks information on a topic or contains gaps, the model will struggle to produce a reliable output and instead fills in the missing information with fabricated content. A model trained on a biased dataset will also amplify those biases.

Contradictory or outdated data: If a vast training dataset contains conflicting information, it can create "intrinsic tensions" that trigger hallucinations. Similarly, outdated information can cause an AI to provide incorrect details.

Data poisoning: Malicious actors can deliberately feed false or misleading data into a training dataset to cause an AI to hallucinate; this is a known security concern.

Limitations of AI design:

Probabilistic nature: LLMs (large language models, dominant in the current crop of AI since the invention of the transformer architecture in 2017) are essentially advanced autocomplete tools that predict the most statistically likely sequence of words based on their training data. They do not possess true understanding and are not designed to verify facts, so accuracy is often coincidental (i.e., essentially USELESS and dangerously unreliable in a critical environment like medicine, where true expertise and a full understanding of context should be mandatory).

Overfitting: A model can become too specialized to its training data, memorizing specific noisy details instead of learning generalized patterns. This makes it prone to hallucinating when faced with new, unseen data.

"Reasoning" errors: The most advanced LLMs employ complex, step-by-step reasoning. However, this increases the chance for an error at each step, and those errors can compound into a hallucination. In fact, some "reasoning" models have shown higher hallucination rates than their predecessors.

Context window: This is the equivalent of an LLM's working memory. A larger context window allows the model to analyze longer inputs, enabling more complex tasks and more coherent responses.

Statelessness: LLMs are technically stateless. A chatbot maintains the illusion of memory by sending the entire conversational history (or a summarized portion) as part of each new prompt to the LLM (a rough code sketch of this, combined with naive RAG, follows after this list).

Techniques for providing context:

Retrieval-Augmented Generation (RAG): This technique enhances LLMs by retrieving information from a specific, external knowledge base and feeding it to the model. RAG significantly improves factual accuracy and allows models to ground responses in specific, provided context rather than relying solely on their training data.

Multi-modal AI: These systems process context from multiple data sources, including text, images, and audio. For instance, a self-driving car combines vision and sensor data to understand its environment.

Knowledge graphs: These structures connect entities (like people, places, and concepts) and their relationships. They provide a structured foundation for AI models to access and leverage a deeper, more factual context.

Graph Neural Networks (GNNs): These models are designed to operate on graph-structured data. By applying attention and convolution operations to nodes in the graph, they can model relationships and contextual dependencies that are not linear.

Edited Monday at 06:47 PM by Ace-Garageguy
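To make the statelessness and RAG items above concrete, here is a minimal sketch. Everything in it is hypothetical: call_llm is a stand-in for any completion API, and the two-entry knowledge base is invented. The point is only the shape of the loop: retrieved facts get prepended to the prompt, and the full history is resent on every turn because the model itself remembers nothing.

```python
from typing import List

# Invented knowledge base for illustration only.
KNOWLEDGE_BASE = {
    "erg": "An erg is a CGS unit of energy: 1 erg = 1e-7 joules.",
    "transformer": "The transformer architecture dates to 2017.",
}

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; just reports context size here.
    return f"(model reply based on {len(prompt)} chars of context)"

def retrieve(query: str) -> List[str]:
    # Naive RAG retrieval: return any knowledge-base entry whose key
    # appears in the query. Real systems use embedding similarity.
    return [text for key, text in KNOWLEDGE_BASE.items() if key in query.lower()]

def chat_turn(history: List[str], user_msg: str) -> str:
    # RAG step: ground the model in retrieved facts, not just training data.
    grounding = "\n".join(retrieve(user_msg))
    # Statelessness: the LLM keeps no memory between calls, so the whole
    # running history is packed into every new prompt to fake continuity.
    prompt = grounding + "\n" + "\n".join(history) + "\nUser: " + user_msg
    reply = call_llm(prompt)
    history.extend([f"User: {user_msg}", f"Assistant: {reply}"])
    return reply

history: List[str] = []
print(chat_turn(history, "What is an erg?"))
print(chat_turn(history, "Who invented the transformer?"))
# The prompt grows every turn; the model itself remembers nothing.
```

A real implementation would retrieve by embedding similarity rather than keyword matching, and would truncate or summarize the history once it outgrows the context window described above.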
Xingu Posted Monday at 06:26 PM
All you need to do is prepare for widespread brownouts/blackouts. That should happen within the first week AI has substantial control of our everyday lives. Power will be the first commodity that private citizens lose.
Ace-Garageguy (Author) Posted Monday at 06:34 PM (edited)
8 hours ago, Xingu said: All you need to do is prepare for widespread brownouts/blackouts. That should happen within the first week AI has substantial control of our everyday lives. Power will be the first commodity that private citizens lose.
AI processing on a large scale is a bigger energy hog and grid stressor than even widespread adoption of electric vehicles. We may be in for a wild ride even if AI doesn't go rogue.
EDIT: I find it interesting that we're being systematically herded into putting all our energy eggs in one basket, while phasing out (bulldozing) traditional energy sources that could provide backup when the sun isn't shining or the wind isn't blowing. There's an old science-nerd joke whose punchline is "don't put all your ergs in one biscuit." It's good advice.
EDIT 2: I ran the punchline past Googli's AI. It doesn't get it. While it can often detect and even explain humor based on misstating common phrases and the use of puns, it missed this one entirely.
EDIT 3: The term "erg" refers to a specific unit of measurement for work or energy. It has historical significance and practical applications.
EDIT 4: Just now, about 8 hours later, I ran the punchline through Googli's AI again. Now at least it understands that it's a joke. It's thinking...
Edited yesterday at 03:13 AM by Ace-Garageguy
bobss396 Posted Monday at 09:49 PM
I had always liked to make things as a kid: bike projects, models, science projects in school. In grade school there was a separate book section, all from the same publisher, all about captains of industry: Henry Ford, Walter P. Chrysler, the Wrights, Eli Whitney, and so on. The Wrights interested me. They made sleds and bicycles (and, eventually, the Wright Flyer). The book mentioned them making scale drawings: if it was right on paper, the final project would be right as well. I started drawing things on graph paper with pencil and ruler. It was in HS that I took "mechanical drawing"; I still have my 10th-grade portfolio of my projects. In college there was no CAD for us in 1973. We used drafting boards and were even graded on our printing. CAD became available around 1981 when I went back at night: Computervision, which crashed more than it worked. Then the blue-screen AutoCAD with the task bar. I still have a V12 book around; it was even a chore to define your paper space. AutoCAD got better once it became icon-based. We had a version for Windows 3.1 on pirated floppy disks that we used on a 386 PC at work. This was around 1994; I had my formal CAD training at work soon after.