I’m with AOC. When (not if) the AI bubble pops, there should be absolutely NO BAILOUT from the Federal government. Absolutely none.
I worked at a very successful, global, old school company for over 25 years.
In the #dotcom era I remember this company (which wanted nothing to do with the internet) squeezing managers to come up with stories about how much revenue they piped through "E-business", so they could tell the share market the same and hitch themselves to the #SharePriceBubble they were missing out on. The stories were tenuous at best. My global project booked billions of dollars in orders as "E-business" after pushing B2B customers to use IVR- and forecast-based ordering.
Apropos of nothing, I saw a LinkedIn article the other day by a VP of the same company, spruiking their #AI credentials. Knowing this company and its cultural system of management control, I can guarantee there is no way they'd let even a private #LLM run through their data, primarily due to unauthorised data-exposure risks.
But hey, got to feed the #AIHype machine and convince the share market they're a growth business.
Well, that's a relief, isn't it? The Zuckerberg private foundation, sorry, the we-do-something-philanthropic-see-we-are-good-people-aren't-we? thingy, stopped investing in diversity (bah!) and is now solving all our problems. How? With AI!
While our colleagues publish study after study on where applying AI in research does and does not make sense, some people still invest blindly in it. Or do they?
Make no mistake: this is a well-calculated investment! If your business is already pouring money into things with little ROI, then putting money into something that creates the appearance of demand is simply a marketing investment.
Stop overhyping AI, scientists tell von der Leyen @euractiv
「 These are marketing statements driven by profit-motive and ideology rather than empirical evidence and formal proof 」
「 The scientific development of any potentially useful AI is not served by amplifying the unscientific marketing claims of US tech firms 」
https://www.euractiv.com/news/stop-overhyping-ai-scientists-tell-von-der-leyen/
""With the exception of Nvidia, which is selling shovels in a gold rush, most generative AI companies are both wildly overvalued and wildly overhyped," Gary Marcus, Emeritus Professor of Psychology and Neural Science at New York University, told DW. "My guess is that it will all fall apart, possibly soon. The fundamentals, technical and economic, make no sense."
Garran, meanwhile, believes the era of rapid progress in large language models (LLMs) is drawing to a close, not because of technical limits, but because the economics no longer stack up.
"They [AI platforms] have already hit the wall," Garran said, adding that the cost of training new models is "skyrocketing, and the improvements aren’t much better."
Striking a more positive tone, Sarah Hoffman, director of AI Thought Leadership at the New York-based market intelligence firm AlphaSense, predicted a "market correction" in AI, rather than a "cataclysmic 'bubble bursting.'"
After an extended period of extraordinary hype, enterprise investment in AI will become far more discerning, Hoffman told DW in an emailed statement, with the focus "shifting from big promises to clear proof of impact."
"More companies will begin formally tracking AI ROI [return on investment] to ensure projects deliver measurable returns," she added."
https://www.dw.com/en/will-the-ai-bubble-burst-as-investors-grow-wary-of-returns/a-74636881
"The biggest US-listed companies keep talking about artificial intelligence. But other than the “fear of missing out”, few appear to be able to describe how the technology is changing their businesses for the better.
That is the conclusion of a Financial Times analysis of hundreds of corporate filings and executive transcripts at S&P 500 companies last year, providing one of the most comprehensive insights yet into how the AI wave is rippling through American industry.
Big Tech giants such as Microsoft, Alphabet, Amazon and Meta have regularly extolled AI’s benefits, pledging to invest $300bn this year alone to develop the infrastructure around large language models.
Large companies far from Silicon Valley, from beverages giant Coca-Cola to sportswear maker Lululemon, are also discussing AI at ever-greater length in their regulatory filings. But they also largely paint a more sober picture of the technology’s usefulness, expressing concern over cyber security, legal risks and the potential for it to fail."
https://www.ft.com/content/e93e56df-dd9b-40c1-b77a-dba1ca01e473
""With the exception of Nvidia, which is selling shovels in a gold rush, most generative AI companies are both wildly overvalued and wildly overhyped," Gary Marcus, Emeritus Professor of Psychology and Neural Science at New York University, told DW. "My guess is that it will all fall apart, possibly soon. The fundamentals, technical and economic, make no sense."
Garran, meanwhile, believes the era of rapid progress in large language models (LLMs) is drawing to a close, not because of technical limits, but because the economics no longer stack up.
"They [AI platforms] have already hit the wall," Garran said, adding that the cost of training new models is "skyrocketing, and the improvements aren’t much better."
Striking a more positive tone, Sarah Hoffman, director of AI Thought Leadership at the New York-based market intelligence firm AlphaSense, predicted a "market correction" in AI, rather than a "cataclysmic 'bubble bursting.'"
After an extended period of extraordinary hype, enterprise investment in AI will become far more discerning, Hoffmann told DW in an emailed statement, with the focus "shifting from big promises to clear proof of impact."
"More companies will begin formally tracking AI ROI [return on investment] to ensure projects deliver measurable returns," she added."
https://www.dw.com/en/will-the-ai-bubble-burst-as-investors-grow-wary-of-returns/a-74636881
"The biggest US-listed companies keep talking about artificial intelligence. But other than the “fear of missing out”, few appear to be able to describe how the technology is changing their businesses for the better.
That is the conclusion of a Financial Times analysis of hundreds of corporate filings and executive transcripts at S&P 500 companies last year, providing one of the most comprehensive insights yet into how the AI wave is rippling through American industry.
Big Tech giants such as Microsoft, Alphabet, Amazon and Meta have regularly extolled AI’s benefits, pledging to invest $300bn this year alone to develop the infrastructure around large language models.
Large companies far from Silicon Valley, from beverages giant Coca-Cola to sportswear maker Lululemon, are also discussing AI at ever-greater length in their regulatory filings. But they also largely paint a more sober picture of the technology’s usefulness, expressing concern over cyber security, legal risks and the potential for it to fail."
https://www.ft.com/content/e93e56df-dd9b-40c1-b77a-dba1ca01e473
This is why nobody should ever try to vibe-code a screen reader. Do not listen to the blind people who think this is a good idea. They are wrong. https://sightlessscribbles.com/posts/20250902/ #AI #AIHype #Accessibility #A11y
Is there a database of wrong ChatGPT answers that I could use to show my students that AI sometimes makes stuff up?
Bonus points for wrong facts related to the environment/natural sciences/biology.
When they get a wrong answer, the resulting discussion could be a very productive opportunity to learn critical thinking in modern times.
The situation is dire and we really need to counter the #AIHype before it is too late. My colleagues are split between a small group of the AI-brainwormed (unfortunately we are almost forced to attend seminars with titles like "How AI can improve your teaching") and many utterly desperate professors who do not know how to deal with the situation.
@VaryIngweion
@capita_picat
#TeachingInTheAgeOfAI #Education #HigherEd #SciComm #AcademicChatter
I forgot that I did eventually edit the first line of the book. It was originally, "I did not want to write this book."
Disabling Intelligences eBook is available now https://link.springer.com/book/10.1007/978-3-032-02665-1
Image text: I’ll be honest. When thinking about what books might live in me, this was never one of them. It was certainly not the book I thought I would write first. I never wanted to become known for artificial intelligence (AI) criticism at all. I want to sit in a lab and tinker with tech, building little gadgets that delight my disabled kin. I want to maintain surreptitious code bases of free little hacks that disrupt our perpetually inaccessible and downright-hostile world. I want to share disabled DIY specs through crumpled little zines posted in libraries and coffee houses. I want to run a free digital manufacturing center just for disabled people to come and build exactly what they want without a doctor, insurance company, or bank account telling them what body they’re allowed to have.
But in order to do that, I have to fight the inadequacies in technology policy and medical care. And in order to do that, I have to fight the AI industrial complex. Because underneath every insurance rejection is a predictive algorithm, behind every assistive technology is a data collection scheme, and now, behind every technology policy is an AI hype man.