This is the script of my national radio report from yesterday, discussing the rumor that #Google was using #Gmail to train #AI, along with other issues surrounding confusion about when and how AI is actually being used by Big Tech.
- - -
So yes, some rather viral stories started making the rounds claiming that Google was already, or soon would be, training their AI on the content of users' personal and business Gmail messages. And as you might expect, this triggered quite an outcry, and Google has now denied that any of this is taking place. And I don't see any reason to doubt that statement.
However, this does further open up the Pandora's box of the ever-growing generative AI train wreck, which keeps accelerating with little sense that the Big Tech AI firms are willing to take responsibility for the problems that their Large Language Model generative AI systems are causing.
And this is of particular importance now, because there have been reports that the administration was considering an executive order to override state regulations on AI, even though Congress recently voted overwhelmingly to preserve states' ability to regulate AI.
Irrespective of the specifics of this particular Gmail story, the reality is that it's becoming increasingly difficult to know or understand if, when, or how one's documents and other communications, whether business or personal, may actually be ingested into AI, and whether that ingestion is into a local on-device model or whether your data may find its way back into centralized models, either purposely or accidentally. And we know there have been cases of data that individuals and businesses would consider private showing up in public AI interactions.
One trend now that you may have noticed is that some firms don't even explicitly mention the term AI, even though they are using AI-based systems, perhaps in some cases because they know the term now understandably triggers concerns and alarm from so many people. The firms will push new features and options that supposedly make your life better, and sometimes the only place where you might see the term AI is deep in their Terms of Service, which hardly anybody reads and even fewer people have the background to really understand.
And this can look suspicious, because it often seems pretty obvious that those features couldn't really be implemented without AI, whether or not your data is actually being used for AI training today.
And these kinds of pushes for you to accept these services can come in various forms. Sometimes it's just a button that you can easily ignore. Sometimes it's what in Google-speak is called a "Butter Bar" -- a banner across the top of the current page. And then there's what many people consider to be the really nasty approach -- and many firms use this for all kinds of reasons. It's called a "modal pop-up" -- that's M-O-D-A-L -- and that's when a box or new page blocks some or all of the page you're actually trying to use, and you're often forced to make some sort of decision right then -- sometimes with the option to just close the pop-up, and sometimes not -- before you can continue working on your actual page.
Google used these fairly recently when they changed some of the available interactions between their services and their AI, and it was pretty in your face -- it seemed that you had to decide right then what you wanted, whether you fully understood their explanations or not. And frankly, I didn't fully understand what they were saying until I researched it in some depth.
Whether or not some firms are purposely trying to trick you into using their AI, even when they haven't defaulted you into it, it's clear that absent strong regulations to help avoid AI abuses, much of Big Tech intends to use AI to steamroll right over individual choices, and sometimes privacy as well, in their desperate quest to profit from the staggering sums they're pouring into AI development. And how society at large feels about this seems -- unfortunately for us -- often not to be on Big Tech's list of concerns.
- - -