
Y'ALL!!!!!!!!! I've been preaching this forever: DOWNLOAD EVERYTHING.
Thankfully, I downloaded this video before it got removed. Fuck corporate media.
"sideloading" is a stupid made up term invented to delegitimize installing software.
Heres a bunch of other things I'm doing while "sidestepping" some supposed central authority:
- sideshopping (buying stuff from a store that isn't amazon)
- sidedining (eating or making food that isn't from mcdonalds)
- sidethinking (using my own brain instead of asking chatgpt)
- sidelistening (to my own music instead of on spotify)
- sidechatting (irl instead of online)
It’s not fair to compare a state-of-the-art, AI-assisted proprietary tool from a trillion-dollar corporation with a hand-coded, free and open tool from a small not-for-profit cooperative.
(The latter works.)
https://github.com/orgs/community/discussions/170758#discussioncomment-14233260
#BigTech #AI #bullshit #GitHub #SmallTech #craft #Codeberg #Forgejo
Today is Ukraine’s Independence Day. 🇺🇦
We celebrate a nation and its people, and their choice of freedom.
Those who are bravely resisting Russia’s unprovoked and unjustified war of aggression.
We stand by them for as long as it takes for a just and lasting peace.
Slava Ukraini.
You're taking away our freedom with #chatcontrol.
Nothing but a bunch of #hypocrite bastards.
This is, to put it mildly, utter bullshit.
You can store a decade of email for a million people (call it 10-20 GB each) in a 10-20 PB storage array that costs under £1M per year (and is constantly getting cheaper). Ten-year-old emails are *cold* storage: the drives aren't even spinning most of the time; you only really need 5-10% of the archive available on demand.
The Environment Agency are gaslighting us. One wonders who put them up to it?
https://social.lol/@robb/115016579150112511
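The storage arithmetic in that post is easy to verify. A minimal sketch: the figures (1M users, 10-20 GB each, a 5-10% hot set) come from the post itself; the decimal unit conversion (1 PB = 1,000,000 GB) is my assumption.

```python
# Back-of-the-envelope check of the email-archive storage claim.
# Figures from the post: 1M users, 10-20 GB of mail each,
# with only 5-10% of the archive kept "hot" (available on demand).
USERS = 1_000_000
GB_PER_USER_LOW, GB_PER_USER_HIGH = 10, 20

def petabytes(users: int, gb_each: float) -> float:
    """Total archive size in PB, using decimal units (1 PB = 1_000_000 GB)."""
    return users * gb_each / 1_000_000

low = petabytes(USERS, GB_PER_USER_LOW)    # 10.0 PB
high = petabytes(USERS, GB_PER_USER_HIGH)  # 20.0 PB

print(f"archive: {low:.0f}-{high:.0f} PB")
print(f"hot set: {low * 0.05:.1f}-{high * 0.10:.1f} PB")
```

So the whole decade-long archive for a million users fits in a 10-20 PB array, of which only 0.5-2 PB ever needs to be spun up at once; the rest can sit on powered-down cold storage.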
When more than 90% of your answer is pure nonsense… "winkelmandje vol" is NOT the title of a song or theatre programme by Yentl en de Boer.. that's not an AI "mistake" or "hallucination", it's just #bullshit and polluting #disinformation.
Do I now have to fire off a second search query to find out whether the first search result is even correct?
Is there a plug-in that doesn't just filter out all the AI answers and the first 4 pages of Google Ads, but dumps them straight into a garbage container?
Marketing departments are like neighborhood kids leaving bags of shit on your doorstep.
Marketing departments using generative AI are like neighborhood kids leaving burning bags of shit on your doorstep.
They're both shit, but one is being advertised as the new hotness while being objectively worse.
A friend sent me the story of the LLM deleting a database during a code freeze and said "it lied when asked about it." I assert that a generative AI cannot lie. These aren't my original thoughts. But if you read Harry Frankfurt's famous essay On Bullshit (downloadable PDF here), he lays out a well-reasoned definition of bullshit. And this paragraph near the end of the essay explains why an LLM cannot lie.
It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction. A person who lies is thereby responding to the truth, and he is to that extent respectful of it. When an honest man speaks, he says only what he believes to be true; and for the liar, it is correspondingly indispensable that he consider his statements to be false. For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says. He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.
And that's a generative artificial intelligence algorithm. Whether generating video, image, text, network traffic, whatever. It has no reference to the truth and is unaware of what truth is. It just says things. Sometimes they turn out to be true. Sometimes not. But that's irrelevant to an LLM. It doesn't know.
A space for Bonfire maintainers and contributors to communicate