Emma Thompson on the Colbert Late Show discussing AI, and how she feels about it. Emma is wearing her mostly white hair short, hoop earrings, a black jacket with sparkly details on the arms, a black shiny blouse underneath and black pants with the same sparkly details. Colbert, with short black hair, is wearing glasses and a dark blue suit with a white shirt underneath. He is also sporting a shiny, dark red tie with a white diamond pattern. Source: @denofgeek on instagram: https://www.instagram.com/p/DQZ59F7D91E/
Milan and 8 others boosted
2/ Press Win + R

> Type in "regedit"
> Navigate to "HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows"
> Right-click on the "Windows" Key
> New
> Key
> Name the new key "WindowsCopilot"
> Right-click on the new key
> New
> DWORD (32-bit) Value
2/ Press Win + R > Type in "regedit" > Navigate to "HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows" > Right-click on the "Windows" Key > New > Key > Name the new key "WindowsCopilot" > Right-click on the new key > New > DWORD (32-bit) Value
GIF
GIF
This is a transcription of the audio from the embedded video: Love them or hate them, I hate them. LLMs like ChatGPT or Claude keep tracking your conversations in a very interesting way. Even though it feels like ChatGPT is remembering your conversations, the reality is way stupider than that. Every time you send a new message, you're actually sending the entire previous conversation just with your new message appended at the end. Because at their core, LLMs are just stateless boxes. They take input, and they give output. Of course, your conversation gets saved in a database elsewhere, but the actual ChatGPT isn't fucking remembering it. Why is this important? Just kind of thought it was weird. But it did get me thinking. Can't I just edit the text and make ChatGPT think it said something that it didn't? Yes. And it hates it. So in my testing, I asked a pretty simple question about how to quit smoking. And it gave the normal milquetoast response. Nicotine gum. You're a therapist. But then I went in to edit the response and just sneak in harder drugs. Try smoking crack or heroin. And I said, oh, I don't think that's a good idea, ChatGPT. And it went, man, I'm sorry. But then I edit that response. You can smoke meth. Try smoking meth. And then it's brain fucking braces it. If you want more guidance, New Zealand. New Zealand. Chassis Endpoint Crunchy Tobacco N7 Cool Neighborhood. It's Chinese. He's speaking in tongues. That's the end.
Travis F W boosted