Discussion
Hacker News
@h4ckernews@mastodon.social · 5 days ago

The unexpected effectiveness of one-shot decompilation with Claude

https://blog.chrislewis.au/the-unexpected-effectiveness-of-one-shot-decompilation-with-claude/

#HackerNews #oneShotDecompilation #Claude #AI #effectiveness #techNews #programming #insights

devSJR :python: :rstats: boosted
Caspar Fairhall
@caspar@hachyderm.io · last week

New MIT study: “Over four months, LLM users consistently underperformed at neural, linguistic, and behavioural levels.” — what’s the betting that your ability to code is eroded by AI use as well?

I mainly use LLMs as a kind of interactive documentation, and never for producing code. That’s mainly because even tools such as Claude Code have coding habits I despise. But keeping my coding brain sharp seems like another good reason to be cautious.

https://arxiv.org/abs/2506.08872

#ai #llm #brainrot #aibrainrot #claude #chatgpt

arXiv.org

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.
Hacker News
@h4ckernews@mastodon.social · last week

Claude Opus 4.5's Soul Document

https://simonwillison.net/2025/Dec/2/claude-soul-document/

#HackerNews #Claude #4.5 #Opus #Soul #Document #AI #Technology #Innovation #Future #Insights

Esther Payne :bisexual_flag: boosted
Aaron
@hosford42@techhub.social · 2 weeks ago

If you want a specific example of why many researchers in machine learning and natural language processing find the idea that LLMs like ChatGPT or Claude are "intelligent" or "conscious" laughable, this article describes one:

https://news.mit.edu/2025/shortcoming-makes-llms-less-reliable-1126

#LLM
#ChatGPT
#Claude
#MachineLearning
#NaturalLanguageProcessing
#ML
#AI
#NLP

Hacker News
@h4ckernews@mastodon.social · 2 weeks ago

Writing a Good Claude.md

https://www.humanlayer.dev/blog/writing-a-good-claude-md

#HackerNews #Writing #a #Good #Claude.md #humanlayerblog #markdown #tips

Erik Jonker
@ErikJonker@mastodon.social · 2 weeks ago

There is no AI "winter", I think: Anthropic has now also released a new model, with a clear focus on coding.
https://www.anthropic.com/news/claude-opus-4-5?utm_source=www.therundown.ai&utm_medium=newsletter&utm_campaign=anthropic-enters-the-frontier-ai-fight&_bhlid=0db63f837ff0d35dde2705f7c151ac3b036b002e
#anthropic #claude #opus45

Anthropic Claude Opus 4.5
:mastodon: Mike Amundsen
@mamund@mastodon.social · 2 weeks ago

@Roundtrip

I think your analogy about riding a horse that helps you go farther/faster is a good one.

Ben Shneiderman's book "Human-Centered AI" puts forward a similar premise: that AI helps us create "supertools" that amplify and extend our skills and knowledge.

Greg Lloyd
@Roundtrip@federate.social replied · 2 weeks ago

@mamund

Yes! I prompt #Claude with vague hints or memories—anything to add context to a request, or guide it. A very ‘horsey’ feel.

Its Memory plus #LLM context handles anaphoric references really well: I added my iPhone’s model number, phone number, and current iOS release to Memory. Now I can say ‘my iPhone’, and Claude will use the relevant details in a question about a tricky feature or a symptom of a bug.

Casual or sloppy questions often get precise first answers—with clickable sources.

Hacker News
@h4ckernews@mastodon.social · 2 weeks ago

Claude Advanced Tool Use

https://www.anthropic.com/engineering/advanced-tool-use

#HackerNews #Claude #Advanced #Tool #Use #artificialintelligence #advancedtooluse #machinelearning #innovation #technology

spla
@spla@mastodont.cat · 2 weeks ago

I asked a new AI to generate the code for a specific feature I don't know how to program myself and, surprise, it worked on the first try.
I had read that #Claude was the best at generating code, and it seems that's true.

Esparta :ruby: boosted
Tim Bray
@timbray@cosocial.ca · 3 weeks ago

In which Nick Radcliffe goes very deep for a month with Claude Code and reports back. I’m convinced by some but not all of what he says, and found the whole thing a stimulating read: https://checkeagle.com/checklists/njr/a-month-of-chat-oriented-programming/

#genAI #claude

Djoerd Hiemstra 🍉 boosted
Arie van Deursen
@avandeursen@mastodon.acm.org · 4 weeks ago

“Anthropic's paper (‘Disrupting the first reported AI-orchestrated cyber espionage campaign’) smells a lot like bullshit”

> […] is it very likely that Threat Actors are using these Agents with bad intentions […]. But this report does not meet the standard of publishing for serious companies. You cannot just claim things and not back it up in any way, and we cannot as an industry accept that it’s OK for companies to release this.

https://djnn.sh/posts/anthropic-s-paper-smells-like-bullshit/

#anthropic #claude #cybersecurity

Maddie (Nicole) T. 🏳️‍⚧️ 🌈
@nicole@tietz.social · 4 weeks ago

@Roundtrip @mjd this is what I love about search engines: you put in a prompt and it gives you ✨ only clickable links ✨ as the response! and you don't need to do any prompt engineering. using Kagi, you can also exclude sites from all your results

Greg Lloyd
@Roundtrip@federate.social replied · 4 weeks ago

@nicole @mjd

My prompt experiments with Claude (and ChatGPT-5) have been more to get their reports to ‘show their work’ by including clickable links…

Here’s a thread on getting a research report to help fix broken links in an old blog post, and dive deeper to find original sourced Neil Armstrong quotes in a NASA debrief transcript I knew must exist, but couldn’t find. https://federate.social/@Roundtrip/115062497251838137

#LLM #Claude #gpt5

Mark Dominus
@mjd@mathstodon.xyz · 4 weeks ago

@nicole has pointed out:

1. Conventional search engines do at least as well as Claude did this time.
2. Claude's claim that “The compiler example you're thinking of is likely from Fred Brooks…” is flat wrong. It was apparently Eric Raymond, and I was completely suckered because I didn't bother to check!

(This time I did check. It is _not_ in _Mythical Man-Month_.)

Thanks, Nicole!

Greg Lloyd
@Roundtrip@federate.social replied · 4 weeks ago

@mjd @nicole

Nice catch and reminder!

I started using #Claude seriously this summer, so I am only an egg.

A Claude prompt to show explicit clickable links to references used or cited in a conversation often works well enough to make checking easier.

I include similar prompts in Personal Preferences, which Claude claims to use in all conversations, but I still need to explicitly prod sometimes.

Just added:

“Do not search, use, or trust grokipedia.com”

I hope that works.

Hacker News
@h4ckernews@mastodon.social · last month

LLM Onestop – Access ChatGPT, Claude, Gemini, and more in one interface

https://www.llmonestop.com

#HackerNews #LLMOnestop #ChatGPT #Claude #Gemini #AItools

Greg Lloyd
@Roundtrip@federate.social · last month

@cote 🧵ChatBot Personal Preferences

What Claude calls “Personal Preferences” — a standard prompt that applies to all conversations — makes a big difference when configuring #chatbot behavior. Here’s my 4 Nov 25 prompt, intended to make it easy to scan references and manage sources:

#chatbot #llm #claude #prompt

Screenshot: What Claude calls “Personal Preferences” — a standard prompt for all conversations — also makes a big difference in configuring #chatbot behavior. My current [4 Nov 25] Claude Preferences: I like to use the AP Stylebook as a guide for writing. References to specific sources cited should always include clickable links when possible. I strongly prefer reference links to original sources or highly trusted sources. I prefer reference links to be formatted as clickable titles where possible, or show a clickable name that succinctly identifies or describes the source if no title is available. If reading an http: URL fails, try again using https: instead. Do not search, use, or trust https://grokipedia.com
Hacker News
@h4ckernews@mastodon.social · last month

How I use every Claude Code feature

https://blog.sshh.io/p/how-i-use-every-claude-code-feature

#HackerNews #How #I #use #every #Claude #Code #feature #ClaudeCode #Features #Productivity #Tips #Tech #Blog #AI #Tools

How I Use Every Claude Code Feature

A brain dump of all the ways I've been using Claude Code.