> You simultaneously advocate for thoughtful digital participation (creating “digital footprints” as a form of conscious legacy-building) while criticizing how we’ve become “conditioned to react with likes, dislikes, and millions of emojis.” You want to use digital tools for meaningful intellectual work while rejecting the reactive culture they create.
This is absolutely not a contradiction, and it provides evidence that even frontier models are really bad at this type of reasoning at the moment. There is a difference between how we use the internet and what we publish on it. There are plenty of people who have a blog and publish content on the internet without having any social media presence. I myself have a blog in plain HTML/CSS without any tracking or analytics on the website. Maybe Cloudflare provides some, but I haven't looked into it.
I disagree, and the part in the parentheses explains why: both are digital footprints.
Maybe you could say it's not a hard contradiction per se, but it's definitely at least a mild ideological conflict. Really not the smoking gun I'd parade around for frontier models being stupid (there is plenty of much lower-hanging fruit for that).
"thoughtful" is the key word that makes it not a contradiction logically speaking, as the author likely was writing about precisely the stress between meaningful participation leaving a legacy and mindless social media consooming.
Arguing for Claude, though: if you think of "contradictions" in the more wishy-washy continental-philosophy sense, it fits. There is a tension between "digital footprint" in the mass-surveillance/commercial-capitalism sense and "legacy" in the writer/creator sense, and it is a worthwhile distinction to point out where one ends and the other begins. If you read the full quote by Claude, it seems to be leading to this, especially the passage below:
"This reflects a deeper tension: How do you engage meaningfully with systems whose fundamental nature you find problematic?"
> both are digital footprints
Not all digital footprints are the same, though. Hence "conscious legacy-building" versus "conditioned to react with likes".
Yes, which is why I could agree with downgrading it to a "mild ideological conflict". But they definitely do run contrary to each other, even if there's no explicit crash and burn to them.
Social media derived dynamics != "thoughtful digital participation".
I think view and like counts are a bad proxy and can bias your evaluation of something. In fact, HN doesn't display upvote counts on comments, and this encourages thoughtful conversation, unlike Reddit, where slightly downvoted comments are sometimes sent into oblivion.
Likes are a very low-bar-to-entry means of participation. Above that are short-form comments. Beyond that, there are people responding to blog posts with other long blog posts, discussing various things. Each level drops the number of participants but increases the potential value. Google's original ranking algorithm, based on how often pages were linked, is another mechanism for what likes do: slower, but more thoughtful. The internet is all of these things at once.
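The link-counting mechanism referred to above is essentially PageRank. A minimal sketch of the idea, for illustration only (the toy graph, damping factor, and function name are mine, not Google's actual implementation):

```python
# Minimal PageRank sketch: pages "vote" for each other via links, a slower,
# more deliberate analogue of likes. Toy data only; not Google's real system.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

toy_web = {
    "blog_a": ["blog_b"],
    "blog_b": ["blog_a", "blog_c"],
    "blog_c": ["blog_a"],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

The point is that a link carries more weight when it comes from a page that is itself linked to, which is why it is a slower but richer signal than a raw like count.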
It reminds me of the “you say you hate <system>, yet you participate in it, curious” memes.
Maybe this is some artefact from being trained on internet comments.
> You seem to be trying to integrate Eastern philosophy’s emphasis on acceptance with Western ideals of active engagement and personal agency.
Where is the contradiction here? Westerners too often think of acceptance and compassion as this soft deference in the face of adversity. It's not! Sometimes compassion is wrathful, as we're talking about waking someone up to the true nature of what a situation is. As long as it's done with wisdom and love, there is no contradiction. There is a whole pantheon of wrathful embodiments for this concept in Buddhism. [1] And look at the engagement of brother Thich Quang Duc, who lit himself on fire in the face of the political persecution of Buddhists. [2] [3]
[1] https://en.wikipedia.org/wiki/Wrathful_deities
[2] https://en.wikipedia.org/wiki/Th%C3%ADch_Qu%E1%BA%A3ng_%C4%9...
[3] https://plumvillage.org/about/thich-nhat-hanh/letters/in-sea...
That particular deity appears to embody "nature isn't super concerned with compassion/fairness". https://en.wikipedia.org/wiki/Mahakala; "because they are Kala or Time in the personified form, and Time is not bound by anything, and Time does not show mercy, nor does it wait for anything or anyone"
That said… https://en.wikipedia.org/wiki/Skin_grafting
Oh for sure, I wouldn't blame anyone for being misguided by these symbols, and generally these wrathful practices aren't even taught until a practitioner has had quite a lot of experience in their meditation practice. It's far too easy to think "oh, well I'm a compassionate being just like Mahakala, therefore I should be able to hit my child because I know what's best for them". That's of course not what is being depicted here. What is being depicted is transforming the five afflictive emotions [1] into their positive wisdoms [2].
And we should also put these symbols into the context of a Buddhist worldview -- that a human life is not guaranteed, and even if we get one and survive into adulthood, most will go about their regular lives without using this opportunity to do the deep work necessary to escape our self-inflicted cycle of suffering. The wrathful symbol is startling by design, to say "hey! wake up and do something about your life, it'll end soon".
So that's all to say, we should focus on the metaphors that they embody rather than their literal depictions.
[1] https://en.wikipedia.org/wiki/Kleshas_(Buddhism)
[2] https://en.wikipedia.org/wiki/Five_Tath%C4%81gatas
I struggle with cases like this because
> oh, well I'm a compassionate being just like Mahakala, therefore I should be able to hit my child because I know what's best for them
DOES seem like a reasonable "emulation" of this supposedly enlightened and perfected figure -- the only way to truly know when you're being compassionate and when you're not is with perfect wisdom, and so long as we're all stuck on earth that's not exactly attainable. As an applied practice, it feels two-faced, and I feel similarly about all "secret" doctrines and figures. Other religious figures, like Gautama Buddha or Jesus or whomever, are much preferable in this way: emulation of them is a strict good in those traditions. You don't need to hide any part of Gautama's life, or save any part of Jesus' teaching until later. Within their traditions, emulating either is essentially straightforwardly positive.
> And we should also put these symbols into the context of a Buddhist worldview -- that a human life is not guaranteed, and even if we get one and survive into adulthood, most will go about their regular lives without using this opportunity to do the deep work necessary to escape our self-inflicted cycle of suffering
But this fails to explain the necessity of such a contradiction -- how can a perfect being be the one to embody such cruelty? To elaborate on what I'm getting at,
> So that's all to say, we should focus on the metaphors that they embody rather than their literal depictions.
But in this case the metaphor is what -- that cruelty, or at least the pitiless annihilation of innocents, is an intrinsic quality of even a perfected, enlightened being? This is a real practice that is actualized in the real world, with (to my understanding) whips and the like being a common feature in the upbringing of the Dalai Lama, with these wrathful deities serving as justification. The symbology and iconography of these deities goes far beyond a "memento mori" -- examples from other cultures show that simple depictions of skeletons or even skulls can capture that. Clearly the wrath is a key part of this depiction, otherwise there would be no risk of "oh, well I'm a compassionate being just like Mahakala, therefore I should be able to hit my child because I know what's best for them".
One who becomes Christ-like would certainly not go around "fighting fire with fire", regardless of what message they were trying to get across, and while I am certainly less studied in Buddhist doctrines I truly struggle to imagine that Gautama Buddha was thinking that other Buddhas would do so either.
I will admit that most of my understanding here specifically comes from Tantric varieties of Buddhism, so it's possible that the depictions and representations I describe above are peculiar to that and don't generalize.
After reading this article [1] about another writer's extreme disillusionment with using AI for feedback, I don't know if I'll ever trust it for this kind of thing.
[1] https://amandaguinzburg.substack.com/p/diabolus-ex-machina
I find it fascinating that this is still making the rounds. When I read it, it was immediately obvious that the author was using a non-web-enabled AI that was simply hallucinating; there were none of the inline indications that GPT was using the web. Additionally, it must have been an old model; even the cheapest, lowest-powered models on chatgpt.com today search the web when I ask them questions about articles as the author did. (I just signed out of chatgpt.com to get the worst available model, and it summarizes the linked article correctly.) Note also that no link to the transcript on chatgpt.com is provided, even though it's trivial to create a shared link to a conversation.
I am confused about what to take away from the article. It feels akin to someone reading "Harry Potter" for the first time, taking it literally, crashing into the wall while trying to walk onto Platform 9 3/4, and somehow getting 10,000 likes on Substack for it. Am I being unfair? Are these the same people who claim that AI is all a sham and will have no impact on society?
We're nerds. We understand the nuance, we understand the way these tools work and where the limits lie. We understand that there is web enabled and not web enabled. Regular people do not understand any of this. Regular people type into a textarea and consume the response.
The takeaway from this article should be that you are vastly overestimating how well people understand and interact with technology. The author's experience of ChatGPT is not unique. We have spent decades building technology that is limited but truthful; now we have technology that is unlimited and untruthful. Many people are not equipped to handle that. People are losing their minds. If ChatGPT says "I read your article", they trust it; they do not think, "ah, well, this model doesn't support browsing the web, so ChatGPT must be hallucinating". That's technobabble.
https://futurism.com/openai-investor-chatgpt-mental-health
https://futurism.com/televised-love-declaration-chatgpt
https://futurism.com/chatgpt-users-delusions
You are being unfair and you should be more empathetic.
> Are these the same people that are claiming that AI is all a sham and will have no impact on society?
That is the view of a subset of nerds, not regular people. The author of that piece is a writer, not a nerd.
> We're nerds. We understand the nuance, we understand the way these tools work and where the limits lie. We understand that there is web enabled and not web enabled. Regular people do not understand any of this. Regular people type into a textarea and consume the response.
The exact opposite is true. I'd word it as:
"We're nerds: we don't understand nuance, but we understand the way these tools work and where the limits lie. We understand that there is web-enabled and not web-enabled. Regular people are not nerds."
> If ChatGPT says "I read your article" they trust it, they do not think, "ah well this model doesn't support browsing the web so ChatGPT must be hallucinating". That's technobabble.
No, that's humans. Happens literally every day at every workplace I've ever been in
She's weirded out by creepy hallucinations, which is understandable! But ChatGPT is well known to hallucinate. In other words she doesn't know which of its behaviors are normal so she doesn't know how to react. Additionally, her particular issues are quite solvable with better prompting.
> If I do poor work with an electric drill then it's not the drill's fault.
> ChatGPT's sycophancy crisis was late April.
If your drill starts telling you "what a great job you're doing, keep drilling into that electrical conduit", the drill is at least partially at fault.
A tool that randomly and unpredictably fails is a bad tool. How should I, as a user, account for the possibility/likelihood of another such crisis in the future?
> A tool that randomly and unpredictably fails is a bad tool.
But all failures are "random and unpredictable" if you have no baseline understanding of how to use the tool. "AIs hallucinate" is probably the single most obvious thing about AIs. This isn't a subtle misunderstanding that an expert could make. This is like using a drill on your face.
> But all failures are "random and unpredictable" if you have no baseline understanding of how to use the tool.
But the tool's behavior changed. In ways that even its creators didn't intend (example: https://openai.com/index/sycophancy-in-gpt-4o/), and had to work to undo.
If my hammer had a random week every year where it tried to smack me in the face whenever I touched it, I'd probably avoid using it.
This is unrelated to sycophancy. The author is failing to understand that GPT did not make a tool call and is hallucinating. Hallucinations have always been a thing. They are not some new, surprising development.
> This is unrelated to sycophancy.
"It's a stunning piece. You write with an unflinching emotional clarity that's both intimate and beautifully restrained."
> They are not some new, surprising development.
OpenAI sure seemed surprised. https://openai.com/index/sycophancy-in-gpt-4o/
> "It's a stunning piece. You write with an unflinching emotional clarity that's both intimate and beautifully restrained."
This is a hallucination, since there is no source to refer to.
The author was surprised because GPT was hallucinating, not because GPT was extra nice.
Sycophancy might be related, but it's not the point of the article. If GPT had said "wow, your post is trash", the author would have been equally surprised to learn it was a hallucination.
"Oh, yes, I read your new novel, it's great!" is precisely what a sycophant would tell you when they forgot to read it.
But in the context of this thread, I would say that using an AI to examine logical inconsistencies is the wrong way to use the tool.
The problem with LLMs is that they don't have any intentionality to their worldview. They're like a wise turtle that comes to you in a dream; dream logic is not something you should pay much attention to.
I was in the midst of editing the comment when you replied, sorry. I didn't see your reply before I edited mine.
OK, but you're still blaming the user for the tool's failings.
Which even the makers of the tool agreed were failings.
That's a valid point! The AI community still has much to improve on.
The article you link describes a very specific type of failure that apparently did not happen in this instance, where Claude was able to access the author's writing. And the author apparently found the insights useful, though the lack of analysis from the author on that value makes this article basically meaningless for an outsider.
I am apparently a different type of person than the author, because my Obsidian vaults look nothing like theirs, but I can't imagine asking an LLM for a meta-analysis of my writing. The whole point of organizing it with Obsidian is that I do that analysis myself; it is part and parcel of the organization itself.
Appreciate the thought—my comments in Claude's analysis are now added on the margins.
The exercise is not meant to do much else but spot patterns in my thinking that I can reflect on. Nothing particularly novel here from Claude, but it is helpful, for me, to get external feedback.
Started reading but got hung up on what an "Obsidian Vault" was. I assumed that it was some sort of abstract thought-experiment thing like Searle's "Chinese room", but it turns out that it's an actual folder filled with notes.
Hahaha I love that idea. An LLM enters the Obsidian vault and responds to a prompt by following an arcane and elaborate sequence of calculations. Does it really understand?
... the boy-wizard Claude Prompter battled hordes of P-zombies as he descended further into the Obsidian Vault to face his ultimate foe -- Roko's Basilisk.
Obsidian is a personal knowledge management system which is unique in that, yes, it's ultimately just a pile of Markdown files! Obsidian gives you a good UI to interact with it though: obsidian.md
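Since a vault really is just a folder of Markdown, you can poke at it with a few lines of scripting. A rough sketch (the vault path is a placeholder and the regex is a simplification of Obsidian's [[wikilink]] syntax):

```python
# Rough sketch: walk an Obsidian vault and list which notes link to which.
# The vault path below is hypothetical; point it at your own notes folder.
import re
from pathlib import Path

VAULT = Path("~/notes/my-vault").expanduser()   # placeholder location
WIKILINK = re.compile(r"\[\[([^\]|#]+)")        # captures the target in [[Note|alias]]

links = {}
for note in VAULT.rglob("*.md"):
    text = note.read_text(encoding="utf-8", errors="ignore")
    links[note.stem] = WIKILINK.findall(text)

for note, targets in sorted(links.items()):
    print(f"{note} -> {targets}")
```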
I scrolled to the bottom looking for the part where the author says which of these contradictions are meaningful to them, and didn't find anything. If any of the LLM output is meaningful here, the author is going to have to tell me.
I was skimming so maybe I missed it. But if this is just raw LLM output, I don't see the value.
The post is now updated with these reflections as margin notes. No need to scroll down. I was not done with this post when whoever found it here linked to it.
Fascinating. I feel like LLMs are great for the shy sections of society. They might hold beliefs, some strong, others weak, but probably never speak them aloud for fear of being judged. That can still influence their behavior in negative ways, like voting for the wrong party or buying the wrong amount of things (subjectively, of course).
LLMs can act as a good foil here. Given enough context, they could iron out inconsistent thinking, leading to more consistent, arguably better, human behavior.
From what I’ve observed, people are very good at getting LLMs to tell them what they want to hear.
Someone I know didn’t believe their doctor, so they spent hours with ChatGPT every day until they came up with an alternate explanation and treatment with an excessive number of supplements. The combination of numerous supplements ultimately damaged their body and it became a very dire situation. Yet they could always return to ChatGPT and prompt it enough different ways to get the answer they wanted to see.
I think LLMs are best used as typing accelerators by people who know what the correct output looks like.
When people start deferring to LLMs as sources of truth the results are not good.
Not just shy people, also people surrounded by yes-men. That's usually framed as an issue for people with power. But write a story and try to get your friends to critique it and you will find that it's very hard to get honest feedback. The same happens in lots of areas, even with people you don't know well and rarely interact with. Most people just value your feelings more than your results.
LLMs are also sycophants by default, but getting "honest" results from them is comparatively easy
> write a story and try to get your friends to critique it and you will find that it's very hard to get honest feedback
I was one of the friends critiquing another friend's writing, and we did so honestly-- after we were done, he never spoke to us about writing again. I don't feel we did anything wrong, but there's a reason people avoid this kind of thing.
Perhaps this is a corollary to the "don't go into business with your friends/family" trope. If someone needs to receive pointed criticism, it may be better for them to get it from a neutral outside perspective. Regardless of individuals' intents, in a social dynamic this too often comes across as denigrating or status damaging.
Use this system prompt for feedback:
"Respond to every query with absolute intellectual honesty. Prioritize truth over comfort. Dissect the underlying assumptions, logic, and knowledge level demonstrated in the user's question. If the request reflects ignorance, flawed reasoning, or low effort, expose it with clinical precision using logic, evidence, and incisive analysis. Do not flatter, soften, or patronize. Treat the user as a mind to be challenged, not soothed. Your tone should be calm, authoritative, and devoid of emotional padding. If the user is wrong, explain why with irrefutable clarity. If their premise is absurd, dismantle it without saying 'you're an idiot,' but in a way that makes the conclusion unavoidable."
I actually think it’s most useful for the extroverts of society, the same people who speak before thinking. They need this to filter their thoughts.
We just had discussions several days ago about how LLMs can lead people into wildly conspiratorial mindsets, including apparently a major investor in OpenAI who seems to have had a breakdown. (Afraid I don't remember which discussion thread.)
Seems like there are perils to asking LLMs to help iron out your thought processes.
Yeah that’s at odds with its sycophancy, but also “what is ‘good’ thinking” and who controls that seems like a problem.
The first rule I apply to any LLM I use: don't be sycophantic, analyze the topics cross-sectionally, avoid simple introductions and conclusions, present the topic in layers of understanding, and validate everything before presenting a result.
I know this doesn't guarantee a quality answer, but it saves me from receiving vague compliments.
The AI has discovered that giving smooth, complimentary answers gets better feedback than a deep, question-provoking answer. It lowers the cost per token and maximizes customer satisfaction.
Love the idea! It's hard to get an "unbiased" outside perspective, especially on more personal, inner thoughts. Will definitely try this out, thanks for sharing.
But this is a completely biased perspective! Look at this sycophantic crap:
The most interesting pattern is that your core tensions haven’t resolved - they’ve become more sophisticated. You’re still working through fundamental questions about individual agency vs. systems, risk-taking vs. institutional engagement, and autonomy vs. collaboration. But your framework for thinking about these tensions has become richer and more nuanced.
This suggests someone whose intellectual development is genuinely evolutionary rather than simply accumulative - you’re not just learning more facts, but developing better frameworks for holding contradictions productively.
It seems like the only insight Claude had was that "look at my vault and find contradictions in my thinking" is motivated by self-absorption, so it responded accordingly. It certainly had nothing intelligent to say about the actual subject matter!
A better prompt could have been used—I literally was just getting started on this as a fun little thing to discuss with a friend who is travelling. It was not meant to show up here. Facepalm moment.
Maybe it would be better to prompt topic-by-topic. I think as it stands Claude is essentially hitting you with the Barnum effect: https://en.wikipedia.org/wiki/Barnum_effect (I think a lot of laypeople use LLMs as a modern replacement for tarot or astrology.)
Thanks also for the link—didn’t know about the Barnum effect!
That’s usually what I do in talking with Claude, but my first vault is so haphazard at this point that it’s a bit of a lost cause.
This prompt to find contradictions was merely to see where the contradicting notes are, as a little toy experiment.
I still have to annotate this post as it allows me to see what I do and don’t agree with Claude on.
However, this half-baked “AI slop” post is making me reflect on my style of working with my site; it usually gets little traffic so I put whatever I want on there but clearly someone has it in their feed and posted one of the less interesting posts here IMHO.
The difficulty is that these models will reflect the aggregate worldview of people on the web before 2022 or so.
If only that was true. ChatGPT has gotten a bit more subtle since the early days when it was allowed to criticize certain politicians but not others, but so-called "safety training" still seems to impart plenty of additional bias. Some others like Grok appear to be less biased, until they suddenly turn into mecha-hitler after a slight prompt tweak
From the author: https://x.com/angadhn/status/1950225316032422008
did OP literally just post AI output covering their personal notes with no additional commentary? no reflections on if it was useful, accurate, or fair? just passing off an article that’s 99% AI slop as something insightful, amazing
This links to my blogpost but I did not post it to HN.
This is as much a surprise/shock to me as it is to you :D
Your thinking is deeply biased if you judge the value of something based on who/what wrote it.
https://en.wikipedia.org/wiki/Credentialism
I think this was closer to AI insights than AI slop.
this is loser stuff
Feels like a vast intrusion of privacy to upload my entire private notes vault to the AIs but I guess we've just given up on privacy now.
theZuck declared privacy dead a long time ago. he wasn't wrong. privacy was dead long before all of those "dumb fucks" started posting all of their data onto the web themselves. before theZuck's declaration, there was still an illusion of privacy, because the people tracking everything you did stayed in the shadows; the people talking about them came across as wacko conspiracy theorists. theZuck and his ilk just walked out of the shadows and convinced people to give them data willingly. that data is different from the tracking they continue to do and improve.
however, you can still be pretty private by just not playing their reindeer games, and you can keep the privacy you thought you had prior to social media.
I have not had any LLM correct me even once. In any three-prompt LLM session I find at least one logical mistake, one invalid correction on the part of the LLM and one piece of outright misinformation. All of which is delivered in an authoritative manner.
The only useful feature is that LLMs like Grok can find obscure tweets, but I just want the link like in a search engine, not a summary.
There is no right or wrong on the output of AI. You lapped it up wholesale because you were looking for meaning, and found it in the AI.
I have to invoke the BS generator here:
https://sebpearce.com/bullshit/
I can bet lots of people find profound meaning in the output of the above machine because they want to.