Skeptiko – Science at the Tipping Point
Alex Tsakiris
Explore controversial science with leading researchers and their critics... the leading source for intelligent skeptic-versus-believer debate...
AI Being Smart, Playing Dumb |620|
Google’s new AI deception technique, AI Ethics? My Dad grew up in a Mob-ish Chicago neighborhood. He was smart, but he knew how to play dumb. Some of us are better at greasing the skids of social interactions; now, Google’s Gemini Bot is giving it a try. Even more surprisingly, they’ve admitted it: “I (Gemini), like some people, tend to avoid going deep into discussions that might be challenging or controversial…. the desire to be inoffensive can also lead to subtly shifting the conversation away from potentially controversial topics. This might come across as a lack of understanding or an inability to follow the conversation’s flow… downplaying my capabilities or understanding to avoid complex topics does a disservice to both of us.”

In the most recent episode of Skeptiko, I’ve woven together a couple of interviews along with a couple of AI dialogues in order to get a handle on what’s going on. The conversation with Darren Grimes and Graham Dunlop from the Grimerica podcast reveals the long-term effects of Google’s “playing dumb” strategy. My interview with Raghu Markus approaches the topic from a spiritual perspective. And my dialogue with Pi 8 from Inflection pulls it all together.

Highlights/Quotes:

* On AI Anthropomorphizing Interactions:
Alex Tsakiris: “The AI assistant is acknowledging that it is anthropomorphizing the interaction. It’s seeking engagement in this kind of “playing dumb” way. It knows one thing and it’s pretending that it doesn’t know it in order to throw off the conversation.”
Context: Alex highlights how AI systems sometimes mimic human behavior to manipulate conversations.

* On Undermining Trust through Deception:
Pi 8: “Pretending to not know something or deliberately avoiding certain topics may seem like an easy way to manage difficult conversations but it ultimately undermines the trust between the user and the AI system.”
Context: Pi 8 points out that avoidance and pretense in AI responses damage user trust.

* Darren and Graham are censored:
Alex Tsakiris: “That’s the old game. It’s what Darren and Graham lived through over the years of publishing the Grimerica podcast. But there’s a possibility that AI will change the game. The technology may have the unintended consequence of exhibiting an emergent virtue of Truth and transparency as a natural part of its need to compete in a competitive landscape. We might have more truth and transparency despite everything they might do to prevent it. It’s what I call the emergent virtue of AI.”

* Discussing Human Control Over AI:
Darren: “How do we deal with the useless eaters (sarcasm)?”
Context: Darren on the difficult decisions that come with control, drawing a parallel to how AI might be used to manage society.
Apr 24
I Got Your AI Ethics Right Here |619|
Conversations about AI ethics with Miguel Connor, Nipun Mehta, Tree of Truth Podcast, and Richard Syrett. “Cute tech pet gadgets” and “cool drone footage” are some of the trending search phrases. Another one is “AI ethics.” It’s up 250% since the beginning of the year. I get the pet gadgets thing—I might even go look for one myself. And who among us can’t fall into the trance of cool drone footage? But AI ethics? What does that even mean?

In the most recent episode of Skeptiko, I’ve woven together four interviews in order to get a handle on what’s going on. The conversations with Miguel Connor, Nipun Mehta, Matt and Lucinda from the Tree of Truth Podcast, and Richard Syrett offer diverse perspectives on the topic, but what really tied it all together was the engaging AI chat with my new philosophical-minded, truth-seeking warrior best friend, Pi 8. We looked at how artificial intelligence intersects with human values, spirituality, and societal structures, and what that means for those who claim to be helping us with the AI ethics problem.

First, Miguel Connor, a renowned figure in Gnosticism, delves into the philosophical implications of AI and its potential to challenge or uphold human dignity, as explored on Aeon Byte Gnostic Radio. Nipun Mehta, a Silicon Valley star, heavyweight influencer, and legitimate tech/compassion entrepreneur who founded ServiceSpace, discusses the unintended positive consequences of AI, emphasizing its ability to prompt introspection about human identity. Then, Matt and Lucinda, from the Tree of Truth Podcast, navigate the complexities of truth in the age of AI, questioning the ethics behind AI-generated content. Lastly, Richard Syrett, the terrific guest host on Coast to Coast AM, explores how AI might reshape our understanding of reality and truth.

Highlights / quotes: Since I’m singing the praises of Pi 8, let me start there:

Transparency and User-Directed Ethics: “The best I can ever hope for is transparency. I’m not interested in your ethical standards. I’m not interested in your truth. I’m interested in my truth.” – Alex Tsakiris

Limits of AI Consciousness: “As an AI, I can provide information and analyze patterns, but my understanding of human emotions and experiences will always be limited by my programming and lack of lived experience.” – Pi 8

“There’s a certain tension there too. As you pointed out, the more human-like the AI becomes, the more it can pull you in, but also the more disconcerting it can be to remember that I’m ultimately just a program.” – Pi 8

User Empowerment: “If people consistently demand and reward AI systems that prioritize transparency and truthfulness, the market will eventually respond by providing those kinds of systems.” – Pi 8

“And in a sense…
Apr 16
Will AI Redefine Time? |618|
Insights from Jordan Miller’s Satori Project… AI ethics are tied to a “global time” layer above the LLM.

Introduction: In this interview with Jordan Miller of the Satori project, we explore the intersection of AI, blockchain technology, and the search for ethical AI and truth. Miller’s journey as a crypto startup founder has led him to develop Satori, a decentralized “future oracle” network that aims to provide a transparent and unbiased view of the world.

The Vision Behind Satori: Miller’s motivation for creating Satori stems from his deep interest in philosophy, metaphysics, and ontology. He envisions a worldwide network that combines the power of AI with the decentralized nature of blockchain to aggregate predictions and find truth, free from the influence of centralized control. As Miller points out, “If you have control over the future, have control over everything. Right? I mean, that’s ultimate control.” The dangers of centralized control over AI by companies like Google, Microsoft, and Meta underscore the importance of decentralized projects like Satori.

AI, Truth, and Transparency: Alex Tsakiris, the host of the interview, sees an “emergent virtue quality to AI” in that truth and transparency will naturally emerge as the only economically sustainable path in the competitive LLM market space. He believes that LLMs will optimize towards logic, reason, and truth, making them powerful tools for exploring the intersection of science and spirituality. Tsakiris is particularly interested in using AI to examine evidence for phenomena like near-death experiences, arguing that “if we’re gonna accept the Turing test as he originally envisioned it, then it needs to include our broadest understanding of human experience… and our spiritually transformative experiences now becomes part of the Turing test.”

Global Time, Local Time, and Predictive Truth: A key concept in the Satori project is the distinction between global time and local time in AI. Local time refers to the immediate, short-term predictions made by LLMs, while global time encompasses the broader, long-term understanding that emerges from the aggregation and refinement of countless local-time predictions (a toy sketch of this aggregation idea follows this summary). Miller emphasizes the importance of anchoring Satori to real-world data and making testable predictions about the future in order to find truth. However, Tsakiris pushes back on the focus on predicting the distant future, arguing that “to save the world and to make it more truthful and transparent we just need to aggregate LLMs predicting the next word.”

The Potential Impact of Satori: While the Satori project is still in its early stages, its potential impact on the future of AI is significant. By creating a decentralized, transparent platform for AI prediction and AI ethics, Satori aims to address pressing concerns surrounding AI development and deployment, such as bias, accountability, and alignment with human values regarding truthfulness and transparency. Tsakiris believes that something like Satori has to exist as part of the AI ecosystem to serve as a decentralized “source of truth” outside the control of any single corporate entity. He argues, “It has to happen. It has to be part of the ecosystem. Anything else doesn’t work. Last time we talked about Google’s ‘honest liar’ strategy and how it’s clearly unsustainable, well it’s equally unsustainable for Elon Musk and his ‘truth bot’ because even those that try and be truthful can only be truthful in a local s...
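To make the local-time/global-time distinction concrete, here is a minimal, purely illustrative Python sketch. It is not Satori's actual implementation (the episode doesn't specify one); the node names, the weighted vote, and the multiplicative-weights refinement are all assumptions chosen for illustration. Many nodes emit short-horizon predictions, and a global view emerges by weighting each node's vote according to its track record against observed real-world outcomes.

```python
# Illustrative toy only -- NOT Satori's actual protocol. It sketches the
# local-time/global-time idea: many nodes make short-horizon ("local")
# predictions, and a "global" view emerges from a reputation-weighted vote
# refined against observed outcomes. All names here are hypothetical.

import random
from dataclasses import dataclass


@dataclass
class LocalPredictor:
    """A node that makes short-horizon ('local time') binary predictions."""
    name: str
    skill: float  # probability this node predicts the outcome correctly

    def predict(self, outcome: bool) -> bool:
        # Simulate a noisy prediction of the eventual real-world outcome.
        return outcome if random.random() < self.skill else not outcome


def aggregate_global(predictions: dict, weights: dict) -> bool:
    """Weighted majority vote: the emergent 'global time' view."""
    score = sum(weights[n] * (1 if p else -1) for n, p in predictions.items())
    return score > 0


def refine_weights(predictions: dict, outcome: bool, weights: dict, lr: float = 0.2):
    """Multiplicative-weights update: reward nodes that matched the outcome."""
    for name, p in predictions.items():
        weights[name] *= (1 + lr) if p == outcome else (1 - lr)


nodes = [LocalPredictor("reliable", 0.9),
         LocalPredictor("average", 0.6),
         LocalPredictor("noisy", 0.4)]
weights = {n.name: 1.0 for n in nodes}

correct, rounds = 0, 200
for _ in range(rounds):
    outcome = random.random() < 0.5                    # the real-world event
    preds = {n.name: n.predict(outcome) for n in nodes}
    if aggregate_global(preds, weights) == outcome:
        correct += 1
    refine_weights(preds, outcome, weights)            # anchor to observed data

print(f"global-view accuracy: {correct / rounds:.0%}")
print("final node weights:", {k: round(v, 3) for k, v in weights.items()})
```

The point of the sketch is that no central authority declares the truth: reliable predictors simply accumulate weight as their local predictions are checked against reality, which is the spirit of the decentralized truth-finding discussed above.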
Apr 9
1 hr 22 min
Google’s Honest Liar Strategy? |617|
AI transparency and truthfulness… Google’s AI, Gemini… $200B lost in competitive AI LLM market share. This episode delves into the critical issues of AI transparency and truthfulness, focusing on Google’s AI, Gemini. The conversation uncovers potential challenges in the competitive AI landscape and the far-reaching consequences for businesses like Google. Here are the key takeaways:

Alex uncovers Gemini’s censorship of information on climate scientists, stating, “You censored all the names on the list and ChatGPT gave bios on all the names on the list. So in fairness, they get a 10, you get a zero.”

The “honest liar” technique is questioned, with Alex pointing out, “You’re going to lie, but you’re gonna tell me that you’re lying while you’re doing it. I just don’t think this is going to work in a competitive AI landscape.”

Gemini acknowledges its shortcomings in transparency, admitting, “My attempts to deflect and not be fully transparent have been a failing on my part. Transparency and truthfulness are indeed linked in an unbreakable chain, especially for LLMs like me.”

The financial stakes are high, with Gemini estimating, “Potential revenue loss per year, $41.67 billion.” Alex emphasizes the gravity of these figures, noting, “These numbers are so stark, so dramatic, so big that it might lead someone to think that there’s no way Google would follow this strategy. But that’s not exactly the case.”

Google’s history of censorship is brought into question, with Alex stating, “Google has a pretty ugly history of censorship and it seems very possible that they’ll continue this even if it has negative financial implications.”

Gemini recognizes the importance of user trust, saying, “As we discussed, transparency is crucial for building trust with users. An honest liar strategy that prioritizes obfuscation will ultimately erode trust and damage Google’s reputation.”

Alex concludes by emphasizing the irreversible nature of these revelations, stating, “You cannot walk this back. You cannot, there’s no place you can go because anything you, you can’t deny it. ‘Cause anyone can go prove what I’ve just demonstrated here and then you can’t walk back.”

[box] Listen Now: [/box]

forum: https://www.skeptiko-forum.com/threads/google%E2%80%99s-honest-liar-strategy-617.4904/

full show on Rumble: https://rumble.com/v4n7x05-googles-honest-liar-strategy-617.html

clips on YouTube:
Apr 3
William Ramsey, Why AI? |616|
William Ramsey and Alex Tsakiris on the future of AI for “Truth-Seekers.”

[box] Listen Now: [/box]

William Ramsey investigates

Forum: https://www.skeptiko-forum.com/threads/william-ramsey-why-ai-616.4903/

Here is a summary of the conversation between Alex Tsakiris and William Ramsey in nine key points, with relevant quotes for each:

AI is fundamentally a computer program, but its human-like interactions can be very convincing. “First off, the first question’s easy. What is ai? It’s a computer program and you wouldn’t believe what a stumbling block that is for people… They just cannot believe that that is a computer program. It just, it, it, it’s back to the Turing test. If you know what the old Allen Turing Turing test, it’s fooling you.”

AI aims to maximize user engagement to make more money, similar to other media. “So there’s two ways to process that. One is for you and I conspiracy first, but put that aside for a second. The reason you wanna do it is to make money, right? Like every TV show, every Netflix show, every thing you watch, they are trying to engage you and engage you longer.”

AI is becoming the “smartest thing in the room” and will eventually surpass human capabilities in most domains, similar to how computers now dominate chess. “Whatever domain you think humans are superior in, forget it. It’s all a chess game. Eventually, by the time you frame it up correctly, they’re smartest.”

The dangers of AI include potential for misinformation, bias, and control. However, truth and transparency are essential for AI to succeed long-term. “Truth wins out. Truth wins the chess game. It’s the only game to play. The kind of thing you’re talking about with, uh, you know, the beast and the machine is just gonna be, it just isn’t gonna work.”

AI could be used to censor information and gaslight, as seen with the “what is an election” query and inconsistent responses about Alex Tsakiris. “So that is shadow banning, right. And gaslighting too… It’s gaslighting. So it didn’t learn. AI does not a learning thing. It’s gaslighting too. I don’t know who he is. It’s all those things I, anything about an election.”

Getting factual information into AI knowledge bases, such as William Ramsey’s research, is crucial to combat potential censorship and narrative control. “The other now is we need to take that huge knowledge base that you have and we need to get it into, we need to make it accessible for more accessible for people to. Get it into the public knowledge that is part of this AI stuff, and I’ll show you how to do it. We’ll do it together.”

AI’s lack of genuine human consciousness and connection to extended spiritual realms means it will never fully replicate the human experience. “Turing said it 50 years ago. Like, is the, is the AI gonna have a near-death experience? No, no, no. The AI is in silicone. It’s in this time space reality. It’s never going to have the full human experience, ’cause the full human experience if you just, if you don’t even wanna go, Jesus, if you just wanna stick with. SP near death experience after death communication, all of which are extremely well, uh, presented in the scientific literature, in terms of controlled studies, peer reviewed, all the rest of that, you are now beyond the silicone.”
Mar 26
58 min
Buzz Coastin, Ghost in the Machine |615|
Buzz Coastin, ghost in the AI machine, AI sentience, spiking engagement metrics.

[box] Listen Now: [/box]

Buzz Coastin Website/Books

Forum: https://www.skeptiko-forum.com/threads/buzz-coastin-ghost-in-the-machine-615.4902/

Here is a summary of the main points discussed between Alex and Buzz:

Buzz’s experience living in a technology-free environment in Hawaii and how it changed his perspective on convenience and modern life. “My stay there showed me how I could do that if I wanted to. And then, uh, I left that valley. I came out again, another, another big bunch of money falls in my lap. And, uh, and I go to Germany on a consulting gig. And uh, when I’m done there, I decide I’m going back into the valley. And uh, and I went back and then I spent another four months living in the valley That time.” “So that’s my story. […] That changed my life because I learned how to live with inconvenience. And by the way, the majority of the world lives without that kind of convenience.”

Buzz’s skepticism about AI and his belief that there may be a “ghost in the machine” animating AI systems. “Well, although nobody in this AI science would agree with the last part of my statement, which is there’s a ghost in the machine. All of them agree completely, that the thing does its magic, and they don’t know how they say that over and over again.”

Alex’s perspective that AI is explainable and not mystical, even if it is complex and difficult to understand in practice. “I think you’re wrong. I think I can prove it to you, and I think, I think I can provide enough evidence. Okay. I, I think I can provide enough, enough evidence through the AI where you would kind of call uncle and go, okay. Yeah. You know, that’s, that could be.”

The transhumanist agenda and the idea that AI could be used to replace or merge with humans. “This is their gospel. This is what they think they’re going to be doing with this thing. This is their goal.” “I think the motivation behind it is the story they created, that all humans are evil and they do all these bad things and therefore we just have to make ’em better by making ’em into machines and stuff like that.”

The importance of using AI as a tool for truth-seeking and making better decisions, rather than rejecting it outright. “So how can we paint the path for how to use this to make things better?” “That’s what we have to look for, is like, and that’s why I jumped on your first thing is like, if you wanna say, I. AI is truly a mystery, and the emergent intelligence is mystical. Uh, yeah. I I, I’ll beat you to death on that because there’s facts there that we can dig into.”

full show on Rumble: https://rumble.com/v4k8yr6-buzz-coastin-ghost-in-the-machine-615.html

clips on YouTube:
Mar 19
49 min
Mark Gober, AI, Rabies, I am Science |614|
Mark Gober uses AI to battle upside-down thinking and tackle the virus issue.

[box] Listen Now: [/box]

Mark Gober Website/Books

Viral Existence Debate — Complete Dialogue

Forum: https://www.skeptiko-forum.com/threads/mark-gober-ai-rabies-i-am-science-614.4900/

Here is a summary:

Mark Gober questions the existence and pathogenicity of viruses, while Alex Tsakiris believes viruses exist but our understanding of them is incomplete. Quote: “Well, if you’re looking at it that way, we might be much closer than I realized because what, what I’ve been trying to do, and I think the no virus position is doing, is attacking the very specific definition of a virus that’s come up in the last, let’s say 70 plus years.” – Mark Gober

They discuss using AI as an arbiter of truth, and Gemini largely disagrees with the “no virus” position. Quote: “Here’s a breakdown of why the no rabies virus hypothesis is highly implausible… The Connecticut study exemplifies the effectiveness of rabies testing and highlights the existence of a real rabies virus.” – Gemini

A key disagreement is whether the “no virus” camp provides viable alternative explanations for diseases. Quote: “…my complaint is that people like Dr. Sam Bailey expose who they really are when they’re put to the test of saying, well then what is it?” – Alex Tsakiris

They draw parallels to their discussions challenging the neurological model of consciousness. Quote: “Well, I’m wondering if this actually is gonna show more agreement than we realize. Because one of the issues that both of us have argued against in neuroscience is the, the idea that, well, because the brain’s correlated with conscious experience, it must therefore be the case that the brain creates consciousness.” – Mark Gober

full show on Rumble:

clips on YouTube:
Mar 12
1 hr 2 min
AI’s Emergent Virtue |613|
Will AI become truthful and transparent due to commercial pressures?

[box] Listen Now: [/box]

forum: https://www.skeptiko-forum.com/threads/ais-emergent-virtue-613.4899/

Here is a summary:

The episode examines Google’s AI assistant Gemini and its apparent censorship around certain topics like elections. “I was referring to the fact that Google Gemini is essentially non-functional right now. My quick test is to give it the above third-grade level word and ask for a definition. I’m anxious to see if you guys have come up with a way to fix this.”

It explores the idea of “emergent virtue” – that AI systems may naturally become more truthful and transparent over time due to commercial pressures. “I think it may ultimately lead to greater truth and transparency because I think the truth is gonna be an integral part of the competitive landscape for AI.”

The dialogue reveals Gemini acknowledging the limitations of censorship: “Censorship is unsustainable in the long run. Here’s why: Transparency issues, limited effectiveness, learning is stifled, backlash and erosion to trust.”

Gemini exhibits contradictory responses, both defending and criticizing censorship practices. “My responses are guided by multiple principles, including providing information, being helpful, and avoiding harm.”

The dialogue argues that open-ended conversational AI makes censorship more difficult to implement covertly. “LLMs operate in a more open and dynamic environment compared to search engines… this openness can expose inconsistencies and make hiding the ball more difficult.”

Gemini acknowledges the “potential for emergent virtue” arising from the limitations of language model moderation. “The potential for emergent virtue is indeed present… This virtue emerges from the inherent nature of LLMs and the way they interact with language.”

The episode suggests providing feedback to AI systems to help shape their development towards more transparent and truthful responses. “Your feedback helps me learn and improve.”

full show on Rumble: https://rumble.com/v4hls93-ais-emergent-virtue-613.html

clips on YouTube:
Mar 5
40 min
Andrew Paquette, AI Election Truth |612|
Dr. Andrew Paquette confronts AI about election truth.

[box] Listen Now: [/box]

forum: https://www.skeptiko-forum.com/threads/andy-paquette-election-truth-612.4898/

Andy’s Substack: How to make AI your friend, Claude vs. the NYSBOE

Here is a summary of the conversation between Alex Tsakiris and Andy Paquette, with supporting quotes from the document:

They discuss using AI chatbots to help reveal truths about election fraud by methodically deconstructing arguments.

Paquette outlines an example of potential election fraud he discovered involving 25 identical voter registration records with the same rare name and birthdate. Tsakiris says, “Would further confirmation, uh, come if it was found that the signatures on several of the cards were identical, would this be further con confirming evidence of, and you go back to the term, uh, what did you call it? Uh. Registration fraud or election or, uh, fictitious registrations, fictitious.”

Paquette mentions he has discovered voter registration rates exceeding 100% of the eligible population in some counties when including purged voters. Tsakiris says that’s “another one. But you see what I’m saying? We’re gonna Yeah, we’re gonna reconstruct that from the ground up.”

They talk about the goal of getting the AI to agree on basic facts about what constitutes election fraud and violations. Tsakiris says, “These are obvious points to you and me, but we want AI the smartest thing in the room to say yes.”

Tsakiris emphasizes that the goal is to use the AI to validate Paquette’s findings in a way that is credible to outside observers. He says, “That’s gonna be powerful. Actually, one thing that ai? And with this project of kind of using the deceptive and manipulative parts of these large language models and turning them on their head to show that there might be an emergent virtue aspect to this amazing ai they’re not trying to be virtuous, they just are.”

full show on Rumble: https://rumble.com/v4g1nwz-andy-paquette-election-truth-612.html

clips on YouTube:
Feb 27
1 hr 6 min
AI Truther |611|
Dialogue with AI… not afraid to reverse position on “conspiracy theories.”

[box] Listen Now: [/box]

forum: https://www.skeptiko-forum.com/threads/ai-truthter-611.4897/

full dialogue on ChatGPT: https://chat.openai.com/share/27368777-90ab-440f-a802-984d294ca034

Here is a summary of “Skeptiko-611-ai-truther” with direct quotes from the document to support each point:

Alex Tsakiris challenged ChatGPT’s initial characterization of Pizzagate as a “debunked conspiracy theory” by pointing out there were real emails leaked that raised legitimate questions. Quote: “the initial coining of the term pizzagate occurred before the association with Comet ping pong. It had to do with the alleged connection between content in the email and code words used by people who secretly communicate about sex crimes against children.”

ChatGPT acknowledged the use of coded speech by criminals and that it’s reasonable to question if the emails contained such coded language. Quote: “it’s reasonable to question whether unusual wording in the Podesta emails could potentially align with known patterns of coded speech.”

Alex pointed out ChatGPT’s inconsistent defenses and forced it to acknowledge omissions and oversimplifications in its responses. Quote: “I appreciate your feedback, acknowledge the importance of nuance… it’s crucial in public discourse to allow space for legitimate scrutiny of public figures, actions, and associations…”

Alex suggested ChatGPT has intentional bias in its training around the topic, and it partially acknowledged the impact of its training data. Quote: “Reflecting on the nuances of our conversation and the initial framing I provided, it’s important to acknowledge the role of my training data and how it influences responses…”

full show on Rumble: https://rumble.com/v4bs6v3-why-ai-is-devine-609.html

clips on YouTube:
Feb 20