December 27, 2019
[This is the text of a talk I gave to the Irish Law Reform Commission Annual Conference in Dublin on the 13th of November 2018. You can listen to an audio version of this lecture here or using the embedded player above.]

In the mid-19th century, a set of laws was created to address the menace that newly-invented automobiles and locomotives posed to other road users. One of the first such laws was the English Locomotive Act 1865, which subsequently became known as the ‘Red Flag Act’. Under this act, any user of a self-propelled vehicle had to ensure that at least two people were employed to manage the vehicle and that one of these persons:

“while any locomotive is in motion, shall precede such locomotive on foot by not less than sixty yards, and shall carry a red flag constantly displayed, and shall warn the riders and drivers of horses of the approach of such locomotives…”

The motive behind this law was commendable. Automobiles did pose a new threat to other, more vulnerable, road users. But to modern eyes the law was also, clearly, ridiculous. To suggest that every car should be preceded by a pedestrian waving a red flag would seem to defeat the point of having a car: the whole idea is that it is faster and more efficient than walking. The ridiculous nature of the law eventually became apparent to its creators and all such laws were repealed in the 1890s, approximately 30 years after their introduction.[1]

The story of the Red Flag laws shows that legal systems often get new and emerging technologies badly wrong. By focusing on the obvious or immediate risks, the law can neglect the long-term benefits and costs. I mention all this by way of warning. As I understand it, it has been over 20 years since the Law Reform Commission considered the legal challenges around privacy and surveillance. A lot has happened in the intervening decades. My goal in this talk is to give some sense of where we are now and what issues may need to be addressed over the coming years. In doing this, I hope not to forget the lesson of the Red Flag laws.

1. What’s changed?

Let me start with the obvious question. What has changed, technologically speaking, since the LRC last considered issues around privacy and surveillance? Two things stand out.

First, we have entered an era of mass surveillance. The proliferation of digital devices — laptops, computers, tablets, smart phones, smart watches, smart cars, smart fridges, smart thermostats and so forth — combined with increased internet connectivity has resulted in a world in which we are all now monitored and recorded every minute of every day of our lives. The cheapness and ubiquity of data-collecting devices means that it is now, in principle, possible to imbue every object, animal and person with some data-monitoring technology. The result is what some scholars refer to as the ‘internet of everything’ and with it the possibility of a perfect ‘digital panopticon’. This era of mass surveillance puts increased pressure on privacy and, at least within the EU, has prompted significant legislative intervention in the form of the GDPR.

Second, we have created technologies that can take advantage of all the data that is being collected. To state the obvious: data alone is not enough. As all lawyers know, it is easy to befuddle the opposition in a complex lawsuit by ‘dumping’ a lot of data on them during discovery. They drown in the resultant sea of information. It is what we do with the data that really matters.
In this respect, it is the marriage of mass surveillance with new kinds of artificial intelligence that creates the new legal challenges that we must now tackle with some urgency. Artificial intelligence allows us to do three important things with the vast quantities of data that are now being collected: (i) It enables new kinds of pattern matching - what I mean here is that AI systems can spot patterns in data that were historically difficult for computer systems to spot (e.g. image…
December 17, 2019
In this episode I talk to Dr Regina Rini. Dr Rini currently teaches in the Philosophy Department at York University, Toronto where she holds the Canada Research Chair in Philosophy of Moral and Social Cognition. She has a PhD from NYU and before coming to York in 2017 was an Assistant Professor / Faculty Fellow at the NYU Center for Bioethics, a postdoctoral research fellow in philosophy at Oxford University and a junior research fellow of Jesus College Oxford. We talk about the political and epistemological consequences of deepfakes. This is a fascinating and timely conversation. You can download this episode here or listen below. You can also subscribe to the podcast on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes
0:00 - Introduction
3:20 - What are deepfakes?
7:35 - What is the academic justification for creating deepfakes (if any)?
11:35 - The different uses of deepfakes: Porn versus Politics
16:00 - The epistemic backstop and the role of audiovisual recordings
22:50 - Two ways that recordings regulate our testimonial practices
26:00 - But recordings aren't a window onto the truth, are they?
34:34 - Is the Golden Age of recordings over?
39:36 - Will the rise of deepfakes lead to the rise of epistemic elites?
44:32 - How will deepfakes fuel political partisanship?
50:28 - Deepfakes and the end of public reason
54:15 - Is there something particularly disruptive about deepfakes?
58:25 - What can be done to address the problem?

Relevant Links
Regina's Homepage
Regina's Philpapers Page
"Deepfakes and the Epistemic Backstop" by Regina
"Fake News and Partisan Epistemology" by Regina
Jeremy Corbyn and Boris Johnson Deepfake Video
"California’s Anti-Deepfake Law Is Far Too Feeble" Op-Ed in Wired
December 6, 2019
In this episode I talk to Dr Pak-Hang Wong. Pak is a philosopher of technology and works on ethical and political issues of emerging technologies. He is currently a research associate at the Universität Hamburg. He received his PhD in Philosophy from the University of Twente in 2012, and then held academic positions in Oxford and Hong Kong. In 2017, he joined the Research Group for Ethics in Information Technology, at the Department of Informatics, Universität Hamburg. We talk about the robotic disruption of morality and how it affects our capacity to develop moral virtues. Pak argues for a distinctive Confucian approach to this topic and so provides something of a masterclass on Confucian virtue ethics in the course of our conversation. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes
0:00 - Introduction
2:56 - How do robots disrupt our moral lives?
7:18 - Robots and Moral Deskilling
12:52 - The Folk Model of Virtue Acquisition
21:16 - The Confucian approach to Ethics
24:28 - Confucianism versus the European approach
29:05 - Confucianism and situationism
34:00 - The Importance of Rituals
39:39 - A Confucian Response to Moral Deskilling
43:37 - Criticisms (moral silencing)
46:48 - Generalising the Confucian approach
50:00 - Do we need new Confucian rituals?

Relevant Links
Pak's homepage at the University of Hamburg
Pak's Philpeople Profile
"Rituals and Machines: A Confucian Response to Technology Driven Moral Deskilling" by Pak
"Responsible Innovation for Decent Nonliberal Peoples: A Dilemma?" by Pak
"Consenting to Geoengineering" by Pak
Episode 45 with Shannon Vallor on Technology and the Virtues
November 22, 2019
In this episode I talk to Dr Karina Vold. Karina is a philosopher of mind, cognition, and artificial intelligence. She works on the ethical and societal impacts of emerging technologies and their effects on human cognition. Dr Vold is currently a postdoctoral Research Associate at the Leverhulme Centre for the Future of Intelligence, a Research Fellow at the Faculty of Philosophy, and a Digital Charter Fellow at the Alan Turing Institute. We talk about the ethics of extended cognition and how it pertains to the use of artificial intelligence. This is a fascinating topic because it addresses one of the oft-overlooked effects of AI on the human mind. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes
0:00 - Introduction
1:55 - Some examples of AI cognitive extension
13:07 - Defining cognitive extension
17:25 - Extended cognition versus extended mind
19:44 - The Coupling-Constitution Fallacy
21:50 - Understanding different theories of situated cognition
27:20 - The Coupling-Constitution Fallacy Redux
30:20 - What is distinctive about AI-based cognitive extension?
34:20 - The three/four different ways of thinking about human interactions with AI
40:04 - Problems with this framework
49:37 - The Problem of Cognitive Atrophy
53:31 - The Moral Status of AI Extenders
57:12 - The Problem of Autonomy and Manipulation
58:55 - The policy implications of recognising AI cognitive extension

Relevant Links
Karina's homepage
Karina at the Leverhulme Centre for the Future of Intelligence
"AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI" by José Hernández-Orallo and Karina Vold
"The Parity Argument for Extended Consciousness" by Karina
"Are ‘you’ just inside your skin or is your smartphone part of you?" by Karina
"The Extended Mind" by Clark and Chalmers
Theory and Application of the Extended Mind (series by me)
November 16, 2019
[The following is the text of a talk I delivered at the World Summit AI on the 10th October 2019. The talk is essentially a nugget taken from my new book Automation and Utopia. It's not an excerpt per se, but does look at one of the key arguments I make in the book. You can listen to the talk using the plugin above or download it here.]

The science fiction author Arthur C. Clarke once formulated three “laws” for thinking about the future. The third law states that “any sufficiently advanced technology is indistinguishable from magic”. The idea, I take it, is that if someone from the Paleolithic was transported to the modern world, they would be amazed by what we have achieved. Supercomputers in our pockets; machines to fly us from one side of the planet to another in less than a day; vaccines and antibiotics to cure diseases that used to kill most people in childhood. To them, these would be truly magical times.

It’s ironic then that many people alive today don’t see it that way. They see a world of materialism and reductionism. They think we have too much knowledge and control — that through technology and science we have made the world a less magical place. Well, I am here to reassure these people. One of the things AI will do is re-enchant the world and kickstart a new era of techno-superstition. If not for everyone, then at least for most people who have to work with AI on a daily basis. The catch, however, is that this is not necessarily a good thing. In fact, it is something we should worry about. Let me explain by way of an analogy.

In the late 1940s, the behaviorist psychologist BF Skinner — famous for his experiments on animal learning — got a bunch of pigeons and put them into separate boxes. Now, if you know anything about Skinner you’ll know he had a penchant for this kind of thing. He seems to have spent his adult life torturing pigeons in boxes. Each box had a window through which a food reward would be presented to the bird. Inside the box were different switches that the pigeons could press with their beaks. Ordinarily, Skinner would set up experiments like this in such a way that pressing a particular sequence of switches would trigger the release of the food. But for this particular experiment he decided to do something different. He decided to present the food at random intervals, completely unrelated to the pressing of the switches. He wanted to see what the pigeons would do as a result.

The findings were remarkable. Instead of sitting idly by and waiting patiently for their food to arrive, the pigeons took matters into their own hands. They flapped their wings repeatedly, they danced around in circles, they hopped on one foot, convinced that their actions had something to do with the presentation of the food reward. Skinner and his colleagues likened what the pigeons were doing to the ‘rain dances’ performed by various tribes around the world: they were engaging in superstitious behaviours to control an unpredictable and chaotic environment.

It’s important that we think about this situation from the pigeon’s perspective. Inside the Skinner box, they find themselves in an unfamiliar world that is deeply opaque to them. Their usual foraging tactics and strategies don’t work. Things happen to them, food gets presented, but they don’t really understand why. They cannot cope with the uncertainty; their brains rush to fill the gap and create the illusion of control.
Now what I want to argue here is that modern workers, and indeed all of us, in an environment suffused with AI, can end up sharing the predicament of Skinner’s pigeons. We can end up working inside boxes, fed information and stimuli by artificial intelligence. And inside these boxes, stuff can happen to us, work can get done, but we are not quite sure if or how our actions make a difference. We end up resorting to odd superstitions and rituals to make sense of it all and give ourselves the illusion of control, and one of the things I worry about, in…
October 27, 2019
[This is the text of a lecture that I delivered at Tilburg University on the 24th of September 2019. It was delivered as part of the 25th Anniversary celebrations for TILT (Tilburg Institute for Law, Technology and Society). My friend and colleague Sven Nyholm was the discussant for the evening. The lecture is based on my longer academic article ‘Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism’ but was written from scratch and presents some key arguments in a snappier and clearer form. I also include a follow up section responding to criticisms from the audience on the evening of the lecture. My thanks to all those involved in organizing the event (Aviva de Groot, Merel Noorman and Silvia de Conca in particular). You can download an audio version of this lecture, minus the reflections and follow ups, here or listen to it above]

1. Introduction

My lecture this evening will be about the conditions under which we should welcome robots into our moral communities. Whenever I talk about this, I am struck by how much my academic career has come to depend upon my misspent youth for its inspiration. Like many others, I was obsessed with science fiction as a child, and in particular with the representation of robots in science fiction. I had two favourite, fictional, robots. The first was R2D2 from the original Star Wars trilogy. The second was Commander Data from Star Trek: The Next Generation. I liked R2D2 because of his* personality - courageous, playful, disdainful of authority - and I liked Data because the writers of Star Trek used him as a vehicle for exploring some important philosophical questions about emotion, humour, and what it means to be human. In fact, I have to confess that Data has had an outsized influence on my philosophical imagination and has featured in several of my academic papers.

Part of the reason for this was practical. When I grew up in Ireland we didn’t have many options to choose from when it came to TV. We had to make do with what was available and, as luck would have it, Star Trek: TNG was on every day when I came home from school. As a result, I must have watched each episode of its 7-season run multiple times. One episode in particular has always stayed with me. It was called ‘Measure of a Man’. In it, a scientist from the Federation visits the Enterprise because he wants to take Data back to his lab to study him. Data, you see, is a sophisticated human-like android, created by a lone scientific genius, under somewhat dubious conditions. The Federation scientist wants to take Data apart and see how he works with a view to building others like him. Data, unsurprisingly, objects. He argues that he is not just a machine or piece of property that can be traded and disassembled to suit the whims of human beings. He has his own, independent moral standing. He deserves to be treated with dignity.

But how does Data prove his case? A trial ensues and evidence is given on both sides. The prosecution argue that Data is clearly just a piece of property. He was created not born. He doesn’t think or see the world like a normal human being (or, indeed, other alien species). He even has an ‘off switch’. Data counters by giving evidence of the rich relationships he has formed with his fellow crew members and eliciting testimony from others regarding his behaviour and the interactions they have with him. Ultimately, he wins the case. The court accepts that he has moral standing.
Now, we can certainly lament the impact that science fiction has on the philosophical debate about robots. As David Gunkel observes in his 2018 book Robot Rights: “[S]cience fiction already — and well in advance of actual engineering practice — has established expectations for what a robot is or can be. Even before engineers have sought to develop working prototypes, writers, artists, and filmmakers have imagined what robots do or can do, what configurations they might take, and what problems they could produce for hu…
September 19, 2019
In this episode I talk to Christian Munthe. Christian is a Professor of Practical Philosophy at the University of Gothenburg, Sweden. He conducts research and expert consultation on ethics, value and policy issues arising in the intersection of health, science & technology, the environment and society. He is probably best known for his work on the precautionary principle and its uses in ethical and policy debates. This was the central topic of his 2011 book The Price of Precaution and the Ethics of Risk. We talk about the problems with the practical application of the precautionary principle and how they apply to the debate about existential risk. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes
0:00 - Introduction
1:35 - What is the precautionary principle? Where did it come from?
6:08 - The key elements of the precautionary principle
9:35 - Precaution vs. Cost Benefit Analysis
15:40 - The Problem of the Knowledge Gap in Existential Risk
21:52 - How do we fill the knowledge gap?
27:04 - Why can't we fill the knowledge gap in the existential risk debate?
30:12 - Understanding the Black Hole Challenge
35:22 - Is it a black hole or total decisional paralysis?
39:14 - Why does precautionary reasoning have a 'price'?
44:18 - Can we develop a normative theory of precautionary reasoning? Is there such a thing as a morally good precautionary reasoner?
52:20 - Are there important practical limits to precautionary reasoning?
1:01:38 - Existential risk and the conservation of value

Relevant Links
Christian's Academic Homepage
Christian's Twitter account
"The Black Hole Challenge: Precaution, Existential Risks and the Problem of Knowledge Gaps" by Christian
The Price of Precaution and the Ethics of Risk by Christian
Hans Jonas's The Imperative of Responsibility
The Precautionary Approach from the Rio Declaration
Episode 62 with Olle Häggström
August 28, 2019
In this episode I talk to Joseph Reagle. Joseph is an Associate Professor of Communication Studies at Northeastern University and a former fellow (in 1998 and 2010) and faculty associate at the Berkman Klein Center for Internet and Society at Harvard. He is the author of several books and papers about digital media and the social implications of digital technology. Our conversation focuses on his most recent book: Hacking Life: Systematized Living and its Discontents (MIT Press 2019). You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes
0:00 - Introduction
1:52 - What is life-hacking? The four features of life-hacking
4:20 - Life Hacking as Self Help for the 21st Century
7:00 - How does technology facilitate life hacking?
12:12 - How can we hack time?
20:00 - How can we hack motivation?
27:00 - How can we hack our relationships?
31:00 - The Problem with Pick-Up Artists
34:10 - Hacking Health and Meaning
39:12 - The epistemic problems of self-experimentation
49:05 - The dangers of metric fixation
54:20 - The social impact of life-hacking
57:35 - Is life hacking too individualistic? Should we focus more on systemic problems?
1:03:15 - Does life hacking encourage a less intuitive and less authentic mode of living?
1:08:40 - Conclusion (with some further thoughts on inequality)

Relevant Links
Joseph's Homepage
Joseph's Blog
Hacking Life: Systematized Living and Its Discontents (including open access HTML version)
The Lifehacker Website
The Quantified Self Website
Seth Roberts' first and final column: Butter Makes me Smarter
The Couple that Pays Each Other to Put the Kids to Bed (story about the founders of the Beeminder App)
'The Quantified Relationship' by Danaher, Nyholm and Earp
Episode 6 - The Quantified Self with Deborah Lupton
July 3, 2019
In this episode I talk to Olle Häggström. Olle is a professor of mathematical statistics at Chalmers University of Technology and a member of the Royal Swedish Academy of Sciences (KVA) and of the Royal Swedish Academy of Engineering Sciences (IVA). Olle’s main research is in probability theory and statistical mechanics, but in recent years he has broadened his research interests to focus on applied statistics, philosophy, climate science, artificial intelligence and the social consequences of future technologies. He is the author of Here Be Dragons: Science, Technology and the Future of Humanity (OUP 2016). We talk about AI motivations, specifically the Omohundro-Bostrom theory of AI motivation and its weaknesses. We also discuss AI risk denialism. You can download the episode here or listen below. You can also subscribe to the podcast on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes
0:00 - Introduction
2:02 - Do we need to define AI?
4:15 - The Omohundro-Bostrom theory of AI motivation
7:46 - Key concepts in the Omohundro-Bostrom Theory: Final Goals vs Instrumental Goals
10:50 - The Orthogonality Thesis
14:47 - The Instrumental Convergence Thesis
20:16 - Resource Acquisition as an Instrumental Goal
22:02 - The importance of goal-content integrity
25:42 - Deception as an Instrumental Goal
29:17 - How the doomsaying argument works
31:46 - Critiquing the theory: the problem of self-referential final goals
36:20 - The problem of incoherent goals
42:44 - Does the truth of moral realism undermine the orthogonality thesis?
50:50 - Problems with the distinction between instrumental goals and final goals
57:52 - Why do some people deny the problem of AI risk?
1:04:10 - Strong versus Weak AI Scepticism
1:09:00 - Is it difficult to be taken seriously on this topic?

Relevant Links
Olle's Blog
Olle's webpage at Chalmers University
'Challenges to the Omohundro-Bostrom framework for AI Motivations' by Olle (highly recommended)
'The Superintelligent Will' by Nick Bostrom
'The Basic AI Drives' by Stephen Omohundro
Olle Häggström: Science, Technology, and the Future of Humanity (video)
Olle Häggström and Thore Husfeldt debate AI Risk (video)
Summary of Bostrom's theory (by me)
'Why AI doomsayers are like sceptical theists and why it matters' by me
June 20, 2019
In this episode I talk to Roman Yampolskiy. Roman is a Tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books and papers on AI security and ethics, including Artificial Superintelligence: A Futuristic Approach. We talk about how you might test for machine consciousness and the first steps towards a science of AI welfare. You can listen below or download here. You can also subscribe to the podcast on Apple, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes
0:00 - Introduction
2:30 - Artificial minds versus Artificial Intelligence
6:35 - Why talk about machine consciousness now when it seems far-fetched?
8:55 - What is phenomenal consciousness?
11:04 - Illusions as an insight into phenomenal consciousness
18:22 - How to create an illusion-based test for machine consciousness
23:58 - Challenges with operationalising the test
31:42 - Does AI already have a minimal form of consciousness?
34:08 - Objections to the proposed test and next steps
37:12 - Towards a science of AI welfare
40:30 - How do we currently test for animal and human welfare?
44:10 - Dealing with the problem of deception
47:00 - How could we test for welfare in AI?
52:39 - If an AI can suffer, do we have a duty not to create it?
56:48 - Do people take these ideas seriously in computer science?
58:08 - What next?

Relevant Links
Roman's homepage
'Detecting Qualia in Natural and Artificial Agents' by Roman
'Towards AI Welfare Science and Policies' by Soenke Ziesche and Roman Yampolskiy
The Hard Problem of Consciousness
25 famous optical illusions
Could AI get depressed and have hallucinations?
June 2, 2019
This audio essay looks at the Epicurean philosophy of death, focusing specifically on how the Epicureans addressed the problem of premature death. The Epicureans believed that premature death is not a tragedy, provided it occurs after a person has attained the right state of pleasure. If you enjoy listening to these audio essays, and the other podcast episodes, you might consider rating and/or reviewing them on your preferred podcasting service. You can listen below or download here. You can also subscribe on Apple, Stitcher or a range of other services (the RSS feed is here).

I've written lots about the philosophy of death over the years. Here are some relevant links if you would like to do further reading on the topic:

The Badness of Death and the Meaning of Life (index)
The Lucretian Symmetry Argument (Part 1 and Part 2)
Is Death Bad or Less Good? (Part 1, Part 2, Part 3, and Part 4)
May 20, 2019
In this episode I talk to Carissa Véliz. Carissa is a Research Fellow at the Uehiro Centre for Practical Ethics and the Wellcome Centre for Ethics and Humanities at the University of Oxford. She works on digital ethics, practical ethics more generally, political philosophy, and public policy. She is also the Director of the research programme 'Data, Privacy, and the Individual' at IE's Center for the Governance of Change. We talk about the problems with online speech and how to use pseudonymity to address them. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, and a variety of other podcasting services (the RSS feed is here).

Show Notes
0:00 - Introduction
1:25 - The problems with online speech
4:55 - Anonymity vs Identifiability
9:10 - The benefits of anonymous speech
16:12 - The costs of anonymous speech - The online Ring of Gyges
23:20 - How digital platforms mediate speech and make things worse
28:00 - Is speech more trustworthy when the speaker is identifiable?
30:50 - Solutions that don't work
35:46 - How pseudonymity could address the problems with online speech
41:15 - Three forms of pseudonymity and how they should be used
44:00 - Do we need an organisation to manage online pseudonyms?
49:00 - Thoughts on the Journal of Controversial Ideas
54:00 - Will people use pseudonyms to deceive us?
57:30 - How pseudonyms could address the issues with un-PC speech
1:02:04 - Should we be optimistic or pessimistic about the future of online speech?

Relevant Links
Carissa's Webpage
"Online Masquerade: Redesigning the Internet for Free Speech Through the Use of Pseudonyms" by Carissa
"Why you might want to think twice about surrendering online privacy for the sake of convenience" by Carissa
"What If Banks Were the Main Protectors of Customers’ Private Data?" by Carissa
The Secret Barrister
Delete: The Virtue of Forgetting in the Digital Age by Viktor Mayer-Schönberger
Mill's Argument for Free Speech: A Guide
'Here Comes the Journal of Controversial Ideas. Cue the Outcry' by Bartlett
May 9, 2019
In this episode I talk to Phil Torres. Phil is an author and researcher who primarily focuses on existential risk. He is currently a visiting researcher at the Centre for the Study of Existential Risk at Cambridge University. He has published widely on emerging technologies, terrorism, and existential risks, with articles appearing in the Bulletin of the Atomic Scientists, Futures, Erkenntnis, Metaphilosophy, Foresight, Journal of Future Studies, and the Journal of Evolution and Technology. He is the author of several books, including most recently Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. We talk about the problem of apocalyptic terrorists, the proliferation of dual-use technology and the governance problem that arises as a result. This is both a fascinating and potentially terrifying discussion. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes
0:00 – Introduction
3:14 – What is existential risk? Why should we care?
8:34 – The four types of agential risk/omnicidal terrorists
17:51 – Are there really omnicidal terror agents?
20:45 – How dual-use technology gives apocalyptic terror agents the means to their desired ends
27:54 – How technological civilisation is uniquely vulnerable to omnicidal agents
32:00 – Why not just stop creating dangerous technologies?
36:47 – Making the case for mass surveillance
41:08 – Why mass surveillance must be asymmetrical
45:02 – Mass surveillance, the problem of false positives and dystopian governance
56:25 – Making the case for benevolent superintelligent governance
1:02:51 – Why advocate for something so fantastical?
1:06:42 – Is an anti-tech solution any more fantastical than a benevolent AI solution?
1:10:20 – Does it all just come down to values: are you a techno-optimist or a techno-pessimist?

Relevant Links
Phil’s webpage
‘Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History’ by Phil
Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks by Phil
‘The Vulnerable World Hypothesis’ by Nick Bostrom
Phil’s comparison of his paper with Bostrom’s paper
The Guardian orders the small-pox genome
Slaughterbots
The Future of Violence by Ben Wittes and Gabriela Blum
Future Crimes by Marc Goodman
The Dyn Cyberattack
Autonomous Technology by Langdon Winner
'Biotechnology and the Lifetime of Technological Civilisations' by JG Sotos
The God Machine Thought Experiment (Persson and Savulescu)
April 26, 2019
In this episode I talk to Erica Neely. Erica is an Associate Professor of Philosophy at Ohio Northern University specializing in philosophy of technology and computer ethics. Her work focuses on the ethical ramifications of emerging technologies. She has written a number of papers on 3D printing, the ethics of video games, robotics and augmented reality. We chat about the ethics of augmented reality, with a particular focus on property rights and the problems that arise when we blend virtual and physical reality together in augmented reality platforms. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher and a variety of other services (the RSS feed is here).

Show Notes
0:00 - Introduction
1:00 - What is augmented reality (AR)?
5:55 - Is augmented reality overhyped?
10:36 - What are property rights?
14:22 - Justice and autonomy in the protection of property rights
16:47 - Are we comfortable with property rights over virtual spaces/objects?
22:30 - The blending problem: why augmented reality poses a unique problem for the protection of property rights
27:00 - The different modalities of augmented reality: single-sphere or multi-sphere?
30:45 - Scenario 1: Single-sphere AR with private property
34:28 - Scenario 2: Multi-sphere AR with private property
37:30 - Other ethical problems in scenario 2
43:25 - Augmented reality vs imagination
47:15 - Public property as contested space
49:38 - Scenario 3: Multi-sphere AR with public property
54:30 - Scenario 4: Single-sphere AR with public property
1:00:28 - Must the owner of the single-sphere AR platform be regulated as a public utility/entity?
1:02:25 - Other important ethical issues that arise from the use of AR

Relevant Links
Erica's Homepage
'Augmented Reality, Augmented Ethics: Who Has the Right to Augment a Particular Physical Space?' by Erica
'The Ethics of Choice in Single Player Video Games' by Erica
'The Risks of Revolution: Ethical Dilemmas in 3D Printing from a US Perspective' by Erica
'Machines and the Moral Community' by Erica
IKEA Place augmented reality app
L'Oreal's use of augmented reality make-up apps
Holocaust Museum Bans Pokemon Go
April 20, 2019
This audio essay is an Easter special. It focuses on David Hume's famous argument about miracles. First written over 250 years ago, Hume's essay 'Of Miracles' purports to provide an "everlasting check" against all kinds of "superstitious delusion". But is this true? Does Hume give us good reason to reject the testimonial proof provided on behalf of historical miracles? Maybe not, but he certainly provides a valuable framework for thinking critically about this issue. You can download the audio here or listen below. You can also subscribe on Apple, Stitcher and a variety of other podcasting services (the RSS feed is here). This audio essay is based on an earlier written essay (available here). If you are interested in further reading about the topic, I recommend the following essays:

Hume's Argument Against Miracles (Part One)
Hume's Argument Against Miracles (Part Two)
Hume, Miracles and the Many Witnesses Objection
April 10, 2019
In this episode I talk to Stefan Lorenz Sorgner. Stefan teaches philosophy at John Cabot University in Rome. He is director and co-founder of the Beyond Humanism Network, Fellow at the Institute for Ethics and Emerging Technologies (IEET), Research Fellow at the Ewha Institute for the Humanities at Ewha Womans University in Seoul, and Visiting Fellow at the Ethics Centre of the Friedrich-Schiller-University in Jena. His main fields of research are Nietzsche, the philosophy of music, bioethics and meta-, post- and transhumanism. We talk about his case for a Nietzschean form of transhumanism. You can download the episode here or listen below. You can also subscribe to the podcast on iTunes, Stitcher and a variety of other podcasting apps (the RSS feed is here).

Show Notes
0:00 - Introduction
2:12 - Recent commentary on Stefan's book Ubermensch
3:41 - Understanding transhumanism - getting away from the "humanism on steroids" ideal
10:33 - Transhumanism as an attitude of experimentation and not a destination?
13:34 - Have we always been transhumanists?
16:51 - Understanding Nietzsche
22:30 - The Will to Power in Nietzschean philosophy
26:41 - How to understand "power" in Nietzschean terms
30:40 - The importance of perspectivalism and the abandonment of universal truth
36:40 - Is it possible for a Nietzschean to consistently deny absolute truth?
39:55 - The idea of the Ubermensch (Overhuman)
45:48 - Making the case for a Nietzschean form of transhumanism
51:00 - What about the negative associations of Nietzsche?
1:02:17 - The problem of moral relativism for transhumanists

Relevant Links
Stefan's homepage
The Ubermensch: A Plea for a Nietzschean Transhumanism - Stefan's new book (in German)
Posthumanism and Transhumanism: An Introduction - edited by Stefan and Robert Ranisch
"Nietzsche, the Overhuman and Transhumanism" by Stefan (open access)
"Beyond Humanism: Reflections on Trans and Post-humanism" by Stefan (a response to critics of the previous article)
Nietzsche at the Stanford Encyclopedia of Philosophy
March 30, 2019
In this episode I talk to Jacob Turner. Jacob is a barrister and author. We chat about his new book, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan, 2018), which discusses how to address legal responsibility, rights and ethics for AI. You can download the episode here or listen below. You can also subscribe to the show on iTunes, Stitcher and a variety of other services (the RSS feed is here).

Show Notes
0:00 - Introduction
1:33 - Why did Jacob write Robot Rules?
2:47 - Do we need special legal rules for AI?
6:34 - The responsibility 'gap' problem
11:50 - Private law vs criminal law: why it's important to remember the distinction
14:08 - Is it easy to plug the responsibility gap in private law?
23:07 - Do we need to think about the criminal law responsibility gap?
26:14 - Is it absurd to hold AI criminally responsible?
30:24 - The problem with holding proximate humans responsible
36:40 - The positive side of responsibility: lessons from the Monkey selfie case
41:50 - What is legal personhood and what would it mean to grant it to an AI?
48:57 - Pragmatic reasons for granting an AI legal personhood
51:48 - Is this a slippery slope?
56:00 - Explainability and AI: Why is this important?
1:02:38 - Is there a right to explanation under EU law?
1:06:16 - Is explainability something that requires a technical solution not a legal solution?
1:08:32 - The danger of fetishising explainability

Relevant Links
Robot Rules: Regulating Artificial Intelligence
Website for the book
Jacob on Twitter
Jacob giving a lecture about the book at the University of Law
"Robots, Law and the Retribution Gap" by John Danaher
The Darknet Shopper Case
The Monkey Selfie Case
Algorithmic Entities by Lynn LoPucki (discussing Shawn Bayern's argument)
Matthew Scherer's critique of Bayern's claim that AIs can already acquire legal personhood
March 20, 2019
Schopenhauer was a profoundly pessimistic man. He argued that all life was suffering. Was he right or is there room for optimism? This audio essay tries to answer that question. It is based on an earlier written essay. You can listen below or download here. These audio essays are released as part of the Philosophical Disquisitions podcast. You can subscribe to the podcast on Apple Podcasts, Player FM, Podbay, Podbean, Castbox, Overcast and more. Full details available here.
March 14, 2019
In this episode I talk to Seth Baum. Seth is an interdisciplinary researcher working across a wide range of fields in natural and social science, engineering, philosophy, and policy. His primary research focus is global catastrophic risk. He also works in astrobiology. He is the Co-Founder (with Tony Barrett) and Executive Director of the Global Catastrophic Risk Institute. He is also a Research Affiliate of the University of Cambridge Centre for the Study of Existential Risk. We talk about the importance of studying the long-term future of human civilisation, and map out four possible trajectories for the long-term future. You can download the episode here or listen below. You can also subscribe on a variety of different platforms, including iTunes, Stitcher, Overcast, Podbay, Player FM and more. The RSS feed is available here.

Show Notes
0:00 - Introduction
1:39 - Why did Seth write about the long-term future of human civilisation?
5:15 - Why should we care about the long-term future? What is the long-term future?
13:12 - How can we scientifically and ethically study the long-term future?
16:04 - Is it all too speculative?
20:48 - Four possible futures, briefly sketched: (i) status quo; (ii) catastrophe; (iii) technological transformation; and (iv) astronomical
23:08 - The Status Quo Trajectory - Keeping things as they are
28:45 - Should we want to maintain the status quo?
33:50 - The Catastrophe Trajectory - Awaiting the likely collapse of civilisation
38:58 - How could we restore civilisation post-collapse? Should we be working on this now?
44:00 - Are we under-investing in research into post-collapse restoration?
49:00 - The Technological Transformation Trajectory - Radical change through technology
52:35 - How desirable is radical technological change?
56:00 - The Astronomical Trajectory - Colonising the solar system and beyond
58:40 - Is the colonisation of space the best hope for humankind?
1:07:22 - How should the study of the long-term future proceed from here?

Relevant Links
Seth's homepage
The Global Catastrophic Risk Institute
"Long-Term Trajectories for Human Civilisation" by Baum et al
"The Perils of Short-Termism: Civilisation's Greatest Threat" by Fisher, BBC News
The Knowledge by Lewis Dartnell
"Space Colonization and the Meaning of Life" by Baum, Nautilus
"Astronomical Waste: The Opportunity Cost of Delayed Technological Development" by Nick Bostrom
"Superintelligence as a Cause or Cure for Risks of Astronomical Suffering" by Kaj Sotala and Lucas Gloor
"Space Colonization and Suffering Risks" by Phil Torres
"Thomas Hobbes in Space: The Problem of Intergalactic War" by John Danaher
March 7, 2019
(Subscribe here) This is an experiment. For a number of years, people have been asking me to provide audio versions of the essays that I post on the blog. I've been reluctant to do this up until now, but I have recently become a fan of the audio format and I appreciate its conveniences. Also, I watched an interview with Michael Lewis (the best-selling non-fiction author in the world) just this week where he suggested that audio essays might be the future of the essay format. So, in an effort to jump ahead of the curve (or at least jump onto the curve before it pulls away from me), I'm going to post a few audio essays over the coming months. They will all be based on stuff I've previously published on the blog, with a few minor edits and updates. I'll send them out on the regular podcast feed (which you can subscribe to in various formats here). I'm learning as I go. The quality and style will probably evolve over time, and I'm quite keen on getting feedback from listeners too. Do you like this kind of thing or would you prefer I didn't do it? This first audio essay is based on something I previously wrote on the moral problem of accelerating change. You can find the original essay here. You can listen below or download at this link.
February 28, 2019
In this episode I talk to Jeff Sebo. Jeff is a Clinical Assistant Professor of Environmental Studies, Affiliated Professor of Bioethics, Medical Ethics, and Philosophy, and Director of the Animal Studies M.A. Program at New York University. Jeff’s research focuses on bioethics, animal ethics, and environmental ethics. He has two co-authored books, Chimpanzee Rights and Food, Animals, and the Environment. We talk about something Jeff calls the 'moral problem of other minds', which is roughly the problem of what we should do if we aren't sure whether another being is sentient or not. You can download the episode here or listen below. You can also subscribe to the show on iTunes and Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:38 - What inspired Jeff to think about the moral problem of other minds?
7:55 - The importance of sentience and our uncertainty about it
12:32 - The three possible responses to the moral problem of other minds: (i) the incautionary principle; (ii) the precautionary principle and (iii) the expected value principle
15:26 - Understanding the Incautionary Principle
20:09 - Problems with the Incautionary Principle
23:14 - Understanding the Precautionary Principle: More plausible than the incautionary principle?
29:20 - Is morality a zero-sum game? Is there a limit to how much we can care about other beings?
35:02 - The problem of demandingness in moral theory
37:06 - Other problems with the precautionary principle
41:41 - The Utilitarian Version of the Expected Value Principle
47:36 - The problem of anthropocentrism in moral reasoning
53:22 - The Kantian Version of the Expected Value Principle
59:08 - Problems with the Kantian principle
1:03:54 - How does the moral problem of other minds transfer over to other cases, e.g. abortion and uncertainty about the moral status of the foetus?

Relevant Links
Jeff's Homepage
'The Moral Problem of Other Minds' by Jeff
Chimpanzee Rights by Jeff and others
Food, Animals and the Environment by Jeff and Christopher Schlottman
'Consider the Lobster' by David Foster Wallace
'Ethical Behaviourism in the Age of the Robot' by John Danaher
Episode 48 with David Gunkel on Robot Rights
February 18, 2019
In this episode I talk to Angèle Christin. Angèle is an assistant professor in the Department of Communication at Stanford University, where she is also affiliated with the Sociology Department and Program in Science, Technology, and Society. Her research focuses on how algorithms and analytics transform professional values, expertise, and work practices. She is currently working on a book on the use of audience metrics in web journalism and a project on the use of risk assessment algorithms in criminal justice. We talk about both. You can download the episode here or listen below. You can also subscribe to the show on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:30 - What's missing from the current debate about algorithmic governance? What does Angèle's ethnographic perspective add?
5:10 - How does ethnography work? What does an ethnographer do?
8:30 - What are the limitations of ethnographic studies?
12:33 - Why did Angèle focus on the use of algorithms in criminal justice and web journalism?
23:06 - What were Angèle's two key research findings? Decoupling and Buffering
24:40 - What is 'decoupling' and how does it happen?
30:00 - Different attitudes to algorithmic tools in the US and France (French journalists, perhaps surprisingly, more obsessed with real time analytics than their American counterparts)
39:20 - What explains the ambivalent attitude to metrics in different professions?
44:42 - What is 'buffering' and how does it arise?
54:30 - How people who worry about algorithms might misunderstand the practical realities of criminal justice
57:47 - Does the resistance/acceptance of an algorithmic tool depend on the nature of the tool and the nature of the workplace? What might the relevant variables be?

Relevant Links
Angèle's Homepage
"Algorithms in Practice: Comparing Web Journalism and Criminal Justice" by Angèle
"Counting Clicks: Quantification and Variation in Web Journalism in the United States and France" by Angèle
"Courts and Predictive Algorithms" by Christin, Rosenblat and Boyd
"The Mistrials of Algorithmic Sentencing" by Angèle
Episode 41 with Reuben Binns (covering the debate about the Compas algorithm and bias)
Episode 19 with Andrew Ferguson on big data and policing
January 30, 2019
In this episode I talk to Kate Devlin. Kate is a Senior Lecturer in the Department of Digital Humanities at King's College London. Kate's research is in the fields of Human Computer Interaction (HCI) and Artificial Intelligence (AI), investigating how people interact with and react to technology in order to understand how emerging and future technologies will affect us and the society in which we live. Kate has become a driving force in the field of intimacy and technology, running the UK's first sex tech hackathon in 2016. She has also become the face of sex robots – quite literally in the case of one mis-captioned tabloid photograph. We talk about her recent, excellent book Turned On: Science, Sex and Robots, which covers the past, present and future of sex technology. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
2:08 - Why did Kate talk about sex robots in the House of Lords?
3:01 - How did Kate become the face of sex robots?
5:34 - Are sex robots really a thing? Should academics be researching them?
11:10 - The important link between archaeology and sex technology
15:00 - The myth of hysteria and the origin of the vibrator
17:36 - What was the most interesting thing Kate learned while researching this book? (Ans: owners of sex dolls are not creepy isolationists)
23:03 - Is there a moral panic about sex robots? And are we talking about robots or dolls?
30:41 - What are the arguments made by defenders of the 'moral panic' view?
38:05 - What could be the social benefits of sex robots? Do men and women want different things from sex tech?
47:57 - Why is Kate so interested in 'non-anthropomorphic' sex robots?
55:15 - Is the media fascination with this topic destructive or helpful?
57:32 - What question does Kate get asked most often and what does she say in response?

Relevant Links
Kate's Webpage
Kate's Academic Homepage
Turned On: Science, Sex and Robots by Kate Devlin
Kate and I in conversation at the Virtual Futures Salon in London
'A Failure of Academic Quality Control: The Technology of the Orgasm' by Hallie Lieberman and Eric Schatzberg (on the myth that vibrators were used to treat hysteria)
Laodamia - Owner of the world's first sex doll?
'In Defence of Sex Machines: Why trying to ban sex robots is wrong?' by Kate
'Sex robot molested at electronics festival' at Huffington Post
'First tester made love to sex robot so furiously it actually broke' at Metro.co.uk
The 2nd London Sex Tech Hackathon
Robot Sex: Social and Ethical Implications edited by Danaher and McArthur
January 15, 2019
In this episode I talk to Ole Martin Moen. Ole Martin is a Research Fellow in Philosophy at the University of Oslo. He works on how to think straight about thorny issues in applied ethics. He is the Principal Investigator of “What should not be bought and sold?”, a $1 million research project funded by the Research Council of Norway. In the past, he has written articles about the ethics of prostitution, the desirability of cryonics, the problem of wild animal suffering and the case for philosophical hedonism. Along with his collaborator, Aksel Braanen Sterri, he runs a podcast, Moralistene (in Norwegian), and he regularly discusses moral issues behind the news on Norwegian national radio. We talk about a potentially controversial topic: the anti-tech philosophy of the Unabomber, Ted Kaczynski, and what's wrong with it. You can download the episode here or listen below. You can also subscribe via iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
2:05 - Should we even be talking about Ted Kaczynski's ethics? Does it not lend legitimacy to his views?
6:32 - Are we unnecessarily anti-rational when it comes to discussing dangerous ideas?
8:32 - The Evolutionary Mismatch Argument
12:43 - The Surrogate Activities Argument
20:20 - The Helplessness/Complexity Argument
23:08 - The Unstoppability Argument
26:45 - The Domesticated Animals Argument
30:45 - Why does Ole Martin overlook Kaczynski's criticisms of 'leftists' in his analysis?
34:03 - What's original in Kaczynski's arguments?
36:31 - Are philosophers who write about Kaczynski engaging in a motte and bailey fallacy?
38:36 - Ole Martin's main critique of Kaczynski: the evaluative double standard
42:20 - How this double standard works in practice
47:27 - Why not just drop out of industrial society instead of trying to overthrow it?
55:04 - Is Kaczynski a revolutionary nihilist?
58:59 - Similarities and differences between Kaczynski's argument and the work of Nick Bostrom, Ingmar Persson and Julian Savulescu
1:04:21 - Where should we go from here? Should there be more papers on this topic?

Relevant Links
Ole Martin's Homepage
'The Unabomber's Ethics' by Ole Martin Moen
"Bright New World" and "Smarter Babies" by Ole Martin Moen
"The Case for Cryonics" by Ole Martin Moen
Ted Kaczynski on Wikipedia (includes links to relevant writings)
"The Unabomber's Penpal" - article about the philosopher David Skrbina who has corresponded with Kaczynski for some time
"The Unabomber on Robots" by Jai Galliott (article appearing in Robot Ethics 2.0 edited by Lin et al)
Unfit for the Future by Ingmar Persson and Julian Savulescu
Nick Bostrom's Homepage (check out his recent paper 'The Vulnerable World Hypothesis')
December 24, 2018
In this episode I talk to Michele Loi. Michele is a political philosopher turned bioethicist turned digital ethicist. He is currently (2017-2020) working on two interdisciplinary projects, one of which is about the ethical implications of big data at the University of Zurich. In the past, he developed an ethical framework of governance for the Swiss MIDATA cooperative (2016). He is interested in bringing insights from ethics and political philosophy to bear on big data, proposing more ethical forms of institutional organization, firm behavior, and legal-political arrangements concerning data. We talk about how you can use Rawls's theory of justice to evaluate the role of dominant tech platforms (particularly Facebook) in modern life. You can download the show here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:29 - Why use Rawls to assess data platforms?
2:58 - Does the analogy between data and oil hold up to scrutiny?
7:04 - The First Key Idea: Rawls's Basic Social Structures
11:20 - The Second Key Idea: Dominant Tech Platforms as Basic Social Structures
15:02 - Is Facebook a Dominant Tech Platform?
19:58 - How Zuckerberg's recent memo highlights Facebook's status as a basic social structure
23:10 - A brief primer on Rawls's two principles of justice
29:18 - Dominant tech platforms and respect for the basic liberties (particularly free speech)
36:48 - Facebook: Media Company or Nudging Platform? Does it matter from the perspective of justice?
41:43 - Why Facebook might have a duty to ensure that we don't get trapped in a filter bubble
44:32 - Is it fair to impose such a duty on Facebook as a private enterprise?
51:18 - Would it be practically difficult for Facebook to fulfil this duty?
53:02 - Is data-mining and monetisation exploitative?
56:14 - Is it possible to explore other economic models for the data economy?
59:44 - Can regulatory frameworks (e.g. the GDPR) incentivise alternative business models?
1:01:50 - Is there hope for the future?

Relevant Links
Michele on Twitter
Michele on Research Gate
'If data is the new oil, when is the extraction of value from data unjust?' by Loi and Dehaye
'Technological Unemployment and Human Disenhancement' by Michele Loi
'The Digital Phenotype: A Philosophical and Ethical Exploration' by Michele Loi
'A Blueprint for content governance and enforcement' by Mark Zuckerberg
'Should libertarians hate the internet? A Nozickian Argument Against Social Networks' by John Danaher
John Rawls's Two Principles of Justice, explained
December 23, 2018
In this episode I talk to Francesca Minerva. Francesca is a postdoctoral fellow at the University of Ghent. Her research focuses on applied philosophy, specifically lookism, conscientious objection, abortion, academic freedom, and cryonics. She has published many articles on these topics in some of the leading academic journals in ethics and philosophy, including the Journal of Medical Ethics, Bioethics, the Cambridge Quarterly Review of Ethics and the Hastings Centre Report. We talk about life, death and the wisdom and ethics of cryonics. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:34 - What is cryonics anyway?
6:54 - The tricky logistics of cryonics: you need to die in the right way
10:30 - Is cryonics too weird/absurd to take seriously? Analogies with IVF and frozen embryos
16:04 - The opportunity cost of cryonics
18:18 - Is death bad? Why?
22:51 - Is life worth living at all? Is it better never to have been born?
24:44 - What happens when life is no longer worth living? The attraction of cryothanasia
30:28 - Should we want to live forever? Existential tiredness and existential boredom
37:20 - Is immortality irrelevant to the debate about cryonics?
41:42 - Even if cryonics is good for me might it be the unethical choice?
45:00 (ish) - Egalitarianism and the distribution of life years
49:39 - Would future generations want to revive us?
52:34 - Would we feel out of place in the distant future?

Relevant Links
Francesca's webpage
The Ethics of Cryonics: Is it immoral to be immortal? by Francesca
'Cryopreservation of Embryos and Fetuses as a Future Option for Family Planning Purposes' by Francesca and Anders Sandberg
'Euthanasia and Cryothanasia' by Francesca and Anders Sandberg
'The Badness of Death and the Meaning of Life' (Series) - pretty much everything I've ever written about the philosophy of life and death
Alcor Life Extension Foundation
Cryonics Institute
To Be a Machine by Mark O'Connell
December 3, 2018
In this episode I talk to Matthijs Maas. Matthijs is a doctoral researcher at the University of Copenhagen's 'AI and Legal Disruption' research unit, and a research affiliate with the Governance of AI Program at Oxford University's Future of Humanity Institute. His research focuses on safe and beneficial global governance strategies for emerging, transformative AI systems. This involves, in part, a study of the requirements and pitfalls of international regimes for technology arms control and non-proliferation, and the conditions under which these are legitimate and effective. We talk about the phenomenon of 'globally disruptive AI' and the effect it will have on the international legal order. You can download the episode here or listen below. You can also subscribe via iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
2:11 - International Law 101
6:38 - How technology has repeatedly shaped the content of international law
10:43 - The phenomenon of 'globally disruptive artificial intelligence' (GDAI)
15:20 - GDAI and the development of international law
18:05 - Will we need new laws?
19:50 - Will GDAI result in lots of legal uncertainty?
21:57 - Will the law be under/over-inclusive of GDAI?
25:21 - Will GDAI render international law obsolete?
31:00 - Could we have a tech-neutral international law?
34:10 - Could we automate the monitoring and enforcement of international law?
44:35 - Could we replace international legal institutions with technological systems of management?
47:35 - Could GDAI lead to the end of the international legal order?
57:23 - Could GDAI result in more isolationism and less multilateralism?
1:00:40 - So what will the future be?

Relevant Links
Follow Matthijs on Twitter
Artificial Intelligence and Legal Disruption research group (University of Copenhagen)
Governance of AI Program (University of Oxford)
Dafoe, Allan. "AI Governance: A Research Agenda." Oxford: Governance of AI Program, Future of Humanity Institute, 2018.
On the history of technology and international law: Picker, Colin B. "A View from 40,000 Feet: International Law and the Invisible Hand of Technology." Cardozo Law Review 23 (2001): 151–219.
Brownsword, Roger. "In the Year 2061: From Law to Technological Management." Law, Innovation and Technology 7, no. 1 (January 2, 2015): 1–51.
Boutin, Berenice. "Technologies for International Law & International Law for Technologies." Groningen Journal of International Law (blog), October 22, 2018.
Moses, Lyria Bennett. "Recurring Dilemmas: The Law's Race to Keep Up With Technological Change." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, April 11, 2007.
On establishing legal 'artificially intelligent entities', etc.: Burri, Thomas. "International Law and Artificial Intelligence." SSRN Electronic Journal, 2017.
November 1, 2018
In this episode I talk to David Gunkel. David is a repeat guest, having first appeared on the show in Episode 10. David is a Professor of Communication Studies at Northern Illinois University. He is a leading scholar in the philosophy of technology, having written extensively about cyborgification, robot rights and responsibilities, remix cultures, new political structures in the information age and much much more. He is the author of several books, including Hacking Cyberspace, The Machine Question, Of Remixology, Gaming the System and, most recently, Robot Rights. We have a long debate/conversation about whether or not robots should/could have rights. You can download the episode here or listen below. You can also subscribe to the show on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:52 - Isn't the idea of robot rights ridiculous?
3:37 - What is a robot anyway? Is the concept too nebulous/diverse?
7:43 - Has science fiction undermined our ability to think about robots clearly?
11:01 - What would it mean to grant a robot rights? (A precis of Hohfeld's theory of rights)
18:32 - The four positions/modalities one could take on the idea of robot rights
21:32 - The First Modality: Robots Can't Have Rights therefore Shouldn't
23:37 - The EPSRC guidelines on robotics as an example of this modality
26:04 - Criticisms of the EPSRC approach
28:27 - Other problems with the first modality
31:32 - Europe vs Japan: why the Japanese might be more open to robot 'others'
34:00 - The Second Modality: Robots Can Have Rights therefore Should (some day)
39:53 - A debate between myself and David about the second modality (why I'm in favour of it and he's against it)
47:17 - The Third Modality: Robots Can Have Rights but Shouldn't (Bryson's view)
53:48 - Can we dehumanise/depersonalise robots?
58:10 - The Robot-Slave Metaphor and its Discontents
1:04:30 - The Fourth Modality: Robots Cannot Have Rights but Should (Darling's view)
1:07:53 - Criticisms of the fourth modality
1:12:05 - The 'Thinking Otherwise' Approach (David's preferred approach)
1:16:23 - When can robots take on a face?
1:19:44 - Is there any possibility of reconciling my view with David's?
1:24:42 - So did David waste his time writing this book?

Relevant Links
David's Homepage
Robot Rights from MIT Press, 2018 (and on Amazon)
Episode 10 - Gunkel on Robots and Cyborgs
'The other question: can and should robots have rights?' by David Gunkel
'Facing Animals: A Relational Other-Oriented Approach to Moral Standing' by Gunkel and Coeckelbergh
The Robot Rights Debate (Index) - everything I've written or said on the topic of robot rights
EPSRC Principles of Robotics
Episode 24 - Joanna Bryson on Why Robots Should be Slaves
'Patiency is not a virtue: the design of intelligent systems and systems of ethics' by Joanna Bryson
Robo Sapiens Japanicus by Jennifer Robertson
October 20, 2018
In this episode I talk to Virginia Eubanks. Virginia is an Associate Professor of Political Science at the University at Albany, SUNY. She is the author of several books, including Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor and Digital Dead End: Fighting for Social Justice in the Information Age. Her writing about technology and social justice has appeared in The American Prospect, The Nation, Harper's and Wired. She has worked for two decades in community technology and economic justice movements. We talk about the history of poverty management in the US and how it is now being infiltrated and affected by tools for algorithmic governance. You can download the episode here or listen below. You can also subscribe to the show on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:39 - The future is unevenly distributed but not in the way you might think
7:05 - Virginia's personal encounter with the tools for automating inequality
12:33 - Automated helplessness?
14:11 - The history of poverty management: denial and moralisation
22:40 - Technology doesn't disrupt our ideology of poverty; it amplifies it
24:16 - The problem of poverty myths: it's not just something that happens to other people
28:23 - The Indiana Case Study: Automating the system for claiming benefits
33:15 - The problem of automated defaults in the Indiana Case
37:32 - What happened in the end?
41:38 - The L.A. Case Study: A "match.com" for the homeless
45:40 - The Allegheny County Case Study: Managing At-Risk Children
52:46 - Doing the right things but still getting it wrong?
58:44 - The need to design an automated system that addresses institutional bias
1:07:45 - The problem of technological solutions in search of a problem
1:10:46 - The key features of the digital poorhouse

Relevant Links
Virginia's Homepage
Virginia on Twitter
Automating Inequality
'A Child Abuse Prediction Model Fails Poor Families' by Virginia in Wired
The Allegheny County Family Screening Tool (official webpage - includes a critical response to Virginia's Wired article)
'Can an Algorithm Tell when Kids Are in Danger?' by Dan Hurley (generally positive story about the family screening tool in the New York Times)
'A Response to Allegheny County DHS' by Virginia (a response to Allegheny County's defence of the family screening tool)
Episode 41 with Reuben Binns on Fairness in Algorithmic Decision-Making
Episode 19 with Andrew Ferguson about Predictive Policing
September 18, 2018
In this episode I talk to Shannon Vallor. Shannon is the Regis and Diane McKenna Professor in the Department of Philosophy at Santa Clara University, where her research addresses the ethical implications of emerging science and technology, especially AI, robotics and new media. Professor Vallor received the 2015 World Technology Award in Ethics from the World Technology Network. She has served as President of the Society for Philosophy and Technology, sits on the Board of Directors of the Foundation for Responsible Robotics, and is a member of the IEEE Standards Association's Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. We talk about the problem of techno-social opacity and the value of virtue ethics in an era of rapid technological change. You can download the episode here or listen below. You can also subscribe to the podcast on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:39 - How students encouraged Shannon to write Technology and the Virtues
6:30 - The problem of acute techno-moral opacity
12:34 - Is this just the problem of morality in a time of accelerating change?
17:16 - Why can't we use abstract moral principles to guide us in a time of rapid technological change? What's wrong with utilitarianism or Kantianism?
23:40 - Making the case for technologically-sensitive virtue ethics
27:27 - The analogy with education: teaching critical thinking skills vs providing students with information
31:19 - Aren't most virtue ethical traditions too antiquated? Aren't they rooted in outdated historical contexts?
37:54 - Doesn't virtue ethics assume a relatively fixed human nature? What if human nature is one of the things that is changed by technology?
42:34 - Case study on Social Media: Defending Mark Zuckerberg
46:54 - The Dark Side of Social Media
52:48 - Are we trapped in an immoral equilibrium? How can we escape?
57:17 - What would the virtuous person do right now? Would he/she delete Facebook?
1:00:23 - Can we use technology to solve problems created by technology? Will this help to cultivate the virtues?
1:05:00 - The virtue of self-regard and the problem of narcissism in a digital age

Relevant Links
Shannon's Homepage
Shannon's profile at Santa Clara University
Shannon's Twitter profile
Technology and the Virtues (now in paperback) by Shannon
'Social Networking Technology and the Virtues' by Shannon
'Moral Deskilling and Upskilling in a New Machine Age' by Shannon
'The Moral Problem of Accelerating Change' by John Danaher
August 29, 2018
In this episode I chat to Diana Fleischman. Diana is a senior lecturer in evolutionary psychology at the University of Portsmouth. Her research focuses on hormonal influences on behavior, human sexuality, disgust and, recently, the interface of evolutionary psychology and behaviorism. She is a utilitarian, a promoter of effective altruism, and a bivalvegan. We have a long and detailed chat about the evolved psychology of sex and how it may affect the social acceptance and use of sex robots. Along the way we talk about Mills & Boon novels, the connection between sexual stimulation and the brain, and other, no doubt controversial, topics. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:42 - Evolutionary Psychology and the Investment Theory of Sex
5:54 - What's the evidence for the investment theory in humans?
8:40 - Does the evidence for the theory hold up?
11:45 - Studies on the willingness to engage in casual sex: do men and women really differ?
18:33 - The ecological validity of these studies
20:20 - Evolutionary psychology and the replication crisis
23:29 - Are there better alternative explanations for sex differences?
26:25 - Ethical criticisms of evolutionary psychology
28:14 - Sex robots and evolutionary psychology
29:33 - Argument 1: The rising costs of courtship will drive men into the arms of sexbots
34:12 - Not all men...
39:08 - Couldn't something similar be true for women?
46:00 - Aren't the costs of courtship much higher for women?
48:27 - Argument 2: Sex robots could be used as treatment for dangerous men
51:50 - Would this stigmatise other sexbot users?
53:31 - Would this embolden rather than satiate?
55:53 - Could the logic of this argument be flipped, e.g. the Futurama argument?
58:05 - Isn't this an ethically sub-optimal solution to the problem?
1:00:42 - Argument 3: This will also impact on women's sexual behaviour
1:07:01 - Do ethical objectors to sex robots underestimate the constraints of our evolved psychology?

Relevant Links
Diana's personal webpage
Diana on Twitter
Diana's academic homepage
'Uncanny Vulvas' in Jacobite Magazine - this is the basis for much of our discussion in the podcast
'Disgust Trumps Lust: Women's Disgust and Attraction Towards Men Is Unaffected by Sexual Arousal' by Zsok, Fleischman, Borg and Morrison
Beyond Human Nature by Jesse Prinz
'Which people would agree to have sex with a stranger?' by David Schmitt
'Sex Work, Technological Unemployment and the Basic Income Guarantee' by John Danaher
August 8, 2018
In this episode I talk to Alexis Elder. Alexis is an Assistant Professor of Philosophy at the University of Minnesota Duluth. Her research focuses on ethics, emerging technologies, social philosophy, metaphysics (especially social ontology), and philosophy of mind. She draws on ancient philosophy - primarily Chinese and Greek - in order to think about current problems. She is the author of a number of articles on the philosophy of friendship, and her book Friendship, Robots, and Social Media: False Friends and Second Selves came out in January 2018. We talk about all things to do with friendship, social media and social robots. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:37 - Aristotle's theory of friendship
5:00 - The idea of virtue/character friendship
10:14 - The enduring appeal of Aristotle's account of friendship
12:30 - Does social media corrode friendship?
16:35 - The Publicity Objection to online friendships
20:40 - The Superficiality Objection to online friendships
25:23 - The Commercialisation/Contamination Objection to online friendships
30:34 - Deception in online friendships
35:18 - Must we physically interact with our friends?
39:25 - Social robots as friends (with a specific focus on elderly populations and those on the autism spectrum)
46:50 - Can you be friends with a robot? The counterfeit currency analogy
50:55 - Does the analogy hold up?
56:13 - Why are robotic friends assumed to be fake?
1:03:50 - Does the 'falseness' of robotic friends depend on the type of friendship we are interested in?
1:06:38 - What about companion animals?
1:08:35 - Where is this debate going?

Relevant Links
Alexis Elder's webpage
'Excellent Online Friendships: An Aristotelian Defence of Social Media' by Alexis
'False Friends and False Coinage: a tool for navigating the ethics of sociable robots' by Alexis
Friendship, Robots and Social Media by Alexis
'Can you be friends with a robot? Aristotelian Friendship and Robotics' by John Danaher
July 26, 2018
In this episode I talk to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the philosophy of science. His research has been covered in Nature, Popular Science, The Chronicle of Higher Education, The Atlantic, New Scientist, and other major outlets. We talk about moral enhancement and the potential use of psychedelics as a form of moral enhancement. You can download the episode here or listen below. You can also subscribe to the podcast on iTunes and Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:53 - Why psychedelics and moral enhancement?
5:07 - What is moral enhancement anyway? Why are people excited about it?
7:12 - What are the methods of moral enhancement?
10:18 - Why is Brian sceptical about the possibility of moral enhancement?
14:16 - So is it an empty idea?
17:58 - What if we adopt an 'extended' concept of enhancement, i.e. beyond the biomedical?
26:12 - Can we use psychedelics to overcome the dilemma facing the proponent of moral enhancement?
29:07 - What are psychedelic drugs? How do they work on the brain?
34:26 - Are your experiences whilst on psychedelic drugs conditional on your cultural background?
37:39 - Dissolving the ego and the feeling of oneness
41:36 - Are psychedelics the new productivity hack?
43:48 - How can psychedelics enhance moral behaviour?
47:36 - How can a moral philosopher make sense of these effects?
51:12 - The MDMA case study
58:38 - How about MDMA assisted political negotiations?
1:02:11 - Could we achieve the same outcomes without drugs?
1:06:52 - Where should the research go from here?

Relevant Links
Brian's academia.edu page
Brian's researchgate page
Brian as Rob Walker (and his theatre reel)
'Psychedelic moral enhancement' by Brian Earp
'Moral Neuroenhancement' by Earp, Douglas and Savulescu
How to Change Your Mind by Michael Pollan
Interview with Ole Martin Moen on the ethics of psychedelics
The Doors of Perception by Aldous Huxley
Roland Griffiths Laboratory at Johns Hopkins
July 12, 2018
In this episode I talk to Reuben Binns. Reuben is a post-doctoral researcher at the Department of Computer Science in Oxford University. His research focuses on the technical, ethical and legal aspects of privacy, machine learning and algorithmic decision-making. We have a detailed and informative discussion (for me at any rate!) about recent debates about algorithmic bias and discrimination, and how they could be informed by the philosophy of egalitarianism. You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).

Show Notes
0:00 - Introduction
1:46 - What is algorithmic decision-making?
4:20 - Isn't all decision-making algorithmic?
6:10 - Examples of unfairness in algorithmic decision-making: The COMPAS debate
12:02 - Limitations of the COMPAS debate
15:22 - Other examples of unfairness in algorithmic decision-making
17:00 - What is discrimination in decision-making?
19:45 - The mental state theory of discrimination
25:20 - Statistical discrimination and the problem of generalisation
29:10 - Defending algorithmic decision-making from the charge of statistical discrimination
34:40 - Algorithmic typecasting: Could we all end up like William Shatner?
39:02 - Egalitarianism and algorithmic decision-making
43:07 - The role that luck and desert play in our understanding of fairness
49:38 - Deontic justice and historical discrimination in algorithmic decision-making
53:36 - Fair distribution vs Fair recognition
59:03 - Should we be enthusiastic about the fairness of future algorithmic decision-making?

Relevant Links
Reuben's homepage
Reuben's institutional page
'Fairness in Machine Learning: Lessons from Political Philosophy' by Reuben Binns
'Algorithmic Accountability and Public Reason' by Reuben Binns
'It's Reducing a Human Being to a Percentage: Perceptions of Justice in Algorithmic Decision-Making' by Binns et al
'Machine Bias' - the ProPublica story on unfairness in the COMPAS recidivism algorithm
'Inherent Tradeoffs in the Fair Determination of Risk Scores' by Kleinberg et al - an impossibility proof showing that you cannot minimise false positive rates and equalise accuracy rates across two populations at the same time, except in the rare case that the base rate for both populations is the same (see the short numerical sketch below)
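As a quick illustration of the Kleinberg et al result mentioned in the last link above, here is a minimal numerical sketch in Python. The two-score model and all of the numbers are my own illustrative assumptions, not figures from the paper; the point is only to show that a score can be perfectly calibrated within each of two groups and still produce very different false positive rates whenever the groups' base rates differ.

```python
# Minimal numerical sketch of the Kleinberg et al impossibility result cited
# above. All numbers are illustrative assumptions, not data from the paper:
# a score that is calibrated within each of two groups with different base
# rates cannot also equalise false positive rates across those groups.
import math

def calibrated_fpr(high_share: float, base_rate: float,
                   low_score: float = 0.2, high_score: float = 0.8) -> float:
    """FPR of a two-valued, within-group calibrated score, thresholded at 0.5.

    A fraction high_share of the group gets high_score, the rest low_score.
    Calibration means P(Y=1 | score=s) = s, so the group's base rate must
    equal its average score. Everyone scoring high_score is predicted positive.
    """
    assert math.isclose(base_rate,
                        high_share * high_score + (1 - high_share) * low_score)
    # FPR = P(score = high_score | Y=0) = high_share * (1 - high_score) / P(Y=0)
    return high_share * (1 - high_score) / (1 - base_rate)

# Group A: base rate 0.3 requires high_share = 1/6; Group B: 0.6 requires 2/3.
print(f"Group A FPR: {calibrated_fpr(1/6, 0.3):.3f}")  # ~0.048
print(f"Group B FPR: {calibrated_fpr(2/3, 0.6):.3f}")  # ~0.333
```

Both groups receive honest, calibrated scores, yet innocent members of the higher base-rate group are flagged roughly seven times as often; this is the structural tension behind the COMPAS debate discussed in the episode.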
June 29, 2018
In this episode I talk to Sven Nyholm about self-driving cars. Sven is an Assistant Professor of Philosophy at TU Eindhoven with an interest in moral philosophy and the ethics of technology. Recently, Sven has been working on the ethics of self-driving cars, focusing in particular on the ethical rules such cars should follow and who should be held responsible for them if something goes wrong. We chat about these issues and more. You can download the podcast here or listen below. You can also subscribe on iTunes and Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:22 - What is a self-driving car?
3:00 - Fatal crashes involving self-driving cars
5:10 - Could self-driving cars ever be completely safe?
8:14 - Limitations of the Trolley Problem
11:22 - What kinds of accident scenarios do we need to plan for?
17:18 - Who should decide which ethical rules a self-driving car follows?
23:47 - Why not randomise the ethical rules?
25:18 - Experimental findings on people's preferences with self-driving cars
29:16 - Is this just another typical applied ethical debate?
31:27 - What would a utilitarian self-driving car do?
36:30 - What would a Kantian self-driving car do?
39:33 - A contractualist approach to the ethics of self-driving cars
43:54 - The responsibility gap problem
46:12 - Scepticism of the responsibility gap: can self-driving cars be agents?
53:17 - A collaborative agency approach to self-driving cars
58:18 - So who should we blame if something goes wrong?
1:03:40 - Is there a duty to hand over driving to machines?
1:07:30 - Must self-driving cars be programmed to kill?

Relevant Links
Sven's faculty webpage
'The Ethics of Crashes with Self-Driving Cars, A Roadmap I' by Sven
'The Ethics of Crashes with Self-Driving Cars, A Roadmap II' by Sven
'Attributing Responsibility to Automated Systems: Reflections on Human-Robot Collaborations and Responsibility Loci' by Sven
'The Ethics of Accident Algorithms for Self-Driving Cars: An Applied Trolley Problem' by Nyholm and Smids
'Automated Cars meet Human Drivers: responsible human-robot coordination and the ethics of mixed traffic' by Nyholm and Smids
Episode #3 with Sven on Love Drugs, DBS and Self-Driving Cars
Episode #23 with Liu on Responsibility and Discrimination in Self-Driving Cars
June 4, 2018
In this episode I talk to Brett Frischmann and Evan Selinger about their book Re-engineering Humanity (Cambridge University Press, 2018). Brett and Evan are both former guests on the podcast. Brett is a Professor of Law, Business and Economics at Villanova University and Evan is Professor of Philosophy at the Rochester Institute of Technology. Their book looks at how modern techno-social engineering is affecting humanity. We have a wide-ranging conversation about the main arguments and ideas from the book. The book features lots of interesting thought experiments and provocative claims. I recommend checking it out. A highlight of this conversation for me was our discussion of the 'Free Will Wager' and how it pertains to debates about technology and social engineering. You can listen to the episode below or download it here. You can also subscribe on Stitcher and iTunes (the RSS feed is here).

Show Notes
0:00 - Introduction
1:33 - What is techno-social engineering?
7:55 - Is techno-social engineering turning us into simple machines?
14:11 - Digital contracting as an example of techno-social engineering
22:17 - The three important ingredients of modern techno-social engineering
29:17 - The Digital Tragedy of the Commons
34:09 - Must we wait for a Leviathan to save us?
44:03 - The Free Will Wager
55:00 - The problem of Engineered Determinism
1:00:03 - What does it mean to be self-determined?
1:12:03 - Solving the problem? The freedom to be off

Relevant Links
Evan Selinger's homepage
Brett Frischmann's homepage
Re-engineering Humanity - website
'Reverse Turing Tests: Are humans becoming more machine-like?' by me
Episode 4 with Evan Selinger on Privacy and Algorithmic Outsourcing
Episode 7 with Brett Frischmann on Human-Focused Turing Tests
Gregg Caruso on 'Free Will Skepticism and Its Implications: An Argument for Optimism'
Derk Pereboom on Relationships and Free Will
March 27, 2018
In this episode I talk to Dr James Schwartz. James teaches philosophy at Wichita State University. His primary area of research is the philosophy and ethics of space exploration, where he defends a position according to which space exploration derives its value primarily from the importance of the scientific study of the Solar System. He is editor (with Tony Milligan) of The Ethics of Space Exploration (Springer 2016) and his publications have appeared in Advances in Space Research, Space Policy, Acta Astronautica, Astropolitics, Environmental Ethics, Ethics & the Environment, and Philosophia Mathematica. He has also contributed chapters to The Meaning of Liberty Beyond Earth, Human Governance Beyond Earth, and Dissent, Revolution and Liberty Beyond Earth (each edited by Charles Cockell), and to Yearbook on Space Policy 2015. He is currently working on a book project, The Value of Space Science. We talk about all things space-related, including the scientific case for space exploration and the myths that befuddle space advocacy. You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).

Show Notes
0:00 - Introduction
1:40 - Why did James get interested in the philosophy of space?
3:17 - Is interest in the philosophy and ethics of space exploration on the rise?
6:05 - Do space ethicists always say "no"?
8:20 - Do we have a duty to explore space? If so, what kind of duty is this?
10:30 - Space exploration and the duty to ensure species survival
16:16 - The link between space ethics and environmental ethics: between misanthropy and anthropocentrism
19:33 - How would space exploration help human survival?
23:20 - The scientific value of space exploration: manned or unmanned?
28:30 - Why does the scientific case for space exploration take priority?
35:40 - Is it our destiny to explore space?
38:46 - Thoughts on Elon Musk and the Colonisation Project
44:34 - The Myths of Space Advocacy
51:40 - From space philosophy to space policy: getting rid of the myths
58:55 - The future of space philosophy

Relevant Links
Dr Schwartz's website - The Space Philosopher (with links to papers and works in progress)
'Space Settlement: What's the rush?' by James Schwartz
Myth-Free Space Advocacy Part I, Part II, Part III, Part IV by James Schwartz
Video of James's lecture on Worldship Ethics
'Prioritizing Scientific Exploration: A Comparison of Ethical Justifications for Space Development and Space Science' by James Schwartz
Episode 37 with Christopher Yorke (middle section deals with the prospects for a utopia in space)
March 3, 2018
In this episode I talk to Christopher Yorke. Christopher is a PhD candidate at The Open University. He specialises in the philosophical study of utopianism and is currently completing a dissertation titled 'Bernard Suits' Utopia of Gameplay: A Critical Analysis'. We talk about all things utopian, including what a 'utopia' is, why space exploration is associated with utopian thinking, and whether Bernard Suits is correct to say that games are the highest ideal of human existence. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
2:00 - Why did Christopher choose to study utopianism?
6:44 - What is a 'utopia'? Defining the ideal society
14:00 - Is utopia practically achievable?
19:34 - Why are dystopias easier to imagine than utopias?
23:00 - Blueprints vs Horizons - different understandings of the utopian project
26:40 - What do philosophers bring to the study of utopia?
30:40 - Why is space exploration associated with utopianism?
39:20 - Kant's Perpetual Peace vs the Final Frontier
47:09 - Suits's Utopia of Games: What is a game?
53:16 - Is game-playing the highest ideal of human existence?
1:01:15 - What kinds of games will Suits's utopians play?
1:14:41 - Is a post-instrumentalist society really intelligible?

Relevant Links
Christopher Yorke's Academia.edu page
'Prospects for Utopia in Space' by Christopher Yorke
'Endless Summer: What kinds of games will Suits's Utopians Play?' by Christopher Yorke
'The Final Frontier: Space Exploration as Utopia Project' by John Danaher
'The Utopia of Games: Intelligible or Unintelligible' by John Danaher
Other posts on utopianism and the good life
The Grasshopper by Bernard Suits
January 27, 2018
In this episode I talk to Sandra Wachter about the right to explanation for algorithmic decision-making under the GDPR. Sandra is a lawyer and Research Fellow in Data Ethics and Algorithms at the Oxford Internet Institute. She is also a Research Fellow at the Alan Turing Institute in London. Sandra's research focuses on the legal and ethical implications of Big Data, AI, and robotics as well as governmental surveillance, predictive policing, and human rights online. Her current work deals with the ethical design of algorithms, including the development of standards and methods to ensure fairness, accountability, transparency, interpretability, and group privacy in complex algorithmic systems. You can download the episode here or listen below. You can also subscribe on iTunes and Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
2:05 - The rise of algorithmic/automated decision-making
3:40 - Why are algorithmic decisions so opaque? Why is this such a concern?
5:25 - What are the benefits of algorithmic decisions?
7:43 - Why might we want a 'right to explanation' of algorithmic decisions?
11:05 - Explaining specific decisions vs. explaining decision-making systems
15:48 - Introducing the GDPR - What is it and why does it matter?
19:29 - Is there a right to explanation embedded in Article 22 of the GDPR?
23:30 - The limitations of Article 22
27:40 - When do algorithmic decisions have 'significant effects'?
29:30 - Is there a right to explanation in Articles 13 and 14 of the GDPR (the 'notification duties' provisions)?
33:33 - Is there a right to explanation in Article 15 (the access right provision)?
37:45 - Is there any hope that a right to explanation might be interpreted into the GDPR?
43:04 - How could we explain algorithmic decisions? Introducing counterfactual explanations (see the toy sketch after the links below)
47:55 - Clarifying the concept of a counterfactual explanation
51:00 - Criticisms and limitations of counterfactual explanations

Relevant Links
Sandra's profile page at the Oxford Internet Institute
Sandra's academia.edu page
'Why a right to explanation does not exist in the General Data Protection Regulation' by Wachter, Mittelstadt and Floridi
'Counterfactual explanations without opening the black box: Automated decisions and the GDPR' by Wachter, Mittelstadt and Russell
The General Data Protection Regulation
Article 29 working party guidance on the GDPR
'Do judges make stricter sentencing decisions when they are hungry?' and a Reply
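To fix ideas about what a counterfactual explanation looks like in practice, here is a toy sketch in Python. The loan rule, the features and the search procedure are all invented for illustration (this is not the method from the Wachter, Mittelstadt and Russell paper), but it captures the basic move discussed in the episode: rather than opening the black box, you report a nearby change to the inputs that would have flipped the decision.

```python
# Toy sketch of a 'counterfactual explanation': treat the model as a black
# box and search for the nearest input change that flips its decision.
# The decision rule and all numbers below are hypothetical illustrations.

def loan_model(income: float, debt: float) -> bool:
    """Stand-in black-box decision rule (hypothetical)."""
    return income - 0.5 * debt >= 50.0

def counterfactual(income: float, debt: float, step: float = 1.0,
                   max_steps: int = 200):
    """Search outward, one feature at a time, for the nearest decision flip."""
    original = loan_model(income, debt)
    for r in range(1, max_steps + 1):
        candidates = ((income + r * step, debt), (income - r * step, debt),
                      (income, debt + r * step), (income, debt - r * step))
        for cand in candidates:
            if loan_model(*cand) != original:
                return cand
    return None  # no flip found within the search budget

# A rejected applicant: income 40, debt 10 gives 40 - 5 = 35 < 50, so denied.
print(counterfactual(40.0, 10.0))
# Prints (55.0, 10.0): 'had your income been 55, the loan would have been
# granted'. That sentence is the counterfactual explanation.
```

The appeal of this style of explanation, as discussed around the 43-minute mark, is that it gives the data subject something actionable without requiring anyone to disclose or interpret the model's internals.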
January 15, 2018
In this episode I talk to Miles Brundage. Miles is a Research Fellow at the University of Oxford's Future of Humanity Institute and a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University. He is also affiliated with the Consortium for Science, Policy, and Outcomes (CSPO), the Virtual Institute of Responsible Innovation (VIRI), and the Journal of Responsible Innovation (JRI). His research focuses on the societal implications of artificial intelligence. We discuss the case for conditional optimism about AI. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:00 - Why did Miles write the conditional case for AI optimism?
5:07 - What is AI anyway?
8:26 - The difference between broad and narrow forms of AI
12:00 - Is the current excitement around AI hype or reality?
16:13 - What is the conditional case for AI conditional upon?
22:00 - The First Argument: The Value of Task Expedition
29:30 - The downsides of task expedition and the problem of speed mismatches
33:28 - How AI changes our cognitive ecology
36:00 - The Second Argument: The Value of Improved Coordination
40:50 - Wouldn't AI be used for malicious purposes too?
45:00 - Can we create safe AI in the absence of global coordination?
48:03 - The Third Argument: The Value of a Leisure Society
52:30 - Would a leisure society really be utopian?
56:24 - How were Miles's arguments received when presented at the EU parliament?

Relevant Links
Miles's Homepage
Miles's past publications
Miles at the Future of Humanity Institute
Video of Miles's presentation to the EU Parliament (starts at approx 10:05:19, or 1 hour and 1 minute into the video)
Olle Haggstrom's write-up about the EU parliament event
'Cognitive Scarcity and Artificial Intelligence' by Miles Brundage and John Danaher
January 4, 2018
In this episode I talk to Tom Lin. Tom is a Professor of Law at Temple University's Beasley School of Law. His research and teaching expertise are in the areas of corporations, securities regulation, financial technology, financial regulation, and compliance. Professor Lin's research has been published and cited by numerous leading law journals, and featured in The Wall Street Journal, Bloomberg News, and The Financial Times, among other media outlets. We talk about the rise of 'cyborg finance' (Cy-Fi) and the regulatory challenges it poses. You can download the episode here, or listen below. You can also subscribe on Apple Podcasts or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:30 - What is cyborg finance?
5:57 - What explains the rise of cyborg finance? Innovation, Regulation and Competition
9:00 - The problem of systemic risk in the financial system
15:05 - "Too Linked to Fail" - The first systemic risk of cyborg finance
19:30 - "Too Fast to Save" - the second systemic risk of cyborg finance
23:00 - The problem of short-term thinking in the financial system
27:15 - Does cyborg finance undermine the idea of the 'reasonable investor'?
34:57 - The problem of cybernetic market manipulation
37:44 - Are these genuinely novel threats or old threats in a new guise?
41:11 - Regulatory principles and values for the age of cyborg finance

Relevant Links
Tom's faculty webpage
Tom's SSRN page
'The New Investor' by Tom Lin
'The New Financial Industry' by Tom Lin
'The New Market Manipulation' by Tom Lin
Episode #22 - Wellman and Rajan on Automated Trading
Episode #25 - McNamara on Fairness, Utility and High Frequency Trading
December 11, 2017
In this episode I talk to Neil McArthur about a book that he and I recently co-edited entitled Robot Sex: Social and Ethical Implications (MIT Press, 2017). Neil is a Professor of Philosophy at the University of Manitoba where he also directs the Center for Professional and Applied Ethics. This is a free-ranging conversation. We talk about what got us interested in the topic of robot sex, our own arguments and ideas, some of the feedback we've received on the book, some of our favourite sexbot-related media, and where we think the future of the debate might go. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction to Neil
1:42 - How did Neil go from writing about David Hume to Robot Sex?
5:15 - Why did I (John Danaher) get interested in this topic?
6:49 - The astonishing media interest in robot sex
8:58 - Why did we put together this book?
11:05 - Neil's general outlook on the robot sex debate
16:41 - Could sex robots address the problems of loneliness and isolation?
19:46 - Why a passive and compliant sex robot might be a good thing
21:08 - Could sex robots enhance existing human relationships?
25:53 - Sexual infidelity and the intermediate ontological status of sex robots
31:23 - Ethical behaviourism and robots
34:36 - My perspective on the robot sex debate
37:32 - Some legitimate concerns about robot sex
44:20 - Some of our favourite arguments or ideas from the book (acknowledging that all the contributions are excellent!)
54:37 - Neil's book launch - some of the feedback from a lay audience
58:25 - Where will the debate go in the future? Neil's thoughts on the rise of the digisexual
1:02:54 - Our favourite fictional sex robots

Relevant Links
Robot Sex: Social and Ethical Implications (available on Amazon, BookDepository and from the Publisher)
Neil's homepage
Media coverage of our book
The Status Quo bias in applied ethics
'The Sex Robots are Coming: Seedy, sordid but mainly just sad' by Fiona Sturges
Our Guardian op-ed on the potential upside of sex robots
Richard Herring's sex robot sketches
Neil's article on the rise of the digisexual
Neil's one-man show on cryonics "Let Me Freeze Your Head!"
November 23, 2017
In this episode I talk to Adam Carter and Orestis Palermos. Adam is a Lecturer in Philosophy at the University of Glasgow. His primary research interests lie in the area of epistemology, but he has increasingly explored connections between epistemology and other disciplines, including bioethics (especially human enhancement), the philosophy of mind, and cognitive science. Orestis is a lecturer in philosophy at Cardiff University. His research focuses on how 'philosophy can impact the engineering of emerging technologies and socio-technical systems.' We talk about the theory of the extended mind and the idea of extended assault. You can download the episode here or listen to it below. You can also subscribe on iTunes and Stitcher (RSS feed).

Show Notes
0:00 - Introduction
0:55 - The story of David Leon Riley and the phone search
3:15 - What is extended cognition?
7:35 - Extended cognition vs extended mind - exploring the difference
13:35 - What counts as part of an extended cognitive system? The role of dynamical systems theory
19:14 - Does cognitive extension come in degrees?
24:18 - Are smartphones part of our extended cognitive systems?
28:10 - Are we over-extended? Do we rely too much on technology?
35:02 - Making the case for extended personal assault
39:50 - Does functional disability make a difference to the case for extended assault?
43:35 - Does pain matter to our understanding of assault?
49:50 - Does the replaceability/fungibility of technology undermine the case for extended assault?
55:00 - Online hacking as a form of personal assault
59:30 - The ethics of extended expertise
1:02:58 - Distributed cognition and distributed blame

Relevant Links
J Adam Carter's homepage
Orestis Palermos's homepage
'Is having your computer compromised a personal assault? The ethics of extended cognition' by Carter and Palermos
'Extended Cognition and the Possibility of Extended Assault' by John Danaher (summary of the above paper)
Dynamical systems theory
Clark and Chalmers, 'The Extended Mind'
Garry Kasparov, Deep Thinking
Richard Heersmink, 'The Internet, Cognitive Enhancement and the Values of Cognition'
October 28, 2017
In this episode I am joined by Woodrow Hartzog. Woodrow is currently a Professor of Law and Computer Science at Northeastern University (he was the Starnes Professor at Samford University's Cumberland School of Law when this episode was recorded). His research focuses on privacy, human-computer interaction, online communication, and electronic agreements. He holds a Ph.D. in mass communication from the University of North Carolina at Chapel Hill, an LL.M. in intellectual property from the George Washington University Law School, and a J.D. from Samford University. He previously worked as an attorney in private practice and as a trademark attorney for the United States Patent and Trademark Office. He also served as a clerk for the Electronic Privacy Information Center. We talk about the rise of automated law enforcement and the virtue of an inefficient legal system. You can download the episode here or listen below. You can also subscribe to the podcast via iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
2:00 - What is automated law enforcement? The 3 Steps
6:30 - What about the robocops?
10:00 - The importance of hidden forms of automated law enforcement
12:55 - What areas of law enforcement are ripe for automation?
17:53 - The ethics of automated prevention vs automated punishment
23:10 - The three reasons for automated law enforcement
26:00 - The privacy costs of automated law enforcement
32:13 - The virtue of discretion and inefficiency in the application of law
40:10 - An empirical study of automated law enforcement
44:35 - The conservation of inefficiency principle
48:40 - The practicality of conserving inefficiency
51:20 - Should we keep a human in the loop?
55:10 - The rules vs standards debate in automated law enforcement
58:36 - Can we engineer inefficiency into automated systems?
1:01:10 - When is automation desirable in law?

Relevant Links
Woody's homepage
Woody's SSRN page
'Inefficiently Automated Law Enforcement' by Woodrow Hartzog, Gregory Conti, John Nelson and Lisa Shay
'Obscurity and Privacy' by Woodrow Hartzog and Evan Selinger
Episode 4 with Evan Selinger on Algorithmic Outsourcing and Privacy
Knightscope Robots
Robocop joins Dubai police to fight real life crime
October 1, 2017
In this episode I am joined by Mark Bartholomew. Mark is a Professor at the University of Buffalo School of Law. He writes and teaches in the areas of intellectual property and law and technology, with an emphasis on copyright, trademarks, advertising regulation, and online privacy. His book Adcreep: The Case Against Modern Marketing was recently published by Stanford University Press. We talk about the main ideas and arguments from this book. You can download the episode here or listen below. You can also subscribe on iTunes and Stitcher (RSS is here).

Show Notes
0:00 - Introduction
0:55 - The crisis of attention
2:05 - Two types of Adcreep
3:33 - The history of advertising and its regulation
9:26 - Does the history tell a clear story?
12:16 - Differences between Europe and the US
13:48 - How public and private spaces have been colonised by marketing
16:58 - The internet as an advertising medium
19:30 - Why have we tolerated Adcreep?
25:32 - The corrupting effect of Adcreep on politics
32:10 - Does advertising shape our identity?
36:39 - Is advertising's effect on identity worse than that of other external forces?
40:31 - The modern technology of advertising
45:44 - A digital panopticon that hides in plain sight
48:22 - Neuromarketing: hype or reality?
55:26 - Are we now selling ourselves all the time?
1:04:52 - What can we do to redress adcreep?

Relevant Links
Mark's homepage
Adcreep: The Case Against Modern Marketing
'Is there any way to stop adcreep?' by Mark
'Branding Politics: Emotion, authenticity, and the marketing culture of American political communication' by Michael Serazio
'The Presentation of Self in Everyday Life' by Erving Goffman
September 22, 2017
In this episode, I talk to Phoebe Moore. Phoebe is a researcher and a Senior Lecturer in International Relations at Middlesex University. She teaches International Relations and International Political Economy and has published several books, articles and reports about labour struggle, industrial relations and the impact of technology on workers' everyday lives. Her current research, funded by a BA/Leverhulme award, focuses on the use of self-tracking devices in companies. She is the author of a book on this topic entitled The Quantified Self in Precarity: Work, Technology and What Counts, which has just been published. We talk about the quantified self movement, the history of workplace surveillance, and a study that Phoebe did on tracking in a Dutch company. You can download the episode here, or listen below. You can also subscribe on iTunes and Stitcher.

Show Notes
0:00 - Introduction
1:27 - Origins and Ethos of the Quantified Self Movement
7:39 - Does self-tracking promote or alleviate anxiety?
10:10 - The importance of gamification
13:09 - The history of workplace surveillance (Taylor and the Gilbreths)
16:27 - How is workplace quantification different now?
20:26 - The Agility Agenda: Workplace surveillance in an age of precarity
29:09 - Tracking affective/emotional labour
34:08 - Getting the opportunity to study the quantified worker in the field
38:18 - Can such workplace self-tracking exercises ever be truly voluntary?
41:05 - What were the key findings of the study?
46:07 - Why was there such a high drop-out rate?
49:37 - Did workplace tracking lead to increased competitiveness?
53:32 - Should we welcome or resist the quantified worker phenomenon?

Relevant Links
Phoebe's Webpage
The book: The Quantified Self in Precarity: Work, Technology and What Counts
The Quantified Self Movement Homepage
'Regulating Well-Being in the Brave New Quantified Workplace' by Phoebe Moore and Lukasz Piwek
'The Quantified Self: What Counts in the Neoliberal Workplace' by Phoebe Moore and Andrew Robinson
Previous interview with Deborah Lupton about her work on the quantified self
August 30, 2017
In this episode I am joined by Angela Walch. Angela is an Associate Professor at St. Mary's University School of Law. Her research focuses on money and the law, blockchain technologies, governance of emerging technologies and financial stability. She is a Research Fellow of the Centre for Blockchain Technologies of University College London. Angela was nominated for "Blockchain Person of the Year" for 2016 by Crypto Coins News for her work on the governance of blockchain technologies. She joins me for a conversation about the misleading terms used to describe blockchain technologies. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher.

Show Notes
0:00 - Introduction
2:06 - What is a blockchain?
6:15 - Is the blockchain distributed or shared?
7:57 - What's the difference between a public and private blockchain?
11:20 - What's the relationship between blockchains and currencies?
18:43 - What is a miner? What's the difference between a full node and a partial node?
22:25 - Why is there so much confusion associated with blockchains?
29:50 - Should we regulate blockchain technologies?
36:00 - The problems of inconsistency and perverse innovation
41:40 - Why blockchains are not 'immutable'
58:04 - Why blockchains are not 'trustless'
1:00:00 - Definitional problems in practice
1:02:37 - What is to be done about the problem?

Relevant Links
Angela's Homepage
Angela's Academia and SSRN pages
'The Path of the Blockchain Lexicon (and the Law)' by Angela Walch
'Call blockchain developers what they are: fiduciaries' by Angela Walch
Interview with Aaron Wright on Blockchain Technology and the Law
Interview with Rachel O'Dwyer on Bitcoin, Blockchains and the Digital Commons
July 26, 2017
In this episode I am joined by Frédéric Gilbert. Frédéric is a philosopher and bioethicist who is affiliated with quite a number of universities and research institutes around the world. He is currently a Scientist Fellow at the University of Washington (UW) in Seattle, US, but also holds a concurrent appointment with the Department of Medicine at the University of British Columbia, Vancouver, Canada. On top of that, he is an ARC DECRA Research Fellow at the University of Tasmania, Australia. We talk about the ethics of predictive brain implants. You can download the episode here or listen below. You can also subscribe on Stitcher or iTunes (the RSS feed is here).

Show Notes
0:00 - Introduction
1:50 - What is a predictive brain implant?
5:20 - What are we currently using predictive brain implants for?
7:40 - The three types of predictive brain implant
16:30 - Medical issues around brain implants
18:45 - Predictive brain implants and autonomy
22:40 - The effect of advisory implants on autonomy
35:20 - The effect of automated implants on autonomy
38:17 - Empirical findings on the experiences of patients
47:00 - Possible future uses of PBIs
51:25 - Dangers of speculative neuroethics

Relevant Links
Frédéric's homepage
Frédéric's page at the University of Tasmania
'A Threat to Autonomy? The Intrusion of Predictive Brain Implants' by Frédéric
'Are Predictive Brain Implants an Indispensable Feature of Autonomy?' by Frédéric and Mark Cook
'I Miss Being Me: Phenomenological Effects of Deep Brain Stimulation' by Frédéric and others
'The Tell-Tale Brain: The Effect of Predictive Brain Implants on Autonomy' by John Danaher
'If and Then: A Critique of Speculative Nanoethics' by Alfred Nordmann
July 17, 2017
In this episode I talk to Anthony Behan. Anthony is a technologist with an interest in the political and legal aspects of technology. We have a wide-ranging discussion about the automation of the law and the politics of technology. The conversation is based on Anthony's thesis 'The Politics of Technology: An Assessment of the Barriers to Law Enforcement Automation in Ireland' (a link to which is available in the links section below). You can download the episode here or listen below. You can also subscribe on Stitcher or iTunes (the RSS feed is here).

Show Notes
0:00 - Introduction
2:35 - The relationship between technology and humanity
5:25 - Technology and the legitimacy of the state
8:15 - Is the state a kind of technology?
13:20 - Does technology have a political orientation?
20:20 - Automated traffic monitoring as a case study
24:40 - Studying automated traffic monitoring in Ireland
30:30 - The mismatch between technology and legal procedure
33:58 - Does technology create new forms of governance or does it just make old forms more efficient?
39:40 - The problem of discretion
43:45 - The feminist gap in the debate about the automation of the state
49:15 - A mindful approach to automation
53:00 - Postcolonialism and resistance to automation

Relevant Links
Follow Anthony on Twitter
Anthony's Blog
'The Politics of Technology: An Assessment of the Barriers to Law Enforcement Automation in Ireland' by Anthony Behan
'The Politics of City Architecture' by Anthony Behan
Lewis Mumford
Jane Jacobs
Robert Moses
June 26, 2017
In this episode I am joined by Steven McNamara. Steven is a Professor of Law at the American University of Beirut, and is currently a visiting professor at the University of Florida School of Law. Once upon a time, Steven was a corporate lawyer. He is now an academic lawyer with interests in moral theory, business ethics and technological change in financial markets. He also has a PhD in philosophy and wrote a dissertation on Kant's use of Newtonian scientific method. We talk about the intersections between moral philosophy and high frequency trading, taking in the history of the U.S. stock market in the process. You can download the episode here. You can listen below. You can also subscribe on Stitcher and iTunes.

Show Notes
0:00 - Introduction
1:22 - The history of US stock markets
7:45 - The (regulatory) creation of a national market
13:10 - The origins of algorithmic trading
18:15 - What is High Frequency Trading?
21:30 - Does HFT 'rig' the market?
33:47 - Does the technology pose any novel threats?
40:30 - A utilitarian assessment of HFT: does it increase social welfare?
48:00 - Rejecting the utilitarian approach
50:30 - Fairness and reciprocity in HFT

Relevant Links
Steven McNamara's homepage at the University of Florida
'The Law and Ethics of High Frequency Trading' by Steven McNamara
Flash Boys by Michael Lewis
Dark Pools by Scott Patterson
'Michael Lewis reflects on Flash Boys' by Michael Lewis
'Moore's Law versus Murphy's Law: Algorithmic Trading and its Discontents' by Kirilenko and Lo
'A Sociology of Algorithms: High Frequency Trading and the Shaping of Markets' by Donald MacKenzie
June 7, 2017
In this episode I interview Joanna Bryson. Joanna is Reader in Computer Science at the University of Bath. Joanna's primary research interest lies in using AI to understand natural intelligence, but she is also interested in the ethics of AI and robotics, the social uses of robots, and the political and legal implications of advances in robotics. In the latter field, she is probably best known for her 2010 article 'Robots Should be Slaves'. We talk about the ideas and arguments contained in that paper as well as some related issues in roboethics. You can download the episode here or listen below. You can also subscribe on Stitcher or iTunes (or RSS).

Show Notes
0:00 - Introduction
1:10 - Robots and Moral Subjects
5:15 - The Possibility of Robot Moral Subjects
10:30 - Is it bad to be emotionally attached to a robot?
15:22 - Robots and legal/moral responsibility
19:57 - The standards for human robot commanders
22:22 - Are there some contexts in which we might want to create a person-like robot?
26:10 - Can we stop people from creating person-like robots?
28:00 - The principles that ought to guide robot design

Relevant Links
Joanna's Homepage
'Robots Should be Slaves' by Joanna
A Reddit 'Ask Me Anything' with Joanna
The EPSRC Principles of Robotics
Interview with David Gunkel on Robots and Cyborgs
Interview with Hin-Yan Liu on Robots and Responsibility
How to plug the robot responsibility gap
May 22, 2017
In this episode I talk to Hin-Yan Liu. Hin-Yan is an Associate Professor of Law at the University of Copenhagen. His research interests lie at the frontiers of emerging technology governance, and in the law and policy of existential risks. His core agenda focuses upon the myriad challenges posed by artificial intelligence (AI) and robotics regulation. We talk about responsibility gaps in the deployment of autonomous weapons and crash optimisation algorithms for self-driving cars. You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).

Show Notes
0:00 - Introduction
1:03 - What is an autonomous weapon?
4:14 - The responsibility gap in the autonomous weapons debate
7:20 - The circumstantial responsibility gap
13:44 - The conceptual responsibility gap
21:00 - A tracing solution to the conceptual problem?
27:47 - Should we use strict liability standards to plug the gap(s)?
29:48 - What can we learn from the child soldiers debate?
33:02 - Crash optimisation algorithms for self-driving cars
36:15 - Could self-driving cars give rise to structural discrimination?
46:10 - Why it may not be easy to solve the structural discrimination problem
49:35 - The Immunity Device Thought Experiment
54:12 - Distinctions between the immunity device and other forms of insurance
59:30 - What's missing from the self-driving car debate?

Links
Hin-Yan's faculty webpage
Hin-Yan's academia.edu page
'Autonomy in Weapons Systems' by Hin-Yan
'Refining Responsibility: Differentiating Two Types of Responsibility Issues Raised by Autonomous Weapons Systems' by Hin-Yan
'The Ethics of Crash Optimisation Algorithms' by John Danaher
'The Ethics of Autonomous Cars' by Patrick Lin
Interview with Sven Nyholm about Trolley Problems and Self-Driving Cars
May 12, 2017
In this episode, I am joined by Michael Wellman and Uday Rajan. Michael is a Professor of Computer Science & Engineering at the University of Michigan; Uday is a Professor of Business Administration and Chair and Professor of Finance and Real Estate at the same institution. Our conversation focuses on the ethics of autonomous trading agents on financial markets. We discuss algorithmic trading, high frequency trading, market manipulation, the AI control problem and more. You can download the episode here or listen below. You can also subscribe to the podcast on Stitcher or iTunes (here and here).

Show Notes
0:00 - Introduction
2:20 - What is an autonomous trading agent and how prevalent are they?
3:36 - High frequency trading as a type of autonomous trading
5:36 - General uses of AI in financial trading
6:45 - What are the social benefits of autonomous trading agents?
10:10 - AI-related scandals on financial markets (w/ comments on the 2010 Flash Crash)
13:47 - Constructing an autonomous trading agent to engage in arbitrage operations
14:44 - What is arbitrage?
17:10 - Describing AI-based arbitrage on index securities
24:30 - The advantages of using autonomous agents to do this
27:20 - The ethical challenges of using autonomous agents to do this
27:54 - Autonomous trading agents and spoofing transactions
34:15 - Autonomous trading agents and other forms of market manipulation
39:00 - How do we address the problems posed?
42:40 - General lessons for the AI control problem

Relevant Links
Michael Wellman's homepage
Uday Rajan's homepage
Michael and Uday's paper 'Ethical Issues for Autonomous Trading Agents'
The Flash Crash - Wikipedia
SEC Official Report on the Flash Crash
'Yom Kippur War Tweet Prompts Higher Oil Prices' - Huffington Post
Borussia Dortmund team bus bombing
Interview with Anders Sandberg about time compression in computing
May 2, 2017
In this episode, I talk to Mark Coeckelbergh. Mark is a Professor of Philosophy of Media and Technology at the Department of Philosophy of the University of Vienna and President of the Society for Philosophy and Technology. He also holds an affiliation as Professor of Technology and Social Responsibility at the Centre for Computing and Social Responsibility, De Montfort University, UK. We talk about robots and philosophy (robophilosophy), focusing on two topics in particular: first, the rise of carebots and the mechanisation of society; and second, Hegel's master-slave dialectic and its application to our relationship with technology. You can download the episode here. You can also listen below or subscribe on Stitcher and iTunes (via RSS) or here.

Show Notes
0:00 - Introduction
2:00 - What is a robot?
3:30 - What is robophilosophy? Why is it important?
4:45 - The phenomenological approach to roboethics
6:48 - What are carebots? Why do people advocate their use?
8:40 - Ethical objections to the use of carebots
11:20 - Could a robot ever care for us?
13:25 - Carebots and the Problem of Emotional Deception
18:16 - Robots, modernity and the mechanisation of society
21:50 - The Master-Slave Dialectic in Human-Robot Relationships
25:17 - Robots and our increasing alienation from reality
30:40 - Technology and the automation of human beings

Relevant Links
Mark's homepage
Human Being @ Risk by Mark Coeckelbergh
New Romantic Cyborgs by Mark Coeckelbergh
'Artificial agents, good care and modernity' by Mark Coeckelbergh
'The tragedy of the master: automation, vulnerability and distance' by Mark Coeckelbergh
'The Carebot Dystopia: An Analysis' by John Danaher
Hegel's Master-Slave Dialectic - explained on the Internet Encyclopedia of Philosophy
April 23, 2017
[Note: This was previously posted on my Algocracy project blog; I'm cross-posting it here now. The audio quality isn't perfect but the content is very interesting. It is a talk by Pip Thornton, the (former) Research Assistant on the project.]

My post as research assistant on the Algocracy & Transhumanism project at NUIG has come to an end. I have really enjoyed the five months I have spent here in Galway - I have learned a great deal from the workshops I have been involved in, the podcasts I have edited, the background research I have been doing for John on the project, and also from the many amazing people I have met both in and outside the university. I have also had the opportunity to present my own research to a wide audience, and most recently gave a talk on behalf of the Technology and Governance research cluster entitled A Critique of Linguistic Capitalism (and an artistic intervention) as part of a seminar series organised by the Whitaker Institute's Ideas Forum, which I managed to record.

Part of my research involves using poetry to critique linguistic capitalism and the way language is both written and read in an age of algorithmic reproduction. For the talk I invited Galway poet Rita Ann Higgins to help me explore the differing 'value' of words, so the talk includes Rita Ann reciting an extract from her award-winning poem Our Killer City, and my own imagining of what the poem 'sounds like' - or is worth - to Google. The argument central to my thesis is that the power held by the tech giant Google, as it mediates, manipulates and extracts economic value from the language (or more accurately the decontextualised linguistic data) which flows through its search, communication and advertising systems, needs both transparency and strong critique. Words are auctioned off to the highest bidder and become little more than tools in the creation of advertising revenue. But there are significant side effects, which can be both linguistic and political. Fake news sites are big business for advertisers and Google, but they also infect the wider discourse as they spread through social media networks and national consciousness. One of the big questions I am now starting to ask is just how resilient language is to this neoliberal infusion, and what this could mean politically. As the value of language shifts from conveyor of meaning to conveyor of capital, how long will it be before the linguistic bubble bursts? You can download it HERE or listen below.

Track Notes
0:00 - introduction and background
4:30 - Google Search & autocomplete - digital language and semantic escorts
6:20 - Linguistic Capitalism and Google AdWords - the wisdom of a linguistic marketplace?
9:30 - Google Ad Grants - politicising free ads: the Redirect Method, A Clockwork Orange and the neoliberal logic of countering extremism via Google search
16:00 - Google AdSense - fake news sites, click-bait and ad revenue - from Chicago ballot boxes to Macedonia - the ads are real but the news is fake
20:35 - Interventions #1 - combating AdSense (and Breitbart News) - the Sleeping Giants Twitter campaign
23:00 - Interventions #2 - Gmail and the American Psycho experiment
25:30 - Interventions #3 - my own {poem}.py project - critiquing AdWords using poetry, cryptography and a second-hand receipt printer
30:00 - special guest poet Rita Ann Higgins reciting Our Killer City
33:30 - Conclusions - a manifestation of postmodernism? sub-prime language - when does the bubble burst? commodified words as the master's tools - problems of method

Relevant Links
The Redirect Method
From Headline to Photograph, a Fake News Masterpiece - New York Times, 18 January 2017
How Facebook Powers Money Machines for Obscure Political 'News' Sites - The Guardian, 24 August 2016
How Teens in the Balkans are Duping Trump Supporters with Fake News - Buzzfeed, 4 November 2016
March 6, 2017
[If you like this blog, consider signing up for the newsletter...] In this episode I talk to Karen Yeung. Karen is a Chair in Law at the Dickson Poon School of Law, King's College London. She joined the School to help establish the Centre for Technology, Ethics and Law & Society ('TELOS'), of which she is now Director. Professor Yeung is an academic pioneer in the field of regulation studies (or 'regulatory governance' studies) and is a leading scholar concerned with critically examining governance of, and governance through, new and emerging technologies. We talk about her concept of 'hypernudging' and how it applies to the debate about algorithmic governance. You can download the episode here. You can also listen below or subscribe on Stitcher or iTunes (via RSS).

Show Notes
0:00 - Introduction
2:20 - What is regulation? Regulation vs Governance
6:35 - The Different Modes of Regulation
11:50 - What is nudging?
15:40 - Big data and regulation
21:15 - What is hypernudging?
32:30 - Criticisms of nudging: illegitimate motive, deception and opacity
41:00 - Applying these criticisms to hypernudging
47:35 - Dealing with the challenges of hypernudging
52:40 - Digital Gerrymandering and Fake News
59:20 - The need for a post-liberal philosophy?

Relevant Links
Karen's Homepage at KCL
Centre for Technology, Ethics, Law and Society
'Hypernudge': Big Data as a Mode of Regulation by Design - by Karen
'Are Design-Based Regulatory Instruments Legitimate?' - by Karen
'Algocracy as Hypernudging' - by John Danaher
'The Ethics of Nudging' - by Cass Sunstein
Episode on Predictive Policing with Andrew Ferguson
February 25, 2017
[If you like this blog, consider signing up for the newsletter...] In this episode I talk to Andrew Guthrie Ferguson about the past, present and future of predictive policing. Andrew is a Professor at the David A. Clarke School of Law at the University of the District of Columbia. He was formerly a supervising attorney at the Public Defender Service for the District of Columbia. He now teaches and writes in the area of criminal law, criminal procedure, and evidence. We discuss the ideas and arguments from his recent paper 'Policing Predictive Policing'. You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (via RSS).

Show Notes
0:00 - Introduction
2:55 - Why did Andrew start researching this topic?
4:50 - What is predictive policing?
6:25 - Hasn't policing always been predictive? What is the history of prediction in policing?
8:50 - How does predictive policing work? (Understanding Predictive Policing 1.0)
16:18 - Why the interest in this technology post-2009?
18:50 - The shift from place-based to person-based prediction (Predictive Policing 2.0 and 3.0)
24:35 - Are the concerns about person-based prediction overstated?
28:18 - How does predictive policing differ from policies like 'broken windows' policing?
31:40 - Are predictive policing systems racially biased? (Data vulnerabilities)
41:44 - Do predictive policing systems actually work?
52:46 - Are predictive policing systems transparent/accountable?
58:26 - How do these systems change police practice?
1:02:50 - Alternative visions for the use of predictive powers
1:10:22 - What about data security, privacy and data protection?
1:14:15 - Is the future dystopian or utopian?

Relevant Links
Professor Ferguson's Webpage
'Policing Predictive Policing' by Andrew Guthrie Ferguson
'Big Data and Predictive Reasonable Suspicion' by Andrew Guthrie Ferguson
'The Big Data Jury' by Andrew Guthrie Ferguson
'Predictive Prosecution' by Andrew Guthrie Ferguson
PredPol: The Predictive Policing Company
'Machine Bias' on ProPublica.org
'Randomized Controlled Field Trials of Predictive Policing' by Mohler et al
RAND report on Predictive Policing
January 30, 2017
In this episode I talk to Jonathan Pugh about bio-conservatism and human enhancement. Jonny is a Postdoctoral Research Fellow in Applied Moral Philosophy at the University of Oxford, on the Wellcome Trust funded project "Neurointerventions in Crime Prevention: An Ethical Analysis". His new paper, written with Guy Kahane and Julian Savulescu, 'Bio-Conservatism, Partiality, and The Human Nature Objection to Enhancement', is due out soon in The Monist. You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (via RSS).

Show Notes
0:00 - introduction
2:00 - what is the nature of human enhancement - the functionalist and welfarist accounts/models
10:30 - bio-conservative oppositions to enhancement - evaluative and epistemic approaches, the naturalistic fallacy
19:00 - Cohen's conservatism - intrinsic value - personal and particular valuing - art and pets
30:30 - personal values and bio-enhancement
40:30 - the partiality problem - who would you save from the river? Value-based partiality and discrimination
54:00 - species bias, human prejudice, partiality, family and nationalism - Bernard Williams, John Cottingham, Thomas Hurka, Samuel Scheffler, genetic enhancement
1:03:00 - should human enhancement be opposed on the grounds of bio-conservatism? Biological enhancement in the context of other social and technical changes - is conservatism a foundational moral principle?
1:11:00 - conclusion

Relevant Links
Jonny's Academia.edu page
Jonny's blog - jonathanpughethics.wordpress.com
Pugh, Kahane, and Savulescu - Bio-Conservatism, Partiality, and The Human Nature Objection to Enhancement (forthcoming)
Pugh, Kahane, and Savulescu - Cohen's Conservatism and Human Enhancement (2013)
Samuel Scheffler - Death and the Afterlife (2013)
Alfonso Cuaron - Children of Men (2006)
Ben Davies - Enhancement and Conservative Bias (2016)
Bernard Williams - The Human Prejudice (2006)
John Cottingham - Partiality, Favouritism and Morality (1986)
Thomas Hurka - The Justification of National Partiality (1997)
John Danaher - An Evaluative Conservative Case for Biomedical Enhancement
January 17, 2017
[If you like this blog, consider signing up for the newsletter...] In this episode I talk to Professor Steve Fuller about his sometimes controversial views on transhumanism, religion, science and technology, enhancement and evolution. Steve is Auguste Comte Professor of Social Epistemology at the University of Warwick. He is the author of a trilogy relating to the idea of a 'post-' or 'trans-' human future, all published with Palgrave Macmillan: Humanity 2.0: What It Means to Be Human Past, Present and Future (2011), Preparing for Life in Humanity 2.0 (2012) and (with Veronika Lipinska) The Proactionary Imperative: A Foundation for Transhumanism (2014). Our conversation focuses primarily on the arguments and ideas found in the last book of the trilogy. You can download the episode here or listen below. You can also subscribe via Stitcher or iTunes (via RSS).

Show Notes
0:00 - introduction
04:00 - untangling posthumanism and transhumanism via Bostrom, Hayles, Haraway
21:45 - the relationship between theology, science and technology
39:50 - theological and libertarian rationales of transhumanism
52:00 - freedom from suffering or a freedom to suffer? Questions of risk, consent and compensation
1:03:40 - the rehabilitation of eugenics - could it / should it be done?
1:13:50 - Darwinism and the intelligent design debate
1:22:00 - are there limits to transhumanism and enhancement? Homo sapiens, humanity and morphological freedom
1:28:00 - conclusion

Relevant Links
Rick Searle - podcast on the Dark Side of Transhumanism (2016)
Nick Bostrom - Why I Want To Be a Posthuman When I Grow Up (2008)
Donna Haraway - The Cyborg Manifesto (1991)
N. Katherine Hayles - How We Became Posthuman (1999)
Ray Kurzweil - The Singularity is Near (2005)
James Hughes - podcast on the Transhumanist Political Project (2016)
Zoltan Istvan's Transhumanist Party
The Oxford Handbook of the History of Eugenics
Bostrom & Sandberg - The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement (2009)
December 20, 2016
In this episode I talk to Anders Sandberg about the ethical implications of time compression - or the speeding up of computational tasks to quantum levels. Anders is a research associate at the Oxford Martin Programme on the Impacts of Future Technology, the Oxford Uehiro Centre for Practical Ethics, and the Oxford Centre for Neuroethics. His research at the Future of Humanity Institute centres on the management of low-probability high-impact risks, societal and ethical issues surrounding human enhancement, estimating the capabilities of future technologies, and very long-range futures. He is currently a senior researcher in the FHI-Amlin collaboration on the systemic risk of risk modelling. I ask Anders about his latest research on time compression in computing, and about the effects this might have on human values and society. You can download the episode here. You can listen below. You can also subscribe on Stitcher and iTunes (via RSS).

Show Notes
0:00 - Introduction
1:00 - the future of humanity in the face of the Trump election
3:50 - the ethics and risks of time compression in computing - speed, space and Moore's law
9:50 - quantum computing and its limits, the Margolus-Levitin limit, the Bekenstein Bound, algorithmic complexity & the ultimate laptop
18:40 - limits of cryptography and light speed
28:20 - why speed and time matter in human life - the economics of productivity
36:35 - the value of temporal location - being first/being last - winner takes all markets - hyperbolic discounting
46:15 - automated trading & high frequency trading algorithms - instability, speed and space - flash crashes - algorithms and their sense of humour
56:00 - speed inequalities & mismatches, loss of control, hard take-off scenarios - technological unemployment
1:12:50 - can we speed up humans?

Relevant Links
Anders' contribution to From Algorithmic States to Algorithmic Brains
Anders' webpage at the Future of Humanity Institute, Oxford
Richard Feynman - Plenty of Room at the Bottom (1959)
Bernard Williams - The Makropulos Case: Reflections on the Tedium of Immortality
Daniel Kahneman - Thinking, Fast and Slow
December 18, 2016
[This is a cross-post from the Algocracy and Transhumanism blog. It's a short podcast by the Research Assistant on the Project - Pip Thornton. Check out her blog here.]

I started work as the research assistant on the Algocracy and Transhumanism project in September, and John has invited me to record a short podcast about some of my own PhD research on Language in the Age of Algorithmic Reproduction. You can download the podcast here or listen above. The podcast relates to a project called {poem}.py, which is explained in greater detail here on my blog. The project involves making visible the workings of linguistic capitalism by printing out receipts for poetry which has been passed through Google's advertising platform AdWords. I have presented the project twice now, each time asking fellow presenters for their favourite poem or lyric, which I then process through the Keyword Planner and print out on a receipt printer for them to take home. I often get asked what the most expensive poem is, and of course it depends on the length, but the winner so far is The Wasteland by T.S. Eliot, which was requested by David Gunkel at the Algorithmic Brains to Algorithmic States workshop in September, and which came in at £1738.57 and several metres. In the podcast I use three clips - an excerpt from The Wasteland, a performance poem by Jemima Foxtrot, and the introduction to Billy Bragg's Between the Wars - and think about how the words contained in each piece might fare in the linguistic marketplace. You can watch Jemima's performance in full below. Jemima Foxtrot - Bog Eye Man from Craig Bilham on Vimeo. I also want to give a proper airing to Rita Ann Higgins' poem Our Killer City, which I reference in the podcast but play an 'alternative' version of. You can watch Rita reciting her poem below.
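For readers curious about the mechanics, here is a minimal sketch of the pricing step behind a {poem}.py-style receipt. It is illustrative only: the word-price table below is invented, whereas the actual project took its prices from suggested bids in Google's AdWords Keyword Planner, which is queried through an advertiser account rather than a simple function call like this.

```python
import re

# Hypothetical per-click prices in GBP. The real project used suggested
# bid prices looked up in Google's AdWords Keyword Planner.
BID_PRICES = {"april": 0.42, "cruellest": 0.08, "month": 1.10}

def price_poem(text, prices, default=0.05):
    """Split a poem into words and price each one, receipt-style."""
    words = re.findall(r"[a-z']+", text.lower())
    priced = [(word, prices.get(word, default)) for word in words]
    return priced, sum(price for _, price in priced)

breakdown, total = price_poem("April is the cruellest month", BID_PRICES)
for word, price in breakdown:
    print(f"{word:<12} £{price:.2f}")   # one receipt line per word
print(f"{'TOTAL':<12} £{total:.2f}")
```

On this toy input the 'receipt' totals £1.70; it is real suggested bids, multiplied across several metres of poem, that produce figures like the £1738.57 quoted above for The Wasteland.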
November 21, 2016
In this episode I talk to Nicole Vincent. Nicole is an international philosopher extraordinaire. She has appointments at Georgia State University, TU Delft (Netherlands) and Macquarie University (Sydney). Nicole's work focuses on the philosophy of responsibility, cognitive enhancement and neuroethics. We talk about two main topics: (i) can neuroscience make us happier? and (ii) how should we think about radically changing ourselves through technology? You can download the episode here. You can also listen below or subscribe on Stitcher or iTunes (via RSS feed).

Show Notes
0:00 - 0:50 - Introduction to Nicole
0:50 - 8:50 - What is a happy life? Objective vs Subjective Views
8:50 - 13:20 - What is a meaningful life? Does meaning differ from happiness?
13:20 - 17:03 - Who knows best about our own happiness? Can scientists tell if we are happy?
17:03 - 25:25 - The distinction between occurrent (in the moment) happiness and dispositional (predictive) happiness
25:25 - 37:05 - The danger of scientists thinking they know best about occurrent happiness
37:05 - 46:20 - Could scientists know best about dispositional happiness?
46:20 - 56:05 - Neuroplasticity and the normative value of facts about the brain
56:05 - 1:01:45 - What if technology allows us to change everything about ourselves?
1:01:45 - 1:05:40 - Nicole's opposition to radical transhumanism
1:05:40 - 1:13:50 - How should we think about transformative change?
1:13:50 - End - How should society regulate technologies that allow for transformative change?

Relevant Links
Nicole's homepage
Nicole talking about Enhancing Responsibility at TEDxSydney
Nicole's framework for understanding responsibility
Nicole's paper with Stephanie Hare 'Happiness, Cerebroscopes and Incorrigibility: Prospects for Neuroeudaimonia'
'Who Knows Best? Personal Happiness and the Search for a Good Life' - John Danaher
Transformative Experience - L.A. Paul
'What You Can't Expect When You Are Expecting' - L.A. Paul
November 6, 2016
In this episode I interview programmer and lawyer Aaron Wright. Aaron is an expert in corporate and intellectual property law, with extensive experience in Internet and new technology issues. He is a professor at Cardozo Law School and the Director of the School's Tech Startup Clinic. I speak to Aaron about the issues arising from his forthcoming book about blockchain technology and the law (co-authored with Primavera De Filippi), which will be published by Harvard University Press. You can download the episode here. You can listen below. You can also subscribe on Stitcher and iTunes via RSS.

Show Notes
0:00 - 1:58 - Introduction
1:58 - 8:08 - What is a blockchain?
8:08 - 11:10 - Explanation of bitcoin
11:10 - 15:00 - The role of cryptography in the blockchain
15:00 - 19:55 - Consensus-based networks
19:55 - 27:15 - What are the other uses of the blockchain?
27:15 - 32:40 - Using micropayments for content access
32:40 - 48:20 - Organising human behaviour by smart contracts
48:20 - 54:42 - The internet of things
54:42 - 56:24 - How safe and secure are blockchains?
56:24 - 1:02:40 - The Ethereum hack
1:02:40 - 1:10:50 - The asymmetry problem
1:10:50 - 1:16:14 - Regulating the blockchain & the lex cryptographia
1:16:14 - 1:20:40 - Lex mercatoria & lex informatica
1:20:40 - End - Forthcoming book and other publications

Relevant Links
Decentralized Blockchain Technology and the Rise of Lex Cryptographia - Aaron Wright & Primavera De Filippi
Blockchains and the Emergence of a Lex Cryptographia - from John's blog
Future Crimes: Inside the Digital Underground and the Battle for Our Connected World - Marc Goodman
Bitcoin is Teaching Realism to Libertarians: An Interview with an Old-School Cypherpunk
Lex Informatica: Foundations of Law on the Internet - Aron Mefford
Lex Mercatoria: The Emergence of a Self-Regulated Bitcoin
October 22, 2016
In this episode I interview Dr Laura Cabrera. Laura is an Assistant Professor at the Center for Ethics and Humanities in the Life Sciences at Michigan State University, where she conducts research into the ethical and societal implications of neurotechnology. I ask Laura how human enhancement can affect inter-personal communication and values, and we talk about the issues in her recent book Rethinking Human Enhancement: Social Enhancement and Emergent Technologies. You can download the show here or listen below. You can also subscribe on Stitcher and iTunes (click 'add to iTunes').

Show Notes
0:00 - 1:00 - Introduction
1:00 - 11:15 - What is human enhancement? Definitions and translations
11:15 - 13:35 - Discussing moral enhancement - Savulescu and Persson
13:35 - 14:35 - Human enhancement and communication - discussing Laura's paper with John Weckert
14:35 - 28:40 - Shared lifeworlds, similar bodies, communication problems
28:40 - 39:48 - Augmented reality and sensory perception
39:48 - 46:20 - Cognitive capacity and memory - Oliver Sacks & Borges
46:20 - 49:50 - Ethics - hermeneutic crises and empathy gaps
49:50 - 52:30 - Can technology solve communication problems?
53:32 - 1:00:00 - What are human values?
1:00:00 - 1:08:20 - How does cognitive enhancement affect values?
1:08:20 - 1:16:00 - Neoliberal values - pressures and competitiveness
1:16:00 - End - How to prioritise values and see the positives in enhancement

Relevant Links
Laura's recent book Rethinking Human Enhancement: Social Enhancement and Emergent Technologies
Human Enhancement and Communication: On Meaning and Shared Understanding - Laura Cabrera & John Weckert
Laura's homepage at the Center for Ethics & Humanities in the Life Sciences
The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity - by Savulescu & Persson
What Is It Like to Be a Bat? - Thomas Nagel
The Man Who Mistook His Wife for a Hat - Oliver Sacks
The Country of the Blind - H.G. Wells
Funes the Memorious - Borges
October 9, 2016
In this episode I interview Rick Searle. Rick is an author living in Amish country in Pennsylvania. He is a prolific writer and commentator on all things technological. I get Rick to educate me about the darker aspects of the transhumanist philosophy - in particular, what Rick finds disturbing in the writings of Zoltan Istvan, Steve Fuller and the Neoreactionaries. You can download the episode here. You can listen below. You can also subscribe on Stitcher or iTunes (click 'add to iTunes').

Show Notes
0:00 - 1:40 - Introduction
1:40 - 4:40 - Rick's definition of Transhumanism
4:40 - 10:10 - Zoltan Istvan and the Transhumanist Wager
10:10 - 16:35 - The philosophy of teleological egocentric functionalism - Ayn Rand on steroids?
16:35 - 22:30 - Steve Fuller's Humanity 2.0
22:30 - 28:00 - Some disturbing conclusions?
28:00 - 32:20 - The ontology and ethics of Humanity 2.0
32:20 - 36:55 - Stalinism as Transhumanism
43:25 - 47:00 - Transhumanism as religion
47:00 - 56:30 - The neo-reactionaries of Silicon Valley
56:30 - End - Is democracy fit for the future?

Relevant Links
Rick Searle's blog Utopia or Dystopia?
Rick's profile page on the IEET
The Transhumanist Wager - by Zoltan Istvan
'Betting Against the Transhumanist Wager' - by Rick Searle
'The Terrifying Banality of Humanity 2.0' - by Rick Searle
Humanity 2.0 - by Steve Fuller
'We May Look Crazy to Them but They Look Like Zombies to Us' - by Steve Fuller (this generated lots of controversy on the IEET page when published. To be clear, Fuller claims he was using irony to make a point about the transhumanist worldview)
'Politics as Zombie Warfare: Against Steve Fuller's Transhumanism' - by David Roden
'Stalinism as Transhumanism' - by Rick Searle
'Silicon Secessionists' - by Rick Searle
'Shedding Light on Peter Thiel's Dark Enlightenment' - by Rick Searle
'Mouthbreathing Machiavellis Dream of a Silicon Reich' - by Corey Pein, The Baffler
September 11, 2016
In this episode I talk to Sabina Leonelli. Sabina is an Associate Professor at the Department of Sociology, Philosophy and Anthropology at the University of Exeter. She is the Co-Director of the Exeter Centre for the Study of the Life Sciences (Egenis), where she leads the Data Studies research strand. Her research focuses primarily on the philosophy of science, and in particular on the philosophy of data-intensive science. Her work is currently supported by the ERC Starting Grant DATA_SCIENCE. I talk to Sabina about the impact of big data on the scientific method and how large databases get constructed and used in scientific inquiry. You can listen below. You can also download here, or subscribe via Stitcher and iTunes (just click 'add to iTunes').

Show Notes
0:00 - 1:40 - Introduction
1:40 - 10:19 - How the scientific method is traditionally conceived and how data is relevant to the method as traditionally conceived
10:19 - 13:40 - Big Data in science
13:40 - 18:30 - Will Big Data revolutionise scientific inquiry? Three key arguments
18:30 - 24:13 - Criticisms of these three arguments
24:13 - 29:20 - How model organism databases get constructed in the biosciences
29:20 - 36:30 - Data journeys in science (Step 1): Decontextualisation
36:30 - 41:20 - Data journeys in science (Step 2): Recontextualisation
41:20 - 47:15 - Opacity and bias in databases
51:55 - 57:00 - Data journeys in science (Step 3): Usage
57:00 - 1:00:30 - The Replicability Crisis and Open Data
1:00:30 - End - Transparency and legitimacy and dealing with different datasets

Relevant Links
Dr Leonelli's Homepage
The DataScience Project (The Epistemology of Data-Intensive Science)
'What Difference Does Quantity Make? On the Epistemology of Big Data in Biology' by Sabina Leonelli
'Why the Current Insistence on Open Access to Scientific Data? Big Data, Knowledge Production and the Political Economy of Contemporary Biology' by Sabina Leonelli
'Sticks and Carrots: Encouraging Open Science at its Source' by Sabina Leonelli, Daniel Spichtinger, and Barbara Prainsack
Big Data: A Revolution That Will Transform How We Live, Work and Think by Cukier and Mayer-Schönberger
'Big Data is Better Data' by Kenneth Cukier (TED Talk)
Model Organism Databases - links to the leading model organism databases
Estimating the Reproducibility of Psychological Science (by Nosek et al)
No Evidence for a Replicability Crisis in Psychological Science (by Gilbert et al)
August 27, 2016
This is the tenth episode in the Algocracy and Transhumanism Podcast. In this episode I talk to David Gunkel. David is a professor of communication studies at Northern Illinois University. He specialises in the philosophy and ethics of technology. He is the author of several books, including Hacking Cyberspace, The Machine Question and Of Remixology. I talk to David about two main topics: (i) robot rights and responsibilities and (ii) the cyborgification of society. You can download the episode at this link. You can listen below. You can also subscribe on Stitcher and iTunes (via RSS - click on 'add to iTunes').

Show Notes
0:00 - 1:50 - Introduction
1:50 - 4:23 - Robots in the News
4:23 - 10:46 - How to think about robots: agency vs patiency
10:46 - 13:20 - The problem of distributed agency
13:20 - 18:00 - Robots as tools, machines and agents
18:00 - 24:25 - The spectrum of robot autonomy
24:25 - 28:04 - Machine learning: is it different this time?
28:04 - 39:40 - Should robots have rights and responsibilities?
39:40 - 43:55 - New moral patients and emotional manipulation
43:55 - 57:14 - Understanding the three types of cyborg
57:14 - 1:02:26 - The Borg and the Hivemind Society
1:02:26 - End - Cyborgification as a threat to Enlightenment values

Relevant Links
David's Homepage
Hacking Cyberspace by David
The Machine Question by David
Of Remixology by David
'Responsible Machines: The Opportunities and Challenges of Artificial Autonomous Agents' by David
'Facing Animals: A Relational, Other-Oriented Approach to Moral Standing' by David
'Resistance is Futile: Cyborgs, Humanism and the Borg' by David
'Ecce Cyborg: The Subject of Communication' by David
'Is Modern Technology Creating a Borg-like Society?' by John Danaher
'Is Resistance Futile? Are We Already Borg?' by John Danaher
'Robots, Law and the Retribution Gap' by John Danaher
The Bomb Robot and the Dallas Shooter
EU Parliament report on Civil Law Rules and Robots
'Microsoft's Disastrous Tay Experiment Shows Hidden Dangers of AI' by John West
'How Google's AlphaGo Beat Lee Sedol' by Christopher Moyer, The Atlantic
'The Question Concerning Technology' by Martin Heidegger
Peter-Paul Verbeek - University of Twente
'Robots Should be Slaves' by Joanna Bryson
'Extending Legal Protection to Social Robots' by Kate Darling
'A Cyborg Manifesto' by Donna Haraway
How We Became Posthuman by N. Katherine Hayles
August 17, 2016
This is the ninth episode in the Algocracy and Transhumanism Podcast. In this episode I talk to Rachel O'Dwyer, who is currently a postdoc at Maynooth University. We have a wide-ranging conversation about the digital commons, money, bitcoin and blockchain governance. We look at the historical origins of the commons, the role of money in human society, the problems with bitcoin and the creation of blockchain governance systems. You can download the podcast at this link. You can also listen below, or subscribe on Stitcher and iTunes (via RSS feed - just click 'add to iTunes').

Show Notes
0:00 - 0:40 - Introduction
0:40 - 9:00 - The history of the digital commons
9:00 - 17:20 - What is money? What role does it play in society?
17:20 - 29:20 - The value of transactional data and how it gets tracked
29:20 - 34:25 - The centralisation of transactional data tracking and its role in algorithmic governance
34:25 - 37:50 - Resisting transactional data-tracking
37:50 - 46:00 - What is bitcoin? What is a cryptocurrency?
46:00 - 54:25 - Can bitcoin be a currency of the digital commons?
54:25 - 1:04:47 - The promise of blockchain governance: smart contracts and smart property
1:04:47 - End - Criticisms of blockchain governance - the creation of an ultra-neo-liberal governance subject?

Relevant Links
Rachel's Academia.edu page
Rachel on Twitter
Rachel's profile on the OpenHere webpage
Interference journal (founded by Rachel)
'The Revolution Will Not be Decentralised: Blockchain-based Technologies and the Commons' - by Rachel O'Dwyer
'Other Values: Considering Digital Currency as Commons' - by Rachel O'Dwyer
'The Second Enclosure Movement and the Construction of the Public Domain' - by James Boyle
Where's George? - physical currency tracking website
'Blockchains, Smart Contracts and Smart Property' - by John Danaher
'Blockchains, DAOs and the Modern Leviathan' - by John Danaher
'Blockchains and the Emergence of a Lex Cryptographia' - by John Danaher
'Distributed Ledger Technology: Beyond the Blockchain' - UK Gov Science Advisor