
80,000 Hours Podcast

The 80,000 Hours team
Latest episodes

329 episodes

  • Village gossip, pesticide bans, and gene drives: 17 experts on the future of global health

    2026-04-07 | 4 h 6 min.
    What does it really take to lift millions out of poverty and prevent needless deaths?
    In this special compilation episode, 17 past guests — including economists, nonprofit founders, and policy advisors — share their most powerful and actionable insights from the front lines of global health and development. You’ll hear about the critical need to boost agricultural productivity in sub-Saharan Africa, the staggering impact of lead poisoning on children in low-income countries, and the social forces that contribute to high neonatal mortality rates in India.
    What’s so striking is how some of the most effective interventions sound almost too simple to work: banning certain pesticides, replacing thatch roofs, or identifying village “influencers” to spread health information.
    Full transcript and links to learn more: https://80k.info/ghd
    Chapters:
    Cold open (00:00:00)
    Luisa’s intro (00:00:58)
    Development consultant Karen Levy on why pushing for “sustainable” programmes isn’t as good as it sounds (00:02:15)
    Economist Dean Spears on the social forces and gender inequality that contribute to neonatal mortality in Uttar Pradesh (00:06:55)
    Charity founder Sarah Eustis-Guthrie on what we can learn from the massive failure of PlayPumps (00:14:33)
    Economist Rachel Glennerster on how randomised controlled trials are just one way to better understand tricky development problems (00:19:05)
    Data scientist Hannah Ritchie on why improving agricultural productivity in sub-Saharan Africa is critical to solving global poverty (00:24:36)
    Charity founder Lucia Coulter on the huge, neglected upsides of reducing lead exposure (00:47:48)
    Malaria expert James Tibenderana on using gene drives to wipe out the species of mosquitoes that cause malaria (00:53:11)
    Charity founder Varsha Venugopal on using village gossip to get kids their critical immunisations (01:04:14)
    Rachel Glennerster on solving tough global problems by creating the right incentives for innovation (01:11:31)
    Karen Levy on when governments should pay for programmes instead of NGOs (01:26:51)
    Open Philanthropy lead Alexander Berger on declining returns in global health, and finding and funding the most cost-effective interventions (01:29:40)
    GiveWell researcher James Snowden on making funding decisions with tricky moral weights (01:34:44)
    Lucia Coulter on “hits-based giving” approaches to funding global health and development projects (01:43:01)
    Rachel Glennerster on whether it’s better to fix problems in education with small-scale interventions versus systemic reforms (01:48:12)
    GiveDirectly cofounder Paul Niehaus on why it’s so important to give aid recipients a choice in how they spend their money (01:51:09)
    Sarah Eustis-Guthrie on whether more charities should scale back or shut down, and aligning incentives with beneficiaries (01:56:12)
    James Tibenderana on why we need loads better data to harness the power of AI to eradicate malaria (02:11:22)
    Lucia Coulter on rapidly scaling a light-touch intervention to more countries (02:20:14)
    Karen Levy on why pre-policy plans are so great at aligning perspectives (02:32:47)
    Rachel Glennerster on the value we get from doing the right RCTs well (02:40:04)
    Economist Mushtaq Khan on really drilling down into why “context matters” for development work (02:50:13)
    GiveWell cofounder Elie Hassenfeld on contrasting GiveWell’s approach with the subjective wellbeing approach of Happier Lives Institute (02:57:24)
    James Tibenderana on whether people actually use antimalarial bed nets for fishing — and why that’s the wrong thing to focus on (03:05:30)
    Karen Levy on working with governments to get big results (03:10:53)
    Leah Utyasheva on how a simple intervention reduced suicide in Sri Lanka by 70% (03:17:38)
    Karen Levy on working with academics to get the best results on the ground (03:29:03)
    James Tibenderana on the value of working with local researchers (03:32:15)
    Lucia Coulter on getting buy-in from both industry and government (03:35:05)
    Alexander Berger on reasons neartermist work makes sense even by longtermist standards (03:39:26)
    Economist Shruti Rajagopalan on the key skills to succeed in public policy careers, and seeing economics in everything (03:47:42)
    J-PAL lead Claire Walsh on her career advice for young people who want to get involved in global health and development (03:55:20)
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Content editing: Katy Moore and Milo McGuire
    Music: CORBIT
    Coordination, transcriptions, and web: Katy Moore
  • What everyone is missing about Anthropic vs the Pentagon. And: The Meta leaks are worse than you think.

    2026-04-03 | 20 min.
    When the Pentagon tried to strong-arm Anthropic into dropping its ban on AI-only kill decisions and mass domestic surveillance, the company refused. Its critics went on the attack: Anthropic and its supporters are some combination of 'hypocritical', 'naive', and 'anti-democratic'. Rob Wiblin dissects each claim, finding that all three are mediocre arguments dressed up as hard truths. (Though the 'naive' one is at least interesting.)
    Watch on YouTube: What Everyone is Missing about Anthropic vs The Pentagon
    Plus, from 13:43: Leaked documents from Meta revealed that 10% of the company's total revenue — around $16 billion a year — came from ads for scams and goods Meta had itself banned. These likely enabled the theft of around $50 billion a year from Americans alone. But when an internal anti-fraud team developed a screening method that halved the rate of scams coming from China... well, it wasn't well received.
    Watch on YouTube: The Meta Leaks Are Worse Than You Think
    Chapters:
    Introduction (00:00:00)
    What Everyone is Missing about Anthropic vs The Pentagon (00:00:26)
    Charge 1: Hypocrisy (00:01:21)
    Charge 2: Naivety (00:04:55)
    Charge 3: Undemocratic (00:09:38)
    You don't have to debate on their terms (00:12:32)
    The Meta Leaks Are Worse Than You Think (00:13:43)
    Three fixes for social media's scam problem (00:16:48)
    We should regulate AI companies as strictly as banks (00:18:46)
    Video and audio editing: Dominic Armstrong and Simon Monsour
    Transcripts and web: Elizabeth Cox and Katy Moore
  • Could a biologist armed with AI kill a billion people? | Dr Richard Moulange

    2026-03-31 | 3 h 7 min.
    Last September, scientists used an AI model to design genomes for entirely new bacteriophages (viruses that infect bacteria). They then built them in a lab. Many were viable. And despite being entirely novel, some even outperformed existing viruses from that family.

    That alone is remarkable. But as today's guest — Dr Richard Moulange, one of the world's top experts on 'AI–Biosecurity' — explains, it's just one of many data points showing how AI is dissolving the barriers that have historically kept biological weapons out of reach.
    For years, experts have reassured us that 'tacit knowledge' — the hands-on, hard-to-Google lab skills needed to work with dangerous pathogens — would prevent bad actors from weaponising biology. So far, they've been right.

    But as of 2025, that reassurance is crumbling. The Virology Capabilities Test measures exactly this kind of troubleshooting expertise, and finds that modern AI models crush top human virologists even in their self-declared area of greatest specialisation and expertise — 45% to 22%.
    Meanwhile, Anthropic’s research shows PhD-level biologists getting meaningfully better at weapons-relevant tasks with AI assistance — with the effect growing with each new model generation.
    Richard joins host Rob Wiblin to discuss all that plus:
    What AI biology tools already exist
    Why mid-tier actors (not amateurs) are the ones getting the most dangerous boost
    The three main categories of defence we can pursue
    Whether there’s a plausible path to a world where engineered pandemics become a thing of the past
    This episode was recorded on January 16, 2026. Since recording this episode, Richard has been seconded to the UK Government — please note that the views he expresses here are entirely his own.
    Links to learn more, video, and full transcript: https://80k.info/rm
    Announcements:
    Our new book is available to preorder: 80,000 Hours: How to have a fulfilling career that does good is written by our cofounder Benjamin Todd. It’s a completely revised and updated edition of our existing career guide, with a big new section on AI — covering both the risks and the potential to steer it in a better direction, and how AI automation should affect your career planning and which skills you choose to specialise in. Preorder now: https://geni.us/80000Hours
    We're hiring contract video editors for the podcast! For more information, check out the expression of interest page on the 80,000 Hours website: https://80k.info/video-editor
    Chapters:
    Cold open (00:00:00)
    Who's Richard Moulange? (00:00:31)
    AI can now design novel genomes (00:01:11)
    The end of the 'tacit knowledge' barrier (00:04:34)
    Are risks from bioterrorists overstated? (00:18:20)
    The 3 key disasters AI makes more likely (00:22:41)
    Which bad actors does AI help the most? (00:30:03)
    Experts are more scary than amateurs (00:41:17)
    Barriers to bioterrorists using AI (00:46:43)
    AI biorisks are sometimes dismissed (and that's a huge mistake) (00:48:54)
    Advanced AI biology tools we already have or will soon (01:04:10)
    Rob argues that the situation is hopeless (01:09:49)
    Intervention #1: Limit access (01:18:16)
    Intervention #2: Get AIs to refuse to help (01:32:58)
    Intervention #3: Surveillance and attribution (01:42:38)
    Intervention #4: Universal vaccines and antivirals (01:56:38)
    Intervention #5: Screen all orders for DNA (02:10:00)
    AI companies talk about def/acc more than they fund it (02:19:52)
    Can you build a profitable business solving this problem? (02:26:32)
    This doesn't have to interfere with useful science (much) (02:30:56)
    What are the best low-tech interventions? (02:33:01)
    Richard's top request for AI companies (02:37:59)
    Grok shows governments lack many legal levers (02:53:17)
    Best ways listeners can help fix AI-Bio (02:56:24)
    We might end all contagious disease in 20 years (03:03:37)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Camera operator: Jeremy Chevillotte
    Transcripts and web: Elizabeth Cox and Katy Moore
  • #240 – Samuel Charap on how a Ukraine ceasefire could accidentally set Europe up for a bigger war

    2026-03-24 | 1 h 12 min.
    Many people believe a ceasefire in Ukraine will leave Europe safer. But today's guest lays out how a deal could potentially generate insidious new risks — leaving us in a situation that's equally dangerous, just in different ways.
    That’s the counterintuitive argument from Samuel Charap, Distinguished Chair in Russia and Eurasia Policy at RAND. He’s not worried about a Russian blitzkrieg on Estonia. Instead, he forecasts a fragile peace that breaks down and drags in European neighbours; instability in Belarus prompting Russian intervention; and hybrid sabotage operations that escalate through tit-for-tat responses.
    Samuel’s case isn’t that peace is bad, but that the Ukraine conflict has remilitarised Europe, made Russia more resentful, and caused diplomatic relations between the two to collapse. That’s a postwar environment primed for the kind of miscalculation that starts unintended wars.
    What he prescribes isn’t a full peace treaty; it’s a negotiated settlement that stops the killing and begins a longer negotiation that gives neither side exactly what it wants, but just enough to deter renewed aggression. Both sides stop dying and the flames of war fizzle — hopefully.
    None of this is clean or satisfying: Russia invaded, committed war crimes, and is being offered a path back to partial normalcy. But Samuel argues that the alternatives — indefinite war or unstructured ceasefire — are much worse for Ukraine, Europe, and global stability.

    Links to learn more, video, and full transcript: https://80k.info/sc26
    This episode was recorded on February 27, 2026.
    Chapters:
    Cold open (00:00:00)
    Could peace in Ukraine lead to Europe’s next war? (00:00:47)
    Do Russia’s motives for war still matter? (00:11:41)
    What does a good ceasefire deal look like? (00:17:38)
    What’s still holding back a ceasefire (00:38:44)
    Why Russia might accept Ukraine’s EU membership (00:46:00)
    How to prevent a spiralling conflict with NATO (00:48:00)
    What’s next for nuclear arms control (00:49:57)
    Finland and Sweden strengthened NATO — but also raised the stakes for conflict (00:53:25)
    Putin isn’t Hitler: How to negotiate with autocrats (00:56:35)
    Why Russia still takes NATO seriously (01:02:01)
    Neither side wants to fight this war again (01:10:49)
    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Transcripts and web: Nick Stockton, Elizabeth Cox, and Katy Moore
  • #239 – Rose Hadshar on why automating human labour will break our political system

    2026-03-17 | 2 h 14 min.
    The most important political question in the age of advanced AI might not be who wins elections. It might be whether elections continue to matter at all.
    That’s the view of Rose Hadshar, researcher at Forethought, who believes we could see extreme, AI-enabled power concentration without a coup or dramatic ‘end of democracy’ moment.
    She foresees something more insidious: an elite group with access to such powerful AI capabilities that the normal mechanisms for checking elite power — law, elections, public pressure, the threat of strikes — cease to have much effect. Those mechanisms could continue to exist on paper, but become ineffectual in a world where humans are no longer needed to execute even the largest-scale projects.
    Almost nobody wants this to happen — but we may find ourselves unable to prevent it.
    If AI disrupts our ability to make sense of things, will we even notice power getting severely concentrated, or be able to resist it? Once AI can substitute for human labour across the economy, what leverage will citizens have over those in power? And what does all of this imply for the institutions we’re relying on to prevent the worst outcomes?
    Rose has answers, and they’re not all reassuring.
    But she’s also hopeful we can make society more robust against these dynamics. We’ve got literally centuries of thinking about checks and balances to draw on. And there are some interventions she’s excited about — like building sophisticated AI tools for making sense of the world, or ensuring multiple branches of government have access to the best AI systems.
    Rose discusses all of this, and more, with host Zershaaneh Qureshi in today’s episode.
    Links to learn more, video, and full transcript: https://80k.info/rh
    This episode was recorded on December 18, 2025.
    Chapters:
    Cold open (00:00:00)
    Who's Rose Hadshar? (00:01:05)
    Three dynamics that could reshape political power in the AI era (00:02:37)
    AI gives small groups the productive power of millions (00:12:49)
    Dynamic 1: When a software update becomes a power grab (00:20:41)
    Dynamic 2: When AI labour means governments no longer need their citizens (00:31:20)
    How democracy could persist in name but not substance (00:45:15)
    Dynamic 3: When AI filters our reality (00:54:54)
    Good intentions won't stop power concentration (01:08:27)
    Slower-moving worlds could still get scary (01:23:57)
    Why AI-powered tyranny will be tough to topple (01:31:53)
    How power concentration compares to "gradual disempowerment" (01:38:18)
    Some interventions are cross-cutting — and others could backfire (01:43:54)
    What fighting back actually looks like (01:55:15)
    Why power concentration researchers should avoid getting too "spicy" (02:04:10)
    Why the "Manhattan Project" approach should worry you — but truly international projects might not be safe either (02:09:18)
    Rose wants to keep humans around! (02:12:06)
    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Nick Stockton and Katy Moore


About 80,000 Hours Podcast

The most important conversations about artificial intelligence you won’t hear anywhere else. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin, Luisa Rodriguez, and Zershaaneh Qureshi.
Podcast website



v8.8.6| © 2007-2026 radio.de GmbH