
Thinking Machines: AI & Philosophy

Daniel Reid Cahn

Available episodes

5 results
  • OpenAI o1: Another GPT-3 moment?
    GPT-3 didn't make much of a splash outside of the AI community, but it foreshadowed the AI explosion to come. Is o1 OpenAI's second GPT-3 moment? Machine learning researchers Guilherme Freire and Luka Smyth discuss OpenAI o1, its impact, and its potential. We discuss early impressions of o1, why inference-time compute and reinforcement learning matter in the LLM story, and the path from o1 to AI beginning to fulfill its potential.
    00:00 Introduction and Welcome
    00:22 Exploring o1: Initial Impressions
    03:44 o1's Reception
    06:42 Reasoning and Model Scaling
    18:36 The Role of Agents
    27:28 Impact on Prompting
    28:43 Copilot or Autopilot?
    32:17 Reinforcement Learning and Interaction
    37:36 Can AI do your taxes yet?
    43:37 Investment in AI vs. Crypto
    46:56 Future Applications and Proactive AI
    --------  
    51:52
  • The Future is Fine Tuned (with Dev Rishi, Predibase)
    Dev Rishi is the founder and CEO of Predibase, the company behind Ludwig and LoRAX. Predibase just released LoRA Land, a technical report showing 310 models that can outcompete GPT-4 on specific tasks through fine-tuning. In this episode, Dev tries (pretty successfully) to convince me that fine-tuning is the future, while answering a bunch of interesting questions, like:
    Is fine-tuning hard?
    If LoRAX is a competitive advantage for you, why open-source it?
    Is model hosting becoming commoditized? If so, how can anyone compete?
    What are people actually fine-tuning language models for?
    How worried are you about OpenAI eating your lunch?
    I had a ton of fun with Dev on this one. Also, check out Predibase's newsletter, fine-tuned (great name!), and LoRA Land.
    --------  
    52:28
  • Pre-training LLMs: One Model To Rule Them All? with Talfan Evans, DeepMind
    Talfan Evans is a research engineer at DeepMind, where he focuses on data curation and foundational research for pre-training LLMs and multimodal models like Gemini. I ask Talfan:
    Will one model rule them all?
    What does "high quality data" actually mean in the context of LLM training?
    Is language model pre-training becoming commoditized?
    Are companies like Google and OpenAI keeping their AI secrets to themselves?
    Does the startup or open source community stand a chance next to the giants?
    Also check out Talfan's latest paper at DeepMind, Bad Students Make Good Teachers.
    --------  
    37:36
  • On Adversarial Training & Robustness with Bhavna Gopal
    "Understanding what's going on in a model is important to fine-tune it for specific tasks and to build trust."
    Bhavna Gopal is a PhD candidate at Duke and a research intern at Slingshot, with experience at Apple, Amazon, and Vellum.
    We discuss:
    How adversarial robustness research impacts the field of AI explainability.
    How do you evaluate a model's ability to generalize?
    What adversarial attacks should we be concerned about with LLMs?
    --------  
    44:05
  • On Emotionally Intelligent AI (with Chris Gagne, Hume AI)
    Chris Gagne manages AI research at Hume, which just released an expressive text-to-speech model in a super impressive demo. Chris and Daniel discuss AI and emotional understanding:
    How does “prosody” add a dimension to human communication? What is Hume hoping to gain by adding it to human-AI communication?
    Do we want to interact with AI like we interact with humans? Or should the interaction models be different?
    Are we entering the Uncanny Valley phase of emotionally intelligent AI?
    Do LLMs actually have the ability to reason about emotions? Does it matter?
    What do we risk by empowering AI with emotional understanding? Are there risks from deception and manipulation? Or even a loss of human agency?
    --------  
    39:53


About Thinking Machines: AI & Philosophy

“Thinking Machines,” hosted by Daniel Reid Cahn, bridges the worlds of artificial intelligence and philosophy - aimed at technical audiences. Episodes explore how AI challenges our understanding of topics like consciousness, free will, and morality, featuring interviews with leading thinkers, AI leaders, founders, machine learning engineers, and philosophers. Daniel guides listeners through the complex landscape of artificial intelligence, questioning its impact on human knowledge, ethics, and the future. We talk through the big questions that are bubbling through the AI community, covering topics like "Can AI be Creative?" and "Is the Turing Test outdated?", introduce new concepts to our vocabulary like "human washing," and only occasionally agree with each other. Daniel is a machine learning engineer who misses his time as a philosopher at King's College London. Daniel is the cofounder and CEO of Slingshot AI, building the foundation model for psychology.
Podcast website

Listen to Thinking Machines: AI & Philosophy, Solcellskollens podcast, and many other podcasts from all corners of the world with the radio.se app

Get the free radio.se app

  • Bookmark stations and podcasts
  • Stream via Wi-Fi or Bluetooth
  • Supports CarPlay & Android Auto
  • Many other app features
Social networks
v7.1.1 | © 2007-2024 radio.de GmbH