Training Data

Podcast Training Data
Sequoia Capital
Join us as we train our neural nets on the theme of the century: AI. Sonya Huang, Pat Grady and more Sequoia Capital partners host conversations with leading AI...

Available episodes

5 results
  • Decart’s Dean Leitersdorf on AI-Generated Video Games and Worlds
    Can GenAI allow us to connect our imagination to what we see on our screens? Decart’s Dean Leitersdorf believes it can. In this episode, Dean Leitersdorf breaks down how Decart is pushing the boundaries of compute in order to create AI-generated consumer experiences, from fully playable video games to immersive worlds. From achieving real-time video inference on existing hardware to building a fully vertically integrated stack, Dean explains why solving fundamental limitations rather than specific problems could lead to the next trillion-dollar company. Hosted by: Sonya Huang and Shaun Maguire, Sequoia Capital 00:00 Introduction 03:22 About Oasis 05:25 Solving a problem vs overcoming a limitation 08:42 The role of game engines 11:15 How video real-time inference works 14:10 World model vs pixel representation 17:17 Vertical integration 34:20 Building a moat 41:35 The future of consumer entertainment 43:17 Rapid fire questions
    46:34
  • How Glean CEO Arvind Jain Solved the Enterprise Search Problem – and What It Means for AI at Work
    Years before co-founding Glean, Arvind was an early Google employee who helped design the search algorithm. Today, Glean is building search and work assistants inside the enterprise, which is arguably an even harder problem. One of the reasons enterprise search is so difficult is that each individual at the company has different permissions and access to different documents and information, meaning that every search needs to be fully personalized. Solving this difficult ingestion and ranking problem also unlocks a key problem for AI: feeding the right context into LLMs to make them useful for your enterprise context (a minimal sketch of this retrieval pattern follows the episode list below). Arvind and his team are harnessing generative AI to synthesize, make connections, and turbo-charge knowledge work. Hear Arvind’s vision for what kind of work we’ll do when work AI assistants reach their potential. Hosted by: Sonya Huang and Pat Grady, Sequoia Capital 00:00 - Introduction 08:35 - Search rankings 11:30 - Retrieval-Augmented Generation 15:52 - Where enterprise search meets RAG 19:13 - How is Glean changing work? 26:08 - Agentic reasoning 31:18 - Act 2: application platform 33:36 - Developers building on Glean 35:54 - 5 years into the future 38:48 - Advice for founders
    44:48
  • OpenAI Researcher Dan Roberts on What Physics Can Teach Us About AI
    In recent years there’s been an influx of theoretical physicists into the leading AI labs. Do they have unique capabilities suited to studying large models, or is it just herd behavior? To find out, we talked to our former AI Fellow (and now OpenAI researcher) Dan Roberts. Roberts, co-author of The Principles of Deep Learning Theory, is at the forefront of research that applies the tools of theoretical physics to another type of large complex system: deep neural networks. Dan believes that DNNs, and eventually LLMs, are interpretable in the same way a large collection of atoms is—at the system level. He also thinks that the emphasis on scaling laws will balance with new ideas and architectures over time as scaling asymptotes economically. Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
    Mentioned in this episode:
      • The Principles of Deep Learning Theory: An Effective Theory Approach to Understanding Neural Networks, by Daniel A. Roberts, Sho Yaida and Boris Hanin
      • Black Holes and the Intelligence Explosion: Extreme scenarios of AI focus on what is logically possible rather than what is physically possible. What does physics have to say about AI risk?
      • Yang-Mills & The Mass Gap: An unsolved Millennium Prize problem
      • AI Math Olympiad: Dan is on the prize committee
    41:42
  • Google NotebookLM’s Raiza Martin and Jason Spielman on Creating Delightful AI Podcast Hosts and the Potential for Source-Grounded AI
    NotebookLM from Google Labs has become the breakout viral AI product of the year. The feature that catapulted it to viral fame is Audio Overview, which generates eerily realistic two-host podcast audio from any input you upload—a written doc, an audio or video file, or even a PDF. But to describe NotebookLM as a “podcast generator” is to vastly undersell it. The real magic of the product is in offering multi-modal dimensions to explore your own content in new ways—with context that’s surprisingly additive. A 200-page training manual can be synthesized into digestible chapters, turned into a 10-minute podcast—or both—and shared with the sales team, to cite one example. Raiza Martin and Jason Spielman join us to discuss how the magic happens, and what’s next for source-grounded AI. Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
    32:07
  • Snowflake CEO Sridhar Ramaswamy on Using Data to Create Simple, Reliable AI for Businesses
    All of us as consumers have felt the magic of ChatGPT—but also the occasional errors and hallucinations that make off-the-shelf language models problematic for business use cases with no tolerance for errors. Case in point: a model deployed to help create a summary for this episode stated that Sridhar Ramaswamy previously led PyTorch at Meta. He did not. He spent years running Google’s ads business and now serves as CEO of Snowflake, which he describes as the data cloud for the AI era. Ramaswamy discusses how smart systems design helped Snowflake create reliable "talk-to-your-data" applications with over 90% accuracy, compared to around 45% for out-of-the-box solutions using off-the-shelf LLMs. He describes Snowflake’s commitment to making reliable AI simple for their customers, turning complex software engineering projects into straightforward tasks. Finally, he stresses that even as frontier models progress, there is significant value to be unlocked from current models by applying them more effectively across various domains. Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
    Mentioned in this episode:
      • Cortex Analyst: Snowflake’s talk-to-your-data API
      • Document AI: Snowflake feature that extracts structured information from documents
    59:29
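
The Glean episode above describes permission-aware enterprise search feeding retrieved context into an LLM (retrieval-augmented generation). Below is a minimal, illustrative Python sketch of that pattern, under stated assumptions: the Document class, the keyword scorer, and the stubbed call_llm function are hypothetical stand-ins for demonstration, not Glean's actual system or API.

# Minimal sketch: permission-aware retrieval feeding context to an LLM (RAG).
# All names here are illustrative assumptions, not a real product's API.
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_users: set = field(default_factory=set)  # per-document access list

def keyword_score(query: str, doc: Document) -> int:
    # Toy relevance score: count how many query terms appear in the document.
    terms = query.lower().split()
    return sum(term in doc.text.lower() for term in terms)

def search(query: str, user: str, corpus: list, k: int = 3) -> list:
    # 1) Keep only documents this user is permitted to see (personalized search).
    visible = [d for d in corpus if user in d.allowed_users]
    # 2) Rank the visible documents by relevance and return the top k.
    ranked = sorted(visible, key=lambda d: keyword_score(query, d), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Stub: in a real system this would call whatever LLM endpoint you use.
    return f"[LLM answer grounded in {len(prompt)} characters of prompt]"

def answer_with_context(query: str, user: str, corpus: list) -> str:
    # Retrieve the user's top documents and pass them to the LLM as context.
    context = "\n".join(d.text for d in search(query, user, corpus))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

if __name__ == "__main__":
    corpus = [
        Document("hr-1", "Vacation policy: 25 days per year.", {"alice", "bob"}),
        Document("fin-1", "Q3 revenue grew 40% year over year.", {"alice"}),
    ]
    # Bob cannot see fin-1, so his answer is grounded only in hr-1.
    print(answer_with_context("What is the vacation policy?", "bob", corpus))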

More podcasts in Technology

About Training Data

Podcast website

Listen to Training Data, Acquired and many other podcasts from all over the world with the radio.se app

Get the free radio.se app

  • Bookmark stations and podcasts
  • Stream via Wi-Fi or Bluetooth
  • Supports CarPlay & Android Auto
  • Many other app features


v6.28.0 | © 2007-2024 radio.de GmbH