Eric Schmidt's recent 2024 talk at Stanford University covered a range of provocative topics. He discussed the big three developments coming for LLMs in the next year, as well as post-transformer AI.
Google's AI Strategy: Schmidt criticized Google's focus on work-life balance, suggesting it hindered their competitiveness in AI against companies like OpenAI and Anthropic. He praised Elon Musk, startups and TSMC for their rigorous work environments. TSMC even makes physics PhDs work on factory floors in their first year.
NVIDIA's CUDA: He admitted to underestimating NVIDIA's CUDA technology, which has become crucial for AI models. He says AMD CEO Lisa Su is working to make ROCm a competitive alternative. Below I give some background on ROCm.
Microsoft and OpenAI: Schmidt was initially skeptical about Microsoft's partnership with OpenAI, considering them too small to matter, but acknowledged he was wrong.
Intellectual Property and AI Startups: Schmidt talked about TikTok. His advice (half-joking) to founders: if you are starting a business, go ahead and steal whatever you can, like music. If you make it big, you can afford the best lawyers to cover your tracks. He says next-level AI (perhaps in 2025), combining giant context windows, agents, and text-to-action (generating Python programs), will be able to do almost anything. His example prompt: build a TikTok or Google clone in 30 seconds, steal their customers, and deploy it within an hour. If it fails to go viral, make fixes and try again.
Global AI Landscape: Schmidt expressed doubts about Europe's tech innovation potential, except for some hope in France, and emphasized India's importance as a U.S. ally against China. He says Japan and Korea are with the US. He thinks the US needs to leverage Canada's strong AI talent and hydroelectric power for giant data centers.
Open-Source AI: He was pessimistic about the viability of open-source AI due to high costs.
Economic Impact of AI: Schmidt predicted AI would widen the economic gap between rich and poor countries and compared AI's potential to the early days of electricity.
Manufacturing and AI: He noted that AI would not revive manufacturing jobs due to automation.
The talk was initially live-streamed but later removed from YouTube, reportedly due to the controversial nature of his comments.
The entire transcript of the talk is on GitHub.
NOTE: The videos of the Eric Schmidt talk keep getting taken down.
Eric Schmidt, former CEO of Google and co-founder of Schmidt Futures, shared his insights on the current state and near-term future of artificial intelligence (AI). He highlighted three major trends he expects to see in the next 1-2 years:
1. Very large context windows: AI models are developing the ability to process and retain much more information, potentially up to a million tokens or words. This vast expansion of context allows for more comprehensive analysis and improved short-term memory, enabling AI to handle more complex tasks and provide more nuanced responses.
2. AI agents: These are systems that can iteratively learn and improve their understanding, similar to how humans approach complex tasks. Schmidt described how these agents could potentially read about a subject like chemistry, discover its principles, test their understanding, and then incorporate that knowledge back into their base of understanding.
3. Text-to-action capabilities: This involves the ability to convert natural language instructions directly into executable code or digital commands. Schmidt provided an example of instructing an AI to create a TikTok competitor, complete with user acquisition and content generation, all from a simple text prompt.
Schmidt believes the combination of these three advancements will have profound and potentially unpredictable impacts on society, possibly even greater than the effects of social media. He emphasized the power of giving every individual access to their own "programmer" that can execute complex tasks based on simple instructions.
The current state of AI development and competition
Eric noted that the gap between frontier AI models (developed by just a few leading companies) and the rest of the field appears to be widening. This is a shift from his perspective six months prior, when he believed the gap was narrowing. The resources required to develop and train these advanced models are immense, with estimates ranging from $10 billion to $100 billion or more. Sam Altman of OpenAI reportedly believes it could take about $300 billion. This level of investment is creating significant barriers to entry and concentrating power in the hands of a few well-funded entities. He talked about Inflection and others being forced to sell out because they could not raise next rounds of tens of billions of dollars.
The importance of compute power and energy resources in AI development was highlighted. Schmidt suggested that the United States may need to partner closely with Canada to access sufficient hydroelectric power for large-scale AI training.
Schmidt also highlighted NVIDIA's current dominance in AI chips, particularly due to their CUDA architecture and highly optimized libraries. He explained that most AI code needs to run with CUDA optimizations, which are currently only supported by NVIDIA GPUs. This software ecosystem, built up over a decade, gives NVIDIA a significant advantage over potential competitors.
Regarding the global AI landscape, Schmidt sees the United States and China as the primary competitors, with India as a potential swing state due to its large pool of top AI talent. He expressed concerns about Europe's regulatory environment hindering AI innovation, particularly criticizing the EU's approach to AI regulation.
Schmidt stressed the importance of work ethic and intensity in AI development. He suggested that some established tech companies, including Google, may be falling behind due to prioritizing work-life balance over competitive drive. He contrasted this with the dedication seen in startups and some international competitors, emphasizing that the network effects and time-sensitive nature of AI development make this intensity crucial for success.
The discussion then shifted to broader implications of AI:
1. Economic and labor market impacts:
Eric discussed the potential effects of AI on various job sectors. He thought that high-skill jobs could adapt and work alongside AI, but lower-skill jobs and those requiring less human judgment would be replaced. The full economic impact of new technologies often takes time to materialize as organizations learn to effectively implement them.
Eric drew parallels to previous technological revolutions, such as the introduction of electricity in factories. Initially, the productivity gains were minimal as factories simply replaced steam engines with electric motors without redesigning their processes. It took decades before the full potential of electricity was realized through new factory layouts and the introduction of assembly lines. This historical perspective suggests that the most significant productivity gains from AI may come from organizational and process innovations rather than the technology alone.
2. National security and geopolitics:
AI is seen as a critical area for maintaining technological superiority. Schmidt mentioned his work on an AI commission that examined these issues, leading to initiatives like the CHIPS Act to bolster U.S. competitiveness. He highlighted the U.S. government's actions to restrict the export of advanced AI chips to China, maintaining a technological advantage (about ten years) in areas like semiconductor manufacturing.
3. Regulation and ethics:
The need for balanced regulation that promotes innovation while addressing potential risks was discussed. Schmidt mentioned his involvement in shaping AI policies, including recent executive actions by the Biden administration. He described participating in an informal group that helped develop the basis for the administration's AI executive order, which includes measures like requiring companies to report to the government when they reach certain computational thresholds in AI development.
4. Education and skill development:
The speakers debated the continued importance of coding skills in an AI-driven world. While AI can assist in coding, understanding programming fundamentals remains valuable for effectively leveraging these tools. Schmidt suggested that computer science education might evolve to include AI assistants as natural partners in the learning process.
5. University research:
Schmidt advocated for increased funding for AI research at universities, particularly for high-performance computing resources. He expressed frustration that many top researchers are limited by lack of access to the computational power needed for cutting-edge AI research. However, it was also noted that universities might have a comparative advantage in developing new algorithms and conducting long-term research rather than training massive models. The discussion highlighted the unique role of academia in pursuing patient, long-term research that may not have immediate commercial applications.
6. AI literacy and public understanding:
The importance of improving AI literacy among non-technical stakeholders, including policymakers and the general public, was discussed. The speakers emphasized the need for a multidisciplinary approach, combining technical knowledge with insights from fields like economics, political science, and organizational behavior to fully understand and address the implications of AI.
7. Emerging capabilities and the path to AGI:
The speakers touched on the concept of artificial general intelligence (AGI) and the challenges in defining it. While current AI systems can perform many tasks once considered indicative of AGI, they still struggle with certain capabilities that humans find easy, particularly in physical tasks. The discussion highlighted the non-linear nature of AI progress, with some tasks that seem complex to humans being relatively easy for AI, while others that seem simple proving more challenging.
8. AI as a general-purpose technology:
The conversation explored the concept of AI as a general-purpose technology (GPT) and its potential for sparking complementary innovations across various sectors. Like electricity or the internet, AI has the potential to transform multiple industries and aspects of society. The speakers emphasized that realizing the full potential of AI will likely require substantial changes to business models, organizational structures, and societal institutions.
9. Rapid adoption vs. long-term transformation:
The speakers discussed the rapid adoption of tools like ChatGPT compared to historical technology transitions. While some AI applications are being integrated quickly, it was proposed that realizing the full potential of AI will likely require more substantial and time-consuming changes to business models and organizational structures.
10. Competition and market dynamics:
Schmidt discussed the current state of competition in the AI field, noting the dominance of a few large players due to the massive resources required for advanced AI development. He also touched on the debate between open-source and closed-source AI models, suggesting that the enormous capital costs involved might be pushing the field towards more closed systems.
11. AI in warfare and defense:
Schmidt shared his involvement in developing AI-powered defense technologies, particularly drones designed to counter traditional military equipment like tanks. He discussed the potential for AI to change the nature of warfare, potentially making certain types of conflicts, like land invasions, much more difficult.
12. Energy and infrastructure requirements:
The discussion highlighted the massive energy and infrastructure needs for advanced AI development. Schmidt emphasized the importance of securing reliable and abundant energy sources, suggesting partnerships with countries like Canada that have significant hydroelectric resources.
13. Philosophical implications:
The speakers touched on the changing nature of knowledge and understanding in the age of AI. They discussed the challenges of working with AI systems that can produce results without providing clear explanations of their reasoning, drawing parallels to how humans interact with complex systems or even other humans.
14. Global talent distribution:
Schmidt discussed the global distribution of AI talent, highlighting India's potential as a major player in the field. He suggested that countries might need to rethink their approach to retaining top talent in strategic fields like AI.
15. Ethical considerations:
While not extensively discussed, the conversation touched on the ethical implications of AI development, including issues of bias, privacy, and the potential for misuse.
Key takeaways and advice for students and entrepreneurs
He talked about how students and academic institutions could be part of the red teams that companies will form to attack AI with adversarial AI. The weaknesses found would feed back into building the next AI.
There are currently roughly 18-month cycles for LLMs: six months of preparation, six months of training, and six months of fine-tuning.
1. The AI field is currently in a period of rapid advancement with many opportunities for significant contributions. The combination of expanding context windows, AI agents, and text-to-action capabilities is expected to drive major innovations in the near future (probably 2025-2026).
2. Understanding both the technical aspects of AI and its broader implications (economic, social, ethical) is crucial. There's a growing need for individuals who can bridge the gap between technical development and practical applications across various domains.
3. There's a need for interdisciplinary approaches, combining technical knowledge with insights from fields like economics, political science, and organizational behavior. This holistic understanding will be key to effectively leveraging AI's potential while addressing its challenges.
4. Prototyping and demonstrating ideas quickly using AI tools is becoming increasingly important for entrepreneurs. The ability to rapidly iterate and test concepts will be a significant competitive advantage.
5. While large-scale model training may be dominated by well-funded entities, there are still many areas where smaller teams and academic researchers can make meaningful contributions, especially in algorithm development and specific applications.
6. The full potential of AI will likely be realized through complementary innovations in business models, organizational structures, and human capital development. Students and entrepreneurs should think beyond the technology itself to how it can transform entire systems and processes.
7. Critical thinking and the ability to evaluate AI-generated content will become increasingly important skills. As AI becomes more pervasive, the ability to discern and verify information will be crucial.
8. There are opportunities for innovation not just in AI technology itself, but in how it's applied across various domains and in addressing its societal impacts. This includes areas like AI governance, ethical AI development, and AI-human collaboration models.
9. The global nature of AI development means that understanding international dynamics, including differences in regulatory approaches and cultural attitudes towards AI, will be important for those looking to work in the field.
10. While coding skills remain important, the nature of software development is likely to change with AI assistance. Understanding how to effectively work with and direct AI coding tools may become as important as traditional programming skills.
11. The energy and infrastructure requirements for AI development present opportunities for innovation in areas like efficient computing, sustainable data centers, and novel energy solutions.
12. As AI capabilities expand, there will be increasing need for experts who can translate between technical AI concepts and their practical implications for business, policy, and society.
13. The rapid pace of AI development means that continuous learning and adaptation will be essential. Students and professionals should cultivate the ability to quickly assimilate new developments and adjust their strategies accordingly.
14. There may be significant opportunities in developing tools and methodologies for testing, validating, and ensuring the reliability of AI systems, particularly as they are deployed in more critical applications.
15. The potential for AI to transform warfare and defense highlights the need for experts who can navigate the complex ethical and strategic implications of these technologies.
The discussion highlighted the transformative potential of AI while also emphasizing the complexities and challenges involved in its development and integration into society. It underscored the need for a multifaceted approach to AI advancement, considering technological, economic, and social factors to fully leverage its capabilities while mitigating potential risks.
Eric emphasized that we are likely at the beginning of a major technological revolution, comparable to the introduction of electricity or the internet. While the immediate impacts of AI are already significant, the long-term transformations may be even more profound and difficult to predict. This creates both tremendous opportunities and significant responsibilities for the current generation of students, researchers, and entrepreneurs working in and around the field of AI.
AMD ROCm
AMD's equivalent to NVIDIA's CUDA is primarily the ROCm (Radeon Open Compute) platform. ROCm is an open-source software stack designed to provide similar functionality to CUDA by enabling general-purpose computing on AMD GPUs. It supports various programming models, including HIP (Heterogeneous-compute Interface for Portability), which allows developers to port CUDA applications to run on AMD hardware.
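To make the portability claim concrete, here is a minimal sketch of what CUDA-style GPU code looks like after porting to HIP. This example is illustrative (not from the talk) and requires AMD's ROCm toolchain (`hipcc`) to build; the point is that the HIP API mirrors CUDA almost one-to-one, so porting is largely mechanical renaming (`cudaMalloc` becomes `hipMalloc`, `cudaMemcpy` becomes `hipMemcpy`, and so on), often automated by ROCm's `hipify` tools.

```cpp
// Minimal HIP vector-add sketch: the same structure as a CUDA program,
// with cuda* API calls renamed to hip* equivalents. Build with hipcc.
#include <hip/hip_runtime.h>
#include <cstdio>

// Kernel syntax is identical to CUDA, including threadIdx/blockIdx.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float host_a[n], host_b[n], host_c[n];
    for (int i = 0; i < n; ++i) { host_a[i] = i; host_b[i] = 2.0f * i; }

    float *dev_a, *dev_b, *dev_c;
    hipMalloc(&dev_a, bytes);   // CUDA equivalent: cudaMalloc
    hipMalloc(&dev_b, bytes);
    hipMalloc(&dev_c, bytes);
    hipMemcpy(dev_a, host_a, bytes, hipMemcpyHostToDevice);  // cudaMemcpy
    hipMemcpy(dev_b, host_b, bytes, hipMemcpyHostToDevice);

    // Triple-chevron launch syntax carries over unchanged from CUDA.
    vector_add<<<(n + 255) / 256, 256>>>(dev_a, dev_b, dev_c, n);

    hipMemcpy(host_c, dev_c, bytes, hipMemcpyDeviceToHost);
    printf("c[10] = %f\n", host_c[10]);

    hipFree(dev_a); hipFree(dev_b); hipFree(dev_c);  // cudaFree
    return 0;
}
```

The catch, as Schmidt's comments suggest, is not the API surface but the ecosystem: CUDA's decade of tuned libraries (cuDNN, cuBLAS, and the kernels frameworks depend on) is what ROCm is still working to match.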