Blog

Flood the Zone or Circle the Wagons: A Roadmap for Losing the United States’ AI Leadership

By Ken Glueck, Executive Vice President, Oracle—Dec 19, 2024

The Clipper Chip was by far history’s worst government technology idea.

Until now.

A Biden Administration proposal, an “Export Control Framework for Artificial Intelligence Diffusion,” now takes that honor. It is a “framework” that guarantees the United States will cede most of the world’s advanced AI implementations to China. It creates opportunities for foreign competitors to gain in GPU design and deployment.i It practically guarantees that the next generation of AI large language models will be developed at the expense of U.S. interests.ii And it opens the door to the U.S. losing its currently insurmountable lead in cloud computing. What in the world does Artificial Intelligence Diffusion even mean?

To be clear: we are in a race that the U.S. must win, and to do so, U.S. companies must be able to compete to accelerate broad AI adoption on American technology platforms; the AI genie is already out of the bottle.iii Failure results in a world in which AI is used solely as a tool to strengthen authoritarianism, repression, and surveillance—direct threats to U.S. national and economic security.

Missing from this Diffusion Framework is any discussion of the core question facing the U.S. as we deploy AI globally: is it more strategic for America to “flood the zone”—providing the global infrastructure on which AI models will be trained and run—ensuring U.S. leadership of this critical technology? Or should the U.S. “circle the wagons,” replacing decisive action with a thinly veiled hope to buy time (for what, exactly?), while throwing the door wide open for our adversaries and competitors?iv

As currently drafted, the Diffusion Framework is more aptly called the “Export Control Framework for the Advancement of Alibaba, Huawei, Tencent, and SMIC.” 

Unfortunately, anyone wading through more than two hundred pages of mind-boggling new regulation will quickly realize the Biden Administration’s fundamental misunderstanding of the very GPUs it seeks to control. Unlike traditional CPUs, which scale up in power (remember Moore’s Law), GPUs scale both up and out. Their current utility comes from running lots and lots of GPUs together to complete many, many tasks at the same time. Controlling GPUs makes no sense when you can achieve parity by simply adding more, if less powerful, GPUs to the problem. What the market realizes, and the Biden Administration does not, is that this is a capacity problem, and Chinese chip makers and cloud providers will be all too happy to supply that capacity (and subsidize it with money from Beijing). But only if American suppliers don’t get there first. Or, under this Diffusion Framework, are not allowed to get there at all.
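To make the scale-out point concrete, here is a minimal back-of-envelope sketch in Python. Every number in it is hypothetical and chosen purely for illustration, and the arithmetic deliberately ignores interconnect, memory, and software efficiency, all of which matter in real deployments.

```python
import math

def chips_for_parity(reference_chips: int,
                     reference_tflops_per_chip: float,
                     alternative_tflops_per_chip: float) -> int:
    """How many lower-performance chips match the aggregate throughput
    of a cluster built from higher-performance chips (losses ignored)."""
    target_aggregate = reference_chips * reference_tflops_per_chip
    return math.ceil(target_aggregate / alternative_tflops_per_chip)

# Hypothetical example: a 10,000-GPU cluster of export-controlled chips versus
# an alternative chip with roughly one third of the per-chip throughput.
print(chips_for_parity(10_000, 1_000.0, 330.0))  # -> 30304: just add more GPUs
```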

GPUs are not new. Today’s hot new chips are an overnight success, decades in the making, and have long been a mainstay of computer graphics professionals and gamers alike. GPUs excel at “embarrassingly parallel” problems such as 3D video rendering.v Think Call of Duty or Super Mario Brothers; the term GPU itself was coined for the original Sony PlayStation back in 1994.vi

For decades the Bureau of Industry and Security (yes, that ominous name belongs to a U.S. entity) barely cared about GPUs at all. Enter Artificial Intelligence. It turns out GPUs also excel at the math behind machine learning and neural networks. Training large language models (LLMs) and running them at scale (inferencing) both require GPUs to perform the underlying computations. The GPUs driving AI today can be assembled into large clusters (i.e., scaled out) in almost limitless fashion. We can call this “Mario’s Law.”
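As a toy illustration (not drawn from the Framework itself), the Python sketch below shows what “embarrassingly parallel” means in practice: each task is independent of the others, so adding workers, or GPUs, adds throughput almost linearly. The task and worker counts are arbitrary.

```python
# Toy example of an "embarrassingly parallel" workload: every task is
# independent, so throughput grows with the number of workers you add.
from concurrent.futures import ProcessPoolExecutor

def independent_task(task_id: int) -> int:
    # Stand-in for self-contained work: shading one tile of a game frame,
    # or one slice of the matrix math behind model training and inference.
    return sum(i * i for i in range(100_000 + task_id))

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(independent_task, range(64)))
    print(f"completed {len(results)} independent tasks")
```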

The worst of the ideas baked into this dystopian Diffusion Framework is a set of country quotas to keep any single country from acquiring too many U.S.-sourced GPUs. The problem with this proposal is that it assumes there are no non-U.S. suppliers from which to procure GPU technology. Right now, countries are willing to pay top dollar for the most advanced chips, which happen to be U.S. name brands. How quickly we forget what a stark difference this is from our concern with cheap foreign competitors (Huawei) who were able to secure a dominant share of the global telecommunications market with inexpensive (read: CCP-subsidized) end-to-end offerings. Even if you assume a continued U.S. GPU lead, Mario’s Law tells us that if you just add more chips to the problem, you can keep playing the game. If your alternate supplier’s chips are less powerful, you achieve parity by simply adding more of them. Enter Huawei and Tencent. Do it at a cheaper (subsidized) price. Enter the CCP. And deploy it globally. Enter Alibaba.

Runner-up for “best of the worst” ideas in this Diffusion Framework is the creation of a country club we’ll call the “AI 20”—twenty countries given VIP access to GPUs by the Bureau of Industry and Security.vii Countries notably absent from the AI 20 Club include Singapore, Mexico, Malaysia, UAE, Israel, Saudi Arabia, and India. Oh, and every NATO member not on the list. Fear not, the Diffusion Framework says, feel free to export to the technology powerhouses of Sweden and Switzerland.

The problem with a club of 20 is that those not invited will form their own club, perhaps called the CCP for “Can’t Control Processors.” This club will be run by the “other” CCP to advance its interests in the 140 “other” countries not at the grown-up table. We saw this play out with telecom in Africa. The U.S. handed the entire continent’s telco business to Huawei, leaving the continent essentially blind to U.S. interests. Now this Diffusion Framework is setting the conditions to do it again, practically begging the CCP to again subsidize Chinese companies, bolstering their scale and available resources to grab market share and control the future of global AI advancements.viii, ix

As an exception to the quotas and the AI 20 club, this Diffusion Framework creates yet another bureaucratic fever dream: Data Center Validated End Users (VEUs—i.e., data centers owned and operated by a U.S. cloud hyperscaler). In other words, U.S. cloud providers can essentially ship GPUs to themselves outside of the U.S. These validated end user licenses come with huge new reporting and regulatory burdens, most likely unacceptable to many countries and most likely impossible to implement in any event. In turn, VEUs get a higher (but still embarrassingly low) cap on GPUs available to deploy. Advancements in GPUs will easily outrun these caps. But worse, the entire idea of VEUs freezes cloud innovation and ignores many current cloud business models.

Lastly, the Diffusion Framework attempts to place controls on LLMs themselves, which is the most absurd idea of all. It does not account for the fact that our adversaries can engage in lots of nefarious conduct without the most sophisticated models. It does not account for the fact that our adversaries can create their own LLMs based on massive amounts of data, the preponderance of which is derived from their own surveillance economies without our help.x It does not account for ongoing efforts by the global research community to develop advanced models that require far less compute and data.xi However, in a rare moment of recognition that it’s Mario’s World already, the proposal does exempt “open source” LLMs from the controls.

Let’s say this slower. LLM Model “A” is controlled by the Bureau of Industry and Security to keep it out of the hands of our adversaries. But if LLM Model “A” is declared to be open source, the identical model is apparently no longer their concern because it is freely available. We all know how this movie ends.

The Diffusion Framework seeks to impose controls on AI chips based on a fear of what could potentially happen—e.g., bad actors becoming very good at chemical, biological, or nuclear weapon design courtesy of the latest LLM. A recent RAND study found that current LLMs do not provide a meaningful advantage to bad actors planning to unleash a large-scale biological weapons attack.xii That’s not to say that, as LLMs evolve, they won’t provide a meaningful advantage, but the LLM we know is clearly preferable to the LLM we do not know. Let’s also not forget there is a very long tail of weapon development that must occur in the physical world. Obtaining and storing biological, chemical, or nuclear precursor materials, procuring and maintaining the necessary equipment, and so on remain areas where a range of capabilities can complicate development, and areas that can be monitored and controlled. Reading a recipe is not the same as baking a cake.

The U.S. government should use all its tools of national power to address the national security and economic risks arising from AI adoption by malicious actors. However, the Biden Administration seemingly rests all hope on the Bureau of Industry and Security’s ability to impose 20th-century export controls and craft sweeping new regulatory regimes. There should be active engagement with industry on the complex and interrelated dependencies among chip design and fabrication, hardware design, software tools, and compute infrastructure deployment—all essential elements of AI technology that must be understood to properly deny capabilities to those entities intent on eroding our national security.

If the idea here is to guarantee the loss of U.S. leadership in AI, GPUs, and cloud in one swift stroke of an Interim Final Rule set to take effect in mere weeks, without any consultation with industry… well, then the latest Bureau of Industry and Security proposal hit it out of the park.

The Clinton Administration brought us the Clipper Chip to protect our national security interests from a long parade of nearly hysterical dangers attributed to encryption. Of course, that horrible idea was killed in 1996 and, instead, a highly secure Internet brought us decades of economic growth, productivity, and U.S.-led technology. And while nearly all the dire predictions about strong encryption failed to materialize, strong encryption is now central to the fight against China’s cyber actors.xiii, xiv

Given the U.S. lead in cloud, AI, and GPUs, there’s no need to slow down and circle the wagons. Let’s flood the zone, allow U.S. companies to compete, and ensure the U.S. dominates the global AI marketplace. In the end, we will be far more secure by not regulating U.S. industry out of the game.

i https://www.wsj.com/tech/ai/huawei-readies-new-chip-to-challenge-nvidia-surmounting-u-s-sanctions-e108187a
ii https://www.cnbc.com/2024/12/17/chinese-ai-models-are-popular-globally-and-are-beating-us-rivals-in-some-areas.html
iii https://reports.nscai.gov/final-report/
iv https://www.reuters.com/technology/after-us-curbs-tencent-small-chip-designers-chase-nvidias-china-crown-2023-12-11/
v https://en.wikipedia.org/wiki/Embarrassingly_parallel
vi https://www.computer.org/publications/tech-news/chasing-pixels/is-it-time-to-rename-the-gpu
vii The “AI 20” list includes the United States plus 20 trusted countries: Australia, Belgium, Canada, Denmark, Finland, France, Germany, Ireland, Italy, Japan, the Netherlands, New Zealand, Norway, Republic of Korea, Poland, Spain, Sweden, Switzerland, Taiwan, and the United Kingdom.
viii https://www.wsj.com/tech/china-unveils-48-billion-fund-to-bolster-chip-industry-10d7f1ce
ix https://www.reuters.com/world/china/shanghai-launches-138-bln-funds-boost-integrated-circuit-biomedicine-ai-sectors-2024-07-26/
x https://www.tomshardware.com/tech-industry/artificial-intelligence/chinese-llms-storm-hugging-faces-chatbot-benchmark-leaderboard-alibaba-runs-the-board-as-major-us-competitors-have-worsened
xi https://babylm.github.io/
xii https://www.rand.org/pubs/research_reports/RRA2977-2.html
xiii https://www.usatoday.com/story/tech/2024/12/05/apple-android-texts-hackers-encryption/76807100007/
xiv https://www.cisa.gov/resources-tools/resources/enhanced-visibility-and-hardening-guidance-communications-infrastructure