Introduction
Artificial intelligence (AI) has emerged as a major battleground among tech giants in recent years. Companies like Microsoft and Google have invested heavily in generative AI startups, while Apple – once a pioneer with its Siri assistant – has come to be seen as a laggard in this race.
Meanwhile, U.S. regulators such as the Federal Trade Commission (FTC) and the Department of Justice (DOJ) have stepped up scrutiny of AI agreements and market structures, fearing that strategic partnerships and cross-investments could concentrate power and stifle competition.
This article takes an in-depth look at Apple's lag in the AI race, its complex relationship with OpenAI and Microsoft, and the central role of regulators in this technological and legal landscape.
Apple and its Delay in the AI Race
Apple introduced Siri in 2011, becoming the first Big Tech company to popularize an AI-powered voice assistant. Over the last decade, however, the company adopted a more cautious and slow approach to generative AI, focusing on incremental improvements in hardware (like the Neural Engine in its mobile chips) and on-device privacy features. In contrast, OpenAI launched ChatGPT in 2022, catalyzing a generative AI revolution that Apple did not lead. Reports indicate that Apple itself acknowledges being a few years behind: according to Mark Gurman (Bloomberg), some within Apple believe that its generative AI technology is "more than two years behind the industry leaders."
In fact, internal tests reportedly showed ChatGPT to be 25% more accurate than Siri and capable of answering 30% more questions, highlighting the lag in Apple's solutions.
The Cupertino company belatedly announced its response in 2024: Apple Intelligence, its new generative AI strategy, unveiled only at that year's WWDC conference. Meanwhile, rivals like Microsoft and NVIDIA were already reaping the benefits of this technology, temporarily becoming the most valuable companies in the market thanks to the hype surrounding AI. Apple has been developing its own language model, codenamed Ajax, and ran experiments with an internal chatbot dubbed "Apple GPT." However, even executives admit that these initiatives fall short of the state of the art. CEO Tim Cook has publicly confirmed that Apple has dedicated "tremendous effort" to developing AI capabilities and plans to deliver them to consumers "by the end of this year" (2024). This statement demonstrates the urgency the company perceives in regaining ground.
From a technical perspective, analysts point out that Apple has historically focused on embedded AI and practical applications (e.g., facial recognition in Face ID, photo processing, and offline Siri commands), but has lacked a robust push into large-scale, cloud-based AI. This cautious strategy, possibly driven by privacy concerns and strict curation of the user experience, now leaves Apple in a vulnerable position in the race toward advanced chatbots and assistants built on large language models (LLMs). The consensus is that Apple was "late to the new wave" of generative AI and will need to leverage its vast installed base of more than 2 billion devices to avoid being irreversibly left behind.
After all, the company has plenty of financial resources and talent to turn things around, but the window for establishing leadership in AI is rapidly closing.
Legally, Apple's delay also has implications. The company currently does not face the same direct regulatory scrutiny in AI as its more aggressive competitors, but that is largely a reflection of its smaller footprint in the market. However, legal risks could arise if Apple tries to make up for lost time in anti-competitive ways. For example, if Apple decided to use its control over the iOS ecosystem to prioritize its own AI service (once it had one), it could attract antitrust authorities' attention, just as it did with its practices of pre-installing or setting default services on its devices. The recent history of the DOJ v. Google case, in which Apple figures prominently as the recipient of approximately US$18 billion annually from Google for keeping its search engine the default on the iPhone, shows that Apple's partnerships in critical areas already attract allegations of lock-in and harmful exclusivity. Therefore, Apple's moves in the AI race will have to balance technological aggressiveness with regulatory caution.

The Microsoft–OpenAI Partnership and Its Competitive Impact
If Apple was hesitant about AI, Microsoft did exactly the opposite. In 2019, Microsoft invested US$1 billion in OpenAI and signed a long-term agreement to provide cloud infrastructure (Azure) to the startup, marking the beginning of a deep partnership. That initial investment has since grown to a reported total of roughly US$13 billion, entitling Microsoft to approximately 49% of the profits of OpenAI's for-profit arm. In return, OpenAI began developing its advanced models exclusively on Azure cloud services, and Microsoft secured preferential rights to integrate these technologies into its products. The result was a rapid competitive leap: less than a year after the launch of ChatGPT, Microsoft had incorporated OpenAI's generative AI into a range of offerings, from the Bing search engine to the Copilot assistant in Windows and the Office suite. This strategy has given Microsoft a unique position as a challenger to Google's dominance in search and as a pioneer in infusing generative AI into productivity tools.
From a competitive perspective, the Microsoft–OpenAI partnership exhibits characteristics of vertical concentration and monopoly expansion in an unconventional form. Microsoft did not "buy" OpenAI in the traditional sense, but its massive investment and exclusivity agreements gave it many of the benefits of an acquisition without technically triggering merger rules. Critics point out that Microsoft may be extending its power in cloud computing (where Azure competes with AWS and Google Cloud) into the nascent AI platform market. A Reuters report highlighted concerns that the Redmond giant could expand its cloud dominance into the new frontier of AI through this alliance. Indeed, by ensuring that revolutionary models like GPT-4 run primarily in its data centers, Microsoft creates a lock-in effect both for OpenAI (which relies on Azure infrastructure) and for enterprise customers who opt into the Microsoft/OpenAI ecosystem.
Antitrust regulators are already showing a direct interest in this relationship. In January 2024, the FTC issued formal orders of inquiry under Section 6(b) of the FTC Act, requiring Microsoft and OpenAI (among others) to provide detailed information about their investments and agreements involving AI. According to FTC Chair Lina Khan, "As companies race to develop and monetize AI, we must guard against tactics that close this window of opportunity," and the agency is investigating whether partnerships and investments made by dominant companies can distort innovation and harm fair competition. The Microsoft–OpenAI alliance is clearly a focus of this scrutiny, as it is emblematic of the rapprochement between a dominant infrastructure provider and a leading AI developer.
In addition to the FTC, the Department of Justice has closely monitored these developments. In June 2024, there were even reports of unprecedented coordination between the DOJ and the FTC to divide the investigations: the DOJ would examine potential antitrust violations by NVIDIA (dominant in AI chips), while the FTC would evaluate the conduct of OpenAI and Microsoft. This division of duties, reported by the press at the time, reflects the broad concern of US authorities not to leave any flank unattended. Jonathan Kanter, head of the DOJ's antitrust division, stated at a conference that there are "structures and trends in AI that should give us pause," highlighting that the technology depends on massive amounts of data and computing power, which can give already dominant companies a substantial advantage. This statement alludes precisely to the situation of Microsoft and OpenAI: an established company providing essential resources (data, cloud, capital) to an AI partner, potentially raising barriers for smaller competitors who lack access to such resources.
So far, the Microsoft–OpenAI partnership has yielded notable results in innovation, such as the resounding success of ChatGPT and its integration into several products. On the business side, Microsoft has seen its market capitalization soar and has gained the perception of an AI leader, to the point of alternating with Apple as the world's most valuable company since 2023. However, this same multifaceted influence of Microsoft over OpenAI has raised regulatory alerts in the US and Europe, where authorities worry about excessive power over such a strategic technology. The dilemma posed is complex: how to encourage partnerships that accelerate innovation without allowing them to result in a duopoly or oligopoly over the next major technology platform? The answer is still being worked out by the relevant authorities, while the market evolves rapidly.

Apple, OpenAI, and Microsoft: A Complex Relationship
Amid Microsoft's head start with OpenAI, Apple found itself in the unusual position of needing to cooperate with rivals to avoid becoming irrelevant in AI. Historically, Apple prefers to develop its key technologies internally or to quietly acquire promising startups. But faced with the surge of OpenAI and other leaders, the company adopted remarkable pragmatism: in 2024, Apple forged a deal to integrate ChatGPT (OpenAI) technology directly into the iPhone's operating system. According to a report by Bloomberg's Mark Gurman, this "once unlikely" partnership with Sam Altman (CEO of OpenAI) became necessary for Apple to regain ground in generative AI. The announcement was expected to be a highlight of WWDC 2024, and although the technical details have not been made fully public, it is understood that OpenAI would collaborate to boost natural language and content-generation capabilities on Apple devices.
This arrangement creates a curious situation: OpenAI, heavily backed by Microsoft, has also started collaborating with Apple, which competes with Microsoft on several fronts. From OpenAI's perspective, it makes sense to distribute its technology to as many users as possible (including those in the Apple ecosystem) and to diversify partnerships. For Apple, relying on OpenAI meant immediate access to cutting-edge AI without having to wait years for internal R&D. However, the triangular Apple–OpenAI–Microsoft relationship comes with sensitivities. Regulators and analysts fear that an excessive concentration of Big Tech around a few AI platforms will create a kind of cartelization of innovation: if OpenAI simultaneously serves Microsoft and Apple, for example, is there any room left for another company to compete or offer alternatives? It is therefore not surprising that soon after news of the Apple–OpenAI pact emerged, both Microsoft and Apple withdrew from holding seats on OpenAI's board of directors, in part to avoid the perception of collusion or excessive control.
Indeed, in July 2024 it was revealed that Microsoft, which had an observer seat on OpenAI's board, had resigned from that position, and that Apple, which had planned to appoint a representative, had also stepped back from the OpenAI board in the face of increasing regulatory scrutiny. Regulators in the US and Europe had expressed concerns about the disproportionate influence Microsoft could exert over OpenAI; Apple's simultaneous presence on the board of the leading AI startup would raise even more alarms, suggesting that Big Tech could "close the club" and dictate the direction of AI in private. Under pressure, OpenAI announced that it will no longer have observers on its board following Microsoft's departure, in an attempt to keep its governance at a "safe distance" from large corporations. Still, sources close to the FTC indicated that this measure is insufficient to allay concerns about the Microsoft–OpenAI partnership, making it clear that surveillance will continue.
Apple, for its part, has not gone all-in on OpenAI. Reports indicate that the company also maintained active negotiations with Google over the use of Gemini, Google's advanced generative model, possibly for cloud functions or visual AI capabilities. This demonstrates that Apple is seeking alternatives and does not want to be at the mercy of a single AI vendor, especially one strongly aligned with rival Microsoft. Timing is a crucial factor: Apple plans to introduce generative AI capabilities "later in 2024" in its products, and the solution could involve a mix of its own and third-party models. Thus, while an on-device component could be served by the internal Ajax model, more demanding text or image generation could come through partnerships with OpenAI or Google, as sketched below. This hybrid ecosystem reflects the geopolitical and competitive complexity of AI: even giants like Apple must negotiate access to rival technologies to deliver value to their users.
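Purely as an illustration of the hybrid architecture described above, the sketch below shows how a request router might decide between a hypothetical on-device model and a cloud partner. Every name here (Request, OnDeviceModel, CloudProvider, HybridRouter, the word-count threshold) is an assumption invented for the example; none of it reflects Apple's, OpenAI's, or Google's actual APIs or design.

```python
from dataclasses import dataclass

# Hypothetical illustration of a hybrid on-device / cloud AI stack.
# None of these classes correspond to real Apple, OpenAI, or Google APIs.

@dataclass
class Request:
    prompt: str
    needs_image_generation: bool = False

class OnDeviceModel:
    """Stands in for a small local model (an 'Ajax'-style LLM, hypothetically)."""
    def answer(self, prompt: str) -> str:
        return f"[on-device] short answer to: {prompt}"

class CloudProvider:
    """Stands in for a third-party API such as a cloud partner's model."""
    def __init__(self, name: str):
        self.name = name

    def answer(self, prompt: str) -> str:
        return f"[{self.name}] detailed answer to: {prompt}"

class HybridRouter:
    """Routes simple queries locally and heavy ones to the cloud partner."""
    def __init__(self, local: OnDeviceModel, cloud: CloudProvider, max_local_words: int = 8):
        self.local = local
        self.cloud = cloud
        self.max_local_words = max_local_words

    def handle(self, req: Request) -> str:
        # Heuristic: image generation or long prompts go to the cloud.
        heavy = req.needs_image_generation or len(req.prompt.split()) > self.max_local_words
        target = self.cloud if heavy else self.local
        return target.answer(req.prompt)

if __name__ == "__main__":
    router = HybridRouter(OnDeviceModel(), CloudProvider("cloud-partner"))
    print(router.handle(Request("What time is my next alarm?")))
    print(router.handle(Request("Write a 1,000-word essay on antitrust law in AI markets")))
```

The design point this toy example tries to make is simply that the routing decision itself becomes a point of competitive and regulatory interest: whoever controls the router decides which provider gets the valuable queries.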
It is instructive to note a parallel between this situation and the Apple–Google search deal. For years, Apple accepted billions from Google to keep Google Search as the default search engine on iOS, a deal now accused by the DOJ of being an anticompetitive practice that reinforced Google's search monopoly. In AI, Apple is again considering strategic partnerships rather than competing directly, whether with OpenAI or Google, tacitly admitting that it has fallen behind in this technology. Legally, this raises a question: can large-scale partnerships between competitors, in certain contexts, constitute antitrust violations? To date, technology licensing agreements have not received the same level of legal challenge as mergers or explicit cartels. However, if Apple and Google, for example, enter into a pact to divide or share AI capabilities (like Gemini on the iPhone), regulators could assess whether this reduces each company's incentive to compete independently, a concern similar to the search case. Apple therefore walks a fine line: collaborating with rivals may accelerate its entry into AI, but such moves will be scrutinized to ensure they do not represent a combination of powers to the detriment of the market.
Talent Acquisitions and Microsoft's Foray into Inflection AI
In addition to corporate partnerships, another striking aspect of the AI race is the acquisition of talent and entire teams outside traditional M&A channels. An exemplary case involves the startup Inflection AI. Founded in 2022 by entrepreneur Mustafa Suleyman (co-founder of DeepMind) and researcher Karén Simonyan, Inflection AI quickly gained prominence in the industry with ambitions to develop advanced conversational AI assistants. Microsoft, which had already led a US$1.3 billion investment round in Inflection AI in 2023, surprised the market in March 2024 by hiring Suleyman, Simonyan, and several key members of the Inflection AI team to lead a new consumer AI division at Microsoft. Suleyman took over as Executive Vice President and CEO of Microsoft AI, a group created specifically within the company, reporting directly to Satya Nadella, while Simonyan became chief scientist of that division. In essence, Microsoft absorbed the brains behind a promising startup without formally acquiring the company.
This move, described by analysts as "one of the strangest deals," raises important legal and market considerations. On the one hand, Inflection AI continued to exist as a separate entity, but emptied of its leadership and a significant portion of its engineers, precisely after receiving a large investment in which Microsoft had participated. This can be seen as a "disguised acquisition": instead of buying the company, which would likely trigger traditional antitrust review, Microsoft opted to hire away its key employees en masse. The UK antitrust authority (CMA) interpreted the maneuver as a potentially reviewable merger event, opening an investigation to determine whether the personnel transfer constituted a "relevant merger situation" under UK law. In September 2024, the CMA concluded that, although the arrangement did indeed amount to an acquisition of certain Inflection assets (essentially human capital), it did not, in that case, create a substantial lessening of competition. The case was thus dismissed, but the mere fact that it was investigated sets a precedent: regulators are paying attention even to talent acquisitions as a means of market concentration.
In its report, the CMA justified its attention to the Inflection AI case by noting that it had identified "an interconnected web of over 90 partnerships and strategic investments involving the same large companies: Google, Apple, Microsoft, Meta, Amazon, and Nvidia." In other words, the recruitment of Inflection employees was viewed in the context of a larger scenario in which Big Tech establishes a myriad of agreements among themselves and with AI startups. Apple appears in this web, although with a lower profile: the company has not publicly made moves as bold as Microsoft's, but it has made its own acquisitions of AI startups in recent years (such as Silk Labs in 2018 and Xnor.ai in 2020, among others) and hired high-profile names, such as former Google AI chief John Giannandrea in 2018. These efforts were aimed at strengthening Apple's internal AI team, but so far they have not translated into product leadership.
Returning to the Microsoft–Inflection case, the repercussions were notable. Inflection AI had heavyweight Silicon Valley investors, and its "mass defection" to Microsoft (as described in a Bloomberg Businessweek article) symbolized for many the difficulty independent startups face in competing with Big Tech resources. At the same time, the CMA's willingness to treat a hiring spree as a merger demonstrates a regulatory trend: expanding control tools beyond share acquisitions to also encompass collaboration agreements and transfers of intangible assets. In the US, the FTC and DOJ have traditionally not analyzed hiring from a merger perspective, but given the British example and the growing strategic importance of AI talent, it is not impossible to imagine similar scrutiny if extreme cases occur domestically. In short, the war for AI talent has become a central part of competitive dynamics, and companies like Microsoft and Apple are willing to pay huge sums or offer prominent positions to attract the best, which, in turn, attracts the attention of regulators concerned with maintaining a level playing field.
From a legal point of view, a question arises here: to what extent can the absorption of teams through hiring be used to circumvent merger control? Traditional antitrust law focuses on acquisitions of equity or marketable assets; such legislation may not easily cover an "acquisition" whose subject matter is employment contracts terminated and re-signed with another company. However, if it were proven that these hires are part of a larger agreement to dismantle a potential competitor (e.g., a startup giving up on competing in exchange for investment and the relocation of its staff to the investing company), regulators could invoke theories of coordinated anticompetitive conduct or agreements in restraint of competition. So far, Microsoft has defended itself by claiming that it enabled one of the world's most successful AI startups and has driven innovation in the industry through its partnerships. It is up to regulators to test the limits of these claims and, if necessary, establish new guidelines to ensure that aggressive talent recruitment is not used to undermine competition.
Nvidia's Dominance in AI Infrastructure
No discussion of the AI race would be complete without addressing the role of Nvidia, described by many as "the mainstay" of the modern AI revolution. Nvidia dominates about 80% of the market for AI accelerator chips, notably high-performance GPUs like the A100 and H100 series, which have become critical components for training and running large AI models. This dominant position made Nvidia one of the most valuable companies in the world in 2023, briefly reaching a trillion-dollar market capitalization, and gave it extraordinary gross margins (between 70% and 80%, well above the semiconductor industry's typical levels). Virtually every major generative AI initiative relies on Nvidia hardware: OpenAI trains its models on Nvidia GPUs in Azure clusters; Meta and Google also make heavy use of Nvidia GPUs (although Google has its own custom TPUs, they primarily serve Google itself); and startups like Inflection AI and Anthropic boast of building supercomputers with tens of thousands of Nvidia GPUs. Nvidia has thus become a nearly irreplaceable supplier, a strategic bottleneck, for any player seeking to compete at the forefront of AI.
Nvidia's dominance has significant competitive and geopolitical implications. In terms of competition, there are classic concerns about infrastructure dependence: if a company controls a critical input (in this case, AI hardware) and also participates in downstream markets, there is a risk of discrimination or foreclosure against rivals. Nvidia, to date, does not directly provide end-to-end AI services (such as hosted machine-learning platforms or proprietary AI models); its business is selling hardware and related software tools. However, the simple fact that so many different companies rely on a single supplier raises dilemmas. For example, if there were a shortage of GPUs, Nvidia could theoretically favor strategic partners or those who pay more, affecting the ability of other competitors to train their models on an equal footing. Furthermore, the extremely high cost of high-end GPUs (a single H100 can cost tens of thousands of dollars) creates barriers to entry: new entrants and independent researchers can hardly afford the necessary infrastructure, reinforcing the power of incumbents with the capital to buy these chips in volume.
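To make the entry-barrier point concrete, here is a rough back-of-envelope calculation. The unit price, cluster size, and overhead factor below are illustrative assumptions chosen only to match the orders of magnitude mentioned in the text, not vendor quotes or reported figures.

```python
# Back-of-envelope estimate of the capital needed for a frontier-scale GPU cluster.
# All figures are illustrative assumptions, not quotes from Nvidia or any buyer.

GPU_UNIT_PRICE_USD = 30_000      # assumed street price of one high-end accelerator
CLUSTER_SIZE = 10_000            # assumed cluster size ("tens of thousands" of GPUs)
SERVER_OVERHEAD_FACTOR = 1.5     # assumed extra cost for hosts, networking, cooling

hardware_cost = GPU_UNIT_PRICE_USD * CLUSTER_SIZE * SERVER_OVERHEAD_FACTOR
print(f"Estimated cluster build-out: ${hardware_cost:,.0f}")
# -> Estimated cluster build-out: $450,000,000
```

Even under these conservative assumptions, the capital outlay lands in the hundreds of millions of dollars before a single model is trained, which is the structural barrier the paragraph above describes.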
Antitrust regulators have already acted to prevent Nvidia from further expanding its dominance. In 2022, opposition from the FTC and from British and European authorities led to the collapse of Nvidia's attempt to acquire ARM Ltd., owner of the Arm chip architecture used in most of the world's mobile devices. The justification was that the merger could stifle innovation and competition in several chip markets, potentially shaping the future of AI chips by placing core technology used by Nvidia's competitors under Nvidia's control. While that case does not directly concern GPUs, it signals that antitrust authorities are willing to intervene when they see a risk of one company becoming the undisputed gatekeeper of a technology ecosystem. In AI, there is currently no talk of action to "break" Nvidia's dominance, but industry stakeholders are on the lookout for alternatives: companies like AMD and startups like Cerebras and Graphcore are trying to offer competing chips, and large buyers (hyperscalers) are experimenting with developing chips in-house to reduce dependence. Microsoft is reportedly working on a proprietary AI chip, Amazon has Trainium, and Google has its TPUs. However, none of these initiatives has yet substantially reduced Nvidia's market share.
On the geopolitical front, Nvidia is at the center of the technology dispute between the United States and China. Recognizing the strategic importance of AI chips, the U.S. government has imposed strict export controls that prohibit the sale of Nvidia's most advanced GPUs (such as the A100 and H100) to China without special licenses. Nvidia even developed "capped" versions (A800, H800) for the Chinese market, but in 2023 new restrictions closed those loopholes as well. Such measures aim to slow AI progress in China and preserve the Western advantage, but they also have the side effect of cementing Nvidia as a near-exclusive supplier to the US and its allies. Chinese companies like Alibaba and Baidu now face difficulties obtaining cutting-edge hardware, while American and European companies freely access Nvidia technology, a competitive advantage enabled by government policy. Nvidia has thus become a key player not only in the market but also in national and international strategy. Its "benevolent dominance" is tolerated by Western authorities in part because it serves the geopolitical interest of keeping this critical know-how away from adversaries. However, if Nvidia were ever to abuse this dominance (through unfair trade practices) or venture into vertically integrating AI services, it is not unlikely that it would face robust antitrust investigations.
For Apple specifically, Nvidia's dominance is a complicating factor. Apple develops its own chips (M-series for Macs, A-series for iPhones) and prides itself on their efficiency for embedded AI, but these chips are not suited to training giant language or vision models at scale, a task that would still require Nvidia hardware or third-party clouds. There are rumors that Apple is investing in its own AI backend and even server projects, but in the short term, any generative AI initiative from Apple, such as integrating ChatGPT into the iPhone, will depend on third-party infrastructure. This reinforces an uncomfortable situation: Apple, a company known for its thorough vertical integration, finds itself dependent on rivals at the software level (OpenAI/Microsoft or Google) and on a near-monopoly supplier at the hardware level (Nvidia). This scenario certainly fuels internal discussions about how long Apple can tolerate these dependencies and whether, in the long term, it will need to seek autonomy in high-performance AI chips as well, which would be a multi-billion dollar undertaking with uncertain odds of success.
Antitrust Practices in Focus: Lock-in, Vertical Integration and Bundling
The cases above illustrate several practices and market structures that are on regulators' radar for potential anticompetitive effects. Key concepts discussed in the AI sector include technological lock-in, vertical integration of services, and the bundling (packaging) of products and features. Let's analyze each one in context:
- Exclusivity and lock-in: This refers to the difficulty a customer or partner has in switching providers due to costs or contractually or technically imposed barriers. There are clear examples in the AI ecosystem. The Microsoft–OpenAI partnership, for example, involves OpenAI's exclusive use of Azure as a cloud provider. This means that if, hypothetically, OpenAI wanted to run part of its models on Amazon AWS or Google Cloud, it could not do so without violating the agreements; that is a contractual lock-in. Furthermore, companies that begin to rely heavily on certain APIs or proprietary models (such as GPT-4 via the Azure OpenAI Service) may find themselves stuck in that ecosystem due to the high cost of migrating data, reprogramming applications, or training equivalent models in another environment. The FTC, in its January 2025 staff report, warned that partnerships between Big Tech and AI developers can "increase switching costs" for AI partners and affect access to essential inputs. Lina Khan highlighted that such partnerships can "create lock-in, deprive startups of key AI inputs, and reveal sensitive information that undermines fair competition." In Apple's case, although it is in the position of purchasing someone else's technology, there is also potential for lock-in within its ecosystem: if Apple deeply integrates ChatGPT into iOS, developers and users may become dependent on this native functionality, making it harder for competing AI services to enter or be used on the iPhone.
- Vertical concentration: This occurs when a company controls multiple links in the production or value chain, potentially favoring its own business over competitors'. In the AI race, we are seeing significant vertical movements. Microsoft now operates in multiple layers (cloud infrastructure, model development via OpenAI, and end applications such as Office and Windows), which gives it both the incentives and the means to prioritize its own solutions across the entire stack. Apple is traditionally vertically integrated across hardware, software, and services, but in generative AI it temporarily lost its edge in the models/services layer and is trying to regain it through partnerships. Google also exemplifies vertical integration: it owns everything from chips (TPUs) and immense data repositories (Search, YouTube) to proprietary models (PaLM, Bard, Gemini) and consumer products. This integration can lead to anticompetitive behavior if a company uses its power at one level to crush rivals at another. A concrete example: Google was accused by the European Commission of abusing Android (its mobile operating system) to impose its search engine and Chrome by offering mandatory packages to manufacturers; this is vertical abuse via bundling. With AI, we could see similar scenarios: for example, a cloud company denying or hindering access for startups building competing AI assistants because it partners with a specific one, or an operating system (Windows, iOS, Android) tightly integrating just one AI assistant and making life difficult for competing apps. Regulators have already signaled that exclusivity in AI partnerships could "impact access to certain inputs, such as computing resources and engineering talent," which is a typical concern of vertical concentration (one provider dominating critical inputs).
- Bundling: This involves tying products or services together so that access to one necessarily includes another, or confers undue advantages on another. In the context of AI, an emerging issue is the bundling of AI capabilities with established platforms. Microsoft, for example, announced Windows Copilot, essentially placing an AI assistant natively in Windows 11. If this Copilot is always present and integrated into the system, independent assistant developers may find themselves shut out. Similarly, the incorporation of ChatGPT into Bing and the Edge browser, featured prominently in the interface, could be seen as bundling AI with search and the operating system (since Bing is built into Windows). The EU has a history of forcing the separation of such bundles (decades ago it forced Microsoft to offer versions of Windows without Windows Media Player, for example). In 2023, European pressure led Microsoft to announce that it would separate Microsoft Teams from the Office 365 package in Europe, following an investigation into tied selling that harmed videoconferencing competitors. A similar logic can be applied to AI: if AI functionalities become determinants of consumer choice, integrating them exclusively into dominant products may constitute an undue extension of a monopoly. The FTC appears attentive: its broad investigation into Microsoft (reportedly launched in late 2024) is said to examine whether the company's software bundles, now enriched with AI, limit competition. In Apple's case, AI bundling would manifest itself if, for example, Apple restricted certain new AI features to its premium apps or devices, reinforcing its closed ecosystem. While this is part of its business model (e.g., computational imaging capabilities exclusive to Apple hardware), in the field of AI assistants there is a public interest in ensuring interoperability and choice. European regulators, through the Digital Markets Act (DMA), already require gatekeepers like Apple to allow alternative app stores and interoperability of certain messaging services. In the future, one can imagine requirements for interoperability of AI assistants or prohibitions on unfair self-preferencing, for example, preventing Apple or Microsoft from programming their assistants to search only in their own engines or stores.
In short, lock-in, vertical integration, and bundling practices in the AI sphere are under intense scrutiny. The FTC staff report published in 2025 explained that cloud companies' partnerships with AI developers gave the former "rights of consultation, control and exclusivity" and access to partner information, creating risks of dependence and market foreclosure. This scenario reflects classic competition law concerns adapted to a new industry. Lawyers and regulators are thus applying long-standing concepts, such as avoiding supplier lock-in and preventing "tied sales," to AI contracts and smart product launches. The goal is to ensure that AI innovation remains open and contestable: that a startup with a revolutionary idea is not stifled because all the doors (infrastructure, distribution, capital) are controlled by incumbents through these practices. The effectiveness of this regulatory oversight will only be proven over time, as concrete cases are evaluated and possibly litigated, setting precedents specific to the AI sector.
AI Regulation: Geopolitical Context and International Comparison
AI regulation has become a global issue, with different approaches in the United States, Europe, and Asia, influenced by legal philosophies as well as geopolitical and economic interests. In the US, there is currently no comprehensive federal AI law equivalent to what Europe has enacted, which does not mean a lack of oversight: supervision occurs through agencies such as the FTC, the DOJ, and even the CFPB (in the case of algorithmic bias in credit, for example), applying existing competition and consumer protection laws. The European Union, on the other hand, has adopted a more direct and preventive strategy: in 2024 it finalized the text of the AI Act, the world's first comprehensive legal framework for AI. This European regulation follows a risk-classification approach, imposing strict obligations on "high-risk" AI systems (in areas such as healthcare, education, credit, and employment) and even banning certain uses of AI deemed unacceptable, such as real-time biometric surveillance and citizen social scoring systems. Fines for non-compliance with the AI Act are draconian, reaching up to 7% of a company's annual global revenue, similar in magnitude to GDPR fines for privacy violations and indicating the weight the EU places on compliance.
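For a sense of scale, the short snippet below works through the maximum exposure for a hypothetical company; the revenue figure is invented purely for illustration, while the 7% (AI Act) and 4% (GDPR) caps are the headline rates cited above and commonly reported for the two regimes.

```python
# Illustrative comparison of maximum fine exposure under the EU AI Act (7%)
# and the GDPR (4%) for a hypothetical company. The revenue is an invented figure.

annual_global_revenue_usd = 50_000_000_000   # hypothetical: $50 billion in annual revenue

ai_act_cap = 0.07 * annual_global_revenue_usd
gdpr_cap = 0.04 * annual_global_revenue_usd

print(f"AI Act maximum fine: ${ai_act_cap:,.0f}")   # $3,500,000,000
print(f"GDPR maximum fine:   ${gdpr_cap:,.0f}")     # $2,000,000,000
```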
This regulatory disparity already evokes a sense of déjà vu: GDPR versus the absence of a federal privacy law in the US, as legal commentators have noted. Global companies face the dilemma of adopting the highest (European) standards worldwide from the outset or trying to segment compliance by jurisdiction, which can be costly and complex. In the context of generative AI, such as chatbots and image-generation models, the EU has included provisions requiring providers of these systems to implement safeguards against illegal content, disinformation, and other risks, as well as transparency obligations (labeling generated content, for example). The US, in contrast, has moved forward with non-legislative measures: the White House issued ethical guidelines (the Blueprint for an AI Bill of Rights), and in October 2023 President Biden signed an executive order on safe, secure, and trustworthy AI, imposing requirements to share the safety-testing results of highly advanced ("frontier") models with the government and outlining standards for AI governance in the public sector and for federal contractors. However, executive orders have limited scope and can change direction under new administrations. It is no coincidence that the future of the FTC and its aggressive stance on technology became uncertain after the 2024 elections: with the prospect of a change in leadership, there were reports of plans to replace Lina Khan as the agency's chair. This illustrates how, in the US, regulatory will on AI can fluctuate with the political climate, while in the EU there is a more consolidated, cross-party consensus in favor of restrictions.
Another fundamental geopolitical element is US–China competition in AI. China, in addition to investing heavily in national champions (Baidu, Alibaba, Huawei, Tencent, SenseTime, etc.), has implemented its own rules on generative AI, in effect since August 2023. These Chinese rules require, for example, that AI-generated content reflect "socialist values" and prohibit the generation of material that endangers national security or defames the country's reputation; in short, they are strongly focused on content control and censorship. There are also requirements for registering algorithms with the government and for holding companies accountable for misuse of their models. In antitrust terms, China has historically been more permissive toward the creation of national tech giants, but has recently tightened the screws on domestic internet companies (such as Alibaba/Ant Group and Didi) in the name of "anti-monopoly" enforcement. It remains to be seen how China will handle potential mergers in the AI sector; one indication was the fact that Chinese regulators never cleared Nvidia's attempted acquisition of ARM, aligning with the Western position in that case. Generally speaking, however, the Chinese regulator is calibrated more to ensure that AI serves the purposes of the state than to preserve market competition.
Amid this bipolar scenario, other countries and regions are seeking to position themselves. The United Kingdom, post-Brexit, is trying to balance a stance that encourages AI with moderate regulation. The British Prime Minister hosted a summit on AI safety in November 2023, signaling an ambition to take a leading role in the global debate. The British competition authority (CMA), as we have seen, is already actively investigating the sector from a competition perspective, having published a report in 2023 highlighting concerns about a small number of actors controlling foundational AI models and the associated infrastructure. In the CMA's view, there is an "interconnected set of more than 90 partnerships" among Big Tech in AI that warrants special attention. This British assertiveness contrasts with the hesitation of some continental European bodies, which until now have focused more on privacy issues (via GDPR) or broad legislative initiatives (the AI Act), without concrete high-profile antitrust investigations into AI.
Finally, international coordination will be crucial. AI is by nature transnational: models trained in one country can be accessed globally, data flows across borders, and military advances in AI in one country trigger defensive responses in others. The US and the EU, despite their differences, have sought alignment on principles, for example through the voluntary code of conduct for generative AI providers announced in 2023, of which OpenAI, Microsoft, Google, and Anthropic are signatories. These codes, however, are soft law and do not replace the need for robust oversight. The geopolitical challenge is therefore twofold: (1) internally, each bloc must regulate without stifling the innovation that can guarantee economic and strategic advantage; and (2) externally, it must avoid an uncontrolled "arms race" or standards so divergent that they harm interoperability and trade. In the specific case of Apple, Microsoft, and OpenAI, all US players, the geopolitical issue is more about competition with Chinese companies and their influence on global markets. US regulators may be cautious about weakening their leading companies too much, lest they cede ground to Chinese rivals. European regulators, lacking a strong local equivalent (there is no "European OpenAI" to match), tend to impose strict rules on foreign players, aiming to protect their citizens and create space for smaller players. This tension between innovation leadership and regulatory control defines much of the current debate.
Strategic Partnerships vs. Mergers and Acquisitions: Limits and Legal Considerations
Given the many cases in which partnerships and strategic investments shape the AI market, a central question arises: are these alliances becoming a way to get around the limits imposed on mergers and acquisitions? Traditionally, when two companies wanted to join forces structurally, they announced a merger or acquisition, subjecting themselves to review by antitrust agencies (such as the FTC and DOJ in the US, the European Commission in the EU, and CADE in Brazil). In the technology sector, however, and especially in AI, there is a proliferation of alternative arrangements: large minority stakes, joint ventures, exclusivity agreements, distribution partnerships, shared board seats, and talent acquisitions, among others. Often, these arrangements give the companies involved advantages similar to those of a merger, but without the formality (and potential regulatory objection) of a full corporate union.
Microsoft's investment in OpenAI exemplifies this trend. Rather than buying OpenAI (which would, in fact, be unlikely given the organization's hybrid status and original philanthropic commitments), Microsoft opted to inject capital, enter into technology exclusivity agreements, and gain substantial influence, all while keeping OpenAI a separate entity. The practical result (business integration, strategic alignment, mutual benefits) approaches a de facto merger but does not legally constitute majority ownership. As a Reuters report noted, antitrust authorities are aware of the risk of partnerships being used to "circumvent mandatory merger review processes." Lina Khan, in announcing the January 2024 inquiry, emphasized that the FTC study would seek to clarify whether investments and partnerships by dominant companies with AI developers can distort innovation or undermine fair competition. This concern implies that such agreements could, in practice, substitute for an acquisition that might face a block if formally attempted.
There is also an aspect of legal limits to consider: commercial partnerships and contracts are primarily subject to antitrust enforcement ex post (that is, they can be challenged if they actually cause anticompetitive effects), while mergers are regulated ex ante (requiring prior approval if they exceed certain thresholds). Non-corporate arrangements thus escape ex ante scrutiny and would only be stopped if, after implementation, it became evident that they violated antitrust law (for example, if they constituted a non-compete or illegal market-sharing agreement, or created a monopoly). This asymmetry can encourage companies to seek the "safer" path of partnerships. For example, Amazon invested up to US$4 billion for a minority stake in Anthropic in 2023, tied to a preferential-use agreement for AWS. This type of cooperation would hardly be barred in advance, but what if, combined with similar deals, it led to a situation where Amazon, Microsoft, and Google each partnered with a major AI developer (Anthropic, OpenAI, and perhaps a third) and no leading startup could remain independent? Some critics compare this scenario to a Big Tech "fiefdom" over AI startups: each giant has its own vassal, ensuring that no external threat arises.
From a legal-doctrinal point of view, a question in vogue is: should antitrust agencies apply the essential facilities doctrine or conduct rules to regulate these quasi-integration partnerships? For example, if OpenAI (with Microsoft) or Anthropic (with Amazon/Google) become so prevalent that any new entrant must interact with them, would it be appropriate to mandate non-discriminatory access to their models or infrastructure? So far, this would be considered extreme intervention, as these startups are still new and not established monopolies. However, the history of technology shows a winner-takes-all tendency (think of Windows dominating operating systems, or Google dominating search), and regulators have been criticized for waiting too long to act. The current context suggests a change: there is greater preventive vigilance regarding market structures that could crystallize future dominance. Apple's close cooperation with OpenAI, followed by its backtracking on the board seat in the face of pressure, illustrates how even the perception of collusion or excessive integration can generate swift reactions.
A related theme is partnerships as substitutes for blocked acquisitions. Large companies facing regulatory barriers to direct mergers may be content to form alliances. Suppose, for example, that, anticipating opposition, Microsoft never attempts to acquire OpenAI but maintains such a deep tie that it effectively controls the company's direction, achieving the objective of an acquisition (influence and synergy) without triggering legal merger-control mechanisms. Regulators may attempt to demonstrate that, in such cases, even without a formal acquisition, the conduct violates Section 7 of the Clayton Act (in the US), which prohibits acquisitions of assets or stock that may substantially lessen competition. A creative interpretation could hold that acquiring exclusive contractual rights and influence over governance is equivalent to acquiring "competitively significant assets." Alternatively, agencies could invoke Section 5 of the FTC Act (unfair methods of competition) or Article 102 of the Treaty on the Functioning of the European Union (abuse of a dominant position) to attack the anticompetitive outcomes of partnerships. However, this is new and challenging territory; it would require foresight and perhaps litigation on less settled ground.
It is worth mentioning that strategic partnerships are not inherently negative. Many innovations arise from collaborations between companies, which share risks and knowledge. The danger arises when they serve to eliminate competitive pressures that would otherwise exist. For example, if Apple and OpenAI remain partners for a long time, Apple may lack the incentive to develop a robust in-house competitor model, and OpenAI may prioritize serving large partners over offering broad access to independent third parties. Similarly, Microsoft, by partnering with OpenAI, chose not to invest in its own fundamental AI research (it even closed part of its AI lab, according to reports examined by the FTC). This may be effective in the short term, but some experts warn of the risks of oligopolization: a few generalist AI platforms controlled by consortia of large corporations.
In short, the limits of partnerships as substitutes for mergers are still being tested. The informal agreement between the FTC and DOJ to split the investigations (the FTC focusing on Microsoft/OpenAI and the DOJ on Nvidia) signals that both see something structural in these cases that deserves scrutiny comparable to a merger review. Internationally, authorities like the CMA have shown a willingness to classify hirings or partnerships as "merger situations" where applicable. On the companies' side, the response tends to be that these partnerships are pro-competitive: they enable rapid innovation and challenge incumbents (Microsoft argues that its tie-up with OpenAI "enabled one of the most successful AI startups in the world"), and the alternative for these startups would likely have been bankruptcy or stagnation. Indeed, it cannot be ignored that OpenAI, despite all the hype, burned through resources quickly and required intensive capital to train ever-larger models. The line between necessary cooperation and harmful collusion is thin, leaving it to lawyers and economists to clarify each case individually.
Conclusion
Apple's saga in the AI race, from its early lead with Siri, through years of silence, to its rush to close last-minute partnerships with OpenAI and others, illustrates how even the biggest tech company cannot afford to ignore a technological turning point. Apple, Microsoft, OpenAI, and Nvidia are intertwined in a unique competitive and collaborative web: Apple relies on OpenAI's (and possibly Google's) expertise to regain relevance in AI; OpenAI relies on Microsoft's capital and infrastructure (even as it seeks to expand its ecosystem by also allying with Apple); Microsoft relies on OpenAI to challenge rivals and boost its cloud business; and all rely on Nvidia at the core. This unprecedented interdependence between giants and startups has redefined the boundaries of what it means to compete and what it means to cooperate in the technology sector.
From a regulatory and legal perspective, we are living through a defining moment. The US FTC and DOJ, along with international counterparts (the European Commission and the UK's CMA, among others), are on high alert to prevent the AI revolution from being captured by a handful of dominant companies. It has already been identified that strategic partnerships can generate classic antitrust effects: lock-in, increased switching costs, privileged access to inputs and sensitive information, and the potential exclusion of competitors. The concrete cases discussed, from OpenAI's board to Inflection AI's hiring, show regulators testing the limits of their authority, whether by persuading companies to withdraw from certain arrangements or by aggressively investigating new forms of concentration.
Europe, through the AI Act, has chosen a path of broad regulation of AI content and safety, complemented by its already robust antitrust framework, which can be applied as market conditions consolidate. The US is betting, for now, on applying existing laws case by case, combined with market studies under Section 6(b) and possibly new guidelines. In this chess game, Big Tech is also adjusting its moves: Apple avoids flashy acquisitions and prefers discreet deals (or silent investments, as seen in its reported interest in and subsequent withdrawal from OpenAI's 2023 capital increase); Microsoft has learned not to demand a board seat so as not to attract scrutiny, settling for de facto influence; OpenAI is trying to maintain a governance structure that allows it to collaborate with multiple partners without being accused of being the puppet of a single one; and even companies like Nvidia, under the spotlight, will seek to demonstrate a commitment to serving everyone neutrally to avoid being summoned to the dock.
Ultimately, the race for AI is not just technological, but also legal and geopolitical. Leadership in AI can determine economic and military advantages between nations, hence the export controls, the international forums, and the rhetoric of "not being left behind." At the same time, this leadership cannot be achieved at the expense of competition and consumers: there is a clear effort to avoid repeating, in the AI era, the untouchable giants of traditional Big Tech. The story is still unfolding. In the coming years, we will see whether Apple can catch up with its rivals in artificial intelligence (perhaps by "developing, hiring, or acquiring" its way to the top, as Gurman predicted), and whether it will do so in a way that is compatible with keeping markets open. We will also see whether Microsoft–OpenAI and similar partnerships prove beneficial and innovative, or whether evidence of exclusionary behavior emerges that forces strong regulatory action.
For now, it is clear that regulatory surveillance already influences corporate strategies: decisions such as Microsoft's and Apple's departure from the OpenAI board, or the unbundling of Teams at Microsoft, show that the threat of intervention is taken seriously at the highest levels. This tense dialogue between companies seeking innovation and governments seeking to safeguard competition will define the pace and contours of the AI revolution. And, at the center of this stage, iconic companies like Apple will try to prove that, despite having started behind, they can contribute to and responsibly lead the next chapter of the technological revolution, without falling into the trap of excessive ambition or regulatory myopia. The legal and technology worlds will closely monitor each move, aware that what is at stake is nothing less than the future of global digital infrastructure.
REFERENCES
BASS, Dina; NYLEN, Leah. Microsoft, Apple Drop OpenAI Board Plans. Bloomberg, 2024. Available at: https://www.bloomberg.com. Accessed on: April 5, 2025.
FEDERAL TRADE COMMISSION. Staff Report on Cloud/AI Partnerships. Washington, DC: FTC, 2024. Available at: https://www.ftc.gov/system/files/ftc_gov/pdf/p246201_aipartnerships6breport_redacted_0.pdf. Accessed on: April 5, 2025.
GURMAN, Mark. Apple Made Once-Unlikely Deal With Sam Altman. Bloomberg, 2024. Available at: https://www.bloomberg.com. Accessed on: April 5, 2025.
GURMAN, Mark. Apple's AI Lag: An Analysis. Bloomberg/WindowsCentral, 2024. Available at: https://www.bloomberg.com or https://www.windowscentral.com. Accessed on: April 5, 2025.
KHAN, Lina. FTC 6(b) Inquiry Press Release. Washington, DC: FTC, 2024. Available at: https://www.ftc.gov. Accessed on: April 5, 2025.
MENDES, Marcus. If Not OpenAI, Then Who Does Apple Need to Buy? MacMagazine, 2024. Available at: https://www.macmagazine.com. Accessed on: April 5, 2025.
REUTERS. US Regulators Set Stage for Antitrust Probes. Reuters, 2024. Available at: https://www.reuters.com. Accessed on: April 5, 2025.
TECHCRUNCH. FTC Flags Big Tech–AI Partnerships. TechCrunch, 2024. Available at: https://techcrunch.com. Accessed on: April 5, 2025.
TECHCRUNCH. Microsoft Hires Inflection AI Team. TechCrunch, 2024. Available at: https://techcrunch.com. Accessed on: April 5, 2025.
THE NEW YORK TIMES. Regulatory Scrutiny on AI Giants Intensifies. The New York Times, 2024. Available at: https://www.nytimes.com. Accessed on: April 5, 2025.
THE VERGE. Apple/Google AI Negotiations. The Verge, 2024. Available at: https://www.theverge.com. Accessed on: April 5, 2025.
THE WASHINGTON POST. AI Industry Faces Increased Antitrust Inquiries. The Washington Post, 2024. Available at: https://www.washingtonpost.com. Accessed on: April 5, 2025.
SILICONUK. CMA on Inflection AI Hiring. SiliconUK, 2024. Available at: https://www.siliconuk.com. Accessed on: April 5, 2025.




