Hundreds of employees at Google have called on Chief Executive Officer Sundar Pichai not to permit the company’s artificial intelligence tools to be used by the United States Department of Defense in classified environments.
In an open letter, staff members working on AI-related projects expressed concern over reports that Google is in discussions with the Pentagon regarding the possible deployment of its technology for classified workloads.
The employees argued that Google’s AI systems are not suitable for sensitive military settings where mistakes could carry severe consequences.
According to the letter, workers believe their direct involvement with the technology gives them a responsibility to warn against unethical or dangerous applications.
They said they want artificial intelligence to be developed for the benefit of humanity rather than for uses they described as inhumane or highly harmful.
The letter also emphasized that current AI systems are imperfect and capable of making errors, raising concerns about relying on them in national security or warfare contexts.
Among the risks highlighted by employees were lethal autonomous weapons and large-scale surveillance systems.
The signatories warned that decisions made at this stage could have lasting consequences not only for society but also for Google’s commercial interests and global reputation.
The internal protest reflects a broader debate across the technology sector over how advanced AI should be used by governments, militaries, and intelligence agencies.
In recent years, many engineers and researchers have voiced concern about the possibility of AI being applied to surveillance, targeting systems, misinformation campaigns, and battlefield decision-making.
Google has faced similar internal resistance before. In 2018, employee protests over Project Maven—a Pentagon initiative involving AI analysis of drone footage—led the company to step back from renewing the contract and later publish AI principles intended to guide ethical use.
The latest letter suggests tensions remain unresolved as governments seek access to cutting-edge commercial AI systems.
Reports indicate Google is now engaged in talks with the Defense Department over possible classified uses of its technology, though full details of any potential agreement have not been publicly disclosed.
The issue comes at a time when competition among major AI firms has intensified, with companies balancing commercial opportunities, national security partnerships, and public trust.
Earlier this year, OpenAI reportedly reached an agreement with the Pentagon that included restrictions against using its technology for large-scale domestic surveillance or to control autonomous weapons.
That arrangement has fueled discussion about whether AI firms should participate in defense work only under strict safeguards.
Supporters of cooperation between technology companies and defense agencies argue that democratic governments need access to advanced tools for cybersecurity, logistics, intelligence analysis, and national defense.
Critics counter that once systems are integrated into classified military operations, transparency becomes limited and ethical oversight becomes more difficult.
For Google, the dispute places Pichai and senior leadership in a delicate position, caught between employee concerns, business strategy, and geopolitical realities.
The company has invested heavily in AI and remains one of the leading global competitors in cloud computing and machine learning services.
How it chooses to engage with military clients could shape employee morale, public perception, and future policy standards across the industry.
The open letter demonstrates that many workers inside major technology firms increasingly see themselves not just as engineers, but as stakeholders in determining how powerful technologies are used.
Whether Google changes course, adds safeguards, or proceeds with defense partnerships, the controversy highlights one of the defining questions of the AI era: who should control advanced systems, and for what purpose?
