European Parliament Blocks AI Tools on Lawmakers’ Devices

The European Parliament has disabled built-in AI tools on official devices, citing cybersecurity and privacy risks associated with uploading sensitive data to cloud services.

By Maria Konash

The European Parliament has prohibited lawmakers from using built-in AI tools on their work devices due to cybersecurity and privacy concerns. An internal IT email indicated that the security of data uploaded to AI company servers cannot be guaranteed and that the full extent of the information being shared is still under review.

The restrictions apply to AI chatbots and assistants including Anthropic’s Claude, Microsoft Copilot, and OpenAI’s ChatGPT. Data uploaded to these tools can be subject to requests from U.S. authorities, raising confidentiality concerns. The models may also use uploaded information to improve performance, which could inadvertently expose sensitive content to other users.

This decision aligns with Europe’s stringent data protection rules. Last year, the European Commission proposed changes aimed at easing restrictions on training AI models with European data, a move that drew criticism for potentially favoring U.S. tech companies. The restriction also comes amid broader scrutiny of U.S. tech platforms, as government subpoenas have compelled companies such as Google, Meta, and Reddit to hand over user data, sometimes without judicial oversight.
