Hong Kong Authorities Warn LinkedIn Users Ahead of AI Data Training Rollout in 2025
- Why Are Hong Kong Authorities Sounding the Alarm?
- How Did LinkedIn's Original Plan Spark Controversy?
- What Protections Did Hong Kong Secure for Users?
- How Can Users Opt Out of AI Training?
- Why Is This AI Data Scramble Happening Now?
- What Does This Mean for the Future of AI Development?
- Frequently Asked Questions
Hong Kong's privacy watchdog has issued a stark warning to LinkedIn users as the platform prepares to use personal data for generative AI model training starting November 3, 2025. The alert comes amid growing global concern about tech giants scraping user information for AI development, with LinkedIn initially facing regulatory pushback before implementing stricter controls for Hong Kong users. The episode mirrors a similar move by Meta last year and highlights the escalating tension between AI innovation and data privacy rights.
Why Are Hong Kong Authorities Sounding the Alarm?
The Office of the Privacy Commissioner for Personal Data (PCPD) has taken a proactive stance, reminding LinkedIn users to carefully review upcoming changes to the platform's privacy policy. Commissioner Ada Chung Lai-ling emphasized that users must understand how their professional profiles, resumes, and public activity data will feed AI training algorithms. "This isn't just about terms and conditions - it's about maintaining control over your digital identity in an AI-driven future," Chung stated during a press briefing last week.

How Did LinkedIn's Original Plan Spark Controversy?
Back in September 2024, LinkedIn announced plans to use member profiles, posts, resumes, and public activity to train its AI models, initially including data from Hong Kong alongside the UK, EU, Canada, and other regions. The proposal hit immediate regulatory roadblocks. "We saw default opt-in settings that simply didn't meet Hong Kong's personal data protection standards," explained a PCPD spokesperson. The watchdog's intervention forced LinkedIn to suspend Hong Kong data processing in late 2024 until proper consent mechanisms were implemented.
What Protections Did Hong Kong Secure for Users?
After six months of negotiations between October 2024 and April 2025, LinkedIn agreed to significant concessions:
- Hong Kong users maintain control over AI training use of their data
- Strict compliance with Hong Kong's Personal Data (Privacy) Ordinance
- Exclusion of private messages from training datasets
- Automatic exclusion of users under 18
Microsoft-owned LinkedIn also committed to clearer disclosure about data sharing with Microsoft, its subsidiaries, and partners such as OpenAI, in which Microsoft holds a substantial investment. "This sets an important precedent for how multinational platforms must adapt to local data protection standards," noted BTCC market analyst James Wong.
How Can Users Opt Out of AI Training?
For professionals wanting to keep their data out of AI development, LinkedIn provides an opt-out path:
- Navigate to Account Settings > Data Privacy
- Select "Generative AI Data Improvement"
- Toggle off "Use my data for AI content creation model training"

Why Is This AI Data Scramble Happening Now?
The rush for training data reflects an industry-wide challenge. Goldman Sachs' Chief Data Officer Neema Raphael recently revealed that leading AI models like OpenAI's ChatGPT and Google's Gemini have nearly exhausted their available training data. OpenAI co-founder Ilya Sutskever warned last year that this data drought could "unquestionably end" the rapid advancement of AI unless new sources emerge.
Platforms like LinkedIn represent goldmines for AI developers - professional profiles offer structured, verified personal data that's incredibly valuable for training specialized models. "It's the difference between teaching with encyclopedia entries versus real-world case studies," explained a machine learning engineer at a Hong Kong fintech firm who asked to remain anonymous.
What Does This Mean for the Future of AI Development?
The Hong Kong-LinkedIn standoff illustrates the growing tension between technological progress and privacy rights. While autonomous AI systems promise revolutionary capabilities - from real-time cyber defense to personalized professional tools - their development increasingly depends on access to personal data. As Commissioner Chung noted, "The question isn't whether we should advance AI, but how we do so without compromising fundamental privacy principles."
Frequently Asked Questions
When will LinkedIn start using my data for AI training?
LinkedIn plans to begin using member data for generative AI model training starting November 3, 2025, unless users specifically opt out before that date.
What types of my LinkedIn data will be used?
The platform will use information from your profile, public posts, resume details, and professional activity, but explicitly excludes private messages and data from users under 18.
How is Hong Kong's approach different from other regions?
Hong Kong secured additional protections through regulatory negotiations, including a suspension of data processing until proper consent and opt-out mechanisms were in place, the exclusion of private messages and under-18 users, and stricter local compliance requirements under the Personal Data (Privacy) Ordinance.
Can Microsoft use my LinkedIn data for its AI projects?
Yes. LinkedIn has disclosed that data may be shared with its parent company Microsoft, Microsoft's subsidiaries, and partners such as OpenAI, in which Microsoft holds a substantial investment, unless you opt out through LinkedIn's privacy settings.
Why are tech companies so desperate for training data?
Leading AI models have largely exhausted publicly available training datasets, forcing developers to seek new sources of high-quality, structured information to continue improving their systems.