Imagine waking up one morning to find that your favourite app must now ask your explicit permission before using even a small piece of your data. That shift is not a distant prospect; it is happening right now.
In this article, we examine the major regional data privacy and governance laws implemented in 2025, and consider what the future might look like as artificial intelligence (AI) becomes increasingly central to how data is used.
Key regional developments in 2025
India
The Digital Personal Data Protection Act, 2023 (DPDP Act) took centre stage as the Digital Personal Data Protection Rules, 2025 were formally notified, establishing a consent-based, rights-based regime for "digital personal data". The main features include:
- Organisations (data fiduciaries) must obtain unambiguous, specific, and informed consent before processing personal data.
- Data subjects have rights of access, correction, erasure, and withdrawal of consent, and can nominate another person to exercise these rights on their behalf.
- Strong penalties apply for breaches of the requirements, with higher penalties for breaches involving children's data or for failing to notify a breach.
- The law applies extra-territorially: an organisation outside India may fall under it if it processes the data of Indian residents.
- Implications: this is a transformative step for India's digital economy. Companies operating in India need to rethink data flows, implement "privacy-by-design", retrain teams, and adapt to the new regime.
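To make the consent-based regime concrete, here is a minimal sketch of what a purpose-bound, revocable consent record could look like in code. The names (`ConsentRecord`, `may_process`) and fields are illustrative assumptions, not terms from the DPDP Act or Rules.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch in the spirit of the DPDP regime: consent must be
# specific to a purpose, informed, and revocable. Not legal advice.

@dataclass
class ConsentRecord:
    data_principal_id: str   # the individual whose data is processed
    purposes: set            # purposes the individual agreed to
    granted_at: datetime
    withdrawn: bool = False  # withdrawal must be as easy as granting

def may_process(record: ConsentRecord, purpose: str) -> bool:
    """Allow processing only for a purpose covered by live, unwithdrawn consent."""
    return (not record.withdrawn) and (purpose in record.purposes)

consent = ConsentRecord(
    data_principal_id="user-42",
    purposes={"order_fulfilment", "support"},
    granted_at=datetime.now(timezone.utc),
)

print(may_process(consent, "support"))       # a covered purpose
print(may_process(consent, "ad_targeting"))  # never consented: must be blocked
consent.withdrawn = True
print(may_process(consent, "support"))       # withdrawal stops all processing
```

The point of the sketch is that purpose limitation becomes a runtime check, not a policy document: any processing path that cannot name a consented purpose simply does not run.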
United Kingdom
The Data (Use and Access) Act 2025 (DUAA) received Royal Assent in June 2025, and below are some highlights:
- It amends the UK’s data protection laws (including the Data Protection Act 2018 and the UK-adapted version of the General Data Protection Regulation (“UK GDPR”)) to clarify rules for innovation and enable responsible data sharing.
- It introduces specific changes to automated decision-making, access rights, international transfers, legitimate interests, and the use of cookies/storage technologies.
- It begins a phased implementation, with changes scheduled to be implemented in stages between 2025 and 2026.
- Implications: the UK is signalling a pivot, balancing strong data protection with more flexible frameworks to support innovation and competitiveness.
European Union
The European Union already has the General Data Protection Regulation (GDPR) as the backbone of its data protection framework. However, new developments emerged in 2025:
- First, the European Health Data Space (EHDS) Regulation entered into force in March 2025, governing access to and exchange of health data across the EU for both primary use (individual care) and secondary use (including research), the latter subject to strict safeguards.
- Moreover, in mid-2025 the European Commission proposed simplifications for smaller organisations, including revising the GDPR record-keeping requirements for organisations with fewer than 750 employees, to reduce compliance burdens without weakening protection.
- Lastly, the EU introduced new proposals on AI as part of a larger digital regulation initiative, including a delay of the "high-risk" AI rules until December 2027.
Why this is important: the EU continues to promote its data regulation model globally, using data protection and regulation as a strategic tool to govern the digital economy, including AI.
What to expect next with AI and data governance
With these frameworks in flux, attention turns to how they will interact with AI. A few trends are already clear and likely to develop quickly.
- Data training and algorithmic use
AI systems rely on large amounts of training data. As data protection frameworks mature, organizations will need to show that this use respects consent, legitimate interest, purpose limitation, and algorithmic transparency. For instance, the Indian regime does not yet have AI-specific rules, but they are likely to follow.
New regulatory guidance is expected to require audits of AI models, explanations of automated decisions, and risk impact assessments, particularly for systems deemed "high risk."
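One way to operationalise such a screening step is a simple pre-deployment checklist that maps a system's uses to a risk tier and the obligations it triggers. The categories and obligation labels below are illustrative assumptions, not taken from any specific regulation.

```python
# Illustrative pre-deployment screening for an AI system, loosely modelled on
# the kinds of uses regulators flag as "high risk" (biometrics, profiling,
# critical infrastructure). All thresholds and labels are assumptions.

HIGH_RISK_USES = {"biometric_identification", "profiling", "critical_infrastructure"}

def assess(system: dict) -> dict:
    """Return a screening result: risk tier plus the obligations it triggers."""
    high_risk = bool(HIGH_RISK_USES & set(system.get("uses", [])))
    obligations = ["record_keeping"]
    if system.get("automated_decisions"):
        obligations.append("human_review_route")  # decisions must be explainable and contestable
    if high_risk:
        obligations += ["model_audit", "risk_impact_assessment", "post_market_monitoring"]
    return {"tier": "high" if high_risk else "standard", "obligations": obligations}

result = assess({"uses": ["profiling"], "automated_decisions": True})
print(result["tier"], result["obligations"])
```

A checklist like this is deliberately coarse; its value is forcing teams to classify a system before it ships, so the heavier obligations are planned rather than discovered.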
- Cross-border data flows and localisation
With frameworks like India’s DPDP Act reaching beyond national borders, and the UK modifying its rules on international transfers, organizations operating internationally will face complex, overlapping obligations, particularly for AI, where training data may be collected and aggregated across jurisdictions.
Future focus: Data transfer contracts, local jurisdiction obligations, and “data trusts” may become more common.
- High-risk AI regulation enforcement
There are clear regional efforts to regulate AI systems explicitly deemed high-risk for rights (biometric identification, profiling, critical infrastructure). The EU’s proposed delay of its high-risk AI rules shows a regulator trying to balance innovation with oversight.
Companies pursuing AI innovation will need to prepare for certification, document their processes, build in human-in-the-loop controls, and plan for post-deployment monitoring.
- Privacy-by-design and built-in governance
New laws suggest that embedding privacy and governance into the systems from the beginning is a way forward. The Indian DPDP rules explicitly mention "privacy-by-design."
With AI, this means more than model governance; it means embedding data lifecycle controls, bias mitigation, and transparency as defaults, not afterthoughts.
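As a sketch of what "privacy-by-design as a default" can mean in practice, here is data minimisation enforced at the point of collection, with a retention limit attached to each purpose. The purpose names, field lists, and retention periods are illustrative assumptions.

```python
# A minimal privacy-by-design sketch: each declared purpose defines the only
# fields it may receive and how long they may be kept. Values are assumptions.

from datetime import timedelta

PURPOSE_POLICY = {
    "model_training": {
        "allowed_fields": {"age_band", "region", "interaction_history"},
        "retention": timedelta(days=365),
    },
    "support": {
        "allowed_fields": {"email", "ticket_text"},
        "retention": timedelta(days=90),
    },
}

def minimise(record: dict, purpose: str) -> dict:
    """Keep only the fields the declared purpose actually needs."""
    allowed = PURPOSE_POLICY[purpose]["allowed_fields"]
    return {k: v for k, v in record.items() if k in allowed}

raw = {"email": "a@example.com", "age_band": "25-34", "region": "EU", "ssn": "..."}
print(minimise(raw, "model_training"))  # email and ssn never enter the training pipeline
```

The design choice is that fields a purpose does not need are dropped before they enter the system at all, so downstream components (including AI training jobs) cannot misuse data they never received.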
- Harmonisation challenges and global fragmentation
While many countries will look to Europe for inspiration (and, to an extent, a rights-based framework), differences such as the UK's tilt toward innovation and India's consent-heavy approach will create a fragmented landscape. For multinational organisations and AI developers, this means "regulatory mapping" will be a necessity.
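Regulatory mapping can be sketched as a simple union of obligations across the jurisdictions a product ships in. The jurisdiction keys and obligation labels below are heavily simplified stand-ins for the regimes discussed above, not legal advice.

```python
# A sketch of "regulatory mapping": aggregate the obligations a deployment
# picks up across jurisdictions. Labels are simplified illustrations.

OBLIGATIONS = {
    "IN": {"explicit_consent", "breach_notification", "privacy_by_design"},
    "UK": {"legitimate_interest_assessment", "transfer_rules", "adm_safeguards"},
    "EU": {"gdpr_records", "ehds_health_data_rules", "ai_act_high_risk_prep"},
}

def requirements_for(markets: list) -> set:
    """Union of obligations across every market a product ships in."""
    combined = set()
    for market in markets:
        combined |= OBLIGATIONS.get(market, set())
    return combined

# A product live in both India and the EU inherits both regimes' duties at once.
print(sorted(requirements_for(["IN", "EU"])))
```

Even this toy version makes the fragmentation point: obligations only accumulate as markets are added, so the compliance baseline for a multinational product is the strictest combination, not any single regime.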
Final thoughts
The year 2025 is a tipping point for data privacy and governance: India’s new consent-based system is in place, the UK’s approach looks to innovate, while the EU is adding specialised regimes related to health and AI. At the same time, the rise of AI is not just a technical shift; it is forcing organizations and regulators to rethink how data is used, shared, and governed.
If you are a business leader, technology executive, or compliance professional, the message is straightforward: data governance and AI readiness should be viewed as two sides of the same coin.
Take action now to map your data flows, review your AI models for regulatory risk, prepare for cross-border complications, and build governance into your systems. The new wave of compliance is not coming; it is here.