Data Privacy in 2025: What Does Data Privacy Look Like in a Data-Driven World?
With the rapid expansion of technology, especially the rise of AI, data privacy and security are top of mind for many industries—higher ed is no exception. Legal frameworks are being developed around the use of AI and data. To kick off what promises to be a new year of further technological advances as well as fresh questions about what such advances mean for data privacy, we spoke to Anthology’s Compliance, Trustworthy AI, and Privacy Officer Stephan Geering. In this blog post, he shares Anthology’s efforts to safeguard our customers’ data, the potential impact of the EU AI Act, and more.
1. How does Anthology think about data privacy, security, and trustworthy AI?
The world has become increasingly data-driven. As a consequence, countries around the globe have been stepping up efforts to introduce new laws, or update existing ones, on data privacy, security, and trustworthy AI to manage the related risks. These laws are crucial to protecting individuals and to ensuring that companies use data and technology appropriately and safeguard their users’ personal information.
We feel strongly that data privacy, security, and trustworthy AI are not just legal obligations. For us, they are also about establishing and maintaining the trust of our customers. To earn and keep that trust, we are committed to high data privacy, security, and trustworthy AI standards. We demonstrate this commitment through an ambitious certification program that includes ISO, SOC 2, and StateRAMP certifications, and through participation in pledges like the Student Privacy Pledge and the European AI Pact.
2. What have we learnt about the EU GDPR since it came into effect?
The GDPR (EU General Data Protection Regulation) has significantly shaped how organizations approach data privacy since it came into effect in May 2018, and it is now generally considered the unrivalled global standard for data privacy laws. Many experts talk about the “Brussels effect”: the idea that the EU has become a successful exporter of regulatory frameworks that are adopted around the world. Whether this will also be the case for the new EU AI Act, which regulates AI systems, remains to be seen. But it has certainly been the case with the GDPR. Even US state consumer privacy laws and the various bills for a federal US consumer privacy law have been heavily influenced by it. At Anthology, we decided early on to apply the GDPR’s high standards to our customers globally.
3. You mentioned the EU AI Act—how does this new law shape how AI and data privacy will be regulated?
While there are other laws regulating AI around the globe, the EU AI Act is the world’s first comprehensive legal framework for AI. There is a lot of overlap between the principles of the EU AI Act and those of the GDPR (e.g., accuracy, transparency, accountability), but the foundations are quite different. The GDPR is an evolution of the EU Data Protection Directive and is therefore built on established, globally recognised, human rights-based principles developed over decades. For AI regulation, we don’t have the same long-standing experience: most of the principles and frameworks for responsible AI have been developed in the last five years. The EU has therefore built the AI Act on product safety concepts. Data privacy questions regarding the use of AI (e.g., personal information in training data sets) are still governed by the GDPR, and we have seen the EU data protection authorities take a very proactive approach to generative AI. But they will have to coordinate with an array of EU-level and country-level authorities responsible for overseeing the EU AI Act.
4. How does working with technology partners like Amazon Web Services (AWS), Microsoft, and other service providers affect data privacy, security, and trustworthy AI?
At Anthology, we partner with leading technology providers to deliver the best possible solutions for our customers. Like many technology service providers, we rely on third parties for specific product elements, such as hosting, because these partners bring unmatched expertise and scale.
We carefully vet third-party providers to ensure they meet our high standards for data privacy, security, and responsible AI. Partnering with industry leaders like Microsoft and AWS strengthens this commitment. These companies have established mature programs for data privacy, security, and responsible AI, which directly benefit our customers by enhancing the controls around the aspects of our products that they manage.
Because we deliver our products as Software-as-a-Service (SaaS), we take on the responsibility of managing both the technology and the related data privacy, security, and trustworthy AI risks for our customers. This means our customers can trust that their data and AI usage are well managed and protected.
5. Finally, do you have tips for higher education institutions that grapple with data privacy, security, and trustworthy AI?
Implementing data privacy, security, and trustworthy AI programs across an institution can be challenging, but it’s necessary. Effective risk management requires strong, cross-functional programs with executive-level support. We’ve seen this with the GDPR and US state privacy laws, and now with trustworthy AI programs and the EU AI Act. The good news is that AI programs can build on existing risk management efforts. For more insights, check out Anthology’s AI Policy Framework and EU AI Act white paper.