DeFedOblivio

Data Sovereignty in Cross-Enterprise AI

Decentralized federated learning is becoming increasingly important for cross-enterprise AI, as it allows reliable models to be trained even when individual companies hold only limited data. At the same time, data protection regulations and the loss of control over data once it has been trained into a model often prevent such collaboration. Fujitsu and Fraunhofer ISST have therefore developed "DeFedOblivio": a framework that allows individual contributions to be removed from a shared model in a targeted and secure manner.

Ship and communication network concept

The Challenge

Many companies do not have sufficiently large, balanced, or high-quality datasets to train robust AI models on their own. Decentralized federated learning therefore opens up the possibility of developing shared models across companies without centrally consolidating raw data. In practice, however, collaboration often fails due to a lack of control: once data contributions have been incorporated into the model, they cannot be easily withdrawn, even if the data later proves to be erroneous, sensitive, or problematic from a regulatory standpoint. Companies would thus have to relinquish control, flexibility, and data sovereignty—a hurdle that prevents many collaborations.


Our Service 

Together with Fujitsu, Fraunhofer ISST has developed "DeFedOblivio," a framework that enables decentralized, federated "unlearning"—that is, the erasure of data from an AI model—within collaborative enterprise ecosystems. Data contributions from individual participants can be removed from a jointly trained model in a targeted, traceable, and secure manner without having to restart the entire training process. The solution is committee-driven and organized fairly: all participants have the same rights and obligations, decisions are collectively validated, and even in the event of erroneous or manipulative behavior by individual participants, the system remains reliable and functional.
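The committee-driven validation described above can be sketched as follows. This is a minimal, hypothetical illustration of collective decision-making with equal rights per participant; the names and the simple majority rule are assumptions, not DeFedOblivio's actual API or protocol.

```python
from dataclasses import dataclass

# Hypothetical sketch: an unlearning request is executed only if a
# strict majority of equal-rights committee members approves it, so a
# few erroneous or malicious voters cannot force a decision alone.

@dataclass
class UnlearnRequest:
    requester: str   # participant submitting the request
    target: str      # participant whose contributions should be removed

def committee_approves(request: UnlearnRequest, votes: dict[str, bool]) -> bool:
    """Approve only with a strict majority of all committee members."""
    yes = sum(1 for approved in votes.values() if approved)
    return yes > len(votes) / 2

# Two of three members approve -> the request passes.
request = UnlearnRequest(requester="A", target="C")
print(committee_approves(request, {"A": True, "B": True, "C": False}))  # True
```

A strict majority over all members (rather than over votes cast) is one simple way to keep the outcome stable even if individual participants abstain or behave maliciously.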

Diagram: data contributions over time
With federated unlearning, when a data provider leaves, the decentralized AI model reverts to the state it was in before that provider joined. Training then continues from that point without the departed provider's data.

The Result  

With "DeFedOblivio", collaborative AI for cross-organizational ecosystems becomes significantly more trustworthy and practical. Companies retain control over whether and for how long their data contributions remain incorporated in a shared model. Erroneous or undesirable contributions can be selectively removed without completely retraining the collaborative model. In this way, the solution strengthens data sovereignty, builds trust in collaborative AI development, and lowers the barriers to participation in federated learning processes.


The Partners

  • Fujitsu Research

Podcast Episode: How Can AI Forget? [German]

Decentralized Federated Unlearning Without Loss of Quality


Developing a proprietary AI model isn’t practical for every company. Instead, federated learning allows companies to use a decentralized, jointly trained model with distributed rights. But what happens if a participant drops out of the model? Do the other participants then have to start training the model all over again? At the 2026 Hannover Messe, Fujitsu Research and Fraunhofer ISST will demonstrate how a decentralized AI model can learn to forget. In this episode of “Die Datenräumer,” Janosch Haber and Florian Zimmer present their joint solution, “DeFedOblivio”: a framework that allows contributions to be selectively and securely removed from a collaborative model.