The Challenge
Many companies lack datasets that are large, balanced, or high-quality enough to train robust AI models on their own. Decentralized federated learning makes it possible to develop shared models across companies without consolidating raw data centrally. In practice, however, collaboration often fails for lack of control: once data contributions have been incorporated into a model, they cannot easily be withdrawn, even if the data later turns out to be erroneous, sensitive, or problematic from a regulatory standpoint. Companies would thus have to give up control, flexibility, and data sovereignty, a hurdle that prevents many collaborations.
Our Service
Together with Fujitsu, Fraunhofer ISST has developed “DeFedOblivio,” a framework that enables decentralized, federated “unlearning,” that is, the erasure of data from an AI model, within collaborative enterprise ecosystems. Individual participants’ data contributions can be removed from a jointly trained model in a targeted, traceable, and secure manner without restarting the entire training process. The solution is committee-driven and organized fairly: all participants have the same rights and obligations, decisions are validated collectively, and the system remains reliable and functional even if individual participants behave erroneously or maliciously.
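To make the underlying idea concrete, the following toy sketch shows one simple way a federated aggregator could support removing a participant's contribution after the fact: per-participant model updates are retained, and "unlearning" re-aggregates the global model without the withdrawn update instead of retraining every participant from scratch. This is an illustrative assumption only, not the actual DeFedOblivio protocol; the class and method names (`ToyFederatedAggregator`, `submit`, `unlearn`) are invented for this example, and the real framework additionally involves committee-based validation not shown here.

```python
# Illustrative sketch only -- NOT the DeFedOblivio protocol. Shows the
# general idea of federated unlearning: keep per-participant updates so a
# contribution can be removed and the global model re-aggregated without
# a full restart of training.
from typing import Dict, List


class ToyFederatedAggregator:
    def __init__(self, dim: int) -> None:
        self.dim = dim
        # participant id -> that participant's model update (a weight vector)
        self.updates: Dict[str, List[float]] = {}

    def submit(self, participant: str, update: List[float]) -> None:
        """Record one participant's local model update."""
        self.updates[participant] = update

    def aggregate(self) -> List[float]:
        """Federated averaging: element-wise mean over all stored updates."""
        n = len(self.updates)
        return [
            sum(u[i] for u in self.updates.values()) / n
            for i in range(self.dim)
        ]

    def unlearn(self, participant: str) -> List[float]:
        """Drop one participant's contribution and re-aggregate."""
        self.updates.pop(participant, None)
        return self.aggregate()


agg = ToyFederatedAggregator(dim=2)
agg.submit("A", [1.0, 2.0])
agg.submit("B", [3.0, 4.0])
agg.submit("C", [7.0, 10.0])
print(agg.aggregate())    # global model averaged over A, B, C
print(agg.unlearn("B"))   # [4.0, 6.0] -- B's contribution removed
```

In a real decentralized setting, the removal step would of course have to be proposed, validated, and executed by the committee of participants rather than by a single trusted aggregator; the sketch only illustrates why retaining attributable contributions makes targeted removal cheaper than retraining.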