Over the past decade, Europe has set an example for the world through its pioneering regulation of thorny technological issues such as data privacy (with its now-famous GDPR) and content moderation on social networks. Recently, Europe extended this policy-making leadership with the AI Act, the broadest regulatory attempt to date to grapple with the ever-expanding capabilities of artificial intelligence. The text provisionally agreed on 8 December 2023 enjoys broad political consensus among the Member States, paving the way to final enactment over the next 24 months.
Among the many features of the Act, two stand out. The first is a set of provisions for dealing with generative AI (which wasn't even on the radar when the first draft appeared in April 2021). For example, the "deepfake" images that we are beginning to see must now be clearly labelled as AI-generated.
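The Act does not prescribe a single labelling mechanism, but one simple illustration is attaching a machine-readable provenance tag to an image's metadata. The sketch below, a hypothetical example in Python using the Pillow library, shows one way such a label could be embedded; the tag names are our own invention, not a standard.

```python
# A minimal, hypothetical sketch of machine-readable labelling for
# AI-generated images. The tag names are illustrative only; the AI Act
# does not mandate this particular mechanism.
from PIL import Image, PngImagePlugin

def label_as_ai_generated(in_path: str, out_path: str) -> None:
    """Embed a provenance tag in a PNG image's metadata."""
    image = Image.open(in_path)
    metadata = PngImagePlugin.PngInfo()
    metadata.add_text("ai_generated", "true")           # hypothetical tag
    metadata.add_text("generator", "example-model-v1")  # hypothetical tag
    image.save(out_path, pnginfo=metadata)

label_as_ai_generated("deepfake.png", "deepfake_labelled.png")
```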
A second highlight of the Act is its risk-based approach, which scales the rigour of the required measures to the potential for harm posed by a given application of AI. This focuses regulatory attention where it really counts while leaving more flexibility in less critical areas, allowing innovation to flourish in this fast-moving sector.
Trust-IT offers examples of this risk-based approach in action from its own portfolio of projects. The SmartCHANGE project promotes a "smart change" from unhealthy to healthy lifestyles in youth through AI trained on data recording health-related factors. Human oversight of the recommendations emerging from such applications is a requirement of the Act, and one emphasis of the project is the explainability of AI-enabled decision-making. Another requirement of the Act concerns cybersecurity: the privacy-related safeguarding of the large sets of sensitive health-related data on which AI systems are trained. SmartCHANGE includes research on the well-established technique of federated learning, which allows different institutions to contribute their sensitive data to the training of AI applications without the data ever leaving its privacy-protected environment (a simplified sketch follows below).
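To make the idea concrete, here is a minimal sketch of federated averaging, the basic federated-learning algorithm: each institution trains a shared model on its own data locally and exchanges only model parameters, never the data itself. The data, model, and number of institutions are synthetic placeholders; SmartCHANGE's actual implementation will of course differ.

```python
import numpy as np

# A minimal sketch of federated averaging (FedAvg) for a linear model.
# Each "institution" holds its own sensitive data; only model weights
# are exchanged. All data and parameters here are synthetic placeholders.

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One institution trains locally; its raw data never leaves."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Three institutions, each with private (synthetic) health-style data.
true_w = np.array([0.5, -1.0, 2.0])
datasets = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    datasets.append((X, y))

global_w = np.zeros(3)
for round_ in range(20):
    # Each site updates the shared model on its own data...
    local_ws = [local_update(global_w, X, y) for X, y in datasets]
    # ...and only the resulting weights are averaged centrally.
    global_w = np.mean(local_ws, axis=0)

print("learned:", np.round(global_w, 2), "true:", true_w)
```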
The AI SPRINT project is a good example of how the flexibility of the Act's risk-based approach avoids creating unwanted obstacles to innovation. Instead of focusing on a specific application of AI, the project has created a toolkit of components that flexibly support arbitrary AI-enabled use cases. For example, the Privacy Assessment component gives AI applications a secure mechanism to evaluate a model's resilience to privacy attacks, while the Scone component provides Trusted Execution Environments to ensure cybersecurity. The use cases built with this toolkit range from high-risk (a personalised healthcare application, subject to significant requirements) to lower-risk (inspection and maintenance, which faces considerably fewer requirements under the Act and consequently enjoys more breathing room for innovation).
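One common form such a privacy-resilience check can take is a membership inference test: measuring whether a model is noticeably more confident on the examples it was trained on than on unseen ones, which would let an attacker infer who was in the training data. The sketch below, using synthetic data and scikit-learn, is our own illustration of the technique, not the AI SPRINT component itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Illustrative membership-inference test: does the model betray which
# records it was trained on? Synthetic data; not AI SPRINT's actual code.

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 10))
y = (X[:, 0] + rng.normal(scale=0.5, size=400) > 0).astype(int)

train_X, train_y = X[:200], y[:200]   # "members" of the training set
test_X, test_y = X[200:], y[200:]     # "non-members"

model = LogisticRegression().fit(train_X, train_y)

def confidence(model, X, y):
    """The model's predicted probability for each example's true label."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

# A simple threshold attack: higher confidence suggests "member".
scores = np.concatenate([confidence(model, train_X, train_y),
                         confidence(model, test_X, test_y)])
membership = np.concatenate([np.ones(200), np.zeros(200)])

# An AUC near 0.5 means the attack fails: the model leaks little about
# who was in its training data. Well above 0.5 signals privacy risk.
print("membership-inference AUC:", round(roc_auc_score(membership, scores), 3))
```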
As a final example, the 6GStart, SNS ICE and SNS OPS projects deal with the next generations of wireless communication, 5G and 6G. By themselves, these technologies are neutral. But what if they are used to manage critical infrastructure such as water, energy, or emergency response systems? AI is playing an important part in innovations in the management of these infrastructures, from load management in energy distribution systems to network architecture and orchestration. In other applications of 5G/6G, such as consumer gaming, the flexibility of the AI Act will help European industry continue to pursue its competitive advantages without unnecessary restrictions holding it back in this exciting new sector.