Introducing first-generation AI services
We are past the point of no return. AI is here to stay, and we as a society bear the ethical and social responsibility for its safe and secure use. DEKRA is helping make AI trustworthy. In the 2023 fiscal year, we launched the first services specifically for testing AI models and systems.
As Director of AI & Advanced Analytics, Dr. Valero is responsible for all of DEKRA’s AI and advanced analytics activities. Together with his team, he has introduced a first-generation range of AI services to the market. Dr. Valero holds a doctorate in machine learning and has more than 15 years of experience in data science and the implementation of AI projects.
Why do we need AI safety and security services?
Probably the best-known AI application is ChatGPT. It offers impressive functionality, and its use has spread rapidly around the world. But it also brings risks: responses can be inaccurate, biased, or even discriminatory. The stakes are even higher for AI systems used in fields like automated driving and robot-assisted surgery, where a malfunction can put lives at risk. So there is an urgent need to test and certify AI applications for safety and security, including their data quality and robustness.
How does DEKRA support customers and users of AI?
In recent years, DEKRA has developed its AI services in partnership with leading universities, research institutions, and tech startups. We have leveraged our expertise in testing and certification to develop software, tools, and processes that build a bridge between AI innovation and safety and security. Our first-generation AI services include data quality analysis as well as safety and security testing. We also offer advisory services and training on a wide range of topics, such as AI management systems.
What is your vision for the future of AI and DEKRA’s role in it?
AI technology is advancing at a dramatic pace. Soon, virtually every product or service will either be built with AI or make use of it. As a result, our society needs neutral third parties like DEKRA to test and certify AI models and products independently. We ensure safety and security with our own solutions and by engaging with governments, standardization organizations, and industry alliances. Together, we are developing crucially important AI rules and standards.
Why should organizations offering or using AI models and applications seek certification?
First, to comply with current and upcoming AI regulations such as the EU AI Act. Second, to build the necessary trust with clients and users. Third, because safe and secure AI technology reduces the economic damage caused when AI does not work as it should. And last but not least, because by ensuring safe, secure, and responsible AI, companies can stand out from the competition.