Synopsis
What criteria should we use to assess the many security and ethical facets of developing and deploying autonomous intelligent systems (AIS)? AI/ML/RPA dominate our discussions and affect every area of IT and human-machine interaction. This emerging technology carries an unprecedented number of potential risks, yet most organisations, and indeed many countries, understand little about AI development or the security and ethical considerations for AI solutions. Regulations differ by country and sometimes by self-interest. How does an organisation demonstrate that it takes AI risk seriously, and how does it build the trust needed for market adoption and public confidence? We will look at assessing AI and at how the IEEE CertifAIEd mark could be one mechanism by which organisations demonstrate a commitment and capability for the continuous assessment of transparency, accountability, algorithmic bias, and privacy, building trust in their AIS.