AGENDA

15th January 2025, Artificial Intelligence Compliance Professional (AICP) Certification

The development and establishment of an AI compliance governance structure, roadmap, and framework

  • Addressing the emergence of complex and challenging AI Governance regimes.
  • Grow your network with like-minded industry professionals and stay ahead of the AI Compliance Risk and Corporate Governance curve.
  • Speakers will provide insight and tangible advice on the latest issues and concerns surrounding AI.
Timing Topics/Speakers
09:30 Registration and refreshments
10:00 – 10:05 Welcome, Inspired AI Minds!
10:05 – 10:35

How to navigate the requirements of the EU AI Act and other AI regulations, and strategically cut compliance costs.

  • Key assumptions and clear methods to align your company with EU regulations.
  • The common challenges companies encounter when attempting to comply with new AI regulations.
  • Practical guidance on how companies typically adopt a structured approach to AI compliance and other related regulations.
  • How companies can enhance their chances of successfully complying with AI regulations and minimise non-compliance risks.
10:45 – 11:05

The AI Act introduces new rules on AI development and use

  • Foundations of AI: Introduction to machine learning, deep learning, neural networks, and key AI concepts.
  • Data Governance for AI: Implement best practices for data quality, compliance, and governance in AI initiatives.
  • Compliance with the Act will require a major effort, in our view; regulatory complexity and litigation risks are key concerns
  • Identify the implications for the tech, auto and healthcare sectors.
  • Early analysis of the Act’s impact on firms will allow businesses to better understand potential risks and opportunities.
11:05 – 11:15 Coffee/Tea Break
11:15 – 11:35

Regulating AI in the corporate world

  • The new AI rules aim to mitigate potential adverse impacts of AI and support the industry by offering regulatory sandboxes for AI testing
  • Regulatory uncertainty will remain a challenge for firms until the codes of practice and changes to sectoral legislation are implemented
  • Management should be aware of the Act’s rules that may affect their own operations as well as portfolio companies
  • Practical AI Applications: Differentiate types of machine learning algorithms and apply them to real-world business scenarios.
11:35 – 11:55

A primary criticism of the AI Act is its complexity and lack of clarity

  • Identify and clarify the definitions of ‘AI systems’ that lead to uncertainty about the scope and application of corporate AI.
  • Identify the areas of risks and concerns that could impede effective AI implementation and compliance.
  • Business Integration: Identify AI opportunities tailored to your organisation, including customer journey optimisation.
11:55 – 12:15

The AI rules on synthetic content labelling, watermarking, transparency and clear and distinguishable disclosures

  • The most challenging AI areas for firms to comply with, including addressing the risks of malicious content creation and the production and dissemination of misinformation.
  • AI-generated misinformation and disinformation, and the key risks that may cause a material crisis.
  • Responsibilities of deployers of AI systems that generate or manipulate audio, image or video content.
  • Interacting with AI systems, and transparency in regulatory compliance to build trust in the technology and mitigate risks.
12:15 – 13:00

 Key risk issues that the AI Act seeks to address

  • Risk and Compliance Management: Assess and mitigate AI risks, impacts, and controls across the AI lifecycle. Comply with frameworks like ISO 42001, and responsible AI principles.
  • Disinformation and misinformation
  • Exacerbation of bias, discrimination and fairness
  • Threats to cybersecurity
  • Risks to human safety and data privacy
  • Facilitating industry development and innovation
  • Ensure proper vetting and integration of third-party AI systems and software.
13:00 – 13:30 Lunch Break
13:30 – 14:00

Systemic, computational, technical, and human cognitive biases in decision-making processes across the AI lifecycle.

  • Technical documentation and record-keeping
  • Data and data governance
  • Human oversight and AI literacy
  • Increased awareness on sustainability and AI energy consumption
14:00 – 14:15

 Testing standards for AI 

  • Implementing the regulatory focus on GPAI model testing to identify and mitigate systemic risks.
  • Requirements for GPAI models with systemic risk to perform model evaluation, including conducting and documenting adversarial testing.
14:15 – 14:45

Implement AI-based software, risk, impact and compliance assessments

  • AI-Powered Risk Assessment: Leverage AI to identify and prioritize potential risks, enabling proactive risk management and mitigation strategies.
  • Impact Analysis: Utilize AI to assess the potential impact of AI-driven decisions on various stakeholders, including customers, employees, and society.
  • Compliance Assurance: Employ AI to monitor compliance with relevant regulations and industry standards, ensuring ethical and legal AI practices.
14:45 – 15:00

Tea/Coffee Break

15:00 – 15:40

Drive AI transformation with use cases and good governance policies

  • Identify High-Impact Use Cases: Discover and implement AI solutions that deliver significant business value, such as automating processes, enhancing customer experiences, or optimizing operations.
  • Establish Robust Governance Framework: Implement strong governance policies to ensure ethical, responsible, and transparent AI development and deployment.
  • Foster a Culture of Innovation: Encourage experimentation, collaboration, and continuous learning to drive AI adoption and innovation across the organization.
15:40 – 16:30

Identification of AI risks, compliance, transparency, accountability and performance management

  • Risk Identification and Mitigation: Proactively identify potential risks associated with AI systems, such as bias, security breaches, and unintended consequences, and implement strategies to mitigate them.
  • Compliance and Ethical AI: Ensure adherence to relevant regulations and ethical guidelines, promoting fairness, transparency, and accountability in AI development and deployment.
  • Performance Monitoring and Optimization: Continuously monitor AI systems’ performance, identify areas for improvement, and optimize models to achieve desired outcomes.
  • Strategic Planning: Develop business cases, allocate resources, and define actionable steps for adopting AI.
16:30 – 17:30 Exam and certification
*The Agenda/program is subject to change. The certification seminar language is English.