A HYBRID EVENT - NOV 30, 2023

Discuss the Future of AI Security

This event brings together industry experts, practitioners, and enthusiasts for a day of insightful discussions, networking, and knowledge sharing. Hosted by OASIS Open and Cisco, this summit delves into the evolving landscape of AI security. Engage with seasoned cybersecurity and AI professionals as they dissect the challenges and opportunities inherent when deploying AI solutions. Discuss the top AI threats and risks, AI vulnerability management and disclosure, challenges when monitoring AI implementations, upcoming regulations, AI Bill of Materials (AI BOMs), and more.

AI SECURITY SUMMIT

EVENT STARTS AT 9:00 AM EDT

CLICK HERE FOR WEBEX INFORMATION

SCHEDULE
NOV 30, 2023 - 9AM EDT

Speakers:
Omar Santos, Distinguished Engineer, Cisco & OASIS Board of Directors
Jamie Clark, General Counsel, OASIS Open

Speaker: Daniel Bardenstein, CTO, Manifest

The latest Artificial Intelligence (AI) / Machine Learning (ML) models, including OpenAI’s GPT4 and Meta’s Llama2, are rapidly proliferating around the world. From critical infrastructure, to defense, to financial institutions, the latest large language models (LLMs) and other Generative AI models are increasingly becoming incorporated into the systems that underpin our society. Do we know how those models are built, tested, and trained?

Do we know if software vendors are baking them into their products behind the scenes, with access to enterprise data? Support our research into the first big step towards AI transparency: the ML bill of materials, or MLBOM (aka AIBOM).

Panel:
Jamie Clark, General Counsel, OASIS Open
Ben Rossen, AI Policy and Regulation, OpenAI
Matt Fussa, Trust Officer, Cisco
Michael Meehan, General Counsel, Howso
Will Goddin, CIO, Howso

An exploration of AI regulation, where we examine the critical need for comprehensive regulatory frameworks in AI, encompassing ethical, legal, and technical dimensions. This presentation delves into the multifaceted landscape of AI governance, spotlighting its potential influence on industries, society, and global policymaking. We'll address key topics, such as data privacy, transparency, accountability, algorithmic fairness, and international collaboration, to encourage a deeper understanding of the world of AI governance.


Morning Break (15 minutes)

Panel:
Moderator: Daniella Taveau, President, Bold Text Strategies
Lucy Lim, Research Scientist, Google DeepMind
George Shea, DAD-CDM co-chair, FDD
Pablo Breuer, DISARM co-author

Explore strategies and solutions to protect digital domains from AI-Enhanced Disinformation Campaigns. This session provides practical insights, action steps, and emphasizes the vital role of the OASIS DAD-CDM Open project. AI-enhanced disinformation is now a reality with the presence and active utilization of FraudGPT and WormGPT.

Speaker:
Melinda Thielbar, Vice President, Data Science, Fidelity Investments

There are many software open source software packages for AI fairness tests. All popular packages, however, require protected group membership for each person in the data. Protected group membership is often based on information most people consider private, e.g. race or gender identity. Companies often do not collect this data, and many individuals are reluctant to share it–even with companies they trust. This lack of data prevents many practitioners and researchers from testing their AI models for unwanted bias.

In this session, we’ll introduce a state-of-the-art method for calculating fairness tests without collecting personal data on individuals and demonstrate its implementation in Jurity, an open source fairness testing package maintained by Fidelity Investments. First presented at the 2023 Learning and Intelligent Optimization conference in Lion, France, this technique has been tested with simulated and real-world data. Along the way, we’ll discuss how the open source community supports fairness evaluation, Jurity’s unique contributions to the community, and where opportunities still exist for open source developers to support AI fairness evaluation.

Speaker: Kojin Oshiba, Co-Founder, Robust Intelligence

Generative AI applications are hard to productize and operate. One of the main reasons is because it’s difficult to protect Gen AI against security, ethical, and operational risks. The enormous size of the input space and inherent complexity of third-party foundation models make this task more challenging than traditional ML models. Hence, a new paradigm is required to mitigate generative AI risk. In this session, we summarize the new risks introduced by the new class of generative foundation models and applications through several examples and compare how these risks relate to the risks of mainstream discriminative models. We’ll discuss how a combined approach of automated red-teaming and real-time validation can give companies the confidence to securely use Gen AI in production at scale.

Panel:
Amy Rose, PSIRT Leader, NVIDIA
Dr. Lisa Bradley, Senior Director, Product & Application Security, Dell
Dr. Jautau (Jay) White, Open Source Software and Supply Chain Security Strategy, Microsoft & OASIS Board of Directors
Omar Santos, Distinguished Engineer, Cisco & OASIS Board of Directors

What is an AI Security Vulnerability? When do you assign a CVE for an AI-related issue? In this presentation, experts in Product Security Incident Response Teams (PSIRTs), open source, supply chain security, and AI security will navigate the complex terrain of AI system vulnerability disclosure. Gain valuable insights into transparency and different disclosure methods.

Afternoon Break (15 minutes)

Speaker: Dr. Chris Hazard, Co-Founder & CTO, Howso

You just built a model, checked it for performance, used SHAP to make sure that the important features driving the model made sense, so your model is good to go, right? Maybe not. SHAP measures what's important to the model, not the data. The model may contain all sorts of potential vulnerabilities that are exploitable. But where? Which data was problematic? Was there any bad data, erroneous or malicious? And if you could determine concerning data, where did it come from? Using a black box AI is much like trying to secure closed source, proprietary software as a 3rd party.

Join Dr. Chris Hazard as he discusses how instance-based learning has made incredible progress in the past decade as a pragmatic base upon which to build practical, robust AI and ML systems. Instance-based learning means that data basically becomes the model, and every output can be attributed to the exact data that lead to the decision. Modern implementations enable the user to debug the data and understand its inner relationships, without having to trade-off accuracy. Further, such systems can be used for a large variety of tasks, from finding and understanding anomalies, to understanding the certainty around all aspects of the data, to creating synthetic data to reduce organizational vulnerabilities of data exfiltration.

Speaker: Akram Sheriff, Data Science, Cisco

Due to the surge in popularity and recent advancements in the development, distribution, and implementation of Large Language Models (LLMs) for Generative AI (GENAI) applications in both industry and academia, there has been a growing focus on their security, safety, and vulnerability risks. Notably, research has demonstrated that LLMs can be exploited for illicit activities such as fraud, impersonation, and even the creation of malware-based attacks. Given these concerns, it is crucial for AI developers, security experts, and Responsible AI (RAI) professionals to be well-informed about the security challenges associated with deploying LLMs in large-scale production environments.

Gain valuable insights into implementing LLM-based guardrails to enhance the security and reliability of conversational systems. Learn architectural best practices for deploying APIs using LLM frameworks to proactively mitigate common security threats.

This round table will be an opportunity to discuss in more detail the questions raised by attendees, identify prevalent trends, and confront the unresolved challenges that persist in securing AI systems. It will be a chance to build on the momentum of the summit, encouraging a shared understanding and setting the stage for future initiatives.

Participants can anticipate a forward-looking discussion that not only encapsulates the essence of the summit's discussions but also propels the conversation toward the next steps in AI security.

Omar Santos, Distinguished Engineer, Cisco

LOGISTICS

- This is a hybrid event. Space is limited for in-person attendance.
- Registration is required for both virtual (via Webex) and in-person.
- The Summit will be hosted at the Cisco offices in Research Triangle Park, NC (Building 5).
- Complimentary parking is available outside the building.
- Virtual participants will receive the Webex link by November 20th.  

Reserve your spot today!