Introduction

When using Large Language Models (LLMs), often referred to by the broader term Artificial Intelligence (AI), there are many information security risks to consider, especially because these technologies are changing and evolving rapidly. Alongside positive uses, new ways to attack and exploit these systems are being invented just as quickly. The risks noted below are therefore not exhaustive, and users and creators of these systems should consult with the Office of Information Security whenever in doubt. Please also see the Statement on Guidance for the University of Pennsylvania (Penn) Community on Use of Generative Artificial Intelligence.

Risks When Using Someone Else's LLM (e.g. ChatGPT)

When using a third-party LLM such as ChatGPT, specific information security risks should be kept in mind.

Creating Your Own LLM or an Application That Leverages LLM Tools

When creating your own LLM, or a tool that leverages someone else's LLM as part of a larger system, a variety of new, AI-specific attack techniques may be used against your application. Awareness of these potential attacks is a good first step toward avoiding them.
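One widely discussed example of such an attack is prompt injection, where untrusted input smuggles new instructions into the prompt an application sends to an LLM. The sketch below is a minimal, hypothetical illustration in Python: the pattern list, function names, and delimiter format are assumptions for illustration only, not a complete or recommended defense.

```python
import re

# Hypothetical sketch: naive pattern-based screening of untrusted input
# before it is placed into an LLM prompt. Real defenses are more involved;
# this only illustrates the attack surface.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def looks_like_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches common prompt-injection phrasing."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def build_prompt(user_input: str) -> str:
    """Keep trusted instructions separate from untrusted user text."""
    if looks_like_prompt_injection(user_input):
        raise ValueError("Input rejected: possible prompt injection")
    # Untrusted text is clearly delimited rather than mixed into the
    # trusted instructions, so the model can be told to treat it as data.
    return (
        "You are a helpful assistant. Answer only the question below.\n"
        "--- BEGIN UNTRUSTED USER INPUT ---\n"
        f"{user_input}\n"
        "--- END UNTRUSTED USER INPUT ---"
    )

if __name__ == "__main__":
    print(build_prompt("What are this week's dining hall hours?"))
```

Note that pattern matching alone is easily bypassed by rephrasing; the more important design choice in this sketch is keeping untrusted input clearly delimited from trusted instructions.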