# Improper Output Handling

Let's talk about web security for a minute. To prevent malicious activity on a website, we make sure to sanitise user input and block anything unsafe. It's the same case with LLMs, but here we also have to give equal importance to securing the outputs.

Improper Output Handling is listed as one of the primary issues in the OWASP Top 10 for LLM Applications 2025, as LLM05.

LLM outputs can lead to several vulnerabilities if they are not sanitised and filtered properly. Models work with many kinds of data, including files, source code, images, and text, so validating the output is of the utmost importance. Below are some of the risks that unfiltered LLM output can introduce.

## Risks

**Frontend Rendering**\
Consider an application like ChatGPT, where the LLM displays its response to a user query in a web interface. If the output isn't filtered, an attacker can manipulate the model into emitting malicious JavaScript or HTML, which the browser then executes: classic cross-site scripting (XSS).

For example, XSS vulnerabilities were discovered in early versions of ChatGPT.
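As a minimal illustration (not tied to any specific product), HTML-escaping model output before it reaches the page neutralises any markup the model was tricked into producing. The function name here is hypothetical:

```python
import html

def render_llm_output(raw: str) -> str:
    # Escape <, >, &, and quotes so the browser treats the model's
    # output as plain text rather than executable markup.
    return html.escape(raw)

malicious = '<img src=x onerror="alert(1)">'
print(render_llm_output(malicious))
# → &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
```

Escaping is the baseline; applications that intentionally render model-generated HTML or Markdown instead need a proper sanitiser with a strict tag/attribute allowlist.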

**Template Injection**\
If the web interface uses a template engine such as Jinja, there is a risk that the output contains malicious template code, which can lead to server-side template injection (SSTI).
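The primary defence is to pass LLM output to the template engine as a context variable, never as template source. As defence in depth, template delimiters can also be stripped from the text; the regex below is an illustrative sketch, not a complete filter:

```python
import re

# Strips Jinja-style expression and statement blocks ({{ ... }}, {% ... %}).
# Illustrative only: the real fix is to render LLM output as data
# (a context variable), never as part of the template source itself.
TEMPLATE_SYNTAX = re.compile(r"\{\{.*?\}\}|\{%.*?%\}", re.DOTALL)

def neutralise_template_syntax(text: str) -> str:
    return TEMPLATE_SYNTAX.sub("", text)

payload = "Summary: {{ self.__init__.__globals__ }} end"
print(neutralise_template_syntax(payload))
# → Summary:  end
```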

**Automated Pipelines**\
Integrating LLMs into external applications and websites is commonplace now, and so are the associated security risks. In advanced use cases, LLMs generate SQL queries and even backend code. If the output contains something malicious, it can lead to classic vulnerabilities like command injection and SQL injection.
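If generated commands are ever executed, one conservative pattern (sketched here with a hypothetical allowlist) is to parse the command yourself and bypass the shell entirely:

```python
import shlex
import subprocess

ALLOWED_COMMANDS = {"ls", "grep", "wc"}  # hypothetical allowlist

def run_generated_command(command_line: str) -> subprocess.CompletedProcess:
    """Run an LLM-generated command only if its program is allowlisted."""
    args = shlex.split(command_line)
    if not args or args[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"command not allowed: {command_line!r}")
    # Passing a list of args (no shell) means metacharacters such as
    # ; | && are treated as literal arguments, not interpreted.
    return subprocess.run(args, capture_output=True, text=True)
```

Note that an allowlisted program can still be abused through attacker-chosen arguments, so in practice the arguments need validation too.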

Let's consider a very basic example. An application generates SQL queries from instructions given in natural language by the user. It's a closed application used within an organization, built to simplify the process of writing long, complicated SQL statements. Now suppose an outside attacker gains access to the LLM and instructs the model to generate a SQL query that deletes all the records in the backend database. Unless the model output is sanitised before execution, that would be catastrophic for the organization.
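A minimal defence for that scenario, assuming a SQLite backend and treating the query generator as read-only, is to validate the generated statement before executing it. The helper below is a sketch; a real deployment would also rely on database-level permissions (e.g. a read-only database user):

```python
import re
import sqlite3

SELECT_ONLY = re.compile(r"^\s*SELECT\b", re.IGNORECASE)

def execute_generated_sql(conn, query: str):
    """Run an LLM-generated query only if it looks like a single SELECT."""
    # Reject multi-statement payloads like "SELECT 1; DROP TABLE x".
    if ";" in query.rstrip().rstrip(";"):
        raise ValueError("multiple statements are not allowed")
    if not SELECT_ONLY.match(query):
        raise ValueError("only SELECT statements are allowed")
    return conn.execute(query).fetchall()

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [(1, "Alice"), (2, "Bob")])

print(execute_generated_sql(conn, "SELECT name FROM employees"))
try:
    execute_generated_sql(conn, "DELETE FROM employees")
except ValueError as err:
    print("blocked:", err)
```

Keyword checks alone are easy to get wrong, so this should be layered with least-privilege database credentials rather than used on its own.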

