OWASP Top 10 for Large Language Model Applications, OWASP Foundation, 2023 - This authoritative industry guide lists the most significant security risks for LLM applications, with prompt injection as the primary vulnerability, offering practical mitigation guidance.
Security, LangChain Contributors, 2024 (LangChain) - The official LangChain documentation section provides practical guidance on security considerations, including detailed discussions and strategies for mitigating prompt injection within the LangChain framework.