AI Security Rules
AI-driven Security Rules allow you to process input data using AI agents (LLMs), leveraging their advanced natural language understanding capabilities to determine whether a request should be allowed.
Setup
- Ensure Sentinel is installed.
- Configure an AI Provider.
- Create a new Security Group.
How It Works
When AI Security Rules are evaluated, the configured AI provider (e.g., a language model) processes the input data and context (such as IP address, location, device type, etc.) and returns a `true` or `false` result indicating whether the requester is allowed to proceed.
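Conceptually, the model receives the rule's prompt together with the submitted data and context, and must answer with a bare boolean. The sketch below only illustrates this flow; the field names and exact internal format are assumptions, not a documented wire format:

```json
{
  "prompt": "Allow if the field 'message' contains meaningful text.",
  "body": { "message": "Hello, I'd like a quote for 50 units." },
  "context": { "countryCode": "DE", "device": "desktop" }
}
```

A response of `true` lets the request proceed; `false` blocks it.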
The prompt defines what the AI should evaluate. Keep the following in mind when writing prompts:
- AI Security Rules are only triggered on `POST` requests (e.g., `POST /v1/verify`, `POST /v1/classifier`).
- The AI must always return a boolean: `true` (allow) or `false` (deny).
- The `body` contains the submitted POST data and is accessible in the prompt (see the example after this list).
- The `context` contains additional information about the user (see below).
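For example, a form submission sent to `POST /v1/verify` might carry a body like the following, making each field addressable from the prompt. The field names here are made up for illustration:

```json
{
  "message": "I'm interested in the enterprise plan for our team.",
  "security_answer": "dolphin"
}
```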
Configuring AI Rules
- In your Security Group, switch to the Advanced view.
- Choose the Set rule type.
- Under Set, select AI Prompt and enter your desired prompt text.
Alternatively, you can define the rule using the JSON editor:
[ { "action": "set", "set": [ { "field": "ai", "value": "Allow if the field 'message' contains meaningful text." } ] }]
Example Prompts
- Allow only if the answer in the field xyz is correct.
- Deny if countryCode is not in Europe.
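The second prompt, for instance, could be configured using the same structure shown above:

```json
[
  {
    "action": "set",
    "set": [
      {
        "field": "ai",
        "value": "Deny if countryCode is not in Europe."
      }
    ]
  }
]
```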
Context
The `context` object provides additional metadata about the requester, which is available to the AI during evaluation. It can include built-in properties as well as custom attributes.
You can set the context using either URL-style parameters or JSON format:
[ { "action": "set", "set": [ { "field": "context", "value": "attribute1=value&attribute2=123" } ] }]
Examples:
- URL-style: `attribute1=value&attribute2=123`
- JSON: `{"attribute1": "value", "attribute2": 123}`
If multiple `set context` rules are applied, their values are merged. In case of duplicate keys, the most recent value overwrites the previous one.
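For example, assuming two rules that both set a hypothetical key `plan`, the later rule wins:

```json
[
  { "action": "set", "set": [ { "field": "context", "value": "plan=basic&region=eu" } ] },
  { "action": "set", "set": [ { "field": "context", "value": "plan=pro" } ] }
]
```

The merged context would be `{"plan": "pro", "region": "eu"}`.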
Context Override
Context override allows dynamic configuration of security prompts using a single verification page. You can override the `context` by passing a `context` parameter in the URL:

`?context={ENCRYPTED_CONTEXT_DATA}`
For more details, see the Context Override guide.
Built-in Context Fields
These are always available:
- `browser`: Browser name (e.g., Chrome, Firefox).
- `countryCode`: ISO country code based on the IP address or time zone.
- `device`: Device type (e.g., desktop, mobile, tablet, smartTV, wearable, bot).
- `penalty`: Assigned penalty score (0–10), if applicable.
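Put together with any custom attributes, the context available to the AI might look like this (all values are illustrative):

```json
{
  "browser": "Chrome",
  "countryCode": "DE",
  "device": "desktop",
  "penalty": 0,
  "attribute1": "value"
}
```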
Use Cases & Examples
Security Questions
Simple human-verification challenges:
- Name an animal that lives in the ocean.
- Name a color in a rainbow.
- What color is a banana?
These questions are easy for humans but hard for bots to answer correctly.
[ { "action": "set", "set": [ { "field": "context", "value": "security_question=Name an animal that lives in the ocean." }, { "field": "ai", "value": "Allow only if the field 'security_answer' contains a correct answer to the 'security_question'." } ] }]
Job Applications & Recruitment
Discourage bots and low-effort applications:
- In 1–2 sentences, explain why you’re a good fit for this job.
- What experience do you have with XYZ?
- Describe a challenge you faced in XYZ and how you solved it.
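Such questions follow the same pattern as the security-question example above. In this sketch, `motivation` is a hypothetical form field name:

```json
[
  {
    "action": "set",
    "set": [
      {
        "field": "ai",
        "value": "Allow only if the field 'motivation' contains 1-2 sentences that plausibly explain why the applicant is a good fit for the job."
      }
    ]
  }
]
```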
Event Registrations
Improve attendee quality and reduce no-shows:
- Why do you want to attend this event?
- What topic are you most interested in?
- How did you hear about this event?
Scholarship / Grant Applications
Encourage thoughtful responses from applicants:
- In your own words, describe how this funding will help you.
- What impact do you hope to make in XYZ?
High-Value Form Submissions
Protect forms such as sales demos and pricing requests:
- What problem are you trying to solve with our product?
- What’s your timeline for purchasing?
Usage with Redirects
AI Security Rules can be used in conjunction with Links and Redirects to protect the destination URL.
To enable AI Security Rules for redirects:
- Add a field to the redirect page that includes the security question (e.g., a `security_answer` field).
- Set the AI prompt as described above to evaluate the user's answer.
Once configured, users must provide a valid answer to the security question before being redirected to the destination.
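A minimal configuration for such a redirect could reuse the security-question pattern from above:

```json
[
  {
    "action": "set",
    "set": [
      {
        "field": "context",
        "value": "security_question=What color is a banana?"
      },
      {
        "field": "ai",
        "value": "Allow only if the field 'security_answer' contains a correct answer to the 'security_question'."
      }
    ]
  }
]
```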
Testing
Use the Classifier API endpoint to test your configuration. Submit sample `fields` and verify how the AI rule responds.
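For example, you could send a sample submission to `POST /v1/classifier` and inspect the verdict. The request body below assumes the form fields are submitted as top-level POST data; adjust it to match your actual form:

```json
{
  "security_answer": "dolphin"
}
```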
Prompt Injection
Prompt injection is a security risk where users manipulate the AI’s prompt or input to influence or override its behavior. While it is not technically feasible to prevent prompt injection entirely, Sentinel implements several mechanisms to minimize the risk:
- User input is passed as structured, JSON-encoded data, allowing for a clear separation between instructions and user data.
- The AI output is strictly validated to match a `true` or `false` value, preventing unexpected or malformed responses.
To reduce the risk of prompt injection:
- Avoid phrasing prompts as open-ended questions.
- Use strict, rule-based instructions, such as:
Allow only if the field 'security_answer' contains a correct answer to the 'security_question'.
Tips
- For best results, use large language models with at least 7B parameters. Smaller models may struggle with accurate evaluation.
- Online providers (e.g., ChatGPT, Google, Anthropic) may introduce latency. Expect evaluation to take up to several seconds.
- For better privacy and control, consider using self-hosted models like LLaMA.