Rainer Stropek | time cockpit
We focus on threat scenarios for specific AI-based software projects
Don't forget traditional software engineering and cloud security.
Your AI 🤖 is, to a certain degree, just another API/cloud app
New threats require new protective skills and tools.
Regularly revalidate prompts against up-to-date ground truth
Add context explicitly instead of relying on assumed model memory
Include example completions in prompts
Avoid overfitting to one model version, generalize prompts
Periodically retrain or refresh fine-tuned models
Monitor model outputs continuously for performance degradation
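The maintenance bullets above (revalidating prompts against up-to-date ground truth, monitoring outputs for degradation) can be sketched as a minimal regression harness. This is an illustrative assumption, not from the talk: `GroundTruthCase`, `revalidate`, and the `complete` callable (a stand-in for any model API) are hypothetical names.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GroundTruthCase:
    """One regression case: a prompt plus facts the answer must still contain."""
    prompt: str
    required_facts: list[str]

def revalidate(cases: list[GroundTruthCase],
               complete: Callable[[str], str]) -> dict[str, bool]:
    """Run each prompt through the model and check the answer against
    current ground truth. Returns prompt -> pass/fail, so failures can
    trigger prompt updates or a model refresh."""
    results = {}
    for case in cases:
        answer = complete(case.prompt).lower()
        results[case.prompt] = all(f.lower() in answer
                                   for f in case.required_facts)
    return results
```

Run on a schedule, the pass/fail map doubles as a simple degradation signal over time.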
Don’t replicate complex access control logic
Avoid duplicating intricate permission models from source systems
Error-prone and hard to maintain
Verify document access at source
Validate document permissions with the original system
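A minimal sketch of checking access at the source instead of replicating its permission model: `can_access(user_id, doc_id)` is a hypothetical stand-in for a call to the source system's own authorization API.

```python
from typing import Callable, Iterable

def filter_by_source_permissions(
        user_id: str,
        retrieved_doc_ids: Iterable[str],
        can_access: Callable[[str, str], bool]) -> list[str]:
    """Keep only documents the source system confirms the user may read.
    The permission decision stays in the source system; this code never
    duplicates its access control logic."""
    return [d for d in retrieved_doc_ids if can_access(user_id, d)]
```

The key design choice: the retrieval layer asks the system of record a yes/no question per document rather than mirroring roles, groups, or ACLs locally.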
Leverage source-native AI search APIs
Consider using built-in AI or semantic search features as AI tools
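As an illustration, a source system's built-in semantic search could be exposed to the model as a tool. The snippet below uses the OpenAI-style function-calling schema; the tool name `search_documents` and its parameters are made up for this sketch.

```python
# Hypothetical tool definition (OpenAI function-calling schema).
# The actual search is delegated to the source system's native
# semantic search, which runs with the calling user's permissions.
search_tool = {
    "type": "function",
    "function": {
        "name": "search_documents",
        "description": ("Semantic search over the document source system. "
                        "Executes with the calling user's permissions, so "
                        "results are pre-filtered by the source's ACLs."),
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string",
                          "description": "Natural-language search query"},
                "top": {"type": "integer",
                        "description": "Maximum number of results"},
            },
            "required": ["query"],
        },
    },
}
```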
Log and audit retrieval steps
Maintain traceability of what was shown and why
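One way to keep retrieval traceable is a structured audit log entry per retrieval step. A minimal sketch, assuming a JSON-lines log format; the logger name and field names are illustrative choices, not prescribed by the talk.

```python
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("rag.audit")

def log_retrieval(user_id: str, query: str,
                  doc_ids: list[str], reason: str) -> str:
    """Record what was retrieved, for whom, and why, as one structured
    JSON line, so every answer can be traced back to its sources."""
    entry = json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "query": query,
        "docs": doc_ids,
        "reason": reason,
    })
    audit_log.info(entry)
    return entry
```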
Include source metadata in prompts
Add citation data and metadata to provide transparency
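Including source metadata in the prompt can look like the following sketch, which formats retrieved chunks with citation markers the model can reference. The dictionary keys (`id`, `title`, `url`, `text`) are assumed for illustration.

```python
def build_context(docs: list[dict]) -> str:
    """Format retrieved chunks with citation metadata so the model can
    cite its sources and users can verify where an answer came from."""
    blocks = []
    for d in docs:
        blocks.append(f"[{d['id']}] {d['title']} ({d['url']})\n{d['text']}")
    return "\n\n".join(blocks)
```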
Add post-processing checks to filter out references to non-existent or unauthorized documents
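Such a post-processing check can be sketched as a citation verifier: extract citation markers from the model's answer and flag any that point to documents that don't exist or weren't in the authorized retrieval set. The `[doc-id]` marker format is an assumption tied to the context-building sketch, not a fixed convention.

```python
import re

def verify_citations(answer: str,
                     allowed_doc_ids: set[str]) -> tuple[bool, list[str]]:
    """Extract citation markers like [doc-3] from a model answer and
    return (ok, bad_citations), where bad_citations lists references to
    non-existent or unauthorized documents."""
    cited = re.findall(r"\[([^\[\]]+)\]", answer)
    bad = [c for c in cited if c not in allowed_doc_ids]
    return (len(bad) == 0, bad)
```

An answer that fails this check can be blocked, regenerated, or shown with the offending citations stripped.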