# 🛡️ Security
How TrialForge AI approaches security and responsible use.
## Platform principles
TrialForge AI is designed as a research and educational simulation platform. Security controls focus on protecting configuration secrets, minimising data retention, and avoiding unnecessary processing of sensitive information.
## Data handling posture
- No protected health information (PHI) is required or supported.
- Users are instructed not to enter real patient‑identifiable or confidential regulatory data.
- Protocol text is processed ephemerally to generate outputs and is not stored long‑term unless explicitly saved by the user.
- Aggregated, anonymised telemetry may be used to monitor stability and improve the product.
## Technical safeguards
- Secrets and API keys are stored in environment variables.
- Only server‑side components call external AI providers; keys are never exposed in client‑side code.
- Simulation endpoints include basic rate‑limiting and validation to reduce abuse.
- Logs are restricted to operational metadata and error context; full protocol content is excluded from logs wherever possible.
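To illustrate the safeguards above, here is a minimal sketch assuming a Python backend. All identifiers (`AI_PROVIDER_API_KEY`, `SlidingWindowRateLimiter`, `log_safe`) are hypothetical and do not come from the TrialForge AI codebase; they show the general pattern of server-side key handling, per-client rate limiting, and metadata-only logging, not the actual implementation.

```python
import os
import time
from collections import defaultdict, deque
from typing import Optional

def get_provider_key() -> str:
    """Read the AI provider key from the server environment.

    The key never appears in client-side code; only server-side
    components that call external AI providers read it.
    NOTE: the variable name is a placeholder, not the real one.
    """
    key = os.environ.get("AI_PROVIDER_API_KEY")
    if not key:
        raise RuntimeError("AI_PROVIDER_API_KEY is not configured")
    return key

class SlidingWindowRateLimiter:
    """Allow at most `limit` requests per `window_seconds` per client."""

    def __init__(self, limit: int = 10, window_seconds: float = 60.0):
        self.limit = limit
        self.window = window_seconds
        self._hits = defaultdict(deque)  # client_id -> timestamps

    def allow(self, client_id: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        hits = self._hits[client_id]
        # Drop timestamps that have aged out of the window.
        while hits and now - hits[0] > self.window:
            hits.popleft()
        if len(hits) >= self.limit:
            return False  # over the limit: reject the request
        hits.append(now)
        return True

def log_safe(event: str, protocol_text: str) -> dict:
    """Build a log record with operational metadata only.

    Records the event name and the size of the protocol text,
    never the protocol content itself.
    """
    return {"event": event, "protocol_chars": len(protocol_text)}
```

A sliding window is used here because it is simple and bounds burst traffic; a production service might instead use a token bucket or delegate rate limiting to an API gateway.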
## Third‑party services
TrialForge AI may rely on third‑party infrastructure and AI providers (for example, cloud hosting and model APIs). Each provider maintains its own security controls and certifications. Users should review those providers’ documentation for details.
## Reporting issues
If you believe you have identified a security vulnerability:
- Do not post details in public issues or forums.
- Email samoadeyemi@yahoo.co.uk with “Security Disclosure” in the subject line.
- Include enough information to reproduce the issue and assess its impact.