LLM Usage
This policy governs the use of large language models (LLMs) and AI-powered tools — including ChatGPT, Claude, Gemini, Copilot, and similar systems — while working on Folio projects.
Why This Policy Exists
Folio projects involve training and evaluating AI models. The value of this work depends entirely on the authenticity of human judgment. When trainers use AI tools to generate their responses, assessments, or annotations, the data produced is unreliable and potentially harmful to the models being trained.
Healthcare AI safety in particular requires the clinical reasoning of qualified professionals — not the output of another AI.
Permitted Uses
You may use LLMs and AI tools for:
General learning — Understanding new concepts, studying for Folio Learn courses, or researching background information
Administrative tasks — Drafting personal communications, organizing your schedule, or other tasks unrelated to project work
Reference — Quickly looking up general factual information (e.g., drug names, anatomy terms) that you then independently verify with your own expertise before applying it
Prohibited Uses
You may not use LLMs or AI tools to:
Generate annotation responses — Do not use AI to write, suggest, or guide your annotations, rankings, ratings, or evaluations on any project task
Complete assessments — Do not use AI to assist with any Folio qualification test or skills assessment
Produce written feedback — Do not use AI to draft rationale, explanations, or justifications that are submitted as your own clinical judgment
Automate task completion — Do not use scripts, bots, or AI agents to complete tasks on your behalf
Why Human Judgment Matters
Task responses submitted on Folio are used to train real healthcare AI systems that may be deployed in clinical settings. Submitting AI-generated responses as human feedback undermines the integrity of the training data and can cause AI models to develop unsafe behaviors — the opposite of what these projects aim to achieve.
Healthcare professionals are recruited for their clinical expertise. That expertise cannot be replaced or approximated by another AI model.
Enforcement
Folio monitors for patterns consistent with AI-generated submissions. Violations of this policy may result in:
Removal from one or more projects
Withholding of payment for affected work
Permanent suspension from the platform
If you are uncertain whether a specific use of an AI tool is permitted on your project, contact your project manager or Folio support before proceeding.