Safeguarding the AI Frontier: Forethought’s Approach to AI Security

Below are the controls that Forethought applies daily to address potential AI risks. For further details, please refer to the security documents, such as the “Forethought SupportGPT Security Whitepaper,” available on the Trust Report page.

1. Applying industry best standards and audits
Forethought adheres to industry-leading standards, such as ISO 27001, NIST CSF, and NIST 800-53, when establishing policies and implementing controls. We prioritize the security and compliance of our operations through annual SOC 2 and HIPAA audits, which provide independent evidence of our commitment to meeting and surpassing regulatory requirements.

2. Security assessments
For every release, Forethought performs extensive security checks, including peer code reviews and static and runtime analysis, to find security bugs. Additionally, Forethought maintains a bug bounty program through which top security researchers conduct frequent reviews of its applications and infrastructure. The objective is to identify security bugs or misconfigurations that could materially impact Forethought’s security controls.

3. Redaction of sensitive data elements
Forethought’s services operate effectively without requiring personal data. For all captured data, Forethought uses automation to redact sensitive data elements, such as Personally Identifiable Information (PII), Protected Health Information (PHI), and financial records (e.g., bank and credit card information), during ingestion in a secure environment. Once redaction is complete, the original data is securely deleted within 24 hours.
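As an illustrative sketch only (not Forethought’s actual pipeline), automated redaction of this kind can be approximated with pattern matching. The patterns below are deliberately simplified examples; a production system would rely on far more robust detection, such as named-entity recognition, before deleting the originals.

```python
import re

# Simplified example patterns for common sensitive data elements.
# These are illustrative assumptions, not an exhaustive or production-grade set.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each detected sensitive span with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# → Contact [EMAIL], SSN [SSN].
```

Replacing spans with typed placeholders (rather than deleting them outright) preserves document structure for downstream processing while removing the sensitive values themselves.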

4. Real-time reporting on security controls
Forethought maintains a Trust Report that demonstrates its alignment with security and compliance controls in real time. This information is readily available to our current and prospective customers.

5. Controls in place to mitigate bias and hallucinations
Forethought uses paraphrasing techniques (prompting) to generate responses grounded in specific data sets. This approach minimizes the likelihood of inaccurate or irrelevant responses (AI hallucinations), improving the accuracy and reliability of AI-generated output. All data used in our training, internal review, and online prediction processes undergoes comprehensive redaction to ensure that users’ privileged and personal information is protected at all times. As a result of these measures, our model is free of any intentional bias or discrimination with respect to user population, race, or gender.
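One common way to implement this kind of grounding, shown here as a hypothetical sketch rather than Forethought’s actual template, is to constrain the model to paraphrase only from retrieved passages and to decline when the context is insufficient:

```python
# Hypothetical grounding template; the function and wording are
# illustrative assumptions, not Forethought's actual implementation.
def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Constrain the model to answer only from the supplied passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, reply 'I don't know.'\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "How do I reset my password?",
    ["Passwords can be reset from Settings > Account > Reset password."],
)
```

Restricting the model to the retrieved context, and giving it an explicit refusal path, is what reduces the chance of fabricated answers when the knowledge base has no relevant material.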

6. Secure processing of aggregated, de-identified data for model building, aligned with privacy standards
By default, per Forethought’s Order Agreement and DPA, Forethought may use Customer Content and data related to Customer’s use of the Services that (i) does not specifically identify Customer, Users, or third parties, and (ii) is combined with the data of other customers, users, or additional data sources (“Aggregated Data”) for model training purposes. If required, any customer may opt out of model training upon request.