Security
Keeping your data and resulting insights secure is important to us. Below, we outline how we approach data security at Rima.
Should you have any questions related to data security, please contact us at gana@getrima.ai.
General overview
- All credentials you provide for connecting to applications are masked and stored securely.
- Your workspace data is isolated within our databases, so your data is never exposed to any other workspace.
- We leverage OpenAI models with appropriate moderation policies to ensure a safe and reliable chat experience.
- We back up your data to prevent data loss.
- We never use your data to train LLMs.
- We have implemented multi-factor authentication, requiring users to verify their identity through an additional step beyond the standard password. This extra layer helps protect user data by ensuring only authorized account owners can log in.
Certifications and third party assessments
We have completed our CASA (Cloud Application Security Assessment).
Infrastructure security
- OpenAI: We leverage multiple OpenAI models to generate AI responses and handle some data transformation. OpenAI LLMs get access to relevant data to enable answering your questions. We never pass any sensitive data to LLMs. We have a zero data retention agreement with OpenAI.
- Anthropic: We leverage Anthropic models to generate AI responses and handle some data transformation. Anthropic LLMs get access to relevant data to enable answering your questions. We never pass any sensitive data to LLMs. We have a zero data retention agreement with Anthropic.
- Mistral: We rely on some Mistral models to generate AI responses. Mistral LLMs get access to relevant data to enable answering your questions. No personally identifiable information (PII) reaches Mistral.
- PostHog: We use PostHog for some of our analytics data. Data stored includes session information and alerts.
- AWS: Our infrastructure is primarily hosted on AWS. All of our servers are in the US. Our infrastructure is designed with strict security isolation per environment. All ingress network traffic is controlled via strict security groups, with only HTTPS ingress allowed through a centralized ALB (a minimal illustration follows after this list). All servers are deployed in private subnets with least-privilege access managed via IAM roles. We enforce multi-factor authentication for all AWS users.
- Slack: We use Slack for internal communication. Slack has no access to your data.
- Stripe: We use Stripe for billing; Stripe gets access to payment data per your authorization.
- Google Workspace: We use Google Workspace to collaborate. No user data is shared with Google Workspace.
- Airbyte: We use Airbyte to reliably and securely ingest data from tools like QuickBooks and other third party systems, enabling seamless integration into our platform. Airbyte is designed with strong data security and tenant isolation in mind. User credentials are securely masked and stored, raw data is isolated in dedicated schemas per tenant, and all stored objects are prefixed with tenant-specific IDs to prevent cross-tenant access (see the naming sketch after this list).
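To make the HTTPS-only ingress policy mentioned in the AWS item concrete, here is a minimal, illustrative boto3 sketch of a security group that allows inbound traffic on port 443 only. The group name, VPC ID, and CIDR range are placeholders, not our actual configuration.

```python
# Illustrative only: an HTTPS-only ingress rule in the spirit of the policy above.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a security group intended for the public-facing load balancer.
sg = ec2.create_security_group(
    GroupName="alb-https-only",          # placeholder name
    Description="Allow HTTPS ingress only",
    VpcId="vpc-0123456789abcdef0",       # placeholder VPC
)

# Allow inbound TCP 443 only; all other ingress remains implicitly denied.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from the internet"}],
    }],
)
```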
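The tenant-prefixed naming scheme used for ingested data can be pictured with a short Python sketch. The helper names below are hypothetical and only illustrate the idea of deriving per-tenant schemas and object keys, and rejecting keys that do not carry the caller's tenant prefix.

```python
# Hypothetical helpers illustrating per-tenant isolation; the real implementation differs.
def tenant_schema(tenant_id: str) -> str:
    """Dedicated raw-data schema for one tenant, e.g. 'raw_acme_123'."""
    return f"raw_{tenant_id}"

def object_key(tenant_id: str, object_name: str) -> str:
    """Stored objects carry a tenant-specific prefix, e.g. 'acme_123/invoices.parquet'."""
    return f"{tenant_id}/{object_name}"

def assert_tenant_access(requesting_tenant: str, key: str) -> None:
    """Reject any read whose key is not prefixed with the caller's tenant ID."""
    if not key.startswith(f"{requesting_tenant}/"):
        raise PermissionError("cross-tenant access denied")
```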
AI Requests
To provide its features, Rima makes AI requests to our server infrastructure at both the data transformation and chat interaction layers. For example, when we ingest financial data from tools like QuickBooks, we use AI to intelligently map and normalize that source data into Rima's unified financial model. Additionally, when you ask questions in chat or request performance insights, we generate AI requests to help analyze your data and offer actionable recommendations to improve profitability. These AI requests may include structured financial data, metadata, and contextual cues derived from your integrations and usage. All requests are processed on our secure backend, hosted on AWS, using LiteLLM to manage prompt routing, graceful retries, and fallback mechanisms. Depending on the task, requests are forwarded to trusted language model providers (such as OpenAI or Anthropic). Even if you configure your own API keys, all model interactions currently pass through our infrastructure to ensure consistent prompt construction and model behavior. We have a zero data retention policy. This means your data is not retained on any public LLM servers.
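As a rough picture of the routing layer described above, here is a minimal LiteLLM sketch with retries and a fallback model. The model names, the aliases "primary" and "fallback", and the example prompt are placeholders; this is not our production configuration.

```python
# A minimal sketch of prompt routing with retries and fallbacks via LiteLLM.
import litellm

router = litellm.Router(
    model_list=[
        {"model_name": "primary",  "litellm_params": {"model": "gpt-4o"}},
        {"model_name": "fallback", "litellm_params": {"model": "claude-3-5-sonnet-20240620"}},
    ],
    fallbacks=[{"primary": ["fallback"]}],  # if "primary" fails, retry on "fallback"
    num_retries=2,                          # graceful retries before falling back
)

response = router.completion(
    model="primary",
    messages=[{"role": "user", "content": "Summarize last month's gross margin."}],
)
print(response.choices[0].message.content)
```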
Account deletion
You can delete your account at any time in settings by clicking the delete account button. You will receive an email notification that you need to confirm, and your account will then be deleted. We guarantee that all your data will be deleted from our systems within 30 days. This takes time because some cloud systems keep backups, so deletion is not instant.
Vulnerability Disclosures
If you believe you have found a vulnerability in Rima, please submit a report to gana@getrima.ai. We commit to acknowledging vulnerability reports as soon as possible and addressing them as soon as we are able. Critical incidents will be communicated via email to all users.