Security & Data Protection
Your data. In Canada.
Encrypted. Segmented. Protected.
Security is not a feature we add at the end. It is the foundation we build from. Every platform we deliver is designed with multiple, independent layers of protection — so that no single failure can expose your data.
Risk reduction achieved through our security architecture
99.9%
Network interception risk reduced
99.7%
Database breach risk reduced
99.8%
Backup theft risk reduced
99%
Cross-tenant leak risk reduced
97%
Account takeover risk reduced
95%
Insider threat risk reduced
Risk reduction estimates are based on documented threat vectors and the mitigating controls in our standard security architecture. Actual outcomes depend on implementation scope and threat model.
Encryption
Four independent layers of encryption
We do not rely on a single encryption mechanism. Each layer is independent — meaning that a failure or compromise of one layer does not expose data protected by the others. This defence-in-depth approach is what separates serious security architecture from checkbox compliance.
Transport Encryption — TLS 1.3
Data in motion
256-bit
key strength
Every byte of data travelling between your users and our servers is encrypted using TLS 1.3 — the most current and secure version of the Transport Layer Security protocol. TLS 1.3 eliminates legacy cipher suites, requires forward secrecy on every connection, and reduces handshake latency compared to older versions. This means that even if a network connection is intercepted, decrypting the captured traffic is computationally infeasible. We enforce HTTPS across all endpoints with HSTS headers and reject downgrade attempts to older protocols.
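The TLS 1.3 floor described above takes only a few lines to express. This Python sketch (assuming Python 3.7+) builds a client context that refuses to negotiate anything older than TLS 1.3, so a downgrade attempt fails the handshake instead of silently succeeding:

```python
import ssl

# Client context that refuses anything older than TLS 1.3.
# Connections to servers offering only TLS 1.2 or below fail the
# handshake rather than silently downgrading.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

# Certificate verification and hostname checking stay on (the defaults),
# so a man-in-the-middle presenting a forged certificate is rejected.
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
```

Server-side, the same `minimum_version` setting on a `PROTOCOL_TLS_SERVER` context enforces the floor for inbound connections.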
Storage Encryption — AES-256
Data at rest
AES-256
military-grade standard
All data stored in our databases, file systems, and backups is encrypted using AES-256 — the Advanced Encryption Standard with a 256-bit key, the same standard used by governments and military organizations worldwide. This encryption is applied at the volume level (disk encryption) and at the field level for particularly sensitive data such as personally identifiable information (PII), health records, and legal documents. Encryption keys are managed separately from data using AWS Key Management Service (KMS), with automatic key rotation on a defined schedule.
Application-Level Encryption
Sensitive field isolation
3× encrypted
for sensitive fields
For platforms handling health data, legal records, or financial information, we apply an additional layer of application-level encryption on specific sensitive fields before they ever reach the database. This means even a database administrator with direct access to the underlying storage cannot read sensitive fields in plaintext. Each field is encrypted with a unique key derived from a combination of the user identity and a server-side secret, so no single key unlocks more than one user's data. This creates cryptographic isolation between users and between data categories.
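The per-field key scheme can be sketched with standard-library HMAC. This is a minimal illustration, not our exact implementation: a production system would use a vetted KDF such as HKDF, an AEAD cipher for the actual encryption, and a server secret held in a KMS rather than in code. All names here are illustrative.

```python
import hashlib
import hmac

def derive_field_key(server_secret: bytes, user_id: str, field: str) -> bytes:
    """Derive a unique 256-bit key for one user's one field.

    Knowing the key for ('user-a', 'health_record') reveals nothing
    about the key for any other user or any other field.
    """
    info = f"{user_id}:{field}".encode()
    return hmac.new(server_secret, info, hashlib.sha256).digest()

# Stand-in for a secret fetched from a KMS, never stored with the data.
secret = b"illustrative-server-side-secret"

k1 = derive_field_key(secret, "user-a", "health_record")
k2 = derive_field_key(secret, "user-b", "health_record")
k3 = derive_field_key(secret, "user-a", "ssn")

# Every (user, field) pair gets a distinct key, so compromise of one key
# does not expose any other user's data or any other field.
assert len({k1, k2, k3}) == 3
```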
Backup Encryption
Data in cold storage
100%
of backups encrypted
Backups are often the weakest link in a security architecture. We treat backup data with the same seriousness as live data. All database snapshots and file backups are encrypted before being written to cold storage using AES-256. Backup archives are stored in geographically separate Canadian regions — never outside Canada — with access restricted to dedicated backup service accounts. Backups are tested regularly to confirm they can be restored and that their encryption is intact.
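The restore-testing step can be sketched as a fingerprint check: record a digest when the backup is taken, then confirm the restored bytes match it. This toy example stands in for the real pipeline (the decrypt/download step is simulated); only the pattern is the point.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest recorded alongside the encrypted archive."""
    return hashlib.sha256(data).hexdigest()

snapshot = b"...serialized database snapshot..."
recorded = fingerprint(snapshot)          # stored with backup metadata

# Later, during a scheduled restore test: decrypt the archive and confirm
# the restored bytes match the fingerprint taken at backup time.
restored = snapshot                        # stand-in for decrypt(download(archive))
assert fingerprint(restored) == recorded   # restore succeeded, content intact
```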
Data Segmentation
Six dimensions of data isolation
Encryption protects data that is intercepted or stolen. Segmentation limits how much data can be exposed in the first place — and who can reach it. These two strategies work together to create a security architecture where the blast radius of any incident is tightly contained.
Physical Segmentation — Canadian Servers Only
All data is stored exclusively on AWS ca-central-1 infrastructure located in Montreal, Canada. We do not use US-based, EU-based, or any non-Canadian cloud regions. Data is never replicated outside Canadian borders. For government and health projects, we can further restrict storage to specific availability zones within Montreal on request.
Logical Segmentation — Tenant Isolation
In multi-tenant environments, each organization's data is logically isolated at the database schema level. Each tenant has a unique schema identifier and all queries are scoped with row-level security policies enforced at the database engine level — not just at the application layer. A bug in the application layer cannot expose one tenant's data to another.
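As a simplified stand-in for PostgreSQL row-level security (not an implementation of it), an SQLite view can illustrate the principle: the application is only ever handed a database-enforced scope, so even an over-broad query cannot reach another tenant's rows. Table and tenant names are hypothetical.

```python
import sqlite3

# SQLite view playing the role of a PostgreSQL RLS policy: the boundary
# lives in the database, not in application code.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE records (tenant TEXT NOT NULL, payload TEXT NOT NULL);
    INSERT INTO records VALUES ('acme', 'acme-data'), ('globex', 'globex-data');
    -- The application is only granted access through this scoped view.
    CREATE VIEW acme_records AS
        SELECT payload FROM records WHERE tenant = 'acme';
""")

rows = db.execute("SELECT payload FROM acme_records").fetchall()
# Even a buggy "select everything" through the view sees only acme's rows.
assert rows == [("acme-data",)]
```

In PostgreSQL the equivalent is `CREATE POLICY ... USING (tenant = current_setting('app.tenant'))` with `ALTER TABLE ... ENABLE ROW LEVEL SECURITY`, checked by the engine on every query.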
Network Segmentation — VPC Isolation
Application servers, database servers, and backup systems live in separate Virtual Private Clouds (VPCs) with strictly controlled ingress and egress rules. Database servers have no public internet access — they can only receive connections from application servers within the same VPC over a private network. This eliminates an entire class of attacks where databases are directly exposed to the internet.
Role-Based Data Segmentation
Within a single organization, data access is segmented by role. A support agent cannot see financial records. A clinician cannot access another patient's data. An analyst dashboard shows aggregated, anonymized data — not individual records. These rules are enforced at the database level using row-level security policies — not just at the UI level — so they cannot be bypassed through direct API calls.
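The role rules above can be sketched as a deny-by-default permission check. Role and permission names here are purely illustrative, and this is the application-side view; as described, the same rules are mirrored in database row-level security policies so they hold even for direct API calls.

```python
# Hypothetical role-to-permission map; every name is illustrative only.
PERMISSIONS = {
    "support_agent": {"tickets:read", "tickets:write"},
    "clinician":     {"patients:read:own", "notes:write"},
    "analyst":       {"reports:read:aggregated"},
}

def can(role: str, permission: str) -> bool:
    """Deny by default: any permission absent from the map is refused."""
    return permission in PERMISSIONS.get(role, set())

assert can("support_agent", "tickets:read")
assert not can("support_agent", "financial:read")   # segmented by role
assert not can("analyst", "patients:read:own")      # aggregates only
assert not can("unknown_role", "tickets:read")      # unknown roles get nothing
```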
Environment Segmentation — Prod / Staging / Dev
Production data never touches development or staging environments. Test and staging environments use synthetically generated data that has no relationship to real user records. Access to production systems is restricted to a minimal set of authorized engineers, requires multi-factor authentication, is logged, and triggers automated alerts for unusual access patterns.
Audit Log Segmentation
Audit logs are written to a separate, append-only log store that application systems cannot modify or delete. This means that even if an application server is compromised, the attacker cannot cover their tracks by tampering with logs. Logs capture every authentication event, data access, data modification, and administrative action with timestamps, IP addresses, and user identities.
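One way to make a log tamper-evident is a hash chain, where each record commits to the digest of its predecessor. This is an illustrative pattern, not necessarily the exact mechanism of our log store:

```python
import hashlib
import json

def append(log: list, entry: dict) -> None:
    """Append a record that commits to its predecessor's hash.
    Rewriting any past record breaks every hash after it."""
    prev = log[-1]["hash"] if log else "genesis"
    body = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev, "hash": digest})

def verify(log: list) -> bool:
    """Recompute the chain; any tampering surfaces as a mismatch."""
    prev = "genesis"
    for rec in log:
        body = json.dumps(rec["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append(log, {"event": "login", "user": "u1", "ip": "10.0.0.5"})
append(log, {"event": "read", "user": "u1", "resource": "record/42"})
assert verify(log)

# An attacker rewriting history is detected: the chain no longer verifies.
log[0]["entry"]["user"] = "someone-else"
assert not verify(log)
```

Storing the chain in an append-only store that application servers cannot write to, as described above, prevents tampering in the first place; the chain makes any tampering that does occur detectable.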
Threat Model
Common threats and how we address each one
Most data breaches follow predictable patterns. We design our security architecture specifically to address the most common and most damaging attack vectors — not as a theoretical exercise, but as a concrete engineering requirement for every platform we build.
Man-in-the-Middle Attack
Without our controls
Attacker intercepts unencrypted network traffic and reads or modifies data in transit.
With our architecture
TLS 1.3 with forward secrecy makes interception computationally infeasible. HSTS prevents protocol downgrade attacks.
Database Breach
Without our controls
Attacker gains access to database and reads all stored data in plaintext.
With our architecture
AES-256 storage encryption means stolen database files are unreadable without encryption keys. Application-level encryption adds a second barrier for sensitive fields.
Insider Threat
Without our controls
Malicious employee accesses and exfiltrates sensitive user data.
With our architecture
Role-based access control limits data visibility. Audit logs capture all access. Field-level encryption means even DBAs cannot read sensitive fields in plaintext.
Cross-Tenant Data Leak
Without our controls
Bug in multi-tenant application exposes one customer's data to another.
With our architecture
Database-level row security policies enforce tenant isolation independently of application code. A bug cannot bypass database-enforced rules.
Compromised Backup
Without our controls
Attacker steals backup archives and extracts sensitive historical data.
With our architecture
All backups are AES-256 encrypted before storage. Backup storage is access-restricted and stored only in Canada.
Credential Stuffing / Account Takeover
Without our controls
Attacker uses leaked passwords from other breaches to access user accounts.
With our architecture
Multi-factor authentication required. Rate limiting on login attempts. Passwords stored as salted bcrypt hashes — never in plaintext or reversible form.
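The copy above specifies bcrypt; since bcrypt is not in the Python standard library, this sketch uses PBKDF2 purely to illustrate the same salted, one-way pattern: a unique salt per password, a deliberately slow KDF, and a constant-time comparison.

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)  # unique salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("hunter2", salt, digest)

# Same password, fresh salt: hashes never match across accounts, so a hash
# leaked from another breach is useless for credential stuffing.
salt2, digest2 = hash_password("correct horse battery staple")
assert digest != digest2
```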
Architecture
Zero-Trust by design, not by retrofit
Traditional network security operates on a perimeter model: trust everything inside the network, distrust everything outside. This model fails the moment an attacker gets inside — through a phishing email, a compromised credential, or a rogue device.
Zero-trust inverts this assumption. Every request — regardless of where it originates — is treated as potentially hostile until it can be verified. This applies to users, services, APIs, and even internal systems communicating with each other.
We implement zero-trust from the first line of architecture. It is not a product we bolt on — it is a set of principles that govern how every component is designed, how services authenticate to each other, and how access decisions are made in real time.
Verify Explicitly
Every request is authenticated and authorized based on all available signals: identity, device health, location, time of day, and request context. No implicit trust is granted based on network location alone.
Least Privilege Access
Every user, service, and system component receives the minimum permissions required to perform its function — and no more. Permissions are reviewed quarterly and revoked immediately when no longer needed.
Assume Breach
We design systems assuming that a breach has already occurred somewhere. This means encrypting data even on internal networks, logging all internal service calls, and isolating systems so a compromise in one component cannot spread laterally.
Continuous Verification
Authentication is not a one-time event at login. Sessions are re-verified continuously. Anomalous behaviour — unusual login times, unexpected data access volumes, new device or location — triggers automatic step-up authentication.
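A minimal sketch of a step-up trigger follows; the signals and the threshold are wholly illustrative assumptions, shown only to make the "re-verify on every request" idea concrete.

```python
# Illustrative per-user baselines; a real system learns these continuously.
KNOWN_DEVICES = {"u1": {"laptop-a"}}
USUAL_COUNTRIES = {"u1": {"CA"}}

def requires_step_up(user: str, device: str, country: str, rows_read: int) -> bool:
    """Re-evaluate the session on every request, not just at login."""
    return (
        device not in KNOWN_DEVICES.get(user, set())        # new device
        or country not in USUAL_COUNTRIES.get(user, set())  # new location
        or rows_read > 1_000                                # unusual data volume
    )

assert not requires_step_up("u1", "laptop-a", "CA", rows_read=12)
assert requires_step_up("u1", "laptop-a", "CA", rows_read=50_000)  # volume spike
assert requires_step_up("u1", "phone-x", "CA", rows_read=12)       # new device
```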
Compliance
Built to Canadian standards
Compliance is the minimum bar, not the ceiling. We meet the required standards as a baseline — and go beyond them wherever practical, because the goal is actual data protection, not just documentation.
PIPEDA
Canada's federal privacy law governing how private sector organizations collect, use, and disclose personal information. We design data flows to minimize collection, obtain meaningful consent, and provide individuals with access and correction rights.
PHIPA
Ontario's health-specific privacy legislation for platforms handling health records. We implement the additional safeguards PHIPA requires for health information custodians and agents.
WCAG AA
Accessibility standard required for government and public-sector platforms. All interfaces are tested with assistive technology and meet AA-level conformance.
SOC 2 Type II
Our infrastructure and processes are designed to meet SOC 2 Type II criteria for security, availability, and confidentiality. A formal audit is available on request for enterprise clients.
Our Commitment
What we will never do
We will never store your data outside Canada.
This is a hard architectural constraint, not a policy preference. Our infrastructure is configured to prevent data from leaving Canadian regions. Cross-border data transfers are technically blocked, not just contractually prohibited.
We will never sell or share your data with third parties.
Your data is used exclusively to provide the services you have contracted us to deliver. It is not used for analytics, training AI models, or sharing with partners. We do not have advertising relationships that involve user data.
We will never use production data in development or testing.
Development and staging environments use synthetically generated data. No real user records, health data, legal files, or personally identifiable information ever touches a non-production system.
We will never delay breach notification.
In the event of a confirmed data breach, we notify affected organizations within 72 hours, faster than PIPEDA's requirement to report breaches as soon as feasible, with a full incident report, containment steps taken, and a timeline of events. No waiting to see if the problem resolves itself.
Frequently asked questions
Where exactly is our data stored?
All data is stored on AWS ca-central-1 infrastructure in Montreal, Canada. We do not use any non-Canadian regions, and data is never replicated outside Canadian borders. This applies to primary databases, read replicas, backups, and log archives. For organizations that require additional specificity, we can restrict storage to a single availability zone within Montreal.
What happens to our data if we stop using your platform?
Upon contract termination, we provide a complete data export in a standard format (JSON, CSV, or SQL dump depending on the data type) within 30 days. After export is confirmed, all data — including backups, logs, and cached copies — is securely deleted from our infrastructure within 60 days. We provide a written confirmation of deletion.
Can your employees read our data?
Access to production systems is restricted to a minimal set of engineers and requires multi-factor authentication and a documented business reason. All access is logged and reviewed. For sensitive fields (health data, legal records, financial information), application-level encryption means even engineers with database access cannot read data in plaintext without the corresponding decryption keys, which are managed separately.
How do you handle a data breach if one occurs?
We have a documented incident response plan. In the event of a confirmed breach, we notify affected organizations within 72 hours, exceeding PIPEDA's requirement to report breaches as soon as feasible. We provide a detailed incident report, take immediate containment steps, and conduct a post-incident review. We carry cyber liability insurance and maintain relationships with incident response specialists.
Do you conduct penetration testing?
Yes — we conduct penetration testing before the launch of every new platform and on an annual basis for existing platforms. We also run automated vulnerability scanning continuously using tools integrated into our CI/CD pipeline. Critical vulnerabilities are remediated within 24 hours; high-severity issues within 7 days.
Can we request a security audit or review your policies?
Yes — enterprise clients can request a security review session, access to our security policy documentation, and a summary of our most recent penetration test findings. We do not share full penetration test reports publicly, but we discuss findings and remediation status in a documented session with your security team.
How are encryption keys managed and rotated?
Encryption keys are managed using AWS Key Management Service (KMS) and are never stored alongside the data they protect. Master keys are rotated annually by default, with more frequent rotation available on request. Key access is logged, and no single engineer has unilateral access to production encryption keys — key operations require approval from a second authorized party.
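The two-person rule for key operations can be sketched as a simple gate; the names are illustrative, and in practice the approval flow runs through KMS key policies and access logging rather than application code.

```python
# Hypothetical set of engineers authorized for key operations.
AUTHORIZED = {"alice", "bob", "carol"}

def approve_key_operation(requester: str, approver: str) -> bool:
    """A key operation proceeds only with two distinct authorized parties."""
    return (
        requester in AUTHORIZED
        and approver in AUTHORIZED
        and requester != approver   # no unilateral access
    )

assert approve_key_operation("alice", "bob")
assert not approve_key_operation("alice", "alice")   # self-approval blocked
assert not approve_key_operation("mallory", "bob")   # unauthorized requester
```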
Have specific security requirements?
We are happy to walk through our security architecture in detail, answer specific technical questions, or tailor our approach to your organization's threat model and compliance requirements.
Talk to Us About Security