Data we store on our servers in the cloud

What data do we have access to?

When you install a Jira Cloud add-on, the add-on can request certain 'scopes' of access. In Automation for Jira, we require the READ, WRITE and ADMIN scopes. This means the add-on is granted access to all of Jira's REST APIs marked with these permissions on this page: jira rest scopes.

We require this access to do things like adding comments to issues, editing issues, etc.

When a cloud add-on is installed, we store a public key and a shared secret in our database. We store these so that our add-on can make authenticated requests to your Jira instance, as well as receive authenticated requests from it. This is standard for any Atlassian Connect add-on for Jira Cloud.

Full disclosure here - we can use the public key and shared secret to manually make authenticated REST calls to any of the REST APIs mentioned on the page linked above. However, we have only done this in rare circumstances, when we needed extra information to debug a tricky support problem. We ask for your authorisation before performing these requests, and you can revoke these rights at any time simply by disabling or uninstalling our add-on in your Jira instance.
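To illustrate the mechanism, here is a minimal sketch of how an Atlassian Connect add-on can sign a request with its shared secret (an HS256 JWT carrying a query string hash). The add-on key and path below are illustrative, and a real add-on would normally use a maintained JWT library rather than hand-rolling this:

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as required by the JWT spec."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def create_jwt(shared_secret: str, addon_key: str, method: str, path: str) -> str:
    """Build an HS256 JWT for an Atlassian Connect request (sketch).

    The 'qsh' claim is a SHA-256 of the canonical request string
    METHOD&path&canonical-query (empty query here for simplicity).
    """
    canonical_request = f"{method.upper()}&{path}&"
    qsh = hashlib.sha256(canonical_request.encode()).hexdigest()

    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    now = int(time.time())
    claims = b64url(json.dumps({
        "iss": addon_key,        # the add-on identifies itself as issuer
        "iat": now,              # issued at
        "exp": now + 180,        # short-lived token
        "qsh": qsh,              # binds the token to this specific request
    }).encode())

    signing_input = f"{header}.{claims}".encode()
    signature = b64url(hmac.new(shared_secret.encode(),
                                signing_input, hashlib.sha256).digest())
    return f"{header}.{claims}.{signature}"
```

The resulting token is sent in an `Authorization: JWT <token>` header; Jira verifies the signature with the same shared secret, which is why disabling or uninstalling the add-on (and thus invalidating the secret) revokes this access.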

What data do we store in our database?

We try to store as little identifying information about your data (issues, projects, etc.) as possible in our database. Things we do store:

  • Rule config information:
    • Rule name and description
    • Rule component config information (e.g. JQL strings used in triggers that could contain project keys etc)
  • Audit log entries:
    • We store what you see in the audit log UI: issue keys and IDs, as well as the changes to the issue shown on the left-hand side of the audit log
  • Issue details: We store full issue details for the lifetime of a rule execution. This makes the rule execution queue more fault-tolerant.
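The last point can be sketched as follows. This is an illustrative in-memory model of the pattern (persist full issue details only while a rule execution is pending, then delete them), not our actual implementation; all names are hypothetical:

```python
import json
import uuid


class RuleExecutionQueue:
    """Sketch: issue details live only for the lifetime of a rule execution."""

    def __init__(self):
        # execution id -> serialized issue payload
        self._pending = {}

    def enqueue(self, issue_payload: dict) -> str:
        """Persist the full issue snapshot so a crashed worker can retry."""
        execution_id = str(uuid.uuid4())
        self._pending[execution_id] = json.dumps(issue_payload)
        return execution_id

    def execute(self, execution_id: str, rule) -> None:
        """Run the rule against the stored snapshot.

        If the rule raises, the payload stays queued for a retry;
        on success the issue details are deleted immediately.
        """
        payload = json.loads(self._pending[execution_id])
        rule(payload)
        del self._pending[execution_id]
```

Keeping the snapshot until the execution succeeds is what makes the queue fault-tolerant; deleting it right afterwards is what keeps issue data out of long-term storage.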

We also collect Google Analytics data to better understand how our users use the front-end, so that we can build better features. We do not include identifying information in these analytics (such as issue data, config data, etc.).

My co-founder and I each worked at Atlassian for 10 years before launching Code Barrel, and we take customer privacy and security seriously. We believe in full transparency and, as far as we are concerned, your data is yours: we do not share it with any third parties (unless we are legally obligated to do so - however, this case has not arisen yet).

For more details, please see the data privacy policy.

Data Security

Below is a copy of our cloud security self-assessment for Atlassian, dated 17/Oct/18.

1a. Customer Data

We store:

  • Rule configurations
  • Audit logs
  • Tenant info (JWT secrets etc)

We try to avoid storing identifying data as much as possible. For example, we don't persist issue data beyond the lifetime of a rule execution.

We take protection of this data seriously. Data is stored on an AWS RDS instance in a private subnet in a VPC. Access is only possible for application code running in the same private subnets, or via SSH jumpboxes in this VPC. Application database logins and the logins used by Code Barrel staff are separate (using strong passwords stored in 1Password). We never store usernames and/or passwords in application code. SSH keys for the jumpboxes are also a heavily guarded secret, and the boxes are protected with fail2ban.

1b. Customer Data

Data is hosted in the us-east-1 AWS region in the US.

2. Sensitive Data

We don't store much 'sensitive' data.

What we do store lives in only one place (the aforementioned AWS RDS production database), where access is heavily guarded and only possible from application code or via SSH jumpboxes to which only select Code Barrel staff have access.

3. Security Policy

Our EULA covers data security and privacy concerns. Please see section '3. Data':

We also publish our privacy policy on our website:

4. Release Management

We follow an agile development process:

  • All bugs/stories/tasks are tracked in Jira
  • We use a public issue tracker
  • All code changes are made on branches in source control and can only be merged to master after detailed review in a pull request
  • Automated tests are executed for code changes and deployments
  • Further manual tests are carried out when releasing
  • Releasing to Cloud is automated via a secure CI server (SemaphoreCI)

5. Audits

Our main mechanism is pull request reviews. All code is reviewed by at least 3 other experienced developers on our team before merging.

Checking for security vulnerabilities such as the following is standard practice as part of these reviews:

  • XSS
  • XSRF
  • Other injection attacks
  • Permission issues
  • JWT validation issues

We also often carry out manual exploratory testing of individual features for security vulnerabilities.

6. Accreditation


7. Penetration Testing


8. Notifying Atlassian

Yes. We have not encountered any security breaches yet; however, should one happen, we would:

  • Treat this with the highest priority
  • Immediately notify Atlassian via Marketplace
  • Provide open and frequent updates to Atlassian about progress with fixes

We are mostly ex-Atlassians, and we follow the 'Open Company, No Bullshit' approach when it comes to security (and other incidents).

9. Employee Access

There are 8 employees in total. Our developers each have more than 10 years' experience developing Atlassian tools and apps, and as a result we all take great care when accessing customer data.

We control access via 1Password shared vaults; access can be revoked on a per-employee basis. In an emergency, we can also disable access to production data completely by shutting down our SSH jumpboxes, or revoke individual SSH keys.

10. Confidentiality

All employees sign an employment contract that includes strong confidentiality clauses, covering customer data and privacy.

11. Managing Security Vulnerabilities

If we were to find a security vulnerability, our process is as follows:

  • Assess the impact and severity of the vulnerability
  • We'd raise the vulnerability in a private Jira issue, normally of the highest priority (depending on impact and severity)
  • We'd immediately schedule this issue for release. We release software very often (several times a day to cloud and every 1-2 days for server)
  • Fixes would have to pass through pull request reviews as well as manual testing
  • Once a fix has been released, we'd:
    • Disclose the vulnerability publicly on our blog and prompt users to upgrade
    • Make the original issue public

12. Disaster Recovery

To prevent loss of data in our AWS RDS instance, we have the following in place:

  • Live read-replica database in a different availability zone in the same region
  • Daily snapshots of the database stored in S3 (which has very high data redundancy)

The live read-replica fail-over has been tested (and no data was lost).

13. Data Recovery

We have daily backups stored via AWS RDS snapshots. Recovering from one of these would only become necessary if both the primary and read-replica (in a separate AZ) RDS instances failed/were lost.

If this were to happen, a particular customer could lose at most 24 hours of data. Even so, due to the nature of our app 'Automation for Jira', the loss would mostly consist of audit log data. Rule configuration data isn't edited that often, so losses would be fairly small.

Customers can also export their configured rules to JSON and re-import them, giving them a way to back up rule configuration data themselves as well.
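As a sketch of what such a self-serve backup might look like, the snippet below saves an exported rules JSON document to a timestamped file. The helper name and the shape of the export are assumptions for illustration, not part of the product:

```python
import json
import time
from pathlib import Path


def backup_exported_rules(export_json: str, backup_dir: str) -> Path:
    """Save an exported rules JSON document to a timestamped file.

    Illustrative only: parses the export first so a corrupt file
    fails fast instead of silently overwriting a good backup.
    """
    rules = json.loads(export_json)  # raises ValueError on corrupt JSON

    target = Path(backup_dir)
    target.mkdir(parents=True, exist_ok=True)

    stamp = time.strftime("%Y%m%d-%H%M%S")
    path = target / f"automation-rules-{stamp}.json"
    path.write_text(json.dumps(rules, indent=2))
    return path
```

Running something like this on a schedule gives customers a rule-configuration backup that is independent of our own RDS snapshots.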