This is an archive of our API documentation. Our new API docs can be found at https://developer.statuspage.io
Full API and Integration Documentation

Postmortems

Postmortems are a great way to post information about an incident after it has passed. They generally contain detailed information about the events that caused the incident, the mitigation steps taken to stop it, and follow-up work that has occurred or will occur to ensure similar incidents do not happen in the future. After a postmortem is authored it can be sent to customers via email notifications or Twitter, and will be shown on the incident details page.

FORMATTING

We support GitHub Flavored Markdown formatting for Postmortems, but will strip any HTML deemed potentially harmful to users. For best results, we recommend you stay within the bounds of defined GFM functionality.
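
For example, a multi-line GFM draft can be submitted through the update endpoint documented below. The following is a minimal sketch rather than an official sample: it assumes a bash-compatible shell, uses placeholder IDs and API key, and relies on curl's --data-urlencode so newlines and markdown characters survive form encoding.

  # Minimal sketch (assumes bash; [page_id], [incident_id], and [api_key] are placeholders):
  # build a GFM-formatted draft with real newlines using ANSI-C quoting.
  BODY=$'##### Issue\n\nA short description of what happened.\n\n##### Resolution\n\nWhat was done to resolve it.'

  curl https://api.statuspage.io/v1/pages/[page_id]/incidents/[incident_id]/postmortem.json \
    -H "Authorization: OAuth [api_key]" \
    -X PUT \
    --data-urlencode "postmortem[body_draft]=$BODY"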

Get the associated Postmortem
ENDPOINT
  GET /pages/[page_id]/incidents/[incident_id]/postmortem.json

SAMPLE CALL
  curl https://api.statuspage.io/v1/pages/hmzvdmpfxkjl/incidents/3fv9c1rdhbw2/postmortem.json \
    -H "Authorization: OAuth 2a7b9d4aac30956d537ac76850f4d78de30994703680056cc103862d53cf8074"

RESPONSE CODES
  200 - Successful call
  401 - Could not authenticate
  404 - Requested Postmortem could not be found

SAMPLE RESPONSE
  {
    "created_at": "2018-06-21T10:32:33-06:00",
    "preview_key": "8319gdas37s2",
    "body": "##### Issue\r\n\r\nAt approximately 17:02 UTC on 2018-06-21, our master database server unexpectedly went unresponsive to all network traffic. A reboot of the machine at 17:05 UTC resulted in a failed mount of a corrupted EBS volume, and we made the decision at that time to fail over the slave database.\r\n\r\n##### Resolution\r\n\r\nAt 17:12 UTC, the slave database had been successfully promoted to master and the application recovered enough to accept web traffic again. A new slave database node was created and placed into the rotation to guard against future master failures. The promoted slave database performed slowly for the next couple of hours as the query cache began to warm up, and eventually settled into a reasonable performance profile around 20:00 UTC.\r\n\r\n##### Future Mitigation Plans\r\n\r\nOver the past few months, we've been working on an overhaul to our data storage layer with a migration from a Postgres setup to a distributed, fault-tolerant, multi-region data layer using Riak. This initiative has been prioritized, and the migration will be performed in the coming weeks. We will notify our clients of the scheduled downtime via an incident on this status site, and via a blog post.",
    "body_updated_at": "2018-06-21T10:32:33-06:00",
    "body_draft": "##### Issue\r\n\r\nAt approximately 17:02 UTC on 2018-06-21, our master database server unexpectedly went unresponsive to all network traffic. A reboot of the machine at 17:05 UTC resulted in a failed mount of a corrupted EBS volume, and we made the decision at that time to fail over the slave database.\r\n\r\n##### Resolution\r\n\r\nAt 17:12 UTC, the slave database had been successfully promoted to master and the application recovered enough to accept web traffic again. A new slave database node was created and placed into the rotation to guard against future master failures. The promoted slave database performed slowly for the next couple of hours as the query cache began to warm up, and eventually settled into a reasonable performance profile around 20:00 UTC.\r\n\r\n##### Future Mitigation Plans\r\n\r\nOver the past few months, we've been working on an overhaul to our data storage layer with a migration from a Postgres setup to a distributed, fault-tolerant, multi-region data layer using Riak. This initiative has been prioritized, and the migration will be performed in the coming weeks. We will notify our clients of the scheduled downtime via an incident on this status site, and via a blog post.",
    "body_draft_updated_at": null,
    "published_at": "2018-06-21T10:32:33-06:00",
    "notify_subscribers": true,
    "notify_twitter": true,
    "custom_tweet": null,
    "updated_at": "2018-06-21T10:32:33-06:00" 
   }
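
When reading this response programmatically, the fields of interest are usually body, body_draft, and published_at. The snippet below is a minimal sketch, not an official sample; it assumes the jq command-line JSON processor is installed and uses placeholder IDs and API key.

  # Minimal sketch (assumes jq): report whether the Postmortem has been
  # published and whether a draft body is present.
  curl -s https://api.statuspage.io/v1/pages/[page_id]/incidents/[incident_id]/postmortem.json \
    -H "Authorization: OAuth [api_key]" \
    | jq '{published_at, has_draft: (.body_draft != null)}'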

Create Postmortem, or if one exists, update it

This endpoint updates the draft of a postmortem. If no postmortem exists for the specified incident, one will be created. To promote a draft to a published state, see the publish endpoint below.
Updating a draft does NOT send notifications to your users.

ENDPOINT
  PUT /pages/[page_id]/incidents/[incident_id]/postmortem.json

PARAMETERS
  postmortem[body_draft] - The draft body of the Postmortem.

SAMPLE CALL
  curl https://api.statuspage.io/v1/pages/hmzvdmpfxkjl/incidents/3fv9c1rdhbw2/postmortem.json \
    -H "Authorization: OAuth 2a7b9d4aac30956d537ac76850f4d78de30994703680056cc103862d53cf8074" \
    -X PUT \
    -d "postmortem[body_draft]=New Postmortem body"

RESPONSE CODES
  200 - Successful call
  400 - Bad request
  401 - Could not authenticate

SAMPLE RESPONSE
  {
    "created_at": "2018-06-21T10:32:33-06:00",
    "preview_key": "371fdg624kjk",
    "body": "##### Issue\r\n\r\nAt approximately 17:02 UTC on 2018-06-21, our master database server unexpectedly went unresponsive to all network traffic. A reboot of the machine at 17:05 UTC resulted in a failed mount of a corrupted EBS volume, and we made the decision at that time to fail over the slave database.\r\n\r\n##### Resolution\r\n\r\nAt 17:12 UTC, the slave database had been successfully promoted to master and the application recovered enough to accept web traffic again. A new slave database node was created and placed into the rotation to guard against future master failures. The promoted slave database performed slowly for the next couple of hours as the query cache began to warm up, and eventually settled into a reasonable performance profile around 20:00 UTC.\r\n\r\n##### Future Mitigation Plans\r\n\r\nOver the past few months, we've been working on an overhaul to our data storage layer with a migration from a Postgres setup to a distributed, fault-tolerant, multi-region data layer using Riak. This initiative has been prioritized, and the migration will be performed in the coming weeks. We will notify our clients of the scheduled downtime via an incident on this status site, and via a blog post.",
    "body_updated_at": "2018-06-22T14:32:33-06:00",
    "body_draft": "##### Issue\r\n\r\nUPDATE: At approximately 17:02 UTC on 2018-06-21, our master database server unexpectedly went unresponsive to all network traffic. A reboot of the machine at 17:05 UTC resulted in a failed mount of a corrupted EBS volume, and we made the decision at that time to fail over the slave database.\r\n\r\n##### Resolution\r\n\r\nAt 17:12 UTC, the slave database had been successfully promoted to master and the application recovered enough to accept web traffic again. A new slave database node was created and placed into the rotation to guard against future master failures. The promoted slave database performed slowly for the next couple of hours as the query cache began to warm up, and eventually settled into a reasonable performance profile around 20:00 UTC.\r\n\r\n##### Future Mitigation Plans\r\n\r\nOver the past few months, we've been working on an overhaul to our data storage layer with a migration from a Postgres setup to a distributed, fault-tolerant, multi-region data layer using Riak. After prioritizing this iniatitive, we performed the migration today at noon. Our clients have already been notified of the incident and the steps we took to address the issue via an incident on this status page as well as an accompanying blog post.",
    "body_draft_updated_at": null,
    "published_at": null,
    "notify_subscribers": false,
    "notify_twitter": false,
    "custom_tweet": null,
    "updated_at": "2018-06-22T14:32:33-06:00" 
  }
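
A common workflow is to save a draft with this endpoint and then promote it with the publish endpoint documented in the next section. The snippet below is a minimal sketch, not an official sample; it assumes a bash-compatible shell and uses placeholder IDs and API key.

  # Minimal end-to-end sketch: update the draft, then publish it.
  BASE="https://api.statuspage.io/v1/pages/[page_id]/incidents/[incident_id]"
  AUTH="Authorization: OAuth [api_key]"

  # 1. Create or update the draft. No notifications are sent at this step.
  curl -s "$BASE/postmortem.json" -H "$AUTH" -X PUT \
    --data-urlencode "postmortem[body_draft]=New Postmortem body"

  # 2. Promote the draft to a published state, notifying email subscribers.
  curl -s "$BASE/postmortem/publish.json" -H "$AUTH" -X PUT \
    -d "postmortem[notify_subscribers]=true"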

Publish associated Postmortem

Publishing a Postmortem promotes your draft to a published state (effectively copying body_draft to body). Optionally, you can choose to notify your users that a Postmortem has been published, but only on the first publishing of a Postmortem.
notify_subscribers: email notifications must be enabled for this option to work (SMS and webhook subscribers are excluded from postmortem notifications)
notify_twitter: a Twitter account must have previously been created for this option to work

ENDPOINT
  PUT /pages/[page_id]/incidents/[incident_id]/postmortem/publish.json

PARAMETERS
  postmortem[notify_subscribers] - Boolean representing whether you want to notify email subscribers. (defaults to false)
  postmortem[notify_twitter] - Boolean representing whether you want to notify Twitter followers. (defaults to false)
  postmortem[custom_tweet] - String representing the tweet to be sent. Must adhere to Twitter length requirements.

SAMPLE CALL
  curl https://api.statuspage.io/v1/pages/hmzvdmpfxkjl/incidents/3fv9c1rdhbw2/postmortem/publish.json \
    -H "Authorization: OAuth 2a7b9d4aac30956d537ac76850f4d78de30994703680056cc103862d53cf8074" \
    -X PUT \
    -d "postmortem[notify_subscribers]=true" \
    -d "postmortem[notify_twitter]=true" \
    -d "postmortem[custom_tweet]=Tweet Body"

RESPONSE CODES
  200 - Successful call
  400 - Bad request
  401 - Could not authenticate
  404 - No postmortem found to publish

SAMPLE RESPONSE
  {
    "created_at": "2018-06-21T10:32:33-06:00",
    "preview_key": "371fdg624kjk",
    "body": "##### Issue\r\n\r\nAt approximately 17:02 UTC on 2018-06-21, our master database server unexpectedly went unresponsive to all network traffic. A reboot of the machine at 17:05 UTC resulted in a failed mount of a corrupted EBS volume, and we made the decision at that time to fail over the slave database.\r\n\r\n##### Resolution\r\n\r\nAt 17:12 UTC, the slave database had been successfully promoted to master and the application recovered enough to accept web traffic again. A new slave database node was created and placed into the rotation to guard against future master failures. The promoted slave database performed slowly for the next couple of hours as the query cache began to warm up, and eventually settled into a reasonable performance profile around 20:00 UTC.\r\n\r\n##### Future Mitigation Plans\r\n\r\nOver the past few months, we've been working on an overhaul to our data storage layer with a migration from a Postgres setup to a distributed, fault-tolerant, multi-region data layer using Riak. This initiative has been prioritized, and the migration will be performed in the coming weeks. We will notify our clients of the scheduled downtime via an incident on this status site, and via a blog post.",
    "body_updated_at": "2018-06-22T14:32:33-06:00",
    "body_draft": "##### Issue\r\n\r\nAt approximately 17:02 UTC on 2018-06-21, our master database server unexpectedly went unresponsive to all network traffic. A reboot of the machine at 17:05 UTC resulted in a failed mount of a corrupted EBS volume, and we made the decision at that time to fail over the slave database.\r\n\r\n##### Resolution\r\n\r\nAt 17:12 UTC, the slave database had been successfully promoted to master and the application recovered enough to accept web traffic again. A new slave database node was created and placed into the rotation to guard against future master failures. The promoted slave database performed slowly for the next couple of hours as the query cache began to warm up, and eventually settled into a reasonable performance profile around 20:00 UTC.\r\n\r\n##### Future Mitigation Plans\r\n\r\nOver the past few months, we've been working on an overhaul to our data storage layer with a migration from a Postgres setup to a distributed, fault-tolerant, multi-region data layer using Riak. This initiative has been prioritized, and the migration will be performed in the coming weeks. We will notify our clients of the scheduled downtime via an incident on this status site, and via a blog post.",
    "body_draft_updated_at": "2018-06-23T10:45:46-06:00",
    "published_at": "2018-06-23T10:45:46-06:00",
    "notify_subscribers": true,
    "notify_twitter": true,
    "custom_tweet": String,
    "updated_at": "2018-06-23T10:45:46-06:00"
  }

Revert published Postmortem

Reverting a Postmortem takes the value in the published Postmortem body and moves it to body_draft. There must be a published Postmortem in order to use revert.

ENDPOINT
  PUT /pages/[page_id]/incidents/[incident_id]/postmortem/revert.json

SAMPLE CALL
  curl https://api.statuspage.io/v1/pages/hmzvdmpfxkjl/incidents/3fv9c1rdhbw2/postmortem/revert.json \
    -H "Authorization: OAuth 2a7b9d4aac30956d537ac76850f4d78de30994703680056cc103862d53cf8074" \
    -X PUT

RESPONSE CODES
  200 - Successful call
  400 - Bad request
  401 - Could not authenticate
  404 - No postmortem found to revert from

SAMPLE RESPONSE
  {
    "created_at": "2018-06-21T10:32:33-06:00",
    "preview_key": "371fdg624kjk",
    "body": "##### Issue\r\n\r\nAt approximately 17:02 UTC on 2018-06-21, our master database server unexpectedly went unresponsive to all network traffic. A reboot of the machine at 17:05 UTC resulted in a failed mount of a corrupted EBS volume, and we made the decision at that time to fail over the slave database.\r\n\r\n##### Resolution\r\n\r\nAt 17:12 UTC, the slave database had been successfully promoted to master and the application recovered enough to accept web traffic again. A new slave database node was created and placed into the rotation to guard against future master failures. The promoted slave database performed slowly for the next couple of hours as the query cache began to warm up, and eventually settled into a reasonable performance profile around 20:00 UTC.\r\n\r\n##### Future Mitigation Plans\r\n\r\nOver the past few months, we've been working on an overhaul to our data storage layer with a migration from a Postgres setup to a distributed, fault-tolerant, multi-region data layer using Riak. This initiative has been prioritized, and the migration will be performed in the coming weeks. We will notify our clients of the scheduled downtime via an incident on this status site, and via a blog post.",
    "body_draft": "##### Issue\r\n\r\nAt approximately 17:02 UTC on 2018-06-21, our master database server unexpectedly went unresponsive to all network traffic. A reboot of the machine at 17:05 UTC resulted in a failed mount of a corrupted EBS volume, and we made the decision at that time to fail over the slave database.\r\n\r\n##### Resolution\r\n\r\nAt 17:12 UTC, the slave database had been successfully promoted to master and the application recovered enough to accept web traffic again. A new slave database node was created and placed into the rotation to guard against future master failures. The promoted slave database performed slowly for the next couple of hours as the query cache began to warm up, and eventually settled into a reasonable performance profile around 20:00 UTC.\r\n\r\n##### Future Mitigation Plans\r\n\r\nOver the past few months, we've been working on an overhaul to our data storage layer with a migration from a Postgres setup to a distributed, fault-tolerant, multi-region data layer using Riak. This initiative has been prioritized, and the migration will be performed in the coming weeks. We will notify our clients of the scheduled downtime via an incident on this status site, and via a blog post.",
    "body_updated_at": "2018-06-22T14:32:33-06:00",
    "body_draft_updated_at": "2018-06-23T14:32:33-06:00",
    "published_at": "2018-06-22T14:32:33-06:00",
    "notify_subscribers": false,
    "notify_twitter": false,
    "custom_tweet": null,
    "updated_at": "2018-06-22T14:32:33-06:00"
  }
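
Because the revert response contains both body and body_draft, you can confirm in one step that the draft now contains the published body. The snippet below is a minimal sketch, not an official sample; it assumes jq is installed and uses placeholder IDs and API key.

  # Minimal sketch (assumes jq): revert, then print true if the draft now
  # matches the previously published body.
  curl -s https://api.statuspage.io/v1/pages/[page_id]/incidents/[incident_id]/postmortem/revert.json \
    -H "Authorization: OAuth [api_key]" \
    -X PUT \
    | jq '.body == .body_draft'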

Delete associated Postmortem
ENDPOINT
  DELETE /pages/[page_id]/incidents/[incident_id]/postmortem.json

SAMPLE CALL
  curl https://api.statuspage.io/v1/pages/hmzvdmpfxkjl/incidents/3fv9c1rdhbw2/postmortem.json \
    -H "Authorization: OAuth 2a7b9d4aac30956d537ac76850f4d78de30994703680056cc103862d53cf8074" \
    -X DELETE

RESPONSE CODES
  204 - Successful call
  401 - Could not authenticate
  404 - Requested Postmortem could not be found
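
A successful delete returns no response body, so checking the HTTP status code is the simplest way to confirm it succeeded. The snippet below is a minimal sketch, not an official sample; it uses placeholder IDs and API key.

  # Minimal sketch: print only the HTTP status code (expect 204 on success).
  curl -s -o /dev/null -w "%{http_code}\n" \
    https://api.statuspage.io/v1/pages/[page_id]/incidents/[incident_id]/postmortem.json \
    -H "Authorization: OAuth [api_key]" \
    -X DELETE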