Automatically upload Zendesk ticket data to Amazon Elasticsearch Service



In the previous blog post, we retrieved ticket data and comment data using Python.

Checking Zendesk tickets using Python

In this post, we illustrate how to automatically upload Zendesk ticket data to Amazon Elasticsearch Service.


  • Create an Amazon Elasticsearch Service domain on an Amazon Virtual Private Cloud (VPC) private subnet
  • Execute AWS Lambda via Amazon API Gateway
  • Automatically upload data to an Amazon Elasticsearch Service using Zendesk automations and webhooks
  • See below for the data schema for Amazon Elasticsearch Service:

    {
        "ticket_id": 12345,
        "url": "",
        "subject": "subject",
        "create_date": "2017-10-16T00:55:22Z",
        "updated_date": "2017-10-21T18:02:34Z",
        "nested_comments": [
            {
                "comment_id": 22222,
                "public": true,
                "body": "comment"
            },
            {
                "comment_id": 33333,
                "public": false,
                "body": "comment"
            }
        ]
    }
  • Search for similar tickets with Amazon Elasticsearch Service

How to set it up

We will set up AWS Lambda to execute when a Zendesk ticket is closed.

Amazon API Gateway

We will POST the Zendesk ticket_id, so we create an Amazon API Gateway API with the POST method.

AWS Lambda

The AWS Lambda code has been uploaded to GitHub, so please see it there.
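While the actual code is in the repository, a minimal sketch of the handler might look like the following (the proxy-event shape and the elided Zendesk/Elasticsearch calls are assumptions for illustration, not the repository's code):

```python
import json

def parse_ticket_id(event):
    """Extract ticket_id from the JSON body that the webhook POSTs."""
    body = json.loads(event.get("body") or "{}")
    return int(body["ticket_id"])

def lambda_handler(event, context):
    ticket_id = parse_ticket_id(event)
    # Here the real function would fetch the ticket and its comments from
    # the Zendesk API, build the document in the schema shown above, and
    # PUT it into the Amazon Elasticsearch Service endpoint inside the VPC.
    return {"statusCode": 200, "body": json.dumps({"ticket_id": ticket_id})}
```

Because the API Gateway method is POST, the webhook body arrives in `event["body"]` as a JSON string.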

Zendesk: Creating webhooks

Create a webhook from: Settings → Extensions → Add Target → HTTP Target

For more, see Zendesk's support article on creating webhooks with the HTTP target.


  • For the URL, specify the Amazon API Gateway URL.

Zendesk: Creating automations

See Zendesk's article on creating and managing automations for time-based events.

Click the Admin icon in the sidebar, then select Automations and create a custom automation. In this case, the ticket status changes from solved to closed after several hours, so we set the target to be notified when the ticket is closed.

Select Notify target, and select the webhook we just created. The JSON body sends only the ticket ID.
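For example, using Zendesk's ticket ID placeholder, the JSON body might look like this:

```json
{"ticket_id": "{{ticket.id}}"}
```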


Now, when a Zendesk ticket is closed, its data is uploaded to Amazon Elasticsearch Service.

Upload past data manually

The automation is complete, but past data must be uploaded manually. Since the Amazon Elasticsearch Service domain is private and created on a VPC subnet, the data must be uploaded from within the same VPC. Create an EC2 instance (Amazon Linux) on the public subnet and execute the following Python program to upload the past data.

Sample code

The code has been uploaded to GitHub.
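As a rough sketch of what such a backfill script might contain (the function and parameter names here are illustrative assumptions, not the repository's actual code), one helper shapes each ticket into the document schema above and another PUTs it into the domain:

```python
import json
import urllib.request

def build_document(ticket, comments):
    """Shape a Zendesk ticket and its comments into the search document."""
    return {
        "ticket_id": ticket["id"],
        "url": ticket.get("url", ""),
        "subject": ticket["subject"],
        "create_date": ticket["created_at"],
        "updated_date": ticket["updated_at"],
        "nested_comments": [
            {"comment_id": c["id"], "public": c["public"], "body": c["body"]}
            for c in comments
        ],
    }

def upload(endpoint, index, doc_type, doc):
    """PUT one document into Amazon Elasticsearch Service, keyed by ticket_id."""
    url = "%s/%s/%s/%d" % (endpoint, index, doc_type, doc["ticket_id"])
    req = urllib.request.Request(
        url,
        data=json.dumps(doc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The script would loop over the tickets fetched from the Zendesk API (as in the previous post) and call `upload` for each built document.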

This concludes uploading all closed tickets to Amazon Elasticsearch Service.

Try searching

In the following example we search for tickets containing "ELB 5xx" in comment.body.

curl -s -H "Content-Type: application/json" -XGET "[Endpoint]/[IndexName]/[TypeName]/_search" -d'
{
  "query": {
    "match": {
      "comment.body": "ELB 5xx"
    }
  }
}' | python -c 'import sys,json;print(json.dumps(json.load(sys.stdin),indent=4,ensure_ascii=False))'
    "took": 9,
    "timed_out": false,
    "_shards": {
        "total": 5,
        "successful": 5,
        "skipped": 0,
        "failed": 0
    "hits": {
        "total": 1056,
        "max_score": 11.866644,
        "hits": [
                "_index": "IndexName",
                "_type": "TypeName",
                "_id": "id",
                "_score": 11.866644,
                "_source": {
                    "subject": "subject....",
                    "url": ""
                "_index": "IndexName",
                "_type": "TypeName",
                "_id": "11988",
                "_score": 11.810463,
                "_source": {
                    "subject": "subject...",
                    "url": ""


By using Amazon Elasticsearch Service you can customize your search. This time, we automated the uploading of Zendesk data into Amazon Elasticsearch Service.