
Moving Data from S3 to DynamoDB Using AWS Data Pipeline
AWS Data Pipeline
AWS Data Pipeline is a service that helps you sort, reformat, analyse, filter, and report on data, and derive an outcome from it. It is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals.
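If you prefer the API to the console, a pipeline shell can be created with boto3. This is only a minimal sketch; the region, name, and unique ID below are placeholders of my own, not values from this demo:

```python
import boto3

datapipeline = boto3.client("datapipeline", region_name="us-east-1")

# create_pipeline only registers an empty pipeline; the definition
# (data nodes, activities, schedule) is added later with
# put_pipeline_definition, as sketched further below.
response = datapipeline.create_pipeline(
    name="s3-to-dynamodb-demo",          # hypothetical name
    uniqueId="s3-to-dynamodb-demo-001",  # idempotency token
)
pipeline_id = response["pipelineId"]
print("Created pipeline:", pipeline_id)
```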
S3
A simple and popular AWS service for storage. It replicates data by default across multiple facilities, charges per usage, and is deeply integrated with other AWS services. Buckets are logical storage units; objects are the data added to a bucket. S3 offers storage classes at the object level, which can save money by moving less frequently accessed objects to a colder storage class.
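For example, a bucket can be created and an object placed straight into a colder storage class with boto3. The bucket and key names here are made up for illustration:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Bucket names are globally unique; this one is hypothetical.
s3.create_bucket(Bucket="my-demo-bucket-12345")

# Objects can be written directly into an infrequent-access
# storage class to reduce cost for rarely read data.
s3.put_object(
    Bucket="my-demo-bucket-12345",
    Key="archive/report.csv",
    Body=b"id,name\n1,alice\n",
    StorageClass="STANDARD_IA",
)
```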
DynamoDB
Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database built for high-performance applications at any scale. Its benefits include built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data export tools.
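As a quick illustration of the key-value model, here is how an item is written and read back by its key with boto3, assuming a table named Demo with a string key attribute id already exists (the key name is my assumption):

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Demo")  # assumes the table already exists

# Key-value access: write one item, then fetch it by its key.
table.put_item(Item={"id": "1", "name": "alice"})
item = table.get_item(Key={"id": "1"}).get("Item")
print(item)
```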
Check the blog below to learn which data format DynamoDB accepts and how to create that data format.
Before starting the demo below, please create the required IAM roles by following the blog below.
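If you would rather script the roles than click through IAM, here is a rough sketch. Data Pipeline conventionally uses DataPipelineDefaultRole and DataPipelineDefaultResourceRole; the permission policies to attach to them are covered in the linked blog, so only the trust relationships are shown here:

```python
import json

import boto3

iam = boto3.client("iam")

# Role assumed by the Data Pipeline service itself.
iam.create_role(
    RoleName="DataPipelineDefaultRole",
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": ["datapipeline.amazonaws.com",
                                      "elasticmapreduce.amazonaws.com"]},
            "Action": "sts:AssumeRole",
        }],
    }),
)

# Role assumed by the EC2 resources the pipeline launches.
iam.create_role(
    RoleName="DataPipelineDefaultResourceRole",
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
)

# EC2 picks up the resource role through an instance profile.
iam.create_instance_profile(InstanceProfileName="DataPipelineDefaultResourceRole")
iam.add_role_to_instance_profile(
    InstanceProfileName="DataPipelineDefaultResourceRole",
    RoleName="DataPipelineDefaultResourceRole",
)
```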
Demo
Creating the DynamoDB table
Click on Create table
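The same table can be created from code. The table name Demo matches this demo; the key attribute id and on-demand billing mode are my assumptions for a minimal setup:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="Demo",
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Block until the table is ready to use.
dynamodb.get_waiter("table_exists").wait(TableName="Demo")
```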
Now creating an S3 bucket
Click on Create bucket
Now uploading the data file into S3
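The console upload is equivalent to a one-line boto3 call; the local file name, bucket, and key below are placeholders for whatever you uploaded:

```python
import boto3

s3 = boto3.client("s3")

# upload_file(local_path, bucket, key) streams the file to S3.
s3.upload_file("data.txt", "my-demo-bucket-12345", "input/data.txt")
```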
Now creating the Data Pipeline
Click on Edit in Architect
Click on S3 DataNode
Now click on Data Format
Click on Create new data format
Click on the new DataFormat
In Type, select the DynamoDB data format
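The Architect clicks above amount to a pipeline definition. The sketch below shows roughly what such a definition looks like when submitted through boto3; the S3 path, the export data format type, and the HiveCopyActivity/EmrCluster objects are illustrative assumptions, not an exact dump of what the console generates:

```python
import boto3

datapipeline = boto3.client("datapipeline", region_name="us-east-1")

objects = [
    {"id": "Default", "name": "Default", "fields": [
        {"key": "scheduleType", "stringValue": "ondemand"},
        {"key": "role", "stringValue": "DataPipelineDefaultRole"},
        {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},
    ]},
    # The S3 DataNode, pointing at the uploaded data and referencing
    # the data format object below.
    {"id": "S3Input", "name": "S3Input", "fields": [
        {"key": "type", "stringValue": "S3DataNode"},
        {"key": "directoryPath", "stringValue": "s3://my-demo-bucket-12345/input/"},
        {"key": "dataFormat", "refValue": "DDBFormat"},
    ]},
    # The data format selected in the Type dropdown.
    {"id": "DDBFormat", "name": "DDBFormat", "fields": [
        {"key": "type", "stringValue": "DynamoDBExportDataFormat"},
    ]},
    # The destination DynamoDB table.
    {"id": "DDBTable", "name": "DDBTable", "fields": [
        {"key": "type", "stringValue": "DynamoDBDataNode"},
        {"key": "tableName", "stringValue": "Demo"},
    ]},
    # The activity that copies S3 input to the DynamoDB node,
    # running on a transient EMR cluster.
    {"id": "CopyToDDB", "name": "CopyToDDB", "fields": [
        {"key": "type", "stringValue": "HiveCopyActivity"},
        {"key": "input", "refValue": "S3Input"},
        {"key": "output", "refValue": "DDBTable"},
        {"key": "runsOn", "refValue": "EmrClusterForCopy"},
    ]},
    {"id": "EmrClusterForCopy", "name": "EmrClusterForCopy", "fields": [
        {"key": "type", "stringValue": "EmrCluster"},
    ]},
]

datapipeline.put_pipeline_definition(
    pipelineId=pipeline_id,  # from the create_pipeline call earlier
    pipelineObjects=objects,
)
```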
Click on Activate
Now wait for the pipeline to move from its waiting state to the RUNNING state
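Activation and the wait can also be done from code. This polling sketch assumes the pipeline ID from the earlier create_pipeline call; pipeline state comes back as key/value fields, where the "@pipelineState" key holds the status shown in the console:

```python
import time

import boto3

datapipeline = boto3.client("datapipeline", region_name="us-east-1")

datapipeline.activate_pipeline(pipelineId=pipeline_id)

# Poll until the pipeline reports RUNNING.
while True:
    desc = datapipeline.describe_pipelines(pipelineIds=[pipeline_id])
    fields = desc["pipelineDescriptionList"][0]["fields"]
    state = next(f["stringValue"] for f in fields if f["key"] == "@pipelineState")
    print("pipeline state:", state)
    if state == "RUNNING":
        break
    time.sleep(30)
```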
Checking the Demo DynamoDB table: the data has transferred successfully
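Besides eyeballing the console, a quick way to confirm the transfer is to count the items that landed in the table:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Select="COUNT" returns only the item count, not the items themselves.
count = dynamodb.scan(TableName="Demo", Select="COUNT")["Count"]
print(f"Demo table now holds {count} items")
```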