The 8 announcements from AWS re:Invent 2018 that had us most excited and why
24 January 2019
4 Minute Read
There were 106 new features and services launched during the week of re:Invent 2018, many of which have already been covered in preliminary write-ups.
We wanted to provide a more practical hands-on review of the new services and have taken the opportunity to use this as the focus of one of our 6-weekly Hackdays. Each Hackday has a different theme and goal, allowing our team to dig a little deeper, gain exposure to new things and learn different skills.
For this particular hackday, each team member could choose from any of the 106 launched services to experiment with, provide a demo to the team and share their experience using the new service.
Here is our list of services that have us excited and why. (click on service heading to skip ahead)
- AWS Lambda Runtime API and Lambda Layers
- Amazon SageMaker Ground Truth
- Amazon CloudWatch Logs Insights
- AWS Transit Gateway
- AWS Serverless Application Repository
- AWS Transfer for SFTP
- AWS DeepRacer
- Amazon Forecast
Contact us if you would like to discuss any of the services mentioned in this article and how they can be utilised to meet your company's needs.
AWS Lambda Runtime API and Lambda Layers
While the ability to develop custom Lambda runtimes is an exciting and welcome new capability, it's the potential for more sophisticated, robust and performant serverless CI pipelines that has us excited about Lambda Layers. Lambda function deployable assets can quickly grow in size as third-party libraries are added over time. This can lead to longer build times and slower deployments. It can also result in additional storage costs as assets are versioned and stored in S3 for promotion and rollback across environments. With the addition of Lambda Layers, you can now develop serverless CI pipelines which decouple the business logic of the function (which is generally small and often updated) from the third-party libraries (which are often large and less frequently updated).
You now have greater control over the release of dependency updates (both feature and security) by decoupling the application dependencies from the application core with Lambda Layers. Build and deployment pipelines won't break due to unexpected package updates, and security updates can be tested and rolled out quickly without impacting the application development and deployment pipeline. It will be exciting to see how the AWS user community will leverage these new capabilities.
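To make the decoupling concrete, here is a minimal sketch of how a Python dependency layer is packaged: Lambda extracts layer zips into /opt and adds /opt/python to the import path, so shared libraries just need to sit under a python/ prefix. The module name and source below are placeholders.

```python
import io
import zipfile

def build_layer_zip(module_name: str, source: str) -> bytes:
    """Package a module under the 'python/' prefix required for
    Python Lambda Layers, returning the zip archive as bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        # Lambda extracts layers into /opt and adds /opt/python to
        # sys.path, so dependencies must live under python/.
        zf.writestr(f"python/{module_name}.py", source)
    return buf.getvalue()

layer_zip = build_layer_zip("shared_utils", "def greet():\n    return 'hi'\n")
# With credentials, the bytes could then be published via
# boto3.client('lambda').publish_layer_version(
#     LayerName='shared-utils', Content={'ZipFile': layer_zip})
```

The published layer version can then be referenced from many functions, so each function's own artifact contains only its frequently changing business logic.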
Amazon SageMaker Ground Truth
I've been lucky enough to work on some ML projects of late and have been using various components of Amazon SageMaker, so I was very pleasantly surprised to see the announcement of Amazon SageMaker Ground Truth, which helps you quickly build highly accurate training datasets for machine learning. When it comes to training machine learning models, producing high-quality training data is most of the hard work. It is very costly and time consuming, it involves human labelling of datasets, and for many organisations it is something they can't do very often. I have direct experience setting up a toolchain to allow the labelling of large datasets, and it involved the deployment of many different software components as well as ongoing management. This can now be replaced with SageMaker Ground Truth.
A couple of killer features of SageMaker Ground Truth you'll want to check out:
- The ability to create both public and private human labelling workforces, and even outsource to expert private labelling services, all from within the SageMaker Ground Truth service
- Automatic labelling of raw data: where the labelling model has lower confidence in its results, it passes the data to humans to label, greatly reducing the amount of human labelling required
As part of a recent hackday I got hands on with SageMaker Ground Truth. I was able to set it up to provide an image labelling workflow within only a couple of hours replacing an existing labelling solution which took several weeks to setup. This easier and much faster way to build highly accurate training datasets is going to lower the barrier significantly to implement and improve the results of machine learning projects for many organisations. As an initial service release, it has a very rich feature set for you to take advantage of. I’m looking forward to seeing where the SageMaker team take this service.
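As a sketch of what driving this programmatically looks like, a labelling job is created through SageMaker's CreateLabelingJob API. The helper below assembles a plausible request for an image labelling task; every ARN, S3 URI and task setting here is a placeholder, and in practice the pre-processing and consolidation Lambda ARNs would be the built-in ones AWS provides per task type.

```python
def labeling_job_request(job_name, manifest_s3_uri, output_s3_uri,
                         role_arn, workteam_arn):
    """Assemble a sketch of the CreateLabelingJob request for an
    image labelling workflow. All ARNs/URIs are placeholders."""
    return {
        "LabelingJobName": job_name,
        "LabelAttributeName": "label",
        "InputConfig": {
            "DataSource": {"S3DataSource": {"ManifestS3Uri": manifest_s3_uri}}
        },
        "OutputConfig": {"S3OutputLocation": output_s3_uri},
        "RoleArn": role_arn,
        "HumanTaskConfig": {
            "WorkteamArn": workteam_arn,
            "UiConfig": {"UiTemplateS3Uri": f"{output_s3_uri}/template.liquid"},
            "TaskTitle": "Label images",
            "TaskDescription": "Choose the label that best describes the image",
            # Multiple workers per object lets Ground Truth consolidate
            # disagreeing answers into one high-confidence label.
            "NumberOfHumanWorkersPerDataObject": 3,
            "TaskTimeLimitInSeconds": 300,
            # Placeholders for the built-in task-type Lambdas.
            "PreHumanTaskLambdaArn": "arn:aws:lambda:REGION:ACCT:function:PRE-Example",
            "AnnotationConsolidationConfig": {
                "AnnotationConsolidationLambdaArn": "arn:aws:lambda:REGION:ACCT:function:ACS-Example"
            },
        },
    }

req = labeling_job_request(
    "demo-labels", "s3://my-bucket/manifest.json", "s3://my-bucket/output",
    "arn:aws:iam::123456789012:role/GroundTruthRole",
    "arn:aws:sagemaker:REGION:ACCT:workteam/private-crowd/my-team")
# With credentials: boto3.client('sagemaker').create_labeling_job(**req)
```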
Amazon CloudWatch Logs Insights
The announcement of Amazon CloudWatch Logs Insights is definitely one we've been waiting for, as it means no more digging through unreadable log streams to find needles in a haystack. You can perform sophisticated queries on a log group and get the results back in an easy-to-visualise table, or count the results and plot them on a chart. There is also nothing to set up: you just click and search! No more trying to set up Amazon Elasticsearch clusters with Kibana and AWS Lambda functions to ingest data, then trying to secure access to Kibana.
Access to Insights is controlled through IAM, and queries can be run programmatically with the SDK or CLI. One aspect you'll need to be wary of is the pricing: you pay per query, based on the amount of log data scanned, and I can see someone unsuspectingly racking up a large bill. As long as you restrict the time period of your queries, though, it should be cheaper and less painful than running Elasticsearch and Kibana.
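To show how the SDK route works, the helper below builds the parameters for the StartQuery API, with an Insights query that counts ERROR lines in five-minute buckets. The log group name and time window are placeholders.

```python
def insights_query_params(log_group, start_epoch, end_epoch):
    """Build StartQuery parameters for a CloudWatch Logs Insights
    query that counts ERROR messages in 5-minute buckets."""
    query = (
        "fields @timestamp, @message"
        " | filter @message like /ERROR/"
        " | stats count() as errors by bin(5m)"
    )
    return {
        "logGroupName": log_group,
        "startTime": start_epoch,  # epoch seconds; keep this window tight
        "endTime": end_epoch,      # since billing is per GB scanned
        "queryString": query,
    }

params = insights_query_params("/aws/lambda/my-function", 1548288000, 1548291600)
# With credentials: qid = boto3.client('logs').start_query(**params)['queryId'],
# then poll get_query_results(queryId=qid) until the status is 'Complete'.
```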
AWS Transit Gateway
The AWS Transit Gateway announcement didn't really come as a surprise and I almost ignored it. Networks are boring, right? We just want to build apps. :P Somehow I ended up, kind of by accident, going to Nick Matthews' session - AWS Transit Gateway and Transit VPCs - Reference Architecture for Many VPCs (https://www.youtube.com/watch?v=ar6sLmJ45xs).
This session really got me thinking about how we are currently implementing large scale AWS network designs using a single transit VPC using virtual routing appliances like Cisco CSRs.
Transit Gateway, being a fully managed solution, is really cool, but it also creates greater flexibility to support many different network topologies. Many people have been a little hesitant to build applications using a VPC-per-application approach due to the overhead of managing the network complexities. Transit Gateway goes a long way to reducing this complexity and makes it simpler for individual teams to manage the whole stack for their applications, including the network. I don't think we will see the full potential of this service for some time, but I believe it will become a core building block for any significant AWS solution.
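To sketch what the VPC-per-application pattern looks like in practice, each application VPC becomes one attachment to the shared gateway. The helper below builds the parameters for EC2's CreateTransitGatewayVpcAttachment API; all IDs are placeholders.

```python
def tgw_attachment_requests(tgw_id, app_vpcs):
    """Build one CreateTransitGatewayVpcAttachment request per
    application VPC. app_vpcs maps a VPC ID to its subnet IDs
    (one subnet per Availability Zone the attachment should span)."""
    return [
        {
            "TransitGatewayId": tgw_id,
            "VpcId": vpc_id,
            "SubnetIds": subnet_ids,
        }
        for vpc_id, subnet_ids in app_vpcs.items()
    ]

requests = tgw_attachment_requests(
    "tgw-0abc",
    {"vpc-app1": ["subnet-a1", "subnet-a2"], "vpc-app2": ["subnet-b1"]})
# With credentials, each dict goes to
# boto3.client('ec2').create_transit_gateway_vpc_attachment(**request)
```

Each team's stack can own its one attachment request, while routing between applications is managed centrally on the gateway's route tables.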
AWS Serverless Application Repository
The concept of this is great and has largely gone unnoticed, but I think it's a great addition to the AWS Serverless Application Model (SAM). The AWS Serverless Application Repository gives you the ability to publish templates and then consume them in an upstream template as a nested stack. This allows you to write and manage common templates which can be shared across your serverless applications. However, one big feature that seems to be lacking is the ability to share templates between accounts, because who has a single account these days? That aside, I think it is a great idea and I am very interested to see where SAM and the Serverless Application Repository go.
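As a sketch of how consumption works, a published application is pulled into an upstream SAM template as a resource of type AWS::Serverless::Application. The dict below mirrors that resource in Python; the application ID ARN is a placeholder.

```python
def nested_app_resource(application_id, semantic_version):
    """Build the SAM resource (as a dict) that consumes a published
    Serverless Application Repository app as a nested stack."""
    return {
        "Type": "AWS::Serverless::Application",
        "Properties": {
            "Location": {
                "ApplicationId": application_id,
                "SemanticVersion": semantic_version,
            }
        },
    }

res = nested_app_resource(
    "arn:aws:serverlessrepo:us-east-1:123456789012:applications/common-alarms",
    "1.0.0")
# This dict is what the equivalent YAML resource in a SAM template serialises to.
```

Pinning SemanticVersion is what makes the shared template safe to reuse: the upstream stack only picks up a new version of the common template when you bump the number.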
AWS Transfer for SFTP
It’s funny that in 2019 we are still talking about something like SFTP. It shows that businesses still have a long way to go to be truly cloud native. However, with the introduction of AWS Transfer for SFTP, AWS has produced something that is very appealing and simple to use: a fully managed, highly available SFTP solution with native S3 integration. From the very start, the steps to get your SFTP server up and running were clear and straightforward. I was able to launch, access, upload and download within a few minutes. You also have the ability to use IAM roles and trust relationships to create varying levels of access per user.
This is ideal when working with a significant number of users, or when handling sensitive data. There is also the added bonus of being able to implement a scope-down policy, which makes a significant difference when working with many users on the one host. All of this is available from the convenience of your local terminal; you simply need your newly created SFTP endpoint and your public SSH key. I believe that this is going to be an underestimated but powerful service from AWS.
One thing to be mindful of is that you pay for data transfer IN and OUT and this can have an impact on the cost of using this service. Regardless, it is a step in the right direction and will hopefully attract the attention of more users and further development from AWS.
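To illustrate the scope-down idea, here is a sketch of such a policy built in Python: the ${transfer:UserName} policy variable is resolved by the service at login, confining each user to their own prefix in a shared bucket. The bucket name is a placeholder.

```python
import json

def scope_down_policy(bucket):
    """Build a scope-down policy (as a JSON string) that restricts each
    SFTP user to their own prefix in the shared bucket. The
    ${transfer:UserName} variable is substituted by the service at login."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListHomeDir",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                # Only list keys under the user's own prefix.
                "Condition": {"StringLike": {"s3:prefix": ["${transfer:UserName}/*"]}},
            },
            {
                "Sid": "HomeDirAccess",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::{bucket}/${{transfer:UserName}}/*",
            },
        ],
    }
    return json.dumps(policy)

policy_json = scope_down_policy("my-sftp-bucket")
```

One policy document then serves every user on the host, instead of one IAM policy per user.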
AWS DeepRacer
Now this is the exact service we need to get rolling, pun intended, on Machine Learning (ML). It's a great way to get more exposure to the uses and applications of machine learning. It's not always about conquering cities and taking over the world with your endless robot army; there are so many more applications that can be entertaining and reap great rewards. I have personally seen some very interesting applications of machine learning that have piqued my interest in this topic, and now AWS are getting involved too! DeepRacer is a great way to challenge what you know and what you have come to expect from machine learning.
It is a great opportunity for this to be your initial exposure to the topic, or for you to show off your skills. Machine learning is only going to get more exposure and have a larger impact over the coming years. I personally think what AWS are doing with DeepRacer and the DeepRacer League is an excellent approach to bringing a bit of fun to an otherwise quite serious and complex topic.
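For a taste of what you actually write, a DeepRacer model is trained against a Python reward function you supply. The follow-the-centre-line sketch below is in the spirit of the starter examples AWS provides; the banding thresholds are arbitrary choices.

```python
def reward_function(params):
    """Reward the car for staying close to the centre line. DeepRacer
    calls this each step with a params dict of track/vehicle state."""
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three progressively wider bands around the centre line,
    # rewarded progressively less.
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    if distance_from_center <= 0.25 * track_width:
        return 0.5
    if distance_from_center <= 0.5 * track_width:
        return 0.1
    return 1e-3  # effectively off the track
```

The fun of the tournament format is that everyone iterates on exactly this function: encode your own theory of fast driving, train, and see where you land on the leaderboard.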
Amazon Forecast
The Amazon Forecast announcement was one that I am sure piqued many people's interest, including my own. Through the use of machine learning, Amazon Forecast takes into account past data and events and aims to provide insight into what adjustments and alterations can or should be made. Not only that, but Amazon Forecast will train, tune and deploy completely custom, private machine learning forecasting models that are directly applicable to your business. This means they haven't just come up with a service that adopts a "one size fits all" approach.
It is tailored to each individual situation, giving customers the confidence to trust that the information they are receiving is directly applicable to their needs. It covers a whole range of industries and use cases, and forecasts can be created from any time series data, such as retail demand, finances and logistics, to name a few. This is definitely an Amazon service you want to keep a close eye on. It has great potential to grow into a fully managed service you can't do without.
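As a sketch of the input side, the core dataset Forecast trains on is a simple time series of item, timestamp and target value. The helper below generates rows in that shape; the SKU and demand figures are made up for illustration.

```python
from datetime import datetime, timedelta

def target_time_series(item_id, start, demand):
    """Generate daily CSV rows of (item_id, timestamp, target value),
    the basic shape of a forecasting target time series."""
    return [
        f"{item_id},{(start + timedelta(days=i)):%Y-%m-%d %H:%M:%S},{value}"
        for i, value in enumerate(demand)
    ]

rows = target_time_series("sku-123", datetime(2019, 1, 1), [12, 15, 9])
# Rows like these would be uploaded to S3 and registered as the
# target time series dataset the service trains its models on.
```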
Join our Team
We're always on the look out for cloud engineers interested in improving their knowledge and skills. If our hackdays sound like the type of activities you'd like to be involved in, then we may just be the perfect fit for each other. Check out our careers page to see what positions we have open and how you can contact us.