
re:Invent 2020 Week One re:Cap

Aaron Walker

07 December 2020

3 Minute Read

Due to the COVID-19 pandemic, re:Invent 2020 is a completely virtual, three-week-long event. Normally I'd be writing the re:Invent re:Cap at 30,000 feet somewhere over the Atlantic, but this year I can write it from the comfort of my couch while watching some great sessions ;)

With the first week behind us, I thought I'd put together a quick post of highlights from last week, which included the first all-virtual Andy Jassy keynote.

Keynote

The keynote took a familiar format, with Andy giving an update on how the AWS business is doing. What I found interesting is that he went into detail to explain what it means to grow to a $46B annual run rate; I suspect analysts have been critical of AWS' percentage growth rate compared to other major cloud providers. He also really pressed the point that there are still plenty of growth opportunities left, especially with COVID-19 seemingly accelerating cloud adoption. This is definitely a trend we are seeing as well.

Andy's keynote was packed full of new product launches and updates; here's a link to the full list.

Here's my highlight list from the first week:

  • Babelfish for Amazon Aurora PostgreSQL is available for Preview
  • AWS Lambda now supports container images as a packaging format
  • AWS Lambda now supports up to 10 GB of memory and 6 vCPU cores for Lambda Functions
  • AWS Lambda changes duration billing granularity from 100ms down to 1ms
  • Introducing Amazon ECS & EKS Anywhere
  • Announcing Amazon ECR Public and Amazon ECR Public Gallery
  • Introducing the next version of Amazon Aurora Serverless (V2) in Preview
  • AWS announces Amazon DevOps Guru in Preview, an ML-powered cloud operations service to improve application availability for AWS workloads

Babelfish for Amazon Aurora PostgreSQL is available for Preview

We have quite a few customers running SQL Server RDS workloads. If this service lives up to the hype, it will be a pretty impactful one, and it would help accelerate migrations from SQL Server to Postgres. From what I understand, you'll be able to keep existing code that uses T-SQL and also write new code using Postgres natively, which means a migration doesn't have to be an "all or nothing" approach. Also, the fact that they are open-sourcing it means that if it doesn't exactly handle your data or query model, you have the opportunity to contribute. Open-sourcing it also means it can run against any Postgres database, not just RDS, which will help with local development as you'll be able to run it locally.

AWS Lambda now supports container images as a packaging format

Ever since Lambda was launched back in 2014, everyone has asked for a way to deploy custom packages, and with the announcement of custom runtimes last year we pretty much thought that was all we'd get. I guess I was wrong… happily so. In fact, I was so happy that I wrote a separate blog post on how to use it.

With native container support and the increase to 10 GB of memory and 6 vCPUs, this opens up Lambda to many more workloads.

AWS Lambda now supports up to 10 GB of memory and 6 vCPU cores for Lambda Functions

You can now provision Lambda functions with a maximum of 10 GB of memory, more than 3x the previous limit. This is great for large ETL and media processing workloads. The allocated vCPU resources scale with memory, up to a maximum of 6 vCPUs, which makes Lambda a candidate for compute-intensive workloads like machine learning inference.
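As a rough rule of thumb, the CPU allocation scales linearly with memory. The sketch below assumes a full vCPU at roughly 1,769 MB (based on AWS's published guidance); treat the exact ratio as an approximation, not something you can configure.

```python
# Rough sketch of how Lambda's CPU allocation scales with the memory
# setting. The 1,769 MB-per-vCPU ratio is an assumption based on AWS's
# guidance that a function gets one full vCPU at 1,769 MB.

MB_PER_VCPU = 1769

def approx_vcpus(memory_mb: int) -> float:
    """Approximate vCPUs allocated for a given memory setting."""
    return round(memory_mb / MB_PER_VCPU, 2)

for memory in (128, 1769, 3008, 10240):
    print(f"{memory:>6} MB -> ~{approx_vcpus(memory)} vCPUs")
```

In other words, you can't dial up vCPUs independently: to get the full 6 vCPUs you max out the memory at 10 GB.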

AWS Lambda changes duration billing granularity from 100ms down to 1ms

This change doesn't sound like much on the surface, but it's actually pretty massive for a lot of serverless workloads. If you have lots of short-duration functions, it could easily cut your Lambda costs by 60-80%. Now, if your Lambda bill is $1.97 per month, like my personal account's was last month, then an 80% saving won't buy you a coffee. But as more and more pure serverless apps are being developed, it's good to see AWS adjust its pricing models to truly reflect the "you only pay for what you use" model.
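To see where those savings come from, here's some back-of-the-envelope math. The per-GB-second price is the published us-east-1 rate at the time; the workload numbers (a 30 ms function at 128 MB, 10 million invocations) are made up for illustration.

```python
# Back-of-the-envelope comparison of 100 ms vs 1 ms billing granularity.
# PRICE_PER_GB_SECOND is the us-east-1 duration rate at the time of
# writing; the workload figures are illustrative assumptions.
import math

PRICE_PER_GB_SECOND = 0.0000166667

def duration_cost(memory_mb, duration_ms, invocations, granularity_ms):
    """Duration cost with each run rounded up to the billing granularity."""
    billed_ms = math.ceil(duration_ms / granularity_ms) * granularity_ms
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000) * invocations
    return gb_seconds * PRICE_PER_GB_SECOND

# A short 30 ms function at 128 MB, 10 million invocations per month:
old = duration_cost(128, 30, 10_000_000, 100)  # each run billed as 100 ms
new = duration_cost(128, 30, 10_000_000, 1)    # each run billed as 30 ms
print(f"100 ms granularity: ${old:.2f}/month")
print(f"  1 ms granularity: ${new:.2f}/month")
print(f"saving: {(1 - new / old):.0%}")
```

For that hypothetical workload, the duration cost drops by 70% — the shorter your functions, the bigger the win.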

Introducing Amazon ECS Anywhere

I've been a huge fan of ECS since its launch in 2015, and we use it in production with more than 80% of our customers. So why should I care that I can now run ECS anywhere? One big feature of ECS is its highly available and scalable control plane, and unlike with other schedulers, you don't really need to care about it. With ECS Anywhere you can leverage the ECS control plane in different environments and clouds, including my laptop ;)

Another use case this helps solve is cross-region application deployments. Right now you need to duplicate the ECS cluster configuration, task definitions, etc. in every region you want to deploy to. Using ECS Anywhere, you can simply register ECS instances in another region and configure placement rules to target them. I'm really looking forward to deep diving into this new feature.

Announcing Amazon ECR Public and Amazon ECR Public Gallery

Amazon Elastic Container Registry (ECR) has been the go-to registry when deploying images to ECS and, more recently, EKS. But it wasn't possible to share images publicly, as it required IAM authentication. Amazon ECR Public is a fully managed registry that makes it simple to publicly share container images for anyone to download. The recent introduction of rate limiting and new policies for public images on Docker Hub has led to a number of alternative registries springing up, including GitHub's and now ECR.

ECR Public comes with a free tier of 50 GB of storage each month for sharing public images. Anyone who pulls images anonymously gets 500 GB of free data bandwidth each month; if you authenticate with an AWS account, that increases to 5 TB each month.

Introducing the next version of Amazon Aurora Serverless (V2) in Preview

When AWS announced the initial version of Aurora Serverless in 2018, I was very bullish and actually deployed it in production shortly after. A few months later, after various issues with its scale-up/scale-down behavior, we replaced it with a normal Aurora cluster. We learnt a number of lessons, including that you really need to understand how Aurora Serverless scales under the hood to use it effectively. When scaling up, it would double the ACUs (Aurora Capacity Units), which worked fine in most instances. However, scaling back down meant waiting for a period of no database activity, which could often take hours — and in one case, for us, days.

Amazon Aurora Serverless (V2) promises to address many of V1's issues: it will be able to scale up and down in smaller ACU increments, and scaling down will work even while the database is still under load.
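To make the difference concrete, here's a toy sketch of the two scale-up behaviors. The 0.5-ACU step for V2 is my assumption based on the preview announcement; the real scaling logic is, of course, far more involved.

```python
# Toy sketch of the scaling difference described above: V1 doubles
# capacity on each scaling event, while V2 moves in small fixed
# increments (0.5 ACU here — an assumption, not a documented constant).

def v1_scale_up(current_acus: float) -> float:
    """V1 scales by doubling the current capacity."""
    return current_acus * 2

def v2_scale_up(current_acus: float, step: float = 0.5) -> float:
    """V2 adds a small increment instead of doubling."""
    return current_acus + step

# Growing from 4 ACUs to satisfy a load that needs ~5 ACUs:
print(v1_scale_up(4))  # jumps straight to 8 ACUs (and then must idle to shrink)
print(v2_scale_up(4))  # nudges up to 4.5 ACUs, then 5.0, and stops there
```

That doubling behavior is exactly why V1 overshot for us: a workload needing just over the current capacity paid for twice it, then waited for idle time to scale back.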

It's interesting that AWS decided to release what basically looks like a completely new service with Aurora Serverless V2 rather than just announcing it as improvements. I guess they wanted a clean slate to work with, based on the lessons they learnt with V1. I hope they provide a clear migration path from V1 to V2. It would also be great to be able to create and restore Aurora Serverless V2 clusters from a normal Aurora snapshot. #wishlist

AWS announces Amazon DevOps Guru in Preview, an ML-powered cloud operations service to improve application availability for AWS workloads

Amazon DevOps Guru is the newest in the line of ML-powered AWS services. DevOps Guru is designed to detect behaviors that deviate from normal operating patterns so you can identify operational issues long before they impact your customers. The first question you might ask yourself is "Am I out of a job?" :) But like most anomaly detection solutions, it will need ongoing tuning and refinement to produce meaningful results. This is another service I plan to deep dive into in the coming weeks; I'll write a separate blog post with my findings.

One week down, two more to go :)
