Micro-Applications With AWS Lambda

At this year's Amazon Web Services re:Invent conference, AWS unveiled their second compute service - Lambda. AWS Lambda is a PaaS-like service that runs discrete chunks of code in response to a given event, without you needing to manage any of the underlying compute resources. Lambda responds to a wide range of events in your AWS infrastructure within milliseconds of their occurrence, removing the need for inefficient polling while you wait for resources to change.
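To make that concrete, here is a minimal sketch of what a Lambda function looks like (Lambda launched with Node.js support; the Python below is purely illustrative): AWS invokes your handler with the triggering event and a context object, and everything around it is managed for you.

```python
# Illustrative Lambda handler: AWS calls this function with the event payload
# and a context object; there are no servers to provision or manage.
def handler(event, context):
    # The event's shape depends on the source (S3, DynamoDB, Kinesis, ...).
    records = event.get("Records", [])
    for record in records:
        print("Received event from:", record.get("eventSource"))
    return {"status": "processed", "records": len(records)}
```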

Lambda will fundamentally change the way applications are built by allowing developers to shift from a single application to multiple event-based Micro-Applications. Today, traditional software development consists of three main components: Functions, Interactions, and Data. Functions are core business logic, Data holds business state, and Interactions are the events that tie the two together. Lambda exists at the intersection of all three by responding to particular events within your AWS infrastructure and executing code tailored to that specific event.

Functions, Events, and Data

With the paradigm shift that Lambda introduces, your applications turn from single source code solutions (e.g. a .NET solution/project) into multiple event-based Micro-Applications made up of the core building blocks of functionality required to run your application. This lets the entire suite of back-end services in your application scale enormously, since each one is an individual, discrete component backed by the full power of AWS EC2.

In the example below, three Lambda applications run in sequence: the first extracts metadata from a photo and writes it to DynamoDB (triggered by an S3 event), the second calculates trending data and writes it to DynamoDB (triggered by the insert from the first step), and the third notifies the end user of trending data based on their photo (triggered by the second DynamoDB insert).

AWS Lambda Workflow
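As a rough sketch of the first step, a handler like the one below could be wired to S3 PUT events, pull basic metadata about the uploaded photo, and write it to DynamoDB. It assumes the boto3 SDK, and the table name ("PhotoMetadata") and attribute names are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("PhotoMetadata")  # hypothetical table

def handler(event, context):
    # An S3 event delivers one or more records describing the changed objects.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Fetch object metadata (size, content type) without downloading the photo.
        head = s3.head_object(Bucket=bucket, Key=key)

        table.put_item(Item={
            "PhotoKey": key,
            "Bucket": bucket,
            "SizeBytes": head["ContentLength"],
            "ContentType": head.get("ContentType", "unknown"),
        })
```

The second and third steps would follow the same pattern, with their handlers subscribed to the DynamoDB stream instead of S3.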

Lambda also decouples your application from the compute resources required to run it. AWS handles all of this, so you no longer have to worry about managing the infrastructure, monitoring, and logging programmatically - it's all done for you.

AWS offers unique pricing for Lambda, based on the number of requests paired with execution time for a given memory allocation.

  • $0.20 per million requests
  • $0.00000021 per 100 milliseconds at 128MB
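As a rough, hypothetical example of what that works out to: suppose a 128MB function handles 3 million requests a month, averaging 200 milliseconds each (the workload numbers are made up purely for illustration).

```python
# Hypothetical monthly bill: 3 million requests at 128MB, averaging 200 ms each.
requests = 3_000_000
avg_ms = 200

request_cost = (requests / 1_000_000) * 0.20           # $0.20 per million requests
compute_cost = requests * (avg_ms / 100) * 0.00000021  # $0.00000021 per 100 ms at 128MB

print(f"Requests: ${request_cost:.2f}, Compute: ${compute_cost:.2f}")
# Requests: $0.60, Compute: $1.26 (before the Free Tier is applied)
```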

If you are interested in getting started with Lambda, you can sign up for the Preview here. AWS is gracious enough to offer 3.2 million seconds of execution time and 1 million requests per month as part of the Free Tier. What are you waiting for? It's time to experience a new way to build applications.

Presentation Slides: Internet of Things

Last weekend, I co-presented a talk on the Internet of Things at a company-sponsored conference. According to the 2014 Gartner Hype Cycle, IoT is at the very peak of "Inflated Expectations" - essentially making any use of the term "Internet of Things" immediately useless.

Peak of Inflated Expectations: Early publicity produces a number of success stories — often accompanied by scores of failures. Some companies take action; many do not. -Gartner

2014 Gartner Hype Cycle

In our talk, Brian and I try to simplify IoT down to its verticals - connected home, security, remote control for life (phone), etc. We also dive into how IoT applications work at their core and outline a potential architecture for development.

The presentation slides are included below; please feel free to reach out and let me know what you think.

Talk Slides: Distributed Computing, the CAP Theorem, and How to Improve System Architectures

Lots of companies - especially in the non-startup world - are starting to look closely at upgrading their legacy systems to the "next generation" - services, scalability, NoSQL, etc. Most of these systems have existed, in some form or fashion, for decades and are beginning to impede the business's ability to handle new customer demands - especially around time-to-market and workloads with painfully poor performance.

Whether you are creating a new distributed architecture or simply improving an existing slow process, there are complexity concerns you will have to deal with. It's better to understand these issues up front and make accommodations for them before you get blindsided in the middle of a long-term project.

In the talk below, Nathan and I discuss some of the basics of distributed computing, architecture, and storage, and introduce some of the issues and constraints around creating the next-generation architecture that will sustain your organization through the next decade.