Distributed Computing, the CAP Theorem, and How to Improve System Architectures
Many companies - especially outside the startup world - are looking closely at upgrading their legacy systems to the "next generation": services, scalability, NoSQL, and so on. Most of these systems have existed, in some form, for decades and are beginning to impede the business's ability to meet new customer demands - especially around time-to-market and workloads with chronically poor performance.
Whether you are creating a new distributed architecture or simply improving an existing slow process, there are complexity concerns you will have to deal with. It's better to understand these issues up front and plan for them than to get blindsided in the middle of a long-term project.
In the talk below, Nathan and I discuss some of the basics of distributed computing, architecture, and storage, and introduce some of the issues and constraints around creating a next-generation architecture for your organization that will sustain you through the next decade.
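One of those constraints, and the one the talk's title points at, is the CAP theorem: during a network partition, a distributed store must sacrifice either consistency or availability. The toy sketch below illustrates that trade-off with a hypothetical two-replica store (all names here are invented for illustration; this is not a real datastore API):

```python
# Toy illustration of the CAP trade-off during a network partition.
# ToyStore, Replica, and the "CP"/"AP" modes are hypothetical names
# used only for this sketch.

class Replica:
    def __init__(self):
        self.value = None


class ToyStore:
    """Two replicas; during a partition, writes cannot reach replica B."""

    def __init__(self, mode):
        self.mode = mode          # "CP" favors consistency, "AP" favors availability
        self.a = Replica()
        self.b = Replica()
        self.partitioned = False

    def write(self, value):
        prev = self.a.value
        self.a.value = value
        if self.partitioned:
            if self.mode == "CP":
                # Consistency-first: refuse the write rather than let
                # the replicas diverge (the system becomes unavailable).
                self.a.value = prev
                raise RuntimeError("unavailable during partition")
            # Availability-first ("AP"): accept the write; replica B is
            # now stale, so readers may see old data.
        else:
            self.b.value = value

    def read_b(self):
        """Read from replica B (possibly stale in AP mode)."""
        return self.b.value
```

In AP mode a write during a partition succeeds but `read_b()` returns the old value; in CP mode the same write raises instead of diverging. Real systems sit at many points along this spectrum, but the partition forces some version of this choice.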