Why is inserting data into Azure Table Storage so slow?
I am working on a project where we are evaluating whether or not to move from SQL Azure (or whatever it's called this week) to Azure Table Storage.
When I started evaluating the performance of inserting data into Table Storage, I was astonished at how slowly things were moving. I was only able to insert around 400 rows per minute, which was nowhere near the throughput I needed for my project.
Before declaring failure, I decided to do a little bit of research to figure out why things were going so slowly for me and instantly found the problem: Nagle's Algorithm.
Nagle's algorithm works by combining a number of small outgoing messages, and sending them all at once. Specifically, as long as there is a sent packet for which the sender has received no acknowledgment, the sender should keep buffering its output until it has a full packet's worth of output, so that output can be sent all at once.
It turns out that Nagle's algorithm is enabled by default in .NET and can significantly slow down communications in applications that send many small messages, especially when combined with TCP delayed ACKs.
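For context, Nagle's behavior lives at the TCP level: on a raw socket it corresponds to the TCP_NODELAY option, which .NET exposes as the NoDelay property. The sketch below only illustrates that mapping on a plain TcpClient; the host and request are placeholders and this is not part of the Azure fix.

using System.Net.Sockets;
using System.Text;

// Minimal illustration: TCP_NODELAY via TcpClient.NoDelay.
var client = new TcpClient();
client.NoDelay = true;  // disable Nagle: small writes are sent immediately instead of being buffered
client.Connect("example.com", 80);

using (var stream = client.GetStream())
{
    byte[] request = Encoding.ASCII.GetBytes(
        "HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n");
    stream.Write(request, 0, request.Length);
}
client.Close();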
Luckily for us, we can simply disable Nagle's algorithm in our project. The easiest way to accomplish this is to disable Nagling for every service point.
ServicePointManager.UseNagleAlgorithm = false;
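For this global switch to take effect, it has to run before the first outgoing request, since it only sets the default for ServicePoints created afterwards. Here is a minimal sketch of where it might live, assuming a console-style entry point (in a web role the equivalent spot would be Application_Start):

using System.Net;

class Program
{
    static void Main()
    {
        // Disable Nagle's algorithm for every ServicePoint this process creates.
        // This must run before any call to blob, table, or queue storage.
        ServicePointManager.UseNagleAlgorithm = false;

        // ... the rest of the application starts here ...
    }
}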
If this isn't an option for you due to an existing production deployment, then you can disable Nagle's algorithm specifically for Table Storage.
var storageAccount = CloudStorageAccount.Parse(connectionString);
ServicePoint tableServicePoint = ServicePointManager.FindServicePoint(storageAccount.TableEndpoint);
tableServicePoint.UseNagleAlgorithm = false;
Note: It is important to turn Nagle's algorithm off before you make your first call to blob, table, or queue storage; otherwise the setting will not be applied.
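Putting the table-specific approach together end to end, here is a minimal sketch, assuming the classic WindowsAzure.Storage SDK (CloudStorageAccount, CloudTableClient, TableEntity). The connection string, the "logs" table name, and the LogEntity type are placeholders for illustration only.

using System.Net;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Placeholder entity type used only for this example.
public class LogEntity : TableEntity
{
    public LogEntity() { }
    public LogEntity(string partitionKey, string rowKey)
    {
        PartitionKey = partitionKey;
        RowKey = rowKey;
    }
    public string Message { get; set; }
}

class Program
{
    static void Main()
    {
        string connectionString = "<your storage connection string>";
        var storageAccount = CloudStorageAccount.Parse(connectionString);

        // Configure the table endpoint's ServicePoint before the first request,
        // so connections to Table Storage are opened with Nagling disabled.
        ServicePoint tableServicePoint =
            ServicePointManager.FindServicePoint(storageAccount.TableEndpoint);
        tableServicePoint.UseNagleAlgorithm = false;

        // From here on, small inserts are no longer held back by Nagle buffering.
        CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
        CloudTable table = tableClient.GetTableReference("logs");
        table.CreateIfNotExists();

        table.Execute(TableOperation.Insert(new LogEntity("p1", "r1") { Message = "hello" }));
    }
}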
By simply disabling Nagle's algorithm, I was able to gain a significant improvement in performance when dealing with Azure Table Storage. Hopefully, this will speed things up on your end as well.