Simply put, Azure Table storage is too slow. The best I could get out of it was about 700 records/second on INSERT and about 4,600 records/second on READ, which was unacceptable for the data sizes I had to deal with: several tables of around 100,000 records each.
Admittedly, I did not use batch operations, but that does not seem to matter: 700 records/second appears to be the true speed of the storage, and it cannot be improved; see this StackOverflow article.
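For context, a minimal sketch of how such a throughput measurement might look with the azure-data-tables Python SDK, this time using batched entity group transactions. The connection string, table name and payload size below are placeholders, not the setup behind the numbers above:

```python
import time
from azure.data.tables import TableServiceClient

# Placeholder connection string; not the account used for the numbers above.
CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;"
NUM_RECORDS = 100_000

service = TableServiceClient.from_connection_string(CONNECTION_STRING)
table = service.create_table_if_not_exists("benchmark")

# Assumed 100-byte payload per entity, all in one partition so they can be batched.
entities = [
    {"PartitionKey": "bench", "RowKey": str(i), "Payload": "x" * 100}
    for i in range(NUM_RECORDS)
]

start = time.perf_counter()
# Entity group transactions accept at most 100 operations,
# all targeting the same PartitionKey.
for i in range(0, NUM_RECORDS, 100):
    batch = [("create", e) for e in entities[i : i + 100]]
    table.submit_transaction(batch)
elapsed = time.perf_counter() - start

print(f"INSERT: {NUM_RECORDS / elapsed:.0f} records/second")
```

Note that entity group transactions are capped at 100 operations and a single partition key, so even batched ingestion ends up being issued partition by partition.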
SQL Server, on the other hand, gave me 1,800-5,000 records/second on INSERT (depending on the record size) and around 30,000 records/second on READ, so I ditched Table storage and went with SQL Server instead.
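A comparable SQL Server insert measurement can be sketched with pyodbc and fast_executemany; again, the connection string and table schema are placeholders rather than the exact setup used for the figures below:

```python
import time
import pyodbc

# Placeholder connection string and schema: adjust for your own server.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=bench;"
    "UID=user;PWD=password"
)
NUM_RECORDS = 100_000
rows = [(i, "x" * 100) for i in range(NUM_RECORDS)]

conn = pyodbc.connect(CONN_STR, autocommit=False)
cur = conn.cursor()
cur.execute(
    "IF OBJECT_ID('dbo.Bench') IS NULL "
    "CREATE TABLE dbo.Bench (Id INT PRIMARY KEY, Payload VARCHAR(200))"
)
conn.commit()

# fast_executemany sends the rows as a parameter array instead of
# issuing one round trip per row.
cur.fast_executemany = True
start = time.perf_counter()
cur.executemany("INSERT INTO dbo.Bench (Id, Payload) VALUES (?, ?)", rows)
conn.commit()
elapsed = time.perf_counter() - start

print(f"INSERT: {NUM_RECORDS / elapsed:.0f} records/second")
conn.close()
```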
The data is summarized in the table below. Each value is in records per second; higher is better.
| | Table Storage | SQL Server |
|---|---|---|
| Read from Azure VM | 4601 | 30332 |
| Write from Azure VM | 690 | 1867 |
| Read from outside Azure | 3858 | 34456 |
| Write from outside Azure | 213 | 1020 |
Of course, the speed is highly variable depending on server load, network conditions and the like, but the upper limit of 700 records/second for Table storage INSERT appears to be quite stable.
