Introducing Azure Services

Just as Version 3.0 was a major upgrade to the front-end services, Version 3.5 is an equally significant upgrade to the back-end infrastructure. Version 3.5 represents the evolution of our product as it moves completely into the Azure Cloud environment.

Although this upgrade is a bit like replacing the roof on your house, in that most of the benefits go unseen, there are still plenty of GUI upgrades and plenty of exciting new functionality that come with moving to the cloud.

Version 3.5 Azure

Since day one, Version 3.0 has seen virtually 100% uptime, and whenever our clients needed to restore historic data we have been able to facilitate each and every request. In short, the Version 3.0 environment has been bulletproof (knock on wood). Even so, recent developments in cloud technology have shown me just how easily onsite and traditionally hosted environments can fall behind in hardware and software technology, and how vulnerable they can be to all manner of disasters and nefarious activity.

In the earliest days of Version 3.0 development, I had an eye on the emerging cloud, as it was obvious even then that this was where future solutions should reside. At the time the cloud was mostly a storage and virtualized-app platform, but today there are true cloud applications that can be built within the fabric of the Azure Cloud to instantly and measurably increase the value of hosted environments.

Our new Azure platform is built upon three pillars of cloud technology: the new Azure SQL Database, Azure Storage, and Azure App Service.

Database Performance

Typically, associations maintain decades of historical data. On top of that, our clients collect a lot of activity log data. A lot. Our clients also maintain a ton of transaction audit trails that support serious analytics for in-depth review of user performance and historical usage.

The downside? The gigabytes of data collected over the years can quickly impact performance and backup infrastructure.

With the new Azure SQL Database platform, we are now able to introduce the Stretch Database service to address the common problem of maintaining historical data!

The Stretch Database is a form of partitioning, but, unlike SQL Server's table partitioning, there is no need to restructure tables to make it work. If six years' worth of data sits in Azure storage (cold storage) and one year is stored locally (hot storage), a query asking for all the transactions in the past 30 days immediately eliminates roughly 85 percent of the data from being searched. That is a huge performance gain, and a huge cost savings as well, since cold storage is far more affordable than local storage.
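To make the hot/cold split concrete, here is a minimal sketch of how a table could be enabled for Stretch. It assumes a SQL Server 2017 database that has already been enabled for remote data archiving, and it uses hypothetical server, database, table, and column names (dbo.ActivityLog, LogDate) plus an arbitrary cutoff date; none of these reflect our actual schema.

    import pyodbc

    # Placeholder connection; the server, database, table, and column are hypothetical.
    conn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=tcp:example-sql-2017;Database=ExampleAMS;"
        "Trusted_Connection=yes;",
        autocommit=True,
    )
    cursor = conn.cursor()

    # Inline table-valued predicate: rows older than the cutoff are eligible to
    # migrate to Azure (cold storage); newer rows stay on the local server (hot storage).
    cursor.execute("""
        CREATE FUNCTION dbo.fn_stretch_older_rows (@LogDate datetime2)
        RETURNS TABLE
        WITH SCHEMABINDING
        AS
        RETURN SELECT 1 AS is_eligible
               WHERE @LogDate < CONVERT(datetime2, '1/1/2018', 101)
    """)

    # Turn Stretch on for the hypothetical activity-log table; note that the
    # table itself is not restructured.
    cursor.execute("""
        ALTER TABLE dbo.ActivityLog
            SET ( REMOTE_DATA_ARCHIVE = ON (
                FILTER_PREDICATE = dbo.fn_stretch_older_rows(LogDate),
                MIGRATION_STATE = OUTBOUND ) )
    """)

Once the migration state is OUTBOUND, eligible rows trickle out to Azure in the background while queries continue to see the table as a single unit.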

Isolated Performance

Because Azure SQL Database runs on the latest database technologies, ensuring a seamless migration to Azure meant building a migration strategy to move our client databases from SQL Server 2008 to SQL Server 2017.
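To give one small, hedged example of what that strategy involves: a database coming from SQL Server 2008 carries compatibility level 100, and part of landing it on the SQL Server 2017 engine is stepping it up to level 140. The sketch below shows that check and change from Python with pyodbc; the server, database, and credentials are placeholders, not our production values.

    import pyodbc

    # Placeholder Azure SQL connection; server, database, and credentials are hypothetical.
    conn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=tcp:example-server.database.windows.net,1433;"
        "Database=ExampleClientDb;Uid=example_admin;Pwd=<password>;Encrypt=yes;",
        autocommit=True,
    )
    cursor = conn.cursor()

    # A database migrated from SQL Server 2008 reports compatibility level 100.
    cursor.execute(
        "SELECT compatibility_level FROM sys.databases WHERE name = DB_NAME()"
    )
    print("Compatibility level before:", cursor.fetchone()[0])

    # Raise it to 140 (SQL Server 2017) so the database picks up the newer engine behavior.
    cursor.execute("ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 140")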

After 15 years, we are moving our clients away from a Shared Resources model into an Isolated Environment model. This new model ensures that our clients have all of the horsepower they need, when they need it, without being impacted by large batch jobs (such as backups, restores, and mass mailings) or by the disk and storage limitations of a shared environment. This new model posed the biggest challenge, and an entirely new platform infrastructure was required to ensure continued support for our SSRS Reporting Engine, e-commerce Transactional Emails, and our Core EBlaster module.

This platform update was also a great opportunity to look at our services with fresh eyes, to ensure that the move to Azure is not only a win in terms of performance, reliability, and security, but also a big win in terms of bringing newer, more powerful core functionality to the table. I will be sharing these upgrades with you shortly.

Load Balancing for Large Queries and Reports

Use read-only replicas to load balance heavy, read-only workloads!

As you know, when someone performs a high-intensity read on the database, whether a large query or a large report, all active users of the system are impacted.

The move to Azure SQL isolates your database from being impacted by other systems, and your new Azure SQL database also includes extensive manual and automatic tuning and optimization. Beyond that, we now have a powerful new tool at our disposal that significantly reduces the impact of these large queries on your staff and online users.

Overview of Read Scale-Out

Each database in the Premium tier (https://docs.microsoft.com/en-us/azure/sql-database/sql-database-service-tiers-dtu) or in the Business Critical tier (https://docs.microsoft.com/en-us/azure/sql-database/sql-database-service-tiers-vcore) is automatically provisioned with several Always On replicas to support the availability SLA.

These replicas are provisioned with the same performance level as the read-write replica used by the regular database connections. The Read Scale-Out feature allows you to load balance SQL Database read-only workloads using the capacity of the read-only replicas instead of sharing the read-write replica. This way, the read-only workload is isolated from the main read-write workload and does not affect its performance. The feature is intended for applications that include logically separated read-only workloads, such as analytics, which can gain performance benefits from this additional capacity at no extra cost.
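From the application side, taking advantage of Read Scale-Out is as simple as declaring a read-only intent on the connection. Below is a minimal sketch using Python and pyodbc with placeholder server, database, credential, and table names; the ApplicationIntent=ReadOnly keyword routes the session to a read-only replica, and DATABASEPROPERTYEX confirms where the session landed.

    import pyodbc

    # Placeholder server, database, credentials, and table; ApplicationIntent is the key part.
    conn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=tcp:example-server.database.windows.net,1433;"
        "Database=ExampleClientDb;Uid=report_user;Pwd=<password>;Encrypt=yes;"
        "ApplicationIntent=ReadOnly;"
    )
    cursor = conn.cursor()

    # Returns 'READ_ONLY' when the session has been routed to a read-only replica.
    cursor.execute("SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability')")
    print("Connected to:", cursor.fetchone()[0])

    # Heavy reporting queries issued on this connection run against the replica,
    # so they do not compete with staff and online users on the read-write replica.
    cursor.execute("SELECT COUNT(*) FROM dbo.ActivityLog")
    print("Rows:", cursor.fetchone()[0])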

Distributed Denial of Service Protection

Azure DDoS Basic protection is integrated into the Azure platform by default and at no additional cost. Azure DDoS Standard protection is a premium paid service that offers enhanced DDoS mitigation capabilities, via adaptive tuning, attack notification, and telemetry, to protect all protected resources within a virtual network against the impacts of a DDoS attack.