
Our company has adopted several practices for automating MS SQL Server database maintenance, which keep our clients’ databases running smoothly and efficiently. In this article, we discuss procedures for maintaining SQL Server indexes and statistics, taking backups, and checking database integrity.

Maintenance of SQL Server indexes and statistics 

Without regular maintenance, indexes become fragmented and table operations slow down, which can reduce business efficiency and lead to financial losses. That’s why at DB-Service we recommend IndexOptimize, a procedure for rebuilding and reorganizing indexes and for updating statistics.

The procedure is customizable and has many parameters. It can be configured to run for all databases, for databases of a certain type (for example, system or user databases), or for a specific database.

Different index maintenance modes are available:

   ● Online rebuild.

   ● Offline rebuild.

   ● Reorganization.

   ● Complex modes with conditions.

An example of a condition might be: “Reorganize the index. Rebuild the index online if reorganization is not supported. Rebuild the index offline if an online rebuild is not possible.”
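Such a condition can be expressed directly in the procedure call. The sketch below assumes the parameter conventions of the widely used open-source SQL Server Maintenance Solution by Ola Hallengren, which these procedure and parameter names match; treat the values as illustrations, not recommendations:

    EXECUTE dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        -- For heavily fragmented indexes: reorganize; rebuild online if
        -- reorganizing is not supported; rebuild offline as a last resort.
        @FragmentationHigh = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE';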

There are appropriate maintenance parameters for any level of index fragmentation, as the procedure reads this value from the sys.dm_db_index_physical_stats dynamic management function.

In the procedure, you can also set a filter on the number of pages in a table. For example, if the number of pages is less than 50, maintenance is not effective, so such tables will not be serviced by the task.

The number of processors used for re-indexing is specified separately. By default, maximum parallelism is used, which makes the maintenance run faster.
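As a sketch, the fragmentation thresholds, the page-count filter, and the degree of parallelism described above map to parameters like these (the threshold values are assumptions for illustration):

    EXECUTE dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        @FragmentationLevel1 = 5,     -- above 5% fragmentation: "medium" action applies
        @FragmentationLevel2 = 30,    -- above 30% fragmentation: "high" action applies
        @FragmentationMedium = 'INDEX_REORGANIZE',
        @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @PageCountLevel = 50,         -- skip indexes smaller than 50 pages
        @MaxDOP = 0;                  -- 0 = use all available processors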

Updating statistics is part of the maintenance and provides the query optimizer with up-to-date information for building efficient execution plans, which makes queries faster. We configure a balanced update schedule: running the process too often is just as detrimental as running it too rarely. Statistics can be updated for all indexes and columns, or selectively.
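A statistics-only run can be sketched like this (assuming the same procedure; @OnlyModifiedStatistics limits the work to statistics whose underlying data has changed):

    EXECUTE dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        @FragmentationLow = NULL,
        @FragmentationMedium = NULL,
        @FragmentationHigh = NULL,      -- no index work: update statistics only
        @UpdateStatistics = 'ALL',      -- both index and column statistics
        @OnlyModifiedStatistics = 'Y';  -- skip statistics with no modifications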

Backup 

Using backup policies for MS SQL databases is an important practice for data security and protection. Databases are exposed to various risks such as hard disk failures, operating system errors, virus attacks, and human error. Regular backups ensure that data can be recovered in case of loss.

At DB-Service, we take a flexible approach to each client and offer different backup strategies. Depending on the industry your company operates in, specific rules, regulations, and data storage requirements will apply, and the strategy is tailored to them.

Our main solution is the flexible and efficient DatabaseBackup procedure. It lets you configure backup policies for all databases or for a subset, such as system or user databases. Special flags make it possible to exclude arbitrary databases from the service, up to setting a policy for a single database. You can also choose where copies are stored: a local directory or a network share.
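For example, a policy for all user databases except one, with copies stored on a network share, can be sketched as follows (the database name and path are placeholders):

    EXECUTE dbo.DatabaseBackup
        @Databases = 'USER_DATABASES, -ScratchDb',   -- all user databases except ScratchDb
        @Directory = '\\backup-server\sql-backups',  -- local directory or network share
        @BackupType = 'FULL';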

DatabaseBackup creates a directory structure with the server name, instance name, database name, and backup type in the root directory of the backup. If the database is part of an availability group, the cluster name and availability group name are used instead of the server name and instance name.
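With default naming, the resulting layout looks roughly like this (server, instance, and database names are placeholders):

    \\backup-server\sql-backups\
        PRODSRV$SQL01\
            Sales\
                FULL\PRODSRV$SQL01_Sales_FULL_20240101_010000.bak
                DIFF\PRODSRV$SQL01_Sales_DIFF_20240102_010000.bak
                LOG\PRODSRV$SQL01_Sales_LOG_20240102_011500.trn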

There are three types of backups: 

1. Full; 

2. Differential; 

3. Transaction Log.   

Depending on the importance of the database and its settings, you can apply different configurations. As a rule, we configure all three types of backups for sales databases, which makes it possible to restore them to any point in time.
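A typical combination for such a database can be sketched as three scheduled jobs (the schedule and the Sales name are assumptions; log backups require the FULL recovery model):

    -- Weekly full backup
    EXECUTE dbo.DatabaseBackup @Databases = 'Sales',
        @Directory = '\\backup-server\sql-backups', @BackupType = 'FULL';

    -- Daily differential backup
    EXECUTE dbo.DatabaseBackup @Databases = 'Sales',
        @Directory = '\\backup-server\sql-backups', @BackupType = 'DIFF';

    -- Transaction log backup, e.g. every 15 minutes
    EXECUTE dbo.DatabaseBackup @Databases = 'Sales',
        @Directory = '\\backup-server\sql-backups', @BackupType = 'LOG';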

The CleanupMode and CleanupTime parameters control how copies are retained. With their help, we configure the automatic deletion of obsolete copies to free disk space for fresh backups. For example, BEFORE_BACKUP mode deletes the old copy before making a new one, while AFTER_BACKUP deletes old backup files only after the new backup completes; in the latter case, if the backup fails, the old files are not deleted. For additional space savings, you can use backup compression.
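For instance (336 hours of retention, i.e. 14 days, is an assumed value):

    EXECUTE dbo.DatabaseBackup
        @Databases = 'USER_DATABASES',
        @Directory = '\\backup-server\sql-backups',
        @BackupType = 'FULL',
        @CleanupTime = 336,            -- delete copies older than 336 hours (14 days)
        @CleanupMode = 'AFTER_BACKUP', -- old files survive if the new backup fails
        @Compress = 'Y';               -- enable backup compression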

Integrity check 

One of the key database practices is integrity checking. It is needed to confirm that the data in tables conforms to certain rules and constraints.

We use an effective tool for this check: the DatabaseIntegrityCheck stored procedure. The task can be used for any subset of databases, including those in AlwaysOn Availability Groups (AAG). Integrity can be checked both at the database level and at the level of individual filegroups. The key parameters are listed below, followed by a short example.

   ➡️ The PhysicalOnly parameter can limit the check to the physical structure of the database.

   ➡️ The DataPurity parameter allows you to check that column values are valid for their data types. Depending on the system configuration, the job can run in multiple threads.

   ➡️ The TimeLimit and LockTimeout parameters allow you to flexibly set the timing conditions of the job.
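Put together, a check can be sketched like this (the time values are assumptions for illustration):

    EXECUTE dbo.DatabaseIntegrityCheck
        @Databases = 'USER_DATABASES',
        @CheckCommands = 'CHECKDB',  -- full database-level check
        @PhysicalOnly = 'N',         -- 'Y' limits the check to the physical structure
        @DataPurity = 'Y',           -- validate column values against their data types
        @TimeLimit = 3600,           -- do not start new checks after 3600 seconds
        @LockTimeout = 600;          -- wait at most 600 seconds for a lock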

Our practices (algorithm) 

The main goal of our service is to provide fast, fault-tolerant, and reliable system operation. The practices listed above are only a part of our database management and automation work. We are scrupulous about setting the necessary operating system and DBMS parameters and follow the algorithm below, which allows us to ensure the high efficiency and reliability of MS SQL Server.

1. Before taking a server under support, we carefully set the configurations necessary for correct operation.

2. Then the appropriate power plan, Windows Defender and Firewall settings, file system parameters, and other settings are selected.

3. MS SQL Server is then parameterized taking the server’s performance into account.

4. Based on the number of processors, the number of tempdb data files is planned, since tempdb is the most important resource shared by all databases (see the sketch after this list).

5. The size and autogrowth settings of the data and log files of each database are adjusted so that there is no need to resort to file shrinking, which is a suboptimal solution for the DBMS.

6. Important trace flags are configured.

7. Jobs are created that allow both real-time and retrospective assessment of the load and problems encountered on the server.
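As an illustration of step 4, a common heuristic is one tempdb data file per logical processor, up to eight. The two numbers can be compared with standard system views (a sketch):

    -- Compare the tempdb data file count with the number of logical processors
    SELECT
        (SELECT cpu_count FROM sys.dm_os_sys_info) AS logical_cpus,
        (SELECT COUNT(*) FROM tempdb.sys.database_files
         WHERE type_desc = 'ROWS')                 AS tempdb_data_files;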