Red Gate SQL Backup Pro 7

SQL Backup walkthrough guides: set up your backup and restore schedule

Walkthroughs

Find out how to easily set up a regular backup and restore schedule and get faster, smaller, verified SQL Server backups.

Database backup, restore, and verification

This walkthrough exploring the new features in SQL Backup Pro 7 covers:

  • Using the Schedule Backup Jobs wizard to run database checks and create reminders to verify your backups.
  • Launching the Schedule Restore Jobs wizard from the Reminder tab.
  • Scheduling a restore job to fully verify your backups, even if they do not exist yet.

Schedule a backup job

Step 1: Launch the Schedule Backup Jobs wizard

To schedule a backup of our database we launch the Schedule Backup Jobs wizard from the button on the toolbar.

By choosing the Schedule Backup Jobs wizard (rather than the Back Up wizard) we will have the option to create a reminder to verify our backups.

Step 2: Choose to perform a new backup or use previously saved settings

We now select the server from which we wish to back up the database. We can also choose to use settings from a template that we can customize and configure. In this case, we're going to create a new backup.

Step 3: Choose the database and the type of backup to create

Next, we choose the type of backup we want to create and select the database we want to back up. If full backups have previously been made, we can decide to perform differential or transaction log backups. In this walkthrough, we will create a full database backup.

Step 4: Create a backup schedule

Here, we can create a schedule for our backups. In this walkthrough, we're creating a full backup and we're scheduling it to occur every weekday at 18:00. We could opt for more frequent backups to suit our needs, and can decide whether we want this schedule to run for a finite period, or run indefinitely.

Step 5: Configure backup file settings

In this step, we decide whether to back up to a single file, a single file mirrored to a second location, or split the backup over multiple, smaller files. Splitting the backup across multiple files can reduce the time needed to create it.
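Splitting a backup is also available in native T-SQL by striping it across several files; a minimal sketch (database name and paths are placeholders):

```sql
-- Striping the backup across multiple files lets SQL Server write to
-- all of them in parallel, which can shorten the backup window.
-- Note: ALL of the files are required to restore this backup.
BACKUP DATABASE [PayrollDB]
TO DISK = N'D:\Backups\PayrollDB_1.bak',
   DISK = N'E:\Backups\PayrollDB_2.bak'
WITH STATS = 10;
```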


You can select a single database for backup as I’ve done above, or you can select multiple databases, including system databases. Remember, you should back up the system databases to help recovery in the event of a catastrophic failure of the entire system. If this is an ad hoc backup, you can also make it a COPY_ONLY backup to avoid disrupting any differential backups you’re running.
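In native T-SQL, an ad hoc copy-only backup looks like this (database name and path are placeholders):

```sql
-- COPY_ONLY takes a full backup without resetting the differential
-- base, so any scheduled differential backups continue to work.
BACKUP DATABASE [PayrollDB]
TO DISK = N'D:\Backups\PayrollDB_adhoc.bak'
WITH COPY_ONLY, CHECKSUM, STATS = 10;
```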


Full backup to individual files


Moving to the next step of SQL Backup conveniently brings us to the next point on my checklist: full backup to individual files. While you can put multiple backups inside a single file, there are several issues associated with doing so. SQL Backup Pro can automatically name your files for you so that each backup file is unique, using the date and time of the backup as part of the name. Even better, you can have SQL Backup Pro clean out older backup files for you automatically. This also proves useful if you’re copying the databases off to a tape system or an off-site facility, as you can get rid of the files once they’ve been copied.

Configure backup file settings


If you uncheck the “Name file automatically” box, you can provide a single file name. If you also uncheck “Overwrite existing backup files of the same name”, you will then get stacked backup storage, i.e. multiple backups in one file, but I really don’t recommend it.


As you can see, I’ve also set it up to keep three backup files on the disk, and to run the deletes prior to running the backups. You have a lot of control over this clean-up operation including turning it off completely.
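Under the hood, the wizard generates a call to SQL Backup Pro's extended stored procedure. A sketch of roughly what that looks like (the path is a placeholder; `<AUTO>` is SQL Backup's token for a unique, timestamped file name, and ERASEFILES handles the clean-up of older files, though the exact retention options may differ from the GUI settings shown above):

```sql
-- SQL Backup Pro's <AUTO> token expands to a unique file name built
-- from the database name and the backup date/time; ERASEFILES removes
-- older backup files before the new backup is written.
EXECUTE master..sqlbackup
  '-SQL "BACKUP DATABASE [PayrollDB]
         TO DISK = ''D:\Backups\<AUTO>.sqb''
         WITH ERASEFILES = 3, COMPRESSION = 1"';
```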


SQL Backup Pro offers additional functionality: it can automatically copy your backup files to a network location, as extra protection against the possibility of an outage of the backup location after the backup has been taken. You can also choose to use multiple files on multiple disks, but these settings are a bit beyond the scope of this article; I just want to point them out.


Use CHECKSUM with full backup


Next up is using CHECKSUM when you take your backups, an option that we can handle in Step 5 of the SQL Backup Wizard (Step 4 covers some optional extras, such as compression and encryption).
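In native T-SQL, the same option is expressed as follows (database name and path are placeholders):

```sql
-- WITH CHECKSUM verifies existing page checksums as pages are read and
-- writes a checksum over the whole backup, catching corruption early.
BACKUP DATABASE [PayrollDB]
TO DISK = N'D:\Backups\PayrollDB.bak'
WITH CHECKSUM, STATS = 10;

-- The backup can later be verified without restoring it:
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\PayrollDB.bak'
WITH CHECKSUM;
```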

Thanks,
Rajendar


I am using SQL Server 2012 and I have suddenly started getting error messages while restoring a backup database. Please help with my problem and recommend a tool or utility that will recover my backup database.




Hi There,
I am facing a problem when trying to restore a transaction log.

Suppose we have two DB servers : SERVER-A and SERVER-B
Database is PayrollDB on SERVER-A, which we want to logshipping on SERVER-B.

We have a full backup at 3:45 AM every day for PayrollDB on SERVER-A, and a transaction log backup every 10 minutes throughout the day.

Our last full backup of SERVER-A’s PayrollDB started at 3:45 am and completed at 4:08 am (it took around 23 minutes).

As transaction log backups run every 10 minutes, we have log backups at 3:40 am, 3:50 am, 4:00 am, 4:10 am, 4:20 am, 4:30 am, 12:30 pm, 1:00 pm, and so on for the entire day.

To start log shipping, at 12:30 pm I restored SERVER-A’s PayrollDB with the “Non-operational (RESTORE WITH NORECOVERY)” option, creating the database on SERVER-B with the name “PayrollDB”.
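A minimal T-SQL sketch of the restore sequence for this scenario (file names are placeholders): since the full backup finished at 4:08 am, the first log backup to apply is the 4:10 am one, followed by each subsequent log backup in order, all WITH NORECOVERY so the secondary stays in the restoring state.

```sql
-- Initialize the secondary from the full backup, leaving it non-operational.
RESTORE DATABASE [PayrollDB]
FROM DISK = N'D:\Backups\PayrollDB_full.bak'
WITH NORECOVERY, REPLACE;

-- Apply log backups in order, starting with the first one taken after
-- the full backup completed (4:10 am here). Log backups taken before
-- the full backup finished are not needed.
RESTORE LOG [PayrollDB] FROM DISK = N'D:\Backups\PayrollDB_0410.trn' WITH NORECOVERY;
RESTORE LOG [PayrollDB] FROM DISK = N'D:\Backups\PayrollDB_0420.trn' WITH NORECOVERY;
-- ...continue with each subsequent log backup; the database remains
-- restoring, ready to receive further shipped logs.
```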

During the restore, all existing connections to the database are closed automatically. Red Gate SQL Backup Pro can perform log shipping without pre-initializing the target database. If the user makes a mistake, the program supports stepping back to re-run the operation.

As Figure 2 shows, Acronis Recovery for MS SQL Server provides a straightforward interface for backing up and restoring either some or all of the databases on a SQL Server instance. It supports all versions of SQL Server from SQL Server 7.0 SP4 through SQL Server 2008 R2. You can install Acronis Recovery on hosts running Windows 2000 SP4 or later. You need to deploy an agent to each server, but a single agent can be used to protect multiple instances installed on the same server. Acronis Recovery isn't cluster-aware.

Figure 2: Acronis Recovery management console

With Acronis Recovery, you can:

  • Back up and restore more than one database on an instance simultaneously
  • Automate recovery to the point of failure
  • Perform backups of remote machines using integrated FTP clients
  • Use the Acronis Recovery SDK to create custom solutions that integrate into existing SQL Server applications

3. Click on the General Tools category

4. Activate the Uninstall Programs feature

5. All the applications installed on your PC will appear

6. Scroll the list of applications until you locate SQL Backup 7 or simply activate the Search field and type in "SQL Backup 7". If it exists on your system the SQL Backup 7 application will be found very quickly. Notice that when you click SQL Backup 7 in the list of programs, some information regarding the program is made available to you:

.NET framework requirements

The graphical user interface requires Microsoft .NET Framework version 2.0, 3.0, or 3.5.

Year | Product | Category | Award
2011 | SQL Backup Pro | Best Backup and Recovery Software Product | SQL Server Magazine, Editors' Best, GOLD award [19]
2011 | SQL Backup Pro | Best Backup and Recovery Software Product | SQL Server Magazine, Community Choice, GOLD award [19]
2011 | SQL Monitor | Best Database Monitoring and Performance Product | SQL Server Magazine, Editors' Best, BRONZE award [19]
2011 | SQL Monitor | Best Database Monitoring and Performance Product | SQL Server Magazine, Community Choice, SILVER award [19]
2011 | SQL Developer Bundle | Best Database Development Tool | SQL Server Magazine, Editors' Best, GOLD award [19]
2011 | SQL Toolbelt | Best Database Development Tool | SQL Server Magazine, Community Choice, SILVER award [19]
2011 | SQL Toolbelt | Best Database Management Product | SQL Server Magazine, Community Choice, GOLD award [19]
2011 | | Best Vendor Tech Support | SQL Server Magazine, BRONZE award [19]
2012 | SQL Backup Pro | Best Backup and Recovery Product | SQL Server Pro, Community Choice, GOLD award [20]
2012 | SQL Monitor | Best Database Monitoring and Performance Product | SQL Server Pro, Community Choice, SILVER award [20]
2012 | SQL Developer Bundle | Best Database Development Product | SQL Server Pro, Editors' Best, GOLD award [20]
2012 | SQL Developer Bundle | Best Database Development Product | SQL Server Pro, Community Choice, SILVER award [20]
2012 | SQL Toolbelt | Best Database Management Product | SQL Server Pro, Editors' Best, GOLD award [20]
2012 | SQL Toolbelt | Best Database Management Product | SQL Server Pro, Community Choice, GOLD award [20]
2012 | SQL Scripts Manager | Best Free Tool | SQL Server Pro, Editors' Best, GOLD award [20]
2012 | | Best Vendor Tech Support | SQL Server Pro, BRONZE award [20]
2013 | SQL Backup Pro | Best Backup & Recovery Product | SQL Server Pro, GOLD award [21]
2013 | SQL Developer Bundle | Best Database Development Tool | SQL Server Pro, GOLD award [21]
2013 | SQL Toolbelt | Best Database Management Suite | SQL Server Pro, BRONZE award [21]
2015 | SQL Source Control | Databases and Data Development and Modeling | Visual Studio Magazine, Reader’s Choice, GOLD award [22]
2015 | ANTS Performance Profiler | Performance, Profiling, and Debugging Tools | Visual Studio Magazine, Reader’s Choice, SILVER award [22]

You can get full details in our PDF, Moving to SQL Backup Pro.

In SQL Server 2000, BACKUP LOG WITH TRUNCATE_ONLY was a supported way of forcing SQL Server to truncate the transaction log, while the database was operating in the FULL or BULK_LOGGED recovery model, without actually making a backup copy of the contents of the log; the records in the truncated VLFs are simply discarded. So, unlike with a normal log backup, you're destroying your LSN chain and will only be able to restore to a point in time covered by a previous log backup file. Also, even though the database is set to FULL (or BULK_LOGGED) recovery, it will actually, from that point on, operate in an auto-truncate mode, continuing to truncate inactive VLFs on checkpoint. In order to get the database operating in FULL recovery again, and restart the LSN chain, you'd need to perform a full (or differential) backup.

This command was often used without people realizing the implications it had for disaster recovery, and it was deprecated in SQL Server 2005 and removed from SQL Server 2008. Unfortunately, an even more insidious variation of this technique, which continues to be supported, has crept up to take its place, and that is BACKUP LOG TO DISK='NUL', where NUL is a "virtual file" that discards any data that is written to it. The really nasty twist to this technique is that, unlike with BACKUP LOG WITH TRUNCATE_ONLY, SQL Server is unaware that the log records have simply been discarded. As far as SQL Server is concerned, a log backup has been performed, the log records are safely stored in a backup file so the LSN chain is intact, and any inactive VLFs in the live log can safely be truncated. Any subsequent, conventional log backups will succeed but will be entirely useless from the point of view of disaster recovery since a log backup file is "missing" and so the database can only be restored to some point in time covered by the last standard log backup that was taken before BACKUP LOG TO DISK='NUL' was issued.

Do not use either of these techniques. The right way to "force" log truncation is to temporarily switch the database into the SIMPLE recovery model, as discussed earlier.
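A T-SQL sketch of that recommended approach (the logical log file name and sizes are assumptions for illustration):

```sql
-- Switching to SIMPLE lets the log truncate safely on checkpoint.
ALTER DATABASE [PayrollDB] SET RECOVERY SIMPLE;

-- Shrink the now mostly empty log file, then size it appropriately
-- so it does not have to auto-grow under normal load.
DBCC SHRINKFILE (N'PayrollDB_log', 1);
ALTER DATABASE [PayrollDB]
MODIFY FILE (NAME = N'PayrollDB_log', SIZE = 8GB);

-- Switch back to FULL and take a full backup to restart the LSN chain;
-- log backups taken before this backup cannot be applied after it.
ALTER DATABASE [PayrollDB] SET RECOVERY FULL;
BACKUP DATABASE [PayrollDB] TO DISK = N'D:\Backups\PayrollDB.bak';
```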

Scheduled shrinking of the transaction log

As discussed, there are rare circumstances, where transaction log growth has occurred due to a lack of management and where the log growth is now being actively managed, in which using DBCC SHRINKFILE to reclaim the space used by the transaction log file is an acceptable operation.

However, the transaction log should never be shrunk using DBCC SHRINKFILE, or a database maintenance plan step to shrink the database, as part of normal, scheduled maintenance operations. The reason for this is that every time you shrink the log, it will need to grow again immediately to store log records for subsequent transactions. If auto-growth is being relied upon solely for transaction log growth (see the next section for a fuller discussion), excessive VLFs can accumulate in the log file and this log fragmentation will impact the performance of any process that needs to read the log file and, if fragmentation gets really bad, possibly even the performance of data modifications. Also, since the transaction log cannot take advantage of instant file initialization, all log growths incur the cost to zero-byte the storage space being allocated.

The best practice for the transaction log file continues to be to size it appropriately up front so it does not have to grow under normal operations, and then to monitor its usage periodically to determine if the need to grow it manually occurs, allowing you to determine the appropriate growth size and determine the number and size of VLFs that will be added to the log file.

Proper Log Management

In the absence of any unexpected operations or problems that have resulted in unusual log growth (uncommitted transactions, replication problems and so on), if the transaction log associated with a FULL recovery model database fills up, and is forced to grow, there are really only two causes:

  • the size of the log file was too small to support the volume of data changes that were occurring in the database.
  • the frequency of log backups was insufficient to allow rapid reuse of space within the log file.

The best thing to do, if you can't increase the frequency of the log backups by decreasing the amount of time between log backups, is to manually grow the log file to a size that prevents it from having to grow using auto-growth when under load, and then leave the log file that size. Having a large transaction log file that has been properly grown to minimize the number of VLFs is not a bad thing, even if the log file has free space a majority of the time.
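One way to monitor log usage and judge an appropriate size is DBCC SQLPERF, optionally alongside the log-space DMV (database name in the example is a placeholder):

```sql
-- Reports log file size and percentage used for every database;
-- capturing this periodically between log backups shows how much log
-- space the normal workload actually needs.
DBCC SQLPERF (LOGSPACE);

-- On SQL Server 2012 and later, the same information is available for
-- the current database from a dynamic management view:
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS used_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```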

When configuring the initial size of the transaction log for a database, it is important to take into account the size of the largest table in the database, and whether or not index maintenance operations will be performed. As a rule of thumb, the transaction log should be sized to 1.5 times the size of the largest index or table in the database, to allow for logging requirements to rebuild the index under FULL recovery.

In addition to the initial sizing requirements, it is important to monitor and adjust the size of the transaction log periodically to fit the size of the database as it grows. There are a couple of problems with the auto-growth settings that a database will inherit from model, which is currently to grow in steps of 10% of the current transaction log size:

  • initially, when the log file is small, the incremental growth will be small, resulting in the creation of a large number of small VLFs in the log, causing the fragmentation issues discussed earlier.
  • when the log file is very large, the growth increments will be correspondingly large; since the transaction log has to be zeroed out during initialization, large growth events can take time, and if the log can't be grown fast enough this can result in 9002 (transaction log full) errors and even in the auto-growth timing out and being rolled back.

The auto-growth settings should be configured explicitly to a fixed size that allows the log file to grow quickly, if necessary, while also minimizing the number of VLFs being added to the log file for each growth event.

To minimize the chances of a timeout occurring during normal transaction log growth, you should measure how long it takes to grow the transaction log by a variety of set sizes while the database is operating under normal workload, and based on the current IO subsystem configuration. In cases where the necessary zero-initialization performance characteristics are not known for a database, I recommend, as a general rule, a fixed auto-growth size of 512 MB.
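Setting a fixed auto-growth increment such as the 512 MB suggested above is a one-line change; the logical log file name here is an assumption:

```sql
-- Replace the inherited 10% auto-growth with a fixed 512 MB increment,
-- keeping each growth event fast and the number of VLFs added per
-- growth predictable.
ALTER DATABASE [PayrollDB]
MODIFY FILE (NAME = N'PayrollDB_log', FILEGROWTH = 512MB);
```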

Ultimately, though, remember that auto-growth should be configured as a security net only, allowing the system to automatically grow the log file when you are unable to respond manually. Auto-growth is not a replacement for appropriate monitoring and management of the transaction log file size.
