This template will allow you to save the trace you just created as a T-SQL file. Once you have the T-SQL, you can
configure it to run on any server that you like. The file path will have to be replaced, and you can reset the stop time
through parameters within the script. The following script shows the beginning of the T-SQL process used to set up
the server-side trace events:
/****************************************************/
/* Created by: SQL Server 2014 Profiler */
/* Date: 06/06/2014 02:58:35 PM */
/****************************************************/
-- Create a Queue
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
declare @DateTime datetime
set @DateTime = '2014-06-06 16:00:20.000'
set @maxfilesize = 50
-- Please replace the text InsertFileNameHere, with an appropriate
-- filename prefixed by a path, e.g., c:\MyFolder\MyTrace. The .trc extension
-- will be appended to the filename automatically. If you are writing from
-- remote server to local drive, please use UNC path and make sure server has
-- write access to your network share
exec @rc = sp_trace_create @TraceID output, 0, N'InsertFileNameHere', @maxfilesize, @DateTime
if (@rc != 0) goto error
You can edit the path where it says InsertFileNameHere and provide a different value for @DateTime. At this point,
your script can be run on any SQL Server 2014 server.
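The generated script goes on to register the events and columns to capture and then starts the trace. The exact list depends on the template you chose in Profiler; as a hedged sketch (the event and column IDs shown are illustrative: event 10 is RPC:Completed, event 12 is SQL:BatchCompleted, column 1 is TextData, and column 13 is Duration), the remainder of the process looks roughly like this:

```sql
-- Register events and columns to capture (a sketch; a real Profiler
-- export emits one sp_trace_setevent call per event/column pair)
declare @on bit
set @on = 1

-- Event 10 = RPC:Completed, event 12 = SQL:BatchCompleted
exec sp_trace_setevent @TraceID, 10, 1, @on   -- TextData
exec sp_trace_setevent @TraceID, 10, 13, @on  -- Duration
exec sp_trace_setevent @TraceID, 12, 1, @on   -- TextData
exec sp_trace_setevent @TraceID, 12, 13, @on  -- Duration

-- Set the trace status to start
exec sp_trace_setstatus @TraceID, 1

-- Display the trace ID for future reference
select TraceID = @TraceID
goto finish

error:
select ErrorCode = @rc

finish:
go
```

Note the trace ID that the script returns; you'll need it to stop or close the trace later.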
How much information you collect really depends on the kind of test you want to run. For a standard
performance test, it's a good idea to collect at least one hour's worth of data; in most circumstances, though, you
won't want to capture more than two to three hours. Capturing more than that means managing much more data,
and it implies you plan to run your tests for a long time. Also, it can't be emphasized enough: trace events are not
as lightweight as extended events, so the longer you capture data, the more you negatively impact your
production server.
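If you need to end the capture before the stop time you set, or you created the trace without one, you can stop and close it manually. A minimal sketch, assuming the trace ID returned at creation was 2:

```sql
-- List active server-side traces to confirm the ID and file path
select id, path, start_time, stop_time from sys.traces

-- Replace 2 with the trace ID returned when the trace was created
exec sp_trace_setstatus 2, 0  -- status 0 = stop the trace
exec sp_trace_setstatus 2, 2  -- status 2 = close it and delete its definition
```

Closing the trace (status 2) releases the .trc file so you can copy it off for the replay.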
Before you capture the data, you do need to think about where you're going to run your tests. Let's assume you're
not worried about disk space and that you don't need to protect legally audited data (if you have those issues, you'll
need to address them separately). If your database is not in Full Recovery mode, then you can't use the log backups
to restore it to a point in time. If this is the case, I strongly recommend running a database backup as part of starting
the trace data collection. The reason for this is that you need the database to be in the same condition it was in when
you started recording transactions. If it's not, you may get a large number of errors, which could seriously change the
way your performance tests run. For example, attempting to select or modify data that doesn't exist will affect the I/O
and CPU measured in your tests. If your database remains in the state it was in at or near the beginning of your
trace, then you should see few, if any, errors.
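For example, taking a full backup immediately before starting the trace gives you a restore point that matches the start of the captured workload. The database name and file path below are placeholders:

```sql
-- Full backup taken just before the trace starts, so the test
-- server can be restored to the state the workload assumes.
-- Database name and backup path are placeholders.
BACKUP DATABASE AdventureWorks2014
TO DISK = N'C:\Backups\AdventureWorks2014_PreTrace.bak'
WITH INIT, STATS = 10;

-- On the test server, restore that backup before replaying:
-- RESTORE DATABASE AdventureWorks2014
-- FROM DISK = N'C:\Backups\AdventureWorks2014_PreTrace.bak'
-- WITH REPLACE, RECOVERY;
```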
With a copy of the database ready to go and a set of trace data, you're ready to run the Distributed Replay tool.
 