// Accumulates the next value if it is not null
public void Accumulate(SqlString value)
{
    if (value.IsNull)
        return;
    this.intermediateResult.Append(value.Value).Append(',');
}

// Merges the partially completed aggregates
public void Merge(Concatenate other)
{
    this.intermediateResult.Append(other.intermediateResult);
}

// Called at the end of aggregation
public SqlString Terminate()
{
    string output = string.Empty;
    if (this.intermediateResult != null && this.intermediateResult.Length > 0)
    {
        // Deleting the trailing comma
        output = this.intermediateResult.ToString(0, this.intermediateResult.Length - 1);
    }
    return new SqlString(output);
}

// Deserializing the data
public void Read(BinaryReader r)
{
    this.intermediateResult = new StringBuilder(r.ReadString());
}

// Serializing the data
public void Write(BinaryWriter w)
{
    w.Write(this.intermediateResult.ToString());
}
}
As with user-defined functions, it is extremely important to set the attributes that tell the Query Optimizer about the CLR aggregate's behavior and implementation. This helps generate more efficient execution plans and prevents incorrect results due to optimization. It is also important to specify the MaxByteSize attribute, which defines the maximum size of the aggregate's output. In our case, we set it to -1, which means that the aggregate can hold up to 2 GB of data.
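For reference, a declaration along these lines wires up those attributes and the custom serialization format. This is a sketch; the exact attribute values in the full listing may differ:

```csharp
[Serializable]
[SqlUserDefinedAggregate(
    Format.UserDefined,              // custom serialization via IBinarySerialize
    IsInvariantToNulls = true,       // NULL inputs are skipped in Accumulate
    IsInvariantToDuplicates = false, // duplicate values change the output
    IsInvariantToOrder = false,      // output depends on the order of rows
    MaxByteSize = -1)]               // -1: the result can grow up to 2 GB
public struct Concatenate : IBinarySerialize
{
    private StringBuilder intermediateResult;

    // Called once before aggregation starts
    public void Init()
    {
        this.intermediateResult = new StringBuilder();
    }

    // Accumulate, Merge, Terminate, Read, and Write as shown above
}
```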
Let's compare the performance of two different T-SQL implementations. The first one uses a variable to hold the intermediate result, which implements imperative row-by-row processing under the hood. The second one utilizes the FOR XML PATH technique that we discussed in Chapter 11. The code is shown in Listing 13-17.
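As a reminder of the two patterns, the sketch below illustrates both against a hypothetical dbo.Products table (the actual comparison code is in Listing 13-17):

```sql
-- Approach 1: variable concatenation (imperative row-by-row processing)
DECLARE @Result NVARCHAR(MAX) = '';
SELECT @Result = @Result + Name + ','
FROM dbo.Products;

-- Approach 2: FOR XML PATH concatenation; STUFF removes the leading comma
SELECT STUFF(
    (
        SELECT ',' + Name
        FROM dbo.Products
        FOR XML PATH(''), TYPE
    ).value('.', 'NVARCHAR(MAX)'), 1, 1, '');
```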