
Who benefits from partial records: The true magic of SetLoadFields

Server-side perspective


One of the key characteristics of the C/AL language and its modern successor, AL, that makes it so attractive for developers is its simplicity, which makes the journey into the application development world much more comfortable even for people coming from a different background. The language is well tailored for its domain and has at hand all the features required to solve business problems effectively. But the simplicity of a programming language is a double-edged sword - it comes at the cost of flexibility and, often, the performance of Business Central application code. Recent additions to the language have been changing this trend: AL developers have enriched their toolbox with events and interfaces, announcements promise namespaces, and likely even more is yet to come.


Today, I want to have a closer look at one of those relatively recent additions - partial records.

Complete opacity of the communication between the application and the database server is one of those trade-offs between simplicity on the one hand and fine-tuned performance on the other (assuming the developers doing the optimisation know their trade). It is quite convenient for the application developer not to worry about building a proper database query - just call the Find or Get function, and the complex machinery of the AL compiler and the Business Central runtime will do the job. This convenience comes at a cost, though. Until recently, NAV/BC developers could not even control the list of table fields being loaded into the client application - everything stored in data pages would be queried. Since its introduction, SetLoadFields has quickly become all but mandatory: keeping the set of loaded fields to the necessary minimum is now considered a basic code sanity check.

What remains a mystery, though, is the exact effect of the partial record load on query execution, client/server communication, and overall code performance. Whilst not challenging partial records as a general coding pattern, I want to run some tests to understand when it is most beneficial.

OK, done with the introduction. The rest of the post will be a lot of dry, boring statistics. Seriously. I warned you.


Test setup


All of the following demos run queries against the table "Detailed Cust. Ledg. Entry", which holds one million records (or 1 000 087 to be precise - the 87 come from the demo data). Records are generated randomly, maintaining an even statistical distribution for the main meaningful fields:

  • Customer No.: Selected randomly from all customer records;

  • Posting Date: Random date within a 3-month range (WorkDate +/- 45 days);

  • Document Type: 80% of the records have the document type "Initial Entry", the rest are distributed among other entry types;

  • Amount: Random integer from 1 to 100 000.

Business Central and SQL Server both run in a local Docker container. SET STATISTICS TIME and SET STATISTICS IO execution options are enabled on SQL Server, so I can observe query stats on each execution.
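To reproduce the measurements, these are the two session options I am referring to - run them in the SSMS query window before executing the test queries.

-- Enable per-statement I/O and timing statistics for the current SSMS session.
-- Every subsequent statement prints its page reads and CPU/elapsed times to the Messages tab.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;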

I am not running truly comprehensive tests with statistically reliable data here, but I do repeat every run twice: first collecting server stats on a cold data cache, then repeating the same query after the warm-up.
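A quick note on how I get the cold cache: I simply restart the container before the cold runs. If you prefer not to restart anything, a common alternative on a disposable test instance (not what I did here, just an option) is to flush the buffer pool manually:

-- Write dirty pages to disk, then drop all clean pages from the buffer pool,
-- so the next query starts from a cold data cache. Only do this on a throwaway test server.
CHECKPOINT;
DBCC DROPCLEANBUFFERS;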

All queries are executed from SQL Server Management Studio because it gives access to the necessary performance data.


Example 1: Select records without SetLoadFields


In most of the examples, I try to imitate typical queries against the detailed customer ledger. This is the first test, where I run a simple FindSet without limiting the loaded fields, but filtering the table on "Customer No.", "Posting Date", and "Entry Type".

local procedure SelectDetCustLedgEntryAllFields()
var
    DetCustLedgEntry: Record "Detailed Cust. Ledg. Entry";
begin
    DetCustLedgEntry.SetRange("Customer No.", '10000');
    DetCustLedgEntry.SetRange(
        "Posting Date",
        CalcDate('<-CM>', WorkDate()), CalcDate('<CM>', WorkDate()));
    DetCustLedgEntry.SetRange(
        "Entry Type", DetCustLedgEntry."Entry Type"::Application);
    DetCustLedgEntry.FindSet();
end;

This piece of code generates the SQL query below.

SELECT
    -- All the table fields are listed here
FROM "CRONUS".dbo."SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972" "379"
    WITH(READUNCOMMITTED)  
WHERE (
    "379"."Entry Type"=2 AND
    "379"."Posting Date">='2025-01-01 00:00:00' AND
    "379"."Posting Date"<='2025-01-31 00:00:00' AND
    "379"."Customer No_"='10000') 
ORDER BY "Entry No_" ASC OPTION(FAST 50)

Before running the query and checking the statistics, let's have a look at the execution plan that SQL Server suggests for the FindSet.


The query optimizer suggests running a clustered index scan and warns us about a missing index that could significantly improve performance. In fact, this suggestion is not very helpful, because it requires creating a non-clustered index whose list of included fields covers each and every table field (since all of them are selected by the query).

Well, keeping an index which includes all table fields does not sound like the best approach to performance optimisation, so let's just keep it in mind and execute the query.
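For the curious, this is roughly what the optimizer's suggestion boils down to - a sketch, not an index I actually created: the index name is made up, and the real suggestion spells out every remaining table column in the INCLUDE list.

-- Equality columns first, then the range column, as in the missing-index hint.
-- The INCLUDE list would have to contain every other column of the table to cover the query.
CREATE NONCLUSTERED INDEX IX_DCLE_MissingIndexSuggestion
ON dbo."SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972"
    ("Customer No_", "Entry Type", "Posting Date")
INCLUDE ("Document Type", "Document No_", "Amount" /* ...plus every remaining column... */);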


SQL Server selects 83 rows and yields the following statistics output. Note that I removed all stats after the number of read-ahead reads because all numbers are 0.

(83 rows affected)
Table 'SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972'.
Scan count 1, logical reads 98797, physical reads 3,
page server reads 0, read-ahead reads 98793,...

 SQL Server Execution Times:
   CPU time = 421 ms,  elapsed time = 1681 ms.

SQL Server has to read 98796 pages from disk. The overwhelming majority of these pages are retrieved using the read-ahead optimisation, thanks to the very simple structure of the execution plan, but nonetheless, this query is quite heavy on disk I/O. The overall execution time is 1681 ms.


The same query repeated with a warm cache takes 128 ms overall, with 125 ms of CPU time. I will not paste server stats from every run, to save space, but you can find a table with the consolidated data at the end of the article.


A necessary side note


At first glance, a clustered index scan on a table with a million records does not look like the best strategy, especially knowing that there is an index that should theoretically improve the query performance. Why does the query optimizer choose this plan?

This happens because of the FAST 50 hint, which requires SQL Server to return the first 50 records to the client as fast as possible while the server process continues searching for the remaining rows.

If I delete this hint without any other changes to the query, the plan changes significantly.
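For reference, this is the statement I mean - the same generated query with only the OPTION clause removed. I abbreviated the column list with a *, whereas Business Central always lists the columns explicitly.

SELECT "379".*  -- the generated query lists every column explicitly
FROM "CRONUS".dbo."SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972" "379"
    WITH(READUNCOMMITTED)
WHERE (
    "379"."Entry Type"=2 AND
    "379"."Posting Date">='2025-01-01 00:00:00' AND
    "379"."Posting Date"<='2025-01-31 00:00:00' AND
    "379"."Customer No_"='10000')
ORDER BY "Entry No_" ASC  -- OPTION(FAST 50) removed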

The optimizer is still complaining about a missing index, but the execution stats look dramatically different now (this is the result with a cold cache, after restarting the container).

(83 rows affected)
Table 'SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972'.
Scan count 1, logical reads 502, physical reads 3, page server reads 0,
read-ahead reads 144, page server read-ahead reads 0,...

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 4 ms.

Just 147 physical reads now (as opposed to 98796), and 4 ms. I will write about this query hint in later posts; for now, all we can do is take note of the difference, since we don't have much control over this hint anyway (the flip side of simplicity).


Example 2: Select records with SetLoadFields


In the second test, I am running almost the same query - the same FindSet on the Detailed Cust. Ledg. Entry with the same set of filters, but now with SetLoadFields limiting the set of fields to just two: Document Type and Document No.


local procedure SelectDetCustLedgEntryBaseAppFields()
var
    DetCustLedgEntry: Record "Detailed Cust. Ledg. Entry";
begin
    DetCustLedgEntry.SetRange("Customer No.", '10000');
    DetCustLedgEntry.SetRange(
        "Posting Date",
        CalcDate('<-CM>', WorkDate()), CalcDate('<CM>', WorkDate()));
    DetCustLedgEntry.SetRange(
        "Entry Type", DetCustLedgEntry."Entry Type"::Application);
    DetCustLedgEntry.SetLoadFields("Document Type", "Document No.");
    DetCustLedgEntry.FindSet();
end;

The resulting query is not much different from the previous one. The only difference, really, is that the list of selected fields is now short enough to quote in full.

SELECT 
	"379"."timestamp",
	"379"."Entry No_",
	"379"."Entry Type",
	"379"."Posting Date",
	"379"."Document Type",
	"379"."Document No_",
	"379"."Customer No_",
	"379"."$systemId",
	"379"."$systemCreatedAt",
	"379"."$systemCreatedBy",
	"379"."$systemModifiedAt",
	"379"."$systemModifiedBy" 
FROM "CRONUS".dbo."SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972" "379"
	WITH(READUNCOMMITTED)  
WHERE (
	"379"."Entry Type"=2 AND
	"379"."Posting Date">='2025-01-01 00:00:00' AND
	"379"."Posting Date"<='2025-01-31 00:00:00' AND
	"379"."Customer No_"='10000') 
ORDER BY "Entry No_" ASC OPTION(FAST 50)

As you can see, the list of query columns is not limited to the two fields specified in SetLoadFields. The filtered fields Customer No., Posting Date, and Entry Type are implicitly included as well, plus the timestamp, the primary key, and all the $system* fields, which are always loaded.


But being able to copy and paste the query text from SSMS into the blog post editor is not the main goal of this exercise. Let's run the query and see the stats.

The server stats for this query are exactly the same in terms of the number of reads. The execution time is 1697 ms - it fluctuates slightly, of course, but does not show any significant change.


So this is the first intermediate result on the server side: limiting the number of fields in the query does not have any significant impact on the execution time if the table does not have a covering index and the server keeps running clustered index scans.


Modified test setup: Detailed Cust. Ledg. Entry is extended


For the next set of tests, I am introducing a change to the setup to spice it up - I extend the Detailed Cust. Ledg. Entry table, adding two custom fields.

tableextension 50100 "Det. Cust. Ledg. Entry" extends "Detailed Cust. Ledg. Entry"
{
    fields
    {
        field(50000; "My Amount"; Decimal) { }
        field(50001; "My Code"; Code[20]) { }
    }
}

Similarly to the initial dataset, I filled the two new fields with random data and ran all of the following tests on the table with this appendage.


Example 3: Select full records from the extended table


This is a very simple one again - just repeating Example 1, but remembering that the server now has to deal with the extended table. The query has a different structure, since it has to join two tables.

A note for readers new to the Business Central data structure: when you declare a table extension object in the development environment, behind the scenes this translates into a new companion table with the same primary key as the table being extended. So each query on this (logical) table now has to join two (physical) tables - see the sketch below, followed by the generated query.
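For illustration only, this is roughly what that companion table looks like for the extension above. The real table is created and managed by the platform; the constraint name is made up, and the exact column definitions are my assumption.

-- Simplified sketch of the companion table created for the table extension.
CREATE TABLE dbo."SLFTest$Detailed Cust_ Ledg_ Entry$1554f031-75cc-44eb-933a-4c284c07b96b"
(
    "Entry No_"  INT NOT NULL,              -- same primary key as the base table
    "My Amount"  DECIMAL(38, 20) NOT NULL,  -- custom fields from the table extension
    "My Code"    NVARCHAR(20) NOT NULL,
    CONSTRAINT "PK_DCLE_Extension" PRIMARY KEY CLUSTERED ("Entry No_")
);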

SELECT
   -- All the table fields are here
FROM "CRONUS".dbo."SLFTest$Detailed Cust_ Ledg_ Entry" "379" 
    WITH(READUNCOMMITTED) 
JOIN "CRONUS".dbo."SLFTest$Detailed Cust_ Ledg_ Entry" "379_e1"  
    WITH(READUNCOMMITTED)
ON ("379"."Entry No_" = "379_e1"."Entry No_")
WHERE (
    "379"."Entry Type"=2 AND
    "379"."Posting Date">='2025-01-01 00:00:00' AND
    "379"."Posting Date"<='2025-01-31 00:00:00' AND
    "379"."Customer No_"='10000')
ORDER BY "Entry No_" ASC OPTION(FAST 50)

The query execution plan changes accordingly. Now, besides the clustered index scan, SQL Server must perform a clustered index seek on the extension table and join both result sets.



Now let's see the server statistics. Cold cache.

(83 rows affected)
Table 'SLFTest$Detailed Cust_ Ledg_ Entry$1554f031-75cc-44eb-933a-4c284c07b96b'.
Scan count 0, logical reads 459, physical reads 1,
page server reads 0, read-ahead reads 121,...

Table 'SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972'.
Scan count 1, logical reads 98797, physical reads 3,
page server reads 0, read-ahead reads 98793,

 SQL Server Execution Times:
   CPU time = 391 ms,  elapsed time = 1693 ms.

And the warm cache:

(83 rows affected)
Table 'SLFTest$Detailed Cust_ Ledg_ Entry$1554f031-75cc-44eb-933a-4c284c07b96b'. 
Scan count 0, logical reads 284, physical reads 0,
page server reads 0, read-ahead reads 0,...

Table 'SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972'.
Scan count 1, logical reads 98797, physical reads 0,
page server reads 0, read-ahead reads 0,...

 SQL Server Execution Times:
   CPU time = 125 ms,  elapsed time = 126 ms.

Obviously, the results now include a second stream of data for the extension table. This adds 122 physical page reads on the cold cache, but these 122 pages are barely 0.1% of the data volume that must be ploughed through by the clustered index scan on the base table. After a warm-up, even this small difference completely dissolves.


Example 4: Select records including only extension fields


Let's make our spiced-up example even spicier and start applying SetLoadFields to the extended table. Test 4 runs the same FindSet, but limits the column set to the custom fields from the extension.

local procedure SelectDetCustLedgEntryExtensionFields()
var
    DetCustLedgEntry: Record "Detailed Cust. Ledg. Entry";
begin
    DetCustLedgEntry.SetRange("Customer No.", '10000');
    DetCustLedgEntry.SetRange(
        "Posting Date",
        CalcDate('<-CM>', WorkDate()), CalcDate('<CM>', WorkDate()));
    DetCustLedgEntry.SetRange(
        "Entry Type", DetCustLedgEntry."Entry Type"::Application);
    DetCustLedgEntry.SetLoadFields("My Amount", "My Code");
    DetCustLedgEntry.FindSet();
end;

The query executed this time does not change much. Although the SetLoadFields instruction limits the list of fields to those from the custom extension, Base Application fields are still included.

SELECT 
	"379"."timestamp",
	"379"."Entry No_",
	"379"."Entry Type",
	"379"."Posting Date",
	"379"."Customer No_",
	"379_e1"."My Amount",
	"379_e1"."My Code",
	"379"."$systemId",
	"379"."$systemCreatedAt",
	"379"."$systemCreatedBy",
	"379"."$systemModifiedAt",
	"379"."$systemModifiedBy"
FROM "CRONUS".dbo."SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972" "379"
	WITH(READUNCOMMITTED)  
JOIN "CRONUS".dbo."SLFTest$Detailed Cust_ Ledg_ Entry$1554f031-75cc-44eb-933a-4c284c07b96b" "379_e1" 
	WITH(READUNCOMMITTED)
	ON ("379"."Entry No_" = "379_e1"."Entry No_") 
WHERE (
	"379"."Entry Type"=2 AND
	"379"."Posting Date">='2025-01-01 00:00:00' AND 
	"379"."Posting Date"<='2025-01-31 00:00:00' AND
	"379"."Customer No_"='10000') 
ORDER BY "Entry No_" ASC OPTION(FAST 50)

This is quite in line with the documentation, which explains that certain fields of the table are always included, even if not specified explicitly in the SetLoadFields instruction: the primary key fields, the system fields, and any fields that have filters applied.


Execution times are

  • CPU time = 344 ms, elapsed time = 1694 ms (cold cache)

  • CPU time = 109 ms, elapsed time = 128 ms (warm cache)


Example 5: Select records including only BaseApp fields


This is a variation of the previous exercise, only swapping the application that provides the fields. Here, I exclude all extension fields and load only fields from the Base Application. So, essentially, this means running the same query as in Example 2.


I will skip both the AL source code and the SQL query generated from it, because both are exactly the same as the respective code in Example 2, referenced above.

Execution times are not much different either, but by now this does not come as a surprise - all of the execution stats have been fluctuating around the same values.


What matters in the results of this test, though, is that the query eliminated the join on the extension table. Although this did not affect the execution time, it will have an impact on the following tests.



Example 6: Large dataset retrieved from the server


In the last part of this (as promised, long and boring) review, I include three tests, very similar to those executed previously:

  • FindSet with the full record

  • FindSet with the partial record, including only BaseApp fields

  • FindSet with the partial record, including only extension fields


One thing I changed for this set of tests is the filtering. For all of the following cases, I remove the filter on the entry type, thus expanding the resulting record set from 83 records to 5097 selected rows. I will try not to overload the reader too much and will give only one example of the server statistics before moving to the final table collecting all the results.


This is one of the results in this test set (reading the full record, cold cache).

(5097 rows affected)
Table 'SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972'.
Scan count 1, logical reads 98797, physical reads 3,
page server reads 0, read-ahead reads 98793,...

Table 'SLFTest$Detailed Cust_ Ledg_ Entry$1554f031-75cc-44eb-933a-4c284c07b96b'.
Scan count 1, logical reads 18831, physical reads 3,
page server reads 0, read-ahead reads 18832,...

 SQL Server Execution Times:
   CPU time = 563 ms,  elapsed time = 2053 ms.

Now the execution time grows quite noticeably because of the much higher number of both logical and physical reads on the extension table. This is what we would expect in this case: since the number of rows to be retrieved grows by nearly two orders of magnitude, SQL Server has to read additional data pages for the extension table and run the join operation on the additional rows.

These numbers remain more or less the same when SetLoadFields is applied to read the extension fields: since the filters on the DetCustLedgEntry variable are applied to BaseApp fields, the base application table must still be scanned. But the picture changes significantly when the partial record is limited to BaseApp fields - the join disappears from the query, and all the stats drop back to the levels of a table without an extension.


Test results for the detailed customer ledger entries filtered on 3 fields:

  • Customer No.

  • Posting Date

  • Entry Type

Each query selects 83 rows.


Test results for the detailed customer ledger entries filtered on 2 fields:

  • Customer No.

  • Posting Date

Each query selects 5097 rows.


Conclusion (intermediate, though)


Using partial records and limiting the number of loaded fields is undoubtedly a valuable pattern and should be used. But don't expect it to be a silver bullet that immediately solves performance issues. In most cases, SQL Server still has to read the same data pages - with or without SetLoadFields - and from the server's perspective it makes little difference.

One case where the code clearly benefits from a partial record is the Base/System Application (or, generally, any application whose tables can be extended). Limiting the loaded fields can protect the application from unnecessary joins, and their concomitant performance degradation, when the app's tables are extended.


And this is just one example, not the only one. Another, maybe even more interesting, is the combination of partial records with covering indexes.
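Without stealing the thunder of a future post, here is a hedged sketch of that idea: an index tailored to the filters and loaded fields of Example 2. To fully cover the generated query, the INCLUDE list also needs the system columns that a partial record always selects. The index name is made up, and in a real extension you would normally define this as a table key with the IncludedFields property rather than creating it directly on SQL Server.

-- Hypothetical covering index for the partial record in Example 2 (not tested in this post).
CREATE NONCLUSTERED INDEX IX_DCLE_CoveringForExample2
ON dbo."SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972"
    ("Customer No_", "Entry Type", "Posting Date")
INCLUDE ("Document Type", "Document No_", "timestamp", "$systemId",
         "$systemCreatedAt", "$systemCreatedBy", "$systemModifiedAt", "$systemModifiedBy");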

And the server-side perspective is not the only one. There is the client as well. Let's talk about it next time.

Stay tuned.
