A QA engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 99999999999 beers. Orders a lizard. Orders -1 beers. Orders a ueicbksjdhd.
Countless variations of this joke perfectly illustrate the nature of a negative test.
In my previous posts on software testing, I followed the same pattern in structuring the tests: configure the input values, execute the method under test, compare the output with the expected result. If the result matches the expectation, the test succeeds; otherwise it fails.
This process flow describes a positive test, where all data inputs are valid, consistent, and sufficient to perform the action being validated. But we all know very well that errors happen, and the ability of an application to handle errors gracefully is an important part of its overall quality. What happens when your application cannot write a file because the user lacks access permissions for the specified folder? Does the application explain the access issue in detail, or does it throw a generic I/O exception? If a purchase order cannot be posted because the related warehouse put-away must be allocated first, does the application explain the issue and give enough details to fix it, or does it yield the notorious "Nothing to post" error?
A well-designed application captures errors, gives the user sufficient feedback to understand and fix the error, and does not leave behind any waste, like open temporary files or inconsistent transactions committed in the middle of a posting procedure. The more complex the software becomes, the more ways there are for something to go wrong, and this is where we need negative tests to ensure that the application treats erroneous situations properly. Generally speaking, a negative test follows the same "Setup - Execute - Verify" pattern, although the intent of the Setup and Verify steps changes compared to a positive flow. Now the setup step must prepare the test configuration in a way that causes the execution to fail with an expected error, while the verification part catches the error to ensure that the error message is meaningful enough and all the post-error housekeeping work is done properly.
So what exactly do we need to do to make a negative test work and add value to the test coverage? Of course, before delving into coding, we must understand the test scenario: the inputs that influence the result and the expected outcome of the test. Test scenario first - this is rule number one, which applies to any kind of testing, and this understanding gives the key to all the test steps to be performed.
Scenario of a negative test
For a more specific example to illustrate the concept, let's have a look at a posting error in Business Central which everyone is familiar with - a missing vendor invoice number in a purchase invoice. The user carefully fills in all vendor details, enters order details in the lines, but forgets about one field which is crucial for posting - and of course we want the system to notify them where the error is. And we want to test the erroneous scenario to verify that the correct error message is delivered. Following rule No. 1, which was defined a few lines above, I will start by describing the scenario for this test.
// [SCENARIO] Purchase order cannot be invoiced if the vendor invoice no. is blank
// [GIVEN] Create a purchase order
// [GIVEN] Set the "Vendor Invoice No." to blank on the purchase order
// [WHEN] Post the order as received and invoiced
// [THEN] Error is returned: You need to enter the document number
Now let's put this scenario into action and fill it with executable code.
[Test]
procedure ErrorMessageOnMissingVendorInvoiceNo()
var
    PurchaseHeader: Record "Purchase Header";
    DocumentNumberNeededErr: Label 'You need to enter the document number of the document from the vendor';
begin
    // [SCENARIO] Purchase order cannot be invoiced if the vendor invoice no. is blank
    // [GIVEN] Create a purchase order
    LibraryPurchase.CreatePurchaseOrder(PurchaseHeader);
    // [GIVEN] Set the "Vendor Invoice No." to blank on the purchase order
    PurchaseHeader.Validate("Vendor Invoice No.", '');
    PurchaseHeader.Modify();
    // [WHEN] Post the order as received and invoiced
    asserterror LibraryPurchase.PostPurchaseDocument(PurchaseHeader, true, true);
    // [THEN] Error is returned: You need to enter the document number
    Assert.ExpectedError(DocumentNumberNeededErr);
end;
In this example, we can see two AL language structures very common in negative tests and often appearing together in error verification scenarios: the asserterror statement and the Assert.ExpectedError function.
Catching expected errors
These two statements play the central role in verifying the application's error-handling code. A negative test is called "negative" for a reason. A positive test succeeds if no error happens in the application and the verification of the test outcome (the assertion) is satisfied. A negative test, on the contrary, expects an error: failing code and error handling are part of the successful execution path for this kind of test. But an error is an error. If an application method fails, how do we make the test that triggered it succeed? This is where asserterror comes to our aid.
The asserterror statement tells the compiler that an error is expected inside the statement it encloses, and this error must not fail the test. Quite the contrary: the test will fail if the code under asserterror completes without errors.
In the example above, an error is expected during the posting of the purchase order:
asserterror LibraryPurchase.PostPurchaseDocument(
PurchaseHeader, true, true);
Successful posting in this case means a test failure - the BC test framework will throw an error if no error occurs during the posting.
A negative test verifies that the application returns a proper error message that can help the user fix the error and continue their task; therefore, another common part of a negative test is error verification. The Assert codeunit provides two functions for this purpose:
Assert.ExpectedError
Assert.ExpectedErrorCode
The first one, ExpectedError(Expected: Text), compares the last error message with its argument, Expected. The argument does not have to match the error text exactly; it is matched as a substring of the error message. Thanks to this, we don't have to compare against the full message text "You need to enter the document number of the document from the vendor in the Vendor Invoice No. field, so that this document stays linked to the original." Instead, we can limit the expected error message to any meaningful part of it.
The other error verification function in the Assert codeunit is ExpectedErrorCode. This function validates the error code, or category, instead of the specific message. Because error codes are far less distinctive, the function is less commonly used. For example, the error code returned by the sample test case above is Dialog - but this is the same code yielded by the AL Error function whenever it is called. For this reason, ExpectedError is used much more frequently. Similarly, ExpectedErrorCode is hardly suitable for verifying errors caused by the TestField function: the TestField error code is the second most frequent, given the number of TestField calls in the application.
ExpectedErrorCode can come in handy when the error code is rarer - for example, database errors like DB:RecordNotFound or DB:NothingInsideFilter.
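As an illustration - a minimal sketch, assuming an Item record variable and an item number that does not exist in the test database - a test can assert on the error category instead of the localized message:

    // The item number below is assumed to be absent, so Get raises a
    // "record not found" error whose code is DB:RecordNotFound
    asserterror Item.Get('NO-SUCH-ITEM');
    Assert.ExpectedErrorCode('DB:RecordNotFound');

Because this assertion identifies the error category rather than the text, it stays valid regardless of the application language.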
Another negative scenario: blocked dimensions
With all the introductory information in mind, we can compose another example of a negative test with a slightly more complex setup. This one is related to journal posting: it verifies the error raised when the dimension set of a journal line contains a blocked dimension combination.
In this scenario, I want to reproduce a situation which will cause the general journal posting routine to fail due to conflicting dimensions assigned to the line. So the setup must include these steps:
Create two dimensions;
Create a dimension combination for these dimensions;
Set the combination restriction to Blocked;
Create a general journal line with a dimension set including both dimensions.
And, once this setup is done, the test must run the posting procedure under the asserterror statement to catch and verify the posting error.
[Test]
procedure ErrorOnBlockedDimensionCombination()
var
    DimVal: array[2] of Record "Dimension Value";
    DimComb: Record "Dimension Combination";
    GenJnlLine: Record "Gen. Journal Line";
    GenJnlTemplate: Record "Gen. Journal Template";
    GenJnlBatch: Record "Gen. Journal Batch";
    BlockedDimCombinationErr: Label 'Dimensions %1 and %2 can''t be used concurrently', Comment = '%1 and %2: Dimension codes';
begin
    // [SCENARIO] Gen. journal line cannot be posted if the dimension combination in the line is blocked
    // [GIVEN] Dimensions "D1" and "D2" with respective values "V1" and "V2"
    LibraryDimension.CreateDimWithDimValue(DimVal[1]);
    LibraryDimension.CreateDimWithDimValue(DimVal[2]);
    // [GIVEN] Combination of dimensions "D1" and "D2" is blocked
    LibraryDimension.CreateDimensionCombination(
        DimComb, DimVal[1]."Dimension Code", DimVal[2]."Dimension Code");
    DimComb.Validate("Combination Restriction", DimComb."Combination Restriction"::Blocked);
    DimComb.Modify(true);
    // [GIVEN] Create general journal line
    LibraryERM.FindGenJournalTemplate(GenJnlTemplate);
    LibraryERM.FindGenJournalBatch(GenJnlBatch, GenJnlTemplate.Name);
    LibraryERM.CreateGeneralJnlLineWithBalAcc(
        GenJnlLine, GenJnlTemplate.Name, GenJnlBatch.Name,
        GenJnlLine."Document Type"::" ",
        GenJnlLine."Account Type"::"G/L Account",
        LibraryERM.CreateGLAccountNo(),
        GenJnlLine."Bal. Account Type"::"G/L Account",
        LibraryERM.CreateGLAccountNo(),
        LibraryRandom.RandDec(1000, 2));
    // [GIVEN] Assign dimensions "D1" and "D2" with values to the line
    GenJnlLine."Dimension Set ID" :=
        LibraryDimension.CreateDimSet(
            GenJnlLine."Dimension Set ID",
            DimVal[1]."Dimension Code", DimVal[1].Code);
    GenJnlLine."Dimension Set ID" :=
        LibraryDimension.CreateDimSet(
            GenJnlLine."Dimension Set ID",
            DimVal[2]."Dimension Code", DimVal[2].Code);
    GenJnlLine.Modify(true);
    // [WHEN] Try to post the journal
    asserterror LibraryERM.PostGeneralJnlLine(GenJnlLine);
    // [THEN] Error is returned: Dimensions D1 and D2 can't be used concurrently
    Assert.ExpectedError(
        StrSubstNo(
            BlockedDimCombinationErr,
            DimVal[1]."Dimension Code", DimVal[2]."Dimension Code"));
end;
The sample test follows the same familiar test execution flow:
Setup: Prepare required configuration to simulate the faulty conditions.
Execute: Run the action under test.
Verify: Validate the resulting error.
When to use negative tests - a use case
Following the logic outlined above, we can cover any erroneous code path. But is every error worth test coverage? And how do we identify the scope for testing?
Let's consider two examples: a discount percentage received in a JSON object and the same value as a decimal field in a table that receives user input. In both cases the value received from an external source must be validated - we must make sure that the value is a decimal number between 0 and 100. The difference between the two cases is that the table field has the benefit of the built-in platform validation, whilst the JSON message content requires custom verification code.
Discount percentage in a table field can be declared like this:
field(2; "Discount Pct."; Decimal)
{
    MinValue = 0;
    MaxValue = 100;
}
And below is a code sample that validates a JSON value.
procedure VerifyDiscountPct(Val: JsonValue)
var
    DecValue: Decimal;
    ValueMustBeDecErr: Label 'Value is not decimal.';
    ValueOutsideOfRangeErr: Label 'Value must be between %1 and %2.', Comment = '%1, %2: Min and max allowed values.';
begin
    if not TryConvertJsonValueToDec(Val, DecValue) then
        Error(ValueMustBeDecErr);
    if (DecValue < 0) or (DecValue > 100) then
        Error(ValueOutsideOfRangeErr, 0, 100);
end;

[TryFunction]
local procedure TryConvertJsonValueToDec(JValue: JsonValue; var DecValue: Decimal)
begin
    DecValue := JValue.AsDecimal();
end;
The latter case needs test coverage for one positive scenario and three possible negative outcomes: non-decimal value, a negative decimal, and a positive decimal greater than 100. All three are simple unit tests which only need to initialise a JsonValue variable that will be passed to the validation function, and check the error message.
The first one sets the JSON content to a non-decimal value before running the verification.
[Test]
procedure ErrorIfValueNotDecimal()
var
    ParseJsonMsg: Codeunit "Parse JSON Message";
    JVal: JsonValue;
    ValueMustBeDecErr: Label 'Value is not decimal.';
begin
    JVal.SetValue('Text');
    asserterror ParseJsonMsg.VerifyDiscountPct(JVal);
    Assert.ExpectedError(ValueMustBeDecErr);
end;
The second test will send a negative decimal to ensure that the discount below zero is not accepted.
[Test]
procedure ErrorIfValueBelowMin()
var
    ParseJsonMsg: Codeunit "Parse JSON Message";
    JVal: JsonValue;
    ValueOutsideOfRangeErr: Label 'Value must be between %1 and %2.', Comment = '%1, %2: Min and max allowed values.';
begin
    JVal.SetValue(Format(-LibraryRandom.RandDec(100, 2)));
    asserterror ParseJsonMsg.VerifyDiscountPct(JVal);
    Assert.ExpectedError(StrSubstNo(ValueOutsideOfRangeErr, 0, 100));
end;
And the last negative test case covers the third erroneous outcome - a discount percentage exceeding 100.
[Test]
procedure ErrorIfValueAboveMax()
var
    ParseJsonMsg: Codeunit "Parse JSON Message";
    JVal: JsonValue;
    ValueOutsideOfRangeErr: Label 'Value must be between %1 and %2.', Comment = '%1, %2: Min and max allowed values.';
begin
    JVal.SetValue(Format(100 + LibraryRandom.RandDec(10, 2)));
    asserterror ParseJsonMsg.VerifyDiscountPct(JVal);
    Assert.ExpectedError(StrSubstNo(ValueOutsideOfRangeErr, 0, 100));
end;
To complete the test coverage, I will add a positive unit test as well. It is even simpler than the negative cases because it does not catch and verify an error message. In fact, the positive test case has no verification part at all. This is a so-called "assertless test", which generally should be avoided but can be acceptable for unit tests in conjunction with a set of negative test cases validating the erroneous conditions. If the function under test does not return any value and does not commit any changes to the database, an assertless positive test simply passes through all the potential failure points and exits without triggering an error.
[Test]
procedure NoErrorIfValueDecimalPct()
var
    ParseJsonMsg: Codeunit "Parse JSON Message";
    JVal: JsonValue;
begin
    JVal.SetValue(Format(LibraryRandom.RandDec(100, 2)));
    ParseJsonMsg.VerifyDiscountPct(JVal);
    // Verification is successful, no error
end;
In this case, it is the set of negative tests that gives value to the only positive test case. Without the negative scenarios, this functionality cannot be considered tested.
When a test becomes redundant
We just covered a set of tests sufficient to verify the discount percentage validation if this validation is done in the client code, but in different circumstances, the same can be achieved by setting up field properties. In fact, the same set of tests, with small adjustments, can be used to verify field properties - data type, MinValue and MaxValue. The question is whether we want to write and execute test cases to verify the configuration.
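As a sketch of such a property test - assuming a hypothetical table "Discount Setup" exposing the "Discount Pct." field declared earlier - the structure mirrors the negative tests above, except that the platform raises the error instead of custom code:

[Test]
procedure ErrorIfFieldValueAboveMaxValue()
var
    DiscountSetup: Record "Discount Setup"; // hypothetical table with the "Discount Pct." field
begin
    // MaxValue = 100 makes the platform reject this validation
    asserterror DiscountSetup.Validate("Discount Pct.", 100 + LibraryRandom.RandDec(10, 2));
    // The exact platform message varies between versions, so the asserted
    // fragment here is an assumption; adjust it to the actual error text
    Assert.ExpectedError('100');
end;

Note that both the table name and the expected message fragment are assumptions for illustration only.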
Usually this coverage is considered redundant because, unlike code, field properties are not volatile - they are not refactored and normally do not require regression testing.
Another similar example is the TableRelation property. This is also an area where negative tests could find application, but usually are not worth the effort for the same reason - the property is easy to fix, it is not prone to regressions, and wrong property setup is easily caught as a side effect of other tests.
On the other hand, protection against regressions is not the only goal of test automation. If a team relies on other pillars of test-driven development, like tests as specification and tests as documentation, automated verification of field properties can be considered in the test plan. Still, we should remember that automated tests have execution and maintenance costs, and probably not every test is worth automating.