Mobile App Testing using Behavior-Driven Development (BDD) and Test-Driven Development (TDD) in Private Testing Phase:
Most developers strive to produce bug-free software. Bugs, however, are an inevitable part of any software application: even the most excellent programmers can overlook details, and those oversights become bugs in the system.
No code is one hundred percent (100%) safe and bug-free. Therefore, bug and performance testing is an integral part of software development, down to the smallest application a programmer designs and codes.
Using Test-Driven Development (TDD) and Behavior-Driven Development (BDD) is the best way forward for development companies because it allows developers to make changes quickly and safely. Furthermore, testing an application regularly and periodically gives developers feedback on its quality and performance, confirming that it does what it is supposed to do and performs as per its intended use case and purpose.
Fixing bugs early in the development phase keeps the application's path to market smooth and timely, bringing credibility to the developer and the team. Immediate feedback on the design and its functionality from real end-users is the key to delivering a project successfully, so a constant, continuous feedback loop must be an essential part of the business process.
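The test-first half of that workflow can be made concrete with a short sketch. The example below is illustrative only: `is_valid_mobile` and its formatting rules are assumptions invented for this article, not part of any real project. In TDD, the test is written first and fails (red); the function is then implemented until the test passes (green).

```python
import re

# Red: the test is written before the implementation exists.
# `is_valid_mobile` and its rules are assumptions for illustration.
def test_is_valid_mobile():
    assert is_valid_mobile("+61-412-345-678") is True
    assert is_valid_mobile("12345") is False            # missing country code
    assert is_valid_mobile("+61-4A2-345-678") is False  # non-digit character

# Green: just enough code to make the test pass.
def is_valid_mobile(number: str) -> bool:
    # Accepts "+<country code>" followed by 2-3 groups of three digits.
    return re.fullmatch(r"\+\d{1,3}(-\d{3}){2,3}", number) is not None

test_is_valid_mobile()  # raises AssertionError if any expectation fails
```

Each new requirement repeats the same loop: add a failing assertion, extend the function, and re-run the test.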
Developers must “INVEST” in the feedback loop.
“I” — Independent: End-users must be independent of the development and design team so that feedback arrives unbiased and practical.
“N” — Negotiation: Feature requests must be negotiated collectively by the design team, the developers, and the end-user, judged on practical need rather than the wants of an individual.
“V” — Valuation: Feedback must be driven by practical use-case value, not by the emotional needs of the tester.
“E” — Effectiveness: Suggested features must be implemented based on their effectiveness and practicality for the end-user, contributing to the application’s performance and requirements.
“S” — Simple: No feedback is too simple or too small to be considered valuable; even minor suggestions can bring usefulness and effectiveness to the application and its end-users.
“T” — Timely: The feedback loop must run periodically and on time to keep bugs out and the application’s performance intact. Testing the application for bugs and its intended use case is vital for professional software development and must not be ignored regardless of project deadlines. Testing must be part of the development cycle and the launch process.
Behavior-Driven Development, by contrast, deliberately involves a non-technical audience: end-users and other non-technical stakeholders contribute the bulk of the feedback rather than the design team and the programmers.
Behavior-Driven Development (BDD) must be user-focused and easily comprehensible by non-technical application end-users. Tests must be organized so that application behavior, including unintended consequences, can be observed while the application is in the hands of its intended non-technical end-users.
Test cases should be managed using real use-case scenarios in a realistic environment, while actual end-users exercise the application for its practicality and usefulness. Their feedback on where the application meets or fails their requirements and expectations contributes directly to its effectiveness and efficacy.
Tests under BDD must capture the business requirements and clearly define every feature and use case implemented within the application, describing each step precisely, clearly, and in language understandable by a non-technical tester or end-user.
Each tester must be given a scenario or plan that fully describes the steps of the test procedure. The tester must clearly understand, for each step, what action to take and what outcome to expect after taking that action: when and where to click, and what should result or appear on the screen. Afterward, end-users should follow their instincts and use the application as they wish, testing it for unintended uses.
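A scenario written this way maps directly onto a Given/When/Then structure. The sketch below expresses one such step in plain Python; the `LoginScreen` class and its messages are invented for illustration, and real projects typically use a BDD framework such as behave or pytest-bdd instead:

```python
# A hypothetical login screen model, invented only for this sketch.
class LoginScreen:
    def __init__(self):
        self.message = ""

    def submit(self, mobile: str, otp: str) -> None:
        # Illustrative rule: login succeeds only when an OTP is supplied.
        self.message = "Welcome!" if otp else "Please enter the OTP."

# Given: the tester is on the login screen.
screen = LoginScreen()

# When: a valid mobile number is entered but the OTP is left empty.
screen.submit(mobile="+61-412-345-678", otp="")

# Then: the app should ask for the OTP instead of logging in.
assert screen.message == "Please enter the OTP."
```

The Given/When/Then comments are exactly what a non-technical tester reads; only the bodies underneath them are technical.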
Finding bugs and unexpected test results early saves the team a great deal of development time and embarrassment, if not losing face with the client.
As mentioned above, both techniques efficiently bring the application from requirement gathering to market, and into end-users' hands, faster.
Test Case Strategy for Mobile App Testing in Startups:
Research suggests that less than 25% of users reuse an app after its first download and use, and only around 5% of end-users raise issues with its developers when they are unsatisfied with the application’s design and performance. As a result, the contributing developers develop a blind spot around design flaws and useless features, the application never gets honest, unbiased feedback, and it fails.
Mobile app testing is crucial for application development and must never be ignored. A good-looking mobile application must also behave as its end-users expect; without being tested for performance in a natural environment with actual end-users, any app will fall short of expectations.
Application performance is not the only concern: data security is an equally crucial part of application design and development. Some end-users may forgive the design, but not poor performance or a data breach. Data security and performance go hand in hand.
While testing and fixing code errors and bugs, developers must also pay attention to the application’s User Interface (UI) and User Experience (UX), ensuring all business requirements are met and the logic flow stays intact.
An inadequately tested app that fails public or end-user expectations typically has contributing factors including, but not limited to, the following:
1 — Insufficient time allocated to the testing team,
2 — Developer turnover within the organization, leading to the loss of a primary or talented developer,
3 — A missed delivery timeline, leading to rushed coding practices,
4 — Time constraints, leading to premature delivery of the project or a premature application launch,
5 — Immature development tools and an inadequate development platform.
The two main and most widely used application testing methods are as follows:
1 — Automated Testing,
2 — Manual Testing.
In automated testing, the programmer writes test modules and test code, building automated testing tools before (or alongside) writing the application. This leads to faster code testing during development, saving time and cost.
There is a prominent view within the development community that automated testing is faster and more efficient at finding errors and bugs, reducing development time and saving overall project cost in the long run.
In addition, because automated tests run alongside the application’s actual development at the coding stage, far fewer mistakes and bugs tend to surface after launch, giving a better return on investment (ROI).
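A minimal automated test module, sketched here with Python’s built-in unittest, shows the idea. The `apply_discount` function and its rules are hypothetical, invented for this example:

```python
import unittest

# A hypothetical function under test; its name and rules are assumptions.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically; in practice a CI server would run it
# on every commit, so regressions surface before launch.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Because the suite is code, it costs nothing to re-run after every change, which is where the ROI claim above comes from.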
Manual testing, on the other hand, is done by an actual person in a natural testing environment to find bugs and errors. First, a technical tester tests all of the features an app offers for performance and intended use, trying to discover mistakes and bugs during the development phase. Then, before the application launch, a non-technical end-user is brought in to find any remaining bugs and errors in the application’s UI and UX and to write a report for the development team with suggestions and any bug findings.
Manual testing is exploratory: it is done by an actual person with an intuitive mindset that an automated test environment cannot replicate. However, most developers find it time-consuming and not very cost-effective. Considering the pros and cons above, the choice of testing procedure comes down to the development team’s taste and preference; however, mixing both environments to find errors and bugs, meeting end-user expectations and the client’s requirements, is the best foot forward.
The following testing steps and procedures can be used to achieve the desired results:
1 — All test cases must be designed, and the entire process documented in-house, for future reference, review, and further analysis of the use-case logic.
2 — The test team must follow predetermined guidelines for all test types while documenting the process.
3 — The development team must test the application for every feature, module, and class unit, testing each unit individually and manually.
4 — The development team must ensure that the application runs smoothly, without a glitch, for every feature it contains, and that each button is actionable and produces the desired outcome.
5 — To adequately satisfy the application’s end-users, the development team must ensure that the User Interface is intuitive and actionable, leading to a more remarkable User Experience. Improving the User Experience, however, is an ongoing process that extends well beyond the initial launch.
6 — The development team must also test the application for data overload and verify the data transmission mechanism used, including encryption for data safety.
7 — Feature-pivot testing and regression testing must also be taken seriously: small code and feature changes made on demand for a requirement change must not lead to app crashes or security breaches. The overall flow of the application must not be affected by pivot decisions made by management, by a new addition, or by the removal of an old, unwanted feature. Modular design patterns are the best way to avoid such scenarios. Since there is always room for improvement, user satisfaction is the hardest part of testing an application for its User Experience.
Test Case Template with examples:
To verify a feature and functionality of the application, the development team must design a set of actions for an end-user to execute. In addition, to prove any given requirement, the development team must define the precondition and postcondition for each possible scenario, including the test steps taken and the test data used, so the test case can be conducted successfully at a given time in a natural environment.
A test case must include the exact variables and conditions the tester will use to compare the expected and actual outcomes and determine whether the result matches the customer’s requirements. Each Test Scenario must include all possible Test Cases.
Test Scenario and its Test Cases in Manual Testing:
Test Scenarios are unique to every application and vary for each module and feature. During Beta Testing, an application tester has to be very specific about the test scenario, as there can be many test cases for every test scenario.
Test Scenario 1: Check Login / Registration Functionality for Zaap By Chang.
- Test Case 1: Verify the outcome of entering a valid First Name & Last Name.
- Test Case 2: Verify the outcome of entering an invalid First Name & Last Name.
- Test Case 3: Verify the outcome of entering a valid Mobile number.
- Test Case 4: Verify the outcome of entering an invalid Mobile number.
- Test Case 5: Check the response when the OTP is empty & the Login button gets pressed.
- Test Case 6: Check the response when the OTP is invalid & the Login button gets pressed.
- Test Case 7: Check the response when the OTP is valid & the Login button gets pressed.
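The scenario above lends itself to a data-driven sketch in which each documented test case becomes one row of inputs and expected outcomes. The validation rules below (letters-only names, a fixed "123456" OTP) are assumptions made up purely for illustration:

```python
# Hypothetical validators for the login/registration scenario.
def name_is_valid(name: str) -> bool:
    return name.isalpha() and len(name) >= 2

def otp_response(otp: str) -> str:
    if otp == "":
        return "OTP required"
    if otp == "123456":          # assumed valid OTP for this sketch
        return "Login successful"
    return "Invalid OTP"

# Each row maps one documented test case to an expected outcome.
cases = [
    ("TC1", name_is_valid("Chang"), True),        # valid name
    ("TC2", name_is_valid("Ch4ng!"), False),      # invalid name
    ("TC5", otp_response(""), "OTP required"),    # empty OTP
    ("TC6", otp_response("000000"), "Invalid OTP"),
    ("TC7", otp_response("123456"), "Login successful"),
]

for case_id, actual, expected in cases:
    assert actual == expected, f"{case_id} failed: {actual!r}"
```

Keeping the cases in a table like this makes it obvious when a scenario (for example, TC3/TC4 on mobile numbers) still lacks an automated counterpart.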
To write test cases in a manual testing environment, take the following steps to make them easy for a non-technical tester to follow.
Step 1) Explain the test scenario for the tester.
- Test Case # : 001
- Test Case Description: Check the response when a valid mobile number gets entered
Step 2) To execute the test case, the tester needs test data. Add it below:
- Test Case #: 001
- Test Case Description: Check the response when a valid mobile number and OTP gets entered
- Test Data: +61-xxx-xxx-xxx and OTP: 123xxx
Test data must be documented for the tester to use, as generating it afresh each time can be time-consuming.
Step 3) To execute a test case, a tester must perform a specific set of actions on the Test Device (Smartphone). This process must be documented as below:
- Test Case #: 001
- Test Case Description: Check the response when a valid mobile number and OTP gets entered
- Test Steps:
- Enter First Name
- Enter Last Name
- Enter Mobile Number
- Click Send
- Enter OTP
- Click Send
- Test Data: +61-xxx-xxx-xxx and OTP: 123xxx
Test steps are not always as simple as those above; therefore, all steps need documentation. Documentation makes it much easier for a new tester to follow through in case of staff turnover, and even a random tester could execute the test case by following it. Documented steps are also helpful when facilitating reviews and reports for other stakeholders.
Step 4) The purpose of test cases in a manual testing environment is to inspect the behavior of the test device against an expected result. This process also needs to be documented, as below.
- Test Case #: 001
- Test Case Description: Check the response when a valid mobile number and OTP gets entered
- Test Steps:
- Enter First Name
- Enter Last Name
- Enter Mobile Number
- Click Send
- Enter OTP
- Click Send
- Test Data: +61-xxx-xxx-xxx and OTP: 123xxx
- Expected Result: At this stage, login/registration should be successful.
At the test execution stage, the tester observes the actual results against the expected results and assigns a pass or fail status to the test case.
Step 5) A given test case can have a field such as Pre-Condition, which defines what must be in place before the tester can run the test case.
For the “Zaap By Chang” test case, a precondition would be having the mobile app installed on the test device or smartphone, giving the tester access to the app. A test case can also include Post-Conditions, which define anything that applies after the test case completes. For example, as mentioned earlier, our test case could have a postcondition to verify that the time & date of the login get logged in the cloud database.
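Pre- and post-conditions map naturally onto the setup and teardown hooks of an automated suite. In the unittest sketch below, the `FakeApp` stand-in and all of its methods are assumptions invented for illustration; a real suite would drive the installed app instead:

```python
import unittest

# A stand-in for the installed app; every method here is an assumption.
class FakeApp:
    def __init__(self):
        self.installed = False
        self.audit_log = []

    def install(self):
        self.installed = True

    def login(self, mobile, otp):
        self.audit_log.append(("login", mobile))
        return "Login successful"

class LoginTest(unittest.TestCase):
    def setUp(self):
        # Pre-condition: the app must be installed on the test device.
        self.app = FakeApp()
        self.app.install()

    def test_login_succeeds_with_valid_data(self):
        result = self.app.login("+61-xxx-xxx-xxx", "123xxx")
        self.assertEqual(result, "Login successful")

    def tearDown(self):
        # Post-condition: verify the login was recorded for auditing.
        assert self.app.audit_log, "expected an audit entry after the test"
```

setUp runs before every test and tearDown after it, so each case starts from the same pre-condition and leaves its post-condition verified.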
- Test cases must be simple enough for a non-technical tester to follow through.
- Test case designers must keep the end-user in mind so the steps are easy to comprehend.
- Test case designers must avoid repetition.
- Test case designers must not assume the tester’s capabilities or comprehension of the application.
- Test case designers must ensure that all possible test cases are covered.
- Test case designers must ensure that test cases are identifiable and easy to replicate if bugs are found.
- The tester must follow documented manual testing techniques.
- Test data must be self-cleaned or cleaned manually, and tests must avoid the production database.
- All test cases must be repeatable by any non-technical tester.
- All test results must be peer-reviewed.
- Start the feedback loop for modification.
The format of Standard Test Cases:
Test Case ID:
Test Case Description:
Test Steps:
Test Data:
Expected Results:
Actual Results:
Pass/Fail:
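The same template can also be kept as a structured record so that results are machine-readable. This dataclass mirrors the fields above one-to-one; everything beyond the field names (the `record` helper, the sample values) is an assumption for illustration:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    case_id: str
    description: str
    steps: List[str]
    test_data: str
    expected_result: str
    actual_result: str = ""
    status: str = "Not Run"  # becomes "Pass" or "Fail" after execution

    def record(self, actual: str) -> None:
        """Store the actual result and derive the Pass/Fail status."""
        self.actual_result = actual
        self.status = "Pass" if actual == self.expected_result else "Fail"

# The login/registration case from Steps 1-4 above, as a record.
tc = TestCase(
    case_id="001",
    description="Check the response when a valid mobile number and OTP gets entered",
    steps=["Enter First Name", "Enter Last Name", "Enter Mobile Number",
           "Click Send", "Enter OTP", "Click Send"],
    test_data="+61-xxx-xxx-xxx and OTP: 123xxx",
    expected_result="Login/registration successful",
)
tc.record("Login/registration successful")
assert tc.status == "Pass"
```

Structured records like this are what make the later steps (traceability, automated bug tracking) possible without re-typing anything.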
Test case drafts must include the following information:
- The description of the test requirements,
- An explanation of how the application should be tested, with steps to follow,
- The test setup, under version control for each application, keeping records of the software version under test, data files used, the base operating system the application is installed upon, hardware specifications, security access, physical or logical data, situations such as other variants of tests, and any additional setup information relevant to the requirements of the test case being conducted,
- Input data, plus screenshots of the outputs or actions and the expected results if successful,
- Any proofs, attachments, and screenshots of the application.
In addition:
- Use active language.
- Any given test case should not run to more than 15 steps.
- A draft test script is recommended, with inputs, intentions, and expected results.
- The setup offers an alternative to prerequisite testing.
- Further test scenarios should have different test-case sequences.
Best Practice for writing a good Test Case:
1. Test cases need to be transparent and straightforward:
Create test cases that are simple for a non-technical tester to follow through. Tests should be clear and concise, as the test case writer may not execute them personally; clarity of instructions is the key.
Use assertive language: go to the home page, enter this data, click on this, and so on. Assertive instructions make test steps comprehensible to any non-technical tester.
2. Design test cases with the non-technical customer’s viewpoint in mind
The primary purpose of designing test cases is to test the application for any random scenario a customer could encounter; only by meeting all requirements at that stage does the application pass. A test writer must create test cases with a non-technical audience in mind.
3. Avoid test case repetition.
Do not repeat test cases. If a test case is needed in order to execute another test case, call it by its test case ID in the precondition column.
4. Do not assume the actions taken by potential end-users
Assumptions about the functionality and features of an application while preparing a test case are a sure route to a terrible user experience. Instead, strictly follow the guidelines set by the customer’s requirements.
5. Ensure 100% coverage, including odd actions and outputs
Write test cases that cover all application needs collected during customer requirement gathering. A “Traceability Matrix” can ensure that no functions or conditions are left untested.
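A traceability matrix can be sketched as a simple mapping from requirement IDs to the test cases that cover them; untested requirements then fall out automatically. All IDs below are invented for illustration:

```python
# Hypothetical requirement and test-case IDs, for illustration only.
requirements = ["REQ-LOGIN", "REQ-OTP", "REQ-PROFILE"]
coverage = {
    "REQ-LOGIN": ["TC-001", "TC-002"],
    "REQ-OTP": ["TC-005", "TC-006", "TC-007"],
}

# Any requirement with no test cases mapped to it is uncovered.
untested = [r for r in requirements if not coverage.get(r)]
print("Requirements with no test case:", untested)
```

In this sketch the check immediately flags `REQ-PROFILE` as uncovered, which is exactly the gap a traceability matrix exists to expose.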
6. Test cases must be identifiable.
Name test case IDs so that cases can be identified effortlessly while tracking bugs/errors or tracing an application requirement later.
7. Assign and practice specific testing procedures
Testing every possible condition within an application is complex. The following manual testing techniques help increase the chances of finding errors and bugs.
- Boundary Value Analysis: Testing at, just inside, and just outside the boundaries of a specified range of values, covering the most error-prone scenarios.
- Equivalence Partitioning: This technique breaks the range of inputs into partitions/groups expected to show the same behavior, so one representative value per partition suffices.
- State Transition Technique: This method is used when application behavior changes from one state to another following a specified action taken by the end-user.
- Error Guessing: Guessing/anticipating the errors that may arise during manual testing. Error guessing is not a formal method; it takes advantage of a tester’s experience with the application and can be biased.
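The first two techniques above can be sketched together. Suppose, purely as an assumption for this example, that the app accepts user ages from 18 through 65:

```python
# Assumed rule for illustration: the app accepts ages 18 through 65.
def age_is_accepted(age: int) -> bool:
    return 18 <= age <= 65

# Boundary Value Analysis: test at and just around each boundary,
# where off-by-one mistakes are most likely to hide.
for age, expected in [(17, False), (18, True), (19, True),
                      (64, True), (65, True), (66, False)]:
    assert age_is_accepted(age) is expected, f"boundary check failed at {age}"

# Equivalence Partitioning: one representative per partition suffices,
# because every value in a partition should behave the same way.
partitions = {"below range": 10, "in range": 40, "above range": 90}
assert not age_is_accepted(partitions["below range"])
assert age_is_accepted(partitions["in range"])
assert not age_is_accepted(partitions["above range"])
```

Six boundary values plus three partition representatives replace testing every age individually, which is the whole point of both techniques.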
8. Self-cleaning
Once all the test cases have run, the “Test Environment” must be returned to its pre-test condition/state and must not be left unusable for the next tester. Self-cleaning is especially required when the testing environment uses different configuration settings per test case and scenario.
9. Bug and error finding must be repeatable and replicable
A test case should be replicable and produce the same results every time it is conducted, no matter who runs it (a technical or non-technical tester).
10. Peer Review.
After creating test cases, get them reviewed by your colleagues. Your peers can uncover defects in your test case design that you may easily miss.
The focus of this test case management document:
The features of a test case management document are as follows:
- Documenting test cases: test case creation using templates in Excel.
- Conducting the test case and documenting the results: any non-technical tester using the template can execute the test cases, and the results can be quickly recorded.
- Automating bug/error tracking: failed tests can be linked automatically to the bug tracker on GitHub, assigned to the developers, and tracked and followed through by the project manager.
- Traceability: requirements, test cases, and test case executions are all interlinked via the manual test template, and each must be traceable back to the others for overall test coverage.
- Protecting test cases: test cases should be reusable and protected from being lost or corrupted due to poor version control.
The Test Case Management Template offers the following features:
- Naming and numbering conventions
- Versioning
- Read-only storage
- Controlled access
- Off-site backup
One may practice all the rules mentioned above in a private setting.
Thank you for reading this far.
HOW WOULD YOU CARRY YOUR DAY?
Thank you for reading. Our exclusive purpose in writing these articles is to make people think, not to get people to agree with our perspective; that would be the stuff of mind control, and we don’t brainwash our audience. We hope this helps.
If you enjoyed reading our articles, you might like to subscribe to our mailing list as well and be the first to get notified of our latest or upcoming articles. Also, feel free to follow us on Twitter. Or, if you are a generous person, you can buy us a coffee at Ko-Fi here.
Disclaimer: This article is for educational and informational purposes only. It should not be considered Business, Financial or Legal Advice. Consult a financial professional before making any significant financial decisions, if any.