Improve Your API Testing – 5 Ways to Start

There are many ways to improve your API testing. Here are a few tips:

  • Use a variety of tools and techniques. No single tool or technique can test all aspects of an API. By using a variety of tools and techniques, you can get a more comprehensive view of your API’s functionality.
  • Write reusable tests. Once you have written a test, save it for future use. This will save you time and effort when you need to test the same functionality again.
  • Automate your tests. Automated tests run quickly, easily, and repeatably. This helps you catch bugs early and prevent them from being introduced into your code (see the sketch after this list).
  • Document your tests. Documenting your tests will help you to understand what they are testing and how they work. This will make it easier to maintain your tests and to troubleshoot problems.
  • Test early and often. The earlier you start testing, the easier it will be to find and fix bugs. By testing early and often, you can avoid costly delays and problems down the road.
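
To make the automation tip concrete, here is a minimal sketch of an automated, reusable API test using pytest and the requests library – the base URL, user id, and response fields are invented for illustration, not a real API:

# test_users_api.py - a small, reusable automated API test (pytest + requests).
# The base URL, user id, and response fields are illustrative assumptions.
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test

def test_get_user_returns_expected_contract():
    response = requests.get(f"{BASE_URL}/users/42", timeout=5)

    # Check the status code and the shape of the response body.
    assert response.status_code == 200
    body = response.json()
    assert body["id"] == 42
    assert "email" in body  # field assumed by this sketch

Check a test like this in with the code and run it on every build, and it becomes the reusable, automated safety net the tips above describe.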

By following these tips, you can improve your API testing and ensure that your APIs are working as expected.

Key Things to Collect When Reporting a Bug

  1. Title/Summary: Provide a concise and descriptive title or summary that captures the essence of the bug. It should give a quick overview of the problem.
  2. Description: Include a detailed description of the bug, explaining what is happening and how it deviates from the expected behavior. Be specific and provide step-by-step instructions to reproduce the issue. Include any error messages or unusual behavior observed.
  3. Environment Details: Mention the specific environment or platform where the bug occurred. This includes the operating system, version, browser (if applicable), hardware configurations, and any other relevant software or tools involved. Note if the issue is specific to certain configurations.
  4. Version Information: Specify the version or build number of the software or application where the bug was encountered. This helps developers identify if the issue has already been fixed in newer releases.
  5. Screenshots or Recordings: Whenever possible, include screenshots or screen recordings that visually demonstrate the bug. Visual media can provide valuable context and make it easier for developers to understand the issue.
  6. Reproducibility Steps: Clearly outline the steps to reproduce the bug, starting from the initial state or conditions. Include specific inputs, actions, and any necessary configurations or settings. The more precise the reproduction steps, the higher the chances of the bug being addressed effectively.
  7. Expected and Actual Results: Describe the expected outcome or behavior, as well as the actual result observed when encountering the bug. This comparison helps developers identify the discrepancy and understand the impact of the bug.
  8. Frequency and Impact: Indicate how frequently the bug occurs. Is it consistently reproducible or intermittent? Also, explain the impact of the bug on the user experience, functionality, or performance. This helps prioritize the severity of the bug.
  9. Additional Information: Provide any additional relevant information that might assist in bug diagnosis and resolution. This could include log files, error reports, relevant code snippets, or any specific conditions or scenarios that trigger the bug.
  10. Contact Details: Lastly, provide your contact information or the preferred method for developers to reach out to you for further clarification or updates on the bug.
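
Pulled together, a minimal report might look like this (every value below is illustrative):

Title: Login button unresponsive after a failed password attempt
Description: After one wrong password, the Login button no longer responds to clicks.
Environment: Windows 11 22H2, Chrome 115, 1920x1080 display
Version: MyApp 2.3.1 (build 4512)
Steps to Reproduce:
  1. Open the login page
  2. Enter a valid username with an invalid password and click Login
  3. Correct the password and click Login again
Expected Result: The user is logged in
Actual Result: Nothing happens and no error is shown
Frequency/Impact: Reproducible every time; blocks login after any typo
Attachments: screenshot of the login page, browser console log
Contact: jane.doe@example.com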

By collecting and presenting these key details, you can help ensure that the bug report is comprehensive, actionable, and assists developers in effectively identifying and resolving the issue.

Risk Analysis: Why Should I Test Using Risk Analysis?

Risk analysis is a process of identifying and assessing the risks associated with a software project. It can be used to help prioritize testing efforts and to ensure that the most important areas of the software are tested thoroughly.

There are many different ways to perform risk analysis. One common approach is to use a risk register: a document that lists all of the risks associated with a project, along with each risk's likelihood and impact, so that the risks can be prioritized accordingly (a sketch follows below).
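
As a rough sketch in Python – the risks, 1-to-5 scales, and scores below are made up for illustration – prioritizing a register by likelihood times impact can be as simple as this:

# risk_register.py - toy risk register: prioritize risks by likelihood x impact.
# The entries and the 1-5 scales are illustrative assumptions.
risks = [
    {"risk": "Payment gateway API changes", "likelihood": 2, "impact": 5},
    {"risk": "Unclear reporting requirements", "likelihood": 4, "impact": 3},
    {"risk": "Flaky test environment", "likelihood": 5, "impact": 2},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest score first: these are the areas to test most thoroughly.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"{r['score']:>3}  {r['risk']}")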

Once the risks have been prioritized, the testing team can focus on testing the areas of the software that are most at risk. This will help to ensure that the most important areas of the software are tested thoroughly and that the risks associated with the project are minimized.

Here are some of the benefits of using risk analysis for testing software:

  • It can help to identify and prioritize risks.
  • It can help to ensure that the most important areas of the software are tested thoroughly.
  • It can help to minimize the risks associated with the project.
  • It can help to improve the quality of the software.
  • It can help to save time and money.

The goal of risk analysis is to proactively address potential issues and increase the chances of project success.

Here are some key aspects of risk analysis in software development:

  1. Risk Identification: This involves identifying potential risks that may arise during the software development lifecycle. Risks can include technical challenges, resource constraints, unclear requirements, schedule delays, or changes in project scope. Various techniques like brainstorming, checklists, and historical data analysis can help identify risks.
  2. Risk Assessment: Once risks are identified, they need to be assessed in terms of their probability of occurrence and potential impact on the project. Risks are typically evaluated based on their severity, likelihood, and detectability. This assessment helps prioritize risks and focus on those with the highest potential impact.
  3. Risk Mitigation: After assessing the risks, strategies and plans are developed to mitigate or reduce their impact. This involves implementing measures to avoid, transfer, accept, or mitigate the risks. Risk mitigation strategies can include adopting alternative technologies, adjusting project schedules, allocating additional resources, improving communication, or setting up contingency plans.
  4. Risk Monitoring and Control: Throughout the software development lifecycle, risks need to be continuously monitored and controlled. Regular risk assessments are performed to identify new risks that may emerge or changes in the severity of existing risks. Monitoring helps ensure that risk mitigation strategies are effective and that new risks are addressed promptly.
  5. Risk Documentation: It is important to maintain documentation of identified risks, their assessments, mitigation strategies, and outcomes. This documentation provides a historical record of risks encountered during the project and serves as a reference for future projects.
  6. Iterative Approach: Risk analysis is an iterative process that is performed at various stages of the software development lifecycle. As the project progresses and new information becomes available, the risk analysis is updated and refined.

By incorporating risk analysis into the software development process, organizations can proactively address potential issues, enhance decision-making, allocate resources effectively, and improve the chances of delivering a successful software product.

Top 10 Ideas for Successful Continuous Testing

Continuous Testing in 2022: it's the stuff that Continuous Delivery is built upon. Do Continuous Testing (CT) right, and you deliver software at pace. Do it poorly, and you get nothing but a script that delivers crappy software just like we always have. CD without CT is just a pipe(line?) dream.

So how do you do it right? Let's take a look at 10 ideas that make Continuous Testing effective and successful:

  1. Automate your acceptance criteria
    • Start with great acceptance criteria. Now take test automation and prove that you did it. Don't stop short; don't wave your hands and say it will work. Prove it with a test. Check it in with the code and run it every time you build. This is your oracle for the future. This is where you start putting coins in the bank.
  2. Make your tests focused and fast
    • Forget end-to-end if you can. Focus each test on a single thing. Sure, write great unit tests that help show function-level quality, but strive to create tests that prove working components and system-level functionality (not system integration). Make your tests fast. Make them small, with limited scope. Get in, prove it, clean up, and get out. If you are doing lots of setup just to get to your point, you have failed. Think testability and architecture. Refactor so that your test is small, focused, and fast.
  3. Develop and live by a definition of done that includes testing in your sprint
    • Your team must live by the rule that you are not done until you have delivered test automation that proves your code is going to work. Sure, that might not happen, but that is the cliff we march to. We may find out that our test was not good enough, but the sprint is not done until the completed code, and the tests to go with it, are checked in and passing. No excuses.
  4. Make the entire team responsible for quality
    • People say this all the time and don't live by it. Too often we sit around waiting for the quality guy or gal to get the testing done. Wrong. If you are waiting for testing to complete, you are doing it wrong. Complete the testing. Make the testing faster. Do the test yourself. Build better testing. Make the framework measure. Your job is to deliver new features with high quality. Whatever your expertise is, use it. You are responsible. I once heard that the best way for a developer to get better at testing is to give them a pager. Give everyone a pager.
  5. Get skilled. Quit pretending that record and playback works
    • Laugh at the sales guy who brings the record-and-playback “we can make automators out of everyone” BS. It's a lie. Always has been, always will be. Get some skills on your team – it takes code and hard work by well-paid professionals. Big projects will have quality roles for SMEs and analysts with deep product knowledge, but we are talking about Continuous Testing here. Automation. It takes engineering – don't lie to yourself or get lied to by vendors.
  6. Tell your sponsors that systems without tests will not be delivered
    • Hey, PM or Product Owner: we write quality code. To do that, we have to write tests. It's part of our estimates. We won't be done with the sprint without the test automation. There is no CD without CT. Get over it. No, we will not start working on another feature until this one is done, tested, automated, and complete. If you need us to skip test automation, re-read this item.
  7. Listen to the tests
    • Thou shalt not comment out, delete, cripple, or ignore your tests. You built them. If they are complaining, make them better. If they are failing, listen to what they say.
  8. Fight fragility. Mock, isolate, make the tests small
    • Fragility is your enemy. Make your tests boringly pass all of the time. If fragility is your problem, think testability, observability, and architecture. Take the battle to the system under test and the way it is designed if there really is no way to stabilize your tests. Mock and isolate your testing using responders or other service virtualization if you can (see the sketch after this list). Resist giant integration tests when small tests will cover the risk. If they are fragile, get everyone in the room and figure this one out – it's too expensive to live with.
  9. Co-exist your tests
    • Never, ever let your test automation live in another place. Tests are first-class citizens that should live with your production code (I didn't say deployed with it) and should most often be written in the same languages the system is. Everyone should be able to fix them. Never let there be an excuse of “I don't understand that code” or “that harness is in another solution, get Jane/Jack to look at it”. Nope. Not Continuous Testing. Let them co-exist.
  10. Monitor everything, measure FTW
    • Monitor your systems. Measure your testing. Continuous Testing really works when you instrument your systems, from production down, so machines are watching for things that help you quickly diagnose and fix problems you didn't initially test for. When you find problems this way, maybe you should go back and write some test automation. FTW.
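
To illustrate idea #8, here is a minimal sketch – the checkout function and payment gateway are invented for this example – of isolating a dependency with a mock so the test stays small and boringly stable:

# test_checkout.py - isolate a slow or fragile dependency behind a mock.
# The checkout logic and gateway interface are illustrative assumptions.
from unittest.mock import Mock

def checkout(cart_total, gateway):
    # Charge the gateway and translate the result into an order status.
    result = gateway.charge(cart_total)
    return "confirmed" if result["approved"] else "declined"

def test_checkout_confirms_on_approved_charge():
    gateway = Mock()
    gateway.charge.return_value = {"approved": True}  # no network, no fragility

    assert checkout(19.99, gateway) == "confirmed"
    gateway.charge.assert_called_once_with(19.99)

No network, no shared environment, no fragility: the test proves the component's behavior and nothing else.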

HOW TO: PID Auto-Tuning for Ender 3 and Other 3D Printers

Swings in temperature in your 3D printer's hot end, such as on the Ender 3, are just plain no good for quality print results. Steady, controlled heat is what you are looking for, and getting it right can be great for your print results. PID auto-tuning is a way to control the temperature by using an algorithm to determine the values the printer uses to heat and maintain temperatures. Below you will find the instructions to set your PID values. This method changes the values stored in your printer, which are used every time it heats. It is great for setting the PID values if you use very similar filament and cooling settings almost every time you print. If you use a lot of varying filament, or use cooling on some prints and not on others, you will want to modify your slicer's printer settings to set the PID values for each configuration instead. Let's take a look:

  • First off, use a terminal command processor to send commands to your printer – such as OctoPrint, Repetier Host, or Simplify 3D.
  • Start your printer in a cooled state, with the material you are going to use (such as PLA) primed in the hotend – either from a previous print, or by heating the printer, pushing through a few inches of filament, and letting it cool back down.
  • Start the cooling fans if you intend to use them as part of the results you want from the PID test. Send the command M303 E0 S205; to the printer for a target temperature of 205 °C – change the S value to whatever temperature you want stable heating at – like this:
M303 E0 S205;
  • The printer will take about 5 minutes to run through the auto-tune test.
  • When it is complete, Marlin will spit out the test values for P, I and D looking something like this near the end of the output:
Recv: PID Autotune finished! Put the last Kp, Ki and Kd constants from below into Configuration.h
Recv: #define DEFAULT_Kp 27.44
Recv: #define DEFAULT_Ki 3.60
Recv: #define DEFAULT_Kd 52.26
Recv: ok
  • Now tell your printer that you have new defaults by sending in the P, I, and D values you received from the test. In my example, I send the values like this:
M301 H1 P27.44 I3.60 D52.26;
  • And it returns a success message looking like this:
Recv: echo: p:27.44 i:3.60 d:52.26
Recv: ok
  • Next up, you will want to save your settings to firmware, or you will lose them the next time you cycle the power. Send the save-settings command like this:
M500;

There you go – you should be all set with stable PID settings that help your printer produce better prints. A couple of quick things to note:

  1. Results can vary – re-running the whole thing a few times will give you interesting variations in the values returned. The first time I ran this on a printer, the resulting values produced oscillating temperatures (around ±4 °C), which is a little too much. You are looking for tight temperature ranges – I was happy with the settings above, which stayed solid in the 204–206 °C range. Re-run the test a few times and you may find a set of values that really tightens things up for you.
  2. Remember, if you are swapping in another brand of filament, a different type of filament (like going from PLA to PETG), or using part-cooling fans vs. none, you will want to either re-run this test and store the new values in firmware until you change them again, or send the M301 command in your printer profile with the P, I, and D values for that configuration each time you print. The second method takes a little more work, but ensures that the settings are correct for the config you intend to use.
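  3. If your firmware supports it, you can ask the auto-tune to average over more heating cycles with Marlin's C parameter – for example, M303 E0 S205 C8 for eight cycles. More cycles generally means a more stable result at the cost of a longer test; check your firmware's documentation to confirm the parameter is available on your build.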

That’s it for today, if you have a comment or tip leave it below – we would love to hear from you. Happy printing!

Testing in Production

This is a topic that always draws some great responses when discussed where I work. Do you test on your production systems?

I always come to the same conclusion on this one. Why wouldn’t you want to test in production? I know, I know. Your system is too “special” or “secure” or “regulated” or whatever to be able to test in production. So what are you going to do? Let your customers test it for you? Throw the code over the wall to the people that matter most and hope that it works for them? Take the chance that your customer will just understand when the house of cards comes crashing down in a burning heap of lame?

To those that say it just can’t be done, I say that maybe your system is just lacking testability – you haven’t built it right. To me a testable system is one that has a great handle on control and is inherently observable. If you can’t control and observe the software, you are dead out of the gate. Often, if you solve the control and observation issue, you will find a system that you can test in production – because you engineered it to be easy to do so in any environment.

So take a look at your systems and ask yourself if there are any measures you can take to improve their testability in a way that would let you test in production. Face it: no matter how you try, your QA systems will never be the same as your production systems. The data, traffic, configurations, scale, timing, etc. will just never match well enough for the tests you run in those environments to catch everything. Get control, make it observable, and make sure your system works in production before your customer does it for you!
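
As a rough illustration – the endpoint, the header convention, and the response shape below are assumptions invented for this sketch, not a standard – a synthetic production check built on control and observability might look like this:

# prod_smoke_check.py - a synthetic transaction run against production.
# The URL, header, payload, and response shape are illustrative assumptions.
import requests

PROD_URL = "https://app.example.com/api/orders"

def synthetic_order_check():
    # Control: tag the request so production can recognize synthetic traffic
    # and keep it out of real business metrics.
    headers = {"X-Synthetic-Test": "true"}
    response = requests.post(
        PROD_URL,
        json={"sku": "TEST-SKU-001", "qty": 1},
        headers=headers,
        timeout=10,
    )
    # Observability: assert on what production actually did.
    assert response.status_code == 201, f"unexpected status {response.status_code}"
    assert response.json().get("status") == "accepted"

if __name__ == "__main__":
    synthetic_order_check()
    print("production smoke check passed")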

Centralized vs. Federated Integration Test in the Enterprise

I have been working on the question of federated vs. centralized integration test practices in the enterprise lately. As I have done some research, I have found that few resources are available on the topic. While some white papers exist, it appears that most companies are in the federated camp, relying on individual divisions to create their own integration test strategies – even when there are many ties among their applications that could benefit from a centralized approach.

Some companies, like Google, have extremely large tests that involve many applications, and even automate them to some extent. Most, though, including the ones I have worked for, spend time testing software from within their respective silos in an effort to protect their own turf. Each of these groups tends to create and maintain redundant sets of tests that cover its application's needs.

The problem is that many of these needs are shared with many of the other groups, and a great deal of redundant, poorly performing tests get written. Every group creates a test to “create a user and password”, for instance. Each is created in its silo, and when the functionality changes, each breaks in its own way. Tests that do things as trivial as this – and, of course, much more elaborate things – are created all the time when they could be shared (see the sketch below).
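
As a sketch of the centralized alternative – the admin endpoint and payload below are invented for illustration – one shared, centrally maintained helper replaces every silo's private copy:

# shared_test_lib/users.py - one shared helper instead of a
# "create a user and password" test duplicated in every silo.
# The admin API endpoint and payload shape are illustrative assumptions.
import requests

def create_test_user(base_url, username, password):
    # When the user-creation flow changes, this one function is fixed once,
    # instead of every team's private copy breaking in its own way.
    response = requests.post(
        f"{base_url}/admin/users",
        json={"username": username, "password": password},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["id"]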

Creating a centralized integration test group may fix this redundancy issue and help protect production quality as you do so. Sharing resources, test data management, and testing know-how might be a way to create a group that solves the issue of poor communication across your organization when it comes to system integration testing. This one set of testers will help build the “moat” that protects your production castle from impending doom.