Monday, May 13, 2019

QA Strategy: Things a Developer Can Work on During the End of the Sprint


Motivation

I call this QA Strategy because sometimes QA has to fight the battle of preventing development from "getting ahead" or, as I'd call it, "violating core agile principles."  One place I worked, everyone seemed OK with having sprint planning on the Friday at the end of the sprint, even if QA was still testing the current sprint's work... a battle I lost.  The devs apparently weren't creative enough to come up with things they could do to help the team/project/company that didn't involve writing application code.  Not that it made me bitter or anything.

Ideas

Roughly in the order of what I'd request if I were the QA manager:
  1. Testing
    1. ...other devs' code of course
    2. Helping QA do their testing
    3. Helping with automation (but not preventing QA from learning/doing automation)
  2. Prepping your demo (if devs do demos)
  3. Documentation
    1. Code comments
    2. Documenting features on the wiki
    3. Organizing / maintaining the wiki
  4. Writing unit tests
    1. Should be done in-sprint prior to giving to QA, but sometimes this doesn't happen
    2. Backfilling unit tests for old code
    3. Fixing old broken unit tests (let's be real)
  5. Writing in-house tools
    1. Integrations to things like Jira, cloud tools, analytics tools, etc.
    2. CI/CD tools
    3. (Test) data creation tools...
  6. Knowledge sharing
    1. Silo busting with other developers
    2. Training other groups in the organization (account managers, support center staff, sales team)
    3. Learning from other groups in the organization (account managers, support center staff, sales team)
    4. Mentoring junior devs
    5. Mentoring people from other groups in the org who want to be devs
  7. Learning new skills
  8. Simple refactoring

Wednesday, March 6, 2019

Language: Necessary and Sufficient

Motivation


This is a nice piece of language I learned in school long, long ago.  It's very useful when talking about requirements or specifications for a project, acceptance criteria for a user story, etc.  Bonus: it makes you sound smart.

Example


The long and short of it is that what you want in order to work efficiently and effectively is the necessary and sufficient information (or tools or whatever).  Since I like working on cars, I'll use the simple example of changing a tire.

If you have the following, you have the necessary and sufficient toolset:
  • a new tire
  • a lug wrench
  • a car jack

If you have the following, you have tools that are necessary, but not sufficient:
  • a new tire

If you have the following, you have tools that are sufficient, but not necessary:
  • a new tire
  • a lug wrench
  • a car jack
  • a hammer

If you have the following, you have tools that are not necessary, and not sufficient:
  • a hammer

Recap


Necessary defines the lower bounds of what you need while sufficient defines the upper bounds.  If you have both, you have the ability to do the work completely while being perfectly efficient.

Friday, January 4, 2019

Email: Changes to Automation That Fix TestA but Break TestB

Summary

As our Selenium automation team grew, it began to feel like we might be fighting with each other over element locators and library methods.  To try to avoid this, I recommended we look at git annotations while making these little code changes.

--------------------------------------------------------------------------

Team,
I was just going to send this to Andy and Victor, but it might be useful to more of us as more people contribute to the same codebase.  Hopefully this will be a helpful strategy to prevent us from breaking one test while fixing another.

This came to mind when I saw Victor's comment on a pull request:

Victor said "I will check all occurrences again."  This could take a lot of time, and sometimes it's very difficult to even determine all the tests that use a certain library method or page element.
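
As a starting point, a plain text search over the test sources will at least list every file that references the thing you're about to change.  Here's a minimal, self-contained sketch — the directory layout, file names, and the "submitOrderButton" locator are all made up for illustration; in real life you'd run the grep against your actual repo (or use IntelliJ's Find Usages):

```shell
# Set up a throwaway demo tree (hypothetical names, for illustration only)
mkdir -p demo/src/test
printf 'page.click(OrderPage.submitOrderButton);\n' > demo/src/test/TestA.java
printf 'page.click(OrderPage.submitOrderButton);\n' > demo/src/test/TestB.java

# List every test file that references the page element you're changing
grep -rln "submitOrderButton" demo/src/test
```

This catches direct references, but not indirect ones (a test calling a library method that uses the locator), which is exactly why the git-history check below the search is still worth doing.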


These little changes are at the front of our minds in OMS since we're going to be running tests over and over, trying to get them to pass consistently and fixing the little things that break them.  In the example above, if two tests use the page element that Victor is changing, it's entirely possible that the correct locator is different in the scenarios those two tests cover.  I think a good way to avoid fixing testA but breaking testB is to use Annotations in IntelliJ: just right-click in the gutter between the line numbers and the code area and choose Annotate.

If I were changing one of these page elements and it was last updated several months ago, I would feel pretty confident that my change wasn't going to break any other tests.  But if the last-changed date was a couple of weeks ago, I should ask that person, or just look at their commit.  IntelliJ makes that very easy too: just right-click on the date or the person's name and choose Show Diff.

If you want to see the git commit message (this is why we write good commit messages!), you can just hover over the date or the person's name, or right-click in the annotations gutter and choose Select in Git Log.
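
If you're not in IntelliJ, the same information is available from the git command line — Annotate is a UI over git blame, and Show Diff / the Git Log view correspond to git log.  Here's a self-contained sketch using a throwaway repo with made-up names (alice, LoginPage.java, the locator line); in real life you'd just run the blame and log commands in our automation repo:

```shell
# Build a tiny throwaway repo so the commands below have something to show
git init -q blame-demo && cd blame-demo
echo 'By submitButton = By.id("submit");' > LoginPage.java
git add LoginPage.java
git -c user.name=alice -c user.email=alice@example.com \
    commit -q -m "Add login page locator"

# Who last touched each line, and when? (equivalent of IntelliJ's Annotate)
git blame LoginPage.java

# Full diff plus commit message for the last change to that file
# (equivalent of Show Diff / Select in Git Log)
git log -1 -p -- LoginPage.java
```

The blame output shows the author and date per line, so a quick scan tells you whether the locator you're about to change was touched recently enough to warrant a conversation first.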

This won't work all the time, but it should keep us from thrashing – where one person fixes testA, breaking testB, and then someone else fixes testB, breaking testA.

If anyone else has bright ideas on this, feel free to share them here...