Friday, September 20, 2013

Exploring, Problem Solving, and Looking for Trouble

Here I briefly discuss the kinds of testing I plan to do for this blog, and the kinds you could not pay me enough to do.

Exploring

A type of testing that is often ignored is what I think of as "exploring": watching what happens when a user explores an application for the first time and tries to figure out what it does and whether they can do anything useful with it. There are hundreds if not thousands of free apps out there for people to wade through. Many users will download an application, try to use it, and decide whether to keep it in a very short period of time. In general, the more a person knows about an application, the worse they are at figuring out what will confuse or frustrate a new user. If you are creating an application you should solicit user feedback as early as possible (especially to make sure you are building the right thing in the first place) and from a broad range of users. (And when you think your UI is pretty good, hand it over to a family member who didn't have a computer until they were an adult and tends to get annoyed rather than excited by new technology. I think most software engineers would find that a real eye-opener.)

Some companies do a reasonable job of usability testing; they want to know how easy the software is to use. However, the usability testing I have seen often involves giving the user a specific set of tasks to perform and seeing where they get into trouble. At a minimum you need some beginning users and some more experienced users. (If you are lucky, the customers talk out loud while they are doing this, so you can learn more about what they are trying to do or expecting to see.) I have done this type of testing once for a college course and seen the results from one such study at Microsoft, which was really fascinating. I wish more companies did this, especially in the early stages of the design process, when they are still capable of making significant changes based on the feedback they get from customers.

(Aside: If you have never worked in software you are probably thinking, "Why would anyone spend money to do usability testing if they weren't going to use the results to improve the UI (at least not any time soon)?" I will just say that that is a very good question. A friend once said to me something along the lines of, "Only people who don't work in software think that the things that happen in the Dilbert comic are ridiculous, and could never happen," which I think is very apt. Topic: The posting of Dilbert comics at software development companies as a form of passive-aggressive behavior: discuss!)

Exploring is the basic form of testing that I will often do on this blog for applications I have never used before, or for applications that I think confuse the user early on (once I get the hang of an application and try to customize it for my use, I often end up in "problem solving" mode, described below). Of course, how users explore an application is going to vary widely. If I don't know anything about the application, I am going to see if I can figure it out by looking at visual clues and menu items. If the application allows me to make something, I will try to make a basic form/document/spreadsheet/project (Hello world!) from start to finish (an easy form of end-to-end testing) and see if it "works" (whatever that means in the context of the application). (You would think that this type of testing would be done occasionally on every product and after every new build of a product, but based on many of the server errors I have seen, I can guarantee that this is not the case.)
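To be concrete about that parenthetical "easy form of end-to-end testing," here is a minimal sketch of the kind of "Hello world" smoke test you could run after every build. It is purely hypothetical: the file system stands in for whatever create/save/reopen path a real product would have, since I don't have any product's actual API in front of me.

```python
# A rough, hypothetical sketch of a "Hello world" smoke test: create the most
# basic document possible, save it, reopen it, and check that it survived.
# The file system stands in for a real application's save/open path.
import tempfile
from pathlib import Path

def hello_world_smoke_test():
    with tempfile.TemporaryDirectory() as workdir:
        doc_path = Path(workdir) / "hello.txt"

        # "Create" and "save" the simplest possible document.
        doc_path.write_text("Hello world!")

        # "Reopen" it and confirm the round trip didn't lose or mangle anything.
        reopened = doc_path.read_text()
        assert reopened == "Hello world!", "round trip lost or mangled the text"

    print("smoke test passed")

if __name__ == "__main__":
    hello_world_smoke_test()
```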

As soon as I know what the application basically does, I start to build a map in my head of the features I expect it to have (of course, how much the application costs factors into this) and try to figure out whether the application can do what I want it to do ("I WILL BEND THIS APP TO DO MY WILL <evil laugh>"). If the application is free, I will be looking for clues about the differences between the free and paid versions (I don't want to spend a lot of time on something and then realize that the free version doesn't allow me to save my work, for example). Perhaps I want something quick and easy to use but don't care if it has all of the features of a desktop application. Perhaps the app is useless to me if I can't customize it by changing the order of items, creating categories, changing colors, etc. Like many other users, I am weighing how useful the app is to me against its cost in money and time.

I consider myself pretty good at finding places where people will get confused, whether I am familiar with the software or not, so I am willing to be a test subject ("guinea pig") who looks at software and blogs about it. I am just barely arrogant enough to think that if I can't figure out how to use your application on my first try, then the application is more at fault than I am, and it is going to confuse many other people in the same way the first time they try to use it. (I consider my approval necessary but not sufficient.)

Problem Solving

Sometimes I have used an application before but decide to try to do something slightly different with it. If the new task is really easy and works about how I expect it to, then I'm done (if the UI surprises me because it is even easier and cooler than I expected, then I may give it a shout-out someday). However, if I am having trouble figuring out whether the application can do what I want, or how to do it, then I usually jump into problem solving mode. I have a specific task I am trying to achieve -- I am not trying to explore features or break the UI, at least not on purpose (the title of the great movie The Accidental Tourist is now running through my mind). Now that I have a blog, I will try to document all of the ways I try to accomplish my task (and what is going through my mind at the time) before I succeed or give up. Consider it an informal kind of usability testing where I get to pick my own set of tasks. I find a lot of UI bugs this way.

Looking for Trouble

"Looking for trouble" is a term I am hijacking to explain a type of testing that is my specialty on products that I actually work on. When I look for trouble I try to think about what things could go wrong and see if I can break the UI by trying them singly or in inventive combinations. I am naturally really good at this but haven't had much luck teaching other people this skill (which is why I joke about it being a minor superpower a.k.a. minorpower). I think being good at looking for trouble requires a certain amount of creativity, attention to detail, stubbornness, a mischievous spirit, and at least some basic software engineering knowledge (it helps to understand what edge cases are, how easy it can be to end up with an off by one error, and what happens when values are passed by reference or by value). Most companies don't specifically assign someone to do this, or don't have a minorhero like me on staff or as a friendly neighborhood beta tester. In my opinion that is one of the reasons they make obvious glaring UI mistakes.

I don't expect to use this type of testing much on this blog because I don't usually have to look for trouble -- trouble finds me (usually when I am "problem solving", described above). I am usually trying to use an application, innocently minding my own business, when I run into a mire that makes it impossible to complete my task the way I had hoped (at least not without feeling like I am navigating an obstacle course). I know that I have amazing powers when it comes to breaking UI, but make me work for it, people! I think that if this type of testing were done occasionally before an application was released to customers, the released application would provide a much better user experience. Surely there is a way to reward software developers for finding these bugs without encouraging them to write themselves a minivan.

Other Black Box Testing

I doubt I'm going to give any part of an application a thorough black box test unless the product has such great, unrealized potential (because the UI is so frustrating) that I can't help myself. (On a totally unrelated note, I have signed up to be a UI beta tester for Google. Wish me luck.) If I am spending a lot of time testing something even after it has seriously disappointed me, I will probably think of it as "Please, please let this application work the way I want it to because all the other similar apps are worse" testing, although I don't expect that name to catch on. However, if someone asks me politely for help and I feel like they will actually make changes based on my suggestions, I am a total sucker -- but don't tell anyone.

Some Words About White Box Testing

Obviously, since I do not work on any of the applications I expect to test, I can't perform white box testing on them. However, I have some words.

Every software engineer should have a solid understanding of the different types of testing, even if they haven't performed all of those kinds of tests themselves. If someone tells you that they are a software engineer but claims that they do not spend any time testing, then you should mentally add quotes to their title (e.g. "software engineer"). There are many kinds of white box testing, such as unit tests, functional tests, etc., that I have done and will gleefully do again whenever I make code changes or as otherwise needed. Good times.
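For anyone who hasn't written one, here is roughly the shape of the unit tests I am talking about: a tiny, made-up helper function and a few checks on it using Python's built-in unittest module. It is a sketch of the idea, not code from any product I have worked on.

```python
# A minimal, hypothetical unit test example using Python's built-in unittest.
import unittest

def word_count(text):
    """Count whitespace-separated words in a string (a made-up helper)."""
    return len(text.split())

class WordCountTests(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("hello brave new world"), 4)

    def test_empty_string(self):
        # Edge case: splitting an empty string yields an empty list.
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        # split() with no arguments collapses runs of whitespace.
        self.assertEqual(word_count("  hello   world  "), 2)

if __name__ == "__main__":
    unittest.main()
```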

However, there are a lot of other kinds of white box testing, required when you are dealing with large-scale, high-availability web applications, that I have no practical experience with. It is very difficult to really learn how to do that type of testing, or to have the resources you need to do it for a large web application, except by being hired as a tester and learning on the job. I have no desire to ever do that sort of testing (you couldn't pay me enough), but I'm glad there are a lot of competent people out there working on it so I don't have to.

I am very grateful that there are software engineers out there who specialize in testing and know how to make sure large-scale applications don't break when millions of users try to use them at once. They make sure that when I click "send" or "enter" on a high-traffic site, something happens in a reasonable amount of time. They make sure my data is secure when I enter personal information on a website. They try to protect websites from a constantly evolving set of attackers. They get very little credit when things go right and lots of blame when things go wrong (sort of like air-traffic controllers - one of the jobs I would least like to have). So I just want to take a brief opportunity to say, "Yay, white box testers!"
