Thursday, July 30, 2009

A Lesson In Defect Characterization - How A Twitter Security Enhancement Created a Twitgoo Problem

Sometimes defects aren't what they appear to be. I'm sure developers have countless stories of how defects they set out to fix were not at all like the defect report described. We ran into just such a situation recently while testing a Twitter-enabled iPhone App - Photo Tweet. Photo Tweet would mysteriously, and seemingly arbitrarily, fail to upload photos to Twitgoo. It was one of those situations where one day a feature works and the next day it fails. And, of course, the developer had not made any changes.

When you find a bug and the developer claims not to have made any changes, it's best to quiz the developer with "what-if" questions. That allows you, as a tester, to avoid calling the developer's integrity into question. I usually try questions like this:
  • What if during last night's build, an older version of a component was used?
  • Even though you didn't make a change for that feature, what if some other change caused this bug?
Usually this sort of brainstorming session can tease out the clues that lead to the root cause of a mysterious bug. If that doesn't work, then you're left trying to find out what else could have contributed to the defect. Such was the case with Photo Tweet and its inability to upload photos after a period of time. And so, a bug was written up saying that Photo Tweet would stop uploading photos arbitrarily.

Subsequent conversations with the developer, including some what-if questions, led us to believe that no changes had been made to this feature and that the bug resided outside the developer's code. Since Photo Tweet uploads its photos to Photobucket's Twitgoo service, we suspected they had instituted some sort of throttling for uploads. This is reasonable to expect given the popularity of Twitgoo. Maybe they just didn't want rogue apps flooding their service, and apps like Photo Tweet were paying a penalty for the nefarious activities of others. We could not have been more wrong about the root cause of our defect.

We reported our defect to Twitgoo and asked if they had instituted an upload limitation. I was amazed at how fast these folks responded and how helpful they were. Initially they were stumped, since this was the first they had heard of the problem and they HAD NOT put a limit on uploads. At that point, I expected that they could not help any further and we would be left having to do more debugging. But instead, Michael P. Clark, their SVP of Technology and Partnerships, rallied his technical folks to find out what was at the heart of this problem. The next day I got an email from Justin Hart, a Sr. Engineer for Photobucket, informing me that they had discovered an unannounced change to Twitter's service. Twitgoo relies on Twitter for those users wanting to "tweet" the photos they upload to Twitgoo. It turns out that Twitter had made a security enhancement. Michael and Justin found this issue very quickly and pointed me to a threaded discussion concerning what Twitter did. Much to Twitter's credit, they quickly made this issue public and extended an apology and an offer for input from developers. Here's a snippet of their statement:
A change shipped last week that limited the number of times a user could access the account/verify_credentials method [1] in a given hour. This change proved hasty and short-sighted as pointed out by the subsequent discussion [2]. We apologize to any developer that was adversely affected. Given the problems, we want to fix this in a public and transparent manner.
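In practical terms, any client that called account/verify_credentials liberally - say, before every photo upload - could suddenly start failing partway through an hour. One defensive pattern (a sketch only, not what Photo Tweet or Twitgoo actually does) is to cache a successful credentials check instead of re-verifying on every call. Here's a minimal Swift illustration, with the endpoint URL, cache interval, and omitted authentication all treated as assumptions:

```swift
import Foundation

// Sketch only: cache a successful credentials check instead of calling
// account/verify_credentials before every upload, so an hourly limit on the
// endpoint can't silently break uploads partway through the hour.
// The URL, cache interval, and (omitted) authentication are assumptions.
final class CredentialVerifier {
    private var lastVerified: Date?
    private let cacheInterval: TimeInterval = 15 * 60  // assumed 15-minute cache

    func verifyIfNeeded(session: URLSession = .shared,
                        completion: @escaping (Bool) -> Void) {
        if let last = lastVerified, Date().timeIntervalSince(last) < cacheInterval {
            completion(true)  // recently verified; skip the network call
            return
        }
        let url = URL(string: "https://api.twitter.com/1.1/account/verify_credentials.json")!
        session.dataTask(with: url) { _, response, _ in
            let ok = (response as? HTTPURLResponse)?.statusCode == 200
            if ok { self.lastVerified = Date() }
            completion(ok)
        }.resume()
    }
}
```

The specifics don't matter much; the point is that a client which re-verifies on every action is the first one to fall over when a per-hour limit quietly appears.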
We learned a lot from this experience. First, the folks at Twitgoo are fantastic and serious about providing a great customer and technology partner experience. Second, some defects are certainly not what they appear to be. Third, when testing products that use APIs that connect to services that connect to other services, it's hard to determine root causes no matter how well-intentioned your defect characterization process is. Finally, as you investigate why a defect occurs, don't assume you know the root cause until you have all the facts.

Wednesday, July 29, 2009

Agile Testing Framework For iPhone App Development

As we continue developing and adding to our iPhone App Testing Lab, we are on the lookout for helpful testing tools. While doing some research on iPhone App testing tools, I ran across a post by a developer asking for help in this area:
My iPhone app has grown to the point where I need some sort of automated testing to keep me from breaking things whenever I make a change. I thought Instruments would do the trick, but I can't seem to make it work. What do you use for automated regression testing?

This is an appropriate question to ask, but one where I suspect the answer offers only a limited set of choices. In fact, the only tool we've come across, and posted an article about, is yet to be delivered (Squish Support For Automated GUI Testing on Apple's iPhone and iPod Touch).

Fortunately, as we all share information in blogs and forums, others point out additional solutions. In particular, one of our readers pointed me to uispec, Behavior Driven Development for the iPhone. I especially liked his comment: "It's open source too." That's always an attention getter for those of us looking for useful tools.

The author of this tool describes it as a "Behavior Driven Development framework for the iPhone that provides a full automated testing solution that drives the actual iPhone UI. It is modeled after the very popular RSpec for Ruby." For those of you interested in learning more about this type of Agile development framework, take a look at Behavior Driven Development on Wikipedia.

When you visit the uispec link, you'll find documentation, installation instructions and examples. It looks like a great way to build in BDD testing for those of you who do continuous iPhone app development.
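To give a feel for what a behavior-driven UI test looks like, here is a minimal sketch. It uses Apple's XCTest UI testing API in Swift rather than UISpec's Objective-C syntax, and the screen elements ("Add Photo", "Upload", "Upload complete") are hypothetical - see the uispec documentation for the real matchers and syntax:

```swift
import XCTest

// Illustrative only: the general given/when/then shape of a behavior-driven
// UI test, written against Apple's XCTest UI testing API rather than UISpec's
// Objective-C syntax. The element labels are hypothetical.
final class PhotoUploadBehaviorTests: XCTestCase {
    func testItShouldConfirmAfterUploadingAPhoto() {
        let app = XCUIApplication()
        app.launch()

        // Given the user has picked a photo
        app.buttons["Add Photo"].tap()

        // When they upload it
        app.buttons["Upload"].tap()

        // Then a confirmation appears
        XCTAssertTrue(app.staticTexts["Upload complete"].waitForExistence(timeout: 10))
    }
}
```

The value of specs like this is that they read as behavior ("it should confirm after uploading a photo") rather than as a script of taps, which is exactly the RSpec style uispec is modeled after.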

Thursday, July 23, 2009

Testing For iPhone App Memory Problems

Problems with memory management have got to be the #1 issue in iPhone app development, and they present one of the biggest challenges to testers. Our experience testing over a half dozen iPhone apps in the past few months has reinforced the need for developers to nail down their memory management techniques early and for testers to create test cases that stress the way an iPhone app handles memory. So far the process has been bumpy, but we've arrived at some basic memory management testing concepts along with recommendations for how developers can improve their memory management techniques.

Basically there are 2 types of memory problems to watch out for:
  1. Primary Memory Leaks (in the "sandbox")
  2. Secondary Memory Build-Up (out of the "sandbox")
Primary memory leaks are the most common and the most troublesome to developers. These leaks create the mysterious crashes seen in iPhone apps early in development and are the hardest to document in a bug report, mainly because primary memory can change out from under the developer depending on other apps such as Safari and Mail. Safari can use up a lot of memory, especially if multiple pages are open. Primary memory leak bugs often seem to go away after simply restarting the application.
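To make "primary memory leak" concrete, here is a hypothetical example written in modern Swift for illustration (in 2009-era Objective-C the equivalent bug is an unbalanced retain/release): a view controller keeps itself alive through a retain cycle, so every visit to the screen leaks a controller and its image data until the app crashes under memory pressure.

```swift
import UIKit

// Hypothetical "primary" leak: the controller stores a closure that captures
// it strongly, so every visit to this screen leaks a controller and its image.
final class PhotoViewController: UIViewController {
    var image: UIImage?
    var onUploadFinished: (() -> Void)?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Bug: `self` owns the closure and the closure captures `self` strongly,
        // creating a retain cycle. The controller is never deallocated.
        onUploadFinished = {
            self.dismiss(animated: true)
        }
        // Fix: break the cycle with a weak capture:
        // onUploadFinished = { [weak self] in self?.dismiss(animated: true) }
    }
}
```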

Secondary memory build-up creates situations that, over time, seem to leave a permanently smaller amount of memory available for the app to run in, i.e. smaller primary memory. This does not come about because of memory leaks per se, and hence is harder to find with memory checking tools. As opposed to primary memory leaks, crashes or memory error alerts for this type of problem will seem reproducible. Consequently, the tester will write up repeatable steps leading up to the crash or memory alert, only to find that, after restarting the iPhone, those steps no longer reproduce the problem.
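Whichever category a report falls into, one thing the developer can do on their side is make the app resilient to a shrinking memory budget by dropping caches when the OS issues a memory warning. A minimal sketch, assuming a hypothetical image cache (the class and method names are ours, not from any particular app):

```swift
import UIKit

// Respond to the system's memory warning by dropping cached images, so the
// app can keep running even when other processes have shrunk the memory
// available to it. Hypothetical cache, for illustration only.
final class PhotoCache {
    static let shared = PhotoCache()
    private let cache = NSCache<NSString, UIImage>()
    private var warningObserver: NSObjectProtocol?

    init() {
        // NSCache already evicts under pressure; observing the warning lets us
        // clear everything eagerly.
        warningObserver = NotificationCenter.default.addObserver(
            forName: UIApplication.didReceiveMemoryWarningNotification,
            object: nil, queue: .main) { [weak self] _ in
                self?.cache.removeAllObjects()
        }
    }

    func image(forKey key: String) -> UIImage? {
        cache.object(forKey: key as NSString)
    }

    func store(_ image: UIImage, forKey key: String) {
        cache.setObject(image, forKey: key as NSString)
    }
}
```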

Our approach to rooting out these types of bugs is first to identify which category they fall into, and then to employ exploratory techniques to find reproducible steps. However, these memory issues can be elusive and, as stated above, often lead to misleading bug reports. The main thing is to communicate to development what type of memory issue you're seeing so they can use debugging tools to find the problem.

When we find these types of memory problems, our bug reports provide a summary, steps to reproduce, the other apps that are running (e.g. Mail or Safari) and the result, e.g. the app crashes or a memory alert is presented. If the bug is not 100% reproducible, then we note the conditions under which it occurs most often. Our main effort is to write our bugs up so that a developer is convinced it's a memory problem of some sort. Once convinced, they can instrument their code to find it. But if you don't provide a bug report that describes the defect in the context of a memory problem, then the bug will most likely end up as "cannot reproduce" and be ignored.

We then recommend that the developers instrument their code to find the memory issues. But since we are testers, we can only go so far in what we recommend. It's been my experience that iPhone developers, when faced with memory issues, appreciate all the help they can get. Here are the resources we send them to:
Most developers visit the Apple iPhone Developers Forum and it's a great place for testers too. By reading the posts there, you get a feel for just how tough memory management is. But more importantly, you gain confidence that bugs you are seeing really can be fixed and should not be attributed to peculiarities of the iPhone. Apple developers write a lot of responses in those forums to help counter the claims that the OS is to blame for all these memory problems.

By understanding memory problems, testers can be authoritative in their defect reports. By knowing where to point developers for help, the test group becomes a resource for helpful investigation and research for tough memory problems.

Wednesday, July 15, 2009

Gearing Up To Develop and Test Palm webOS Apps

Palm and O'Reilly are offering developers and testers a look at the Palm webOS and what it takes to develop for the platform. You can get a sneak peek overview of webOS here, where you can learn how the OS works and the terminology for the UI. Basically, webOS is an embedded Linux operating system with a custom UI built on a browser engine. I think the biggest deal about this device and its OS is the underlying development environment embodied in the Mojo SDK. Mojo offers a JavaScript framework that provides standardized UI widgets and access to selected device hardware and services. I think developing in JavaScript, versus Objective-C or Java, will attract a lot more developers to this platform than to other devices. That said, it's not the language choice alone that attracts developers - it's the richness of the device and its APIs, and from the looks of things, Palm's webOS brings a lot to the party. Moreover, developers are free to develop their apps from scratch if they don't want to use Palm's SDK.

If you are looking to catch up on webOS and the Mojo SDK, then I recommend watching Mitch Allen's (Palm's Software CTO) presentation on webOS development. In the video below, he gives a preview of application development with the Mojo SDK, the development environment and toolset for this new mobile web platform.

Friday, July 10, 2009

DeployStudio HOWTO - Installing on Mac OS 10.5

Our first article on DeployStudio came about because we needed a replacement for NetRestore at RTL (Using DeployStudio For Imaging and Restoring OS X Setups for Testing). Subsequently, we wrote a second article with a very useful tutorial video (DeployStudio 101 Tutorial). Having used this tool at RTL for a while now, we've honed our setup process and can share it with you in this article.

The steps below will guide you through the setup process for using DeployStudio with Mac OS 10.5. What you will end up with is a DeployStudio drive that can boot on both PPC and Intel Macs.

  1. Partition an external FireWire drive using the Apple Partition Map scheme.
  2. Install Mac OS 10.5 (it does not have to be Server, though it could be) on the external drive from a PPC machine*.
  3. Boot into the OS X installation you just created.
  4. Download the latest build of DeployStudio.

Wednesday, July 8, 2009

iPhone App Testing With The App Store Approval Process In Mind

Apple's App Store approval process can be a frustrating experience if you do not approach your iPhone/iPod touch app testing with the right end goal in mind. There are countless stories of developers submitting their apps several times before Apple approves them for the App Store. Most of the publicized stories about failed App Store submissions are about content or illegal functionality. The stories you don't hear about are those involving apps that fail to meet Apple's basic Human Interface guidelines. In talking with iPhone developers, we've found that Apple provides detailed feedback on why an app was not approved, along with references to their guidelines. This is very helpful, but problems can be avoided and apps can be approved faster if developers use a suite of test cases and environments to ensure their app not only functions as intended but also meets the applicable Human Interface guidelines. And with iPhone OS 3.0, those guidelines have grown to cover the new functionality it provides.

What we've done to help our clients avoid known pitfalls in the App Store approval process is develop a Core Tasks test suite that is used along with the functionality and compatibility tests we perform. This Core Tasks test suite was developed using Apple's Human Interface guidelines. Here are a few examples from the test suite that we use to help developers determine if their app functions correctly and appropriately when:
  1. Using Cut, Copy & Paste
  2. Receiving a Push Notification (unrelated to their app)
  3. Unexpectedly terminating when the user receives a call (see the sketch at the end of this post)
  4. Internet connections are lost
  5. Launching while a user is listening to music from the iPod app
  6. Requiring location services
There's a lot to focus on when developing an iPhone app and the main concentration is usually on the unique functionality of the application. But leaving out some of these core task tests can lead to delays in the App Store submission process.
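As an example of what one of these core tasks exercises, take the third item in the list above: when a call arrives, the app resigns active and may be terminated, so it should save any in-progress work immediately and restore it on the next launch. A minimal Swift sketch of that behavior (the class, key, and "draft" state are hypothetical, not taken from any client's app):

```swift
import UIKit

// Save in-progress work the moment the app resigns active (e.g. an incoming
// call), so a forced termination doesn't lose the user's text.
// Hypothetical class and key, for illustration only.
final class DraftTweetStore {
    private let draftKey = "draft.tweet.text"
    private var observer: NSObjectProtocol?

    var currentDraft: String = ""

    func register() {
        observer = NotificationCenter.default.addObserver(
            forName: UIApplication.willResignActiveNotification,
            object: nil, queue: .main) { [weak self] _ in
                self?.save()
        }
    }

    private func save() {
        // Persist right away; if the OS kills the app during the call,
        // the draft can be restored on the next launch.
        UserDefaults.standard.set(currentDraft, forKey: draftKey)
    }

    func restore() -> String {
        UserDefaults.standard.string(forKey: draftKey) ?? ""
    }
}
```

A test case for this core task simply forces the interruption (an incoming call or SMS alert) at the worst possible moment and verifies the app comes back with nothing lost.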