Monday, March 22, 2010

Automated Web Services Testing - An Interview With Matt Krapivner of SmartPilot Software

As more and more companies face decisions about automating their testing, it's important to understand some of the pitfalls of automation and explore methods that avoid those pitfalls. Likewise, we need to find mature open source testing tools that help bend the test automation curve in the right direction. In previous articles, we've written about model-based and keyword-driven testing, each promising to reduce test automation script maintenance costs while increasing effectiveness (coverage). As I've searched for answers in this area, I've sought examples of how others have approached test automation as a way of benchmarking good ideas. Recently, I've had the good fortune to meet a practitioner who has pursued new and more effective approaches to test automation. His name is Matt Krapivner, and his consultancy has provided a very impressive test automation framework to a local Web 2.0 company.

We interviewed Matt recently and talked to him about the new approach he's using with his current client and how it's a step up from traditional test automation practices.

Tuesday, January 12, 2010

Understanding The Differences Between Testers And Developers

Are testers from Venus and developers from Mars? In an ACM article titled An Exploratory Research Study on Interpersonal Conflict between Developers and Testers in Software Development, the authors look at the differences between testers and developers.
As stated in the article, the goal is to better understand the differences between these two actors in order to produce better software. The authors' call to action is based on two reasons:

  1. Because of a trend toward more agile software development, testers are in contact with developers earlier and more often
  2. Conflict can have negative consequences not only for the end product but also for the job satisfaction of both developers and testers
This paper points to studies that clearly separate testers from developers in terms of their goals: developers seek to maximize efficiency while testers seek to maximize effectiveness. Developers seek to get their work done with the least effort while testers seek the highest quality. Those are clearly different goals that can easily create conflict if each group sub-optimizes. Unless upper management reconciles these different goals and helps align them to reach the broader objective of the company, failure may ensue.

How Much Is Your Offshore/Outsource Testing Defect Find/Fix Cycle Costing You?

Update: Linda G. Hayes, who is the CTO of Worksoft, Inc., and the founder of three software companies including AutoTester, the first PC-based test automation tool, wrote an interesting article on this subject at StickyMinds.com. She wrote about the "promise of the same value proposition" and outlined some key factors for success or failure with offshoring. One of the more interesting, and relevant, comments (at least as it pertains to this article) was on "Time to Value". Linda said:

Offshore resources are never a quick fix, especially for testing. A 2005 report from AMR Research found that it took between fourteen months and three years before the offshore testers had sufficient familiarity to be effective in finding the root cause of problems. Others have found that the lack of domain knowledge resulted in spurious defects being reported that actually increased development overhead due to review and response times. In fact, one organization identified that as many as 33 percent of reported issues were traceable to tester error.
There are many reasons to involve external testing services. Sometimes companies need the expertise of an interoperability lab, other times they are attempting to reduce testing staff costs. Whatever the reason, one cost that should be calculated based on a known process is the Find/Fix Cycle Cost for defects. It's important to evaluate these costs not just based on the hourly rate of the offshore/outsource testing service provider, but on the total cost involved in finding and fixing defects. Ultimately, this cost involves all QA and developer resource costs.
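The total-cost framing above can be sketched as a back-of-the-envelope calculation. The function and every rate, hour, and count below are hypothetical placeholders for illustration; only the 33 percent spurious-report rate echoes the figure quoted above, and none of the dollar values come from the article:

```python
def find_fix_cycle_cost(defects_reported, spurious_rate,
                        tester_hours_per_defect, tester_rate,
                        dev_hours_per_defect, dev_rate):
    """Estimate the total find/fix cycle cost for a batch of defect reports.

    Spurious reports (tester error, missing domain knowledge) produce no fix,
    but still consume developer time, so they are charged review hours too.
    """
    valid = defects_reported * (1 - spurious_rate)
    spurious = defects_reported * spurious_rate
    tester_cost = defects_reported * tester_hours_per_defect * tester_rate
    dev_cost = valid * dev_hours_per_defect * dev_rate
    # Assumption: each spurious report costs one hour of developer review.
    review_cost = spurious * 1.0 * dev_rate
    return tester_cost + dev_cost + review_cost

# Hypothetical example: 100 reports, 33% spurious, 2 tester-hours at $25/hr
# per report, 4 dev-hours at $75/hr per valid defect.
total = find_fix_cycle_cost(100, 0.33, 2, 25, 4, 75)
print(round(total, 2))
```

The point of the sketch is that the offshore hourly rate (tester_rate) is only one term; the developer-side terms, including review time wasted on spurious reports, can dominate the total.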

Thursday, November 5, 2009

Learning How To Test Mobile Device Applications (Part 3) - BlackBerry

In this HOWTO series, we've covered two popular mobile device platforms, webOS and Android, in these articles:
Learning How To Test Mobile Device Applications (Part 1) - Palm webOS
Learning How To Test Mobile Device Applications (Part 2) - Android
Today we'll look at BlackBerry. As we've stated in previous articles, our goal is to learn enough from developing simple apps on each platform so that we have a better understanding of the platform as testers. And we want to accomplish this without having to buy each mobile device or any development software. Fortunately, developer offerings from RIM meet these requirements, albeit with less hipness than webOS and Android.

Wednesday, November 4, 2009

iPhone App Project Managers - How Did You Test Your App?

iPhone application development is exploding (How Hot Is The iPhone Application Development Market?). And users are demanding better quality - no longer is it acceptable for an iPhone app to crash unexpectedly with the explanation that apps crash every so often because of the iPhone OS. Moreover, with more and more iPhone and iPod touch models to choose from, developers face a compatibility testing challenge. So how are project managers testing their iPhone apps?

We'd like to hear from iPhone application developers and managers who have navigated their way onto the App Store and can share with us the problems they faced in testing their apps, as well as tips and recommendations.

If you are interested in helping out, please leave a comment or email us. We will send you questions by email and publish your interview here.

Thursday, October 29, 2009

Learning How To Test Mobile Device Applications (Part 2) - Android

Last week we introduced this series on learning how to test mobile device applications based on the idea that developing a simple mobile app on each platform will help you understand how to test them (Learning How To Test Mobile Device Applications (Part 1) - Palm webOS). And we want to do this for free and without having to buy an actual device. That's worked for webOS from Palm and will work with this week's platform - Android.

And as we recommended in Part 1, testers should start by downloading and installing the SDK and building one of the included apps. For the Android platform, the steps for building your first app are explicitly laid out on the Android Developer's SDK page. The instructions provided take you through the following steps:

Monday, October 26, 2009

iPhone Development Basics - Memory Management Part 4

So far we've seen 3 videos by Mark Johnson on iPhone memory management:
iPhone Development Basics - Memory Management Part 1
iPhone Development Basics - Memory Management Part 2
iPhone Development Basics - Memory Management Part 3
And we've been using these videos to understand iPhone application memory management and gain some insight as to why iPhone apps crash. In today's video, Mark describes a memory management programming convention used by iPhone developers for managing objects - either they're in the Autorelease pool or they are not. It's a very technical video, but worth listening to because Mark warns of things not to do that will otherwise crash an app - something worth being curious about when talking to your dev team.

Suggested questions for Part 4:
(4:02min) Why does returning a reference to a deleted object cause a crash? Why would that happen?
(5:17min) What does it mean to "own" an object versus having it be part of an Autorelease pool?
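The ownership question has a close analogue in Python, whose CPython runtime also reclaims objects by reference counting. The sketch below is an analogy only, not iPhone code: a strong reference plays the role of an "owning" reference, a weak reference stands in for a non-owning one, and deleting the last owner shows why touching a deallocated object is dangerous. (CPython safely hands back None from a dead weak reference, whereas Objective-C's manual scheme would leave a crash-prone dangling pointer - which is the crash Mark describes.)

```python
import weakref

class Resource:
    """Stand-in for an object whose lifetime is managed by reference counting."""
    pass

owner = Resource()           # owning (strong) reference: keeps the object alive
handle = weakref.ref(owner)  # non-owning reference: does NOT keep it alive

assert handle() is owner     # while an owner exists, the handle resolves

del owner                    # release the last owning reference...
assert handle() is None      # ...the object is deallocated; the handle is dead
```

The convention Mark describes serves the same purpose: making it unambiguous which references own an object, so nothing dereferences it after its last owner releases it.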