Let the crowd run your beta tests
Weblog, July 9, 2012
Our goal is to have the quality and usability of our mobile, web, or desktop release candidates tested under the most realistic usage scenarios and on the major system configurations actually used in the market. So what is the best approach to getting this job done effectively?
A good starting point is certainly to have a proper test strategy, test plans, and test cases in place to give our testers direction and focus. Without them, we will most probably end up with a collection of random bug reports. With them, we should find the bugs and issues in our software at least in the functional areas, (pre-)conditions, and system configurations that our test cases cover.
So far, so good. But can one or a few test runs really prove that our software will always behave the same way if the same tests were repeated, possibly by other testers? How would the same test case behave under different conditions and system configurations? How do we cover the potential impact of different device models (for mobile apps), operating systems, browser versions, language versions, and so on? And how can we discover the effects of other applications, such as virus checkers running on the same device or computer, before our users do?
Certainly, many things are possible in a well-equipped test lab, but with an endless number of system configurations in the field, some compatibility and interference issues simply cannot be discovered there. So how can we deal with this challenge?
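To see why even a well-stocked lab cannot keep up, consider how quickly the configuration space grows. A minimal sketch in Python; the device, OS, browser, and locale lists below are purely illustrative:

```python
from itertools import product

# Hypothetical configuration dimensions; real-world matrices are far larger.
devices = ["iPhone 5", "Galaxy S4", "Nexus 4", "iPad mini"]
os_versions = ["iOS 6", "Android 4.2", "Android 4.3", "Windows Phone 8"]
browsers = ["Safari", "Chrome", "Firefox", "Opera Mobile"]
locales = ["en", "de", "fr", "ja", "zh"]

# Full cross product of just four dimensions.
full_matrix = list(product(devices, os_versions, browsers, locales))
print(len(full_matrix))  # 4 * 4 * 4 * 5 = 320 combinations, and this still
                         # ignores OS builds, screen sizes, and co-installed apps
```

Every extra dimension multiplies the count, which is exactly why in-the-wild testers on their own real devices cover ground no lab can.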
Right: we need to run some in-the-wild beta testing in addition to our properly planned, executed, and documented software validation.
The concept of crowdsourcing is worth considering at this point. Crowdsourcing can be a very effective and efficient way to run beta-test campaigns at scale, detect bugs that verification and validation never found, and gather valuable feedback on the usability of our product. But a few critical criteria need to be addressed to integrate crowd testing successfully into a corporate test strategy:
1. Crowd tester selection
- Tester profiles
For B2C applications, we might want to cover a wide, representative spectrum of our target users, selecting testers by demographic criteria such as age, education, culture, and language.
- Professional background, skills, and qualifications
For B2B applications, certain domain-specific knowledge, skills, and qualifications are beneficial or even required. In most cases it is highly recommended to select mainly professional software test engineers.
2. Managed Process
Don't expect too much from a crowd-testing project without a managed process: proper planning and assignment of test cases and configurations; tester selection, engagement, and support; and review and consolidation of test, bug, and usability reports.
3. Cloud based project management platform
All the tools needed for a testing project should be easily accessible, ideally through a single platform. The platform should support the entire crowd-testing process, from tester registration and selection through test execution to rewarding.
4. Committing and rewarding crowdtesters
Crowd testers expect sound financial rewards for their qualified deliverables in order to stay motivated to invest effort in testing our products. Their commitment to timely and accurate delivery scales with the amount they can earn through their participation; no surprise!
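The managed process behind criteria 2 and 3 can be sketched as a minimal assignment record tracking one test case, one tester, and one configuration through delivery and review. All class, field, and status names here are hypothetical, not the schema of any particular platform:

```python
from dataclasses import dataclass, field

# Hypothetical minimal model of one managed crowd-test assignment.
@dataclass
class Assignment:
    test_case_id: str
    tester_id: str
    configuration: dict            # e.g. {"os": "Android 4.3", "browser": "Chrome"}
    status: str = "assigned"       # assigned -> delivered -> reviewed
    reports: list = field(default_factory=list)

    def deliver(self, report: str) -> None:
        """Tester submits a bug or usability report."""
        self.reports.append(report)
        self.status = "delivered"

    def review(self) -> None:
        """Project manager consolidates and accepts the delivery."""
        if self.status != "delivered":
            raise ValueError("nothing to review yet")
        self.status = "reviewed"

a = Assignment("TC-101", "tester-42", {"os": "Android 4.3", "browser": "Chrome"})
a.deliver("Login button unresponsive after screen rotation")
a.review()
print(a.status)  # → reviewed
```

Even this toy structure makes the point: each report is tied to a specific test case and configuration, so results can be consolidated instead of arriving as random bug reports.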
Thanks to crowd testing, we can access as many testers as needed, when they are needed, and at comparatively low cost. Crowd testing fits perfectly into the test strategy for most software products, especially mobile and web apps, but also client and enterprise applications. It can also enable us to run special test campaigns we always wanted but never had the means or resources for.
A crucial success factor for crowd-testing campaigns is assigning the right number and profiles of testers. We therefore have to develop suitable methods to determine our crowd-testing requirements based on our project's KPIs.
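One classic way to sanity-check whether the assigned crowd was large enough is a capture-recapture (Lincoln-Petersen) estimate over two independent tester groups; this is a general defect-estimation technique, not a method the post itself prescribes, and the numbers below are made up for illustration:

```python
def estimate_total_defects(found_a: int, found_b: int, found_both: int) -> float:
    """Lincoln-Petersen capture-recapture estimate of the total defect count,
    based on two independent tester groups and the overlap of their findings."""
    if found_both == 0:
        raise ValueError("groups must share at least one defect for an estimate")
    return found_a * found_b / found_both

# Hypothetical campaign: group A reports 40 defects, group B reports 30,
# and 20 defects were found by both groups.
estimate = estimate_total_defects(40, 30, 20)
print(estimate)  # → 60.0, suggesting roughly 10 defects are still undiscovered
```

A large gap between the estimate and the defects found so far is a signal to assign more testers or another test cycle before release.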
Establishing the entire process and tools platform internally, and building and maintaining a community of crowd testers, is a huge effort. The crowd is accessible to everyone, of course, but for those who want to keep focusing on their core competencies and avoid headaches, outsourcing the job to an experienced crowd-testing services provider may be the better choice.
Author: Dieter Speidel, CEO PASS Technologies AG, owner of passbrains.com crowdtesting services