From the Trenches at Sun Identity, Part 8: Quality Assurance

By Marina Sum, October 14, 2008



See also:

Part 1: Access Management for Web Applications
Part 2: OpenSSO, a Thriving Community
Part 3: Federated Access Management Simplified
Part 4: Virtual Federation, a Pioneering Way for Exchanging Authentication Data
Part 5: Support for OpenSSO
Part 6: Identity Services for Securing Web Applications
Part 7: Security for Web Services



Indira Thangasamy, senior quality engineering manager for Sun OpenSSO Enterprise, began his career as a developer of embedded systems at Robert Bosch, first in India and then in Germany. After moving to the United States, he joined Sun in 1998 as a kernel test development engineer for the Solaris OS. Later, he moved to the access and federation management team to lead its QA efforts.

Indira sat down with me recently to share his insight on testing OpenSSO Enterprise, the related tools, and the process.

The Harness

“It’s said that finding bugs costs a lot more than preventing them,” Indira says. “That’s spot on. It makes absolute sense to test software thoroughly so that it’s as bug-free as possible. In terms of efficiency, automation is the key.” Because Sun is committed to open source, Indira’s QA team opts for open-source tools to guarantee transparency to the community.

“The harness we’ve chosen is TestNG, an open-source, structured framework that enables scenario testing, which is necessary for a multitier product like OpenSSO Enterprise,” continues Indira. A harness, he explains, is a tool that runs test cases and generates a report of the results.

Besides being a scenario-based testing tool, TestNG features a robust grouping mechanism with which the QA team can test multiple LDAPv3 directories without replicating the code. “An example is the LDAP roles supported by Sun Java System Directory Server,” Indira points out. “We can just combine those specific tests as a separate group; the rest of the LDAPv3-compliant feature tests then go into a common group. TestNG is truly a flexible tool.”
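
As a rough sketch of how such grouping might look in TestNG (the class, method, and group names below are hypothetical, not taken from the actual OpenSSO test suite):

    import org.testng.annotations.Test;

    public class DirectoryTests {

        // Runs against any LDAPv3-compliant directory.
        @Test(groups = {"ldapv3-common"})
        public void searchUserEntry() {
            // ... exercise a standard LDAPv3 search ...
        }

        // Specific to Sun Java System Directory Server,
        // which supports LDAP roles.
        @Test(groups = {"ds-roles"})
        public void evaluateLdapRoleMembership() {
            // ... verify role-based behavior ...
        }
    }

The common group then runs against every LDAPv3 directory under test, while the directory-specific group is invoked only where it applies, so no test code is replicated.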


The Process

Indira strongly believes that “quality is something to be built into the development phase itself, not an add-on to be plugged into the product later.” Given that philosophy, his QA team partners with product development to implement the relevant processes. For instance, each code check-in must undergo two reviews: one from a peer developer and the other from QA. “That way, we ensure that a fix or feature is thoroughly tested and that the related documentation is made available to QA and Tech Pubs,” Indira explains. QA then designates the fixes that must pass the automated regression test suite, keeping bugs out of the source as much as possible.

“With such a process, QA detects regression before the nightly build starts. Otherwise, we would catch regression only at the end of the day while running the nightly automated tests. By then, one day would already have been lost. Bottom line: QA is geared for efficiency and productivity,” says Indira.

The high quality of the nightly builds ensures that the community receives no “dead on arrival” builds. How? Altogether, approximately 2,500 core functional regression tests are executed on seven operating systems and six Web containers, as follows:

  • Operating systems: Solaris on x86 architecture, Solaris on SPARC technology, OpenSolaris, Windows Vista, Windows Server 2003 Enterprise Edition, Ubuntu, and Red Hat Linux
  • Web containers: GlassFish application server, Sun Java System Application Server, BEA WebLogic, IBM WebSphere, Sun Java System Web Server, and Apache Tomcat

Once those nightly regression tests pass on all the deployment configurations, Release Engineering creates a nightly build, ready for deployment by the community. A plan is underway to share the nightly results with the community on opensso.org. In addition, nightly cron jobs produce a consolidated results report. If a failure occurs, the development engineer concerned is alerted for a priority fix.

Indira emphasizes that the tests are all modular and extensible. Each module takes from 1 to 20 minutes to run, enabling the community to quickly validate a particular module without having to run the entire suite of tests.
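
For instance, a single module or group can be launched programmatically through TestNG’s API. A minimal sketch, reusing the hypothetical DirectoryTests class from the earlier example:

    import org.testng.TestNG;

    public class RunSingleModule {
        public static void main(String[] args) {
            TestNG tng = new TestNG();
            // DirectoryTests is the hypothetical test class from the earlier sketch.
            tng.setTestClasses(new Class[] { DirectoryTests.class });
            // Restrict the run to one group instead of the full suite.
            tng.setGroups("ldapv3-common");
            tng.run();
        }
    }

The same selection can also be driven from a testng.xml suite file; the programmatic form is shown here only for brevity.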

Furthermore, the QA team runs nightly tests on Policy Agents 2.2 and 3.0 on Sun Java System Application Server and Apache Web Server, as well as the agents for GlassFish application server, BEA WebLogic, and IBM WebSphere. Such a process ensures that any changes in the current release do not negatively impact existing applications that work on previous versions of OpenSSO Server and Policy Agents.

Other processes ensure product quality. Here are a few examples:

  • The development engineers run unit tests of their new code in parallel with development. A pass is mandatory before code check-in.
  • After a bug fix, QA invariably adds a corresponding test case to its test-case repository to eliminate recurrence of the bug in future patches or releases (see the sketch after this list).
  • The QA team actively involves itself in the design phase to prepare in advance for testing new features and to influence the development team to introduce “hooks” in the code that would optimize QA’s productivity.
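
A minimal sketch of what such a pinned regression test case might look like; the scenario, names, and values are invented for illustration and do not come from the actual OpenSSO repository:

    import static org.testng.Assert.assertEquals;
    import org.testng.annotations.Test;

    public class SessionRegressionTests {

        // Added after a (hypothetical) bug fix so that the defect
        // cannot silently reappear in a future patch or release.
        @Test(groups = {"regression"})
        public void sessionTimeoutHonorsConfiguredValue() {
            int configured = 30;              // minutes, per configuration
            int effective = lookupTimeout();  // stub standing in for a real server query
            assertEquals(effective, configured,
                "Session timeout regressed from the configured value");
        }

        private int lookupTimeout() {
            return 30; // stub; a real test would query the running server
        }
    }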

“We collaborate closely with the support and sustaining teams, too,” Indira continues. “Whenever issues arise at customer sites, the support folks will bring us into the loop so that we can add regression test cases to the repository if applicable. Our goal is to prevent recurrence in future releases.”

The test cases and test plans are well documented and will soon be available for free. You can download the automated test suite from the OpenSSO code base. Executing the tests takes only a few minutes: Just follow the simple procedure. All you need are the Java SDK and a few open-source Java archive (JAR) files.
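
By way of illustration, a TestNG-based suite of this kind is typically launched through TestNG’s standard main class once the JAR files are on the classpath (the archive names below are placeholders, not the actual OpenSSO artifacts):

    java -cp testng.jar:opensso-tests.jar org.testng.TestNG testng.xml

Here testng.xml is the usual TestNG suite definition that selects which test classes and groups to run.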

Stay tuned for an upcoming article, in which Indira will share the details of the automation framework, including troubleshooting tips.

“A Wonderful Team”

“A common misconception is that QA work is boring,” Indira observes. “I completely disagree. It’s a rewarding and challenging field that requires expertise with the product being tested and with many internal and external tools, so QA engineers get to learn a lot. Customers count on QA—it’s a critical phase of the product development cycle.”

Indira credits his manager, director of engineering Jamie Nelson, for his tremendous leadership and support. “Not only does Jamie trust us to do our jobs, he sees that the necessary resources are there for us—people, software, tools,” beams Indira. “Our team is motivated, vigilant, and on top of the game. If a test fails, we don’t just say that it failed; we also explain why and where it did. We share excellent rapport with one another and with the development engineers. Communications are open and transparent because we have full and ready access to engineers, architects, and management alike. It’s truly a wonderful team.”

As Indira looks to the future, his number-one goal is to “automate testing as much as feasible” and eliminate manual test cases, which are error-prone. He also aims to include as many customer scenarios as possible. “Ultimately, QA is about making certain that our product works in the real world,” he concludes.

Note: Three openings are currently available in the OpenSSO QA team. Be sure to check them out!
