Rules of Siebel call center implementation: Test, test ... and test

There are a variety of ways a Siebel call center implementation can go awry. Experts advise scheduling time for performance and functional testing before the "go live" date.

Before New York City rolled out its 311 citizen complaint line, based on Siebel Systems' call center software, the systems integrator -- Accenture -- began testing the system nearly two months prior to the "go live" date.

That was a very good thing because testing revealed a number of small, but potentially significant, flaws that seriously hampered the system's performance. For instance, the database was slow in returning information about the caller to the agent's desktop screen, requiring some reworking of the database index and scripts. Also, callers who blocked Caller ID were treated as anonymous, forcing agents to take extra time getting their names and addresses.

"The major issue would have been the slowness of the system," said Mike Syed, Accenture's senior manager for call center technology and one of the consultants who worked on the project. "That would have been a huge problem."

Thanks to the extra time taken for testing, however, most such problems were eliminated before the 311 system went live in 2003.
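To make the blocked-Caller-ID problem concrete, the screen-pop lookup logic involved might resemble the Python sketch below. It is illustrative only: build_screen_pop and lookup_by_phone are hypothetical names, not part of the 311 system or the Siebel product.

    # Sketch of a screen-pop lookup, including the blocked-Caller-ID case.
    # build_screen_pop and lookup_by_phone are hypothetical names used for illustration.
    def build_screen_pop(ani, lookup_by_phone):
        if not ani:  # Caller ID blocked or unavailable -- nothing to look up
            return {"record": None, "prompt": "Ask the caller for name and address"}
        record = lookup_by_phone(ani)  # fast only if the phone-number column is indexed
        if record is None:
            return {"record": None, "prompt": "No match found; verify caller details"}
        return {"record": record, "prompt": None}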

That's a success story that's not often seen in call center implementations, said experts. That's because project managers optimistically assume there will be few, if any, major problems when it's time to test -- and are unpleasantly surprised by the number of errors that inevitably crop up.

"They say, 'Oops, the configuration didn't quite work the way we thought it would,' and have to go back and rework it all," said Dan Kostelnik, founding partner and co-CEO at CRP Solutions Inc., of Greenwood Village, Colorado.

Never underestimate the power of testing

Siebel Call Center implementations tend to be complex, with multiple components and customizations, providing lots of ways for things to go wrong. So if you're planning a Siebel Call Center implementation, experts advise that you include plenty of time for testing in the schedule and avoid the temptation to make up for other missed deadlines by cutting back on testing.

"Testing is critical because call centers involve a convergence [of systems and technologies]," explained Syed. "There's voice, the network, the data. So when testing, you have to replicate the life cycle of the call, from when the caller first encounters the IVR [Integrated Voice Response system], to the routing of the call based on data gathered by the IVR and CTI [Computer Telephone Integration] systems, down to the screen pop on an agent's desk," said Syed.

Kostelnik concurred: "I've seen a lot of projects where the testing was underestimated and which, all of a sudden, couldn't go live on time. You need to spend a good 20% of your time on testing."

Problems that can arise in a Siebel Call Center implementation span the gamut from bad phone connections to inaccurate data being delivered to the agent's desktop, said Sandra Tise, director of product marketing, enterprise solutions, at Waltham, Mass.-based Empirix Inc. "We see lots of things: carrier issues in the 800 lines, a dead end in the IVR menu, routing that uses the wrong data to send a customer to the wrong agent, or just slow response times," she said. "There are random quality issues, capacity issues and things like disaster recovery -- if my CTI middleware server goes down, for instance, can customers still get through?"

She recalled one client, an insurance company that had done little testing of its Siebel deployment before going live. The first day was a disaster, she said: "They fell flat on their face. Customers were getting blocked, there were carrier issues, the CTI wasn't routing calls to the right agents, and agents weren't getting screen pops. ... They had to shut down 400 desktops they'd initially turned on."

Approaches to testing

Experts said there are two main areas in which call center deployments need to be tested: functionality and performance.

Performance testing involves checking response times for things like agent screen pops and call routing; load testing to see how the system holds up under various caller and transaction loads; and stress testing to find the "weak link" in a system, such as inadequate hardware or network bandwidth. "Some of the issues you look for involve CPU utilization, memory usage, even disk space usage," noted Arthur Povlot, business development manager at Atlanta-based Tescom Software Systems Testing Ltd. "You may want to simulate a call scenario to see the size of the transactions involved -- is it just importing some data, or are they moving files back and forth?"
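On the load-testing side, the driver is conceptually simple: fire many scripted calls at once and report the response-time figures the system will be judged on. The sketch below assumes a simulate_call callable that runs one scripted call and returns its end-to-end time in seconds; that callable, and the caller counts, are placeholders rather than anything tool-specific.

    import statistics
    from concurrent.futures import ThreadPoolExecutor

    # Sketch of a load-test driver. simulate_call is a hypothetical callable that
    # drives one scripted call (IVR, routing, screen pop) and returns its time in seconds.
    def run_load_test(simulate_call, concurrent_callers=100, calls_per_caller=10):
        with ThreadPoolExecutor(max_workers=concurrent_callers) as pool:
            futures = [pool.submit(simulate_call)
                       for _ in range(concurrent_callers * calls_per_caller)]
            timings = [f.result() for f in futures]

        print(f"calls:    {len(timings)}")
        print(f"average:  {statistics.mean(timings):.2f}s")
        print(f"95th pct: {statistics.quantiles(timings, n=20)[18]:.2f}s")
        print(f"slowest:  {max(timings):.2f}s")

Ramping concurrent_callers upward until response times degrade is, in effect, the stress test: the first component to buckle -- CPU, memory, disk or network -- is the weak link Povlot describes.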

Performance testing on the call center software can be done both manually -- by having testers call in to see how the application handles the volume of agent screen pops and data across the network -- and with automated tools, such as those made by Mercury Interactive Corp., Compuware Corp., Segue Software Inc. and Empirix.

Functional testing, said Povlot, involves checking such things as the logical flow of the IVR menus, the usability of the agent screens, the accuracy of the data that's returned to the screens, and the impact that updates have on the functionality of existing systems.
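One slice of that functional checking -- catching dead ends in the IVR menu -- is straightforward to automate once the menu is written down as data. The sketch below uses a made-up menu; the prompts, keys and terminal destinations are illustrative, not drawn from any real deployment.

    # Hypothetical IVR menu as data: each prompt maps key presses to destinations.
    IVR_MENU = {
        "main":      {"1": "billing", "2": "claims", "3": "agent"},
        "billing":   {"1": "balance", "2": "agent"},
        "claims":    {"1": "new_claim", "2": "agent"},
        "balance":   {"0": "agent"},
        "new_claim": {"0": "agent"},
    }
    TERMINAL = {"agent", "voicemail", "callback"}  # places a caller can legitimately end up

    def find_dead_ends(menu, terminal):
        """Return transitions that lead nowhere: neither a defined prompt nor a terminal."""
        problems = []
        for prompt, options in menu.items():
            if not options:
                problems.append(f"{prompt}: no options, caller is stuck")
            for key, dest in options.items():
                if dest not in menu and dest not in terminal:
                    problems.append(f"{prompt} --{key}--> {dest}: undefined destination")
        return problems

    assert find_dead_ends(IVR_MENU, TERMINAL) == []  # the menu above has no dead ends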

Many of the companies that make performance-testing tools also make tools to automate functional tests -- Mercury Interactive makes QuickTest Professional and WinRunner, Compuware makes QARun, and Empirix makes a number of tools, including the Empirix Service Assurance for Siebel Call Center Software and Siebel 7, an end-to-end product for both performance and functional testing.

Povlot said functional testing tools usually cost around $10,000 per testing user, while performance testing tools can run $100,000 for a 500-user license. However, companies can also rent performance tools from a testing company for a few thousand dollars. He estimated the total cost of testing -- services as well as software -- to be 10%-15% of the call center implementation budget.

Manual testing can also be used for performance testing and, more typically, for functional testing; in fact, said Kostelnik, many companies rely heavily on it. Automated tools, however, are valuable for high-volume functional and performance testing, often because it's simply impossible for a company to draft dozens of employees to serve as call center testers for days at a time. Even with heavy reliance on tools, though, some live testing is always required to evaluate the human look and feel of the system, agreed Syed, Kostelnik and Povlot.

Syed noted that companies can take a few judicious shortcuts to reduce testing time and costs. For instance, they might test only the call flows that involve integration with other systems. Or, for live testing, they might have testers terminate calls as soon as they've been correctly routed, rather than simulating a full five-minute conversation. This cuts down on the number of employees needed for live tests.

Regardless of the specific approach you choose, said experts, testing is not only worth the cost, but an absolutely critical part of the budget. As Povlot observed, "80% of stuff doesn't work the first time around. Now, they'll realize that eventually, but it'll be the customers telling them about it, rather than us."

Sue Hildreth is a contributing writer based in Waltham, Mass. She can be reached at sue.hildreth@comcast.net.

This was first published in May 2004
