With the "season of giving" over, we move into the days of getting things done. One of the resolutions development and testing professionals should make for 2017 is to spend less time and money accessing third-party APIs.
From retail to financial services to healthcare, organizations across industries depend more and more on public APIs, partners and third-party providers. These dependencies are the cornerstones of advancements in today's API and application economy, supported by Agile development and continuous testing practices.
In its 2015 Market Snapshot Report on Service Virtualization, voke Research surveyed more than 500 companies and confirmed that constraints are a major hurdle to innovation throughout the software development lifecycle. The report found that:
- 80% of teams experience delays in development due to constraints throughout the SDLC.
- 56% of critical dependencies are unavailable when dev and test need them.
- 70% of teams face prohibitive restrictions (i.e., delays, time, fees) when needing to access third-party systems.
My guess is that if this survey were retaken today, even more clients would cite access to third-party APIs and systems as a major roadblock to accelerating testing and ensuring quality apps are released into production.
Beyond relying more heavily on third-party APIs, companies are also increasingly becoming suppliers of third-party APIs to their customers and partners. For whoever is supplying the API, testing is typically an afterthought: the primary purpose of a third-party API is to support real production traffic and generate revenue. Priority goes to live, revenue-generating activity; whatever capacity is left over (if any) is made available for testing, and some providers charge additional fees for test access as another source of revenue. Depending on which side of the fence you're on, this may or may not be a good thing.
For example, take a delivery and logistics company that provides access to its customers and partners via public APIs in a "free" SaaS model. The cost for the logistics company to maintain the systems its customers and partners use to access the service is significant. Even though access to the live APIs is free, the logistics company keeps receiving complaints that its service is not as good as others' because it offers no easy way to access the APIs for testing. Providing additional capacity for testing would eat into already slim margins, and charging customers for testing would be viewed as a competitive disadvantage. So what is the company to do?
Service Virtualization: What It Is and How It Works
CA Service Virtualization solves the problem of access to third-party APIs and systems by capturing them and modeling them as virtual services that act just like the real thing. The virtualized versions provide alternatives that can be used for functional and performance testing.
When developers and testers use Service Virtualization, the services behave and perform similarly to the real thing without the underlying hardware and software complexity of a physical system. Development and testing continue just as they always have, but less constrained and without contention between teams for environments, labs, test data and so on.
Access to a virtualized API can actually be better than the real thing, for the simple reason that you can test all kinds of scenarios (varying levels of functionality, performance and maintenance) with a virtualized service that you never could with the real API. You can make the virtualized API behave any way you want. Best of all, multiple copies of the same API can be created simultaneously, so multiple people can use it for testing at the same time.
Sounds like magic, right? Well, not really. This is how Service Virtualization works:
Step 1: Capture the Conversation
When software components communicate, they use a structured format and conversation, also known as a protocol. Within this structured conversation, observations can be made about static vs. dynamic elements, conversation content and data (the payload), and other aspects of the relationship between the components to create an understanding of the software interaction.
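To make the capture step concrete, here is a minimal sketch, not CA Service Virtualization itself, of the idea: a recording proxy that sits between a client and a real HTTP API, forwards each request upstream, and writes the observed request/response pair to a file for later processing. The upstream address and capture file name are assumptions for illustration.

```python
# Illustrative sketch of "capture": a recording proxy for a simple GET-based API.
# Error responses and other HTTP methods are ignored to keep the sketch short.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://localhost:8080"      # the real third-party API (assumed)
CAPTURE_FILE = "captured_pairs.jsonl"   # one request/response pair per line

class RecordingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request to the real service.
        with urllib.request.urlopen(UPSTREAM + self.path) as upstream:
            status = upstream.status
            body = upstream.read()

        # Persist the observed conversation for later processing.
        pair = {
            "request": {"method": "GET", "path": self.path},
            "response": {"status": status, "body": body.decode("utf-8", "replace")},
        }
        with open(CAPTURE_FILE, "a") as f:
            f.write(json.dumps(pair) + "\n")

        # Relay the real response back to the caller unchanged.
        self.send_response(status)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 9090), RecordingProxy).serve_forever()
```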
Step 2: Process the Captured Conversation
Here, the Service Virtualization tool evaluates the requirements of the engineering specifications provided, or analyzes the captured conversations between components. It is this processing of, and clever approach to, the captured data that differentiates a tool like CA Service Virtualization from hand-rolled stubs and mocks.
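Continuing the sketch above (again only illustrative, not the CA tool's actual algorithm), processing might look like this: read the captured pairs and flag which response fields stay constant across observations (static) and which vary (dynamic) and therefore need modeling.

```python
# Illustrative sketch of "processing": classify response fields as static or
# dynamic based on whether their values vary across the captured observations.
import json
from collections import defaultdict

def classify_fields(capture_file="captured_pairs.jsonl"):
    observed = defaultdict(set)   # field name -> set of values seen
    with open(capture_file) as f:
        for line in f:
            pair = json.loads(line)
            try:
                body = json.loads(pair["response"]["body"])
            except (ValueError, KeyError):
                continue          # skip non-JSON responses in this sketch
            if not isinstance(body, dict):
                continue
            for field, value in body.items():
                observed[field].add(json.dumps(value))

    static = {name for name, values in observed.items() if len(values) == 1}
    dynamic = set(observed) - static
    return static, dynamic

if __name__ == "__main__":
    static, dynamic = classify_fields()
    print("static fields:", sorted(static))
    print("dynamic fields (need modeling):", sorted(dynamic))
```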
Step 3: Create a Usable Model
CA Service Virtualization converts the captured conversations and processed protocol request/response pairs into a sophisticated, dynamic model that behaves much like the real thing and provides the scenario coverage and capabilities needed for software development and testing. The tool handles difficult tasks such as identifying dynamic data, managing stateful conversations that require session IDs and similar techniques, and reproducing the real-world variability and dynamic behavior of the conversations it observed. After processing, CA Service Virtualization compiles the conversation into a stateful (or stateless, if desired) model. That model can handle state, process dynamic data automatically, and even populate responses with suitable "fake" test data, which goes a long way toward solving the test data management challenge so many customers face. It is this compilation step that gives the model the rich, dynamic functionality required for realistic virtualization and keeps it from becoming fragile and break-prone as developers use it in new cases and scenarios.
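As a rough illustration of what such a compiled model does, the sketch below keeps per-session state, validates session IDs and fills responses with generated test data. The endpoints and fields are hypothetical; in practice the model is derived from the captured conversations rather than written by hand.

```python
# Illustrative sketch of a stateful virtual service model (hypothetical endpoints).
import uuid
import random

class StatefulVirtualService:
    def __init__(self):
        self.sessions = {}   # session id -> conversation state

    def handle(self, request):
        if request["path"] == "/login":
            # Start a stateful conversation: mint a session ID.
            sid = str(uuid.uuid4())
            self.sessions[sid] = {"orders": []}
            return {"status": 200, "body": {"sessionId": sid}}

        sid = request.get("sessionId")
        if sid not in self.sessions:
            return {"status": 401, "body": {"error": "unknown session"}}

        if request["path"] == "/orders":
            # Populate the response with synthetic test data instead of real
            # production data -- one way to sidestep test data management.
            order = {"orderId": random.randint(1000, 9999), "status": "SHIPPED"}
            self.sessions[sid]["orders"].append(order)
            return {"status": 200, "body": order}

        return {"status": 404, "body": {"error": "no matching transaction"}}

if __name__ == "__main__":
    vs = StatefulVirtualService()
    sid = vs.handle({"path": "/login"})["body"]["sessionId"]
    print(vs.handle({"path": "/orders", "sessionId": sid}))
```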
If you want more details on how to create virtual services from recordings, R/R pairs, WSDLs and other sources, or want to see it all done on video, you can visit the latest CA Service Virtualization documentation site here.
With Service Virtualization, testing teams get access to the dependent systems and services they need without having to wait on development or ops. Say a tester needs to test a particular service, but that service has to be deployed or provisioned by a third party, and only after that happens can the tester exercise their own code. With Service Virtualization, the tester can use sample request/response pairs to create, deploy and use a virtual service, with no dependence on any third party, saving time and money.
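A minimal sketch of that idea (hand-written sample pairs, hypothetical paths, not the CA tool's own format) might look like the following: the tester supplies the request/response pairs and points the component under test at the stub instead of the third party.

```python
# Illustrative sketch: serve hand-written request/response pairs as a virtual service.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Sample pairs a tester might write before the real service is available.
SAMPLE_PAIRS = {
    "/shipments/123": {"status": 200, "body": {"shipmentId": 123, "state": "IN_TRANSIT"}},
    "/shipments/999": {"status": 404, "body": {"error": "not found"}},
}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        pair = SAMPLE_PAIRS.get(self.path, {"status": 501, "body": {"error": "no pair recorded"}})
        payload = json.dumps(pair["body"]).encode("utf-8")
        self.send_response(pair["status"])
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Point the component under test at http://localhost:9999 instead of the third party.
    HTTPServer(("localhost", 9999), VirtualService).serve_forever()
```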
CA Service Virtualization, for example, automatically detects dynamic date behavior and can model dates so they remain scenario-accurate regardless of the actual date of the transaction or virtualized response. This creates models that keep working well into the future, unlike static stubs and mocks, which are either fragile or very expensive to create and maintain.
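One simple way to implement that behavior (an assumption about the approach, not a description of CA's internals) is to store each recorded date as an offset from the recording date and re-apply the offset at replay time:

```python
# Illustrative sketch of date re-basing so recorded dates stay scenario-accurate.
from datetime import date, timedelta

def record_offset(recorded_on, value):
    """At capture time: remember the date as days relative to the recording day."""
    return (value - recorded_on).days

def replay_date(offset_days, today=None):
    """At replay time: rebuild the date relative to the current day."""
    return (today or date.today()) + timedelta(days=offset_days)

# Recorded on 2016-12-01 with an estimated delivery of 2016-12-04 -> offset of +3 days.
offset = record_offset(date(2016, 12, 1), date(2016, 12, 4))
print(replay_date(offset))   # always three days from whenever the test runs
```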
Performance engineering has always been a huge opportunity for Service Virtualization users. Building a lab capable of handling production-scale loads is incredibly (sometimes impossibly) expensive, and access to production systems such as mainframes and transaction servers may simply not be realistic. This makes performance testing both expensive and unreliable, especially if you have to pay a third party for an enormous volume of test transactions.
There is also the time and cost of waiting until an entire application architecture is assembled before testing for performance. Customers using Service Virtualization can performance test each individual component, identifying many performance problems earlier in the lifecycle and reducing, or even eliminating, the amount of final performance testing needed in a production-like lab. This essentially decouples testing teams from their dependence on third-party system access altogether.
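As an illustration of component-level performance testing against a virtual dependency (assumed latency figures, not a CA feature), the sketch below answers requests with a configurable, production-like response time so a single component can be load tested without the real mainframe or third-party system.

```python
# Illustrative sketch: a virtual dependency with simulated latency for load tests.
import json
import random
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

RESPONSE_TIME_MS = (80, 250)   # assumed production-like latency range

class SlowVirtualDependency(BaseHTTPRequestHandler):
    def do_GET(self):
        # Simulate the dependency's observed response time.
        time.sleep(random.uniform(*RESPONSE_TIME_MS) / 1000.0)
        body = json.dumps({"balance": 1042.17}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # ThreadingHTTPServer handles requests in threads so the stub absorbs concurrent load.
    ThreadingHTTPServer(("localhost", 7070), SlowVirtualDependency).serve_forever()
```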
At CA World 2016, Beth Johnson, SVP for release and testing at SunTrust Bank, took the stage and presented (see 28:30) how the bank uses Service Virtualization to virtualize its third-party services in order to prepare test environments and virtualize test data. SunTrust Bank has realized:
- 53% of delayed requirements now test on time.
- 18,505 defects found prior to production with a less than 1% defect rate. The target had been to get the defect rate lower than 4%.
- 22% cost savings on managed services compared with 2014. This involved $1.97m in savings due to automation and $900k in direct savings.
- 73,302 hours saved by tools and automation.
At the same event, OI Telecom from Brazil highlighted its use of Service Virtualization and API testing, where it saved $300k with test automation of its integration and regression testing. You can check out OI Telecom's story in a blog here.
Ultimately, these numbers and the many other customer success stories make it clear that Service Virtualization is a way to produce better-quality applications faster, while saving on the development and testing costs tied to third-party APIs, systems and services.
With Service Virtualization, developers have their own private environments for developing code. They don't share environments and don't need to wait for other third-party systems to be available, and they can test for performance without incurring higher costs from third-party partners.
With Service Virtualization, much of the testing at the component level can "shift left," or move earlier in the SDLC. Because each component can be tested individually instead of waiting for complete assembly, unit and regression testing happens sooner and is more complete, and defects are identified long before integration or user acceptance testing. Finding bugs earlier means issues get fixed before developers have moved on to other projects, when defects are harder to track down and substantially more expensive to remediate.