- WSCLim Graphical User Interface
- Case Study
- Description of the composite service TravelAgency:
- Preparation of the specification:
- Test scenarios:
- “Test Verdicts” block: This block shows the percentage of each test verdict. We note that, in this scenario, the FAIL percentage is 7.5%, meaning that 3 of the 40 BPEL instances received the verdict FAIL.
- “FAIL Natures & Causes” block: This block presents the nature and the cause (each cause is distinguished by a color) of each observed verdict. For this execution, the application (i.e., the composition under test) is the cause of these errors: 66% of the errors (2 FAIL) are erroneous delays and 33% (1 FAIL) are non-specified behaviors.
- “BPEL Instance vs Response Time” block: This third block presents the response times of the invoked BPEL instances.
- “Performance Monitoring” block: The fourth block graphically shows the performance data recorded during the test by the PerfMon tool. In our scenarios, we monitored the CPU occupancy rate (in blue) and memory-to-disk swapping (in red). The values of these two criteria are sampled every five seconds throughout the load test. We plan to use the performance data for error analysis and identification; in particular, we want to examine the possibility of introducing these observed performance measurements into the interpretation of FAIL verdicts.
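The percentages quoted in these blocks follow directly from the instance counts. A short worked check (the class below is illustrative, not part of WSCLim):

```java
// Worked check of the verdict figures quoted above: 3 FAIL out of 40
// invoked instances, of which 2 are erroneous delays and 1 is a
// non-specified behavior.
public class VerdictStats {
    public static double percent(int part, int total) {
        return 100.0 * part / total;
    }

    public static void main(String[] args) {
        System.out.println(percent(3, 40));      // prints 7.5: overall FAIL rate
        System.out.println((int) percent(2, 3)); // prints 66: erroneous delays among the 3 FAILs
        System.out.println((int) percent(1, 3)); // prints 33: non-specified behaviors
    }
}
```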
- WSCLim Overhead
In order to validate the proposed testing architecture, we have developed a tool for load testing and limitation detection of Web service compositions. The tool is implemented in Java. In this section, we give a brief description of the main interface of our WSCLim tool.
This interface allows the user to specify:
- The path of the specification (timed automata) used as a reference in the test: this specification must be described in XML and generated by the UPPAAL tool.
- The path of the WSDL specification of the Web service composition under test.
- The number of concurrent BPEL instances.
- The delay between two successive invocations of the BPEL process under test.
Clicking the “Execute” button launches the test. During execution, details of the test are stored in log files. At the end of the test, the analysis of the results is launched by clicking the “Start Analyze” button, and the interface containing the test verdicts is displayed. We present this interface in the next section through a case study.
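The load-generation part of a test run, as parameterized above (number of concurrent instances and inter-invocation delay), can be sketched as follows. The class and method names are hypothetical, not WSCLim APIs, and the actual SOAP invocation is stubbed out:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a load driver: invokes the BPEL process a fixed
// number of times, waiting a fixed delay between successive invocations,
// and records each response time in milliseconds.
public class LoadDriver {

    public static List<Long> run(int instances, long delayMillis) throws InterruptedException {
        List<Long> responseTimes = new ArrayList<>();
        for (int i = 0; i < instances; i++) {
            long start = System.nanoTime();
            invokeProcess(i); // would call the BPEL process via its WSDL endpoint
            responseTimes.add((System.nanoTime() - start) / 1_000_000);
            if (i < instances - 1) {
                Thread.sleep(delayMillis); // delay between two successive invocations
            }
        }
        return responseTimes;
    }

    private static void invokeProcess(int instanceId) {
        // placeholder for the actual SOAP invocation of the process under test
    }

    public static void main(String[] args) throws InterruptedException {
        List<Long> times = run(5, 10);
        System.out.println(times.size()); // prints 5
    }
}
```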
In this section, we illustrate the use of our WSCLim tool through a case study of a travel agency, implemented in BPEL as the composite service TravelAgency.
We suppose that the required business process (written in BPEL) composes four services: flight search (FS), hotel search (HS), flight booking (FB) and hotel booking (HB). As described in the next figure, when a client sends a trip request to the travel agency, the travel search process interacts with the information systems of airline companies (resp. hotel chains) to find flights (resp. hotel rooms) that match the client's needs. These two searches are bounded by a waiting time: the process should receive a response from “FS” (resp. “HS”) within at most 30 seconds; otherwise, the process execution is stopped. If both a flight search response and a hotel search response arrive before the 30-second deadline, the “FB” and “HB” services are invoked successively to perform the travel booking. Finally, a detailed reply reporting the final results is sent to the client.
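The control flow described above can be sketched as follows. This is an illustrative Java rendering of the orchestration logic, not the BPEL implementation itself; the service calls are simulated by stubs, and each search future is awaited with the 30-second deadline from the specification:

```java
import java.util.concurrent.*;

// Sketch of the TravelAgency control flow: FS and HS are queried in
// parallel with a 30-second deadline each; booking (FB, then HB) happens
// only if both searches answer in time, otherwise the process is stopped.
public class TravelAgencySketch {
    static final long SEARCH_TIMEOUT_SEC = 30;

    public static String handleRequest(ExecutorService pool,
                                       Callable<String> fs, Callable<String> hs) {
        Future<String> flight = pool.submit(fs); // FS invocation
        Future<String> hotel  = pool.submit(hs); // HS invocation
        try {
            String f = flight.get(SEARCH_TIMEOUT_SEC, TimeUnit.SECONDS);
            String h = hotel.get(SEARCH_TIMEOUT_SEC, TimeUnit.SECONDS);
            String fb = bookFlight(f); // FB invoked first
            String hb = bookHotel(h);  // then HB
            return "BOOKED:" + fb + "," + hb;
        } catch (TimeoutException e) {
            return "ABORTED";          // deadline exceeded: process execution is stopped
        } catch (Exception e) {
            return "ERROR";
        }
    }

    static String bookFlight(String flight) { return "FB(" + flight + ")"; }
    static String bookHotel(String hotel)   { return "HB(" + hotel + ")"; }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        System.out.println(handleRequest(pool, () -> "AF123", () -> "Hilton"));
        // prints BOOKED:FB(AF123),HB(Hilton)
        pool.shutdown();
    }
}
```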
Before starting the test of the BPEL composition, the WSCLim tool user has to design the timed automata using UPPAAL; a tutorial is available on the UPPAAL website. UPPAAL is also used to simulate the specification and verify its correctness.
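At test time, the timed automata serve as the reference against which observed delays are judged. A minimal sketch of such a check, assuming a clock guard such as “x &lt;= 30” on the FS response transition (the class is illustrative, not UPPAAL's data model):

```java
// Checks an observed response delay against the upper bound of a
// timed-automaton clock guard. A violated bound corresponds to the
// "erroneous delay" FAIL nature reported by the analysis interface.
public class GuardCheck {
    public static boolean satisfies(double observedDelaySec, double upperBoundSec) {
        return observedDelaySec <= upperBoundSec;
    }

    public static void main(String[] args) {
        System.out.println(satisfies(12.4, 30)); // prints true: within the specified deadline
        System.out.println(satisfies(41.0, 30)); // prints false: an erroneous delay
    }
}
```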
In order to study the behavior of the TravelAgency composition, we defined several possible scenarios. In this section, we present two of them. The first scenario illustrates some errors which may occur in the application. The second scenario subjects the composition under test to a higher load in order to identify non-functional problems. In what follows, we assume that the maximum network waiting time is 120 seconds.
- Scenario 1:
In this scenario, we assume that the developer has made mistakes while coding the BPEL composition, as shown in the next figure (in red). In fact, the “FB” service was invoked in the BPEL implementation even when the time limit for the flight search “FS” was exceeded. Moreover, the timeout implemented for the “HS” service response (60 seconds) differs from the one specified in the timed automata (30 seconds). In this scenario, we invoked the TravelAgency process forty times with a one-second delay between two successive invocations.
The next figure shows the analysis interface generated for the first scenario. This interface consists of four blocks:
In the second scenario, we invoked the TravelAgency process a hundred times with a delay of 0.5 seconds between two successive invocations. In addition, we consider an implementation that complies with the specification; thus, we do not suspect any problem in the application itself.
- Scenario 2 result:
The analysis of the results of this execution shows a FAIL percentage of 7% (7 instances out of 100). As shown in the following figure, 57% of the problems (4 instances) occur at the SUT node and 42% (3 instances) are problems of connection to partner services, caused by the execution environment.
In order to determine the overhead of our WSCLim tool, we plotted, for two cases, the curves of the average execution time as the load varies. In the first case, the tests are performed with the WSCLim tool. In the second case, the test executions are performed directly from the console of the orchestration server, without using the WSCLim tool. We used the same TravelAgency process described in the previous section in these experiments.
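The overhead figure for a given load is the difference between the two average execution times. A minimal sketch of this computation, with illustrative sample values rather than measurements from our experiments:

```java
// Overhead at one load level = mean execution time with the tool
// minus mean execution time without it. The arrays hold per-run
// execution times in milliseconds (illustrative values only).
public class OverheadSketch {
    public static double mean(long[] xs) {
        long sum = 0;
        for (long x : xs) sum += x;
        return (double) sum / xs.length;
    }

    public static void main(String[] args) {
        long[] withTool    = {120, 130, 125}; // ms, illustrative
        long[] withoutTool = {110, 115, 111}; // ms, illustrative
        System.out.println(mean(withTool) - mean(withoutTool)); // prints 13.0
    }
}
```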