Occasionally I have to diagnose a performance issue in PeopleSoft. This is a reminder to myself of the tools available and what I can check.
PeopleSoft Ping

This is a really useful tool because it gives a real indication of how the system as a whole is performing, from the desktop to the database. For a simple query, the user can see the time taken at the browser, the web server, the application server and the database server. When a slow response time is seen, it is important to drill down to find what caused it; it could just be Windows downloading updates, for example. PeopleSoft ping data is stored in the database used by the application, in the table sysadm.ps_ptp_tst_cases. I used the following columns (see Dave Kurtz's website for the rest):
- The test case identifier entered when the ping test was started.
- PTP_TOTAL_TIME: the total duration of the ping, as measured on the client browser, in seconds.
So to record the whole time, I can use the following SQL:
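Something along these lines works on an Oracle database (a sketch: apart from PTP_TOTAL_TIME, the column names here are assumptions, so check them against your table definition):

```sql
-- Sketch: average ping response time in N-minute buckets.
-- PTP_TIME_STAMP is an assumed name for the timestamp column;
-- verify against your release of sysadm.ps_ptp_tst_cases.
SELECT TRUNC(ptp_time_stamp)
     + FLOOR((ptp_time_stamp - TRUNC(ptp_time_stamp)) * 1440 / :n)
       * :n / 1440                  AS bucket_start
     , AVG(ptp_total_time)         AS avg_response_secs
     , COUNT(*)                    AS samples
FROM   sysadm.ps_ptp_tst_cases
GROUP  BY TRUNC(ptp_time_stamp)
     + FLOOR((ptp_time_stamp - TRUNC(ptp_time_stamp)) * 1440 / :n)
       * :n / 1440
ORDER  BY bucket_start;
```

The date arithmetic converts the time of day into minutes since midnight, rounds it down to the start of its N-minute bucket, and adds it back to the truncated date.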
This takes an average of the recorded response times (PTP_TOTAL_TIME) over N-minute periods. N is a bind variable, which I set to 5 to get the results below:
PeopleSoft Performance Monitor
This is a separate database where performance data is stored. As with PeopleSoft ping, data is recorded at each tier, but this time a sample of all accessed pages is taken and stored. This is valuable because it reflects the real performance experienced by users, although the numbers can be skewed if a few people run large queries at the same time. Again I average the data and hope this smooths out any effects caused by long-running reports. I used the following SQL:
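A sketch of the query, again assuming an Oracle database; the column names are assumptions based on the column descriptions below, so verify them against the definition of pspmtranshist in your system:

```sql
-- Sketch: average transaction duration per 5-minute period.
-- PM_TRANS_DURATION is recorded in milliseconds; filtering on the top
-- instance keeps only the outermost event of each transaction, which
-- includes the application server and database time.
-- All column names are assumptions.
SELECT TRUNC(pm_mon_strt_dttm)
     + FLOOR((pm_mon_strt_dttm - TRUNC(pm_mon_strt_dttm)) * 1440 / 5)
       * 5 / 1440                        AS bucket_start
     , AVG(pm_trans_duration) / 1000    AS avg_response_secs
     , COUNT(*)                         AS samples
FROM   sysadm.pspmtranshist
WHERE  pm_instance_id = pm_top_inst_id  -- top events only (assumed columns)
GROUP  BY TRUNC(pm_mon_strt_dttm)
     + FLOOR((pm_mon_strt_dttm - TRUNC(pm_mon_strt_dttm)) * 1440 / 5)
       * 5 / 1440
ORDER  BY bucket_start;
```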
This is very similar to the ping query above. The table I query is sysadm.pspmtranshist, where the latest performance data should be found; if archiving is enabled, older data may be in sysadm.pspmtransarch. The following columns are used:
- The timestamp when the monitor received the event. This is used rather than pm_agent_strt_dttm because it appears in indexes.
- The length of time taken by the transaction, in milliseconds.
- A unique identifier for this record.
- The Performance Monitor keeps a record of response times at the web server, application server and database server. These records can be grouped together by the top instance ID, which includes the times of all the instances below it. Since I want the total transaction time, I only take the top events.
The SQL takes the web server time of each transaction (which includes the application server and database time), groups the transactions into 5-minute periods and averages them. The output is something like the following:
In my experience, not too much can be read into the number of samples: it can't be graphed as a proxy for the number of users, for example, because the fraction of sessions sampled falls as the server gets busier. The reason for including it here is to make sure we have enough transactions to get a meaningful average.
Google Analytics

We have incorporated Google Analytics into our site, which gives us the user experience. There are alternatives, such as Piwik, which give more control over the collected data. The data is collected from the user's browser, so it fills in the gap between the web server and the browser.
The way forward
I would like to take the PeopleSoft Performance Monitor data as above and graph it, so we can see how the response of the application varies over time. We could then set performance thresholds and be alerted if performance falls below acceptable levels. There are two problems to bear in mind with this: averages can hide problems, and with a small dataset an average can be swayed unduly by a single outlier. However, hopefully the fact that we are monitoring performance will encourage people to pay attention to it and improve the overall performance of the system.
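One way to soften the outlier problem, assuming an Oracle database, is to report the median alongside the average: Oracle's MEDIAN aggregate is much less sensitive to a single long-running transaction in a small sample. The column names here are assumptions, as before:

```sql
-- Sketch: mean vs. median per 5-minute bucket. A large gap between the
-- two suggests the average is being swayed by outliers rather than
-- reflecting a genuine slowdown. Column names are assumptions.
SELECT TRUNC(pm_mon_strt_dttm)
     + FLOOR((pm_mon_strt_dttm - TRUNC(pm_mon_strt_dttm)) * 1440 / 5)
       * 5 / 1440                          AS bucket_start
     , AVG(pm_trans_duration) / 1000      AS mean_secs
     , MEDIAN(pm_trans_duration) / 1000   AS median_secs
FROM   sysadm.pspmtranshist
WHERE  pm_instance_id = pm_top_inst_id
GROUP  BY TRUNC(pm_mon_strt_dttm)
     + FLOOR((pm_mon_strt_dttm - TRUNC(pm_mon_strt_dttm)) * 1440 / 5)
       * 5 / 1440
ORDER  BY bucket_start;
```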