SoftTree Technologies
Technical Support Forums
hberzajs
Joined: 21 Apr 2014 Posts: 6
Reporting all log steps for a job
Hello,
We are moving from the desktop version to the server install. Most of the functionality exists, but sadly we do not have the folder-level job summary that we had on the desktop. I see this was addressed in this post:
http://www.softtreetech.com/support/phpBB2/viewtopic.php?t=23804&highlight=jobs+folder
So I am trying to create reports against our SQL Server DB, where 24x7 is logging job and run results in real time.
My difficulty is grouping, or tying together, the log records for a given job. I find the start message, the end message, and other pertinent messages, but they each carry their own distinct timestamp. What I would like is some sort of unifying key for all the records/rows of a job run.
Can 24x7 write to the log table with a PID that can unify all job information messages of a given run?
I have created a view with a pivot to group all the job attributes together. That along with the log table allows for reporting.
Thanks
Mon Jan 12, 2015 3:42 pm
SysOp
Site Admin
Joined: 26 Nov 2006 Posts: 7948
Hi.
Unfortunately, the unique internal job run ID is not exposed to the job log. But assuming you don't allow multiple instances of the same job to run concurrently (which would be unusual), you can always match up start and end events: the end event is the finish or error record for the same job ID with the minimal time after the start event, with no other start events for that job in between.
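That matching rule can be sketched in SQL. This is only a sketch against the t24x7_log layout used later in this thread; the message texts ('Job started.', '%completed%', '%error%') are assumptions taken from the queries below, and it presumes no overlapping runs of the same job:

```sql
-- Sketch: pair each start event with the earliest later finish/error event
-- for the same job. Assumes no concurrent runs of one job; the message
-- texts are assumptions, not a documented spec.
SELECT s.job_id,
       s.job_name,
       s.[time] AS start_time,
       (SELECT MIN(e.[time])
          FROM t24x7_log AS e
         WHERE e.job_id = s.job_id
           AND e.[time] >= s.[time]
           AND (e.[message] LIKE '%completed%' OR e.[message] LIKE '%error%')
       ) AS end_time
FROM t24x7_log AS s
WHERE s.[message] = 'Job started.';
```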
Mon Jan 12, 2015 5:38 pm
hberzajs
Joined: 21 Apr 2014 Posts: 6
Reporting all log steps for a job
While this is true, I have tried to write an all-encompassing SQL statement to bring these together within a reasonable time window, for this job and all the others grouped the same way, and I cannot come up with a good query. That is why I posted: to see if anyone else has achieved the same.
And while I understand the logic of keeping them together within a reasonable time range, that is difficult. For some jobs we use loop-and-restart logic to retry every 5 minutes while we wait for an FTP posting; these groups of messages, 5 minutes apart, each belong to one job run series. Then we have jobs that run for a long period of time, and finding their start and finish (if finished) is not easy.
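One possible way to collapse those 5-minutes-apart retry bursts into a single run series is gap-based grouping. This is a sketch only, assuming SQL Server 2012+ window functions, the t24x7_log names used elsewhere in this thread, and a 10-minute gap threshold (tune it to your retry interval):

```sql
-- Sketch: gap-based grouping ("sessionization"). Any gap of more than
-- 10 minutes between consecutive events of a job starts a new run_group;
-- events closer together than that fall into the same run series.
WITH gaps AS (
  SELECT job_id, [time], [message],
         CASE WHEN DATEDIFF(mi,
                  LAG([time]) OVER (PARTITION BY job_id ORDER BY [time]),
                  [time]) > 10
              THEN 1 ELSE 0 END AS new_run
  FROM t24x7_log
)
SELECT job_id, [time], [message],
       SUM(new_run) OVER (PARTITION BY job_id ORDER BY [time]
                          ROWS UNBOUNDED PRECEDING) AS run_group
FROM gaps;
```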
Thanks
Mon Jan 12, 2015 5:47 pm
SysOp
Site Admin
Joined: 26 Nov 2006 Posts: 7948
There is a simple way to make it fast: make sure the target table is indexed on the job ID and event time columns. If the table is huge, you can also partition it, or move old historical data to a separate table. In my experience, when that table is indexed and the queries are well written, they run very fast (instantly, or returning results in a matter of seconds).
Here are some of the queries we use for checking queue utilization and wait times. Hope you will find them helpful.
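For reference, that indexing advice might look like the following; the index names are hypothetical, and the column names follow the queries in this thread:

```sql
-- Hypothetical index names. A composite index on (job_id, [time]) lets the
-- correlated MIN/MAX([time]) subqueries in the reports seek rather than scan;
-- (job_id, property_name) covers the repeated job property lookups.
CREATE INDEX ix_t24x7_log_job_time  ON t24x7_log  (job_id, [time]);
CREATE INDEX ix_t24x7_jobs_job_prop ON t24x7_jobs (job_id, property_name);
```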
UPDATE t24x7_log
SET
status_icon = null,
job_id = null
WHERE job_id = 'null'
SELECT isnull(t.queue_name, '* NEW JOB *') AS queue_name, t.start_time, t.end_time, DATEDIFF(ss, t.start_time, t.end_time) AS duration_seconds,
(
SELECT DATEDIFF(mi, max(l2.[time]), t.start_time)
FROM t24x7_log AS l2
JOIN t24x7_jobs AS j
ON j.job_id = l2.job_id
AND j.property_name = 'QUEUE'
WHERE l2.[time] < t.start_time
AND j.property_value = t.queue_name
AND l2.[time] > '2014-12-23'
) AS minutes_since_prev_event_same_queue,
(
SELECT DATEDIFF(mi, max(l2.[time]), t.start_time)
FROM t24x7_log AS l2
WHERE l2.[time] < t.start_time
AND l2.[time] > '2014-12-23'
) AS minutes_since_prev_event_any_queue,
t.job_name,
t.is_async_job,
(
SELECT CASE j.property_value WHEN 'F' THEN 'semaphore file' WHEN 'D' THEN 'Daily/Weekly ' WHEN 'T' THEN 'All day recurring' ELSE 'Other' END
FROM t24x7_jobs AS j
WHERE j.job_id = t.job_id
AND j.property_name = 'SCHEDULE_TYPE'
) AS schedule_type
FROM
(
SELECT j.property_value AS queue_name, l.job_name, l.[time] AS start_time,
(
SELECT min(l2.[time])
FROM t24x7_log AS l2
WHERE l2.job_id = l.job_id
AND l2.[time] >= l.[time]
AND l2.[message] LIKE '%completed%'
AND l2.[time] > '2014-12-24'
) AS end_time,
CASE WHEN a.property_value = 'Y' then 'YES' ELSE '' END as is_async_job,
l.job_id
FROM t24x7_log AS l
LEFT JOIN t24x7_jobs AS j
ON j.job_id = l.job_id
AND j.property_name = 'QUEUE'
LEFT JOIN t24x7_jobs AS a
ON a.job_id = l.job_id
AND a.property_name = 'ASYNC'
WHERE l.[message] = 'Job started.'
AND l.[time] > '2014-12-24'
) AS t
UNION ALL
SELECT t.queue_name, l.[time], l.[time], 0 duration_seconds,
(
SELECT DATEDIFF(mi, max(l2.[time]), l.[time])
FROM t24x7_log AS l2
JOIN t24x7_jobs AS j
ON j.job_id = l2.job_id
AND j.property_name = 'QUEUE'
WHERE l2.[time] < l.[time]
AND j.property_value = t.queue_name
) AS minutes_since_prev_event_same_queue,
(
SELECT DATEDIFF(mi, max(l2.[time]), l.[time])
FROM t24x7_log AS l2
WHERE l2.[time] < l.[time]
) AS minutes_since_prev_event_any_queue,
'--------- ' + l.[message] + ' ---------' AS job_name,
'--' AS is_async_job,
'--' AS schedule_type
FROM t24x7_log AS l
CROSS APPLY
(
SELECT DISTINCT j.property_value AS queue_name
FROM t24x7_jobs AS j
WHERE j.property_name = 'QUEUE'
) AS t
WHERE l.job_id IS NULL
AND l.job_name = '24x7 Scheduler'
ORDER BY 1, 2
Tue Jan 13, 2015 12:00 am
barefootguru
Joined: 10 Aug 2007 Posts: 195
I've written a similar tracking program in SAS. Basically it reads the log file(s) and sorts by job ID and timestamp (in that order), which should result in pairs of start-finish entries in the sorted log.
It's pretty accurate, though there are a few edge cases to watch out for: log files rolling over, invalid timestamps in the log (http://www.softtreetech.com/support/phpBB2/viewtopic.php?t=23144), daylight saving transitions, and long jobs still running while the tracker runs.
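The same sort-and-pair idea can be expressed in SQL. A sketch using LEAD (SQL Server 2012+), with the t24x7_log names from earlier in the thread; pairing each event with the next event for the same job mirrors what the sorted log produces:

```sql
-- Sketch: order events within each job by timestamp and look one row ahead,
-- so a 'Job started.' row carries the time and text of the following event
-- (normally its finish). The edge cases mentioned above (rollover,
-- still-running jobs) surface here as NULL next_event values.
SELECT job_id,
       [time],
       [message],
       LEAD([time])    OVER (PARTITION BY job_id ORDER BY [time]) AS next_event_time,
       LEAD([message]) OVER (PARTITION BY job_id ORDER BY [time]) AS next_event_message
FROM t24x7_log
ORDER BY job_id, [time];
```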
Tue Feb 10, 2015 5:27 pm