SoftTree Technologies Technical Support Forums
Jobremoterun, remote agents

 
SoftTree Technologies Forum Index » 24x7 Scheduler, Event Server, Automation Suite
LeeD



Joined: 17 May 2007
Posts: 311
Country: New Zealand

Post: Jobremoterun, remote agents
Hi
I'm trying to use JobRemoteRun to create a process of dependent jobs with a master scheduler holding all the jobs and a remote agent on which all jobs will be executed.

When running locally JobRun worked perfectly.

With the master/remote-agent setup mentioned above, how should I handle that? Should JobRemoteRun be pointed at the master scheduler? Do I need to create a remote agent entry on the remote agent for the master?
Fri Jun 22, 2007 12:44 am
SysOp
Site Admin


Joined: 26 Nov 2006
Posts: 7955

You cannot do that. JobRemoteRun refers to a locally defined job whose definition you want to submit for execution to a remote system. Agents don't have their own jobs and so cannot use JobRemoteRun. On the other hand, schedulers can send jobs to each other for remote execution.

If you need more than a plain-vanilla job dependency like "run remote job, then based on the status execute another job, local or remote," you can use global variables with script-type jobs: have the remote job update a global variable on the agent, then after JobRemoteRun obtain the value of that variable using GetRemoteVariable, and then do something like CHOOSE [value of the variable] CASE 1 DO JobA CASE 2 DO JobB ... CASE n DO JobN END. Of course this is just pseudo-code illustrating the idea. There are also examples available in the on-line help; see the GetRemoteVariable and SetRemoteVariable topics for more details.
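A slightly fuller sketch of that idea, in JAL-style pseudo-code (the job names, the agent name AGENT_B, the variable name REMOTE_STATUS and the parameter order are all invented for illustration; see the JobRemoteRun and GetRemoteVariable topics in the on-line help for the exact signatures):

```
// Run the remotely deployed job, then branch on a global
// variable the remote job set on the agent.
JobRemoteRun  JOB_1  AGENT_B                       // job #1 runs on the agent and sets REMOTE_STATUS there
GetRemoteVariable  AGENT_B  "REMOTE_STATUS"  status // value retrieved from the agent
CHOOSE [status]
    CASE 1 DO JobA    // e.g. trigger file arrived, start the copy job
    CASE 2 DO JobB    // e.g. nothing found, run the cleanup job
END
```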
Fri Jun 22, 2007 1:50 am
LeeD



Joined: 17 May 2007
Posts: 311
Country: New Zealand

The dependency I'm trying to create is relatively simple; indicatively:

a file-watch job, run on a remote agent, that does some action and then needs to run the next job in the chain (again on the remote agent), which in turn needs to run the next job in the chain, and so on.

Originally I did this with JobRun, which worked well when the full job chain runs locally.
Now that I am trying to run the chain on a remote agent, the first job fails, saying job 2 is not found in the active job pool.

I've tried changing the JobRun command to a JobRemoteRun command, but no cigar; I could be using it wrong though. Context shouldn't be a problem?
Sun Jun 24, 2007 6:12 pm
SysOp
Site Admin


Joined: 26 Nov 2006
Posts: 7955

It is not going to work like this. Here is why. Scheduler A (computer A) deploys job #1 to agent B (computer B) and tells the agent to run the deployed job. Agent B starts running the job according to the job definition and encounters an instruction to execute job #2. There is no such job on agent B, and of course the agent says "I don't know this job". As a result, job #1 fails and the agent reports the job status back to the scheduler.


Solution #1 (Windows-to-Windows): Scheduler A deploys job #1 to agent B and makes the job check for the local file on the computer where the job is run (this physically occurs on computer B). The result of this check is stored somewhere, for example in a global variable (this still occurs on computer B). After completing job #1, the scheduler obtains the result of the check from the remote agent (GetRemoteVariable is executed on A while the value is retrieved from B) and analyses the obtained value (this occurs on computer A). If the required condition is satisfied, the scheduler then deploys job #2 to agent B and asks the agent to run that job.
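The flow of Solution #1 could be sketched like this (JAL-style pseudo-code; the variable name FILE_FOUND, the paths and the exact statement parameters are assumptions, not the real syntax):

```
// Inside job #1, a script-type job deployed to and executed on agent B:
FileExists   "d:\inbox\trigger.dat"  found    // checked on computer B
SetVariable  "FILE_FOUND"  found              // stored in B's global variable pool

// Back on scheduler A, after JobRemoteRun of job #1 completes:
GetRemoteVariable  AGENT_B  "FILE_FOUND"  result  // value fetched from B
IF result = TRUE THEN
    JobRemoteRun  JOB_2  AGENT_B              // deploy and run the dependent job on B
END IF
```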

Solution #2 (Windows-to-Windows or Windows-to-Unix): You install the 24x7 Remote Agent (the so-called RA agent) on computer B. Don't confuse this software with 24x7 running in agent mode. Job #1 is programmed to call RAConnect to connect to the RA agent and then RAFileExists to check for the file remotely. In RAFileExists you refer to the file path as you see it on computer B. The same job #1 analyzes the result of the check and, if the required condition is satisfied, it then uses RARun or RARunAndWait to execute a process on computer B.
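As a sketch (pseudo-code again; the host name, paths and parameter order are made-up placeholders, and the closing RADisconnect call is my assumption; see the RAConnect, RAFileExists and RARunAndWait help topics for the exact usage):

```
// Job #1 on scheduler A, talking to the RA agent installed on computer B:
RAConnect     "computerB"                        // connect to the RA agent
RAFileExists  "/data/inbox/trigger.dat"  found   // path as seen on computer B
IF found THEN
    RARunAndWait  "/usr/local/bin/process.sh"    // run the process on B and wait for it
END IF
RADisconnect
```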
Sun Jun 24, 2007 8:42 pm
LeeD



Joined: 17 May 2007
Posts: 311
Country: New Zealand

So, to clarify solution #1:
Job #1, run on agent B, uses the FileExists JAL statement to check for the trigger file, and initialises and sets a global variable when it finds it.

Job #2 checks using GetRemoteVariable and, if the variable equals yes, uses JobRemoteRun to run job #3 (which might do a file copy) on agent B.


I'm trying to get job chaining without using semaphore files between each step, as that introduces an undesirable delay because of the minimum 1-minute interval with a normal file watch. Using a job (either repeating or looping) to check the value of the remote variable every so often will surely either overly stress the scheduler (if it checks every 5 seconds or so) or, if the interval is set higher, fail to fix the delay issue.

How about if I used a 'launcher' job which is run locally and uses JobRemoteRun to run each job one by one on the remote agent? This is a little inelegant but could work. Will it fail if one of the remote jobs fails, as is desirable?

What about using JobRun as a notification action? I think I've read that you don't recommend that as a chaining mechanism. Why not?

Also, in the manual:

Important note:
• The JobRemoteRun statement is not supported in remote jobs executed on 24x7 Remote Agents. An error "Job not found" will occur if you use this statement. However, it can be used if the remote job is executed on the 24x7 Master Scheduler.

I don't quite understand the 'however' bit. Doesn't that mean the remote job is not remote at all? Or is it run on a peer scheduler rather than a remote agent?
Sun Jun 24, 2007 10:13 pm
SysOp
Site Admin


Joined: 26 Nov 2006
Posts: 7955

Jobs can be submitted not only to agents (24x7 running in agent mode) but also to other schedulers running with the server option enabled. Here is an example: suppose scheduler A has jobs #1, #2 and #3, scheduler B has jobs #4, #5 and #6, and there is also agent C. Scheduler A can deploy job #1 to scheduler B, and that job can contain an instruction (pseudo-code) JobRemoteRun #5 to run job #5 on agent C. When job #1 is remotely run on B and that instruction is reached, scheduler B deploys its job #5 to agent C. So, essentially, job #1 started by scheduler A creates a chain reaction, making scheduler B run its job #5 on agent C.
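In pseudo-code, the key point is where the job reference gets resolved:

```
// Job #1: defined on scheduler A, deployed to scheduler B, executed by B.
// ... job #1's own processing ...
JobRemoteRun  #5  AGENT_C   // job #5 is looked up in B's job database,
                            // because B is the scheduler actually running this script
```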

Since agents don't have their own jobs, this cannot happen when B is run as an agent.

Regarding the 'launcher' job: if any called job fails, the entire process fails too; the job is aborted (unless you use OnError… commands in the script or the job's Ignore Errors option) and then the default error handling associated with the launcher job is invoked.

There are two disadvantages to using JobRun and similar notification actions for job chaining:
1. This dependency method is not persistent. If the scheduler is shut down, the job is aborted, the system is halted or something else bad happens, the processing state of the entire job stream is lost and the entire process needs to be restarted. By contrast, if you use file-based dependencies, which represent a persistent state, in case of a scheduler/system restart the process automatically resumes from the last failed step, or as set up by the dependencies.
2. You lose the ability to use job queues efficiently and to control job order and overall system load. Small jobs are forced to become part of a big job process, which cannot be stopped or controlled at the job/queue level.

Yet, there could be some business cases when something is needed right away and it cannot wait for the queue and other pending jobs.

***
Now let's talk about your remote job situation. Why are you trying to make the entire thing so complicated? Why not run everything in job #1? Let it check the file and, if the file is found, right away run whatever you need on B. No multiple steps and multiple network trips involving multiple connects, disconnects, data transfers and so on. Simple, yet elegant.

The main purpose of the built-in scripting functions is to provide you with methods to automate complex processing without much work and without the need to use lots of external processes and components. For example, a job on scheduler A can check for files on B using built-in functions for FTP, SFTP, Telnet or file shares, without a need to involve agent B for this work.
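For example, the whole chain described earlier could collapse into a single script-type job deployed to agent B (pseudo-code; the paths and the command are placeholders):

```
// One job, deployed to and executed on agent B: check and act in the same place.
FileExists  "d:\inbox\trigger.dat"  found
IF found THEN
    RunAndWait  "d:\scripts\process.cmd"   // the action runs right where the file is,
                                           // no extra network round trips
END IF
```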
Sun Jun 24, 2007 10:55 pm
LeeD



Joined: 17 May 2007
Posts: 311
Country: New Zealand

Thanks for your time by the way.

a) Leaving aside whether it's a great idea, would it work to mirror the relevant jobs (with a script or whatever) between two schedulers, labelling one as the master and one as the slave, and use JobRun to run the chain as designed, triggered and controlled? That would solve the context issues, if I'm not wrong.

b) A non-persistent dependency is not so much of an issue for us. If a failure occurs, the 24x7 logs and the action logs show at which step it failed, and we can restart the job chain from that step. In my testing of local file watches for job chaining, I've needed to delete the semaphore file before running the action of the job anyway, to prevent it running over itself on long file copies for example; that means the scheduler won't restart the process automatically anyway, and for copy processes there are files lying around all over the place which would stop it restarting properly anyway.

I don't really see how semaphore job chaining handles job order better than JobRun either. How is job queue efficiency affected?

The point of running discrete jobs rather than one big fat one is so that we can restart from any point in the process, which is a business requirement. Network load is really not an issue, as the boxes will be next to each other. The main file watch will be done from the master anyway, but the action needs to be done on the second machine.
Sun Jun 24, 2007 11:21 pm
SysOp
Site Admin


Joined: 26 Nov 2006
Posts: 7955

In regard to a): if you just replicate job definitions as-is, you are going to end up with two identical sets of jobs running. While it is possible to disable/enable or modify job properties during the replication, the maintenance would likely become a nightmare; for example, if a job becomes disabled, without a thorough investigation the personnel wouldn't know for sure whether that had happened as a result of the replication or because of some other problem.
Mon Jun 25, 2007 12:06 am