The "master server" == "master scheduler" The idea is to have one central place for job scheduling and monitoring while being able to run them in many places. If you use built-in scripting jobs only, all job deployment is done automatically by the master scheduler. In fact the deployment is performed during job run-time. After the scheduler establishes an agent connection it automatically hands the job definition to agent and ask it to run it as a dynamic job. The agent then runs the job and reports status and errors (if any) back to the scheduler. Note that all job queuing, linking, dependencies and other notification actions are performed on the master. Therefore it is possible to link jobs run using different agents into one or multiple controlled processes. For instance, make agent A setup on a file server to run some processes and FTP results to a database server where agent B picks the file and loads into the database then run a job on a reporting server where agent C runs some reports and emails results to the associated distribution group. Of course if you develop a batch file on the master and want that batch file to run on an agent then you need to take care of copying that batch file to the agent. Hope this helps. : Can you educate me on the 'proper' method of setting up one master server and : multiple remote agents? : The master server should be the only one pointing to the master scheduler, : and all agents should point to their own schedule database? : Does this mean that each time a global script is updated , I need to copy it : to each remote agent? (or at least set up sync jobs) : I am having several major problems in my enviornment, (see other thread) and : I want to start fresh with a properly setup enviornment