operating systems – LONG TERM SCHEDULER and how it interacts with the USER

If I launch the Chrome browser and then Android Studio, do they go directly into the ready queue (i.e., allocated space in main memory/RAM)? Which processes actually go into the job queue? Does the job queue only get filled when RAM has no space available? What I basically want to ask is: if I open a word processor and a browser, are these processes first placed in the job queue and then moved to the ready queue, or do they go directly into the ready queue?

I know Windows and UNIX have almost done away with the LONG TERM SCHEDULER, but consider a general interactive OS.

0xC0000142 Errors from Tasks in Windows Task Scheduler

I previously had many C++ .exe programs (developed with C++ Builder XE7) running as scheduled tasks on a Windows Server 2008 R2 Datacenter machine. These tasks were run by the SYSTEM account, and I never had any issues with them before.

I recently imported these tasks to a new Windows Server 2019 Datacenter machine and set them up in the Task Scheduler. The same SYSTEM account is used to run the tasks, but on the updated Windows Server these tasks now give me a run result of 0xC0000142.

Most of the resources I found online say to increase the desktop heap size in the Registry Editor. I have done this multiple times and restarted the server after each increase, but I was still getting the same result, so I reset the desktop heap size back to its original value.
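For reference, the desktop heap settings in question live in the SharedSection part of the "Windows" value under the SubSystems key; the third number is the heap size (in KB) for non-interactive window stations, which is what SYSTEM scheduled tasks use. Checking it looks roughly like this:

# Read the desktop heap configuration; the "Windows" value contains a
# SharedSection=1024,20480,768 style triple (third number = non-interactive desktop heap, in KB).
$key = "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems"
(Get-ItemProperty -Path $key -Name Windows).Windows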

I also thought it had to do with missing C++ redistributables – the new server only had the 2015-2019 redistributables, while the 2008 R2 server also had the 2013 and 2008 ones. So I installed these extra redistributables, but I still got the same result.

I have tried manually recreating the tasks, running them under different domain admin accounts, and toggling the "run only when user is logged on/run whether user is logged on or not" setting. All of these led to the same 0xC0000142 error.

Also, no errors were shown in the Task Scheduler history or in the Event Viewer.

I tried monitoring the task with Process Monitor; below is a snippet of the log output leading up to the exit code. I know that error code 0xC0000142 is STATUS_DLL_INIT_FAILED. Right before the failure, tzres.dll.mui and tzres.dll are the last files accessed. There are no failure messages for these files other than FILE LOCKED WITH ONLY READERS – I believe this indicates read-only access, but these files are also read-only on the old server, where the tasks work.

[Process Monitor log screenshot]

Any extra tips/guidance would be much appreciated!

How do I properly sync data between two Windows 10 systems on the same LAN using Robocopy, a .bat file, and Task Scheduler?

I’ve read a number of posts regarding the use of robocopy to sync data between two Windows systems. I tried various configurations, and the settings I currently have in place are what seemed to work best for most users.

System A runs Windows 10 Home, and its desktop is shared to a Microsoft user account with full privileges.
System B runs Windows 10 Pro, and its desktop is shared to the same Microsoft user account with full privileges.

The .bat files were stored on each system’s respective desktop and scheduled to run every three minutes.

System_A sync.bat:
cd C:\Users\username\Desktop
robocopy C:\Users\username\Desktop\directory_to_sync '\\System_B\Desktop\directory_to_sync' /E /MIR /mt /z

System B sync.bat:
cd C:\Users\username\OneDrive\Desktop
robocopy C:\Users\username\OneDrive\Desktop\directory_to_sync '\\System_A\Desktop\directory_to_sync' /E /MIR /mt /z
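For clarity, the command I'm aiming for is essentially the following, written out with plain double quotes and with a log file added for troubleshooting (the user and share names are placeholders):

rem /MIR already implies /E; /LOG+ appends each run's output so failures can be inspected later.
robocopy "C:\Users\username\Desktop\directory_to_sync" "\\System_B\Desktop\directory_to_sync" /MIR /Z /MT /R:2 /W:5 /LOG+:"C:\Users\username\Desktop\sync_log.txt"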

Using System_A's sync.bat as an example, I set the task to run with highest privileges and configured it for Windows 10, since it defaulted to Vista/Server 2008. I triggered it to run at task creation/modification and to repeat every three minutes indefinitely, stopping only if the task runs longer than three hours. I set it active from a time earlier this morning and synchronized it across time zones.

The Actions tab is where most of the posts I'd read made their changes, with varying degrees of success.
My configuration is as follows (an equivalent schtasks sketch follows the list):

Action: Start a program
Program/script: cmd
Add arguments (optional): /c sync.bat (Note: The /c was auto-added by Windows for whatever reason.)
Start in (optional): C:\Users\username\Desktop
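Expressed as an equivalent schtasks command (only a sketch to make the configuration explicit; the task name and paths are placeholders):

schtasks /Create /TN "DesktopSync" /TR "cmd /c C:\Users\username\Desktop\sync.bat" /SC MINUTE /MO 3 /RL HIGHEST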

The job history reports that the task completes with an operational code of 2, but nothing is synced. I'm out of ideas, so any help would be greatly appreciated. Thank you.

design – Designing a task scheduler and processing mechanism in a single thread, with the ability to pause/disable/resume it

Hello, I have a problem where I need to schedule some "tasks" at a certain point in time (using the system clock; the time is saved in the Task.processingTime_ member).

The tasks can come from other threads, but the task processing logic must be done in a single thread.

My current design:
The processing thread is stuck in the while loop:

while (shouldRun_)
{
    if (auto task = taskProvider_.provide())
    {
        taskProcessor_.process(std::move(*task));
    }
}

Beforehand I had the provider (AKA scheduler, though not renamed – or more likely a provider/scheduler hybrid class :P) and the processor in this while loop. The split into TaskProcessor seemed to work well, but splitting the scheduling logic into TaskProvider doesn't show any direct benefits. Perhaps the design isn't the best.

Code for the Provider:

class TaskProvider
{
public:
    std::optional<Task> provide()
    {
        std::unique_lock lock{mutex_};
        while (true)
        {
            cv_.wait(lock, [this] { return shouldTryReturnTask(); });
            if (scheduleState_ == ScheduleState::Disabled)
            {
                return {};
            }
            const auto status = cv_.wait_until(lock, tasks_.top().processingTime_);
            if (status == std::cv_status::timeout && scheduleState_ == ScheduleState::Running)
            {
                auto ret = tasks_.top();
                tasks_.pop();
                return ret;
            }
        }
    }

    void add(Task task)
    {
        if (scheduleState_ == ScheduleState::Running)
        {
            const std::scoped_lock lock{mutex_};
            tasks_.push(std::move(task));
            cv_.notify_one();
        }
    }

    void resume()
    {
        const std::scoped_lock lock{mutex_};
        scheduleState_ = ScheduleState::Running;
        cv_.notify_one();
    }

    void pause()
    {
        const std::scoped_lock lock{mutex_};
        scheduleState_ = ScheduleState::Paused;
        cv_.notify_one();
    }

    void disable()
    {
        const std::scoped_lock lock{mutex_};
        scheduleState_ = ScheduleState::Disabled;
        cv_.notify_one();
    }

    void clear()
    {
        const std::scoped_lock lock{mutex_};
        scheduleState_ = ScheduleState::Clearing;
        tasks_ = decltype(tasks_){};
    }
private:
    bool shouldTryReturnTask() const
    {
        return scheduleState_ != ScheduleState::Paused && (!tasks_.empty() || scheduleState_ == ScheduleState::Disabled);
    }

    struct TaskComp
    {
        bool operator()(const Task& lhs, const Task& rhs)
        {
            return lhs.processingTime_ > rhs.processingTime_;
        }
    };

    enum class ScheduleState
    {
        Running,
        Paused,
        Disabled,
        Clearing,
    };

    std::atomic<ScheduleState> scheduleState_{ScheduleState::Running};
    mutable std::mutex mutex_;
    mutable std::condition_variable cv_;
    std::priority_queue<Task, std::vector<Task>, TaskComp> tasks_;
};

This design threw a few problems at me. I don't think my solution is the best, and I am trying to find alternatives, but I can't figure them out by myself.

How the system is used:
a) processing the next available task

This whole thing must happen in one thread. The application already uses too many threads, and an additional one isn't needed to satisfy this use case.

The processing thread calls taskProvider_.provide() and is then blocked by the TaskProvider. provide() returns a task when the time has come for it to be processed, and that task is forwarded to the TaskProcessor for the actual processing logic.

New tasks can also be scheduled while taskProvider_.provide() is blocked, and such a new task might have to be processed sooner than any of the queued ones. This is why I used cv_.wait_until: I want to block the thread until the soonest task is due, and when a sooner task is added I notify the condition variable, so if the wait hasn't timed out yet it re-arms the sleep for the new soonest task.
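To illustrate use case (a), this is roughly how another thread schedules a sooner task while provide() is blocked (a sketch only; it assumes Task is default-constructible and that its only member relevant here is processingTime_):

#include <chrono>

// Sketch: provider is the TaskProvider shared with the processing thread.
void scheduleUrgentTask(TaskProvider& provider)
{
    Task urgent{};
    urgent.processingTime_ = std::chrono::system_clock::now() + std::chrono::milliseconds{50};
    // add() pushes the task and calls notify_one(); the blocked provide() wakes up,
    // sees the new tasks_.top() and re-arms cv_.wait_until on the earlier deadline.
    provider.add(std::move(urgent));
}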

b) pause

I want to pause the running thread, and all of the tasks should be cleared.
From the outside this is done by calling TaskProvider::pause and then TaskProvider::clear.

Adding new tasks should also be ignored once pause has been called.

Since the processing thread will be blocked inside the TaskProvider, I thought the pause should be implemented by making the condition variable wait until it is woken by the resume function.
If I implemented the pause in the working thread, I would still need a way to signal the provider to exit provide().

c) resume
Resume is quite simple: we change the state variable, which allows the first condition-variable wait to proceed if a task gets added in the meantime.

d) Disabling

This is needed when I want to destroy the working thread.
The working thread might be blocked in the provide call, so I created this mechanism of returning std::optional. The call to disable prevents cv_.wait_until from returning a task, and the first cv_.wait is woken up by the call to TaskProvider::disable.

Thanks to that, when I want to destroy the working thread I would do:

taskProvider_.disable();
workingThread.shouldRun_ = false;

Please let me know if you can find a simpler solution, especially one that I could unit test 🙂
What I dislike the most is having these two condition-variable wait calls and the std::optional return just so I can handle the exit from the running thread (maybe an exception would be better; the user won't be pausing it very frequently, so I find that a little more elegant than std::optional).

Map SharePoint Online Library as Network Drive With PowerShell in a Scheduler Job

I am trying to map a SharePoint Online document library to a network drive with PowerShell. It works when I execute the PowerShell script with my account, but it fails when I try to execute the same code in a Scheduler job which is configured to run as my account.

Start-Transcript -Path "$PSScriptRoot\test.log";

$SPLibraryURL = "\\mycompany.sharepoint.com\DavWWWRoot\sites\Sandpit\TEST Incoming Mail"

Write-Host $SPLibraryURL

New-PSDrive -Name "B" -PSProvider FileSystem -Root $SPLibraryURL

$sourceRiskFile = Get-ChildItem "B:"

Foreach ($file in $sourceRiskFile)
{
    Write-Host $file.Name
}

Stop-Transcript

I have also tried this URL, but with no success:

https://mycompany.sharepoint.com/sites/Sandpit/TEST%20Incoming%20Mail

The error I am getting is:

New-PSDrive : The specified drive root
"https://mycompany.sharepoint.com/sites/Sandpit/TEST%20Incoming%20Mail" either does not exist,
or it is not a folder.

The job configuration is:

[Screenshot of the scheduled job configuration]

There must be something to do with the Scheduler job, as the same code works fine when I run it in PowerShell directly.
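One thing I can still try is a quick sanity check inside the script (captured by the transcript) to confirm whether the WebDAV path is even visible in the scheduled context – something along these lines (WebClient is the service that backs these UNC paths):

# Diagnostic sketch: can this context see the path, and is the WebClient service running?
Write-Host ("Test-Path result: " + (Test-Path $SPLibraryURL))
Write-Host ("WebClient service: " + (Get-Service -Name WebClient).Status)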

python – Save a dynamically created DAG in Airflow instead of registering it to the scheduler

I would like to save my dynamically created DAGs and not have them auto-scheduled by the Airflow scheduler, so I am not using the globals() trick. Is this the right way? I saw pickling_dag, but it doesn't seem apt. I simply want to see the DAG that was dynamically created and save it, instead of just auto-scheduling it on Airflow. I am reading some Spark confs from a YAML config file. This is the code:


# Imports needed by this snippet (Airflow 1.10-style module paths)
import pickle
from datetime import datetime, timedelta

import yaml
from airflow import DAG
from airflow.models import DagPickle
from airflow.operators.bash_operator import BashOperator


def create_dag(dag_id,
               schedule,
               dag_number,
               default_args,
               session=None):

    with open("/Users/conf.yaml") as stream:
        try:
            t3Params = yaml.safe_load(stream)
        except yaml.YAMLError as err:
            print(str(err))

    dag = DAG(
        dag_id,
        is_paused_upon_creation=False,
        default_args=default_args,
        description='A dynamic DAG generated from conf files',
        schedule_interval=schedule,
    )

    myt3Params = {
        'queue': t3Params['conf']['queue'],
        'exe_mem': t3Params['conf']['num_exe'],
        'exe_core': t3Params['conf']['exe_core']
    }

    t3Cmd = "ssh myCluster@127.10.10.1 bash /home/myCluser/test-runner.sh {{ params.queue }} {{ params.exe_mem }} {{ params.exe_core }}"

    task_id="xyz"
    with dag:
        t3 = BashOperator(task_id='test_spark_submit3', bash_command=t3Cmd, params=myt3Params, dag=dag)

    dag_pickle = DagPickle(t3)
    pickle.dump(dag, open("/Users/airflow/dags/pickled_dags", 'wb'))

    session.add(dag_pickle)
    session.commit()


    return dag

for n in range(1, 5):
    dag_id = 'saved_dynamic_conf_dags_{}'.format(str(n))

    default_args = {
        'owner': 'me',
        'depends_on_past': False,
        'start_date': datetime(2020, 9, 23, 21, 00),
        'email': ['airflow@example.com'],
        'email_on_failure': False,
        'email_on_retry': False,
        'retries': 1,
        'retry_delay': timedelta(minutes=1)
    }

    schedule = '35,36,37 * * * *'

    dag_number = n

    create_dag(dag_id,schedule,dag_number,default_args)
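For completeness, this is roughly how I intend to load the saved DAG back later to inspect it (just a sketch; it reuses the pickle path from above):

# Sketch: read the pickled DAG back and list its tasks, without going through the scheduler.
with open("/Users/airflow/dags/pickled_dags", "rb") as f:
    saved_dag = pickle.load(f)
print(saved_dag.dag_id, saved_dag.task_ids)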

Am I using pickle correctly? Is there a better way to do this?

architecture – Design time based job scheduler

Currently, I have a script

install_crontab.py -u <user> -c <config>

This script takes care of installing a cron job that runs as a given user. In my install_crontab script, I check (using sudo) whether the user running the script has privileges over the user they are trying to install the cron job for. If not, the script fails. If so, it updates the crontab of the given user – crontab -u user1 -e
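For context, the sudo-based privilege check is essentially a probe along these lines (a simplified sketch, not the exact code from the script):

import subprocess

def can_act_as(target_user: str) -> bool:
    # "sudo -n -u <target> true" succeeds only if the invoking user may run
    # commands as target_user without being prompted for a password.
    result = subprocess.run(["sudo", "-n", "-u", target_user, "true"])
    return result.returncode == 0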

Now I am writing my own job scheduler (which does not use Unix cron but has its own in-memory queue and triggering mechanism). There are four components to this application:

  1. One Database (where job schedules are stored for all users).
  2. One schedule management service – RESTful service for DB read/write operations.
  3. ‘n’ scheduler services – one service per Unix user. This is responsible for actually triggering the job. It maintains an in-memory queue of jobs to trigger for the user this service is running as. The in-memory queue is refreshed periodically from the schedule information stored in the database.
  4. Installer interface (same install_crontab.py script).

Users use the installer script to schedule jobs.

  • The installer script does auth-related validation by checking whether the user running the script has privileges over the user they are trying to schedule the job for (using sudo checks).
  • The installer script then makes a REST call (instead of updating the crontab) to the schedule management service, which takes care of writing the schedule and job-related information to the DB.

At the moment, I don't have auth-related validation at the schedule management service layer, because I assume that calls to the schedule management service have already been authorized by the installer script. But I see a security loophole here: users can make direct REST calls to the schedule management service and install jobs that run as other users. How can I prevent this from happening? Note that I cannot run the schedule management service as the root user, but I still want to build some auth layer to close this loophole.