magento2 – Magento crontab vs Magento Cron (Scheduled Tasks)

I am working with the Magento 2 cron functionality. I have created a cron job in crontab.xml (every 5 minutes) in the default group:

<job instance="NamespaceModuleNameCronServiceOrdersStatusUpdate" method="execute" name="ns_cron_status_update_for_service_orders">
   <config_path>*/5 * * * *</config_path>
        </job>

After that I noticed that the admin (System -> Configuration -> System -> Cron (Scheduled Tasks)) has some settings ("Cron configuration options for group: default").

So when the cron is executed, what is the difference between the two?

Also, I've created a CLI command for my cron job and added it to the Magento crontab as follows:

* */2 * * * php /var/www/html/bin/magento  service-order:status-update
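# NOTE: "* */2 * * *" fires every minute of every second hour; to run
# once every two hours, the minute field should be 0: "0 */2 * * *".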
#~ MAGENTO START 69dd2b02e1f3a65918182048ea4e29979a849d8942e8f53ed20a4bf10e529b36
#* * * * * /usr/bin/php /var/www/html/bin/magento cron:run 2>&1 | grep -v "Ran jobs by schedule" >> /var/www/html/var/log/cron/magento.cron.log
#* * * * * /usr/bin/php /var/www/html/update/cron.php >> /var/www/html/var/log/cron/update.cron.log
#* * * * * /usr/bin/php /var/www/html/bin/magento setup:cron:run >> /var/www/html/var/log/cron/setup.cron.log
#~ MAGENTO END 69dd2b02e1f3a65918182048ea4e29979a849d8942e8f53ed20a4bf10e529b36

If I am adding my cron to the system crontab, do I need to keep that cron in crontab.xml as well?

scheduling – Sequential Tasks with Greedy Algorithm

We have N tasks that need to be scheduled for processing. Each task consists of two parts that need to be executed in order. The first part is guarded by a mutex, and therefore only one task can be executing it at a time. The second part is constrained in the same way: only one task can be executing it at any given time. For task $i$ we know how much time it needs to spend in each part, namely $m_i$ for the guarded part and $a_i$ for the second part.

The problem is to find a permutation of the tasks such that the time needed to execute all of them is minimized.

Originally I got this question from Greedy sequential/parallel task scheduling, but it got me wondering whether there is a greedy solution to this variation of the problem too, where only one task can be executing the second half at a time as well.

My intuition tells me to sort in decreasing order of $a_i - m_i$, but I can't think of a proof for this. I can't even think of a simple formula for the time it takes to finish all the tasks given some ordering. I think the finish time is one of the values $$\sum_{i=1}^{k} m_i + \sum_{j=k}^{N} a_j$$ for some $k$ with $1 \leq k \leq N$. Does anyone have a closed-form formula for the time given some ordering? Also, is there a greedy algorithm, and a proof to go along with it?
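For what it's worth, the finish time of a fixed ordering can be computed with a simple two-stage recurrence, which agrees with the max-over-$k$ expression above. Below is a minimal Python sketch (an illustration, not an authoritative answer) that computes that finish time and tests the conjectured sort against brute force on small random instances:

import itertools
import random

def makespan(order, m, a):
    """Finish time of the given task order.  Stage 1 (times m) and stage 2
    (times a) each serve one task at a time, and every task must finish
    stage 1 before starting stage 2.  This recurrence is equivalent to
    max over k of (sum of the first k m's + sum of a's from position k on)."""
    c1 = c2 = 0
    for i in order:
        c1 += m[i]               # task i leaves the mutex-guarded part
        c2 = max(c2, c1) + a[i]  # task i finishes its second part
    return c2

def conjectured_order(m, a):
    """The ordering conjectured in the question: decreasing a_i - m_i."""
    return sorted(range(len(m)), key=lambda i: a[i] - m[i], reverse=True)

# Test the conjecture against brute force on small random instances.
random.seed(0)
for trial in range(1000):
    n = 5
    m = [random.randint(1, 10) for _ in range(n)]
    a = [random.randint(1, 10) for _ in range(n)]
    best = min(makespan(p, m, a) for p in itertools.permutations(range(n)))
    greedy = makespan(conjectured_order(m, a), m, a)
    if greedy != best:
        print("counterexample:", m, a, "greedy =", greedy, "optimal =", best)
        break
else:
    print("no counterexample found in 1000 random instances")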

tasks – Is there a way to specify different value sets for a custom field in Clickup?

Basically I want to have the following situation:

  • List X with label-type custom field “My custom field” with values A, B and C.
  • List Y with label-type custom field “My custom field” with values D, E and F.
  • Both lists have a view created from a template, which shows "My custom field". When I update the template, I don't want to have to manually change what's shown.

I know it's possible to create two different custom fields with the same name and give them different values, but then the template does not detect them as the same field.

python – Airflow: Importing decorated Task vs all tasks in a single DAG file?

I recently started using Apache Airflow and one of its new concepts, the TaskFlow API. I have a DAG with multiple decorated tasks, where each task has 50+ lines of code, so I decided to move each task into a separate file.

After consulting Stack Overflow, I managed to move the tasks in the DAG into a separate file per task. Now, my questions are:

  1. Do both of the code samples shown below work the same? (I am worried about the scope of the tasks.)
  2. How will they share data between them?
  3. Is there any difference in performance? (I read that SubDAGs are discouraged due to performance issues; this is not a SubDAG, but I am still concerned.)

All the code samples I see on the web (and in the official documentation) put all the tasks in a single file.

Sample 1

import logging
from airflow.decorators import dag, task
from datetime import datetime

default_args = {"owner": "airflow", "start_date": datetime(2021, 1, 1)}

@dag(default_args=default_args, schedule_interval=None)
def No_Import_Tasks():
    # Task 1
    @task()
    def Task_A():
        logging.info(f"Task A: Received param None")
        # Some 100 lines of code
        return "A"

    # Task 2
    @task()
    def Task_B(a):
        logging.info(f"Task B: Received param {a}")
        # Some 100 lines of code
        return str(a + "B")

    a = Task_A()
    ab = Task_B(a)
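    # Calling the decorated tasks wires the dependency graph: Task_B
    # receives Task_A's return value via XCom, so it runs after Task_A.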

No_Import_Tasks = No_Import_Tasks()

Sample 2 Folder structure:

- dags
    - tasks
        - Task_A.py
        - Task_B.py
    - Main_DAG.py

File Task_A.py

import logging
from airflow.decorators import task

@task()
def Task_A():
    logging.info(f"Task A: Received param None")
    # Some 100 lines of code
    return "A"

File Task_B.py

import logging
from airflow.decorators import task

@task()
def Task_B(a):
    logging.info(f"Task B: Received param {a}")
    # Some 100 lines of code
    return str(a + "B")

File Main_DAG.py

from airflow.decorators import dag
from datetime import datetime
from tasks.Task_A import Task_A
from tasks.Task_B import Task_B

default_args = {"owner": "airflow", "start_date": datetime(2021, 1, 1)}

@dag(default_args=default_args, schedule_interval=None)
def Import_Tasks():
    a = Task_A()
    ab = Task_B(a)
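    # The imported @task functions are registered with this DAG because
    # they are called inside the @dag-decorated function.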

Import_Tasks_dag = Import_Tasks()

Thanks in advance!

shell – Delaying pods creation or recreating pods until tasks in other pods are completed

I need some help with better Kubernetes resource deployment.

Essentially, I have two components: C1 and C2.

My goal is to not deploy C2 pods or any other resources until the tasks running in C1 have finished.

C1 installs some dars onto the server machine; this takes almost an hour to finish.
C2 depends on a dar that C1 installs.

We have a single Helm chart in which all resources are defined.
As soon as we install that chart, it deploys every resource (pods, services, stateful sets) in parallel.
C1 pods come up, and C2 pods also come up.
But when you check the container logs, the C2 containers have lots of errors saying classes or resources were not found.

I am looking for a way to wait until all dars are deployed through C1, or, failing that, to at least keep destroying and recreating the C2 pods until all dars are installed.
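One pattern that could help here (a sketch under stated assumptions, not something from the question) is to give C2 an initContainer that blocks until C1 signals it is done. Assuming, hypothetically, that C1 exposes an HTTP readiness endpoint that returns 200 once all dars are installed, the init container could run a small wait loop like this (the URL and timings below are placeholders):

# wait_for_c1.py - hypothetical initContainer script for C2's pods.
import sys
import time
import urllib.request

READINESS_URL = "http://c1-service:8080/ready"  # assumed endpoint; adjust to your chart
POLL_INTERVAL_SECONDS = 30
TIMEOUT_SECONDS = 2 * 60 * 60  # C1 takes ~1 hour, so allow two

deadline = time.monotonic() + TIMEOUT_SECONDS
while time.monotonic() < deadline:
    try:
        with urllib.request.urlopen(READINESS_URL, timeout=10) as resp:
            if resp.status == 200:  # C1 reports all dars installed
                print("C1 is ready; letting C2 start.")
                sys.exit(0)
    except OSError:
        pass  # C1 not reachable yet; keep polling
    time.sleep(POLL_INTERVAL_SECONDS)

print("Timed out waiting for C1.", file=sys.stderr)
sys.exit(1)

Alternatively, a Helm post-install hook on C1's installer job, or simply letting C2 crash-loop until the dars exist, would approximate the "keep destroying and recreating" behavior described above.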

development – How can I programmatically prevent task rename and deletion on Project Server 2019 in Project Web Access?

How can I programmatically prevent task rename and deletion on Project Server 2019 in Project Web Access (/PWA/Project%20Detail%20Pages/Schedule.aspx?ProjUid=00000000-0000-0000-0000-000000000000&ret=0)?
I have three candidate solutions for this: a Project Professional add-in, a Project Server local event handler, and a custom web service. But I still can't prevent task rename and deletion on Project Server 2019 in Project Web Access. When I use this technique (https://sharepointprojectserver.com/project-online-jsom-event-handler-for-save-publish-any-other-buttons/) I can only log button clicks, but can't prevent them. Something like:
_grid = projectDrilldownComponent.get_GridSatellite().GetJsGridControlInstance();
_grid.AttachEvent(SP.JsGrid.EventType.BeforeTaskDelete, ...);

or something like this in a Project Professional add-in:

private void Application_ProjectBeforeTaskDelete(Task tsk, ref bool Cancel)
private void Application_ProjectBeforeTaskChange(Task tsk, PjField Field, object NewVal, ref bool Cancel)

Or maybe I can do it in a local Project Server event handler?
With kind regards, Vasily

scheduled tasks – How to create an email reminder in SharePoint 2013

I am trying to create a SharePoint task reminder using SharePoint 2013.
I have implemented the following workflow, but it is not working as expected.

What am I doing wrong? Please see the attached screenshot.

(Screenshot: SharePoint task reminder workflow)

EDIT:

I tried a different approach, but still no result.

What am I doing wrong?

I set up the Today variable, add 7 days to it, and output the result to Due Date. Then I create a loop that runs while Today is less than Due Date.

I receive no emails with this.