SQL Server – Need help with concurrently running queries

I need help understanding whether the very high CPU usage on my read-only server is a problem. Read-only means this server is a log shipping (LS) secondary in read-only standby mode, and the DW team can run any kind of crappy query on it for hours.

Issue – the LS primary is mostly OLTP and is a vendor-based product that avoids clustered indexes, because the primary database undergoes extensive writes throughout the day. The restore on the secondary occurs once a night, and users are disconnected (DC) during that window. For the rest of the day there are always SELECT queries running.

Index tuning is quite difficult here (no vendor support) because the primary cannot carry many indexes, whether a clustered index (CI) or additional nonclustered indexes (NCIs). As a result, most table reads on the secondary go through table scans or key lookups.

The read-only server has the following configuration:

Logical processors = 80
Total RAM = 512 GB, max server memory = 440 GB
NUMA nodes = 4
MAXDOP = 8 and cost threshold for parallelism (CTFP) = 5 (I know this is too low, but setting it even to 100 on this server does not help; per my analysis the average subtree cost of most queries here is over 1000)
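For reference, these instance-level settings can be read back from sys.configurations; below is a minimal pyodbc sketch, where the driver and server name in the connection string are placeholders, not the real environment:

    import pyodbc

    # placeholder connection string; driver and server name are assumptions
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=my-readonly-server;DATABASE=master;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()

    # sys.configurations exposes the settings listed above
    cursor.execute("""
        SELECT name, value_in_use
        FROM sys.configurations
        WHERE name IN ('max degree of parallelism',
                       'cost threshold for parallelism',
                       'max server memory (MB)')
    """)
    for name, value in cursor.fetchall():
        print(name, value)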

At most 6-10 of these crappy queries typically come in and run in parallel, and CPU utilization sits around 90%. Users do not complain much, because they do not mind queries that take more than 4 hours.
But from a server point of view I don't think this is good, especially when 10-15 of them start together and the CPU hits 100%; I'm afraid the server will fall over.

Since index tuning is off the table for now, should I increase MAXDOP to 16, given that on average only about 600 of the 2,944 available worker threads are in use, so that these queries execute faster? Or should I lower it to 4 or 6 to keep CPU usage down?
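For what it's worth, the 2,944 figure matches SQL Server's documented default for max worker threads on a 64-bit instance with more than 64 logical processors; a quick sanity check in Python:

    # default "max worker threads" on 64-bit SQL Server with > 64 logical CPUs
    logical_cpus = 80
    max_worker_threads = 512 + (logical_cpus - 4) * 32
    print(max_worker_threads)  # 2944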

CXPACKET is the top wait type, accounting for 95% of total wait time over the last week.

From a memory and storage perspective: memory grants pending is mostly 0, average pending disk I/O is 30-35, and PLE averages around 1,000 but drops as low as 100 while these parallel queries are executing.
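To see whether the CXPACKET waits and the low PLE line up, both can be pulled from the DMVs; a minimal pyodbc sketch (the connection string is a placeholder, as above):

    import pyodbc

    # placeholder connection string, as in the earlier sketch
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=my-readonly-server;DATABASE=master;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()

    # top wait types by accumulated wait time; CXPACKET should top this list
    cursor.execute("""
        SELECT TOP (5) wait_type, wait_time_ms, waiting_tasks_count
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC
    """)
    for row in cursor.fetchall():
        print(row)

    # page life expectancy, overall and per NUMA node
    cursor.execute("""
        SELECT object_name, instance_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name LIKE 'Page life expectancy%'
    """)
    for row in cursor.fetchall():
        print(row)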

Avoid concurrently running jobs with the same input in a Jenkins multibranch job

I currently have the following problem.

I have a Jenkins job (multibranch) in which the user selects the name of the release to be deployed using an input step:

def userInput = input(
    id: 'userInput',
    message: 'Fill in the required fields to start the deployment',
    parameters: [string(name: 'nameRelease', defaultValue: 'none', description: 'Name of release')]
)

I need to avoid executing two jobs with the same value for "nameRelease". That is, if one is already running, the second one should be aborted.

Is there any functionality in Jenkins that allows this?

Regards,
Javi

Python 3: multiprocessing queue blocks when using threads and processes concurrently

I am using multiple processes to handle a CPU-intensive task. One thread reads data from stdin and puts it into input_queue, the worker processes take items from input_queue and put their results into result_queue, and another thread fetches the results from result_queue and writes them to stdout. But it blocks forever. I suspect it was not appropriate to use the multiprocessing queue this way, but I do not know how to solve it. Can someone help me? My code is as follows:

import multiprocessing
import sys
import threading
import time
from multiprocessing import Queue


def write_to_stdout(result_queue: Queue):
    """Write queued data to stdout."""
    while True:
        data = result_queue.get()
        if data is StopIteration:
            break
        sys.stdout.write(data)
        sys.stdout.flush()


def read_from_stdin(queue):
    """Read data from stdin and queue it for processing."""
    try:
        for line in sys.stdin:
            queue.put(line)
    finally:
        queue.put(StopIteration)


def process_func(input_queue, result_queue):
    """Fetch data from input_queue, process it, put the result on result_queue."""
    try:
        while True:
            data = input_queue.get()
            if data is StopIteration:
                break
            # CPU-intensive task, simulated here with time.sleep
            # result = compute_something(data)
            time.sleep(0.1)
            result_queue.put(data)
    finally:
        # put the sentinel back so the other worker processes can exit too
        input_queue.put(StopIteration)


if __name__ == '__main__':
    # queue for lines read from stdin
    input_queue = Queue(1000)

    # queue for results written to stdout
    result_queue = Queue(1000)

    # thread reading data from stdin
    input_thread = threading.Thread(target=read_from_stdin, args=(input_queue,))
    input_thread.start()

    # thread writing data to stdout
    output_thread = threading.Thread(target=write_to_stdout, args=(result_queue,))
    output_thread.start()

    processes = []
    cpu_count = multiprocessing.cpu_count()
    # start one worker process per CPU for the CPU-intensive task
    for i in range(cpu_count):
        proc = multiprocessing.Process(target=process_func, args=(input_queue, result_queue))
        proc.start()
        processes.append(proc)

    # join the input thread
    input_thread.join()

    # join all worker processes
    for proc in processes:
        proc.join()

    # signal the output thread that all results have been produced
    result_queue.put(StopIteration)

    # join the output thread
    output_thread.join()

Test environment:

Python 3.6
Ubuntu 16.04 LTS
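A minimal harness to reproduce the hang, assuming the code above is saved as pipeline.py (the file name is an assumption); it pipes 100 lines into the script and raises subprocess.TimeoutExpired if nothing comes back within 60 seconds:

    import subprocess

    # feed 100 numbered lines into the script; if the pipeline deadlocks,
    # subprocess.TimeoutExpired is raised after 60 seconds
    proc = subprocess.run(
        ["python3", "pipeline.py"],  # file name is an assumption
        input="".join("%d\n" % i for i in range(100)),
        stdout=subprocess.PIPE,
        universal_newlines=True,  # Python 3.6-compatible alias for text mode
        timeout=60,
    )
    print(proc.stdout, end="")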