Performance – How to fix slow iSCSI speeds (compared to SMB)?

I have a NAS that provides SMB and iSCSI targets, and a server that connects directly through a 10GbE connection (no switch in between).

I am getting slow speeds with the iSCSI target, as opposed to the high (expected) speeds with the SMB share.

See the speed comparison between SMB (Y:) and iSCSI (E:):

Speed comparison SMB (Y:) vs iSCSI (E:)

How can I investigate the performance issues with the iSCSI target?

NAS: Synology RackStation RS3617xs+
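Not from the original question, but as a first step in the investigation it helps to take the file-copy tool out of the picture: the minimal Python sketch below (paths, sizes, and drive letters are placeholders) writes the same data with the same block size to the SMB share and the iSCSI volume and reports MB/s for each.

import os
import time

def sequential_write_mb_per_s(path, total_mb=1024, block_kb=1024):
    block = os.urandom(block_kb * 1024)          # one reusable 1 MiB block
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range((total_mb * 1024) // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())                     # make sure the data actually hit the target
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

for path in (r"Y:\bench.tmp", r"E:\bench.tmp"):  # SMB share vs. iSCSI volume
    print(path, f"{sequential_write_mb_per_s(path):.0f} MB/s")

If the iSCSI volume is still much slower here, the things commonly checked next are jumbo-frame/MTU consistency on both 10GbE interfaces, iSCSI header/data digest settings, and whether the LUN is file-based or block-based on the Synology side.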

python – Performance of simple code that converts an RGB tuple to a hex string

I am using tkinter to implement a full-color Mandelbrot set explorer in Python. For that I need to be able to convert a Tuple[int, int, int] into a hexadecimal string of the form #123456. Here are some examples of the two variants I came up with:

>>> rgb_to_hex(123, 45, 6)
'#7b2d06'

>>> rgb_to_hex(99, 88, 77)
'#63584d'

>>> tup_rgb_to_hex((123, 45, 6))
'#7b2d06'

>>> tup_rgb_to_hex((99, 88, 77))
'#63584d'

>>> rgb_to_hex(*(123, 45, 6))
'#7b2d06'

>>> rgb_to_hex(*(99, 88, 77))
'#63584d'

The functions I have come up with are very simple but unbearably slow. This code is a rare case where performance is a real problem. It needs to be called once per pixel, and my goal is to support the creation of images up to 50,000 x 30,000 pixels (1,500,000,000 in total). 100 million executions take about 300 seconds:

>>> timeit.timeit(lambda: rgb_to_hex(255, 254, 253), number=int(1e8))
304.3993674000001

Unless my math is off, that means this function alone will take about 75 minutes in total for my extreme case.

I have written two versions. The latter was meant to reduce redundancy (and I will be working with tuples anyway), but it was even slower, so I just used argument unpacking with the first version:

# Takes a tuple instead
>>> timeit.timeit(lambda: tup_rgb_to_hex((255, 254, 253)), number=int(1e8))
342.8174099

# Unpacks arguments
>>> timeit.timeit(lambda: rgb_to_hex(*(255, 254, 253)), number=int(1e8))
308.64342439999973

The code:

from typing import Tuple

def _channel_to_hex(color_val: int) -> str:
    raw: str = hex(color_val)[2:]
    return raw.zfill(2)

def rgb_to_hex(red: int, green: int, blue: int) -> str:
    return "#" + _channel_to_hex(red) + _channel_to_hex(green) + _channel_to_hex(blue)

def tup_rgb_to_hex(rgb: Tuple[int, int, int]) -> str:
    return "#" + "".join(_channel_to_hex(c) for c in rgb)

I would prefer to be able to use the cleaner tup_ version, but there may not be a good way to automate the iteration with acceptable overhead.

Performance-related tips (or anything else if you see something) are welcome.
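Not part of the original post, but for comparison, a minimal sketch of a single-format-string variant that avoids the three hex()/zfill() calls per pixel and produces the same strings as the examples above:

from typing import Tuple

def rgb_to_hex_fmt(red: int, green: int, blue: int) -> str:
    # One formatting operation instead of three hex()/zfill() round trips.
    return f"#{red:02x}{green:02x}{blue:02x}"

def tup_rgb_to_hex_fmt(rgb: Tuple[int, int, int]) -> str:
    # printf-style formatting unpacks the tuple in a single call.
    return "#%02x%02x%02x" % rgb

At the target scale of 1.5 billion pixels it is usually worth going further and computing whole color arrays at once (for example with NumPy) rather than formatting one pixel at a time.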

dnd 5e – Is there a feat that allows a player to transform at will, and if not, would one be balanced?

I have a shardmind fighter in a 5e game who can partially turn into objects without taking on their properties. Since he is made of crystals, that means things like a staff or a spear, etc. Shardmind is not a 5e race, so we discussed that ability. Now the player wants a feat that lets him "transform" properly. I don't think it's a bad idea, but I need to know whether such a thing already exists, and if not and we homebrew it ourselves, how we can balance it.

amazon rds – MySQL RDS query times much slower than MySQL on EC2

We have a MySQL 8.0 database running on EC2 (t2.xlarge instance, 300 GB General Purpose SSD) that has coped fairly well with our total load (approximately 500 requests per minute during working hours).

We have just migrated to RDS (db.m5.2xlarge instance, 300 GB General Purpose SSD) as a replica for our SELECT queries.

However, RDS performance is much slower than EC2. It's just crazy how slow it is compared to EC2.

The graph below clearly shows that the Apdex score (obtained from New Relic) dropped abruptly at the time of the migration (Fig. 1). And not only that, but the DB response time also got significantly worse on the SELECT statements (Fig. 2).

Apdex before and after RDS

DB response time on SELECT statements (Fig. 2)

To be clear, we are 100% sure this is not a replication problem with the database on EC2, since we had used RDS as our sole master in the past and the Apdex score was at its worst then (permanently in the gray area of the graph).

Remarks:

  1. Both databases are in the same availability zone.
  2. There is no problem with using credit on RDS.
  3. MySQL 8.0.x does not support a query cache.
  4. Our queries have already been tuned.
  5. IOPS is the same for both instances.
  6. Items 3 and 4 are not really relevant to the problem because they are the same DB version, the same queries, the same region, and only different architectures (EC2 vs. RDS).

We really want to use RDS because of the easy restores, backups/snapshots, upgrades, and so on. However, this level of performance makes it hard for RDS to remain a viable option for us. Is there a way to improve or fix this?
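Not part of the original question, but one way to narrow this down is to time the same statement against both servers from the same application host, which separates engine/storage latency from anything the application or New Relic adds. A minimal sketch (hostnames, credentials, and the query are placeholders; pymysql is assumed as the client library):

import time
import statistics
import pymysql

QUERY = "SELECT 1"  # replace with one of the slow SELECT statements, unchanged

def median_latency(host, runs=20):
    conn = pymysql.connect(host=host, user="app", password="...", database="appdb")
    timings = []
    with conn.cursor() as cur:
        for _ in range(runs):
            start = time.perf_counter()
            cur.execute(QUERY)
            cur.fetchall()
            timings.append(time.perf_counter() - start)
    conn.close()
    return statistics.median(timings)

for host in ("ec2-mysql.internal", "mydb.xxxxxx.us-east-1.rds.amazonaws.com"):
    print(host, f"{median_latency(host) * 1000:.1f} ms")

If RDS is consistently slower for the identical statement, comparing the RDS parameter group (for example innodb_buffer_pool_size) against the EC2 my.cnf is usually the next step.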

dnd 3.5e – Is it possible to count as evil in order to qualify for a feat?

Changelings have the Racial Emulation feat, which lets them count as other races for qualification purposes. Goliaths and half-giants have Powerful Build, which allows them to be treated as Large under various circumstances. Many classes and races let you ignore some prerequisites for certain feats/classes. And many characters can use UMD to count as something they are not for the purpose of using a magic device.

Is there a way to be considered evil for the purpose of qualifying for feats?

Performance algorithm for competing cells with 0s and 1s. Can it be made faster?

I am working on a practice algorithm problem that reads as follows:

There are eight houses that are represented as cells. Every day, the houses compete with neighboring ones. 1 stands for an "active" house and 0 for an "inactive" house. If the neighbors on either side of a particular home are either both active or both inactive, that home will be inactive the next day. Otherwise it will be active. For example, if we had a group of neighbors (0, 1, 0), the house at (1) would become 0 because both the house on the left and the one on the right are inactive. The cells at both ends have only one adjacent cell, so assume that the unoccupied space on the other side is an inactive cell.

Even after updating a cell, you must remember its previous state while updating the others, so that each cell's state is effectively updated at the same time.

The function takes the series of states and a number of days and should output the state of the houses after the specified number of days.

Examples:

    input: states = [1, 0, 0, 0, 0, 1, 0, 0], days = 1
    output should be [0, 1, 0, 0, 1, 0, 1, 0]
    input: states = [1, 1, 1, 0, 1, 1, 1, 1], days = 2
    output should be [0, 0, 0, 0, 0, 1, 1, 0]

My solution:

def cell_compete(states, days):
    for _ in range(days):
        prev_cell, next_cell, index = 0, 0, 0
        while index < len(states):
            if index < len(states) - 1:
                next_cell = states[index + 1]
            elif index == len(states) - 1:
                next_cell = 0
            if next_cell == prev_cell:
                prev_cell = states[index]
                states[index] = 0
            else:
                prev_cell = states[index]
                states[index] = 1
            index += 1
    return states

I thought about taking advantage of the fact that they are just zeros and ones and using bitwise operators, but I could not quite get that to work.
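For reference, a minimal sketch of one such bitwise variant (my own, not from the original post), assuming there are always exactly eight cells as the problem states: the whole row fits in one byte, and tomorrow's row is simply the XOR of the row shifted one position left and one position right.

def cell_compete_bitwise(states, days):
    # Pack the 8 cells into one integer, states[0] as the most significant bit.
    value = int("".join(map(str, states)), 2)
    for _ in range(days):
        # A cell is active tomorrow iff its two neighbours differ today; the
        # shifts line each cell up with its left/right neighbour, and the mask
        # supplies the implicit inactive cells past both ends.
        value = ((value << 1) ^ (value >> 1)) & 0xFF
    # Unpack back into a list of 0s and 1s.
    return [(value >> i) & 1 for i in range(7, -1, -1)]

This reproduces both example outputs above, and the per-day work is two shifts, an XOR, and an AND instead of a Python-level loop over the cells.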

Can I improve the efficiency of this algorithm? Any ideas for the "optimal" solution?

Appreciate all feedback, thanks!

graphics – rendering up close, OpenGL performance issues

I have been working on my own renderer for the past 18 months, and it is coming together nicely!

Now I have an issue I cannot really wrap my head around.

If I render 100 models from a distance where the camera can see them all, everything works perfectly. If I then render them with the camera up close, my GPU goes to 100% and of course everything starts to lag. I do not really get where the difference comes from; the math is the same.

It works fine on an Nvidia M1000M but chokes on an AMD RX 5700.
OpenGL version 4.1.

CPU, memory, and GPU memory usage stay the same, so I do not suspect any leaks.

I've tried debugging with glGetError() and CodeXL, with no luck.

Am I missing something simple in OpenGL?

oracle – Help with query performance

A bit of background: I have an Oracle SQL query that I use to retrieve data when a particular setting is enabled. In this query I had a date range to filter the data down to a specific window. That had to change because of the security of the service account that has access to the specific tables, and my DBA suggested a view. This query now goes against that view, but it currently takes a few minutes to run. Any suggestions for optimizing this query?

SELECT
(SELECT o.SALESFORCE_ID from org_unit o WHERE o.ORG_UNIT_ID = a.vendor_id AND o.delete_flag = '0') as SenderId,
(SELECT o.SALESFORCE_ID from org_unit o WHERE o.ORG_UNIT_ID = a.CUSTOMER_ID AND o.delete_flag = '0') as ReceiverId,
a.invoice_id,
a.INVOICE_DATETIME
FROM invoice a
WHERE a.invoice_id IN (
    SELECT document_id 
    FROM oi_action oi 
    WHERE oi.action = 'Submit'
    and oi.status='Submitted'
    )
and a.sub_type not in (10008, 10007)
and EXISTS (
    SELECT 1
    FROM org_preference op 
    WHERE op.ORG_UNIT_ID = a.customer_id 
    AND op.PREFERENCE_TYPE_ID = 60276
    AND op.preference_value = 1
    AND op.modified_by_id IS NULL
    )
AND EXISTS (
    SELECT 1
    FROM org_unit o 
    WHERE o.ORG_UNIT_ID = a.vendor_id 
    AND o.delete_flag = '0'
    AND o.salesforce_id IS NOT NULL
)

GROUP BY a.vendor_id, a.customer_id, a.invoice_id, a.invoice_datetime;

Is there also a way to filter the view like this only if the preference is enabled?
Example:

AND a.CREATED_DATETIME >= (SELECT o.CREATED_DATETIME
FROM org_unit o 
WHERE o.ORG_UNIT_ID = a.vendor_id 
AND o.delete_flag = '0'
AND o.salesforce_id IS NOT NULL
)

Any suggestions for the best route I can take?

Performance – 12G SAS expander with 6G hard drives limiting speed?

I am building a server with 24 SATA III hard drives attached to a Chenbro 12G SAS expander, connected to a 12G SAS HBA (LSI 9300) with a single SFF-8643 cable. The overall write speed is slower than I hoped, and I wonder if this is due to the lack of Store & Forward (S&F) support. I can write to each disk individually at 170 MB/s, but when writing to all 24 at the same time I only get 75 MB/s each, or 1800 MB/s in total.

According to this answer, SAS does not support S&F. Since the hard drives link at 6 Gbps, I get a total of 24 Gbps between the HBA and the expander (four lanes running at the drives' 6 Gbps rate), or about 2400 MB/s, and there may be additional SATA translation overhead that limits it further to 1800 MB/s. With S&F support I should get about 4800 MB/s on the expander uplink.
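A quick back-of-the-envelope check of that math (my own sketch, not from the post; the four-lane uplink and a ~20% 8b/10b encoding overhead are assumptions, not measurements from this setup):

LANES = 4                      # SFF-8643 carries four SAS lanes
ENCODING_EFFICIENCY = 0.8      # 8b/10b line encoding

def uplink_mb_per_s(gbps_per_lane, lanes=LANES):
    # Convert per-lane line rate into usable throughput across the whole uplink.
    return gbps_per_lane * lanes * 1000 / 8 * ENCODING_EFFICIENCY

print("Uplink without S&F (lanes at drive rate):", uplink_mb_per_s(6), "MB/s")    # ~2400
print("Uplink with S&F (lanes at full 12G):     ", uplink_mb_per_s(12), "MB/s")   # ~4800
print("Observed total (24 drives x 75 MB/s):    ", 24 * 75, "MB/s")               # 1800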

But here is a draft proposal for S&F support in SAS-2, published in 2006. I cannot find any more up-to-date information about it. Has S&F ever been implemented? If so, what is the benefit of buying 12G SAS hard drives instead of 6G SAS, considering that no spinning hard disk gets anywhere near 6 Gbps?

http://www.t10.org/ftp/t10/document.06/06-386r0.pdf

Quote from the proposal:

Such a solution would involve data being transferred between the expander and target devices at 3 Gb per second, and data being transferred between the expander and initiator devices at 6 Gb per second, without sacrificing connection utilization at either end.