c# – How to optimize floor meshes with different materials in Unity

I have several meshes (floor tiles), each with a unique material because they use different textures. I see that this drives up the batch count, and the scene has around 100M vertices.

Is there any way to improve this through programming? The long way would be to export everything to a 3D modelling tool and combine it there.

I also looked at mesh combining, but it requires the meshes to share the same material.
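
The closest I've gotten is grouping tiles by shared material and combining each group through code; a minimal sketch (the FloorCombiner name and the Start-time combining are just for illustration):

using System.Linq;
using UnityEngine;
using UnityEngine.Rendering;

// Combine every child tile that shares a material into one mesh,
// so the batch count drops to roughly one draw call per material.
public class FloorCombiner : MonoBehaviour
{
    void Start()
    {
        var filters = GetComponentsInChildren<MeshFilter>();
        var groups = filters.GroupBy(f => f.GetComponent<MeshRenderer>().sharedMaterial);

        foreach (var group in groups)
        {
            var combine = group.Select(f => new CombineInstance
            {
                mesh = f.sharedMesh,
                transform = f.transform.localToWorldMatrix
            }).ToArray();

            // 32-bit index buffer, since many tiles can exceed 65k vertices.
            var combined = new Mesh { indexFormat = IndexFormat.UInt32 };
            combined.CombineMeshes(combine, mergeSubMeshes: true);

            var go = new GameObject("CombinedFloor_" + group.Key.name);
            go.AddComponent<MeshFilter>().sharedMesh = combined;
            go.AddComponent<MeshRenderer>().sharedMaterial = group.Key;

            // Disable the originals so the tiles aren't drawn twice.
            foreach (var f in group) f.gameObject.SetActive(false);
        }
    }
}

Even then, different textures still mean one draw call per material; getting below that would presumably require atlasing the textures so the tiles can share a single material.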

database – What are the means by which one can optimize key-value pairing in SQLite?

Being wholly unfamiliar with the inner workings and development history of SQL, I have to imagine that there are various strategies for optimizing SQLite for key-value pairing (this is what I am primarily concerned with).

While researching the topic, I came across this design document for SQLite4, which is no longer being developed. SQLite4 seemed to have a particular interest in optimizing itself for key-value pairs.

https://www.sqlite.org/src4/doc/trunk/www/design.wiki

Some takeaways from it:

An instance of an sqlite4_env object defines how SQLite4 interacts with the rest of the system. An sqlite4_env object includes methods to:
access and control the underlying key/value storage engines,

The default built-in storage engine is a log-structured merge database. It is very fast, faster than LevelDB, supports nested transactions, and stores all content in a single disk file. Future versions of SQLite4 might also include a built-in B-Tree storage engine.

The PRIMARY KEY Is The Real Primary Key

SQLite3 allows one to declare any column or columns of a table to be the primary key. But internally, SQLite3 simply treats that PRIMARY KEY as a UNIQUE constraint. The actual key used for storage in SQLite is the rowid associated with each row.

SQLite4, on the other hand, actually uses the declared PRIMARY KEY of a table (or, more precisely, an encoding of the PRIMARY KEY value) as the key into the storage engine.

It has been stated that various lessons learned from SQLite4 were merged back into SQLite3, and I would like to know which of the above still applies.
Namely:

  1. Can one access and control the underlying key/value storage engines in SQLite?
  2. Is there a B-Tree storage engine? Does it provide better write speeds?
  3. Is the PRIMARY KEY now the “real primary key”? (see the sketch after this list)
  4. If any of the above ring true, what means do they provide for optimizing SQLite for key-value pairs?
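
For concreteness, here is the kind of key/value usage I have in mind; a minimal sketch using Python's sqlite3 and a WITHOUT ROWID table (available since SQLite 3.8.2), which as I understand it is the SQLite3 descendant of the "real primary key" idea:

import sqlite3

conn = sqlite3.connect("kv.db")
# WITHOUT ROWID stores rows keyed directly by the declared PRIMARY KEY,
# rather than by a hidden rowid plus a separate UNIQUE index.
conn.execute("""
    CREATE TABLE IF NOT EXISTS kv (
        key   TEXT PRIMARY KEY,
        value BLOB
    ) WITHOUT ROWID
""")
conn.execute("INSERT OR REPLACE INTO kv (key, value) VALUES (?, ?)",
             ("user:42", b"payload"))
print(conn.execute("SELECT value FROM kv WHERE key = ?", ("user:42",)).fetchone())
conn.commit()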

In addition, discussing the topic writ large, it has been suggested to me in private that:

  1. Smaller files provide better write speeds.
  2. Storing paths to files is vastly faster than storing the file data itself.
  3. Key types will impact lookups.
  4. Reads can happen on multiple threads, but all writes should be delegated to one thread (see the sketch after this list).
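
Regarding point 4, a minimal sketch of the single-writer/many-reader pattern I've been told about, assuming WAL mode (which lets readers proceed while one connection writes):

import sqlite3
import threading

def open_conn():
    conn = sqlite3.connect("kv.db")
    # WAL mode allows concurrent readers alongside a single writer.
    conn.execute("PRAGMA journal_mode=WAL")
    return conn

write_lock = threading.Lock()  # funnel all writes through one critical section

def put(key, value):
    with write_lock:
        conn = open_conn()
        conn.execute("INSERT OR REPLACE INTO kv (key, value) VALUES (?, ?)",
                     (key, value))
        conn.commit()
        conn.close()

def get(key):
    conn = open_conn()  # each reader thread opens its own connection
    row = conn.execute("SELECT value FROM kv WHERE key = ?", (key,)).fetchone()
    conn.close()
    return row[0] if row else None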

I would like to know if any of this is true.

Finally, and most generally, are there any means not yet covered that one can use to optimize key-value pairing? And, more cynically: does this topic even warrant discussion, or is any attempt to optimize going to be superfluous?

Thanks.

What is the best algorithm for Hindley-Milner type inference when one wants to optimize for simplicity and error messages?

I want to implement Hindley-Milner type inference, but as a non-academic who doesn’t know type theory at all, I’m getting a bit overwhelmed by all the different algorithms and their properties, the papers that depend on other papers, and all the new concepts I have to learn.

I’m looking for an algorithm, or a few algorithms, that stand out in terms of the error messages they can generate (something that Algorithm W and Algorithm M are supposedly not very good at).
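
To illustrate what I mean about error messages, here is a toy case (Haskell-like syntax, plain HM without type classes; the function is just for illustration) where the reported location depends on the order in which the algorithm unifies:

-- `x + 1` is inferred first and fixes x to a number, so a left-to-right
-- Algorithm W reports the mismatch at `not x` -- even if the programmer's
-- actual intent was for x to be a Bool all along. A different traversal
-- order (as in Algorithm M) moves the blame elsewhere.
example x = (x + 1, not x)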

Can anyone point me to helpful resources on this, or explain what I should look for in an algorithm to tell whether it will generate good error messages, or both?

Note: it would be nice if it could support higher-kinded types, but that’s not an immediate requirement.

Optimize Apache on CWP7

Hi, I’m new to CWP7 and I need help optimizing Apache; it confuses me that there are two configuration files and I don’t know which of the … | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1841556&goto=newpost

python – Can I optimize two for loops that look for the closest zip code based on lat/lon?

I am new to Python, and my task was to find the US zip code for a given latitude and longitude. After messing with ArcGIS I realized it was giving me empty values for certain locations, so I ended up coding something that does the job: it takes a dataset of all US zip codes and uses Euclidean distance to find the closest one to each lat/lon. However, this takes approximately 1.3 seconds per lookup on average, which for my nearly one million records will take a while, since I need a zip code for each entry. I’ve read that vectorization is a way to speed up tasks in Python, but I cannot find a way to apply it to my code. Here is my code; any feedback would be appreciated:

for j in range(len(myFile)):
    point1 = np.array((myFile["Latitude"][j], myFile["Longitude"][j]))  # reference point
    resultZip = str(usZips["Zip"][0])
    dist = np.linalg.norm(point1 - np.array((float(usZips["Latitude"][0]),
                                             float(usZips["Longitude"][0]))))
    for i in range(len(usZips)):
        lat = float(usZips["Latitude"][i])
        lon = float(usZips["Longitude"][i])
        point2 = np.array((lat, lon))  # candidate zip from the dataset
        temp = np.linalg.norm(point1 - point2)
        if temp <= dist:  # if this Euclidean distance beats the best so far:
            dist = temp  # keep the new distance and...
            resultZip = str(usZips["Zip"][i])  # ...the zip at the same index

I am aware Google also has a reverse-geocoding API, but it has a daily request limit.
The file called myFile is a CSV file with the attributes userId, latitude, longitude and timestamp, with about a million entries. The file usZips is a public dataset with the city, lat, lon, zip and timezone of about 43k zip codes across the US.
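
For reference, here is the vectorized direction I’ve been trying to take; a sketch assuming myFile and usZips are pandas DataFrames as described (scipy’s cKDTree answers all nearest-neighbour queries in one call, replacing both loops, though it still treats lat/lon as planar coordinates just like my Euclidean version):

import numpy as np
from scipy.spatial import cKDTree

# Build the tree once over all ~43k zip coordinates.
zip_coords = usZips[["Latitude", "Longitude"]].to_numpy(dtype=float)
tree = cKDTree(zip_coords)

# Query all ~1M points at once; returns the index of the closest zip per row.
query_points = myFile[["Latitude", "Longitude"]].to_numpy(dtype=float)
_, nearest_idx = tree.query(query_points)

myFile["Zip"] = usZips["Zip"].to_numpy()[nearest_idx]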

influxdb – Optimize InfluxQL query for multi-core use?

I have an InfluxQL query (InfluxDB 1.8) that takes a long time to finish, even on a c5.4xlarge (Intel Xeon Platinum 8000, turbo clock up to 3.6 GHz) with an EBS io2 volume rated at 16k IOPS.

But when I look at htop I see only one core loaded at 100% while the rest sit idle. AWS monitoring shows volume IOPS of ~8k while the query runs, far from the 16k maximum, and RAM usage is at 20%.

Is there a way to optimize the query to spread the load on all cores?

I have other queries and they load all cores fine.

Here’s the problematic query:

select count(*) from 
(SELECT "pitch" AS "AAAA" FROM "AAAA"."autogen"."imu_messages"),
(SELECT "pitch" AS "BBBB" FROM "BBBB"."autogen"."imu_messages"),
(SELECT "pitch" AS "CCCC" FROM "CCCC"."autogen"."imu_messages"),
(SELECT "pitch" AS "DDDD" FROM "DDDD"."autogen"."imu_messages"),
(SELECT "pitch" AS "EEEE" FROM "EEEE"."autogen"."imu_messages"),
(SELECT "pitch" AS "FFFF" FROM "FFFF"."autogen"."imu_messages"),
(SELECT "pitch" AS "GGGG" FROM "GGGG"."autogen"."imu_messages"),
(SELECT "pitch" AS "HHHH" FROM "HHHH"."autogen"."imu_messages"),
WHERE time > now() - 60d GROUP BY time(1m) FILL(-1)
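
One workaround I’ve been considering is splitting the query per database and fanning it out client-side, so each sub-query can occupy its own core; a sketch, assuming the influxdb 1.x Python client and default host/port:

from concurrent.futures import ThreadPoolExecutor
from influxdb import InfluxDBClient

databases = ["AAAA", "BBBB", "CCCC", "DDDD", "EEEE", "FFFF", "GGGG", "HHHH"]

def count_pitch(db):
    # One connection and one count per database; the server can run the
    # eight queries concurrently even if a single query stays on one core.
    client = InfluxDBClient(host="localhost", port=8086, database=db)
    q = ('SELECT count("pitch") FROM "autogen"."imu_messages" '
         'WHERE time > now() - 60d GROUP BY time(1m) FILL(-1)')
    return list(client.query(q).get_points())

with ThreadPoolExecutor(max_workers=len(databases)) as pool:
    results = dict(zip(databases, pool.map(count_pitch, databases)))

I don’t know if that’s idiomatic, so pointers on making the single query itself run in parallel would still be welcome.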

If you need any additional info, let me know and I’ll update the question.

Thanks.

magento2 – Optimize Product Collection Filter

I have a model in my code that creates a custom product collection based on a user search term, filtering products by SKU or name. The issue is that it takes too much time to retrieve the data, roughly 4 to 9 seconds; my catalog has 8,000 SKUs. I would like to optimize this filter. I have flat catalog enabled, as well as categories, and Elasticsearch is enabled too (my installation is Magento 2.3.5-p1). Here is my code:

public function getSearchResult($queryText)
{
    try {
        /** @var \Magento\Catalog\Model\ResourceModel\Product\Collection $productCollection */
        $productCollection = $this->layerResolver->get()->getProductCollection();

        $queryLike = $this->_getQueryPattern($queryText);

        $productCollection
            ->setVisibility([
                Visibility::VISIBILITY_BOTH,
                Visibility::VISIBILITY_IN_SEARCH,
                Visibility::VISIBILITY_IN_CATALOG
            ])
            ->addFieldToFilter('status', Status::STATUS_ENABLED)
            ->addAttributeToFilter('type_id', ['neq' => ProductType::TYPE_BUNDLE])
            ->addAttributeToFilter(
                [
                    ['attribute' => 'sku', 'like' => $queryLike],
                    ['attribute' => 'name', 'like' => $queryLike]
                ]
            );
        $productCollection->getSelect()->limit($this->searchModel->getMaxResShow());

        $this->stockFilter->addInStockFilterToCollection($productCollection);

        $productCollection = $this->searchModel->getResData($productCollection);

        $query_test = $productCollection->getSelect()->__toString();

        $this->_logger->debug(print_r($query_test,true));

        if (!empty($productCollection)) {
            $data = $productCollection->toArray([
                'name',
                'sku',
                'entity_id',
                'type_id',
                'product_hide_price',
                'product_hide_html',
                'product_thumbnail',
                'product_url',
                'popup',
                'product_price',
                'product_price_amount',
                'product_price_exc_tax_html',
                'product_price_exc_tax',
                'inner',
                'master',
            ]);
            return $data;
        }
        return false;
    } catch (\Magento\Framework\Exception\NoSuchEntityException $exception) {
        return false;
    }
}

/**
 * @param string $queryText
 * @return string
 */
private function _getQueryPattern($queryText)
{
    $queryText = preg_replace('/\s+/', '%', $queryText);
    $queryLike = '%' . $queryText . '%';
    return $queryLike;
}

The raw SQL generated by this filter (captured in $query_test) is here: https://pastebin.pl/view/raw/ae39c71f. Please check it.

My question is: is there a way to optimize this, or a way to use Elasticsearch to improve it? Retrieving the filtered data takes 4 to 9 seconds. What I think could be optimized is the e.entity_id IN part, but given the Magento 2 practices I believe I’m following, what should I do to improve it?
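
For example, would moving to the repository API be a step in the right direction? A sketch of what I mean, assuming injected FilterBuilder, SearchCriteriaBuilder and ProductRepositoryInterface instances (filters added in one group are OR-ed, which reproduces my sku-or-name condition):

/**
 * @var \Magento\Framework\Api\FilterBuilder $this->filterBuilder
 * @var \Magento\Framework\Api\SearchCriteriaBuilder $this->searchCriteriaBuilder
 * @var \Magento\Catalog\Api\ProductRepositoryInterface $this->productRepository
 */
$filters = [
    $this->filterBuilder->setField('sku')->setConditionType('like')
        ->setValue($queryLike)->create(),
    $this->filterBuilder->setField('name')->setConditionType('like')
        ->setValue($queryLike)->create(),
];

$searchCriteria = $this->searchCriteriaBuilder
    ->addFilters($filters)                              // one group => OR
    ->setPageSize($this->searchModel->getMaxResShow())  // same limit as before
    ->create();

$products = $this->productRepository->getList($searchCriteria)->getItems();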

Thanks in advance for any kind of help and thanks for reading.