performance – MySQL InnoDB Insertion Faster Than MyISAM

I wrote code that inserts about 15 million rows into MyISAM and InnoDB tables for comparison. MyISAM and InnoDB are set up with no optimization configuration in /etc/mysql/my.cnf, and the tables are created with no indexes. Insertions are done using statements like "INSERT INTO Table (columnA, columnB, columnC) VALUES (?, ?, ?), …" (10,000 (?, ?, ?) tuples) executed via db.Exec(stmt, valueArgs...), with valueArgs being the actual values. It is widely believed that MyISAM is faster than InnoDB for insertion; Baron Schwartz, author of the book High Performance MySQL, explained why in the post "Mysql: Insert performance INNODB vs MYISAM". However, in my experiment I found that InnoDB is actually faster than MyISAM. Could anyone give me some insight?

mysql  Ver 14.14 Distrib 5.7.30, for Linux (x86_64) using  EditLine wrapper
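
For reference, here is a minimal sketch of the batched-insert pattern the question describes. The question's own snippet looks like Go's database/sql, but the sketch below uses TypeScript with the mysql2 package; the table name, column names, and pool settings are assumptions for illustration only.

import mysql from "mysql2/promise";

// Build one INSERT statement with a "(?, ?, ?)" tuple per row, then pass the
// flattened row values as the statement arguments (the question's valueArgs).
async function insertBatch(rows: [string, string, string][]): Promise<void> {
  const pool = mysql.createPool({ host: "localhost", user: "root", database: "test" });
  const placeholders = rows.map(() => "(?, ?, ?)").join(", ");
  const stmt = `INSERT INTO Table1 (columnA, columnB, columnC) VALUES ${placeholders}`;
  await pool.query(stmt, rows.flat()); // rows.flat() plays the role of valueArgs
  await pool.end();
}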


javascript – Best method to further execute the DRY principle and/or raise performance in my script? (client login/signup system)

Just for practice purposes, I've written a basic client-side login/signup system that lets you make an account and then log into it from my webpage. For every new account made, it pushes an entry onto an array called database, and when you sign in, it loops through database to see whether your credentials match any account's information.

There are a couple of noticeable problems with my script. One is that I don't believe I followed the DRY principle to the fullest; I feel like I repeated myself a few times, and I wonder whether that can be avoided. The second problem is that looping through the "database" array may not be the most efficient solution, especially since lookups take longer as it grows. There may be a more efficient solution that I am unaware of.

Other than that, I would just like general feedback, tips, and red flags (if any) in my code. I am trying to improve as a JavaScript developer, and that requires feedback on my script from my peers.

TL;DR: I am a new programmer, so sorry if this is a bad question or if I have terrible code. I am still learning.

const database = []

function usp(parent) {
  return [parent.username.value, parent.password.value]
}

window.onload = () => {
  const create = document.forms.createAcount;
  const log = document.forms.logIN;

  create.addEventListener('click', (e) => {
    if(e.target == create.submit){
      const information = usp(create);
      database.push({user : information[0], pass : information[1]});
      console.log(database)
    }
  });

  log.addEventListener('click', (e) => {
    if (e.target == log.submit) {
      const information = usp(log);
      for (let x of database) {
        if (x.user == information[0] && x.pass == information[1]) {
          alert(`You successfully logged into your account! Welcome to my website ${information[0]}`)
        }
      }
    }
  })
}
#createAcount {
  background-color: skyblue;
}
#logIN {
  background-color: lightgreen;
}
<!DOCTYPE html>
  <html>
    <head>
      <title>SignUp!</title>
      <link rel = "stylesheet" href = 'style.css'>
    </head>
    <body>
      <h1>Sign Up</h1>
      <form name = "createAcount" id = "createAcount">
        <h1>Create a Username</h1>
        <input type = 'text' placeholder="Falkyraizu"  name = 'username' maxlength=20>
        <h1>Create a password</h1>
        <input type = 'password' placeholder="password" name = 'password' maxlength=20>
        <h2>Done</h2>
        <input class = 'reset' type = 'button' value = "Reset" name = 'reset'>
        <input class = 'submit' type = 'button' value = "Submit" name = 'submit'>
      </form>
      <h1>Log In</h1>
      <form name = "logIN" id = "logIN">
        <h1>What is your userName</h1>
        <input type = 'text' placeholder="Falkyraizu" name = "username" maxlength=20>
        <h1>What is your password</h1>
        <input type = 'password' placeholder="password" name = "password" maxlength=20>
        <h2>Done</h2>
        <input class = 'reset' type = 'button' value = "Reset" name = 'reset'>
        <input class = 'submit' type = 'button' value = "Submit" name = 'submit'>
      </form>
      <script src = "script.js"></script>
    </body>
  </html>

Side note: if you notice something else that can be fixed, such as my HTML, you can also include it in your response; however, I would still like my main problems addressed (read above if you missed them). Thanks for the help!
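
On the efficiency concern above: one common fix is to key accounts by username in a Map, so signing in becomes a single constant-time lookup instead of a loop over every stored account. A minimal TypeScript sketch (the names are illustrative, and a real system would never keep plain-text passwords in the browser):

const accounts = new Map<string, string>(); // username -> password

function signUp(username: string, password: string): boolean {
  if (accounts.has(username)) return false; // username already taken
  accounts.set(username, password);
  return true;
}

function logIn(username: string, password: string): boolean {
  // O(1) lookup instead of scanning the whole "database"
  return accounts.get(username) === password;
}

This also helps with the DRY concern: both forms can share one handler that reads the username and password fields and calls either signUp or logIn.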

sql server – Improve Select query for performance

We have the following properties.

Db : Microsoft SQL Server 2012 (SP3) (KB3072779) - 11.0.6020.0 (X64) 
    Oct 20 2015 15:36:27 
    Copyright (c) Microsoft Corporation
    Web Edition (64-bit) on Windows NT 6.3 <X64> (Build 14393: )

Machine Capacity : 32 GB RAM 
                 : Processor Intel(R) Xeon(R) CPU E5530 @ 2.40 GHz 2.40 GHz

We have a table that contains 2 million records (32 columns in total, 1 composite index), used both for OLTP file writes (INSERT) and reporting (SELECT).

Index Definition :

CREATE NONCLUSTERED INDEX [NonClusteredIndex-DSP_DTL] ON [dbo].[Dispatch_detail]
    (
        [DSPDTL_FIN_YEAR] ASC,
        [DSPDTL_DIV_ID] ASC,
        [DSPDTL_PROD_ID] ASC,
        [DSPDTL_DT] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

Right now, if I fire a 'select * from' on the table, it takes 2 minutes and 45 seconds to fetch the data. The host where the DB is running has a good configuration; even a 'select * from' shouldn't take this much time.

How can I improve the performance of the query? Please suggest your ideas.
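
Not a full answer, but a sketch of the usual first step: avoid SELECT * and project only the columns the report needs. If those columns are the four in the index definition, the nonclustered index can cover the query and the 32-column base table is never touched. The sketch uses TypeScript with the mssql package; the connection settings and the choice of filter column are assumptions.

import sql from "mssql";

async function dispatchReport(finYear: number) {
  // Placeholder connection settings.
  const pool = await sql.connect({
    server: "localhost",
    database: "mydb",
    user: "app",
    password: "secret",
    options: { trustServerCertificate: true },
  });
  // Only indexed columns are selected, so [NonClusteredIndex-DSP_DTL]
  // can cover the query.
  const result = await pool
    .request()
    .input("finYear", sql.Int, finYear)
    .query(`SELECT DSPDTL_FIN_YEAR, DSPDTL_DIV_ID, DSPDTL_PROD_ID, DSPDTL_DT
            FROM dbo.Dispatch_detail
            WHERE DSPDTL_FIN_YEAR = @finYear`);
  return result.recordset;
}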

f stop – How can I compute low light performance for different formats?

How can I compare low light performance for different formats?

Easy. By calculating the 35mm equivalent f-stop.

The divisor of the 35mm-equivalent f-stop (the N in f/N) is the divisor of the actual f-stop multiplied by the crop factor.

An example:

  • You have a 17-55mm f/2.8 lens for a system with a 1.6x crop factor, and a 24-105mm f/4 lens for full frame.
  • The 24-105mm f/4 lens for full frame is simply f/4 equivalent.
  • The 17-55mm f/2.8 lens is f/4.48 equivalent, because 2.8*1.6 = 4.48.

Thus, this example shows that a full frame 24-105mm f/4 lens has better low-light performance than a 17-55mm f/2.8 crop lens.

Someone could complain that the same true f-stop, the same shutter speed, and the same ISO give the same exposure, and that therefore, since shutter speed is limited by motion blur and ISO is limited by noise, the true f-stop (and not the 35mm-equivalent f-stop) is what determines low-light performance.

Such a complainer would be ignoring the fact that on full frame you can use a higher ISO than on crop. For example, if you have a 1.6x crop camera where ISO 800 is the highest acceptable ISO and anything higher is too noisy, a full-frame camera of the same technology generation would let you use 1.6^2 * 800 = 2048 as the ISO.
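
The two equivalence rules above can be written down as a quick TypeScript check; the numbers are the ones from the examples:

// Multiply the f-stop divisor by the crop factor; multiply the ISO by the
// crop factor squared.
function equivalentFStop(fNumber: number, cropFactor: number): number {
  return fNumber * cropFactor;
}

function equivalentISO(iso: number, cropFactor: number): number {
  return iso * cropFactor ** 2;
}

console.log(equivalentFStop(2.8, 1.6)); // 4.48, so the crop f/2.8 behaves like f/4.48
console.log(equivalentISO(800, 1.6));   // 2048, the full-frame ISO matching crop ISO 800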

Caching or in-memory table in Azure for performance

I am building an Angular web application that retrieves part of its data from an Azure SQL Database table via APIs developed in Azure Functions (with Azure API Management as the API gateway). The data in the table (30k records) does not change for at least 24 hours. The web app needs to display this data in a grid (table structure) with pagination, and users can apply filter conditions to retrieve and show a subset of the data in the grid (again with pagination). They can also sort the data on a column in the grid. The web app will be accessed by a few hundred users on their iPads/tablets over 3G internet speeds. Keeping latency in mind, I am considering one of these two options for optimum performance of the web app:

1) Cache all the records from the DB table in Azure Redis Cache, with a cache refresh every 24 hours, so that the application fetches the data to populate the grid from the cache, avoiding expensive SQL DB disk I/O. However, I am not sure how filtering based on a field value or a range of values would work against the cached data. I have read about using the Hash data type for storing multivalued objects and Sorted Sets for storing sorted data, but I am particularly unsure about filtering on a range of numeric values, similar to a BETWEEN clause in SQL (see the sketch after this question). Also, is it advisable at all to use Redis this way for my use case?

2) Use In-Memory OLTP (a memory-optimized table for this particular DB table) in Azure SQL DB for faster data retrieval. This would allow handling the filtering and sorting requests from the web app with plain SQL queries. However, I am not sure whether it is appropriate to use memory-optimized tables just to improve read performance (from what I have read, Microsoft suggests them for insert-heavy transactional workloads).

Any comments or suggestions on the above two options, or any other way to achieve this performance optimization?
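
On the range-filtering question in option 1: a common Redis pattern is to keep the full records in a hash and maintain a sorted set whose scores are the numeric field, so that ZRANGEBYSCORE acts as the BETWEEN equivalent. A minimal TypeScript sketch using the ioredis client; the key names and the price field are assumptions.

import Redis from "ioredis";

const redis = new Redis();

// Store each record twice: the full JSON in a hash keyed by id, and the id in
// a sorted set scored by the numeric field we want to filter on.
async function cacheRecord(id: string, price: number, json: string): Promise<void> {
  await redis.hset("records", id, json);
  await redis.zadd("records:by-price", price, id);
}

// BETWEEN equivalent: find ids whose score lies in [min, max], then fetch the
// matching records from the hash.
async function findByPriceRange(min: number, max: number): Promise<(string | null)[]> {
  const ids = await redis.zrangebyscore("records:by-price", min, max);
  return ids.length ? redis.hmget("records", ...ids) : [];
}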

applications – MIUI 12 performance boost?

My question is in reference to the Redmi Note 7, which I have seen slowing down over time. So my question: will the new MIUI 12 update, coming to all Xiaomi phones, bring increased performance in, say, gaming, and add more fluidity to using the device?

performance – Rate limit a method by generating particular load on demand in C#

I am working on a project where I want to generate a random throughput against a particular method so that I can do performance testing on it. This way I can test the method under a range of generated throughputs and see how it behaves in each scenario.

For example: I need to call my doIOStuff method at an approximate rate of x requests per second from multiple threads, where x will mostly be less than 2000, though the exact value doesn't really matter here. It doesn't have to be accurate, so there is some room for error, but the overall idea is that I need to make sure doIOStuff is executed no more than x times in a sliding window of y seconds.

Assume we start n threads and want a maximum of m calls per second. We can achieve this by having each thread generate a random number between 0 and 1, k times per second, and call the doIOStuff method only if the generated number is less than m / n / k. With the constants in the code below (100 threads, 2000 tosses per second, and a target of 2000 calls per second), the threshold is 2000 / 100 / 2000 = 0.01, so the expected rate is 100 * 2000 * 0.01 = 2000 calls per second.

Below is the code I have. It uses global variables, and it does the job, but I think it can be improved a lot; for instance, I could use cancellation tokens and make it more efficient and cleaner.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Threading;


namespace ConsoleApp
{
    class Program
    {
        const int m_threads = 100;
        const int n_throughput = 2000;
        const int k_toss_per_second = 2000; // Note that k_toss_per_second x  m_threads >= n_throughput

        static void Main(string[] args)
        {
            var tasks = new List<Task>();

            for (int i = 0; i < m_threads; i++)
                tasks.Add(Task.Factory.StartNew(() => callDoIOStuff())); // each task runs the throttled loop

            Task.WaitAll(tasks.ToArray());
            Console.WriteLine("All threads complete");
        }


        static void callDoIOStuff()
        {
           int sleep_time = (int) (1000 * 1.0d / k_toss_per_second);
           double threshold = (double) n_throughput / m_threads / k_toss_per_second; 
           Random random = new Random();
           while (true) {
                Thread.Sleep(sleep_time);
                if (random.NextDouble() < threshold)
                    doIOStuff();
            }
        }

        static void doIOStuff()
        {
            // do some IO work
        }
    }
}

I wanted to see what we can do here to make it more efficient and cleaner so that it can be used in production testing for generating a random throughput load.

postgresql – Performance of indexing varchar or text as an alternative to UUIDs?

I was reading up on nanoid and was possibly considering using the generated ids as the primary key for a table. The generated IDs are strings such as V1StGXR8_Z5jdHi6B-myT.

In my research I came across the following comment:

One of the benefits at least in postgres is that uuid v4 can be treated like a native type when stored which makes it fast to search.

Is it necessarily true that a primary key based on a UUID column would be more performant than a primary key based on a text or varchar column? If so, is there some other data type I can use to store nanoids which would match the performance of the native UUID type?
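
For comparison, here is a small sketch of the two layouts in question, using the pg client and the nanoid package; the table names are made up, and gen_random_uuid() assumes PostgreSQL 13+ (or the pgcrypto extension on older versions).

import { Client } from "pg";
import { nanoid } from "nanoid";

async function createTables(client: Client): Promise<void> {
  // Native uuid: fixed 16-byte values compared as binary.
  await client.query(
    "CREATE TABLE items_uuid (id uuid PRIMARY KEY DEFAULT gen_random_uuid())"
  );
  // nanoid stored as text: 21-character variable-length strings compared
  // under string collation rules.
  await client.query("CREATE TABLE items_nano (id text PRIMARY KEY)");
  // nanoids are generated in application code and inserted explicitly.
  await client.query("INSERT INTO items_nano (id) VALUES ($1)", [nanoid()]);
}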