bitcoind – Raspiblitz slow sync performance

I’m syncing on my Raspberry Pi 4 Model B (4 GB), and after 4 days I’m less than 50% synced. It seems as though something is wrong with bitcoind, as there are some weird symptoms. The Pi is running RaspiBlitz 1.7RC2 (64-bit). I have 400 Mbps internet, and it’s connected to a new 1 TB SanDisk SSD.

First I looked at the debug.log file (sudo tail -f /mnt/hdd/bitcoin/debug.log). When bitcoind is first started up, I see blocks being added at a rate of several per second. After letting it run for a while, it slows down to one block every few seconds. I also get ping timeouts, and my peer connections are constantly cutting off and don’t exceed ~10.

Communicating with bitcoind also becomes slow: bitcoin-cli getnetworkinfo | grep connections is instant at the start, but after a while the same command can take 30 seconds to execute.

Finally, according to nmon, overall system usage is very low, except that bitcoind is reading from the SSD at ~250 Mbps while doing very little writing. The CPU is mostly blocked waiting on I/O. The only thing I have tried is lowering dbcache by 500 MB, thinking there wasn’t enough free memory, but it did not help.

bitcoin.conf:

# bitcoind configuration
# mainnet/testnet
testnet=0
# Bitcoind options
server=1
daemon=1
txindex=0
disablewallet=1
peerbloomfilters=1
# Connection settings
rpcuser=raspibolt
rpcpassword=c4w0Mh2q
rpcport=8332
rpcallowip=127.0.0.1
rpcbind=127.0.0.1:8332
zmqpubrawblock=tcp://127.0.0.1:28332
zmqpubrawtx=tcp://127.0.0.1:28333
# Raspberry Pi optimizations
dbcache=2560
maxorphantx=10
maxmempool=300
maxconnections=40
maxuploadtarget=5000
datadir=/mnt/hdd/bitcoin

nmon (screenshot)

Edit
After several restarts the sync seems to be going faster than before, even after waiting. The disk reads are much lower, with some writes as well. Not sure why lower read speeds result in faster syncing…

Edit 2

I noticed that after “leaving block file” it operates at about half speed for a while. Look at the timestamps in the attached screenshot; things slow down quite a bit after that.

Very slow loading of MySQL DB-backup

I’m working on the transition of a web site that uses a MySQL database as its backend.

The transition involves copying the production database into another database on the same server. We do this by taking a backup into a file (with mysqldump) and then loading it into the new DB. Nothing out of the ordinary here.
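For context, the procedure is essentially the standard dump-and-replay; a rough sketch (database and file names below are placeholders, not the real ones):

-- backup is taken from the shell with something like:
--   mysqldump production_db > /backups/production_dump.sql
-- the restore is then a plain replay of that file into the new database:
CREATE DATABASE staging_copy;
USE staging_copy;
SOURCE /backups/production_dump.sql;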

The problem is that, although it takes under a minute to create the dump, loading it into a newly created DB takes about 30 hours.

The server is reasonably beefy and uses SSD drives. While the load is running, I can see the mysqld process staying quite busy, probably rebuilding the table indexes.

Is there a way to improve this operation, for example by changing how we take the backups?

Slow Squarespace site….

Hello guys

Is anyone using a Squarespace site at the moment? Any tips on speeding it up? My website is getting really slow… Someone said you could change the domain hosting (I don’t know what that means anyway – someone else said web hosting is different from domain hosting…), but honestly I like my Squarespace template design, and WordPress is too complicated for me.

Cheers all!

 

postgresql – Very slow query for massive stats calculation

In our app, we need to calculate a price comparison between a listing and its applicable comparables. This needs to happen for all active listings in the DB on a daily basis. The number of listings we are talking about is anywhere between 100k and 200k (per day).

The idea is that we calculate two comparables (in city and in area) and add these records to a log table for further use.

Initially, we created a query that generates this log record for a single listing and managed everything else in code. That worked great, but it added a lot of complexity and was slow overall. We ended up playing catch-up.
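A simplified sketch of that per-listing query (reconstructed here for context; the bedroom, living-area and year filters are omitted for brevity, and $1 stands for the listing id):

insert into price_log (listing_id, date, price, city_average, subdivision_average)
select
  l.id as listing_id,
  CURRENT_DATE as date,
  l.list_price as price,
  (select avg(list_price) from listing c
    where c.status IN ('live', 'updated') and c.deleted_date is null
      and c.id != l.id and c.country = l.country and c.city = l.city
      and c.type = l.type and c.ownership_type = l.ownership_type) as city_average,
  (select avg(list_price) from listing c
    where c.status IN ('live', 'updated') and c.deleted_date is null
      and c.id != l.id and c.country = l.country and c.city = l.city
      and c.subdivision = l.subdivision
      and c.type = l.type and c.ownership_type = l.ownership_type) as subdivision_average
from listing l
where l.id = $1
on conflict (listing_id, date) do nothing;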

The next step was to create a single query that creates the price logs for all listings at once. It looks like this:

with listings_to_process as (
  select * from listing l  
  where 
    l.status IN ('live', 'updated') 
    and l.deleted_date is null
    and not exists (
      select * from price_log pl where pl.date = CURRENT_DATE and pl.listing_id = l.id
    )
)
insert into price_log (listing_id, date, price, city_average, subdivision_average)
select 
  ltp.id as listing_id, 
  CURRENT_DATE as date, 
  (select list_price from listing where id = ltp.id) as price,
  (
    select avg(list_price) from listing 
    where 
      status IN ('live', 'updated') 
      and deleted_date is null 
      and id != ltp.id 
      and country = ltp.country 
      and city = ltp.city 
      and type = ltp.type 
      and ownership_type = ltp.ownership_type 
      and (ltp.bedrooms is null or ltp.bedrooms = 0 or bedrooms = ltp.bedrooms)
      and (ltp.living_area is null or ltp.living_area = 0 or living_area <@ int4range((ltp.living_area - 150), (ltp.living_area + 150)))
      and (ltp.year_built is null or ltp.year_built = 0 or year_built <@ int4range((ltp.year_built - 5), (ltp.year_built + 5)))
  ) as city_average,
  (
    select avg(list_price) from listing 
    where 
      status IN ('live', 'updated') 
      and deleted_date is null 
      and id != ltp.id 
      and country = ltp.country 
      and city = ltp.city 
      and subdivision = ltp.subdivision
      and type = ltp.type 
      and ownership_type = ltp.ownership_type 
      and (ltp.bedrooms is null or ltp.bedrooms = 0 or bedrooms = ltp.bedrooms)
      and (ltp.living_area is null or ltp.living_area = 0 or living_area <@ int4range((ltp.living_area - 150), (ltp.living_area + 150)))
      and (ltp.year_built is null or ltp.year_built = 0 or year_built <@ int4range((ltp.year_built - 5), (ltp.year_built + 5)))
  ) as subdivision_average
from listings_to_process ltp
on conflict (listing_id,date)
do nothing;

Logically it works, and it is quite fast for small datasets. On the full DB it runs forever, and I can’t figure out how to improve it any further.

Here is the explain for that:

Insert on price_log  (cost=1295.97..609417575.98 rows=50654 width=36)
  Conflict Resolution: NOTHING
  Conflict Arbiter Indexes: price_log_listing_id_date_idx
  ->  Hash Anti Join  (cost=1295.97..609417575.98 rows=50654 width=36)
        Hash Cond: (l.id = pl.listing_id)
        ->  Index Only Scan using lising_price_stats_idx on listing l  (cost=0.42..10192.64 rows=64604 width=67)
              Index Cond: (status = ANY ('{live,updated}'::text[]))
        ->  Hash  (cost=748.39..748.39 rows=33293 width=4)
              ->  Seq Scan on price_log pl  (cost=0.00..748.39 rows=33293 width=4)
                    Filter: (date = CURRENT_DATE)
        SubPlan 1
          ->  Bitmap Heap Scan on listing  (cost=1.43..2.44 rows=1 width=8)
                Recheck Cond: (id = l.id)
                ->  Bitmap Index Scan on listing_pkey  (cost=0.00..1.43 rows=1 width=0)
                      Index Cond: (id = l.id)
        SubPlan 2
          ->  Aggregate  (cost=6007.42..6007.43 rows=1 width=8)
                ->  Bitmap Heap Scan on listing listing_1  (cost=427.62..6007.42 rows=1 width=8)
                      Recheck Cond: (((type)::text = (l.type)::text) AND (deleted_date IS NULL))
                      Filter: (((status)::text = ANY ('{live,updated}'::text[])) AND (id <> l.id) AND (country = l.country) AND ((city)::text = (l.city)::text) AND ((ownership_type)::text = (l.ownership_type)::text) AND ((l.bedrooms IS NULL) OR (l.bedrooms = 0) OR (bedrooms = l.bedrooms)) AND ((l.living_area IS NULL) OR (l.living_area = 0) OR (living_area <@ int4range((l.living_area - 150), (l.living_area + 150)))) AND ((l.year_built IS NULL) OR (l.year_built = 0) OR (year_built <@ int4range((l.year_built - 5), (l.year_built + 5)))))
                      ->  Bitmap Index Scan on listing_type_idx  (cost=0.00..427.62 rows=5360 width=0)
                            Index Cond: ((type)::text = (l.type)::text)
        SubPlan 3
          ->  Aggregate  (cost=6020.82..6020.83 rows=1 width=8)
                ->  Bitmap Heap Scan on listing listing_2  (cost=427.62..6020.82 rows=1 width=8)
                      Recheck Cond: (((type)::text = (l.type)::text) AND (deleted_date IS NULL))
                      Filter: (((status)::text = ANY ('{live,updated}'::text[])) AND (id <> l.id) AND (country = l.country) AND ((city)::text = (l.city)::text) AND (subdivision = l.subdivision) AND ((ownership_type)::text = (l.ownership_type)::text) AND ((l.bedrooms IS NULL) OR (l.bedrooms = 0) OR (bedrooms = l.bedrooms)) AND ((l.living_area IS NULL) OR (l.living_area = 0) OR (living_area <@ int4range((l.living_area - 150), (l.living_area + 150)))) AND ((l.year_built IS NULL) OR (l.year_built = 0) OR (year_built <@ int4range((l.year_built - 5), (l.year_built + 5)))))
                      ->  Bitmap Index Scan on listing_type_idx  (cost=0.00..427.62 rows=5360 width=0)
                            Index Cond: ((type)::text = (l.type)::text)
JIT:
  Functions: 49
  Options: Inlining true, Optimization true, Expressions true, Deforming true

As you can see, the cost is gigantic and I can’t get rid of the Hash Anti Join.
Is there any way to make it more efficient?

mac – Message is extremely slow to open on Big Sur


query performance – MySql ORDER BY slow with join, fast as 2 queries

Why is the following query slow but fast when I provide the values inline?

select u.* from user u
join user_group g on u.group_id = g.id
where g.account_id = 1
order by u.id limit 10;
-- takes ~30ms
select id from user_group where account_id = 1;
-- which is (99,198,297,396,495,594,693,792,891,990)

select * from user
where group_id in (99,198,297,396,495,594,693,792,891,990)
order by id limit 10;
-- takes ~1ms

A subquery is also slow. The plan is the same as for the join.

select u.* from user u
where u.group_id in (select id from user_group where account_id = 1)
order by u.id limit 10;
-- ~30ms

All queries produce the same result.

id   deleted  group_id
98   0        99
197  0        198
296  0        297
395  0        396
494  0        495
593  0        594
692  0        693
791  0        792
890  0        891
989  0        990

Schema

I’ve got the following simple table structure, with 100 accounts, 1k user_groups, and 10 million users.

create table account(
  id int primary key auto_increment
);

create table user_group(
  id int primary key auto_increment,
  account_id int not null,
  foreign key (account_id) references account(id)
);

create table user(
  id int primary key auto_increment,  
  deleted tinyint default 0,
  group_id int not null,
  foreign key (group_id) references user_group(id)
);

-- I've been trying with this index, but it doesn't seem to help.
create index user_1 on user(group_id, id, deleted);

Plans

fast explain

The plan with the join uses indexes and does a temporary filesort.
slow explain

I don’t understand why MySQL seems to think it needs to actually do the complete join to filter the data down. We clearly don’t read anything from user_group.

For comparison I’ve tried the same thing in PostgreSQL and both queries run fast.

Why is it slow, and is there a way to write the query (a single query!) so that it runs quickly? A sub-select doesn’t work either.

This dbfiddle shows the problem.
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=8ed68310d8ca72e9daef389dc0469a6f

Using MySql 8.0.23

Thanks

Edit

Here are the complete SHOW SESSION STATUS handler details. It looks like the slow query reads every row, unlike the fast one.

-- FLUSH STATUS;
-- select u.* from user u join user_group g on u.group_id = g.id where g.account_id = 1 order by u.id limit 10;
-- SHOW SESSION STATUS LIKE 'Handler%';

Handler_commit  1
Handler_delete  0
Handler_discover    0
Handler_external_lock   4
Handler_mrr_init    0
Handler_prepare 0
Handler_read_first  0
Handler_read_key    11
Handler_read_last   0
Handler_read_next   100110
Handler_read_prev   0
Handler_read_rnd    0
Handler_read_rnd_next   0
Handler_rollback    0
Handler_savepoint   0
Handler_savepoint_rollback  0
Handler_update  0
Handler_write   0
-- FLUSH STATUS;
-- set @uGroups := (select group_concat(id) from user_group where account_id = 1 group by account_id);
-- select * from user where group_id in (select @uGroups) order by id limit 10;
-- SHOW SESSION STATUS LIKE 'Handler%';

Handler_commit  2
Handler_delete  0
Handler_discover    0
Handler_external_lock   4
Handler_mrr_init    0
Handler_prepare 0
Handler_read_first  0
Handler_read_key    2
Handler_read_last   0
Handler_read_next   19
Handler_read_prev   0
Handler_read_rnd    0
Handler_read_rnd_next   0
Handler_rollback    0
Handler_savepoint   0
Handler_savepoint_rollback  0
Handler_update  0
Handler_write   0

magento2 – M2: ‘Add to Cart’ still extremely slow

Thanks to the Varnish cache, my Magento 2 installation loads within a second. The checkout process, however, is an absolute UX killer. In particular, the cart process is really slow.

Using Chrome’s network tools, I noticed the slow load after clicking ‘Add to Cart’ is caused by this URL:

https://example.com/checkout/cart/add/uenc/aHR0cHM6Ly93d3cuZ2Vycml0c21haW50ZXJpZXVyLm5sL3RvYmlhcy1ncmF1LXNhbHQtcGVwcGVy/product/13971/

The TTFB of these URLs varies between 3 and 10 seconds. And it isn’t only the case for adding products to the cart: modifying quantities in the cart itself (done through an AJAX call) takes forever as well.

I’ve been searching for solutions on StackExchange and elsewhere, but none seem to have any effect:

  • Disabled inventory management
  • Removed cart rules
  • Disabled Minimum Advertised Price

Does anybody have any suggestions or the same experience?

  • Magento 2.4.1
  • Nginx reverse proxy -> Varnish -> Apache
  • VPS: 4 cores, 16GB RAM (assigned properly), MySQL
  • Total products: +/- 12k (9000 simple / 3000 configurable)

virtualbox – Hyper-V VM extremely slow – 100% disk usage always

Windows 10 Pro host

Windows 10 Developer VM

Operations related to installing/removing applications are extremely slow on my VM, and Task Manager shows disk usage constantly at 100%, with an average response time between 80 and 250 ms. CPU usage never gets above 2–5%, and memory normally stays below 80%. I’ve tried messing around with RAM, starting off with 2 GB allocated and eventually trying an 8 GB allocation, with no effect. My host has 32 GB of RAM.

Given this, I can only assume that my RAM/CPU configuration isn’t the issue and something else is wrong. I’ve read many posts online and tried the suggested solutions, but nothing I’ve found has made any difference. I’ve been using VirtualBox up until now, and the performance of those VMs has been infinitely better. I’ve read that Hyper-V is generally meant to be faster, so I guess this further suggests there is a misconfiguration somewhere.

How can I improve/troubleshoot my VM’s performance?

performance – PostgreSQL: Query runs fine in Prod but really slow in QA

Postgres 10.4 dbs hosted in an AWS environment.

I have a query which joins two tables with a many-to-many relationship. While troubleshooting, I learned that autoanalyze won’t run until the number of inserted rows exceeds the autovacuum_analyze_scale_factor fraction of the table, i.e. 0.1 = 10% of the records.

It all makes sense that this is why the query is so slow, and why EXPLAIN VERBOSE won’t display the correct plan until I run VACUUM ANALYZE manually on those two tables.

BUT the same logic doesn’t seem to apply in production: there, as soon as data is added to those two tables, even though the autovacuum_analyze_scale_factor threshold hasn’t been met, both EXPLAIN VERBOSE and query performance are just fine.

I checked the pg_stat_user_tables view and verified that neither autovacuum nor autoanalyze has run since the data insert. So I am confused: how are things working fine in PROD, and why don’t I see similar behavior in QA?
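For reference, this is roughly the check I ran against pg_stat_user_tables (the table names are placeholders):

-- when did (auto)vacuum / (auto)analyze last touch the two tables involved?
SELECT relname,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze,
       n_mod_since_analyze
FROM pg_stat_user_tables
WHERE relname IN ('table_a', 'table_b');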

postgresql – Postgres database insert become slow after 10 days, and then it needs a full vacuum

We have a Postgres database which is almost 15 GB in size after a vacuum, running on dedicated hardware, i.e. 32 GB of RAM and 12 cores. Data gets inserted automatically at around 300k inserts per day, and we also process the data. Data is deleted after 3 months in order to keep the DB size down.

NORMALLY WAS 1200 INSERTS / MINUTE

One of the important tables has quite a large trigger on it (unfortunately we can’t do anything about that). The DB can insert roughly 1200 records per minute, which is quite acceptable for us. These records are not inserted in bulk, but rather individually after processing.

AFTER THE PROBLEM WAS 200 INSERTS / MINUTE

About 10 days ago, our database suddenly got extremely slow, i.e. 200 inserts per minute. So after a few sleepless nights we vacuumed the 10 largest tables, but that only increased the performance to 300 inserts per minute.

AFTER VACUUM 10 LARGE TABLE WAS 300 INSERTS / MINUTE

So we vacuum analyzed the whole database and also REINDEXED the 10 largest tables. This worked extremely well and we were quite satisfied with the results.

AFTER FULL VACUUM ANALYSE AND REINDEX 1400 INSERTS / MINUTE
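For reference, that maintenance pass amounted to roughly the following (the table names are placeholders):

-- vacuum and analyze the whole database
VACUUM ANALYZE;
-- then rebuild the indexes of the 10 largest tables, one by one
REINDEX TABLE big_table_1;
REINDEX TABLE big_table_2;
-- ...and so on for the remaining large tables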

But after just 10 days we began to have the same problem again; the inserts are back down to 200 inserts/min. We didn’t change any Postgres configuration.

AFTER 10 DAYS BACK TO 200 INSERTS / MINUTE

Can you please help me identify the problem? I know this is not much information to work with, but if you have had this problem before you might recognize it instantly. Could this be a problem with the disk, or too many deadlocks?
The hard disk is probably not the problem, as we did get good speeds again after the vacuum, but please let me know if you think otherwise.

Thank you for helping me out.
