20.04 – Firefox 89.0 on Ubuntu – poor performance

I have a clean install of Ubuntu 20.04.2 LTS with Firefox 89.0.

Firefox performance is very poor; you can see it even when scrolling simple pages, which look like they render at barely 10 FPS.

I know that Firefox does not support hardware acceleration for my configuration, but I think my specs should still allow it to run fluently.

Any recommendations on what I should do to improve Firefox performance? (A sketch of the one thing I am considering follows my specs below.)

My Hardware specs:

  • AMD Ryzen 7 5800X 8-core processor (16 threads)
  • NVIDIA GeForce GTX 1070 Ti (GP104)
  • 31.3 GiB RAM
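
A minimal sketch of forcing WebRender on for a test run (gfx.webrender.all and MOZ_WEBRENDER are standard Firefox knobs in this era; whether they actually help on this NVIDIA setup is exactly the open question):

# Check the active compositor first: open about:support and read the
# "Compositing" row ("Basic" means no GPU acceleration is in use).

# One-off test launch with WebRender forced on:
MOZ_WEBRENDER=1 firefox &

# Persistent variant: in about:config set
#   gfx.webrender.all = true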

legending – Poor label and legend resolution using MaTeX

I am using Mathematica 12.3 and the MaTeX package to typeset my labels and legends. The generated text comes out quite blurry, and I don’t know whether there is a way to improve its resolution. My code is

EntropyPlotr0Finite =
 Legended[Show[
   Join[Table[
     ListPlot[EntropyForr0[[i]], PlotStyle -> ColorList[[i]],
      PlotMarkers -> {MarkersList[[i]], Small}], {i, 1, 4}],
    Table[Plot[fitsForr0[[i]][x], {x, 0.1, 5},
      PlotStyle -> {Thickness[0.002], Orange}], {i, 1, 4}]],
   Frame -> True, Axes -> True, PlotRange -> All,
   FrameLabel -> {MaTeX["\\textbf{r}\\boldsymbol{_0}",
      FontSize -> 20], MaTeX["\\boldsymbol{S_N}", FontSize -> 20]},
   LabelStyle -> {Black, Bold, Medium},
   FrameTicksStyle -> Directive[Italic, FontSize -> 17],
   ImageSize -> 800],
  Placed[Framed[
    Column[{PointLegend[
       ColorData[97, "ColorList"][[;; 2]], {Style[
         MaTeX["\\textbf{Values of }\\boldsymbol{S_N}\\textbf{ for }\\boldsymbol{\\kappa_m=0.1}",
          FontSize -> 14]]},
       LegendMarkerSize -> 15,
       LegendMarkers -> {"\[FilledCircle]", 10}],
      PointLegend[{Red}, {Style[
         MaTeX["\\textbf{Values of }\\boldsymbol{S_N}\\textbf{ for }\\boldsymbol{\\kappa_{m+1}\\textbf{ from Eq.}}",
          FontSize -> 14]]},
       LegendMarkerSize -> 15,
       LegendMarkers -> {"\[SixPointedStar]", 10}],
      PointLegend[{Orange}, {Style[
         MaTeX["\\textbf{Values of }\\boldsymbol{S_N}\\textbf{ for }\\boldsymbol{\\kappa_{m+2}\\textbf{ from Eq.}}",
          FontSize -> 14]]},
       LegendMarkerSize -> 15,
       LegendMarkers -> {"\[FilledSquare]", 10}],
      PointLegend[{Black}, {Style[
         MaTeX["\\textbf{Values of }\\boldsymbol{S_N}\\textbf{ for }\\boldsymbol{\\kappa_{m+3}\\textbf{ from Eq.}}",
          FontSize -> 14]]},
       LegendMarkerSize -> 15,
       LegendMarkers -> {"\[FilledUpTriangle]", 10}]}],
    RoundingRadius -> 5], {1, 0.6}]]

and the resulting image after exporting it with

Export["EntropyPlotr0.pdf", EntropyPlotr0Finite, 
 ImageResolution -> 2000]

looks like this:

[exported plot screenshot]

In particular the legend looks like:

[legend close-up screenshot]

which isn’t the best quality.
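
A minimal sketch of the direction I am considering, based on the MaTeX documentation (Magnification is a documented MaTeX option; the factor 2 here is just a guess):

(* Render the TeX at double magnification: the output is scaled up *)
(* without changing the font metrics, which should give crisper    *)
(* text when the figure is displayed smaller.                      *)
MaTeX["\\boldsymbol{S_N}", FontSize -> 20, Magnification -> 2]

(* Note: PDF is a vector format, so ImageResolution only affects   *)
(* any rasterized parts of the export.                             *)
Export["EntropyPlotr0.pdf", EntropyPlotr0Finite]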

nikon – Why are most smartphone cameras poor at capturing photos during the nighttime compared to DSLR cameras?

Phone cameras have come a long way since the 2000s in terms of megapixels; however, one glaringly obvious way in which they fall short is their ability to capture photos in low-light conditions.

I have a Huawei smartphone and a Nikon D3X borrowed from a friend, and I previously owned a Samsung Galaxy S3. Both phones could capture very good daytime photos; however, the difference in quality was apparent when I tried to take a photo of the moon or of sodium lamps. The photos from both smartphones were very unclear and had a lot of noise.

I’ve also read this online.

8 – Workarounds for Migrate Modules’ poor performance


connection issues – No or very poor cell signal in areas I know have good signal

My phone worked fine till about the middle of yesterday. I live out in the country, but usually I see 3 to 4 bars sitting at my desk in the basement. Now, even if I walk up to the top of the hill (200 ft) on the property, from which I can see 3 cell towers, the signal comes and goes: for about 30 seconds there is full strength, then nothing, and I mean absolutely zero signal, for a minute or two. I had this problem last year, and as the phone was in warranty I just got a replacement; I figured the antenna wasn’t connected correctly and had come loose. But for it to suddenly get this bad again, when I am sure I haven’t dropped or jarred the phone, makes me think something else is screwed up.

This morning I figured that, since I hadn’t done any updates in a while, I should do them. I used my Wi-Fi connection to update, and the phone worked for a couple of minutes after the update; then the signal was gone again. This makes me think some setting is messed up, but where it is and how to fix it is my problem.

As I drive around I get a signal, but noticeably I have to be closer to a tower than before to get it; then it gives me 4 to 5 bars, otherwise nothing.

I have checked the “Mobile Network” settings: “Roaming” is on, “Preferred Network Type” is set to “Global”, and “Enhanced 4G LTE Mode” is on. I have tried changing them all, but it makes no difference.

If it is relevant, my phone is a Motorola Moto E6.

This is such an odd problem that I am not even sure where to start digging for answers.

elasticsearch – Poor Site Performance after Upgrade from 2.4.1 to 2.4.2

After upgrading from Magento 2.4.1 to 2.4.2 on Feb 25th, our install has been experiencing very poor performance. The 2.4.1 install ran flawlessly with an uptime in excess of 80 days; the symptoms now include:

  • Random (every hour or so) high server loads and very high frontend response times; these sometimes settle back down
  • Random server hangs (no response and 500s, every 24-48 hrs) requiring an external reboot to get it back online
  • Magento double-processing orders (i.e. customers are charged twice and two orders appear in the backend)
  • Magento takes the customer’s payment (via Stripe or PayPal) but does not complete the order, and no order appears in the backend (this usually coincides with a high-load event); a while later a failed-payment email comes through saying products are out of stock that should have saleable quantity
  • When the server hangs and recovers, sometimes Elasticsearch will not recover with it; in this case the site and server run fine, but obviously no products are available on the site!
  • During these high-load events, if I execute top -c, the CPUs seem to be tied up with a high %wa (I/O wait); see the quick check sketched after this list
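
To tell whether that %wa is disk pressure (e.g. a 2 GB RAM box swapping under Elasticsearch and MySQL), the standard sysstat/procps tools give a quick read; a minimal sketch, nothing Magento-specific:

# Per-device utilisation, sampled every second for 10 s; high %util
# and await during a load spike point at I/O rather than CPU.
iostat -x 1 10

# The 'si'/'so' columns show pages swapped in/out per second.
vmstat 1 10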

I’m guessing that there are changes between 2.4.1 and 2.4.2 that are causing Elasticsearch to overload now, possibly via reindexing or a cron-job change. So…

Environment (off the top of my head)

  • CentOS 7.9 VPS
  • WHM/cPanel
  • 2 CPUs
  • 2 GB RAM
  • 3 GB swap
  • 60 GB disk space (69% used)
  • LiteSpeed webserver 5.4.12 and LiteMage cache plugin
  • PHP 7.4.16
  • MySQL 8.0.23

Things I have tried so far…

  • Switched from PHP 7.3 to PHP 7.4.16
  • Upgraded the MFTF from 2.5.4 to 3.4
  • Upgraded from MySQL 5.7 to MySQL 8
  • Set the Elasticsearch heap to -Xms1g/-Xmx1g in jvm.options.d – this was done before the upgrade and worked fine with 2.4.1
  • Added LimitMEMLOCK=infinity in /etc/systemd/system/elasticsearch.service.d/override.conf – this was done before the upgrade and worked fine with 2.4.1
  • Added Restart=always in /etc/systemd/system/elasticsearch.service.d/override.conf – this hasn’t stopped the overloads, though (the full override file is sketched after this list)
  • Installed the sodium PHP module – this had to be done, otherwise users could not log in after the upgrade
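
For reference, the systemd drop-in mentioned above contains just these lines (a sketch of my file; the path and keys are standard systemd, applied with systemctl daemon-reload and a service restart):

# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
Restart=always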

Is anyone else having similar issues after upgrading from 2.4.1 to 2.4.2? Any thoughts? I’m sure this is impacting my sales.

MongoDB Aggregate Poor Index Usage

I’ve been trying to understand the MongoDB aggregation process so I can better optimize my queries, and I’m confused by the use of $match and $sort together.

The sample DB has only one collection, people:

[{
    "name": "Joe Smith",
    "age": 40,
    "admin": false
},
{
    "name": "Jen Ford",
    "age": 45,
    "admin": true
},
{
    "name": "Steve Nash",
    "age": 45,
    "admin": true
},
{
    "name": "Ben Simmons",
    "age": 45,
    "admin": true
}]

I’ve multiplied this data ×1000, just as a proof of concept.

The DB above has one user-created index, name_1.

The following query

db.people.find({"name": "Jen Ford"}).sort({"_id": -1}).explain()

has the following output:

{ queryPlanner: 
   { plannerVersion: 1,
     namespace: 'db.people',
     indexFilterSet: false,
     parsedQuery: { name: { '$eq': 'Jen Ford' } },
     queryHash: '3AE4BDA3',
     planCacheKey: '2A9CC473',
     winningPlan: 
      { stage: 'SORT',
        sortPattern: { _id: -1 },
        inputStage: 
         { stage: 'SORT_KEY_GENERATOR',
           inputStage: 
            { stage: 'FETCH',
              inputStage: 
               { stage: 'IXSCAN',
                 keyPattern: { name: 1 },
                 indexName: 'name_1',
                 isMultiKey: false,
                  multiKeyPaths: { name: [] },
                 isUnique: false,
                 isSparse: false,
                 isPartial: false,
                 indexVersion: 2,
                 direction: 'forward',
                  indexBounds: { name: [ '["Jen Ford", "Jen Ford"]' ] } } } } },
     rejectedPlans: 
      [ { stage: 'FETCH',
          filter: { name: { '$eq': 'Jen Ford' } },
          inputStage: 
           { stage: 'IXSCAN',
             keyPattern: { _id: 1 },
             indexName: '_id_',
             isMultiKey: false,
              multiKeyPaths: { _id: [] },
             isUnique: true,
             isSparse: false,
             isPartial: false,
             indexVersion: 2,
             direction: 'backward',
              indexBounds: { _id: [ '[MaxKey, MinKey]' ] } } } ] },
  serverInfo: 
   { host: '373ea645996b',
     port: 27017,
     version: '4.2.0',
     gitVersion: 'a4b751dcf51dd249c5865812b390cfd1c0129c30' },
  ok: 1 }

This makes total sense.

However

The following query returns the same result set but uses the aggregation pipeline:

db.people.aggregate([ { $match: { $and: [{ name: "Jen Ford" }]}}, { $sort: {"_id": -1}}], {"explain": true})

It has the following output:

{ queryPlanner: 
   { plannerVersion: 1,
     namespace: 'db.people',
     indexFilterSet: false,
     parsedQuery: { name: { '$eq': 'Jen Ford' } },
     queryHash: '3AE4BDA3',
     planCacheKey: '2A9CC473',
     optimizedPipeline: true,
     winningPlan: 
      { stage: 'FETCH',
        filter: { name: { '$eq': 'Jen Ford' } },
        inputStage: 
         { stage: 'IXSCAN',
           keyPattern: { _id: 1 },
           indexName: '_id_',
           isMultiKey: false,
            multiKeyPaths: { _id: [] },
           isUnique: true,
           isSparse: false,
           isPartial: false,
           indexVersion: 2,
           direction: 'backward',
            indexBounds: { _id: [ '[MaxKey, MinKey]' ] } } },
     rejectedPlans: [] },
  serverInfo: 
   { host: '373ea645996b',
     port: 27017,
     version: '4.2.0',
     gitVersion: 'a4b751dcf51dd249c5865812b390cfd1c0129c30' },
  ok: 1 }

Notice how the aggregate query fails to recognize that it should use the name_1 index for the $match. This has massive implications as the collection grows.

I’ve seen this behavior now in Mongo 3.4, 3.6, and 4.2.

https://docs.mongodb.com/v4.2/core/aggregation-pipeline-optimization/ provides this blurb

$sort + $match Sequence Optimization:
When you have a sequence with $sort followed by a $match, the $match moves before the $sort to minimize the number of objects to sort.

From all this, I think I’m fundamentally misunderstanding something with the Mongo aggregate command.

I already understand that if I create a compound index on name, _id, then it will be used, since it includes the fields from both my $match and my $sort clause (sketched below).
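
For concreteness, a minimal sketch of that workaround in the mongo shell (standard createIndex/aggregate calls; nothing here is specific to my data):

// Compound index: equality field first, sort field second. The planner
// can now satisfy the $match with an IXSCAN on name and read the keys
// already ordered by _id, so no in-memory SORT stage is needed.
db.people.createIndex({ name: 1, _id: -1 })

db.people.aggregate(
  [{ $match: { name: "Jen Ford" } }, { $sort: { _id: -1 } }],
  { explain: true }
)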

But why must an index include a field from the $sort clause in order to be used to restrict my $match set? It seems obvious that we would prefer to $sort the smallest set possible.

K10D poor focus – manual or auto – with full-frame zoom lens

I’m getting very poor focus, particularly at the long end of an 80-320mm tele lens that I am using with my Pentax K10D.

The lens was originally bought for my 35mm Pentax SLR (maybe an MZ5 or something, I don’t remember, and it’s at the back of a cupboard these days), and it worked very well with the film camera.

Theoretically the lens is compatible with the K10D, though obviously the different sensor size means it’s not a direct 80-320mm equivalent (the K10D’s APS-C sensor has a crop factor of about 1.5×, so the field of view is more like that of a 120-480mm lens on the film body).

The problem has been present since I first bought the camera about 15(?) years ago, but I encountered it a couple of nights ago again and the frustration led me to post this question in case there is a solution.

The photo below shows the issue.

[Out-of-focus image]

The image was shot at the ‘320mm’ end of the tele lens and the settings are shown below.

[Image information]

The image is absolutely pin-sharp in the viewfinder, but as you can see, the resulting photo is blurred. Not only that, but the ‘in-focus’ beep and dot come on when the image appears to be in focus.

A while after getting the camera I swapped the focusing screen for a ‘Katz Eye Optics’ split prism, partly to see if this led to any improvement and partly because I always loved this type of focusing on my dad’s old Pentax film cameras. Sadly it made no difference, so it seems the problem was not that the image was out of focus in the viewfinder.

I’ve read about the AF issues on the K10D, but this happens when focusing manually, and also when ignoring the ‘in-focus’ beep (I tried ignoring the beep years ago when I first saw the issue – long before I read about the K10D AF issue).

I haven’t seen the problem with the 28-80mm lens supplied with the K10D, but I have assumed that is because it has a maximum focal length of 80mm, and the problem only seems to manifest at the long end of the tele lens.

Is this problem likely to be lens incompatibility, an issue that is only noticeable when shooting subjects like this at a distance, or something inherently wrong in the body (like the AF / back-focus issue)?

Edit: I forgot to mention that I also had a circular polarizer fitted for this shot, as the moon was very bright that night, but the same blur occurs without it when shooting other subjects.

Magento2 poor LCP on product page

I’m trying to improve the LCP (Core Web Vitals) of my product page.
I’m using the latest Magento version, 2.4.1.

Although LCP is quite good on the homepage and category pages (1.7 s and 1.4 s), it is very bad on product pages (3.1 s to 3.7 s), and we are aiming at an LCP of 1.2 s, or near that figure.

The very best result I get is 3.1 s.
All images, JS, and CSS are already optimised: WebP images, deferred JS, critical CSS.
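
For reference, this is how I read the LCP directly in the browser console rather than waiting for lab reports (standard PerformanceObserver web API; the element logged is whatever the browser picked as largest):

new PerformanceObserver((list) => {
  // The last entry is the current LCP candidate; startTime is in ms.
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  console.log('LCP:', last.startTime, 'ms', last.element);
}).observe({ type: 'largest-contentful-paint', buffered: true });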

Even on a clean Magento 2 install with demo data, the LCP is bad.

Has anyone succeeded in improving this LCP?

Thanks!
