cryptography – How are extremely large integers stored and implemented in programming languages?

MPI stands for Multiple Precision Integer. Multiple-precision arithmetic is what you need when you work with integers that do not fit in the machine word width $w$.

The basic idea is simple: you represent a large integer with multiple fixed-width words, where the $i$-th word is the $i$-th “digit” in base $B = 2^w$.
For example, most current machines are 64-bit, so the width $w$ is 64, and with a single word you can represent unsigned integers up to $2^{64}-1$. To represent integers larger than $2^{64}-1$, say a 1024-bit integer as in your RSA example, you use $\lceil 1024 / 64 \rceil = 16$ words $a_0, a_1, a_2, \ldots, a_{15}$. Then your integer $x$ of choice is encoded as
$$
x = a_0 + 2^{64} a_1 + 2^{2 \cdot 64} a_2 + \ldots + 2^{15 \cdot 64} a_{15}.
$$

Note that this is essentially a $1024$-bit representation; the only difference is that the bits are grouped into blocks of size 64.
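As a smaller concrete example, a 128-bit value $x$ occupies two words: the low word $a_0 = x \bmod 2^{64}$ and the high word $a_1 = \lfloor x / 2^{64} \rfloor$, so that $x = a_0 + 2^{64} a_1$.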

Operations like addition and multiplication are implemented by building on the machine's addition and multiplication instructions, but of course additional work is needed to take care of carries and the like. I am not sure what the Linux kernel is using, but in the GNU/Linux world a widely used multiple-precision arithmetic library is GMP.
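As a rough illustration of the carry handling (just a minimal sketch in C++; a real library such as GMP uses carefully optimized, often assembly-level, routines), word-by-word addition of two such integers might look like this:

#include <algorithm>
#include <cstdint>
#include <vector>

// Little-endian limbs: a[0] is the least significant 64-bit "digit".
std::vector<uint64_t> mp_add(const std::vector<uint64_t>& a,
                             const std::vector<uint64_t>& b) {
    std::vector<uint64_t> sum;
    uint64_t carry = 0;
    std::size_t n = std::max(a.size(), b.size());
    for (std::size_t i = 0; i < n; ++i) {
        uint64_t x = i < a.size() ? a[i] : 0;
        uint64_t y = i < b.size() ? b[i] : 0;
        uint64_t s = x + y;               // wraps around modulo 2^64 on overflow
        uint64_t c = (s < x) ? 1 : 0;     // wrap-around means a carry was produced
        s += carry;
        c += (s < carry) ? 1 : 0;         // adding the previous carry can also wrap
        sum.push_back(s);
        carry = c;
    }
    if (carry) sum.push_back(carry);      // the result may need one extra word
    return sum;
}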


postgresql – Using subquery in WHERE makes query extremely slow

I have this rather basic query that is very slow for reasons I can’t figure out:

SELECT s.id 
FROM segments s
WHERE
    ST_DWithin(
        s.geom::GEOGRAPHY,
        ST_Envelope((SELECT ST_COLLECT(s2.geom) FROM segments s2 WHERE s2.id IN (407820025,  407820024,  407817407,  407817408,  407816908,  407816909,  407817413,  407817414,  407817409,  407817410,  407817405,  407817406,  407816905,  407816907,  407817412,  407817411,  407816906,  407816904,  407816764,  407816765)))::GEOGRAPHY,
        30
    );

                                                                                                                           QUERY PLAN                                                                                                                            
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on segments s  (cost=55.58..48476381.06 rows=7444984 width=4)
   Filter: st_dwithin((geom)::geography, (st_astext(st_envelope($0)))::geography, '30'::double precision)
   InitPlan 1 (returns $0)
     ->  Aggregate  (cost=55.57..55.58 rows=1 width=32)
           ->  Index Scan using segments_pkey on segments s2  (cost=0.44..55.52 rows=20 width=113)
                 Index Cond: (id = ANY ('{407820025,407820024,407817407,407817408,407816908,407816909,407817413,407817414,407817409,407817410,407817405,407817406,407816905,407816907,407817412,407817411,407816906,407816904,407816764,407816765}'::integer[]))

What really confuses me is that the ST_Envelope with the subquery is very fast by itself:

SELECT ST_Envelope((SELECT ST_COLLECT(geom) FROM segments WHERE id IN (407820025,  407820024,  407817407,  407817408,  407816908,  407816909,  407817413,  407817414,  407817409,  407817410,  407817405,  407817406,  407816905,  407816907,  407817412,  407817411,  407816906,  407816904,  407816764,  407816765)))::GEOGRAPHY;

                                                                                                                           QUERY PLAN                                                                                                                            
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Result  (cost=55.58..55.60 rows=1 width=32)
   InitPlan 1 (returns $0)
     ->  Aggregate  (cost=55.57..55.58 rows=1 width=32)
           ->  Index Scan using segments_pkey on segments  (cost=0.44..55.52 rows=20 width=113)
                 Index Cond: (id = ANY ('{407820025,407820024,407817407,407817408,407816908,407816909,407817413,407817414,407817409,407817410,407817405,407817406,407816905,407816907,407817412,407817411,407816906,407816904,407816764,407816765}'::integer[]))

And so is the main query if I plug in the result of the ST_Envelope directly:

SELECT id 
FROM segments
WHERE
    st_dwithin(
        geom::geography,
        '0103000020E61000000100000005000000C87B6E0D8FB85EC04BFD8462B9C34640C87B6E0D8FB85EC0929B35C16DC44640BBF8DDA6F2B75EC0929B35C16DC44640BBF8DDA6F2B75EC04BFD8462B9C34640C87B6E0D8FB85EC04BFD8462B9C34640'::GEOGRAPHY,
        30
    );

                                                                                                                                                                                                                                                                                QUERY PLAN                                                                                                                                                                                                                                                                                
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using segments_geom_geo_idx on segments  (cost=0.42..4.82 rows=1 width=4)
   Index Cond: ((geom)::geography && '0103000020E61000000100000005000000C87B6E0D8FB85EC04BFD8462B9C34640C87B6E0D8FB85EC0929B35C16DC44640BBF8DDA6F2B75EC0929B35C16DC44640BBF8DDA6F2B75EC04BFD8462B9C34640C87B6E0D8FB85EC04BFD8462B9C34640'::geography)
   Filter: (('0103000020E61000000100000005000000C87B6E0D8FB85EC04BFD8462B9C34640C87B6E0D8FB85EC0929B35C16DC44640BBF8DDA6F2B75EC0929B35C16DC44640BBF8DDA6F2B75EC04BFD8462B9C34640C87B6E0D8FB85EC04BFD8462B9C34640'::geography && _st_expand((geom)::geography, '30'::double precision)) AND _st_dwithin((geom)::geography, '0103000020E61000000100000005000000C87B6E0D8FB85EC04BFD8462B9C34640C87B6E0D8FB85EC0929B35C16DC44640BBF8DDA6F2B75EC0929B35C16DC44640BBF8DDA6F2B75EC04BFD8462B9C34640C87B6E0D8FB85EC04BFD8462B9C34640'::geography, '30'::double precision, true))

Shouldn’t Postgres compute the ST_Envelope once and then use it for the WHERE condition, effectively doing what I did manually? I also don’t get why no index is used for the Filter in the original query.

I tried putting the subquery in a CTE but that didn’t solve the issue.
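Something along these lines (a rough sketch; AS MATERIALIZED requires PostgreSQL 12+):

WITH env AS MATERIALIZED (
    SELECT ST_Envelope(ST_Collect(s2.geom))::GEOGRAPHY AS g
    FROM segments s2
    WHERE s2.id IN (407820025, 407820024, 407817407 /* ... the rest of the ids ... */)
)
SELECT s.id
FROM segments s, env
WHERE ST_DWithin(s.geom::GEOGRAPHY, env.g, 30);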

mysql – Extremely slow SQL query

My problem is with an SQL query that is running extremely slowly. Since I’m not that experienced with SQL, I can’t seem to find the best places to create indices for the query. The query consists of multiple inner joins. Here is the snippet:

SELECT cf.id as coverageFamilyId, 
       df.prior_auth_required as priorAuthRequired, 
       dn.generic_product_identifier as genericProductIdentifier, 
       dn.multi_source_summary_code as multiSourceSummaryCode, 
       dn.brand_name_code as brandNameCode, 
       dn.maintenance_drug_code as maintenanceDrugCode, 
       dn.strength as strength, 
       dn.strength_unit_of_measure as strengthUnitOfMeasure, 
       dgpi.tc_gpi_name as tcGpiName, 
       dval.value_description as valueDescription , 
       dn.drug_name as drugName, 
       df.quantity_limit_applies as quantityLimitApplies, 
       df.step_therapy_required as stepTherapyRequired 
FROM drug_name dn 
INNER JOIN drug_gpi dgpi on SUBSTRING(dn.generic_product_identifier,1,10) = dgpi.tc_gpi_key 
INNER JOIN drug_ndc dndc on dn.drug_descriptor_identifier = dndc.drug_descriptor_identifier 
INNER JOIN drug_val dval on dn.dosage_form = dval.field_value 
INNER JOIN service_code sc on dndc.ndc_upc_hri = sc.code 
INNER JOIN service_code_type sct on sc.service_code_type_id = sct.id AND sct.code = 'NDC' 
INNER JOIN drug_formulary df on sc.id = df.service_code_id 
INNER JOIN service s on df.service_id = s.id 
INNER JOIN coverage_family_service cfs on s.id = cfs.service_id 
INNER JOIN coverage_family cf on cfs.coverage_family_id = cf.id 
INNER JOIN policy_coverage_family pcf on pcf.coverage_family_id = cf.id 
WHERE pcf.policy_id = :policyId 
AND df.pbm_group_id = :pbmGroupId 
AND dndc.drug_descriptor_identifier = :ddi 
AND dndc.item_status_flag = 'A' 
AND dndc.repackage_code != 'X' 
AND dndc.innerpack_code = 'N' 
AND dndc.clinic_pack_code = 'N'

I’d be glad if you could point me in the right direction with the indices.
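For what it’s worth, going purely by the join and filter columns above, this is the kind of thing I imagine might help (assuming none of these indexes exist yet; I have not verified any of this with EXPLAIN):

CREATE INDEX idx_dndc_ddi    ON drug_ndc (drug_descriptor_identifier);
CREATE INDEX idx_df_sc_pbm   ON drug_formulary (service_code_id, pbm_group_id);
CREATE INDEX idx_pcf_policy  ON policy_coverage_family (policy_id, coverage_family_id);
CREATE INDEX idx_sc_code     ON service_code (code);
CREATE INDEX idx_cfs_service ON coverage_family_service (service_id, coverage_family_id);
CREATE INDEX idx_dgpi_key    ON drug_gpi (tc_gpi_key);
CREATE INDEX idx_dval_field  ON drug_val (field_value);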


javascript – Bookmarklet works fine in Opera, but when running in Firefox I get extremely weird results

I wrote this bookmarklet:

javascript:var b = document.createElement("button");b.innerHTML = "Scroll to current video";b.addEventListener("click",() => doItYesly());b.style.position  = "fixed";b.style.left = 0;b.style.top = 0;b.style.zIndex = "99999999";document.body.prepend(b);var s = document.createElement("button");s.style.position  = "fixed";s.style.left = 0;s.style.top = "50px";s.style.zIndex = "99999999";s.innerHTML = "Set";s.addEventListener("click",() => localStorage.setItem("scrolldistanceforosautoscroller",window.scrollY));document.body.prepend(s);function doItYesly(){let inter = setInterval(() => {scrollTo(0,parseInt(localStorage.getItem("scrolldistanceforosautoscroller")));if(window.scrollY === parseInt(localStorage.getItem("scrolldistanceforosautoscroller"))){clearInterval(inter);}},100);}window

Basically it’s a tiny bookmarklet to allow a user to auto-scroll to a specific point in a long list of videos on YouTube. I wrote it for a friend of mine who repeatedly navigates to the same page over and over, and I wanted to save him some time and spare him from manually scrolling through all the videos every time he wanted to return to the exact same spot.

When I click this bookmark in my browser (I use Opera) it works just fine. When I open the same bookmark in Firefox (he uses Firefox) it redirects me to a page that says “this page is hosted on your computer” and simply shows [object Window]. This happens because, in Opera, my code would originally print out “input scroll distance”, since that was the value of the last expression. To fix that, I simply ended the bookmarklet with the window object, so the final expression evaluates to the page itself. That fixed the problem in Opera, but it doesn’t fix the issue in Firefox… instead of just rendering the page like usual, Firefox simply outputs a textual representation of the window object…

Is there any way around this? I assume this is for security, and if so then there probably isn’t a workaround… But perhaps there’s something I could do to make Firefox stop behaving this way?

When I run this exact same code from the dev console it works perfectly as expected; the problem only occurs when I save it as a bookmarklet and click on it. Any ideas?
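One pattern I’ve seen suggested (I haven’t confirmed that it changes Firefox’s behaviour) is to make the whole bookmarklet evaluate to undefined, for example by wrapping it in void(...), so there is no completion value for the browser to render in place of the page:

javascript:void((function () {
  /* ...the existing button / localStorage code from above, unchanged... */
})());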

postgresql – Extremely safe Postgres table permissions

Let’s say that I have four tables in my Postgres database, two of which are private and contain highly sensitive information (private1 and private2), and two that contain information I want to allow anyone in the world to be able to query arbitrarily (public1 and public2). I’m aware that this is a very poor design, but bear with me.

I want to set up a user that can solely run SELECTs on the two public tables, but can in no way do anything else even remotely malicious with the other two tables (or the database more generally).

My naive approach would be to do something like set up a new user public_querier, run a REVOKE ALL ON private1, private2, public1, public2 FROM public_querier; and then a GRANT SELECT ON public1, public2 TO public_querier;.
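Spelled out, that naive approach would look roughly like this (role and table names as in the example above; authentication setup omitted):

CREATE ROLE public_querier LOGIN;  -- password/connection settings omitted

REVOKE ALL ON private1, private2, public1, public2 FROM public_querier;
GRANT SELECT ON public1, public2 TO public_querier;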

I suspect that this does not fulfill my security desideratum because of some subtleties that I don’t have knowledge of, and I’d greatly appreciate (1) hearing whether my suspicion is correct and (2) any references that would help guide me in the right direction if it is.

Cheers!

vanitygen – How do extremely difficult vanity addresses get found in the first place?

Funds are spendable by public keys, and addresses contain public key hashes. Vanity addresses are created by hashing lots of public keys until the resulting address has the desired prefix or pattern. What you mentioned is an example of a burn address, not a vanity address. Burn addresses are crafted by manually editing the public key hash with a specific corresponding address in mind. Burn addresses do have corresponding public key(s), but since it is computationally infeasible to find a public key from its hash, funds sent to a burn address cannot be spent. They are similar to addresses whose owners mistakenly deleted their wallets, where the funds are locked forever.

The reason the last digits of a burn address look random is that addresses also contain a checksum, which is a hash of everything else encoded in the address.
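As an illustration of that structure, here is a rough Python sketch of Bitcoin’s legacy Base58Check encoding (the 20-byte hash below is an arbitrary hand-picked value used only for the example, not any particular well-known burn address):

import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check_encode(version: bytes, payload: bytes) -> str:
    # The address body is the version byte plus the public key hash; the last
    # 4 bytes are a checksum over that body, which is why the tail of an
    # address looks random even when the hash itself was chosen by hand.
    body = version + payload
    checksum = hashlib.sha256(hashlib.sha256(body).digest()).digest()[:4]
    data = body + checksum
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, r = divmod(n, 58)
        out = B58_ALPHABET[r] + out
    # each leading zero byte is encoded as a leading '1'
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

# A hand-picked "hash" with no known preimage, giving a burn-style address.
print(base58check_encode(b"\x00", bytes.fromhex("00" * 20)))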

r – Extremely inefficient code written in C++

I am moving this question from Stack Overflow: https://stackoverflow.com/questions/67407274/extremely-inefficient-code-written-in-c?noredirect=1#comment119145723_67407274 . The computation time for the function below is very high. I am calling it from R using the Rcpp package. Is there any room for improvement? Should I be accessing the elements of the matrix X differently? I appreciate any comments or suggestions.

#include <RcppArmadillo.h>
using namespace Rcpp;
using namespace arma;
// [[Rcpp::depends(RcppArmadillo)]]

// [[Rcpp::export]]


arma::mat myfunc(const int& n,
                 const int& p,
                 arma::mat& X,
                 arma::rowvec& y,
                 const arma::rowvec& types,
                 const arma::mat& rat,
                 const arma::rowvec& betas){
  
  arma::mat final(p+p,n);
  final.zeros();
  int i,j;
  
  for(i=0; i < n; ++i){
    arma::colvec finalfirst(p+p); finalfirst.zeros();
    for(j=0; j < n; ++j){
      arma::mat Xt = X * log(y(j));
      arma::mat finalX = join_rows(X,Xt);
      
      arma::rowvec Xi = finalX.row(i);
      
      if(types(i)==1 && y(j)==y(i)){
        finalfirst += (Xi.t() - rat.col(j));
      }
      if(types(i)>1 && y(j) > y(i)){
        finalfirst -= (Xi.t() - rat.col(j)) * exp(arma::as_scalar(betas*Xi.t()));
        
      }
      else if(y(j) <= y(i)){
        finalfirst -= Xi.t() * exp(arma::as_scalar(betas*Xi.t()));
      }
    }
    
    final.col(i) = finalfirst;
  }
  
  return(final);
}




/*** R
library(microbenchmark)
m = 4000
types = runif(m, 0, 5)
types[types <= 1] = 0; types[types > 1 & types < 3] = 1; types[types >= 3] = 2
microbenchmark(out = myfunc(n=m, p=2, X=matrix(rnorm(m*2),nrow=m,ncol=2), y=runif(m,0,3), types=types, rat=matrix(rnorm(m*4),nrow=4,ncol=m), betas=c(1,2,3,4)))
*/
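Regarding my question about accessing X differently: Xi only ever uses row i of X, so one idea would be to avoid rebuilding the full n x 2p matrix join_rows(X, X * log(y(j))) on every inner iteration. Something like this untested sketch (same includes and Rcpp attributes as above):

arma::mat myfunc2(const int& n,
                  const int& p,
                  arma::mat& X,
                  arma::rowvec& y,
                  const arma::rowvec& types,
                  const arma::mat& rat,
                  const arma::rowvec& betas){

  arma::mat final(p + p, n, arma::fill::zeros);

  for(int i = 0; i < n; ++i){
    arma::rowvec xrow = X.row(i);                      // row i is all that is ever used
    arma::colvec finalfirst(p + p, arma::fill::zeros);

    for(int j = 0; j < n; ++j){
      // Same Xi as before, built from a single row instead of the whole matrix.
      arma::rowvec Xi = arma::join_rows(xrow, xrow * std::log(y(j)));
      double e = std::exp(arma::as_scalar(betas * Xi.t()));

      if(types(i) == 1 && y(j) == y(i)){
        finalfirst += (Xi.t() - rat.col(j));
      }
      if(types(i) > 1 && y(j) > y(i)){
        finalfirst -= (Xi.t() - rat.col(j)) * e;
      } else if(y(j) <= y(i)){
        finalfirst -= Xi.t() * e;
      }
    }

    final.col(i) = finalfirst;
  }

  return final;
}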

router – connecting with external ip is extremely slow

To be clear, I am an extreme noob when it comes to networking, so please be kind.

It takes an exorbitant amount of time to connect to my server (running on my home network) via the external IP (the domain or the resolved IP), but only for certain ports. For example, when I try to load my webpage, it’s as fast as can be, but when I try to connect to a cpu(1) server with drawterm, it takes several minutes. But then, when that cpu(1) server finishes and needs to connect to the auth server, it’s fast once again. If I use the internal IPs (192.168.22.*), these are all fast. To be clear, all of these use TCP.

My server is running 9front, but this is not a 9front issue, as I had similar problems when I was running Void Linux with some server software I wrote myself.

Has anyone experienced this before? Is it an issue with my router or my computer maybe?

Thank you in advance