domain name system – Router caching “A” zone record?

OK, this is driving me bonkers.

I had a website at a subdomain, site.domain.com, set up with an A record that pointed to my home IP address with the website running on a NAS device at 192.168.1.122. I have modified the zone record so it is now a CNAME that points to a totally different domain with an IP address outside of my home network.

On my mobile devices, and only on the mobile devices (I’ve tested on my iPhone and iPad and my son’s iPhone), I still pull up the website from a machine on my home network. The really weird thing is that every once in a while, the new site will pop up if I refresh the browser.

When I do a DNS query against my router with dig @192.168.1.1 site.domain.com from my desktop machine, it returns an A record (with the IP address of the local machine the website used to be on) rather than the CNAME record:

; <<>> DiG 9.10.6 <<>> @192.168.1.1 site.domain.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46256
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;site.domain.com.             IN      A

;; ANSWER SECTION:
site.domain.com.      0       IN      A       192.168.1.122

;; Query time: 5 msec
;; SERVER: 192.168.1.1#53(192.168.1.1)
;; WHEN: Wed Jul 08 19:38:50 EDT 2020
;; MSG SIZE  rcvd: 62

The desktop computer is set to query my local DNS server on my network. The mobile devices are set to get DNS from the router at 192.168.1.1. So that explains why the desktop works and the mobile devices do not.

So it looks to me like my router is caching the A record. But how? It’s been well over 24 hours now. I have also rebooted the router to try to clear stuff from the cache.

I have double-checked the zone settings on my local DNS server (which also runs at 192.168.1.122), and those look fine. Querying that machine directly also returns the proper value.
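
To narrow down which resolver is serving the stale answer, it may help to compare the router, the local DNS server, and a public resolver side by side from the desktop (the +noall +answer options just trim the output):

dig @192.168.1.1   site.domain.com +noall +answer   # router
dig @192.168.1.122 site.domain.com +noall +answer   # local DNS server
dig @8.8.8.8       site.domain.com +noall +answer   # public resolver

One hint from the output above: the router's answer has the aa flag set and a TTL of 0, which looks less like a cached record and more like a local/static host entry the router itself is authoritative for (for example a DHCP or NAS hostname mapping), which would also explain why a reboot and 24+ hours didn't clear it.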

8 – How to programmatically print a webform block with caching enabled?

When displaying a Webform inside a normal block (placed via admin/structure/block), Drupal manages to cache the page correctly; the HTTP header shows X-Drupal-Dynamic-Cache: HIT.

However, we need to display this webform inside a paragraph, so we tried the following:

  • use the twig_tweak module and {{ drupal_block('webform_...') }}
  • programmatically build the render array in a template preprocess, like this:
$my_form = \Drupal\webform\Entity\Webform::load('contact_new');
$output = \Drupal::entityTypeManager()
          ->getViewBuilder('webform')
          ->view($my_form);
$variables['contact_form'] = $output;

Both solutions seem to make the page uncacheable: X-Drupal-Dynamic-Cache: UNCACHEABLE.

What would be the correct way to put a block in a paragraph while keeping the page cacheable?
How can we mimic the standard block system to display a block in our template?
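
One way to mimic what the standard block system does is to instantiate the Webform block plugin through the block plugin manager and explicitly bubble its cacheability metadata into the render array. The sketch below is untested; the plugin ID 'webform_block' and its 'webform_id' setting are assumptions to verify against your Webform version:

// In a preprocess hook: build the Webform block via the block plugin manager
// and bubble its cache contexts/tags/max-age so Dynamic Page Cache can still
// cache the surrounding page (access checks omitted for brevity).
$block_manager = \Drupal::service('plugin.manager.block');
$plugin_block = $block_manager->createInstance('webform_block', [
  'webform_id' => 'contact_new',
]);

$build = $plugin_block->build();
\Drupal\Core\Cache\CacheableMetadata::createFromObject($plugin_block)
  ->merge(\Drupal\Core\Cache\CacheableMetadata::createFromRenderArray($build))
  ->applyTo($build);

$variables['contact_form'] = $build;

If the page still comes back UNCACHEABLE, check whether the webform's own render array carries max-age 0; in that case the usual workaround is to render it through a lazy builder/placeholder rather than inline.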

caching – How do I decide an initial in-memory cache size given my DB size and expected load throughput?

(Purely for learning purposes)

Say the DB contains 1 billion rows with 200 bytes per row = 200 GB of data.

The traffic at peak is 1000 requests/s, with each request asking for one DB row.

What cache size would I begin with to ease off the load on the DB? I realize that this is determined best empirically and can be tuned as time goes on.

Caches are usually not very large given memory constraints (unless you go for a distributed cache like Redis), so say the in-memory cache can't take more than about 200 MB, which accounts for well under 1% of the DB size and seems too small. The cache might just spend all its time 100% full with something like 95% misses, constantly evicting entries and caching new ones under a simple LRU scheme.

Perhaps there's no point bothering to cache anything in-memory here. In that case, how would you go about coming up with an initial cache size in a Redis cache?
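
As a rough starting point, a back-of-envelope calculation (plain Python, using the question's numbers) makes the trade-off concrete:

# Back-of-envelope cache sizing with the question's illustrative numbers.
row_size_bytes   = 200
total_rows       = 1_000_000_000           # ~200 GB of data
cache_budget_mb  = 200                     # assumed in-memory budget
requests_per_sec = 1_000

cache_entries = cache_budget_mb * 1024 * 1024 // row_size_bytes
coverage = cache_entries / total_rows
print(f"entries cached: {cache_entries:,} ({coverage:.2%} of rows)")

# If every row were equally likely to be requested, the expected hit rate
# would be roughly `coverage` (~0.1%) and caching would be pointless.
# With a skewed (e.g. Zipf-like) workload where a small hot set takes most
# of the traffic, the same 200 MB can already absorb the majority of reads.

In other words, the initial size matters less than how skewed the access pattern is: estimate the hot set (at 1000 req/s, at most 3.6 million distinct rows can even be touched per hour), size the cache toward that, and then tune based on the measured hit rate.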

caching – How is data cache implemented in this case?

I have downloaded the source code for GDBM. In one of its headers there is the following commented typedef:

/* We want to keep from reading buckets as much as possible.  The following is
   to implement a bucket cache.  When full, buckets will be dropped in a
   least recently read from disk order.  */

/* To speed up fetching and "sequential" access, we need to implement a
   data cache for key/data pairs read from the file.  To find a key, we
   must exactly match the key from the file.  To reduce overhead, the
   data will be read at the same time.  Both key and data will be stored
   in a data cache.  Each bucket cached will have a one element data
   cache.  */

typedef struct
{
  int     hash_val;
  int     data_size;
  int     key_size;
  char    *dptr;
  size_t  dsize;
  int     elem_loc;
} data_cache_elem;

From the given comments I understand that this data structure will somehow increase the speed of access to the hash table's elements by caching some of them. I just can't understand how it is done. Is there some special approach that allows the data to be explicitly cached, as the comments suggest? Or is it done by creating an ordinary static array? So far I can't work out the details from GDBM's sources themselves, because the project has lots of large source files that are almost impossible for me to follow.
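
As far as the mechanism goes, there is nothing special happening at the hardware or OS level; it is an ordinary in-memory structure. The comments describe a bucket cache evicted in least-recently-read order, where each cached bucket also carries a one-element key/data cache for the last pair read from it. A minimal Python sketch of that scheme (an illustration of the idea, not GDBM's actual code; read_bucket and read_pair stand in for the disk reads):

from collections import OrderedDict

class BucketCache:
    """LRU-style cache of buckets; each cached bucket keeps a one-element
    key/data cache so the last pair looked up in it can be returned
    without touching the file again."""

    def __init__(self, read_bucket, read_pair, capacity=16):
        self.read_bucket = read_bucket   # bucket_no -> bucket contents (disk read)
        self.read_pair = read_pair       # (bucket, key) -> data (disk read)
        self.capacity = capacity
        self.cache = OrderedDict()       # bucket_no -> [bucket, (key, data) or None]

    def fetch(self, bucket_no, key):
        entry = self.cache.get(bucket_no)
        if entry is None:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)      # drop least recently read bucket
            entry = self.cache[bucket_no] = [self.read_bucket(bucket_no), None]
        else:
            self.cache.move_to_end(bucket_no)       # mark bucket as recently read
        bucket, last_pair = entry
        if last_pair is not None and last_pair[0] == key:
            return last_pair[1]                     # one-element data cache hit
        data = self.read_pair(bucket, key)          # key and data read together
        entry[1] = (key, data)
        return data

In GDBM itself this appears to be done with plain C structs and arrays; data_cache_elem above looks like the per-bucket key/data slot, with hash_val, the sizes and elem_loc used to match the key and locate the element within its bucket.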

I will create eye-catching book covers for $10

I will create eye-catching book covers

I will do book cover design. I am a graphic designer, perfectly positioned to help you and your company design a book cover. Through experience gained over time, I can create a cover from scratch or turn your existing book cover into a Kindle cover or other formats within the shortest interval of time. I specialize in increasing your reputation and credibility through book cover design that will grab your readers' attention.

What you will get from this gig:

  • A correctly formatted book cover
  • A print-ready book cover that will stand out on any shelf
  • Designs that match your existing book covers
  • Delivery in JPG, PNG, and PDF (300 dpi / CMYK), high resolution
  • Unlimited revisions

You might want to stop reading and hit the order button now and let's get you on the bestseller list. And if my design doesn't suit you, you will be paid 40%.


THANK YOU


plugins – WordPress CDN is caching admin bar – Hide Admin Bar

I am using StackPath CDN to cache my WordPress site, and it is actually caching whole pages (HTML, CSS, scripts).

Now some non-admin users also see the admin bar. I have changed some of the CDN settings (e.g. origin cache-control behaviour, etc.), but none of them seems to work reliably.

Is there any permanent fix to this problem?

Thank you,
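
As a stopgap for the admin-bar symptom specifically, here is a hedged sketch using standard WordPress APIs (drop it in the theme's functions.php or a small plugin): force the bar off for anyone who cannot manage options, so a cached copy of a page never includes it:

// Never render the admin bar for non-administrators, so publicly cached
// pages cannot leak it to ordinary visitors.
add_filter('show_admin_bar', function ($show) {
    return current_user_can('manage_options') ? $show : false;
});

The more permanent fix is usually on the CDN side: bypass (or vary) the cache for requests that carry WordPress's logged-in cookies (wordpress_logged_in_*), so logged-in page views neither receive nor populate the publicly cached copies.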

Caching or in-memory table in Azure for performance

I am building an Angular web application that retrieves part of its data from an Azure SQL Database table via APIs developed in Azure Functions (with Azure API Management as the API gateway). The data in the table (30k records) does not change for at least 24 hours. The web app needs to display this data in a grid (table structure) with pagination, and users can apply filter conditions to retrieve and show a subset of the data in the grid (again with pagination). They can also sort the data on a column in the grid. The web app will be accessed by a few hundred users on their iPads/tablets over 3G internet speeds. Keeping latency in mind, I am considering one of these two options for optimum performance of the web app:

1) Cache all the records from the DB table in Azure Redis Cache, with a cache refresh every 24 hours, so that the application fetches the data to populate the grid from the cache, avoiding expensive SQL DB disk I/O. However, I am not sure how filtering based on a field value or a range of values would work against data in Redis. I have read about using the Hash data type for storing multi-valued objects and Sorted Sets for storing sorted data, but I am particularly unsure about filtering on a range of numeric values (similar to a BETWEEN clause in SQL) in Redis Cache; see the sketch after this question. Also, is it advisable at all to use Redis in this way for my use case?

2) Use In-Memory OLTP (a memory-optimized table for this particular DB table) in Azure SQL DB for faster data retrieval. This would allow the filtering and sorting requests from the web app to be handled with plain SQL queries. However, I am not sure if it's appropriate to use memory-optimized tables just to improve read performance (from what I have read, Microsoft suggests using them for insert-heavy transactional workloads).

Any comments or suggestions on the above two options or any other alternative way to achieve performance optimization?
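
Regarding the range-filter part of option 1: a common Redis pattern is to store each row as a Hash and maintain a Sorted Set per filterable numeric column, so a BETWEEN-style filter becomes ZRANGEBYSCORE. A rough sketch with the redis-py client (key and field names are made up, and the Azure Cache for Redis connection details are omitted):

import redis

r = redis.Redis()  # substitute your Azure Cache for Redis host/credentials

# Each row stored as a hash, keyed by its primary key.
r.hset("row:42", mapping={"id": 42, "name": "Widget", "price": 19.99})

# Secondary index: a sorted set whose score is the numeric column to filter on.
r.zadd("idx:price", {"row:42": 19.99})

# Equivalent of "WHERE price BETWEEN 10 AND 50":
for row_key in r.zrangebyscore("idx:price", 10, 50):
    print(r.hgetall(row_key))

ZRANGEBYSCORE also accepts LIMIT-style offset/count arguments, which maps naturally onto grid pagination. Whether this beats simply letting Azure SQL serve 30k mostly static rows (which will largely sit in its buffer pool anyway) is worth measuring before committing to either option.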

postgresql – Pre-caching an index on a large table in PostgreSQL

I have a table with about 10 million rows in it, with a primary key and an index defined on it:

    create table test.test_table(
        date_info date not null,
        string_data varchar(64) not null,
        data bigint,
        primary key(date_info, string_data));

    create index test_table_idx
        on test.test_table(string_data);

I have a query that makes use of test_table_idx:

select distinct date_info from test.test_table where string_data = 'some_val';

The issue is that the first time around it can take up to 20 seconds to run the query, and under 2 seconds on any subsequent run.

Is there a way to pre-load the entire index into memory, rather than have the DB load it on first access?
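
If the slow first run is just a cold cache, one option worth trying (assuming you can create extensions; pg_prewarm ships with PostgreSQL's contrib modules) is to load the index, and optionally the table, into shared buffers up front:

    create extension if not exists pg_prewarm;

    -- load the secondary index (and, if it fits, the table) into shared_buffers
    select pg_prewarm('test.test_table_idx');
    select pg_prewarm('test.test_table');

This only helps if shared_buffers (or at least the OS page cache) is large enough to keep the data resident; on recent PostgreSQL versions pg_prewarm can also re-warm the buffers automatically after a restart via its autoprewarm worker.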

websockets – Preferred framework for caching user query subscriptions?

Our users log in and request live data. That request comes in the form of a query like:

{
  "type": "weather",
  "location": "London"
}

The user will not only receive the current weather for London, but will also now be subscribed to live weather data for London.

Our application receives weather data for various cities. Whenever the application receives some data, it is checked against the active subscriptions; if it matches any of them, an updated response is sent to the user.

Apache Storm is used to process the weather messages and Cassandra is used to persist weather data. Which framework is best suited to keeping a cache of active user subscriptions, so that Storm can consult it while processing the data?
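
For the subscription cache itself, a common choice is a shared in-memory store such as Redis, which every Storm worker can reach, with subscriptions kept as small sets keyed by (type, location). A minimal sketch with the redis-py client (key names are made up):

import redis

r = redis.Redis()

def subscribe(user_id: str, sub_type: str, location: str) -> None:
    # e.g. "sub:weather:London" -> set of user/session ids
    r.sadd(f"sub:{sub_type}:{location}", user_id)

def unsubscribe(user_id: str, sub_type: str, location: str) -> None:
    r.srem(f"sub:{sub_type}:{location}", user_id)

def subscribers_for(sub_type: str, location: str) -> set:
    # Called from the Storm bolt when a message for this location arrives.
    return {m.decode() for m in r.smembers(f"sub:{sub_type}:{location}")}

If subscriptions need to survive restarts, they can additionally be persisted in Cassandra (which you already run) as the source of truth, with Redis acting purely as the fast lookup cache in front of it.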