template_redirect – I want to replace HTML after WordPress is completely loaded

The replacement is always done with the following code.
Filter hooks are an option, but some themes hard-code their markup, so filters can't reach it.
That's why I use the template_redirect action with an output buffer: it lets me replace even the hard-coded parts.

function my_replace_html( $content ) {
    $content = str_replace( '<div class="aaa">', '<div class="aaa bbb">', $content );
    return $content;
}

add_action( 'template_redirect', function () {
    ob_start( 'my_replace_html' );
} );

However, this does not work for mobile themes set with the “Multi Device Switcher” plugin.

With PC themes, the replacement works as usual.

HTML compression from plugins such as “Autoptimize” does get applied to the mobile theme set by the “Multi Device Switcher” plugin.

What’s wrong with my method?

How to leave or completely mute a channel in Discord for desktop?

I’m in several Discord channels, some potentially useful but full of irrelevant “@here” messages, and I would like to get rid of their notifications in my task bar. I’ve already muted them and set their notification setting to “Nothing”, yet I still get the red message count icon on my task bar button, even with the latest version of Discord. How can I completely stop a channel’s notifications, even if that means leaving the channel? My mouse cursor indicates that editing the channels is not allowed.

security – Is it possible to completely reset an iPhone so its software/firmware is guaranteed to be 100% factory fresh?

When purchasing a used iPhone with a completely unknown history (which includes the possibility of multiple previous owners), is it possible to reset it and be sure that all software and firmware on it is 100% identical to the factory image?

Please keep in mind that since the history is unknown, it’s possible the device was previously jailbroken/unlocked/rooted/etc.

If the answer varies depending on iPhone model, please indicate to which models your answer pertains.

I’m interested in answers for all iPhone models.

tcp – Chunked HTTP response does not come through completely

Consider the following setup

 +-------+            +--------+           +----------+
 |       |            |        |           |          |
 |       +----------->+        +---------->+          |
 |       | TCP TUNNEL |        |    HTTPS  |          |
 +-------+            +--------+           +----------+
  User                  SSH Server         Web server

The user performs an HTTP GET request to

The user's connection is TCP-tunneled to the SSH server, using SSH port forwarding.

The connection is then TCP-tunneled from the SSH server to the web server, which responds with Transfer-Encoding: chunked.

The user receives only the first 700 KB of data, then the connection hangs.

The same request works perfectly when the response uses a fixed Content-Length.


When performing the same HTTP GET request from the SSH server itself, the whole response is received.

When using another type of TCP tunneling (e.g. SOCKS from the user to the SSH server), the user again receives only 700 KB of data.

When performing an HTTP GET request through the tunnel and the server responds with a fixed-length body, all of the data is received.
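One way to narrow this down is to rule out the HTTP layer itself. A chunked response only ends when the terminal `0\r\n\r\n` chunk arrives; if the tunnel drops or stalls the tail of the stream, the client blocks forever in `read()` even though most of the body arrived. The following minimal local sketch (no tunnel involved, just a toy server on localhost) illustrates the framing the client is waiting for:

```python
import http.client
import socket
import threading

# A toy server sends a chunked HTTP response. The client only finishes
# reading once the terminal 0-chunk ("0\r\n\r\n") arrives -- if a tunnel
# swallows that tail, read() hangs, which matches the observed symptom.

def serve(conn: socket.socket) -> None:
    conn.recv(4096)  # consume the request; ignore its contents
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Transfer-Encoding: chunked\r\n\r\n"
        b"5\r\nhello\r\n"
        b"6\r\n world\r\n"
        b"0\r\n\r\n"  # terminal chunk -- without it the client blocks
    )
    conn.close()

srv = socket.create_server(("127.0.0.1", 0))
port = srv.getsockname()[1]
threading.Thread(target=lambda: serve(srv.accept()[0]), daemon=True).start()

c = http.client.HTTPConnection("127.0.0.1", port)
c.request("GET", "/")
body = c.getresponse().read()
print(body)  # b'hello world'
```

If a capture on the user side shows the data stopping mid-chunk (no final `0\r\n\r\n`), the problem is in the tunnel's handling of the stream, not in the HTTP endpoints.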

The riddle

Where should we start looking? Why does the HTTP GET work perfectly when running directly from the SSH server, but not when tunneled through it?

input – Why do console games not allow completely custom key bindings?

So I plugged in my PlayStation 2 for the first time in ten years and realized that I could not customize my key bindings, only pick from select pre-sets that the developers put in, which I thought was strange. So I checked through a few of my games on Xbox 360, PlayStation 2, and Nintendo 64 and realized that none of them had customizable controls, only control schemes (if even that)!

Why is this? Is it a limitation of the consoles, or is it policy on Sony’s and Microsoft’s part that games should only offer select pre-sets rather than completely customizable key bindings? The same games have customizable key bindings on PC and have no issues binding to my Xbox 360 PC controllers.

I don’t own any more modern consoles, so maybe it’s different nowadays. If so, why did the old consoles have these limitations that the new ones do not have?

reference request – Completely symmetric (economy-like) environment–agent reinforcement learning that improves both the agent and the environment?

I have an idea about completely symmetric reinforcement learning that improves both the agent and the environment. Is this idea new, or are there references in the literature? My question is about the references and about the academic term for this symmetric-RL idea.

The usual setting is that an agent nn observes the environment state s, selects an action a = nn(s), and submits it to the environment, which returns a reward and the next state: (s’, r) = env(s, a). The agent uses this reward to update itself: nn = F(nn, r). After some training with a teacher environment env, the agent can connect to another environment env_2 (which, per the usual machine-learning assumptions, should be distribution-wise similar) and execute real actions and earn real rewards.

So the agent is only as good as its teacher environment. The core question is: can the agent send a reward back to the environment as gratitude for good teaching (directly, or through interaction with other agents)? Or can the agent sue the environment and ask for compensation for damage (or publicly announce that the environment is bad and harm it in other ways)?

RL has the quite common notion of sparse reward. This sparse-reward notion can be applied to delayed awards and compensation requests toward the environment as well.

But generally the scheme is that the agent not only sends an action to the environment; more generally, it can send both a monetary reward and a non-monetary reward (some extra information, e.g. the agent’s state). So a completely symmetric RL scheme emerges:

  1. (agent-state, agent-issued-pay-for-teaching, action) <- agent(environment-state, reward, additional-action-like-info-from-environment)
  2. (environment-state, reward, additional-action-like-info-from-environment) <- environment(action, agent-issued-pay-for-teaching, agent-state)

Essentially, action can incorporate (agent-state, agent-issued-pay-for-teaching) as arguments, and environment-state can incorporate the additional action-like info from the environment. But such an explicit specification may make the symmetric-RL model more interesting and more concrete for research.
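The two-way exchange above can be sketched in a few lines. This is a purely illustrative toy (the class name `Party`, the `adaptability` knob, and the placeholder update rules are all hypothetical, not an established algorithm): both sides are structurally identical learners, and each exchange carries a (state, reward/pay, extra-info) triple in both directions.

```python
# Illustrative sketch of the symmetric scheme above: both parties are
# learners, and each step exchanges (state, reward, info) both ways.
# The update rules are trivial placeholders, not real RL algorithms.

class Party:
    """An agent or an environment -- structurally identical here."""
    def __init__(self, adaptability: float):
        self.skill = 0.0
        self.adaptability = adaptability  # 0.0 = classic immutable env

    def step(self, incoming_state, incoming_reward, incoming_info):
        # Placeholder update: skill grows with the reward received.
        self.skill += self.adaptability * incoming_reward
        outgoing_state = self.skill
        outgoing_reward = 0.1 * self.skill       # "pay for teaching"
        outgoing_info = {"skill": self.skill}    # action-like extra info
        return outgoing_state, outgoing_reward, outgoing_info

agent = Party(adaptability=1.0)
environment = Party(adaptability=0.2)  # partly mutable environment

state, reward, info = 0.0, 1.0, {}
for _ in range(10):
    # Agent's turn: consumes the environment's triple, emits its own.
    a_state, a_pay, a_info = agent.step(state, reward, info)
    # Environment's turn: consumes the agent's triple symmetrically.
    state, reward, info = environment.step(a_state, a_pay, a_info)

print(agent.skill > 0 and environment.skill > 0)  # True: both improved
```

Setting `adaptability=0.0` for one party recovers the usual asymmetric setting with an immutable environment, which is the special case the question contrasts against.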

One can go further and research the information and economic dynamics of connected symmetric agents–environments, or even more general multi-agent systems. One can even posit a super-symmetry between complexity/information on one side and economic value on the other.

I have read a bit about reinforcement learning in multi-agent systems, and that formulation is a bit different: there is still one essentially immutable environment and multiple agents trying to cooperate and solve it. In my proposed scheme the immutable environment is just one special agent, and there can be different environments with differing degrees of immutability and adaptability/learning potential.

My question is about references: what is such a symmetric reinforcement learning scheme called in academia, and what are the important references for it? Thanks!

Haar measure on compact group completely positive

Is it true that the Haar measure $\mu$ on a compact group $G$ is always completely positive, i.e. every nonempty open set has positive measure? I think I have a very simple proof of it, but I tried Googling this fact and couldn’t find any mention of it, so I’m second-guessing my argument, which is as follows:

Let $G$ be a compact group, and $U \subseteq G$ a nonempty open subset of $G$. Then $\mathscr{U} = \{ g U : g \in G \}$ is an open cover of $G$, so there exist $g_1, \ldots, g_n \in G$ such that $G = \bigcup_{j = 1}^n g_j U$. Then $1 = \mu(G) \leq \sum_{j = 1}^n \mu(g_j U) = n \cdot \mu(U)$, using the left-invariance of $\mu$ in the last step. Thus $\mu(U) \geq \frac{1}{n} > 0$.
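Written out as a single display, the chain uses only subadditivity and left-invariance of the Haar measure:

```latex
\[
1 = \mu(G)
  \le \sum_{j=1}^{n} \mu(g_j U)  % subadditivity over the finite subcover
  = \sum_{j=1}^{n} \mu(U)        % left-invariance: \mu(gU) = \mu(U)
  = n \, \mu(U),
\qquad\text{hence}\qquad
\mu(U) \ge \tfrac{1}{n} > 0.
\]
```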

How to disable DWM completely on Windows 10?

I want to disable the Desktop Window Manager completely on Windows 10 Pro. This method looked promising, but doesn’t work: is there a way to disable DWM on Windows 10 1903+?

I edited the registry value for a particular game, then opened the game and monitored DWM’s CPU usage. It was still using CPU while in-game, so that method clearly doesn’t work. DWM is known to cause immense input lag and was disabled by professional gamers on Windows 7.

Please, is there any known way to disable DWM completely that doesn’t break your PC and is palatable?

I already tried the method where you have to close Explorer and suspend winlogon; it is laborious and I had mixed results. I don’t remember everything else I’ve tried over the years, but nothing has worked so far.