calculus and analysis – Series of inverse functions, unclear numerical constant

I was answering another question here and came up with this simple illustrative example that should have an analytic solution. Indeed it does, but I do not understand it. In particular, where is the 85 coming from?

g[x_]:=BesselJ[0,x]
f[x_]:=Exp[x]
Series[f[InverseFunction[g][y]],{y,0,0}]
Out[1]= E^-BesselJZero[0,85]+O[y]^1

functional programming – Unclear as to the informal definition of foldr

I’ve encountered the informal definition of foldr in a couple of books.

I’ve attached an image from Bird and Wadler’s “Introduction to Functional Programming” (1988), but I’ve seen the same informal definition in Hutton’s “Programming in Haskell” (2nd ed).

In the attached image I’m referring to the 1st and 3rd lines, and what puzzles me is why there is an ellipsis (the 3 dots) to the right of (f x_n a) in the 1st line, and to the right of (x_n # a) in the 3rd line (I’m using ‘#’ in place of the operator that is drawn as a circle with a ‘+’ in it).

If we look at the last 4 lines of the attached image, we can see that after the operator is applied to the initial value ‘a’, all that follows to the right are the terminating closing parentheses.
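Here is how I understand the expansion, written out in Python rather than the book's notation (a minimal sketch of my own, with ‘#’ replaced by an explicit function f):

# A right fold written out recursively, so the nesting of parentheses is explicit.
def foldr(f, a, xs):
    """foldr f a [x1, x2, ..., xn] = f(x1, f(x2, ... f(xn, a) ...))"""
    if not xs:
        return a
    return f(xs[0], foldr(f, a, xs[1:]))

# Example: foldr (+) 0 [1, 2, 3] expands to 1 + (2 + (3 + 0)) = 6.
# The innermost application is (f x_n a), and nothing follows it except
# the closing parentheses.
print(foldr(lambda x, acc: x + acc, 0, [1, 2, 3]))  # prints 6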

Many thanks,
Sarel

P.S.
This is a question about functional programming, which is why I’m posting it in “Software Engineering Stack Exchange” rather than in “Math Exchange”. However, I was in need of some math formatting (such as the circle with a ‘+’ in it, or ‘x’ with a subscript ‘n’) – is there a way to do that in “Software Engineering Stack Exchange”? Should I have posted this elsewhere?

Informal definition of foldr in Bird and Wadler (1988)

vmware esxi – Unclear why crash consistent backups are unsafe for MySQL

My understanding of the InnoDB engine is that the database is always recoverable after a crash, due to the internal architecture of the engine itself. That is, InnoDB can correctly recover from a crash-consistent state, e.g. if the power is pulled.

On the other hand, when snapshotting a virtual machine in vSphere, common wisdom on the internet says that the backup will be crash consistent but may cause corruption of the InnoDB database unless the virtual machine is quiesced. This seems inconsistent with the above.

tl;dr: is InnoDB truly able to recover from a crash-consistent state, and if so, shouldn’t snapshotting a VM without quiescing be a safe backup method?

Distributed Computing – Unclear cloud-based parts of the web crawler architecture

I am currently researching distributed web crawler architectures and came across this academic conference paper describing the distributed cloud-based crawler architecture and implementation details using the Azure cloud platform: UCMERCED – cloud-based web crawler architecture PDF

I am particularly interested in Section IV: A CLOUD-BASED WEB CRAWLER of the paper (the architectural presentation). After reading the paper several times, I still couldn't get the whole picture. The paper's footnote states that the open-source implementation of the main web crawler component, called CWCE (Cloud-based Web Crawler Engine), should be available at:
http://cloudlab.ucmerced.edu/cloud-based-web-crawler. However, the link is no longer available, and the Internet Archive has no helpful snapshots of it.

The overall architecture of the web crawler from the paper:

Web crawler architecture from the paper

Agent registrar

As described in the paper, this component manages the list of currently active crawler agents in the system, A1 to An (implemented as a SQL Server database table). The table structure could look something like this:
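The paper only shows this structure as a screenshot, so the following is purely my own guess at a plausible shape, sketched as a Python dataclass rather than the actual SQL schema; every field name here is an assumption:

import dataclasses

@dataclasses.dataclass
class AgentRegistration:
    agent_id: str        # e.g. "A1" ... "An"
    partition_zone: str  # host-based partition key zone the agent is responsible for
    is_alive: bool       # used later to compare Aalive against the configured Max
    last_heartbeat: str  # timestamp of the agent's last check-in (hypothetical field)

# Hypothetical contents of the registrar table:
registrar = [
    AgentRegistration("A1", "example.com", True, "2021-01-01T10:00:00Z"),
    AgentRegistration("A2", "another-host.org", True, "2021-01-01T10:00:05Z"),
]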

CrawlerQueue

Azure Storage queue that temporarily holds the list of URLs to be visited by the appropriate agent (the one whose assigned partition key zone matches the URL's host).

CrawlerLog

Azure Storage table (NoSQL database) that permanently stores data about visited pages. It is well suited to the unstructured nature of arbitrary websites:

Static parts of the Azure table entry

Sample entries in the Azure table

As you can see, the partitioning is based on the website's host. This ensures that only the agent with the matching partition zone processes a given page.
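To make the partitioning concrete, here is a hypothetical example of what a single CrawlerLog entity might look like; the real column names are only visible in the paper's screenshots, so everything below is my assumption:

# Hypothetical CrawlerLog entity, expressed as a plain Python dict.
entity = {
    "PartitionKey": "example.com",         # the URL's host, so one agent owns the whole host
    "RowKey": "http://example.com/page1",  # the crawled URL itself
    "Visited": "True",                     # the flag checked during initialization
    "CrawlDate": "2021-01-01T10:00:00Z",   # assumed extra metadata
}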

CrawlerArchive

Azure Blob storage – stores multimedia page files (images, PDF documents, videos).

CWCE

The main component of the described web crawler. It is responsible for starting the entire crawling process.

The paper describes the presented architecture as comprising two main steps:

  1. An initialization step
  2. An infinite looping process that gets messages from the queue, crawls the URLs, and stores the results in the Blob storage and the Azure table

Initialization steps:

In the initialization step of the CWCE, the first agent is created and a URL is retrieved from the DNS resolver. If the retrieved URL is not in the queue and has never been visited (the value of the visited field for that URL in the table is "False"), then the URL is added to the queue and to the table as a non-visited URL (the visited field of the URL in the table is set to "False"). If the URL has already been crawled (the visited field of the URL in the table is "True"), the CWCE ignores the URL, retrieves another URL from the DNS resolver, and repeats all of these steps.
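As a minimal sketch of how I read this initialization step (the dns_resolver, queue and table objects and the "Visited" field name are my own placeholders, not taken from the paper or its lost implementation):

def initialize(dns_resolver, queue, table):
    # Keep pulling candidate URLs from the DNS resolver.
    for url in dns_resolver:
        entry = table.lookup(url)
        if entry is not None and entry["Visited"] == "True":
            # Already crawled: ignore it and try the next URL.
            continue
        if not queue.contains(url):
            # New, non-visited URL: enqueue it and record it in the table.
            queue.enqueue(url)
            table.upsert(url, {"Visited": "False"})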

Looping (crawling) process:

  1. Get the top URL u_i from the queue.

  2. If an agent is already registered for the partition of the retrieved URL (an agent
    in the same zone as the URL's host), then the retrieved URL is ignored
    (because that corresponding agent crawls the URL). Otherwise, the CWCE
    checks the maximum number of agents (Max) and the number of live
    agents (Aalive) in the Agent Registrar as follows: if Max = Aalive,
    then the CWCE waits until an agent slot is freed up again and
    Max > Aalive. Otherwise, the CWCE creates a new agent for the
    partition of the URL and removes the retrieved URL from the queue.
    (A rough sketch of this loop follows below.)
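Here is the rough sketch of the looping step as I understand it; the queue/registrar objects, max_agents and partition_zone_of are my own placeholders, and the waiting is simplified to a sleep:

import time
from urllib.parse import urlparse

def partition_zone_of(url):
    # Assumption: the partition key zone is simply the URL's host.
    return urlparse(url).netloc

def crawl_loop(queue, registrar, max_agents):
    while True:
        url = queue.peek()                      # 1. top URL u_i from the queue
        zone = partition_zone_of(url)
        if registrar.has_agent_for(zone):
            # 2a. An agent already owns this partition, so the CWCE ignores
            # the message and lets that agent handle the URL.
            continue
        # 2b. No agent for this partition yet: respect the Max / Aalive limit.
        while registrar.alive_count() >= max_agents:
            time.sleep(1)                       # wait until an agent slot frees up
        registrar.spawn_agent(zone)             # new agent for this partition
        queue.remove(url)                       # the CWCE removes u_i from the queue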

So, I would like to clarify the points that are most unclear to me:

  1. During the initialization process, the CWCE engine receives initial URLs from the DNS resolver component and, after checking for duplicates, places them in the queue and table stores. Can this be thought of as the traditional seed URL list component, in web-crawling terms?

  2. The loop step specifies that if an agent's zone matches the URL's host, the message is ignored because the corresponding agent takes care of that URL. So it seems to me that the entire looping process is performed by the CWCE itself, not by the agents? Shouldn't the message instead be processed when the agent's zone matches the URL in the queue message?

  3. The looping process also says that the CWCE itself controls the management of active agents (adding new agents for a newly detected URL host partition key). So the CWCE would somehow need to see the CrawlerQueue message information (the URL), which implies communication and information exchange between the CWCE and the agents. This is confusing, since the paper also says that agents are responsible for spawning new agents when the partition key doesn't match. The loop section, however, indicates that the CWCE performs the entire agent management process.

I hope someone has insight into the looping process as written and can explain it to me; that would ideally answer this question. The difficult part is that the paper doesn't clearly divide the responsibilities between the agent component and the CWCE component – they seem to be mixed.

Maybe someone has done similar research on this paper and has a copy of the open-source resource that is now dead (404)? It would definitely answer most of the questions asked.

Unclear if Gray Card can be used for post-production exposure?

In the 1890s, two photographers developed a method for precisely exposing films and papers and for measuring the blackening that results when the material is developed. The measure they used is called "density". This is a numeric value that expresses how much light the darkened material holds back; its inverse, how much light the darkened material lets through, is the percent transmission.

As you know, films and papers reproduce a range of tones. The center of this range is a battleship gray. A gray card is a card that reflects 18% of the incident light.

In the mid-1930s, Messrs. Jones and Condit of the Kodak laboratory found that, statistically, a typical sunlit scene integrates to a reflectance of about 18%. Around this time, the Western Electric Company launched the first photoelectric exposure meter. Kodak's labs published a recommendation: place a Kodak film box in the scene. As it happens, the box reflects about 18% of the ambient light. Measure the light reflected from the top of the box and use this value to set your exposure.

In 1941, Ansel Adams, a well-known landscape photographer, and his friend Fred Archer, the publisher of a photo magazine, jointly released the Zone System, which gives photographers a method for fine-tuning exposure. Their Zone System revolves around the use of an 18% gray card (battleship gray). This card replaced the Kodak box top, and the 18% gray target became the de facto standard. Nowadays, film and paper sensitivity, as well as digital sensors, are calibrated, and film and digital ISO are determined, using the 18 percent gray card.

This tone is special because, when photographing an object that reflects 18%, the resulting correctly exposed and processed film image of that object measures 0.75 transmission density units. If this film is printed and processed correctly, the object also measures 0.75 units of reflection density on the print. The special property is that the object, the film and the print all have the same density when exposure, processing and printing are perfectly matched.

Again, the density value of 0.75 as the midpoint of the film and paper scale is no accident. This value is the de facto standard used to calibrate the instruments that measure photographic film and paper, and it also carries over to the calibration of light meters.
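The link between 18% reflectance and a density of roughly 0.75 is just the common logarithm; a quick check (my own arithmetic, not part of the original answer):

import math

reflectance = 0.18                  # 18% gray card
density = -math.log10(reflectance)  # density = log10(1 / transmittance or reflectance)
print(round(density, 2))            # 0.74, i.e. approximately the 0.75 quoted above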

In 8-bit digital terms, this midpoint scales to a value of about 128. We can also call this value Zone V in the Ansel Adams Zone System.

If the same scheme is used, a picture of a gray card would probably have a value of about 128 if properly exposed.

Are definitions of functions that are common in literature unclear?

Am I just not aware of some implicit rules, or are most of the definitions of functions found in the literature (particularly in physics) ambiguous? I am especially interested in the ambiguity of the equals sign (see the explanations below), i.e. is it really ambiguous, or am I missing something?

The following definition of a function seems to me at least ambiguous: y = 5. This is because the definition neither explicitly says nor implies that it depends on a variable. I admit that such a definition is usually given in a context; however, one can argue that y = 5 does not depend on x and represents only a single point at the mark 5 on the y-axis. The example is perhaps too trivial to show the ambiguity.

Consider a function in R^2 given as follows: f = <2t, sin(t)>. The usual assumption is that it is a vector function in 2D that depends only on t (i.e. f(t) = <2t, sin(t)>) and therefore represents a curve (a set of points in 2D). However, since no dependencies are specified, it may also be that it depends on other variables, e.g. on x, y and t, which would mean that the function f(x, y, t) represents, for example, a time-dependent 2D vector field (an infinite set of vectors for any given t). Similarly, t might not be a parameter but one of two space coordinates, i.e. f(t, u), which would mean that it represents a constant 2D vector field (an infinite set of vectors).

In addition, the equals sign can be ambiguous when "defining" functions. Consider a generalized position vector in R^3, e.g. r = <x, y, z>. When I say G = r, am I defining a vector field G, or am I simply defining another position vector G? To my taste, G(r) = r would rather indicate the definition of a vector field, whereas G = r would rather define another position vector. Or, for some obscure reason, it may be assumed that only one "generalized" position vector exists ("the" infinite set of vectors pointing from the origin to all possible locations in R^3); if r is defined as such a generalized position vector, do you then automatically define a vector field every time you say u = r?
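As a purely illustrative contrast (my own rendering, not taken from any particular textbook), the two readings of the same shorthand could be written out as:

\[
  f(t) = \bigl(2t,\ \sin t\bigr) \quad \text{(a curve in } \mathbb{R}^2\text{)}
  \qquad \text{vs.} \qquad
  f(x, y, t) = \bigl(2t,\ \sin t\bigr) \quad \text{(a time-dependent field, constant in } x, y\text{)}
\]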

Blockchain – The concept of Block Weight and Segwit is still unclear

Take a look at this: https://blockchair.com/bitcoin/block/0000000000000000cbbceb342e07071f9621607e044ec909aa86fcdf88e8a

Size = 1,158,038 bytes
Weight units = 3,992,825 WU

What does this mean? Size is what you probably understand well – if you have a file on your hard disk, its size is measured in bytes, and that's exactly what the size is here. It is the number of bytes you would need to store such a block in memory or on disk. It is the sum of the non-witness data (nWD) and the witness data (WD). Let's call this the absolute size (AS) for clarity.

There is also something called the virtual size (VS). This is a new concept in which the block is measured in new units (vbytes). It is calculated as (absolute size of the non-witness data) + (absolute size of the witness data) / 4.

The block weight (BW) is measured in weight units and calculated as (absolute size of the non-witness data) * 4 + (absolute size of the witness data). This is equivalent to what you mentioned in your post, (tx size with witness data removed) * 3 + (tx size), because the transaction itself contains both witness and non-witness data.
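In code form, the two formulas above are simply (a small sketch, nothing protocol-specific beyond the arithmetic):

def block_weight(nwd_bytes, wd_bytes):
    # Weight in weight units (WU): non-witness bytes count 4x, witness bytes 1x.
    return 4 * nwd_bytes + wd_bytes

def virtual_size(nwd_bytes, wd_bytes):
    # Virtual size in vbytes: non-witness bytes count fully, witness bytes at 1/4.
    return nwd_bytes + wd_bytes / 4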

Now we know that:

AS = 1,158,038 bytes = nWD bytes + WD bytes
BW = 3,992,825 WU = nWD * 4 weight units + WD weight units

What are the limits in the protocol? There is no longer a block size limit as such; there is only a limit on the block weight, set at 4,000,000 WU. It follows that in a block without Segwit txs every byte weighs 4 WU, which is why such a block is limited to an absolute size of 1 MB.

Blocks containing Segwit txs may be larger, and there are theoretical calculations suggesting an absolute block size of about 3.7 MB can be achieved. But this theoretical limit is exactly that – theoretical. In practice, a block's absolute size does not come close to that limit, even if the block is filled with Segwit txs.

At present, most blocks contain both kinds of txs – Segwit and legacy – so full blocks usually have sizes in the range of 1–2.2 MB. A block is full when its weight is very close to the protocol limit of 4,000,000 WU.

This implies that it is incorrect to assume that a 1.1 MB block consists of 1,000,000 bytes of nWD with the remainder being WD. That would not work, because the weight of 1,000,000 nWD bytes alone would already be 4,000,000 WU, so adding the weight of the WD would put us over the limit. This is not possible.

So if we know that AS = 1,158,038 bytes, then it is rather 944,929 bytes of nWD weighing 3,779,716 WU, with the remaining 213,109 bytes of WD weighing 213,109 WU. This gives a total block weight of 3,779,716 + 213,109 = 3,992,825 WU.
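This split can be derived from the two published numbers alone; a quick sketch of the arithmetic, using AS = nWD + WD and BW = 4 * nWD + WD, which gives nWD = (BW - AS) / 3:

AS = 1_158_038   # absolute size in bytes
BW = 3_992_825   # block weight in WU

nWD = (BW - AS) // 3
WD = AS - nWD
print(nWD, WD)             # 944929 213109
print(4 * nWD + WD == BW)  # True: the weights add back up to the block weight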

We can now calculate VS:

VS = nWD vbytes + WD / 4 vbytes = 944,929 + 213,109 / 4 = 998,206.25 vbytes

The virtual size (VS) cannot be larger than 1,000,000 vbytes. So that's what it means when someone says the new block size is still limited to 1 MB – in fact, it means 1 million vbytes.

Active Directory forest recovery – Doubt about an unclear sentence in the MS documentation

I'm trying to apply and test the best practices for fully recovering an Active Directory forest, as described in "Best Practices for Schema Update Implementation, or How I Learned to Stop Worrying and Love the Restore" and explained in detail in the "AD Forest Recovery Guide".

However, this note is very unclear to me: "Caution: Perform an authoritative (or primary) SYSVOL restore on only the first domain controller that is restored to the forest root domain. Incorrectly performing SYSVOL primary restore operations on other domain controllers causes replication conflicts of SYSVOL data." (https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/manage/ad-forest-recovery-perform-initial-restore)

What are the reasons why SYSVOL should not be authoritatively restored once per domain, but only once per forest (on a DC in the root domain)? Isn't SYSVOL replicated only at the domain level? So wouldn't it be correct to perform an authoritative restore of one SYSVOL per domain of the forest (that of the restored domain controller in each domain) instead of only in the root domain? Shouldn't the conflict only occur if I mark the SYSVOL folder as authoritative on more than one domain controller within the same domain?
Is it just a Microsoft typo (less likely, but possible), or am I missing something (certainly more likely)?

Thanks, Diego

How can I make data unclear?

Recently I deleted my data because I could not download anything. Nothing bad happened, no crashes or anything like that. But there is a problem: all my in-app purchases for games and apps seem to have stopped working. I also did not get my money back; it's as if I'm being asked to pay again for them to work. Please help me. 🙁