## Are there occurrences of “stacks” outside algebraic geometry?

Are there any occurrences (and if so, what are they) of the notion of “stack” outside algebraic geometry?

In most references, the notion of a stack is introduced in the following steps:

1. Fix a category $$\mathcal{C}$$.
2. Define the notion of a category fibered in groupoids/fibered category over $$\mathcal{C}$$; this is simply a functor $$\mathcal{D}\rightarrow \mathcal{C}$$ satisfying certain conditions.
3. Fix a Grothendieck topology on $$\mathcal{C}$$; this associates to each object $$U$$ of $$\mathcal{C}$$ a collection $$\mathcal{J}_U$$ (that is, a collection of collections of arrows whose target is $$U$$) required to satisfy certain conditions.
4. To each object $$U$$ of $$\mathcal{C}$$ and each cover $$\{U_\alpha\rightarrow U\}$$, one associates what is called the descent category of $$U$$ with respect to the cover $$\{U_\alpha\rightarrow U\}$$, usually denoted $$\mathcal{D}(\{U_\alpha\rightarrow U\})$$. It is then observed that there is an obvious way to produce a functor $$\mathcal{D}(U)\rightarrow \mathcal{D}(\{U_\alpha\rightarrow U\})$$, where $$\mathcal{D}(U)$$ is the fiber category over $$U$$.
5. A category fibered in groupoids $$\mathcal{D}\rightarrow \mathcal{C}$$ is then called a $$\mathcal{J}$$-stack (or simply a stack) if, for each object $$U$$ of $$\mathcal{C}$$ and each cover $$\{U_\alpha\rightarrow U\}$$, the functor $$\mathcal{D}(U)\rightarrow \mathcal{D}(\{U_\alpha\rightarrow U\})$$ is an equivalence of categories.
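For concreteness, the objects of the descent category in step 4 are usually spelled out as descent data (a standard formulation, assuming $$\mathcal{C}$$ has the relevant fiber products; notation varies between references):

```latex
% An object of the descent category D({U_alpha -> U}):
% objects over the pieces of the cover, together with gluing
% isomorphisms on double overlaps, satisfying the cocycle
% condition on triple overlaps.
\bigl(\{\xi_\alpha\},\{\phi_{\alpha\beta}\}\bigr),\qquad
\xi_\alpha \in \mathcal{D}(U_\alpha),\qquad
\phi_{\alpha\beta}\colon \operatorname{pr}_2^{*}\xi_\beta
  \xrightarrow{\;\sim\;} \operatorname{pr}_1^{*}\xi_\alpha
  \ \text{in } \mathcal{D}(U_\alpha \times_U U_\beta),

\operatorname{pr}_{12}^{*}\phi_{\alpha\beta}\circ
\operatorname{pr}_{23}^{*}\phi_{\beta\gamma}
  = \operatorname{pr}_{13}^{*}\phi_{\alpha\gamma}
  \quad\text{on } U_\alpha \times_U U_\beta \times_U U_\gamma .
```

The functor $$\mathcal{D}(U)\rightarrow \mathcal{D}(\{U_\alpha\rightarrow U\})$$ simply restricts an object over $$U$$ to each $$U_\alpha$$, with the canonical comparison isomorphisms on overlaps.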

None of the above five steps has anything to do specifically with the setup of algebraic geometry. But immediately after defining the notion of a stack, we restrict ourselves to one of the following categories, with an appropriate Grothendieck topology:

1. Fix a scheme $$S$$ and consider the category $$\text{Sch}/S$$ of schemes over $$S$$.
2. The category $$\text{Man}$$ of manifolds.
3. The category $$\text{Top}$$ of topological spaces.

Stacks over these categories occur with decreasing frequency, in that order. Unfortunately, I myself have seen exactly four research articles (Noohi, Foundations of topological stacks I; Carchedi, Categorical properties of topological and differentiable stacks; Noohi, Homotopy types of topological stacks; Metzler, Topological and smooth stacks) discussing stacks over the category of topological spaces.

So, the following question arises:

Are there any occurrences (and if so, what are they) of the notion of “stack” outside algebraic geometry, other than those mentioned above?

## algorithms – Counting occurrences of word in a text

Let’s say I have a long text of 1M words and I would like to create a table of all the words ordered by the number of occurrences in the text.

One approach would be to populate a dynamic array with each word and use linear search to count the occurrences in $$O(n^2)$$, then sort the array by occurrences in $$O(n \log n)$$.

Another approach would be to use a priority queue and a trie. Insertion into the priority queue is $$O(\log n)$$ and building the trie is $$O(n)$$. But traversing the trie to build the priority queue is somewhat difficult to evaluate.

Eventually, using a hash map seems to be the best solution, though computing the hash costs a little time even if it is only a constant. In this case you have $$n$$ insertions/lookups in $$O(1)$$, then a final sort of the hash map by occurrences in $$O(n \log n)$$.

So it is clear that the first approach is the worst and the last the best. But how can I evaluate the complexity of the second one?
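To make the hash-map approach concrete, here is a minimal Java sketch (the class name `WordCount` and the `\W+` tokenization are my illustrative assumptions, not part of the question):

```java
import java.util.*;
import java.util.stream.*;

public class WordCount {
    // Hash-map approach: O(n) counting of n words, then an
    // O(k log k) sort of the k distinct words by frequency.
    static List<Map.Entry<String, Long>> countWords(String text) {
        Map<String, Long> counts = Arrays.stream(text.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
        // Sort entries by count, most frequent first.
        return counts.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        countWords("the cat and the dog and the bird")
                .forEach(e -> System.out.println(e.getKey() + " " + e.getValue()));
    }
}
```

Here the `HashMap` built by `groupingBy` does the $$O(1)$$ insert/lookup work, and the only superlinear cost is the final sort over distinct words.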

## Count occurrences of a specific word in Google Spreadsheet

I have some cells with text. I need to count the occurrences of a specific word from those cells.
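One common approach (assuming the text sits in `A1:A10` and the target word in `B1`; adjust the ranges to your sheet) counts occurrences by measuring how much shorter each cell gets when the word is removed:

``````=SUMPRODUCT((LEN(A1:A10)-LEN(SUBSTITUTE(A1:A10,B1,"")))/LEN(B1))
``````

Note that `SUBSTITUTE` is case-sensitive, and this counts substring matches (so searching for "cat" also matches "catalog"); wrap the search term in delimiters if you need whole-word matches.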

## Displays the number of occurrences for each unique value in SQL Server

The table name is test and the column name is keyPhrase. The table data is listed below

``````keyPhrase
['hello how are you']
['you']
['Hello']
['are you']
``````

I want the result as below

``````hello 2
how   1
are   2
you   3
``````
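On SQL Server 2016 or later, one sketch (assuming `keyPhrase` holds plain space-separated text once any `['...']` wrappers are stripped; `STRING_SPLIT` is the built-in splitter) would be:

```sql
-- Split each phrase into words, fold case, and count per word.
SELECT LOWER(value) AS word, COUNT(*) AS occurrences
FROM test
CROSS APPLY STRING_SPLIT(keyPhrase, ' ')
GROUP BY LOWER(value);
```

Grouping on `LOWER(value)` makes "Hello" and "hello" count together, matching the desired output.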

## Count occurrences based on criteria on all sheets

Is there a more elegant, more programmatic way to count across all sheets based on some criterion, e.g. whether an item was checked?

For example, with the attached sheets, the result would be:

``````NAME
Apples:     1
Oranges:    2
Watermelon: 1
``````

The best formula I could find is:

``````=COUNTIFS(
{
'2020'!A:A;
'2019'!A:A;
'2018'!A:A
}
,A1,
{
'2020'!B:B;
'2019'!B:B;
'2018'!B:B
}
,"✓"
)
``````

## google sheets – Count occurrences of unknown value

I'm really new to Google Sheets and formulas in general.
I know that you can count the occurrences of a value if you know what you are counting, e.g. how many times "red" appears in a column, since you can search for that word. I just want to count how many times each thing appears.
For example:
``````Sarah
failure
Mike
Dolly
Sarah
Frances
Frances
Jemima
Sarah
``````

My list will keep growing, and I don't know what name will come next, nor will I track the names that are added, so I can't tell it to search for a specific name. Is there a way to tell Sheets to maintain, track, and update a list of occurrence counts when a new name is added to the list?
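One way (assuming the names are in column `A`; `QUERY` is a built-in Google Sheets function) is to let Sheets build the distinct list and its counts for you, updating automatically as rows are added:

``````=QUERY(A:A, "select A, count(A) where A is not null group by A order by count(A) desc", 0)
``````

This needs no prior knowledge of which names will appear, since `group by A` produces one row per distinct value.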

## Representation Theory – Number of occurrences of subgraphs as a unique identifier

Given $$q \in \mathbb{N}$$, let $$B_q$$ be a sequence of all (pairwise non-isomorphic) connected graphs with at most $$q$$ vertices. Now, for a given connected graph $$G$$, define the signature of $$G$$, $$\mathrm{sig}_q(G)$$, as an integer vector of length $$|B_q|$$ such that $$\mathrm{sig}_q(G)(i)$$ is the number of occurrences of the graph $$B_q(i)$$ in $$G$$.

The question is: how large must $$q$$ be so that every graph with $$n$$ vertices is uniquely determined by $$\mathrm{sig}_q(G)$$?

I thought it would be enough to take $$q$$ close to the diameter of $$G$$; however, the following counterexample shows two graphs of diameter 4 that have the same $$\mathrm{sig}_4$$ but are not isomorphic.

[Figure: two non-isomorphic graphs with the same signature]

## Construct an NFA over {0,1}* so that each string contains exactly two occurrences of 10

NFAs are always a bit trickier for me than DFAs. Anyway, this problem seems easy, but I just can't figure it out. I tried, but my solution does not seem right.
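While I can't draw the automaton here, a tiny helper for the target language can help debug a candidate NFA: a string should be accepted exactly when this count equals 2 (the class and method names are mine, purely illustrative):

```java
public class TenCounter {
    // Count occurrences of the substring "10" in a binary string,
    // i.e. positions i with s[i] == '1' and s[i+1] == '0'.
    static int countTen(String s) {
        int count = 0;
        for (int i = 0; i + 1 < s.length(); i++) {
            if (s.charAt(i) == '1' && s.charAt(i + 1) == '0') count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countTen("10010")); // 2 -> should be accepted
        System.out.println(countTen("1100"));  // 1 -> should be rejected
    }
}
```

Enumerating all short binary strings and comparing this count against your automaton's answer is a quick way to spot a wrong transition.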

## Algorithms – Assign each character to the next occurrence based on the number of unique characters between occurrences

To optimize my LF mapping, I was asked to do the following. Given a string, say $$abaxyxwxbx$$, I need to encode it so that each index stores the number of unique characters found between that occurrence and the next occurrence of the same character, plus $$1$$; and $$0$$ if the character does not occur again. Using the string above, the encoding would be:

$$abaxyxwxbx$$
$$2502020200$$

For the first $$a$$, we meet a $$b$$ and then the next $$a$$; so we count the one unique character, $$b$$, add $$1$$, and store $$2$$ as the encoding for the first $$a$$. For the first $$b$$, we meet the next $$b$$ at index $$8$$; between the first $$b$$ and the second $$b$$ we encounter 4 unique letters $$(a, x, y, w)$$, add $$1$$, and therefore store $$5$$ as the encoding for the first $$b$$. For the second $$a$$, there is no later $$a$$, so its encoding is $$0$$.

My first approach was a $$\sigma \times 3$$ array ($$\sigma$$ is the size of the alphabet): the first column stores the character, the second the number of unique characters found since its last occurrence, and the third the index of its last occurrence.

The second method I tried was an AVL tree with order statistics, such that at any given time only $$\sigma$$ characters can exist within the tree. This method is again $$O(n\sigma)$$.

Is there a way to do this in $$O(n)$$ time?
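For reference, here is a direct (quadratic) Java sketch of the encoding as I understand it, useful as a correctness baseline for faster approaches (the class and method names are mine):

```java
import java.util.*;

public class Encoder {
    // Naive reference implementation: for each index, find the next
    // occurrence of the same character and count the distinct characters
    // strictly between them, plus 1; store 0 if there is no next occurrence.
    static int[] encode(String s) {
        int n = s.length();
        int[] code = new int[n];
        for (int i = 0; i < n; i++) {
            int next = -1;
            for (int j = i + 1; j < n; j++) {
                if (s.charAt(j) == s.charAt(i)) { next = j; break; }
            }
            if (next == -1) { code[i] = 0; continue; }
            Set<Character> seen = new HashSet<>();
            for (int j = i + 1; j < next; j++) seen.add(s.charAt(j));
            code[i] = seen.size() + 1;
        }
        return code;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(encode("abaxyxwxbx")));
        // [2, 5, 0, 2, 0, 2, 0, 2, 0, 0]
    }
}
```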

## Aggregating and summing occurrences of raw data piped in Java using lambda and streams

I am transforming the following raw data (FYI: I generate this by calling some APIs of the Countly product)

``````iOS | 66:0 | abc.abc@somedomain.com
iOS | 67:0 | xyz@somedomain.com
iOS | 67:0 | abc.xyz@somedomain.com
Android | 0:0:88 | david@somedomain.com
Android | 0:0:88 | smith@somedomain.com
iOS | 66:0 | s.kally@somedomain.com
Android | 0:0:85 | roger.f@somedomain.com
Android | 0:0:85 | david.smith@somedomain.com
``````

into the following `json` structure, via appropriate classes in Java.

``````[
  {
    "type": "iOS",
    "count": 7,
    "percentage": 0.0,
    "versions": [
      {
        "name": " 172:1",
        "count": 5,
        "percentage": 71.42857142857143
      },
      {
        "name": " 172:0",
        "count": 2,
        "percentage": 28.571428571428573
      }
    ]
  },
  {
    "type": "Android",
    "count": 6,
    "percentage": 0.0,
    "versions": [
      {
        "name": " 1:1:24",
        "count": 5,
        "percentage": 83.33333333333333
      },
      {
        "name": " 1:1:23",
        "count": 1,
        "percentage": 16.666666666666668
      }
    ]
  }
]
``````

The class structure is as follows:

``````public class Platform {
    private String type;
    private int count;
    private double percentage;
    private List<Version> versions;
    // getters & setters
}

public class Version implements Serializable {
    private String name;
    private int count;
    private double percentage;
    // getters & setters
}
``````

At the moment I am analyzing a lot with the following steps:

1. Create a `Set` of `SegmentationVO` objects from the raw data by splitting each line with the regex `\s+\|` to get the three fields:

``````public class SegmentationVO {
    private String platform;
    private String version;
    // getters & setters
}
``````
2. Create a map of type `HashMap<String, HashMap<String, Integer>> platformVersionCountMap` from the above set:

``````HashMap<String, HashMap<String, Integer>> platformVersionCountMap = new HashMap<>();

segmentationSet.forEach(segment -> {
    if (platformVersionCountMap.containsKey(segment.getPlatform())) {
        if (platformVersionCountMap.get(segment.getPlatform()).containsKey(segment.getVersion())) {
            int existingCount = platformVersionCountMap.get(segment.getPlatform()).get(segment.getVersion());
            platformVersionCountMap.get(segment.getPlatform()).put(segment.getVersion(), existingCount + 1);
        } else {
            platformVersionCountMap.get(segment.getPlatform()).put(segment.getVersion(), 1);
        }
    } else {
        HashMap<String, Integer> versionCountMap = new HashMap<>();
        versionCountMap.put(segment.getVersion(), 1);
        platformVersionCountMap.put(segment.getPlatform(), versionCountMap);
    }
});
``````
3. Map to `Platform` objects as below:

``````private List<Platform> mapToPlatform(HashMap<String, HashMap<String, Integer>> map) {
    List<Platform> platforms = new ArrayList<>();

    map.forEach((key, value) -> {
        Platform platform = new Platform();
        platform.setType(key);

        value.forEach((key1, value1) -> {
            Version version = new Version();
            version.setName(key1);
            version.setCount(value1);
            if (platform.getVersions() == null) {
                platform.setVersions(new ArrayList<>());
            }
            platform.getVersions().add(version);
        });
        platforms.add(platform);
    });
    return platforms;
}
``````
4. The last step calculates the percentage of each version and sorts by count in descending order:

``````private List<Platform> sortByPercentage(List<Platform> platforms) {
    platforms.forEach(platform -> {
        int totalVersionCount = platform.getVersions()
                .stream()
                .map(Version::getCount)
                .mapToInt(Integer::intValue)
                .sum();

        platform.getVersions().forEach(version -> version.setPercentage((100 * version.getCount()) / (double) totalVersionCount));

        platform.getVersions().sort(Comparator.comparing(Version::getCount).reversed());
        platform.setCount(totalVersionCount);
    });
    return platforms;
}
``````

The output I get after executing step 4 is exactly as expected.
How can I optimize this whole part? As you can see, I stream the lists many times in my code. Is there a better way to do this kind of transformation?
(The title of my question could be wrong; feel free to suggest a better one.)
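As a sketch of the kind of single-pass alternative you may be after, nested `Collectors.groupingBy` can build the platform-to-version-to-count map directly from the raw lines (the class name, sample data, and the counts-as-`Long` choice are mine; you would still map the result onto your `Platform`/`Version` classes and compute percentages as in step 4):

```java
import java.util.*;
import java.util.stream.*;

public class PlatformAggregator {
    // Group raw "platform | version | email" lines into
    // platform -> (version -> count) in a single stream pass,
    // using the same field separator regex as in the question.
    static Map<String, Map<String, Long>> aggregate(List<String> lines) {
        return lines.stream()
                .map(line -> line.split("\\s+\\|"))
                .collect(Collectors.groupingBy(
                        fields -> fields[0].trim(),
                        Collectors.groupingBy(
                                fields -> fields[1].trim(),
                                Collectors.counting())));
    }

    public static void main(String[] args) {
        List<String> lines = List.of(
                "iOS | 66:0 | a@x.com",
                "iOS | 67:0 | b@x.com",
                "Android | 0:0:88 | c@x.com",
                "iOS | 66:0 | d@x.com");
        System.out.println(aggregate(lines));
    }
}
```

This replaces steps 1–2 entirely, so the data is traversed once for counting; the per-platform totals then come from summing the inner map's values rather than re-streaming the version list.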