SEO – How can I get the sitemap of my website indexed?

I would like to optimize the sitelinks of my website via Google Search Console / Webmaster Tools.

I want certain menus to appear when people search for my site on Google, for example menu A, menu B, and menu C.

I have been looking for references. From what I have read, to set this up in Search Console, only a sitemap.xml is required, which is crawled first.

So I went to https://www.xml-sitemaps.com/, entered my domain, and clicked the Start button. After the process completed, I downloaded the sitemap file.

My question is: can I upload the XML file directly to my hosting as-is, or do I have to edit it first to determine which menus are displayed when keywords are entered on Google?

Update:

My XML is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.mywebsite.com/</loc>
    <lastmod>2019-10-13T23:46:01+00:00</lastmod>
    <priority>1.00</priority>
  </url>
  ...
  <url>
    <loc>https://www.mywebsite.com/menu-a</loc>
    <lastmod>2019-10-13T23:46:01+00:00</lastmod>
    <priority>0.80</priority>
  </url>
  ...
  <url>
    <loc>https://www.mywebsite.com/menu-b</loc>
    <lastmod>2019-10-13T23:46:01+00:00</lastmod>
    <priority>0.64</priority>
  </url>
</urlset>

The file has about 2,000 lines, but here are just a few.
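
Before uploading, it can be worth sanity-checking the downloaded file by parsing it and listing the URLs it actually contains. A minimal Python sketch (the local filename sitemap.xml is an assumption):

import xml.etree.ElementTree as ET

# Sitemaps live in this XML namespace; <loc> elements hold the page URLs.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

tree = ET.parse("sitemap.xml")  # assumed local filename
for url in tree.getroot().findall("sm:url", NS):
    loc = url.findtext("sm:loc", namespaces=NS)
    priority = url.findtext("sm:priority", namespaces=NS)
    print(priority, loc)

If every menu URL you care about shows up here, the file can be uploaded unchanged and submitted in Search Console; the sitemap itself does not control which sitelinks Google displays.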

Optimizing count(*) to use only the foreign key index in PostgreSQL

I have a web app that uses a large number of data tables, each of which connects to the database to retrieve data.
The problem, however, is that the total number of results always has to be counted before the data can be paginated with LIMIT and OFFSET.

Are there any special indexes or configuration settings that force the count to be computed from the index alone, without touching the underlying table? Right now it appears that the index is ignored when the count is requested.
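
For experimenting, a minimal psycopg2 sketch; the table orders, the FK column customer_id, and the connection string are invented for illustration. PostgreSQL can only answer count(*) with an index-only scan when the table's visibility map is sufficiently fresh, which is why the sketch runs VACUUM first:

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder connection string
conn.autocommit = True  # VACUUM cannot run inside a transaction block
cur = conn.cursor()

# Keep the visibility map fresh so an index-only scan becomes possible.
cur.execute("VACUUM (ANALYZE) orders")

# Inspect the plan: look for "Index Only Scan" on the FK index.
cur.execute("""
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM orders WHERE customer_id = %s
""", (42,))
for (line,) in cur.fetchall():
    print(line)

If the plan still shows a sequential scan or a plain index scan, the usual culprits are a stale visibility map or a predicate the index cannot cover.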

SQL Server query slowness despite index

Problem

I need to create a user retention chart over time, similar to the following:
[Image: user retention chart]

If I ignore the percentages for a minute, I have a query that shows unique users for a particular "cohort" and then the number of returning users. However, due to the volume of data we have collected in the last few weeks, the query never finishes.

Query

;WITH dates AS
(
-- Set up the date range
SELECT convert(date,GETDATE()) as dt, 1 as Id
UNION ALL
SELECT DATEADD(dd,-1,dt),dates.Id - 1
FROM dates
WHERE Id >= -84
)
, cohort as (
-- create the cohorts
SELECT dt AS StartDate, 
    convert(date,CASE WHEN DATEADD(DD, 6, dt) > convert(date,GETDATE()) THEN convert(date,GETDATE()) ELSE DATEADD(DD, 6, dt) END) as EndDate, 
    CONCAT(FORMAT(dt, 'MMM dd'), ' - ', FORMAT(CASE WHEN DATEADD(DD, 6, dt) > GETDATE() THEN GETDATE() ELSE DATEADD(DD, 6, dt) END, 'MMM dd')) as Cohort,
    row_number() over (order by dt) as CohortNo
FROM dates A
WHERE  DATEPART(dw,dt)=1
)
 , cohortevent as (
-- The complete set of cohorts and their events
select c.*, e.*
from cohort c
left join Event e on e.eventtime between c.StartDate and C.EndDate
)
, Retained as(
-- Recursive CTE that works out how long each user has been retained
select c.StartDate,c.EndDate,c.CoHort,c.CohortNo,c.EventId,c.EventTime,c.Count,c.UserID, case when Userid is not null then 1 else 0 end as ret
from cohortevent c
union all
select c.StartDate,c.EndDate,c.CoHort,c.CohortNo,c.EventId,c.EventTime,c.Count,c.UserID, ret+1
from cohortevent c
join Retained on Retained.userid=c.userid and Retained.CohortNo=c.CohortNo-1 and Retained.eventid

Environment

All tables are CTEs except Event, which has two main columns, UserId and EventTime.

What I tried

I have added indexes on UserId and EventTime. I noticed that the DTUs (this is an Azure SQL instance) were initially maxed out, so I scaled the database instance up; it now runs at 70% DTU utilization, but the query still has not finished after more than 30 minutes. Currently there are only 40,000 rows in Event.
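
Not a fix for the query itself, but with only 40,000 rows, the same weekly cohort/retention matrix can be cross-checked client-side. A minimal pandas sketch, where the UserId/EventTime columns mirror the Event table and the sample rows are invented:

import pandas as pd

# Invented sample standing in for a small extract of the Event table.
events = pd.DataFrame({
    "UserId": [1, 1, 2, 3, 3, 3],
    "EventTime": pd.to_datetime([
        "2019-09-02", "2019-09-10", "2019-09-03",
        "2019-09-09", "2019-09-16", "2019-09-23",
    ]),
})

# Bucket each event into its calendar week.
events["week"] = events["EventTime"].dt.to_period("W").dt.start_time

# A user's cohort is the week of their first event.
cohort = events.groupby("UserId")["week"].min().rename("cohort")
events = events.join(cohort, on="UserId")

# Weeks elapsed since the cohort week = retention bucket.
events["weeks_retained"] = (events["week"] - events["cohort"]).dt.days // 7

# Unique users per (cohort, weeks_retained) cell of the retention matrix.
retention = (events.drop_duplicates(["UserId", "week"])
                   .groupby(["cohort", "weeks_retained"])["UserId"]
                   .nunique()
                   .unstack(fill_value=0))
print(retention)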

Python – Creating an Inverted Index and Posting Lists Takes a Long Time

I am working on an information retrieval project where I have to process ~1.5 GB of text data and create a dictionary (word, document frequency) and a postings list (document ID, term frequency). According to the professor, this should take about 10 to 15 minutes, but my code has been running for more than 8 hours now! I tried a smaller data set (~35 MB) and processing took 5 hours.

I'm a newbie to Python, and I think it takes so long because I've created many Python dictionaries and lists in my code. I tried using generators, but I'm not sure how to work with them.

import re
import string
import json
import numpy as np
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords

filename = 'documents.txt'  # placeholder; the original path was not shown

file = open(filename, 'rt')
text = file.read()
file.close()

# The document markup was lost when the question was formatted; the two
# patterns below are placeholders and must be adjusted to the real tags.
p = r'<DOC>.*?</DOC>'
tag = RegexpTokenizer(p)
passage = tag.tokenize(text)
doc_re = re.compile(r'<DOCNO>(\d+)</DOCNO>')

def process_data(docu):
    tokens = RegexpTokenizer(r'\w+')
    lower_tokens = (word.lower() for word in tokens.tokenize(docu))  # convert to lower case
    table = str.maketrans('', '', string.punctuation)
    stripped = (w.translate(table) for w in lower_tokens)  # remove punctuation
    alpha = (word for word in stripped if word.isalpha())  # remove tokens that are not alphabetic
    stopwordlist = stopwords.words('english')
    stopped = (w for w in alpha if w not in stopwordlist)  # remove stopwords
    return stopped

data = {}  # dictionary: key = doc ID, value: words/terms
for doc in passage:
    group_docID = doc_re.match(doc)
    docID = group_docID.group(1)
    tokens = process_data(doc)
    data[docID] = list(set(tokens))

vocab = [item for i in data.values() for item in i]  # all words in the dataset
total_vocab = list(set(vocab))  # unique words/vocabulary for the whole dataset
total_vocab.sort()

print('Document Size = ', len(data))           # no. of documents
print('Collection Size = ', len(vocab))        # no. of words
print('Vocabulary Size = ', len(total_vocab))  # no. of unique words

inv_index = {}  # dictionary: key = word/term, value: (doc ID, term frequency)
for x in total_vocab:
    for y, z in data.items():
        if x in z:
            wordfreq = z.count(x)
            inv_index.setdefault(x, []).append((int(y), wordfreq))

flattend = [item for tag in inv_index.values() for item in tag]  # [(doc ID, tf), ...]
posting = [item for tag in flattend for item in tag]  # doc ID, tf, doc ID, tf, ...

# document frequency for each vocabulary word
doc_freq = []
for k, v in inv_index.items():
    freq1 = len([item for item in v if item])
    doc_freq.append(freq1)

# offset value of each vocabulary word
offset = []
offset1 = 0
for i in range(len(doc_freq)):
    if i > 0:
        offset1 = offset1 + (doc_freq[i - 1] * 2)
    offset.append(offset1)

# create dictionary of word -> (document frequency, offset)
dictionary = {}
for i in range(len(total_vocab)):
    dictionary[total_vocab[i]] = (doc_freq[i], offset[i])

# dictionary of word -> inverse document frequency
idf = {}
for i in range(len(dictionary)):
    a = np.log2(len(data) / doc_freq[i])
    idf[total_vocab[i]] = a

with open('dictionary.json', 'w') as f:
    json.dump(dictionary, f)
with open('idf.json', 'w') as f:
    json.dump(idf, f)

binary_file = open('binary_file.txt', 'wb')
for i in range(0, len(posting)):
    binary_int = posting[i].to_bytes(4, byteorder='big')
    binary_file.write(binary_int)
binary_file.close()

Could someone please help me rewrite this code to make it more computationally and time efficient?

There are about 57982 such documents.
Input file:

Background Adrenal cortex oncocytic carcinoma (AOC) represents an exceptional pathological entity, since only 22 cases have been documented in the literature so far. Case presentation Our case concerns a 54-year-old man with past medical history of right adrenal excision with partial hepatectomy, due to an adrenocortical carcinoma. The patient was admitted in our hospital to undergo surgical resection of a left lung mass newly detected on chest Computed Tomography scan. The histological and immunohistochemical study revealed a metastatic AOC. Although the patient was given mitotane orally in adjuvant basis, he experienced relapse with multiple metastases in the thorax twice.....

I tokenize each document word by word and store the document frequency of each word in a dictionary, which I save to a JSON file.
dictionary

word document_frequency offset
medical 2500 3414
research 320 4200

In addition, an index is created that holds, for each word, a postings list of (document ID, term frequency) pairs:

medical (2630932, 20), (2795320, 2), (26350421, 31).... 
research (2783546, 243), (28517364, 310)....

and then save these postings in a binary file:

2630932 20 2795320 2 2635041 31....

with an offset value for each word. When I load the postings from the hard drive, I can use the offset to seek to and retrieve the postings list for the corresponding word.
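
For what it's worth, the dominant cost above is the nested loop for x in total_vocab: for y, z in data.items(), which rescans every document once per vocabulary word. Building the postings in a single pass over the documents removes that factor; note also that data[docID] = list(set(tokens)) in the original discards term frequencies before they are counted, so this sketch counts first. A minimal version, assuming data maps doc ID -> token list as in the question:

from collections import Counter, defaultdict

def build_inverted_index(data):
    """data: dict mapping doc ID -> list of tokens for that document."""
    inv_index = defaultdict(list)  # word -> [(doc ID, term frequency), ...]
    for doc_id, tokens in data.items():
        # Count each token once per document instead of calling
        # z.count(x) once per (word, document) pair.
        for word, freq in Counter(tokens).items():
            inv_index[word].append((int(doc_id), freq))
    return inv_index

inv_index = build_inverted_index({'1': ['medical', 'research', 'medical'],
                                  '2': ['medical']})
doc_freq = {word: len(postings) for word, postings in inv_index.items()}
print(inv_index['medical'])  # [(1, 2), (2, 1)]
print(doc_freq)              # {'medical': 2, 'research': 1}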

Generate a custom index array

Sorry if this is a duplicate. I am trying (without success) to generate the following n × n matrix in Mathematica:
[Image: the target n × n matrix]

where $G(t_j, t_0)$ is a function along the main diagonal and $G(t_i, t_j)$ is a function that fills the rest.

I'm trying to use Map, but have difficulty specifying the arguments.

For example, with n = 5:

Array[g[#1, #2] &, {5, 5}, {{0, 1}, {0, 1}}]

Web Application – Are Index, Nonce, and HMAC sufficient for session management?

I'm investigating session management for web applications. I have read a few pieces on the topic, and to my understanding we should not use a secret as the session identifier (index), because that can lead to timing attacks.

Let's say sessions are cached server-side for performance, and the index is reset (for example, to 1) whenever the server is restarted or all sessions are cleared.

session_payload = index || HMAC(server_key, index)

But if you do that, there's room for replays, right? An attacker could collect a set of session payloads and save them to hijack sessions later. Something unique is required in each session payload to prevent that, right?

So what about:

payload = index || nonce
session_payload = payload || HMAC(server_key, payload)

If my understanding is right, the nonce must be unique to make each session payload distinct. Should it just be the output of a CSPRNG, a plain RNG, or the current time (milliseconds? nanoseconds?)? What are the caveats of each?

If the above is done correctly, it should be able to avoid the following:

  • Timing attacks.
  • Volume attacks.
  • Replay attacks. *
  • Tampering.

Right? And are there any other attacks I should watch out for? Please exclude session fixation, which can be mitigated by refreshing the session payload as permissions escalate.

  • What I mean by a replay attack is that adversaries could save in-flight session payloads and hijack sessions later; hence the nonce.
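
To make the scheme concrete, here is a minimal Python sketch of the index || nonce || HMAC variant; the key handling and byte layout are simplifying assumptions, and the nonce comes from secrets, i.e. a CSPRNG:

import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # in practice, loaded from server config

def issue(index: int) -> bytes:
    # payload = index || nonce; the 16-byte CSPRNG nonce makes each payload unique
    payload = index.to_bytes(8, "big") + secrets.token_bytes(16)
    tag = hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()
    return payload + tag  # session_payload = payload || HMAC(server_key, payload)

def verify(token: bytes) -> bool:
    payload, tag = token[:-32], token[-32:]
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()
    # compare_digest runs in constant time, addressing the timing-attack concern
    return hmac.compare_digest(tag, expected)

token = issue(1)
assert verify(token)
assert not verify(token[:-1] + bytes([token[-1] ^ 1]))  # tampering is detected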

bitcoind – How do I get a complete transaction index with Bitcoin Core?

How can I sync a Bitcoin full node so that I can access the details of all transactions that happen on other nodes and in other wallets, the way blockchain.info does?

I've synced a full node with the following commands, but this only includes details about transactions involving the node's own wallet.

sudo apt-add-repository ppa:bitcoin/bitcoin
sudo apt-get update
sudo apt-get install bitcoin-qt
sudo apt-get install bitcoind
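
Not certain this is what you are after, but wallet-independent lookups (what blockchain.info offers) require Bitcoin Core's full transaction index, which is not enabled by default. The usual approach is to set txindex=1 and rebuild the index once; <txid> below is a placeholder:

# ~/.bitcoin/bitcoin.conf
txindex=1

# restart the node and rebuild the index (a one-time, lengthy re-scan of the chain)
bitcoind -reindex

# afterwards, any transaction can be looked up by its id (1 = verbose JSON output)
bitcoin-cli getrawtransaction <txid> 1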

Programming Languages – I do not understand why array[1..3, 1..2] is faster to index than array[1..2, 1..3]

In my programming concepts class, we learned that indexing into an array is faster when the length of a row is $2^n$; in that case, the address computation only has to shift instead of actually multiply.

We also learned about the row-major representation of multidimensional arrays. It was then said that array[1..3, 1..2] is faster to index than array[1..2, 1..3] because the row length is $2^n$. I do not understand how you can determine that.
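
For what it's worth, writing out the standard row-major address computation (assuming 1-based bounds and element size $s$) shows where the shift comes from:
$$
\mathrm{addr}(i, j) = \mathrm{base} + \big( (i - 1) \cdot L + (j - 1) \big) \cdot s,
$$
where $L$ is the number of elements per row. For array[1..3, 1..2], $L = 2 = 2^1$, so the multiplication by $L$ is a one-bit left shift; for array[1..2, 1..3], $L = 3$, which requires a genuine multiplication.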

at.algebraic-topology – Principal symbol of a non-local operator and the Atiyah-Singer index formula

I'm trying to understand the Atiyah-Singer index formula for pseudo-differential operators. As far as I understand, the Fredholm index of an operator $A$ on a manifold can be computed just from its associated principal symbol $\sigma_p(A)$, at least for elliptic operators. However, I am interested in a non-local operator of the form
$$
A(f)(x) = \int_{\mathbb{R}} \tilde{A}(x - x')\, f(x')\, \mathrm{d}x'
$$

with $f$ in Schwartz space.

Suppose $B$ acts on $\mathcal{C}^{\infty}(\mathbb{R}, \mathbb{R})$ as:
$$
B(f)(x) = \int_{\mathbb{R}} e^{-(x - x')^2/2}\, f(x')\, \mathrm{d}x'
$$

and has symbol
$$
\sigma(B)(x, \xi) = e^{-\xi^2/2}, \quad (x, \xi) \in \mathbb{R}^2
$$

(note: no dependence on $x$).
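
(For orientation, and up to the normalization convention chosen for the Fourier transform: for a pure convolution operator, the symbol is the Fourier transform of its kernel,
$$
\sigma(A)(x, \xi) = \int_{\mathbb{R}} e^{-i y \xi}\, \tilde{A}(y)\, \mathrm{d}y,
$$
which for the Gaussian kernel above is again a Gaussian in $\xi$, independent of $x$.)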

$\sigma(B)$ is a Hörmander symbol in the class $\mathcal{S}^{m}_{1,0}$, $\forall\, m \in \mathbb{R}$.

Question: does $\sigma_p(B)$ exist? Is it a general feature of non-local operators not to have a principal symbol? Can the Atiyah-Singer index formula be applied to such operators?

N.B. For the term "Hörmander symbol", see the Wikipedia page on pseudo-differential operators.