sharepoint 2010 – Filtering for large lists

Currently on SharePoint 2010.
I have a list of over 5000 items. I have indexed columns, and all views are filtered on these indexed columns. One of the columns is of the type "date and time" and is called "recommendation date". I would like to use a filter that shows only the items from the last 12 months, i.e. filter on this date so that everything with a recommendation date greater than or equal to [Today]-365 is displayed.

With [Today]-365 it says my list is too big. Playing with the numbers: greater than or equal to a recommendation date of [Today]-282 returns 1722 items, but greater than or equal to [Today]-283 says I have too many. Exactly the same filter with equal to [Today]-283 returns only 10 items. My [Today]-365 filter should be well under 5000 items (it took me 3 years to get to that point), so any thoughts on what I'm missing?

bitcoin cash – Why do large blocks increase the likelihood of chain reorganizations?

This illustrates the inherent problem with increasing block sizes when the network cannot propagate them in time.

If block sizes are very large and the mempools of connected nodes are out of sync, a full node essentially has to download almost the entire block before it can add it to its chain and relay it to its peers. This can take a long time. When a miner mines a block at height h, they will generally try to build block h+1 on top of the block they found. Only when they receive a competing block at height h+1 (before they have mined their own) do they know that they have lost the "race" for h+1. The miner then reorganizes their chain onto the best block at height h+1 they just received and mines on top of it.

If block sizes are very large, the miner in the example above may receive the competing block at height h+1 only after he has already mined his own h+1. So the miner keeps building on the block he mined himself, fooled into thinking he was the first to find the solution for block h+1. With very large blocks this can stretch over several heights, until the miner finally receives a block at, say, height h+6 before he has mined his own block at that height. At that point the miner has to reorganize his chain onto the blocks he received instead of the ones he mined himself. The full nodes that are closely connected to the miner must reorganize their chains along with it.
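To make the reorganization step concrete, here is a toy sketch (not real Bitcoin node code; the class names and the decision rule are simplified for illustration) of what the miner ends up doing: keep extending its own tip until a longer competing chain finally finishes downloading, then switch to it.

    import java.util.List;

    // Toy model of the reorg decision described above; not real node software.
    class Block {
        final int height;          // block height h, h+1, ...
        final String minedBy;      // "me" or "peer", just for illustration
        Block(int height, String minedBy) { this.height = height; this.minedBy = minedBy; }
    }

    class ToyMiner {
        private List<Block> activeChain;   // the chain this miner is currently building on

        ToyMiner(List<Block> initialChain) { this.activeChain = initialChain; }

        int tipHeight() { return activeChain.get(activeChain.size() - 1).height; }

        // Called whenever a competing chain finishes downloading
        // (which is exactly what takes so long when blocks are huge).
        void onCompetingChainReceived(List<Block> competingChain) {
            int competingTip = competingChain.get(competingChain.size() - 1).height;
            if (competingTip > tipHeight()) {
                // The miner only learns it lost the race once the longer chain arrives,
                // possibly several heights late (h+6 in the example above): reorganize.
                activeChain = competingChain;
            }
            // Otherwise keep mining on the blocks it found itself.
        }

        public static void main(String[] args) {
            ToyMiner miner = new ToyMiner(List.of(new Block(100, "me")));
            miner.onCompetingChainReceived(List.of(new Block(100, "peer"), new Block(101, "peer")));
            System.out.println("tip height after reorg: " + miner.tipHeight());  // 101
        }
    }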

postgresql – work_mem Setting for large queries

I have a table that is ~600 GB in size. I run a COPY (SELECT * FROM table WHERE id IN (n) ORDER BY 1) command against it, and it generates around 200 GB of temp files, since my work_mem is still at the default setting. I know I need to change some parameters to avoid the temp files, but I am confused about the following points.

  1. Does COPY (SELECT * FROM table ...) itself also create temporary files? My understanding is that an ORDER BY or any aggregate would spill to disk. If the result size of the query is > work_mem, will it produce more temporary files / use the disk?

  2. Is it true that work_mem does not support more than 2 GB?

  3. I have 128 GB of RAM and SSD storage; how much work_mem should I use for this query? (I will create a separate user to query this table and change that user's work_mem.)

  4. Is there a way to abort the query early if it is going to exceed the available disk space?
    (I could limit this by setting a maximum temp file size, but I'm not sure how large the temp files will be or how many files a particular query will generate.)

(That is: detect queries that are generating large temp files / large results (> 10 GB) and trigger pg_cancel_backend for them before they bring down the server.)
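For reference, a minimal JDBC sketch of the knobs being asked about: it sets work_mem for the session and caps temp file usage with temp_file_limit so a runaway query errors out instead of filling the disk. The connection string, role name, and values are placeholders, and changing temp_file_limit may require elevated privileges depending on the PostgreSQL version; this is an illustration of the settings, not tuning advice for this particular table.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class WorkMemDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string and credentials.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydb", "report_user", "secret");
                 Statement st = conn.createStatement()) {

                // Per-session memory for sorts/hashes; the value here is illustrative only.
                st.execute("SET work_mem = '512MB'");

                // Abort any statement whose temp files exceed this limit
                // (raises an error instead of exhausting the disk).
                st.execute("SET temp_file_limit = '50GB'");

                // Alternatively, make it the default for a dedicated reporting role:
                //   ALTER ROLE report_user SET work_mem = '512MB';

                // Then run the large COPY / SELECT on this same session
                // (for COPY ... TO STDOUT the pgJDBC CopyManager API is needed).
            }
        }
    }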

Info visualization – displaying object states with minimal UI changes (for a large number of objects)

I was assigned a design challenge that seemed easy at first, but I have almost ended up at a dead end.

An object has several states that need to be visualized on a page. The object (a network group) must first be modeled; in this phase an administrator creates a model of his network. The model is then queued for the higher-ups to approve. Once approved, it moves into a synchronized state, in which the model and the actual network are in sync with each other. When the user changes a synchronized object, it moves to an updated state, which is then followed by the queued and synchronized cycle again. Phew!

In essence, the states start at MODELED. Once modeling is complete, the object switches to QUEUED. Once the queued model is approved, it switches to SYNCHRONIZED. Any change to a synchronized object immediately puts it in the UPDATED state, and when it is ready to be reviewed again, it moves back to QUEUED.

I have to design visual representations for each of these states. The representation for the initial state (MODELED) has already been defined:

Modeled object

The task is to represent the rest of the states with minimal pixel changes, yet still clearly (since we have about 3,000-4,000 of these objects on the canvas at any one time – it's an infinite canvas), and it should not rely on color alone, because accessibility is a concern.

The stakeholders do not want to introduce too many icons, because the screen looks too complex with too many variations.
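As an aside, here is the lifecycle described above written out as a tiny state machine, just to pin down exactly which transitions need distinct visuals (the event names are mine, made up for illustration):

    // Sketch of the object lifecycle described above; only the transition rules matter here.
    enum ModelState {
        MODELED, QUEUED, SYNCHRONIZED, UPDATED;

        ModelState next(String event) {
            switch (this) {
                case MODELED:      return event.equals("modelingComplete") ? QUEUED : this;
                case QUEUED:       return event.equals("approved")         ? SYNCHRONIZED : this;
                case SYNCHRONIZED: return event.equals("edited")           ? UPDATED : this;
                case UPDATED:      return event.equals("readyForReview")   ? QUEUED : this;
                default:           return this;
            }
        }

        public static void main(String[] args) {
            ModelState s = MODELED;
            s = s.next("modelingComplete");               // QUEUED
            s = s.next("approved");                       // SYNCHRONIZED
            s = s.next("edited");                         // UPDATED
            System.out.println(s.next("readyForReview")); // QUEUED
        }
    }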

architecture – How to code a large number of cards in a trading card game (Java)

I want to create a trading card game like Yu-Gi-Oh! or Magic. I have many cards in my game, and the user can even create new cards. Cards are objects and have attributes like HP, mana, etc. They also have a special power, which is another object and comes in different types: some are activated when a card is placed on the field, some on death, and so on. The only way I can think of is simply hard-coding the different cards in different classes, but that way I cannot implement user-created cards. I could use enumerations for the main cards, but I do not know what to do for custom cards.

Is there a standard way to implement such things?

I am fairly new to Java and OOP. I think a database might help, but I do not know what to use or even what to search for to find the right approach.
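One common direction (a sketch, not "the" standard answer): make cards pure data and model the special power as a composition of small effect objects keyed by a trigger, so new cards, including user-created ones, are just new data rather than new classes. All names below are made up for illustration.

    import java.util.List;
    import java.util.Map;

    // When an effect fires.
    enum Trigger { ON_PLAY, ON_DEATH, ON_TURN_END }

    // A small, reusable building block of behaviour.
    interface Effect {
        void apply(GameState state, Card source);
    }

    // Example building blocks; a real game would have a library of these.
    class DealDamage implements Effect {
        private final int amount;
        DealDamage(int amount) { this.amount = amount; }
        public void apply(GameState state, Card source) { state.damageOpponent(amount); }
    }

    class DrawCards implements Effect {
        private final int count;
        DrawCards(int count) { this.count = count; }
        public void apply(GameState state, Card source) { state.drawForOwner(source, count); }
    }

    // A card is data: stats plus effects grouped by trigger. It is never subclassed per card.
    class Card {
        final String name;
        final int hp, mana;
        final Map<Trigger, List<Effect>> effects;

        Card(String name, int hp, int mana, Map<Trigger, List<Effect>> effects) {
            this.name = name; this.hp = hp; this.mana = mana; this.effects = effects;
        }

        void fire(Trigger trigger, GameState state) {
            for (Effect e : effects.getOrDefault(trigger, List.of())) {
                e.apply(state, this);
            }
        }
    }

    // Minimal stand-in for the rest of the engine.
    class GameState {
        void damageOpponent(int amount) { /* ... */ }
        void drawForOwner(Card source, int count) { /* ... */ }
    }

    class CardDemo {
        public static void main(String[] args) {
            Card firebolt = new Card("Firebolt", 0, 2,
                    Map.of(Trigger.ON_PLAY, List.of(new DealDamage(3))));
            firebolt.fire(Trigger.ON_PLAY, new GameState());
        }
    }

User-created cards then become a matter of deserializing a description (for example a database row or a JSON file listing the stats and effects) into these same structures, which is where a database would come in.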

A detailed overview of the best space heaters for large spaces in 2019

Thanks to the sun, we are usually warm enough, whether outdoors or in our living rooms. Sometimes, however, the weather turns icy and cold, so it is important to have heating you can rely on when it does. The best space heater can save the day.

Sometimes the size of the space you live in is also a challenge: small heaters struggle to heat large rooms. In that case you may need the best space heater for a large room.

If you want more, just have a look here: https://rightpicknow.com/

architecture – Architectural principles for building a large e-mail server that will not be blacklisted

I am thinking about what it takes to implement an e-mail server. Google Cloud essentially does not allow sending e-mail (they pretty much block the e-mail ports), although it sounds like you can still receive e-mail. On the other hand, you can use AWS to send e-mail for about $1 per 10,000 messages. That also sums up some other SMTP services, such as SendGrid, and their associated costs.

I am (vaguely) aware that there are many problems that Internet Service Providers (ISPs) want to prevent, such as e-mail spam. It sounds like they maintain IP blacklists, somehow intercept e-mails, and determine whether they are spam by checking their content. Apparently they also get access to abandoned e-mail accounts and watch who sends mail to them (I have no idea how that works, but if there are helpful links I would like to know, even though they are not directly relevant to the question). Basically, the ISPs use all sorts of techniques to figure out whether your e-mail service is sending spam, so they can block and shut down your IP address. I cannot understand why this has to happen at the ISP level, but that is not the point.

I'm wondering how to design an e-mail server so that it is not blacklisted and works around the clock, 24/7. Suppose I want to implement a service like Gmail or SendGrid; I wonder what steps you should take to build such an e-mail server. That is, the architectural best practices for creating a successful e-mail server.

From where I stand, Amazon SES seems to be the best option. It is by far the cheapest and has no frills. Otherwise, you would have to buy your own hardware, build your own cloud, and buy your own IP addresses if you wanted to go any cheaper or lower level, I would imagine. In short, using AWS SES sounds like a good option.

They give you the option of using dedicated IP addresses:

Most e-mail certification programs require dedicated IP addresses, because you commit to managing your e-mail reputation.

So, e-mail server architecture principle: have dedicated IP addresses. But I do not want to set all that up and then get blacklisted for some unknown reason, which brings me to the heart of the question: how do you avoid getting blacklisted? Since this is a service like Gmail or SendGrid, millions of marketing e-mails and millions of personal e-mails could be sent from millions of different e-mail accounts every day. I do not see how to determine whether I am doing the right things for the e-mail server to be of the highest quality and potentially "certified" (I'm not sure what e-mail server certification really is, or whether it's even a thing – a Google search does not reveal anything, but AWS mentions it). So at a high level, what are the things you should put in place to guarantee that all e-mails are always delivered (or at least all e-mails from all "good" e-mail accounts on your system)? If such a guarantee is not possible, then I would like to know why not, and the answer could instead be tailored to whatever comes closest to a guarantee.

Basically: the architectural measures required for an e-mail server to deliver e-mail consistently without being blocked.

For this question I am not referring to scaling the e-mail server or building the e-mail server itself, but only to the architectural best practices that prevent it from being blacklisted.

As I understand it, some of the basic principles are:

  1. Have a dedicated IP address. (Not sure if you should have only one or if you can have 2 or 3 or 100).
  2. Do not send spam.

That's all I can think of. For (2), this means you must have good spam filters and other safeguards, such as verifying that a real person is behind each e-mail account, etc. But even for (2) I'm unsure how to handle the problem of false positives. Some users legitimately e-mail more than 100 people daily, and some may even send mass-marketing e-mails, like those marketing sites that get rich with AdWords, with mailing lists in the tens of thousands. I would like to know whether pure e-mail volume raises a red flag and how to handle that. And then content matters too: how do you make sure, purely with internal spam filters, that the ISPs do not block such mail?
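To make the "internal safeguards" idea concrete, here is a tiny sketch of one such measure: a per-account outbound volume check that holds unusually high senders for review instead of relaying their mail immediately. It is purely illustrative; the class name and threshold are invented, and real providers combine many signals, not just volume.

    import java.time.LocalDate;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative only: flag accounts that exceed a daily outbound volume for review,
    // so one compromised or spammy account cannot burn the shared IP reputation.
    class OutboundVolumeGuard {
        private final int dailyLimit;                        // e.g. 500 recipients/day (made up)
        private final Map<String, Integer> sentToday = new ConcurrentHashMap<>();
        private LocalDate window = LocalDate.now();

        OutboundVolumeGuard(int dailyLimit) { this.dailyLimit = dailyLimit; }

        synchronized boolean allowSend(String accountId, int recipientCount) {
            if (!LocalDate.now().equals(window)) {           // reset counters each day
                sentToday.clear();
                window = LocalDate.now();
            }
            int total = sentToday.merge(accountId, recipientCount, Integer::sum);
            // Over the limit: queue for human/heuristic review instead of relaying now.
            return total <= dailyLimit;
        }

        public static void main(String[] args) {
            OutboundVolumeGuard guard = new OutboundVolumeGuard(500);
            System.out.println(guard.allowSend("account-123", 40));  // true: under the daily cap
        }
    }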

Since this is a broad topic, I want to narrow it down. I can imagine that part of the answer is simply learning more about preventing e-mail spam, which I will do, so this question does not have to treat the spam side in detail. Briefly: what architectural measures should be taken to avoid being blacklisted? These could include (just making things up):

  • Have a fixed number of dedicated IP addresses, below some number x.
  • Contact some ISPs and tell them about your business and goals manually, even by phone.
  • Implement spam filters to prevent spam from being sent in the first place.
  • Maybe having geographically distributed e-mail servers helps somehow.
  • Programmatically report cancelled or closed accounts to the ISPs for verification.
  • Maybe reach out to other providers by manually setting up API integrations and partnerships.
  • Assign phone numbers to the accounts.
  • etc.

I can understand how to implement an e-mail/SMTP server and send and receive messages at scale; architecturally, that part makes sense. What is missing from the picture are the architectural components that prevent blacklisting at such a scale.

In short, I'd like to know how Gmail and SendGrid avoid blacklisting, but that's probably proprietary 🙂

Azure SQL Server – large select query choosing a bad execution plan

Let's just put this out there: if a query is big and confusing to write, it's probably also big and confusing for the optimizer.

Should I specify the join type and index name for each join to keep the original execution plan? Is that the best way?

No, because these hints may not always be the right choice.

It would probably make more sense to take the part of the query that you do get a good plan for and dump it into a #temp table.

From there, add your other joins against the temp table. That way, the optimizer has fewer bad decisions to make. Sure, you may have to put an index on the temp table, but that's a lot less painful than stringing together a bunch of brittle hints.
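A rough JDBC sketch of that shape (the connection string, tables, columns, and index name are all invented; only the pattern matters): materialize the well-behaved part of the query into a #temp table on the same session, optionally index it, then run the remaining joins against it.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class TempTableSplit {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string; #temp tables live for the lifetime of this session.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:sqlserver://myserver.database.windows.net:1433;" +
                    "databaseName=mydb;user=me;password=secret");
                 Statement st = conn.createStatement()) {

                // 1. The part of the query the optimizer handles well, dumped into #temp.
                st.execute("SELECT o.OrderId, o.CustomerId, o.Total " +
                           "INTO #good_part " +
                           "FROM dbo.Orders o " +
                           "WHERE o.OrderDate >= '2019-01-01'");

                // 2. Optional index, if the later joins need it.
                st.execute("CREATE CLUSTERED INDEX cx_good ON #good_part (CustomerId)");

                // 3. The remaining joins now run against a small, already-filtered set.
                try (ResultSet rs = st.executeQuery(
                        "SELECT g.OrderId, c.Name " +
                        "FROM #good_part g JOIN dbo.Customers c ON c.CustomerId = g.CustomerId")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("Name") + " -> " + rs.getLong("OrderId"));
                    }
                }
            }
        }
    }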