I am currently building an FPS game with client-side prediction. Every 20 ms, the client sends its input to the server. The server buffers 5 ticks’ worth of input (100 ms), then starts consuming one input per tick and sends a world snapshot to the client. The client validates its predictions and corrects its simulation if there is a misprediction. The setup works fine under ideal network conditions. However, when the latency suddenly spikes from, say, 50 ms to 400 ms, the client’s tick packets are delayed, which causes the server to eat through all of the available inputs in the buffer and fall back to reusing the previous tick’s values for the server simulation. When the client’s input finally arrives late, the server discards it, because it has already processed that particular tick (using the old input values).
How do I handle this edge case of latency increasing and decreasing? Sure, I could use a larger buffer size for the inputs, but that adds delay between the client state and the server state. Using a smaller buffer will cause the server to starve for inputs whenever the jitter exceeds the duration covered by the buffered inputs. I was under the impression that I would need some kind of dynamic buffer that expands and contracts. However, I am out of ideas on how to implement something like this.
Does anyone have a solution?
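One common approach is exactly the dynamic buffer you describe: grow the target depth immediately whenever the buffer runs dry, and shrink it slowly when it stays over-full for a while. The sketch below is a minimal, hypothetical Python illustration of that idea (all names and thresholds are invented for this example, not taken from any particular engine):

```python
from collections import deque

# Hypothetical sketch of a dynamic de-jitter input buffer on the server.
# The target depth grows immediately on starvation and decays slowly when
# the buffer stays over-full, trading a little latency for smoothness.

class InputBuffer:
    def __init__(self, min_depth=2, max_depth=12):
        self.queue = deque()
        self.min_depth = min_depth
        self.max_depth = max_depth
        self.target_depth = 5          # start at the current 5 ticks (100 ms)
        self.overfull_ticks = 0
        self.last_input = None

    def push(self, tick, inp):
        self.queue.append((tick, inp))

    def pop(self):
        """Called once per server tick; returns the input to simulate with."""
        if not self.queue:
            # Starved: jitter exceeded the buffer. Grow the target so the
            # client is told to run further ahead, and reuse the last input.
            self.target_depth = min(self.target_depth + 2, self.max_depth)
            return self.last_input
        if len(self.queue) > self.target_depth:
            self.overfull_ticks += 1
            if self.overfull_ticks > 60:   # over-full for ~1.2 s straight
                self.target_depth = max(self.target_depth - 1, self.min_depth)
                self.overfull_ticks = 0
                self.queue.popleft()       # drop one stale input to catch up
        else:
            self.overfull_ticks = 0
        tick, self.last_input = self.queue.popleft()
        return self.last_input
```

In practice the server would also echo the target depth (or its measured buffer occupancy) back in each snapshot so the client can speed up or slow down its local simulation clock and run correspondingly further ahead. Late inputs for already-simulated ticks still get discarded, but after a spike the larger target leaves more slack for the next one.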
I’m thinking about upgrading my Asus VivoBook X505ZA BR234T
It currently has
- 8 GB RAM
- 128 GB – C: – SATA M.2 SSD
- 1 TB – D: – 2.5-inch SATA HDD, 5400 rpm
In the specifications it says it has an expansion slot for a DDR4 SO-DIMM and can go up to 16 GB in total. How do I find out what kind of 8 GB module is compatible with my machine and with my existing RAM?
For the storage I’m a bit confused, since I would like a bigger C: drive and would also like my D: drive to be an SSD (not an HDD). Is it possible to buy a 480 GB SATA M.2 drive? Are there SSDs that fit in the 2.5-inch SATA slot? I currently have a hybrid hard drive, so I don’t even know which slots are available for which drive types.
I need to understand the concepts and possibilities for my upgrade; I don’t need specific brands, just the specs I should look for.
A player wants to join our campaign and I asked them if they had a character ready to go. They said yes, and I asked them what the character was.
“He’s a human barbarian, level 1.”
Me, “Ok, cool, that should work out well.”
Player, “Oh, and he’s 2 and a half years old.”
At first I was all, “No.” But then we talked about it and two things came to light:
I’m already running a lighthearted and somewhat silly campaign and this PC would add hilarity on so many levels it would be hard to pass up.
There’s nothing in RAW in D&D 5e that says you can’t be a 2 and a half year old Barbarian, or any class for that matter. Toddlers, children and even babies are not mentioned in the rulebooks.
The player had rolled fixed ability scores and got:
STR 16, DEX 7, CON 17, INT 9, WIS 6, CHA 15
So which race is kinda below average smart (in game play), has very little wisdom but a solid personality, and can leverage this personality to get what they want, and has such low dexterity they practically tumble over themselves? A human toddler of course! (at least according to this player.) From there, the class was an easy choice: a raging barbarian.
I’m not changing the stats to account for age or applying any disadvantages based on age alone. I think the rolled stats are already a good match for this character choice and reflect the deficiencies of the toddler (a really strong toddler).
My question is not “should I allow this?” I am. How can I resist? (Especially considering this player is a new parent.)
What I’m mostly looking for are role-playing considerations. Mechanically I’m just going to treat them as any other character, albeit one that can’t speak very well and has a hard time grasping concepts.
My question is, have you ever allowed a PC at a ridiculously young age and what are some aspects I will have to consider as GM?
(and is there a diaper changing mechanic?)
Since a full node needs to process the entire blockchain from the beginning, and the chain keeps getting bigger, this requires more computing resources over time. Does it seem likely in the near future that this will continue to be processed on a general-purpose CPU, or will the computation be accelerated by specialized hardware? Will the computational complexity outpace CPU improvements? Is hardware acceleration even possible for this task?
I’ve been doing some index usage reviews (using DMV seek, scan and lookup stats) and have identified a number of ‘unused’ indexes in our biggest and busiest DB. I am interested in the opinions of this forum on what checks or confirmations other than reads should be evaluated before performing the index DROPs. One that comes immediately to mind is whether the index is being used to enforce a necessary unique constraint. What other factors are important to consider?
If a clever person decided to use index hints in their application’s queries, dropping said index will cause those queries to fail outright if/when they run.
Something like a quarterly or year-end report might not show any index usage simply because it runs infrequently, depending on how often the system is restarted.
If the index is not enforcing uniqueness, it is only there to (potentially) assist in read access. I’d just caution against dropping indexes that might be used for quarter end/year end reporting jobs that have been created to avoid locking tables for extended periods of time. You’ll have to use some judgment and knowledge of the tables to determine that or just wait until you’ve collected enough information to know for sure.
Updates are the cost that, IMO, you should weigh against the positive aspects of the index (seeks and scans). With few updates, the overhead is marginal. Unless you consider disk space, but I assume you are after “what makes things go slower” as opposed to “what uses storage”.
Note that if an index hasn’t been touched since startup, you won’t see it in sys.dm_db_index_usage_stats. OTOH, then it doesn’t carry any operational overhead, so you may not bother anyhow (as per above reasoning).
In addition to unique indexes, indexes supporting foreign keys should normally be retained even if they are infrequently used.
If you are unsure whether you have a periodic report or job that might use an index, you would be well advised to disable the index rather than drop it; then you still have the definition in situ should you discover that it was required after all.
ALTER INDEX IX_Employee_ManagerID ON HumanResources.Employee DISABLE;
and, should you discover the index was needed after all, re-enable it with:
ALTER INDEX IX_Employee_ManagerID ON HumanResources.Employee REBUILD;
My API is divided into several layers:
- Presentation layer (PL) – API
- Business Logic layer (BLL)
- Data access layer (DAL)
- database (DB)
The PL contains the controllers; each controller method accepts an XRequest (a request model) and returns an XResult (a response model). These models are just DTOs (Data Transfer Objects).
The DTOs are part of the BLL, because the request model is passed into the BLL for processing, and the BLL returns the response model that the controller then returns.
Only the DAL has access to the DB, of course, so if the BLL needs something, it goes through the DAL.
I think it is understandable, as it is some kind of N-Tier architecture and it is very similar to architecture described here (diagram 2): https://stackoverflow.com/questions/56100420/typical-layered-architecture-project-structure?rq=1
My DAL does not expose entities; it returns some kind of DTO objects to the BLL. I have no idea where these object classes (the ones used to communicate between the BLL and the DAL) should be stored.
Should they be in the business layer or in the data access layer?
If the first option is correct, then my diagram will look like this: PL→BLL⟷DAL.
Isn’t it wrong for the DAL to use classes from the BLL?
If the second option is correct, then it will look like this: PL→BLL→DAL.
But in this case the business layer depends on the data access layer, which I also think is incorrect, because in my opinion the business layer should define which data to load from the database.
If this is wrong, please tell me, but I think the best option is something like this: PL→BLL←DAL.
Here the business layer exposes the interfaces and objects that should be implemented in the DAL. It doesn’t look like an N-Tier architecture anymore, does it?
Also, I don’t know how to achieve this. In the first and second cases it is simple: the DAL objects are injected into the BLL service constructors, and the services are injected into the controllers. That is easy to achieve with ASP.NET dependency injection, but what if I want the BLL to be the core?
Please help; maybe I should just use the first approach and not worry about it?
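For what it’s worth, the third option (PL→BLL←DAL) is essentially the dependency inversion principle: the BLL owns both the DTOs and the repository interface, and the DAL implements that interface. A minimal sketch (in Python for brevity; all class and method names here are invented for illustration, not from any framework):

```python
from dataclasses import dataclass
from typing import Protocol

# --- Business layer: owns the DTOs and the contract -------------------------
@dataclass
class UserDto:
    id: int
    name: str

class UserRepository(Protocol):           # interface defined by the BLL
    def get_user(self, user_id: int) -> UserDto: ...

class UserService:                        # BLL depends only on the interface
    def __init__(self, repo: UserRepository):
        self.repo = repo

    def greeting(self, user_id: int) -> str:
        return f"Hello, {self.repo.get_user(user_id).name}!"

# --- Data access layer: depends on (implements) the BLL contract ------------
class InMemoryUserRepository:
    def get_user(self, user_id: int) -> UserDto:
        return UserDto(id=user_id, name="Alice")   # stand-in for a DB query

# --- Composition root (PL / startup): wires the DAL into the BLL ------------
service = UserService(InMemoryUserRepository())
```

In ASP.NET terms this would mean the BLL project defines the repository interfaces and DTOs, the DAL project references the BLL and implements those interfaces, and the web project (the composition root) registers the mapping in the DI container, so only the startup code knows about all the layers.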
I’d like to build a simple privnote-type clone for fun. The idea is this:
- User A writes a note in their browser, browser encrypts it client-side
- Server saves the pre-encrypted note without knowing the decryption key
- User A then sends user B a link like abc.hidden/mynoteid#mydecryptionkey
- User B decrypts the message locally in their browser
The question I’m struggling with is this: should the server allow anyone to fetch abc.hidden/mynoteid? The server being able to decrypt messages would defeat the entire purpose (I’d like this to be entirely immune to logging of any sort, with all encryption/decryption happening client-side).
Because the notes are one-time-use only, fetching a note must destroy it. But how can I know that the correct decryption key was supplied without decrypting the message server-side and exposing it to logging?
Lastly, would a React app and a generic REST server with Redis to store the messages suffice for this task? (Since the messages have a TTL, Redis seems an ideal choice.) What happens if a malicious actor somehow gains access (without knowing the decryption keys, which should be generated on the spot and only once)?
What encryption algorithm is best suited for this task? I don’t think we need 10 seconds of
I understand that sending sensitive info over the internet is yucky, but it happens a lot, and if it does happen in a proverbial “marketing department”, having a tool like this could ease some worries about PII.
Plus, I think it’s a fun project either way.
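One way to square “fetching destroys the note” with “the server never sees the key” is to derive the public note id from the key itself: a valid fetch then already proves the requester holds the full link, so the server never needs to verify anything. A hedged Python sketch of the scheme (the hash-based stream cipher below is a dependency-free stand-in for illustration only; a real implementation should use a vetted AEAD such as AES-GCM):

```python
import hashlib
import secrets

# Toy stream cipher: SHA-256 in counter mode. Illustration only; use a
# vetted AEAD (e.g. AES-GCM) in anything real.

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream cipher is symmetric

def note_id(key: bytes) -> str:
    # Deriving the public id from the key makes valid ids unguessable:
    # a malicious actor cannot enumerate /noteid URLs and burn notes.
    return hashlib.sha256(b"note-id|" + key).hexdigest()[:32]

# Client side (User A): generate a key, encrypt, upload ciphertext under nid.
key = secrets.token_bytes(32)
ciphertext = encrypt(key, b"the secret message")
nid = note_id(key)
# Link shared out-of-band: f"https://abc.hidden/{nid}#{key.hex()}"
# Browsers never send the fragment (#...) to the server.

# Client side (User B): fetch the ciphertext by nid, decrypt locally.
assert decrypt(key, ciphertext) == b"the secret message"
```

Because only someone holding the key can compute a valid id, burning the note on first fetch is safe against URL guessing, and anyone who has the id necessarily had the whole link anyway. In the browser, the equivalent primitives are available via the WebCrypto SubtleCrypto API.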
I’d like to switch my main OS over to Linux, but I will need to run Windows 10 in some form or another for work purposes (mostly Visual Studio, IIS and MS SQL Server Management Studio for older ASP.NET and/or WinForms app development). I’ve tried to read what I can on the subject, but a lot of the info seems quite dated, so I thought it best to ask. Based on what I’ve read, and because I’d like to be able to access both OSes simultaneously, I’m considering virtualizing Windows 10 with VirtualBox. I do, however, have a few points where I’d like some advice so I can make an informed decision as to whether this would work out for me:
Am I missing out on any significant performance by not using Xen/KVM or another type 1 hypervisor? (I found many benchmarks and articles from around 2-8 years ago suggesting this may have been the case, but I am not sure whether VirtualBox has improved since then.) Has VirtualBox become closer to a type 1 hypervisor with the release of 6.1, which drops software virtualization?
Assuming I have a dedicated NVMe disk for Windows: I’m interested in snapshot ability, and I assume (please correct me if I’m wrong) that I would need to use a virtual HDD format to get it. That being the case, do I lose much performance by having a virtual HDD for Windows on top of a Linux file system? Are there other features only available to raw access or virtualised access?
What effect does the virtualised graphics have? I was wondering if I should consider purchasing an additional graphics card and using PCI passthrough, because I will have an available PCIe slot. I ask because some things, such as Windows’ Desktop Window Manager, Chrome and MS Teams, show up as using my current system’s graphics card from time to time. Would they be significantly impacted by running on the virtualised graphics?
If BitLocker is enabled on the Windows virtual machine, does that have any implications I should be aware of, either for backups or loss of data, etc.?
I can’t find it now, but I saw someone’s comment that BitLocker might enable encryption on the controller of an NVMe drive, and that it might use theoretically immutable hardware IDs as the key. I would be concerned that, if I passed through the raw NVMe drive, the guest OS could potentially do this using the non-immutable IDs of virtualised hardware, thus bricking the drive.
I’m not sure I agree with your pros & cons. I use both flash & LED for macro. I don’t see a colour difference between them, or not one that matters for flowers.
LED pros… you can see what you’re shooting before you shoot it.
With flash you’re basically trying to focus in the dark, unless you have high-end studio lights with modelling lamps, so it’s a budget thing.
Indoor macro still life doesn’t need any particular shutter speed, ISO or even aperture: your DoF is so short anyway, and your lighting is controlled to the nth degree, so TTL is pointless. You’re in manual for everything. 200 shots to get the lighting right… your subject never gets bored, so take as long as you like.
I use LED video lighting for the subject light, & dial up some flash to illuminate my background after my foreground is set.
I paid about £300 for a pair of intensity-variable but fixed 5600 K colour-temperature video panels, approx 12″x9″, but I add diffuser gel to the barn doors, making them broader & flatter, & about £400 for a pair of Godox flashes & the controller. All 4 are used on the photo below.
To go to the opposite end of the budget scale – this week I was on night shoots for a [secret massive budget movie] & the ratio of LED to tungsten has now really swung to LED. Almost all the floods on a massive 500m square backlot set were LED. Only half a dozen tungsten lights on the entire set, used as pick-up spots, mainly on odd dark corners of the far backdrop blue-screen.