ubuntu – Clear ZFS Checksum errors?

TLDR;
My ZFS mirror pool got some checksum errors. I replaced the controller, thinking that was the most likely cause, but the errors won’t clear. zpool clear temporarily resets them, but they come back the next time I run a scrub. How can I clear them for good?

Full story:
I have had a ZFS mirror-0 set up and running on Ubuntu 20.04.2 LTS for some time. When one of the drives died, I took advantage of the failure to replace both drives with larger ones and to add a SATA III PCI card for the new drives (the old ones had been connected to the on-board SATA II controller, as I had no more SATA III ports available). After running on the new drives and controller for a few weeks, ZFS complained about checksum errors on both new drives, and put the array into a “degraded” state as a result.

Some research led me to the conclusion that since both drives were showing the exact same number of checksum errors, it was much more likely to be an issue with the controller than with the drives themselves. So I pulled the new controller and put the drives back on the onboard SATA II controller for now, intending to replace the controller card once I verify that is the issue. I then deleted the two files that zpool status -v showed as having permanent errors, issued a zpool clear data to reset the errors, and ran a scrub.
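
For reference, the steps described above amount to roughly the following (the two file paths are placeholders for the ones zpool status -v actually listed):

    # delete the two files listed under "Permanent errors" (placeholder paths)
    rm /data/<first-affected-file> /data/<second-affected-file>

    # reset the pool's error and checksum counters
    zpool clear data

    # run a full scrub and check the result once it finishes
    zpool scrub data
    zpool status -v data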

Unfortunately, after the scrub the errors re-appeared, only now zpool status -v no longer showed a file name, just an object address (an inode, I believe), presumably for one of the files I had deleted earlier. I tried again, with the same result. Every time I run a scrub, it comes back with the following result:

root@watchman:~# zpool status -v
  pool: data
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
    corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
    entire pool from backup.
   see: http://zfsonlinux.org/msg/ZFS-8000-8A
  scan: scrub repaired 16K in 0 days 09:10:20 with 1 errors on Sat Jul 24 15:48:21 2021
config:

    NAME                                 STATE     READ WRITE CKSUM
    data                                 DEGRADED     0     0     0
      mirror-0                           DEGRADED     0     0     0
        ata-ST8000VE000-2P6101_WSD1M5NW  DEGRADED     0     0    15  too many errors
        ata-ST8000VE000-2P6101_WSD1HEJX  DEGRADED     0     0    15  too many errors

errors: Permanent errors have been detected in the following files:

        data:<0x380508>

From what I can tell, this is just the same issue that already existed due, presumably, to the bad controller, but I can’t seem to clear it out. How can I restore my mirror to a fully-functioning state?

google chrome – Clear all browsing data except for certain sites

Occasionally I want to clear my browsing data, but there are certain sites whose browsing data I don’t want to lose.

For example, on sites in the Stack Exchange network I still want to know which posts I’ve visited before.

I could of course manually delete all history entries except for those of certain sites, but this requires a lot of manual work which I would like to avoid.

Is there a way to do this?

graphs – Making it clear to user that Value=0 or Error: Unable to fetch value

I have the following use case:

The user can select one or more of Items A/B/C… and be shown a time-series graph of variable X against t for the selected item(s). There is only one graph shown, so when multiple items are selected, the frontend overlays them in a single graph.

Sometimes the value of X can simply be 0 for the entire duration for an item, or there could be a technical error where the data can’t be fetched.

This can lead to a situation where the user sees 0 for the entire duration for Item A and assumes it’s a technical error, when really the value is just 0.

How do I make this differentiation (error fetching data vs. value = 0) clear to the user? Especially when Item A might be a value = 0 case, Item B an error, and both are shown on the same graph.

sharepoint online – How to clear data from certain cells in a row (either a list or form in power apps) based on selection from dropdown

Is there a way to clear data from certain cells in a row (either in a list or in a Power Apps form) based on a selection from a dropdown? For example, in an edit form the user clicks on Status and changes the current selection (In use) to another selection (Available), which then clears only the Pickup Date, Return Date and Current Owner fields.

unity – How to clear Resources.Load cache?

I am loading my assets and tile information through YAML files.

While working on the game, I needed to quickly change some values and test them in-game, and I realized I could just re-load the YAML information without having to restart.

However, before my files are processed by my YAML parser, they are loaded by using Resources.LoadAll, like this:

    foreach (var buildingTileType in Resources.LoadAll(Settings.BUILDING_TILES_PATH, typeof(TextAsset)))
    {
        var buildingTile = Deserializer.Deserialize<BuildingTile.Initializer>(((TextAsset) buildingTileType).text);
        BuildingTileInstances.Add(buildingTile.Name, buildingTile.CreateInstance());
    }

When I want to reload my defs, I clear the BuildingTileInstances dictionary and call the def-loading method again. However, I noticed the values stayed the same until I restarted the game.

I’ve read that Resources.Load actually caches your files for optimization, and in this case, I’d like to somehow clear that cache to allow for live reloading.

I’ve tried using Resources.UnloadUnusedAssets() and Caching.ClearCache(), but neither of them worked. Is there any way to achieve this?
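
For clarity, the reload flow described above looks roughly like this (ReloadBuildingTiles is just a placeholder name; the loop is the same one shown earlier, and the two unload calls are the ones that didn’t help):

    // Placeholder method name; this is the reload flow described above.
    void ReloadBuildingTiles()
    {
        // The calls I tried; neither makes Resources.LoadAll return fresh data here.
        Resources.UnloadUnusedAssets();
        Caching.ClearCache();

        // Drop the old instances and run the same loading loop again.
        BuildingTileInstances.Clear();
        foreach (var buildingTileType in Resources.LoadAll(Settings.BUILDING_TILES_PATH, typeof(TextAsset)))
        {
            var buildingTile = Deserializer.Deserialize<BuildingTile.Initializer>(((TextAsset) buildingTileType).text);
            BuildingTileInstances.Add(buildingTile.Name, buildingTile.CreateInstance());
        }
    }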

encryption – How bad is it to store credentials in clear text on disk and in memory?

Yeah, it depends. A good answer would provide some reflections on this. I have two concrete scenarios in mind, in two concrete (and I believe common) contexts. Context 1: at home, you’re the only one with access to the computer. Context 2: at work, you and anyone in the IT department have access to the computer.

When it comes to “secrets in-memory”, I have two things in mind. Storing too many secrets in memory, and the ability of user space applications to dump the memory of other processes.

  • Scenario 1: GCP (and I believe AWS and Azure) stores all credentials used by the user in cleartext on disk. GCP stores keys and everything in a cleartext SQLite database. Let’s assume that the disk is encrypted at rest but decrypted upon user login (this is what macOS FileVault would do, and I believe LUKS too).
  • Scenario 2: 1Password, LastPass and other password managers decrypt and load all secrets into memory, master password included, upon application login, and they don’t necessarily clean up upon application “lock”. This was “revealed” a couple of years back in the Washington Post; see this forum post for 1Password’s response. A notable exception is pass, which is probably what you should use if you have the same concerns as listed in this question.

The TL;DR from 1Password is that protecting against attackers who already have access to your system is nonsense; they could always intercept the keyboard, for instance. While that’s true, I would feel very uncomfortable if all it took to steal my entire online identity was to simply dump the memory of a process. Anyone in the IT department could do that; installing a keylogger, on the other hand, while possible, is much more intrusive. Dumping process memory is not straightforward on macOS with SIP enabled, but I don’t know what it would be like on Linux or Windows.

For cleartext credentials on disk, the consequence of these credentials falling into the wrong hands is very high for particular keys and access tokens. The company IT department is maybe not the greatest threat here, but this still feels like something you would want to cover as part of a blind threat model (to reference GitLab’s threat model framework). For a lot of use cases, envchain and sops can solve this for you. I don’t think it’s that straightforward with cloud vendor credentials, primarily because secrets are stored in multiple databases, and every single client library that hard-codes where to find the credentials would break.
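
For the simpler cases, the envchain pattern looks roughly like this, assuming I remember its CLI correctly (the aws namespace and the AWS variable names are just examples):

    # store the secrets once in the OS keychain; values are prompted for, not kept in cleartext files
    envchain --set aws AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY

    # inject them only into the environment of the command being run
    envchain aws aws s3 ls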

So that’s it. It seems clear that the modus operandi of a lot of companies breaks the common-sense security guideline of keeping secrets encrypted and decrypting only what you need. Could anyone provide some feedback on this train of thought and the conclusion? Is it really bad to have cleartext credentials on disk, and to decrypt far more than what you need?

How to clear cells – A2 on one sheet (Incdt) is linked to A2 (Resident) on another sheet; want to clear B2:F2 on (Incdt) when the Resident cell is modified

You have a spreadsheet containing two sheets: Incdt/Obs Rpt and Resident List. Data on Resident List is being updated daily. Data on Incdt/Obs Rpt is linked to cells/values on Resident List, specifically:

  • Cell A2 on Incdt/Obs Rpt is linked to Cell A2 on Resident List.

When cell A2 on Resident List is modified or changed, you want to clear the range B2:F2 on Incdt/Obs Rpt. Your script is designed to do this, but you are unsure what/where to modify in the script.


function onEdit(e){

  // create variables
  var sheet2monitor = "Resident List";
  var cell2monitor = "A2";
  var reportsheetname = "Incdt/Obs Rpt";
  var range2clear = "B2:F2";
  var source = e.source;
  var sheet = source.getSheetByName(reportsheetname);

  if(e.range.getA1Notation() == cell2monitor && e.range.getSheet().getName() == sheet2monitor) {
    // if is successful, do stuff
    // Logger.log("DEBUG: if is successful")
    sheet.getRange(range2clear).clear();
  }
  else{
    // if is not successful, do nothing
    // Logger.log("DEBUG: if is not successful");

  }
}

I created several variables that will enable you to easily modify ranges or sheet names if the need arises.

There are a couple of key differences from your draft script:

  • the cell that is being monitored for a change is A2 on sheet = Resident List
  • the range to be cleared is B2:F2 on sheet = Incdt/Obs Rpt
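
One small design note: Range.clear() removes formatting as well as values. If you only want to wipe the values in B2:F2 and keep the formatting, clearContent() should do that instead:

    // keeps the formatting and removes only the values in B2:F2
    sheet.getRange(range2clear).clearContent();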

website design – Why do people not notice our enormous, prominent, clear and contrasting purple banner?

I’m part of a MediaWiki site called D&D Wiki. One of our longstanding issues in the public eye has been our failure to label clearly enough which pages are categorised ‘Homebrew’, as opposed to ‘Official’.

Consequently, we pushed through a solution wherein all pages that are not ‘Official’ are labelled with this lovely homebrew banner. Brazenly displayed, contrasting with the site’s light, creamy browns, is this page-wide, striking black/dark-purple/red banner, complete with black-bordered white text that displays the words “Homebrew Page” very large and clear, plus a brief extra explanation.

Official pages and homebrew pages have different colour schemes, different fonts, different text sizes, different table layouts, different title schemes, and, notably, a banner declaring them ‘official content’ that is recognisably different at the shortest glance.

However, I have heard multiple times, from Reddit to our chat to Stack Exchange itself, that, and I quote, “the homebrew banner is inexplicably hard to notice despite being bright purple.” Somehow people are still getting these two categories of pages mixed up?

I profess my own inability to understand this situation. Did we overshoot human perception? Did we make it so noticeable, so… obvious, that it could not be seen from within, like humanity itself being unaware of the entirety of the universe around it?

How do we make people actually notice our banner? Or is there a better way to inform people of the homebrew nature of the content they’re seeing? Are these blind people all weird freaks, or am I somehow off my nut?


EDIT: Thanks all for the interest and helpful responses! For those interested, our subsequent discussion on the matter can be found on the site, here.