Windows 10 Start Menu (and other UI elements) not working (v1709 and above, i.e. Tile Store, NOT Tile Data Layer)

Some background information:

At some point, all of the "Modern UI" style Windows apps stopped working; e.g. the Calculator, Alarms & Clock, etc.

After restarting (to see if the issue would resolve itself), the Start menu no longer opens, and neither does the clock on the taskbar nor the Action Center (opened with Win + A). The sound, battery and network icons on the right side of the taskbar (whatever that area is called) also do not work.

Right-clicking still works for all of these, with the exception of the Action Center, which I cannot reach without the shortcut, but that is fine.

Windows Settings still opens with Win + I, and I can still access the Control Panel from the Run dialog (Win + R, then type control panel).


While trying to follow this answer, I ran into two problems:

  1. When logging into the temporary accounts, I get a black screen of death after Windows "loads" (still not sure how to deal with that).
  2. Looking for C:\Users\TempAdmin1\AppData\Local\TileDataLayer came up empty – the folder does not exist.

The TileDataLayer folder does not exist because Tile Data Layer was deprecated in favor of Tile Store in version 1709.

That answer is therefore no longer applicable. How do I reset or repair the Tile Store, or otherwise repair the Start menu and the rest of the malfunctioning UI?

Complex geometry – Show that the level set [of a continuous family of holomorphic maps] is locally path-connected

I am working with a continuous map $P \colon (0,1) \times W \to \mathbb{C}^n$, where $W \subset \mathbb{C}^n$ is an open, relatively compact ball centered at the origin. The map $P$ satisfies the following conditions:

  1. $P(t, \cdot)$ is holomorphic for every $t \in (0,1)$,

  2. the set $(\{t\} \times W) \cap P^{-1}(0)$ is finite and non-empty for every $t \in (0,1)$, and

  3. given $\epsilon > 0$, if $(s, \zeta_s) \in P^{-1}(0)$, then there is some $\delta > 0$ such that for all $t \in (s - \delta, s + \delta) \cap (0,1)$ there exists $(t, \zeta_t) \in P^{-1}(0)$ with $|\zeta_s - \zeta_t| < \epsilon$.

I would like to know whether the above conditions imply that the level set $P^{-1}(0)$ is locally path-connected, and if not, what additional assumptions might be needed to ensure this.

(To give this problem a bit more context, though it may not be necessary: $P$ is a 'period map' of a certain continuous function $\Psi \colon (0,1) \times W \times D \to \mathbb{C} \setminus \{0\}$, where $D$ is a compact, connected Riemann surface with boundary. That is, a non-vanishing holomorphic 1-form $\theta$ is fixed on $D$, and every component of $P(t, \zeta)$ is a period of the holomorphic 1-form $\Psi(t, \zeta, \cdot)\,\theta$. Essentially, I am looking for a homotopy of holomorphic functions on $D$ with vanishing periods.)
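Concretely, for some fixed cycles $\gamma_1, \dots, \gamma_n$ on $D$ (for instance a basis of $H_1(D, \mathbb{Z})$; the particular choice is not essential here), the period map has the form

$$P(t, \zeta) = \left( \int_{\gamma_1} \Psi(t, \zeta, \cdot)\,\theta, \;\dots,\; \int_{\gamma_n} \Psi(t, \zeta, \cdot)\,\theta \right),$$

so $P(t, \zeta) = 0$ exactly when all of these periods of $\Psi(t, \zeta, \cdot)\,\theta$ vanish.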

I do not know whether there is a general result that allows one to draw this conclusion, or whether further conditions need to be imposed.

java – Service layer returns DTOs to the controller, but must return the model to other services

Following this post https://stackoverflow.com/questions/21554977/should-services-always-return-dtos-or-can-they-also-return-domain-models and the software-architecture best-practice suggestions by Martin Fowler:

A Service Layer defines an application's boundary (Cockburn PloP) and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic.

I have a problem when I consider the following:

UserService {
     UserDto findUser();
}

UserService is fine when used from the controller, where the data in the DTO is all I need.

But here is the problem: if I use this service in another service, e.g. CustomerService, I need the actual User model object, because the model should be managed by a persistence context.

e.g.

CustomerService {
     void addCustomer() {
           Customer customer = new Customer();
           User user = userService.findUser(xxx); // BAM - compilation fails since findUser returns UserDto, not User
           customer.setUser(user);
     }
} 

What would be best practice here? Should I have two variants of the findUser method with two different return types, or two variants of the UserService class, one for use by controllers and the other for use by other services / the core package? Or should I implement a proxy pattern? A rough sketch of the first option is below.
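For illustration, a minimal sketch of the "two return types / two services" option (the split into UserQueryService and UserService, the long id parameter and the getId()/getName() accessors are my own assumptions, just to make the idea concrete):

// Internal service: returns the managed entity, for use by other services
interface UserQueryService {
    User findUser(long id); // hypothetical signature
}

// Boundary service: returns DTOs, for use by controllers
class UserService {
    private final UserQueryService userQueryService;

    UserService(UserQueryService userQueryService) {
        this.userQueryService = userQueryService;
    }

    UserDto findUser(long id) {
        User user = userQueryService.findUser(id);
        // map the entity to a DTO at the application boundary
        return new UserDto(user.getId(), user.getName());
    }
}

// Other services depend on the internal service and keep working with
// entities that are managed by the persistence context
class CustomerService {
    private final UserQueryService userQueryService;

    CustomerService(UserQueryService userQueryService) {
        this.userQueryService = userQueryService;
    }

    void addCustomer(long userId) {
        Customer customer = new Customer();
        customer.setUser(userQueryService.findUser(userId)); // entity, not DTO
        // ... persist the customer via its repository
    }
}

This keeps a single source of truth for the lookup while the DTO mapping only happens at the boundary; whether that is preferable to a proxy is exactly what I am unsure about.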

web scraping – Homemade web scraper that feeds a perceptron with a hidden layer (Python)

This is my first time here and one of my first projects in general. First of all, I apologize for the mess that is my code. I know it is very hard to follow, inefficient, and probably not idiomatic. I just do not know how to fix it, which is why I am here.

The whole thing is a school project. Basically, I scrape data from an online database containing information about the expression of certain genes in certain cells. I then feed this data into a perceptron model to predict the effects of certain genes on cell morphology. I know it would be better to use a framework for both parts (e.g. TensorFlow, Scrapy), but since it is for school, I am limiting myself to "standard libraries".

I would be very happy about any advice. Tear me apart!

import numpy as np  
import csv  
from sklearn.preprocessing import LabelEncoder  
from sklearn.preprocessing import OneHotEncoder  
from bs4 import BeautifulSoup  
import requests  

class Scraper:
    def create_genes(self, amazonia_list): #'http://amazonia.transcriptome.eu/list.php?section=display&id=388'
        results = requests.get(amazonia_list) 
        list_page = results.text

        soup = BeautifulSoup(list_page, 'html.parser')

        table = soup.find('div', class_='field')
        rows = table.find_all('tr')

        with open('data/genes.csv', 'w') as file:
            fieldnames = []
            for row in rows:
                header_sections = row.find_all('th')
                for item in header_sections:
                    fieldnames.append(item.get_text())

            writer = csv.writer(file, delimiter=',', lineterminator='\n')
            writer.writerow(fieldnames)

            for row in rows:
                body_sections = row.find_all('td')
                gene = []
                for i in range(len(body_sections)):
                    gene.append(body_sections[i].get_text())
                writer.writerow(gene)

        fieldnames = []
        body_sections = []
        gene = []
        rows = []

    def genes_length(self):
        with open('data/genes.csv', 'r') as file:
            return len(file.readlines())

    def id(self, line_num):
        with open('data/genes.csv', 'r') as file:
            reader = csv.DictReader(file, delimiter=',')
            for line in reader:
                if (reader.line_num == line_num):
                    return line("List Item")

    def link(self, id):
        link = "http://amazonia.transcriptome.eu/expression.php?section=displayData&probeId=" + str(id) + "&series=HBI"
        return link

    def abbreviation(self, id):
        with open('data/genes.csv', 'r') as file:
            reader = csv.DictReader(file, delimiter=',')
            for line in reader:
                if (line("List Item") == id): 
                    return line("Abbreviation")

    def create_data(self):
        with open('data/data.csv', 'w') as output:
            fieldnames = ("Gene", "Samples", "Signal", "p-Value")
            writer = csv.writer(output, delimiter=',', lineterminator='n')
            writer.writerow(fieldnames)

            for line in range(Scraper.genes_length(self)):        
                results = requests.get(Scraper.link(self, Scraper.id(self, line)))
                page = results.text
                soup = BeautifulSoup(page, 'html.parser')

                table = soup.find_all('tr')
                for row in table:
                    gene = []
                    entries = row.find_all('td')
                    for i in range(3):
                        gene.append(entries[i].get_text())
                    entry = []
                    entry.append(Scraper.abbreviation(self, Scraper.id(self, line)))
                    entry += gene

                    writer.writerow(entry)

        fieldnames = []
        table = []
        gene = []
        entry = []


    def main(self):
        Scraper.create_genes(self, 'http://amazonia.transcriptome.eu/list.php?section=display&id=388')
        Scraper.create_data(self)

class Model:
    def nonlin(self, X, deriv = False):
        if (deriv == True):
            return X * (1 - X)
        else:
            return 1 / (1 + np.exp(-X))

    def init_data(self):
        scraper = Scraper()
        scraper.main()

        with open('data/data.csv', 'r') as file:
            reader = csv.reader(file, delimiter=',')
            headers = next(reader)
            data = list(reader)
            data = np.array(data, dtype=object)

            abbreviation = data[:,0]
            cell = data[:,1]
            signal = data[:,2]
            p_value = data[:,3]

            onehot_encoder = OneHotEncoder(sparse=False)

            abbreviation = abbreviation.reshape(len(abbreviation), 1)
            abbreviation_encoded = onehot_encoder.fit_transform(abbreviation)

            cell = cell.reshape(len(cell), 1)
            cell_encoded = onehot_encoder.fit_transform(cell)

            abbreviation_encoded = np.array(abbreviation_encoded)
            cell_encoded = np.array(cell_encoded)
            p_value = np.array(p_value)
            signal = np.array(signal)

            input = np.column_stack((abbreviation_encoded, cell_encoded, p_value))
            output = signal

            data = []
            abbreviation = []
            cell = []
            abbreviation_encoded = []
            cell_encoded = []
            p_value = []

        return input, output

    def error(self, out, Y):
        sum = 0
        error = (1/2) * ((Y - out) ** 2)

        rows = np.shape(error)[0]
        columns = np.shape(error)[1]
        for row in range(rows):
            for column in range(columns):
                sum += out[row][column]

        mean = sum / (rows * columns)
        return round(mean * 100, 2)

    def layer_out(self, layer_input, layer_weights):
        net = np.dot(layer_input, layer_weights)
        out = nonlin(net)
        return net, out

    def update_weights(self, layer_input, layer_weights, Y):
        net, out = layer_out(layer_input, layer_weights)
        self.weight_delta = -(Y - out) * (net * (1 - net)) * layer_input

        net = ()
        out = ()

        return weight_delta

    def main(self):
        X, Y = Model.init_data(self)
        for epoch in range(100000):
            print("Training Epoch:", epoch, "-", (epoch/10000) * 100, "%")

            inputs = np.shape(X)[1]
            hidden_neurons = 2 * inputs
            output_neurons = np.shape(Y)[0]

            input_weights = np.multiply(2, np.random.rand(inputs, hidden_neurons)) - 1
            hidden_weights = 2 * (np.random.rand(hidden_neurons, output_neurons)) - 1
            output_weights = 2 * (np.random.rand(output_neurons, 0)) - 1

            l1 = Model.layer_out(self, X, input_weights)
            input_weights += Model.update_weights(self, X, input_weights)

            h1 = Model.layer_out(self, l1, hidden_weights)
            first_hidden_weights += Model.update_weights(self, l1, hidden_weights)

            output = Model.layer_out(self, h1, output_weights)
            output_weights += Model.update_weights(self, h1, output_weights)

        print("Predicted:n", output)
        print("nActual:n", Y)
        print("n Mean Error:n", Model.error(self, output, Y),"%")

model = Model()
model.main()

Thank you all!

Java – Is the builder pattern suitable for updating objects in a service layer?

Currently, we pass an ID and the new, updated value to our service layer, something like this:

updatePersonName(Person person, Name name)

which in turn calls the appropriate repository functions.
This works fine as long as only a single value needs to be updated. However, as soon as we want to update multiple values at the same time, we either have to call several service methods in succession or define a service method that takes multiple arguments to update.
This is bearable, but it gets worse when several values have to be updated together as a unit, which means we need to define a (possibly new) method in the service that ensures the combination is respected.
And once constraints come into play (say, a person who is marked as not updatable for a variety of reasons), that only adds to the complexity. The service interface quickly grows into something like the sketch below.
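To illustrate what I mean by the growing number of methods (these combined method names are made up):

public interface PersonService {
    void updatePersonName(Person person, Name name);
    void updatePersonNameAndRole(Person person, Name name, Role role);
    void updatePersonNameRoleAndFlag(Person person, Name name, Role role, PersonFlag flag);
    // ... and yet another method for every new combination or constraint check
}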

Lately, I've been thinking about using the builder pattern, not for creating objects but for updating them. Something like this (all method names are chosen arbitrarily, but you get the point):

PersonService.updater(person)
    .setName("newName")
    .addRole(newRoleObject)
    .setFlag(PersonFlag.NOT_UNDERAGE)
    .overrideBlock()
    .withEventDescription("Person changed her name on birthday!")
    .update();

The builder can handle the logic internally without exposing too much complexity to the outside. The fluent API is easy to use for any other services / components that need access. I do not have to build up a wealth of methods to cover every requirement. Multiple updates are easy to chain together. If you try to update something that is not allowed, it can be blocked internally unless you explicitly override that block. And it would be able to enforce certain fields for safety reasons, such as an EventObject. A rough sketch of how I imagine the updater internally is below.
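A rough sketch of one possible shape for the updater (the method names are the arbitrary ones from above; isBlocked(), the field types and the single persistence call at the end are assumptions for illustration):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PersonUpdater {
    private final Person person;
    private String newName;
    private final List<Role> rolesToAdd = new ArrayList<>();
    private final Set<PersonFlag> flagsToSet = new HashSet<>();
    private boolean overrideBlock;
    private String eventDescription;

    PersonUpdater(Person person) {
        this.person = person;
    }

    public PersonUpdater setName(String name) {
        this.newName = name;
        return this;
    }

    public PersonUpdater addRole(Role role) {
        rolesToAdd.add(role);
        return this;
    }

    public PersonUpdater setFlag(PersonFlag flag) {
        flagsToSet.add(flag);
        return this;
    }

    public PersonUpdater overrideBlock() {
        this.overrideBlock = true;
        return this;
    }

    public PersonUpdater withEventDescription(String description) {
        this.eventDescription = description;
        return this;
    }

    public void update() {
        // internal constraint checks, hidden from the caller
        if (person.isBlocked() && !overrideBlock) {
            throw new IllegalStateException("Person is currently blocked for updates");
        }
        if (eventDescription == null) {
            throw new IllegalStateException("An event description is required for every update");
        }
        // apply newName, rolesToAdd and flagsToSet to the person here,
        // then persist everything in a single repository call
    }
}

PersonService.updater(person) would then simply be a factory method that returns a new PersonUpdater.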

More importantly, this could ensure that we make only one trip to the repository instead of several. That would improve runtime, especially for performance-critical code paths that would otherwise require many round trips to the database.

Of course, I also see some problems with this approach. It is bulky, it is an unconventional API for people who are unfamiliar with it, and it can invite misuse. The implementation is not trivial if the internal logic is to hold together. Still, I think the positives outweigh the negatives in my situation.

Am I missing something?

amazon web services – Docker Layer Timeouts on some pulls, but not on all pulls for a particular app

We use Harbor as our private registry. Traffic goes through an AWS Application Load Balancer to reach Harbor. Last night, the layers of a Docker image for a particular application timed out on some pulls, but not all. The Docker image is less than 1 GB in size. Since a Docker layer corresponds to an instruction in the image's Dockerfile, I removed the line in the Dockerfile that corresponded to the problematic layer and rebuilt the image, thinking that one particular layer had a problem. However, subsequent pulls then timed out on completely different layers. I then pulled known-good image versions of this app and saw the same timeouts.

The purpose of this question is to understand:
1. What can cause a layer download to time out?
2. Why would a timeout for a given layer sometimes occur only on some hosts? (i.e. if a layer has a problem, why wouldn't it time out every time?)
3. For what reasons would only the images of one particular app time out? (i.e. if it were the network or the registry, you would expect the problem to show up in other image downloads as well.)

Browsers cannot download files; there seems to be a problem with the storage layer

The phone is a Samsung Galaxy Note N7000. For the last few years I have been using the NightOwl custom ROM (Android 7). I cannot remember when this problem first appeared, possibly after repartitioning with REPIT, but now I cannot take photos anymore. The camera app says I need to insert an SD card, yet I can take photos through the Evernote app. Also, I cannot see any photos in WhatsApp because my phone cannot download them, and web browsers cannot download files either. But I can install most apps and, for example, cache my notes in Evernote.

In recovery mode in TWRP, I found that I have a partition sdcard1 that is 0 bytes in size. Would removing this partition perhaps fix my storage problem?
(Screenshots: TWRP partition list, showing sdcard1 at 0 bytes and sdcard0.)

I tried flashing the same ROM again.

I asked in the thread for this ROM, but nobody answered.

The phone has no external SD card.

Thanks for your help.

Best Practices – MySQL Sharding. Application layer vs. MySQL layer

I am going to shard a MySQL table, and I am stuck on whether I should choose application-level sharding or let MySQL handle it.
That is, should I determine which shard to query at the application level, or should I pass the query to the MySQL driver and let it decide?
I've read about some pros and cons of both, but I cannot draw a conclusion. A sketch of what I mean by application-level routing is below.
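To make it concrete, by application-level sharding I mean something along these lines (a rough sketch; the ShardRouter name and the modulo-on-id routing are just one possible scheme I made up):

import java.util.List;
import javax.sql.DataSource;

public class ShardRouter {
    private final List<DataSource> shards; // one DataSource per MySQL shard

    public ShardRouter(List<DataSource> shards) {
        this.shards = shards;
    }

    // The application itself decides which shard holds a given row
    public DataSource shardFor(long userId) {
        int index = (int) Math.floorMod(userId, (long) shards.size());
        return shards.get(index);
    }
}

The alternative would be to send every query to a single endpoint (e.g. a sharding-aware driver or proxy in front of MySQL) and let that layer pick the shard.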

My personal inclination is towards the MySQL driver approach, but I am open to discussion.
Please share your views / experiences on this.