I will make profit and loss statement of your business for $10

I will make profit and loss statement of your business

WELCOME TO MY GIG!

Do you want to know the PROFIT and LOSS of your business?

I am here to help you prepare your Profit and Loss Statement on a daily, weekly, monthly, or annual basis, as per your requirement.

In this statement, I will calculate the following line items:

  • Sales generated by your business
  • Cost of goods/services for the sales generated by your business
  • Profit before expenses and taxes
  • All expenses related to the business
  • Tax related to the business
  • Profit after expenses and taxes

and all Other Comprehensive Income that is not generated by your business but that you want recognized in it.

Feel free to message me before placing an order, so that I can understand your requirements in detail.

GROW YOUR BUSINESS WITH US


linux – Add GPT to disk that has bare EXT4 file system without data loss

I have a few big spinning drives that I formatted as EXT4 without adding a GPT first. They work just fine on Linux, obviously, but I need to move these drives to a FreeBSD box, and the mount plugin for EXT4 won’t support them. These are big drives, and copying the whole contents to another drive so I could wipe them, then transferring everything back after partitioning, would take many days. Is there a way to move the filesystem forward a bit and add a partition table?

image processing – What is the default loss function used by the GradientBoostedTrees classifier?

I’ve used Classify[…] for a two-class image classification problem, which has yielded pretty good results. I let Mathematica select an algorithm for me and it picked a GBT model; everything looked good and the algorithm performed well.

I put the whole thing away for a while, but now it’s come time to publish, and although it’s a small part of the study, I feel I should know which loss function Mathematica chose for the model. I figured this would be easily accessible, but I can’t seem to find a citation or mention of it anywhere. I’m fairly certain it must use cross entropy, but “fairly certain” obviously isn’t going to get past a reviewer.

If anyone has any idea, or knows how to get this information out of the model after training, I’d be very grateful if you could share your insights.

profit or loss when going short in spot trading BTC/USDT

How do you calculate the profit/loss on a spot sell order for BTC/USDT?

I like this formula:

  1. First, suppose you buy some number of bitcoins, and x1 is how much you paid for them.

  2. Now the price of 1 bitcoin changes.

  3. Then you use the formula:

    x2 = (new price of 1 bitcoin) * (number of bitcoins you have)

Now you just find the difference between x1 and x2. This will give you your profit/loss.

All you have to remember is the initial value. In your case it is 50.

Therefore,

x1 = 50

number of bitcoins, in this case, = 50 / 3916.74 ≈ 0.0128

x2 = 4200 * 0.0128 = 53.76

Example: I sell $950 of BTC at 45000, and then the price goes up to 50000. Is that a profit or a loss?

Is x1 the bitcoins I bought or the number of bitcoins I sold?

For example: 950 / 45000 ≈ 0.0211; 0.0211 × 50000 ≈ 1055; 1055 − 950 = 105 loss? Or:

950 / 50000 = 0.019; 0.019 × 45000 = 855; 950 − 855 = 95 loss.
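
For reference, here are those numbers run through a small Python sketch of the quoted formula; the function name is arbitrary and the short-side interpretation in the comments is only illustrative, not a definitive answer.

# A small sketch of the quoted formula (x2 - x1). The quantity of BTC is fixed
# at the entry price, and the profit/loss is the change in its USDT value.
def spot_pnl(usdt_amount, entry_price, exit_price):
    btc_quantity = usdt_amount / entry_price       # how many BTC 950 USDT buys/sells at entry
    value_at_exit = btc_quantity * exit_price      # x2
    return value_at_exit - usdt_amount             # x2 - x1

print(spot_pnl(950, 45000, 50000))  # ≈ +105.56

# If you are holding the BTC, +105.56 is a gain in value; if you sold at 45000,
# it is roughly what it would now cost extra to buy the same BTC back.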

Thank you!

If there is any other formula, it would be appreciated.

Many thanks

formatting – Loss function in python is a bit cumbersome

This is my first post here. I seek to improve my way of writing Python and C++ code. I hope I can also contribute to others when I have increased my skill.

Here is a Python function I have for a neural network I have implemented. I feel it has become a bit cumbersome. The intention is to weight positive labels in channels 1:end higher than background pixels; hence the distinction between foreground and background.

def loss_function(
        self,
        x: torch.tensor,
        groundtruth: torch.tensor,
        weight: float
) -> torch.tensor:

    delta = 0.00001
    groundtruth_pos = groundtruth[:, 1:, :, :] == 1
    groundtruth_neg = groundtruth[:, 1:, :, :] == 0
    foreground_gt = groundtruth[:, 1:, :, :]
    background_gt = groundtruth[:, 0, :, :]

    foreground_x = x[:, 1:, :, :]
    background_x = x[:, 0, :, :]
    loss_foreground_pos = -(torch.sum(foreground_gt[groundtruth_pos] * torch.log(foreground_x[groundtruth_pos] + delta))
                            + torch.sum((1 - foreground_gt[groundtruth_pos]) * torch.log(1 - foreground_x[groundtruth_pos] + delta)))
    loss_foreground_neg = -(torch.sum(foreground_gt[groundtruth_neg] * torch.log(foreground_x[groundtruth_neg] + delta))
                            + torch.sum((1 - foreground_gt[groundtruth_neg]) * torch.log(1 - foreground_x[groundtruth_neg] + delta)))

    loss_background = -(torch.sum(background_gt * torch.log(background_x + delta))
                        + torch.sum((1 - background_gt) * torch.log(1 - background_x + delta)))
    return weight * loss_foreground_pos + loss_foreground_neg + loss_background
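
For comparison, here is a more compact sketch of what I believe is the same computation, built from a single element-wise BCE map plus boolean masks. It assumes groundtruth contains only 0s and 1s, and the helper names are arbitrary.

import torch

def loss_function_compact(x, groundtruth, weight, delta=1e-5):
    # Same tensor layout as above: channel 0 is background, channels 1: are foreground.
    fg_x, fg_gt = x[:, 1:], groundtruth[:, 1:]
    bg_x, bg_gt = x[:, 0], groundtruth[:, 0]

    def bce(pred, target):
        # element-wise binary cross-entropy with the same delta smoothing
        return -(target * torch.log(pred + delta) + (1 - target) * torch.log(1 - pred + delta))

    fg_bce = bce(fg_x, fg_gt)
    pos = fg_gt == 1                      # assumes groundtruth is strictly 0/1
    loss_fg_pos = fg_bce[pos].sum()
    loss_fg_neg = fg_bce[~pos].sum()
    loss_bg = bce(bg_x, bg_gt).sum()
    return weight * loss_fg_pos + loss_fg_neg + loss_bg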

performance – Monte Carlo Tree Search Optimization and Loss Prevention

I’m working on an implementation of Monte Carlo Tree Search in Swift.

It’s not bad, but it could be better! I’m principally interested in making my algorithm:

  1. faster (more iterations/second)
  2. prioritize moves that prevent instant losses (you’ll see…)

Here is the main driver:

import Foundation

final class MonteCarloTreeSearch {
    var player: Player
    var timeBudget: Double
    var maxDepth: Int
    var explorationConstant: Double
    var root: Node?
    var iterations: Int

    init(for player: Player, timeBudget: Double = 5, maxDepth: Int = 5, explorationConstant: Double = sqrt(2)) {
        self.player = player
        self.timeBudget = timeBudget
        self.maxDepth = maxDepth
        self.explorationConstant = explorationConstant
        self.iterations = 0
    }
    
    func update(with game: Game) {
        if let newRoot = findNode(for: game) {
            newRoot.parent = nil
            newRoot.move = nil
            root = newRoot
        } else {
            root = Node(game: game)
        }
    }

    func findMove(for game: Game? = nil) -> Move? {
        iterations = 0
        let start = CFAbsoluteTimeGetCurrent()
        if let game = game {
            update(with: game)
        }
        while CFAbsoluteTimeGetCurrent() - start < timeBudget {
            refine()
            iterations += 1
        }
        print("Iterations: (iterations)")
        return bestMove
    }
    
    private func refine() {
        let leafNode = root!.select(explorationConstant)
        let value = rollout(leafNode)
        leafNode.backpropogate(value)
    }
    
    private func rollout(_ node: Node) -> Double {
        var depth = 0
        var game = node.game
        while !game.isFinished {
            if depth >= maxDepth { break }
            guard let move = game.randomMove() else { break }
            game = game.update(move)
            depth += 1
        }
        let value = game.evaluate(for: player).value
        return value
    }
    
    private var bestMove: Move? {
        root?.selectChildWithMaxUcb(0)?.move
    }
    
    private func findNode(for game: Game) -> Node? {
        guard let root = root else { return nil }
        var queue = [root]
        while !queue.isEmpty {
            let head = queue.removeFirst()
            if head.game == game {
                return head
            }
            for child in head.children {
                queue.append(child)
            }
        }
        return nil
    }
}

I built this driver with a maxDepth argument because playouts/rollouts in my real game are fairly long and I have access to a decent static evaluation function. Also, the BFS findNode method is there so that I can reuse parts of the tree.

Here’s what a node in the driver looks like:

final class Node {
    weak var parent: Node?
    var move: Move?
    var game: Game
    var untriedMoves: [Move]
    var children: [Node]
    var cumulativeValueFor: Double
    var cumulativeValueAgainst: Double
    var visits: Double

    init(parent: Node? = nil, move: Move? = nil, game: Game) {
        self.parent = parent
        self.move = move
        self.game = game
        self.children = []
        self.untriedMoves = game.availableMoves()
        self.cumulativeValueFor = 0
        self.cumulativeValueAgainst = 0
        self.visits = 0
    }
    
    var isFullyExpanded: Bool {
        untriedMoves.isEmpty
    }
    
    lazy var isTerminal: Bool = {
        game.isFinished
    }()
    
    func select(_ c: Double) -> Node {
        var leafNode = self
        while !leafNode.isTerminal {
            if !leafNode.isFullyExpanded {
                return leafNode.expand()
            } else {
                leafNode = leafNode.selectChildWithMaxUcb(c)!
            }
        }
        return leafNode
    }
    
    func expand() -> Node {
        let move = untriedMoves.popLast()!
        let nextGame = game.update(move)
        let childNode = Node(parent: self, move: move, game: nextGame)
        children.append(childNode)
        return childNode
    }
    
    func backpropogate(_ value: Double) {
        visits += 1
        cumulativeValueFor += value
        if let parent = parent {
            parent.backpropogate(value)
        }
    }
    
    func selectChildWithMaxUcb(_ c: Double) -> Node? {
        children.max { $0.ucb(c) < $1.ucb(c) }
    }

    func ucb(_ c: Double) -> Double {
        q + c * u
    }
    
    private var q: Double {
        let value = cumulativeValueFor - cumulativeValueAgainst
        return value / visits
    }
    
    private var u: Double {
        sqrt(log(parent!.visits) / visits)
    }
}

extension Node: CustomStringConvertible {
    var description: String {
        guard let move = move else { return "" }
        return "(move) ((cumulativeValueFor)/(visits))"
    }
}

I don’t think there’s anything extraordinary about my node object. I am hoping, though, that I can do something to/about q so that I might prevent an “instant” loss in my test game.


I’ve been testing this implementation of MCTS on a 1-D variant of “Connect 4”.

Here’s the game and all of its primitives:

enum Player: Int {
    case one = 1
    case two = 2
    
    var opposite: Self {
        switch self {
        case .one: return .two
        case .two: return .one
        }
    }
}

extension Player: CustomStringConvertible {
    var description: String {
        "(rawValue)"
    }
}

typealias Move = Int

enum Evaluation {
    case win
    case loss
    case draw
    case ongoing(Double)
    
    var value: Double {
        switch self {
        case .win: return 1
        case .loss: return 0
        case .draw: return 0.5
        case .ongoing(let v): return v
        }
    }
}

struct Game {
    var array: Array<Int>
    var currentPlayer: Player
    
    init(length: Int = 10, currentPlayer: Player = .one) {
        self.array = Array.init(repeating: 0, count: length)
        self.currentPlayer = currentPlayer
    }
    
    var isFinished: Bool {
        switch evaluate() {
        case .ongoing: return false
        default: return true
        }
    }

    func availableMoves() -> [Move] {
        array
            .enumerated()
            .compactMap { $0.element == 0 ? Move($0.offset) : nil}
    }
    
    func update(_ move: Move) -> Self {
        var copy = self
        copy.array[move] = currentPlayer.rawValue
        copy.currentPlayer = currentPlayer.opposite
        return copy
    }
    
    func evaluate(for player: Player) -> Evaluation {
        let player3 = three(for: player)
        let oppo3 = three(for: player.opposite)
        let remaining0 = array.contains(0)
        switch (player3, oppo3, remaining0) {
        case (true, true, _): return .draw
        case (true, false, _): return .win
        case (false, true, _): return .loss
        case (false, false, false): return .draw
        default: return .ongoing(0.5)
        }
    }
    
    private func three(for player: Player) -> Bool {
        var count = 0
        for slot in array {
            if slot == player.rawValue {
                count += 1
            } else {
                count = 0
            }
            if count == 3 {
                return true
            }
        }
        return false
    }
}

extension Game {
    func evaluate() -> Evaluation {
        evaluate(for: currentPlayer)
    }
    
    func randomMove() -> Move? {
        availableMoves().randomElement()
    }
}

extension Game: CustomStringConvertible {
    var description: String {
        return array.reduce(into: "") { result, i in
            result += String(i)
        }
    }
}

extension Game: Equatable {}

While there are definitely efficiencies to be gained in optimizing the evaluate/three(for:) scoring methods, I’m more concerned about improving the performance of the driver and the node as this “1d-connect-3” game isn’t my real game. That said, if there’s a huge mistake here and a simple fix I’ll take it!

Another note: I am actually using ongoing(Double) in my real game (I’ve got a static evaluation function that can reliably score a player as 1-99% likely to win).


A bit of Playground code:


var mcts = MonteCarloTreeSearch(for: .two, timeBudget: 5, maxDepth: 3)
var game = Game(length: 10)
// 0000000000
game = game.update(0) // player 1
// 1000000000
game = game.update(8) // player 2
// 1000000020
game = game.update(1) // player 1
// 1100000020
let move1 = mcts.findMove(for: game)!
// usually 7 or 9... and not 2
print(mcts.root!.children)
game = game.update(move1) // player 2
mcts.update(with: game)
game = game.update(4) // player 1
mcts.update(with: game)
let move2 = mcts.findMove()!

Unfortunately, move1 in this sample playthrough doesn’t try to prevent the instant win available to player 1 on the next turn. (I know that orthodox Monte Carlo Tree Search is in the business of maximizing wins rather than minimizing losses, but not picking 2 here is unfortunate.)

So yeah, any help in making all this faster (perhaps through parallelization), and fixing the “instant-loss” business would be swell!
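
One possible direction for the instant-loss issue, sketched below under the assumption that a game-specific check outside the search is acceptable (this is not part of orthodox MCTS): before trusting the search result, check whether the opponent has an immediate win and, if so, occupy that square yourself. It uses the Game API from this post (availableMoves(), evaluate(for:), currentPlayer, array) and is specific to this 1-D connect-3 toy game.

// Sketch of a pre-search check against instant losses.
func immediateThreat(in game: Game) -> Move? {
    let opponent = game.currentPlayer.opposite
    for move in game.availableMoves() {
        var hypothetical = game
        // Pretend the opponent gets to play this square right now.
        hypothetical.array[move] = opponent.rawValue
        if case .win = hypothetical.evaluate(for: opponent) {
            return move   // occupying this square ourselves blocks the instant loss
        }
    }
    return nil
}

// Usage sketch: let move = immediateThreat(in: game) ?? mcts.findMove(for: game)!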

Reload Windows wallpaper after connection loss

Our Windows domain provides a wallpaper image via group policy. The image file is accessible through a network path.

If there is no network connection during client computer boot, the wallpaper will be black, because Windows couldn’t access the file. This sometimes happens with a Wi-Fi connection and always happens with VPN.

Unfortunately, Windows doesn’t automatically try to load it again once the network connection is established.

How can I force Windows to reload the background image?

python – Conv-Variational-Autoencoder Loss is NaN

I am training a variational autoencoder. Suddenly my loss explodes and then becomes NaN, and I don’t know why. When evaluating the trained VAE on an image, the output data contains Inf values, so I guess it’s happening in the sampling method of the VAE, but why does it suddenly explode and how can I prevent it?


import numpy as np
import torch
import torch.nn as nn

class VAE(nn.Module):
   def __init__(self, input_shape, z_dim):
       super().__init__()
       self.z_dim = z_dim
       self.input_shape = input_shape

       # encoder
       self.encoder_conv = nn.Sequential(
           nn.Conv2d(1, 32, 3, stride=2, padding=1),
           nn.BatchNorm2d(32),
           nn.LeakyReLU(),
           nn.Conv2d(32, 64, 3, stride=2, padding=1),
           nn.BatchNorm2d(64),
           nn.LeakyReLU(),
           nn.Conv2d(64, 64, 3, stride=2, padding=1),
           nn.BatchNorm2d(64),
           nn.LeakyReLU(),
           nn.Conv2d(64, 64, 3, stride=2, padding=1),
           nn.BatchNorm2d(64),
           nn.LeakyReLU()
       )
       self.conv_out_size = self._get_conv_out_size(input_shape)
       self.mu = nn.Sequential(
           nn.Linear(self.conv_out_size, z_dim),
           nn.LeakyReLU(),
           nn.Dropout(0.2)
       )
       self.log_var = nn.Sequential(
           nn.Linear(self.conv_out_size, z_dim),
           nn.LeakyReLU(),
           nn.Dropout(0.2)
       )

       # decoder
       self.decoder_linear = nn.Sequential(
           nn.Linear(z_dim, self.conv_out_size),
           nn.LeakyReLU(),
           nn.Dropout(0.2)
       )
       
       self.decoder_conv = nn.Sequential(
           nn.UpsamplingNearest2d(scale_factor=2),
           nn.ConvTranspose2d(64, 64, 3, stride=1, padding=1),
           nn.BatchNorm2d(64),
           nn.LeakyReLU(),
           nn.UpsamplingNearest2d(scale_factor=2),
           nn.ConvTranspose2d(64, 64, 3, stride=1, padding=1),
           nn.BatchNorm2d(64),
           nn.LeakyReLU(),
           nn.UpsamplingNearest2d(scale_factor=2),
           nn.ConvTranspose2d(64, 32, 3, stride=1, padding=1),
           nn.BatchNorm2d(32),
           nn.LeakyReLU(),
           nn.UpsamplingNearest2d(scale_factor=2),
           nn.ConvTranspose2d(32, 1, 3, stride=1, padding=(5,3)),
           nn.Sigmoid()
       )

   def sampling(self, mu, log_var):
       ## TODO: epsilon should be at the model's device (not CUDA)
       epsilon = torch.Tensor(np.random.normal(size=(self.z_dim), scale=1.0)).cuda()
       return mu + epsilon * torch.exp(log_var / 2)

   def forward_encoder(self, x):
       x = self.encoder_conv(x)
       x = x.view(x.size()[0], -1)
       mu_p = self.mu(x)
       log_var_p = self.log_var(x)
       return (mu_p, log_var_p)

   def forward_decoder(self, x):
       x = self.decoder_linear(x)
       x = x.view(x.size()[0], *self.conv_out_shape[1:])
       x = self.decoder_conv(x)
       return x

   def forward(self, x):
       mu_p, log_var_p = self.forward_encoder(x)
       x = self.sampling(mu_p, log_var_p)
       images_p = self.forward_decoder(x)
       return (mu_p, log_var_p, images_p)

   def _get_conv_out_size(self, shape):
       out = self.encoder_conv(torch.zeros(1, *shape))
       self.conv_out_shape = out.size()
       return int(np.prod(self.conv_out_shape))


   def forward_no_epsilon(self, x):
       mu_p, log_var_p = self.forward_encoder(x)
       x = mu_p
       images_p = self.forward_decoder(x)
       return images_p

Loss:

def kl_loss(mu, log_var):
    # TODO: divide by the number of batches?
    return -0.5 * torch.mean(1 + log_var - mu.pow(2) - torch.exp(log_var))

def r_loss(y_train, y_pred):
    r_loss = torch.mean((y_train - y_pred) ** 2)
    return r_loss

train:

mu_v, log_var_v, images_out_v = vae(images_v)
r_loss_v = r_loss(images_out_v, labels_v)
kl_loss_v = kl_loss(mu_v, log_var_v)
loss = kl_loss_v + r_loss_v * 10000.0
loss.backward()
optimizer.step()
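
For reference, one common stabilization (a hedged sketch, not a confirmed fix for this model) is to clamp log_var before it is exponentiated, so neither the KL term nor the sampled latents can overflow:

import torch

# Hedged sketch: the clamp range is an arbitrary assumption; everything else
# mirrors the kl_loss above.
def kl_loss_clamped(mu, log_var):
    log_var = torch.clamp(log_var, min=-10.0, max=10.0)
    return -0.5 * torch.mean(1 + log_var - mu.pow(2) - torch.exp(log_var))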


python – How do I define a custom loss function?

Hello, I’m trying to build a custom loss function using the Keras Functional API, and I’m having trouble with the implementation, since I’ve never done anything like this before.

I want to build a loss function that, in addition to receiving y_true and y_pred, also receives a third variable (y_teorico) that would be computed from the input data itself. This is how I would like the loss to look:

Loss = MSE (y_true, y_pred) + MSE (y_pred, y_teorico)

Below is my dataset, where the first 14 columns are the network input and the output is the last column, ‘M’, which is a direct measurement.


This third variable (y_teorico) that I want to introduce into the loss function is computed with a formula over the input data:

y_teorico = (np.sqrt(
                     2*(CONSTANTE**2) + 
                     2*((E1*E2)) -
                     2*((px1*px2) + (py1*py2) + (pz1*pz2))
                     ))

I don’t know how to do this inside my loss. I read that it could be done with a custom train step, as in the link below, but I’m still having difficulty with it.

https://keras.io/guides/customizing_what_happens_in_fit/#a-first-simple-example

Here is how I’m trying to do it:

Defining training and test sets:

x_train = data.drop(columns=['M'])
X_train = x_train.apply(lambda x: (x - np.mean(x)) / (np.max(x) - np.min(x)))

y_train = data.M

Creating the model:

class CustomModel(keras.Model):
    def train_step(self, data):
        x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)

            elec_mass = 0.5109989461
            y_teoric = (np.sqrt(
                          2*(elec_mass**2) +
                          2*((self.E1*self.E2)) -
                          2*((self.px1*self.px2) + (self.py1*self.py2) + (self.pz1*self.pz2))
                          ))

            # Compute the loss value
            # (the loss function is configured in `compile()`)

            GAMMA = 1
            L_data = self.compiled_loss(y, y_pred, y_teoric, regularization_losses=self.losses)
            L_phy = GAMMA * self.compiled_loss(y_pred, y_teoric)
            custom_loss = L_data + L_phy
            loss = custom_loss

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)

        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred)

        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}

# Model layers
input = keras.Input(shape=(14,))
hidden1 = keras.layers.Dense(14, activation='relu')(input)
hidden2 = keras.layers.Dense(7, activation='relu')(hidden1)
output = keras.layers.Dense(1, activation= relu_advanced)(hidden2)

model = CustomModel(inputs=input, outputs=output)

# Finally, fit()

model.compile(optimizer="adam", loss="mse", metrics=["mse"])
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)

history = model.fit(X_train, y_train, 
                    epochs           = 100,
                    batch_size       = 100,
                    verbose          = 2,
                    validation_split = 0.2,
                    callbacks=[early_stop]
                    )
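
For reference, one hedged possibility (my own sketch, not a confirmed solution for this model): precompute y_teorico outside the model, stack it next to y_true, and split it back apart inside an ordinary custom loss passed to compile(), avoiding the custom train_step entirely. The names y_stacked and y_teorico_train below are hypothetical.

# Hedged sketch (an assumption, not the original code): y_teorico is
# precomputed from the input columns and passed alongside y_true, then split
# inside the loss, so compile()/fit() stay standard.
import numpy as np
import tensorflow as tf
from tensorflow import keras

def physics_mse(gamma=1.0):
    mse = keras.losses.mean_squared_error
    def loss(y_true_and_teorico, y_pred):
        y_true = y_true_and_teorico[:, 0:1]       # measured M
        y_teorico = y_true_and_teorico[:, 1:2]    # theoretical value
        return mse(y_true, y_pred) + gamma * mse(y_teorico, y_pred)
    return loss

# model.compile(optimizer="adam", loss=physics_mse(gamma=1.0))
# y_stacked = np.stack([y_train, y_teorico_train], axis=1)   # y_teorico_train: hypothetical precomputed array
# history = model.fit(X_train, y_stacked, epochs=100, batch_size=100, validation_split=0.2)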