python – Wrangling and reading several xlsx files from an SFTP server and concatenating them into a single dataframe

Please find below my code. Any recommendations/suggestions on how to improve it and make it more readable will be hugely appreciated. I’ve tried to comment it as much as possible so everyone can understand what my purpose was in every part of the code, but please feel free to ask me for more clarifications.

# Libraries
import pandas as pd
from datetime import datetime as dt, time as ti, timedelta as td
import io
import numpy as np
# Custom library with some auxiliary functions
import jtp_aux as aux

dict_attempts = {
    'Agency1': r'/path1/subpath/',
    'Agency2': r'/path2/subpath2/',
    'Agency3': r'/path3/subpath3/',
    'Agency4': r'/path4/subpath4/'
}


def fetch_attempts(attempts_):
    # Connect to the sftp server via a method defined in the custom library
    sftp_ = aux.sftp_connect()

    # Initialise the list used to collect the dataframes read from the sftp server
    out = []

    # Get the files modified between today at midnight and 1 week ago at midnight
    today_midnight = dt.combine(dt.today(), ti.min)
    last_week_midnight = today_midnight - td(days=7)
    for agency_, path_ in attempts_.items():
        for file in sftp_.listdir_attr(path_):
            mtime = dt.fromtimestamp(file.st_mtime)
            if last_week_midnight <= mtime <= today_midnight:
                with sftp_.open(path_ + file.filename) as fp:
                    logger.info(path_ + file.filename)
                    bytes_p = fp.read()
                    file_obj = io.BytesIO(bytes_p)
                    fp_aux = pd.read_excel(
                        file_obj
                    )
                    fp_aux.dropna(axis=0, how='all', inplace=True)  # Delete the rows with all NaNs as they were causing issues
                    fp_aux.dropna(axis=1, how='all', inplace=True)  # Delete the columns with all NaNs as they were causing issues
                    # Need to insert the agency name and the execution date as this info does not appear in the files
                    fp_aux.insert(0, 'AGENCY', agency_)
                    fp_aux.insert(0, 'EXEC_DATE', today_midnight)

                    # Since the files have different column names (though the same number of
                    # columns), standardise the column names so the dataframes can be concatenated.
                    fp_aux.set_axis(
                        ['EXEC_DATE', 'AGENCY', 'CALL_DATE', 'CALL_TIME', 'CALL_TYPE',
                         'CALL_DIRECTION', 'LIVE_FINAL', 'CACONT_ACC',
                         'PHONE_NUMBER', 'DESCRIPTION', 'CONTACTS',
                         'ATTEMPTS', 'RPC',
                         'AGENT', 'DOM_SME'], axis=1, inplace=True)

                    # If the file is valid, append it
                    out.append(fp_aux)
                    logger.info(path_)

                    # Note: not sure if I should handle errors in the files here and skip to the
                    # next file instead of letting the program crash (see the sketch after the script).

    # Concatenate the obtained dataframes into a single dataframe
    df_out = pd.concat(out)

    # Replace NaNs with empty strings so the data can be inserted into the DB
    df_out = df_out.replace({np.nan: ''})

    # The date is provided as a string in the xlsx files, so I make sure it's converted to datetime
    df_out['CALL_DATE'] = pd.to_datetime(df_out['CALL_DATE'], dayfirst=True)

    # This commented-out part is what I use to debug the program. If an error happens after this
    # line, I don't want to re-execute the whole script, so I save the dataframe to a pickle file
    # and resume from there.

    # df2pickle(df_out, r'output.pckl')
    # df_out = pickle2df(r'output.pckl')
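    # (df2pickle and pickle2df live in my aux module; a minimal sketch of their
    # shape, assuming plain pandas pickling, would be:
    #     def df2pickle(df, path): df.to_pickle(path)
    #     def pickle2df(path): return pd.read_pickle(path)
    # )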


    # Format this column as a 12-digit string, zero-padded on the left if needed.
    # Example: from 1234 to '000000001234'
    df_out['CACONT_ACC'] = df_out['CACONT_ACC'].map('{:0>12}'.format).str.slice(0, 12)

    # These columns need to be numeric so I force them to be so.
    cols_number = ['CONTACTS', 'ATTEMPTS', 'RPC']
    df_out[cols_number] = (df_out[cols_number]
                           .apply(pd.to_numeric, errors='coerce')
                           .fillna(0, downcast='infer'))

    # Convert these columns to string.
    cols_string = ['AGENCY', 'CALL_TIME', 'CALL_TYPE', 'CALL_DIRECTION',
                   'LIVE_FINAL', 'PHONE_NUMBER', 'DESCRIPTION', 'AGENT', 'DOM_SME']
    cols_trunc10 = ['CALL_TYPE', 'CALL_DIRECTION', 'LIVE_FINAL']
    cols_trunc50 = ['AGENCY', 'PHONE_NUMBER', 'AGENT', 'DOM_SME']
    df_out[cols_string] = df_out[cols_string].astype(str)

    # Truncate the string fields so they don't break the DB
    df_out[cols_trunc10] = df_out[cols_trunc10].apply(lambda x: x.str[:10])
    df_out[cols_trunc50] = df_out[cols_trunc50].apply(lambda x: x.str[:50])

    return df_out


# Logger
path_logger = r'log_attempts.log'
logger = aux.init_logger(path_logger)


def main():
    try:
        # Read Attempts from sftp server
        logger.info('START Read Attempts from sftp server')
        df_attempts = fetch_attempts(dict_attempts)
        logger.info('END Read Attempts from sftp server')
    
    except Exception as e:
        logger.error(e, exc_info=True)
        raise


if __name__ == "__main__":
    main()
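
Regarding the note in fetch_attempts about bad files: below is a minimal sketch of the kind of per-file try/except I have in mind, so that one unreadable file is logged and skipped instead of crashing the whole run (untested; it reuses the names from the script above):

for file in sftp_.listdir_attr(path_):
    try:
        with sftp_.open(path_ + file.filename) as fp:
            fp_aux = pd.read_excel(io.BytesIO(fp.read()))
    except Exception as exc:
        # Log the problem file and continue with the next one
        logger.error('Skipping %s: %s', path_ + file.filename, exc, exc_info=True)
        continue
    # ... same cleaning and renaming as in the script above, then:
    out.append(fp_aux)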

android – Reading environment variables on build.gradle from GitLab

I’m using GitLab for the CI on my Android apps and I have defined a set of secrets to generate the release builds from the main branch.

I read the secrets using these lines in my build.gradle file:

Properties keyProps = new Properties()
keyProps.setProperty("store", System.getenv("store").toString())

This worked fine until I updated my compile and target SDK versions to 31; now this call is not working and returns null for all the properties.

How can I retrieve the secrets from GitLab in my build.gradle files? Thanks!

google chart – Uncaught TypeError: Cannot read property ‘results’ of null – when reading SharePoint list with JS

I am using an AJAX call to get the elements of multiple SharePoint list choice (checkbox) columns called “Orbit”, “SD”, etc., to create a Google pie chart for each column:

data.d.results.forEach(function(row) {
    // each row
    row.Orbit.results.forEach(function(choiceOrbit) {
        // each choice of Orbit of single row
        if (countOrbit.hasOwnProperty(choiceOrbit)) {
            countOrbit[choiceOrbit] += 1;
        } else {
            countOrbit[choiceOrbit] = 1;
        }
    });

But if one row in the column is empty, I get an error:

Uncaught TypeError: Cannot read property 'results' of null
at Statistics.aspx:658
at Array.forEach (<anonymous>)
at Object.success (Statistics.aspx:632)
at i (jquery.min.js:2)
at Object.fireWith (as resolveWith) (jquery.min.js:2)
at A (jquery.min.js:4)
at XMLHttpRequest.<anonymous> (jquery.min.js:4)

How can I add an if() so that an empty item is skipped in the counting? Something like:

if(row.Orbit.results.forEach!=null)

or

 if(data.d.results.length > 0)
  {
    // Add your code

  }

or

if(row!=null)

None of these work.

EDIT: my full code is:

<html>
<head>

<script src="https://www.gstatic.com/charts/loader.js" type="text/javaScript"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.0.0/jquery.min.js" type="text/javaScript"></script>
<script type="text/javaScript">
    $(document).ready(function () {
    var tempArray = [];
    var chartObj = [['Orbit', 'Number']];

AjaxCall(_spPageContextInfo.webAbsoluteUrl + "/_api/web/lists/GetByTitle('list')/items", function (data) {

    var countOrbit = {};
    var countSD = {}; 
    var countMT = {}; 
    var countIT = {}; 
    var countAI = {}; 


    data.d.results.forEach(function(row) {
        // each row
        row.Orbit.results.forEach(function(choiceOrbit) {
            // each choice of Orbit of single row
            if (countOrbit.hasOwnProperty(choiceOrbit)) {
                countOrbit[choiceOrbit] += 1;
            } else {
                countOrbit[choiceOrbit] = 1;
            }
        });
        row.SD.results.forEach(function(choiceSD) {
            // each choice of SD of single row
            if (countSD.hasOwnProperty(choiceSD)) {
                countSD[choiceSD] += 1;
            } else {
                countSD[choiceSD] = 1;
            }
        });
        row.MT.results.forEach(function(choiceMT) {
            // each choice of MT of single row
            if (countMT.hasOwnProperty(choiceMT)) {
                countMT[choiceMT] += 1;
            } else {
                countMT[choiceMT] = 1;
            }
        });
        row.IT.results.forEach(function(choiceIT) {
            // each choice of IT of single row
            if (countIT.hasOwnProperty(choiceIT)) {
                countIT[choiceIT] += 1;
            } else {
                countIT[choiceIT] = 1;
            }
        });
        row.AI.results.forEach(function(choiceAI) {
            // each choice of AI of single row
            if (countAI.hasOwnProperty(choiceAI)) {
                countAI[choiceAI] += 1;
            } else {
                countAI[choiceAI] = 1;
            }
        });
    });

    // Load Charts and the corechart package.
    google.charts.load('current', {packages:['corechart']});
    google.charts.setOnLoadCallback(function(){drawAllCharts(countOrbit, 'Orbit')});
    google.charts.setOnLoadCallback(function(){drawAllCharts(countSD, 'SD')});
    google.charts.setOnLoadCallback(function(){drawAllCharts(countMT, 'MT')});
    google.charts.setOnLoadCallback(function(){drawAllCharts(countIT, 'IT')});
    google.charts.setOnLoadCallback(function(){drawAllCharts(countAI, 'AI')});


    // Function for all Charts

    function drawAllCharts(countChoices, name) {
        var rows = [];
        for (var property in countChoices) {
            if (countChoices.hasOwnProperty(property)) {
                rows.push([property, countChoices[property]]);
            }
        }


        var datatable = new google.visualization.DataTable();
        datatable.addColumn('string', 'Type');
        datatable.addColumn('number', 'Quantity');
        datatable.addRows(rows);
        var options = {
            title: String(name),
            is3D: true
        };
        var chart = new google.visualization.PieChart(document.getElementById(String(name)));
        chart.draw(datatable, options);
    }
    // End of function to draw charts


});
});
    function AjaxCall(url, success) {
        $.ajax({
            url: url,
            type: "GET",
            headers: {
                "accept": "application/json;odata=verbose",
            },
            success: success,
            error: function (error) {
                console.log(JSON.stringify(error));
                alert('Something Went Wrong');
            }
        });
    }
</script>
</head>
  <body>
    <!--Table and divs that hold the pie charts-->
    <table class="columns">
      <tr>
        <td><div id="Orbit" style="width:700px;height:500px;"></div></td>
        <td><div id="SD" style="width:700px;height:500px;"></div></td>
      </tr>
      <tr>
        <td><div id="MT" style="width:700px;height:500px;"></div></td>
        <td><div id="IT" style="width:700px;height:500px;"></div></td>
        <td><div id="AI" style="width:700px;height:500px;"></div></td>
      </tr>
    </table>
  </body>
</html>

functional programming – Reading a CSV file using Haskell

I’m reading a CSV using Haskell. I’m not sure if this is the appropriate way to do it.

This is what I’m doing:

  1. Read rows from a CSV -> return a lazy byte string
  2. Parse the rows from the CSV to a Stock record -> (headers, [Stock])
  3. Remove the headers -> [Stock]
  4. Filter the stocks that are “Common Stock” -> [Stock]
  5. Print the resulting stocks

Any feedback on how to write better Haskell code is appreciated!

The code is a Stack project; you can find the project and instructions on how to run it here.

I read this section of Stephen Diehl’s guide before writing the code: What I wish I knew when learning Haskell

Here is the code to read the CSV file. The main function is printStocks.

{-# LANGUAGE OverloadedStrings #-}

module Lib (printStocks) where

import Control.Monad
import qualified Data.ByteString.Lazy as BL
import Data.Csv
import qualified Data.Vector as V

-- data type to model a stock
data Stock = Stock
  { code :: String,
    name :: String,
    country :: String,
    exchange :: String,
    currency :: String,
    instrumentType :: String
  }
  deriving (Show)

instance FromNamedRecord Stock where
  parseNamedRecord record =
    Stock
      <$> record .: "Code"
      <*> record .: "Name"
      <*> record .: "Country"
      <*> record .: "Exchange"
      <*> record .: "Currency"
      <*> record .: "Type"

-- type synonyms to handle the CSV contents
type ErrorMsg = String

type CsvData = (Header, V.Vector Stock)

-- Function to read the CSV
parseCSV :: FilePath -> IO (Either ErrorMsg CsvData)
parseCSV filePath = do
  contents <- BL.readFile filePath
  return $ decodeByName contents

-- Discard headers from CsvData
removeHeaders :: CsvData -> V.Vector Stock
removeHeaders = snd

-- Check if the given element is a Common Stock
isStock :: Stock -> Bool
isStock stock = instrumentType stock == "Common Stock"

filterStocks :: V.Vector Stock -> V.Vector Stock
filterStocks = V.filter isStock

-- Print the stocks from the CSV file
printStocks :: FilePath -> IO ()
printStocks filePath =
  parseCSV filePath
    >>= print . fmap (filterStocks . removeHeaders)

arduino – Python PySerial Program Not Reading Data Correctly

Before I get into the background of what’s going on, the problem, and the steps I’ve taken to fix it, here’s a quick summary of the equipment I’m using for my project

Equipment I’m using:

Infrared Encoder: https://www.amazon.ca/dp/B07B3PLZZ2/ref=cm_sw_r_oth_api_glt_i_NCFT5KHJBARTBC76XG7Y?_encoding=UTF8&psc=1

Arduino Mega 2560 w/ 9V 1A power adapter: https://www.amazon.ca/gp/product/B01H4ZDYCE/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1

Computer: Macbook Pro 2020, M1 Chip, 8 GB RAM

IDE: PyCharm Community Edition

Python Interpreter: Python 3.8

Background:

I have a fan that spins at a constant speed. Intercepting the blades of the fan is my infrared encoder, which is hooked up to my Arduino. As the blades spin through the infrared encoder, the Arduino script I wrote is supposed to print a ‘1’ if the sensor is triggered and a ‘0’ if it isn’t.

This works fine.

Now, I want to be able to use this ‘1’ and ‘0’ data in Python because I have an idea for a video game I want to prototype using Pygame.

In PyCharm, I installed pyserial so I’m able to access all the data. In order to capture the 1’s and 0’s, here’s the code I’ve written:

import serial  # pyserial

serialInst = serial.Serial()
serialInst.port = '/dev/cu.usbmodem11301'
serialInst.baudrate = 9600
serialInst.timeout = 0.5
serialInst.open()

def readData(self):
    if serialInst.inWaiting():
        packet = serialInst.readline()
        if int(packet.decode('utf-8').rstrip('\n')) == 1:
            print('1')
        elif int(packet.decode('utf-8').rstrip('\n')) == 0:
            print('0')

The way it works is fairly simple. As I described before, if the fan triggers the sensor, Arduino prints out a ‘1’, and then Python reads the data and confirms it with the if statement.

Same operation for when the sensor is not triggered.

This function is the first function called in the main while loop of my Python code. There are other functions that are called afterward before it loops around to readData; however, when those functions are called, it’s almost as if the program stops picking up changes in the Arduino data.
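
To make the structure concrete, the loop looks roughly like this (a sketch; game is the instance of my main game class, and every name except readData is a stand-in for my real functions):

while True:
    game.readData()        # poll the serial port first, as described above
    game.handleInput()     # stand-in for one of the other per-frame functions
    game.updateAndDraw()   # stand-in for the rest of the frame work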

No matter whether the fan’s blades pass through the sensor, the output is always 0.

If I comment out any function that’s used in the main while loop, everything works 100% fine. The issue only happens when I start to introduce other functions into the main while loop.

I currently have 9 other tabs open with similar pyserial issues people have reported here on Stack Overflow, and here are some of the things I’ve read about and tried:

  • Adding a serialInst.timeout value. I played with a variety of numbers from 0.01 to 0.5 and haven’t found success

  • Using serialInst.reset_output_buffer(), serialInst.flushOutput(), serialInst.reset_input_buffer(), or serialInst.flushInput() in a variety of different locations

  • And I’ve tried many more potential solutions; however, that was when my problem was even worse. Before, I had the readData function as part of a Class in another file; every time I tried to run it from the main while loop, it would skip data, give no data, or show very odd characters… it was just a nightmare.

Now I have the readData function in the main game Class and just need to solve this last issue and everything will work.

I have a python function called readData that reads data coming from an Arduino that’s either a ‘1’ or a ‘0’. When the readData is the only function being called in the main while loop of my program, everything works fine; there are ‘1’s when there should be and ‘0’ when there should be.

The problem arises when I uncomment the other functions in the while loop that I have to use.

The program definitely loops through the entire while loop; however, when it reaches the readData function, it gets “stuck” reading just ‘0’… even when there’s supposed to be a ‘1’.

The readData function’s if statements should print out a ‘1’ or a ‘0’ correctly, whether or not the other functions in the main while loop are commented out.

I need to “unfreeze” whatever is holding up that part of the program.

Please let me know if you need any more information. I really appreciate your time.

python – Is there a way to run this without extracting the archive but still reading all the xml files and checking for mhtml use in office applications (CVE 2021-40444)?

I have a python3 script to detect the use of mhtml in xml tags. However, it’s not as optimized as I would like, and I would appreciate some suggestions, if possible.

This is in an effort to help mitigate and detect CVE-2021-40444. Right now, the script extracts the xml files from the document, and I was wondering if there is a way to do this without the extraction process. I have to run the extraction process twice to deal with a None type error when using xmlfiles = (os.walk(output_file)).
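
For reference, the kind of extraction-free approach I’m imagining would read each entry straight from the archive in memory, roughly like this (an untested sketch, not my current script):

# Sketch: scan the XML parts of an Office document for 'mhtml' without
# extracting the archive to disk. Untested; error handling omitted.
from zipfile import ZipFile

def scan_in_memory(input_file):
    hits = []
    with ZipFile(input_file, 'r') as zip_obj:
        for name in zip_obj.namelist():
            with zip_obj.open(name) as member:
                text = member.read().decode('utf-8', errors='ignore')
                if 'mhtml' in text:
                    hits.append(name)
                    print('ATTACK DETECTED in ' + name)
    return hits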

from zipfile import ZipFile
import os
import subprocess
import codecs
import sys
import time
print('Current Python version information:'+sys.version)
print('Purpose: CVE 2021-40444 scanning documents for use of mhtml\nAuthor: Patrick Gray\nThis was made for python version 3.9.7\n\n')

mhtml_links = []
input_file=input('Enter Path to document to process:')
outfile=input('Enter path to export xml files to:')

def scan():
    try:
        output_file = os.mkdir(outfile, 755)
    except FileExistsError:
        output_file = outfile
        pass
    with ZipFile(input_file, 'r') as zipObj:
        zipObj.extractall(outfile)

    print('\nPath Created successfully\n')
    try:
        xmlfiles = (os.walk(output_file))
    except TypeError:
        print('No files detected checking output file again')
        try:
            output_file = os.mkdir(outfile, 755)
        except FileExistsError:
            output_file = outfile
            pass
    with ZipFile(input_file, 'r') as zipObj:
        zipObj.extractall(outfile)

    for (root, dirs, files) in os.walk(outfile, topdown=True):
        for name in files:
            with codecs.open(os.path.join(root,name), 'r', encoding='utf-8', errors='ignore') as read_file:
                lines = read_file.readlines()
                for line in lines:
                    if 'mhtml' in line:
                        mhtml_links.append(str(os.path.join(root,name)))
                        print('ATTACK DETECTED\n'+line)

                
    print('\n\n\n')
    print('files containing exploit:\n')
    for item in mhtml_links:
        print(item)
scan()

nikon d90 – Why is exposure meter reading not changing in Manual mode?

The manual exposure meter only displays when it’s helpful to do so, mainly when you have the camera in Manual (M) mode.

In S, A, P modes, the camera is calculating the exposure so you don’t need it and it disappears.

It reappears in those modes only if you use the thumbwheels to alter the exposure past the range of valid apertures, shutter speeds or ISO. For example, in low light, if you have the camera in S mode and increase the shutter speed with the thumbwheel, the camera will compensate by using a larger aperture until it reaches the maximum aperture, then will adjust the ISO to the maximum. If you keep increasing the shutter speed, it will run out of options, reach max ISO and aperture, and that’s when it displays “Lo” and the exposure meter appears to show you that you are now underexposing the image.

Imre is probably right that you have the exposure set past the limits of that exposure meter. As suggested, put the camera in P, S or A mode, note what settings the camera has chosen, then put the camera in M mode and change to those settings. You should then be roughly in the center of that exposure meter.

What lens are you using? If it’s an older lens the camera might not be able to meter properly. Is it an AF-D or AF-S lens?

mysql – Getting fatal error 1236 from master when reading data from binary log in mariadb master-master replication

I am facing a situation similar to this one: Error 1236 – “Could not find first log file name in binary log index file”

I have tried all the answers but I am still not able to get rid of this error on Server id = 1:

         Last_IO_Errno: 1236
         Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

I have two nodes running MariaDB 10: the master node (Server id = 1) and the second master node (Server id = 2).

Server id 1:

MariaDB [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
                Slave_IO_State: 
                   Master_Host: 192.168.1.10
                   Master_User: replicator
                   Master_Port: 6306
                 Connect_Retry: 60
               Master_Log_File: mariadb-bin.000001
           Read_Master_Log_Pos: 313
                Relay_Log_File: mysqld-relay-bin.000001
                 Relay_Log_Pos: 4
         Relay_Master_Log_File: mariadb-bin.000001
              Slave_IO_Running: No
             Slave_SQL_Running: Yes
               Replicate_Do_DB: 
           Replicate_Ignore_DB: 
            Replicate_Do_Table: 
        Replicate_Ignore_Table: 
       Replicate_Wild_Do_Table: 
   Replicate_Wild_Ignore_Table: 
                    Last_Errno: 0
                    Last_Error: 
                  Skip_Counter: 0
           Exec_Master_Log_Pos: 313
               Relay_Log_Space: 256
               Until_Condition: None
                Until_Log_File: 
                 Until_Log_Pos: 0
            Master_SSL_Allowed: No
            Master_SSL_CA_File: 
            Master_SSL_CA_Path: 
               Master_SSL_Cert: 
             Master_SSL_Cipher: 
                Master_SSL_Key: 
         Seconds_Behind_Master: NULL
 Master_SSL_Verify_Server_Cert: No
                 Last_IO_Errno: 1236
                 Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'
                Last_SQL_Errno: 0
                Last_SQL_Error: 
   Replicate_Ignore_Server_Ids: 
              Master_Server_Id: 2
                Master_SSL_Crl: 
            Master_SSL_Crlpath: 
                    Using_Gtid: No
                   Gtid_IO_Pos: 
       Replicate_Do_Domain_Ids: 
   Replicate_Ignore_Domain_Ids: 
                 Parallel_Mode: conservative
                     SQL_Delay: 0
           SQL_Remaining_Delay: NULL
       Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
              Slave_DDL_Groups: 0
Slave_Non_Transactional_Groups: 0
    Slave_Transactional_Groups: 0
1 row in set (0.001 sec)

checked binary logs:

MariaDB [(none)]> SHOW BINARY LOGS;
+------------------+-----------+
| Log_name         | File_size |
+------------------+-----------+
| mysql-bin.000001 |  11337613 |
| mysql-bin.000002 |    138563 |
| mysql-bin.000003 |   1100347 |
| mysql-bin.000004 |   7406418 |
| mysql-bin.000005 | 110683302 |
| mysql-bin.000006 |    144929 |
+------------------+-----------+
MariaDB [(none)]> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000006 |   173699 |              |                  |
+------------------+----------+--------------+------------------+

then Server id 2:

MariaDB [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.11
                  Master_User: replicator
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000006
          Read_Master_Log_Pos: 125740
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 10034
        Relay_Master_Log_File: mysql-bin.000006
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 125740
              Relay_Log_Space: 10337
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 1
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
                   Using_Gtid: No
                  Gtid_IO_Pos: 
      Replicate_Do_Domain_Ids: 
  Replicate_Ignore_Domain_Ids: 
                Parallel_Mode: conservative
1 row in set (0.00 sec)

and binary logs:

MariaDB [(none)]> SHOW BINARY LOGS;
+------------------+-----------+
| Log_name         | File_size |
+------------------+-----------+
| mysql-bin.000001 |       313 |
+------------------+-----------+

MariaDB [(none)]> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      313 |              |                  |
+------------------+----------+--------------+------------------+

I do have a backup of the database, but I still want to understand and fix the issue rather than starting over from the beginning.

magento2 – FPC and “upstream sent too big header while reading response header from upstream” error on nginx?

As soon as I enable FPC in Magento 2.4.2, I am getting an “upstream sent too big header while reading response header from upstream” error when I try searching. It does not happen for all search terms, but for most.

My shop is running on nginx so I tried adding this to the nginx.conf:

fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
proxy_buffer_size   128k;
proxy_buffers   4 256k;
proxy_busy_buffers_size   256k;

It did not help at all. Any ideas on how to solve this?

Thanks!

t sql – Creating sql server custom database role that only allows reading some columns and nothing else

I want to create a SQL Server custom database role which only allows reading some columns of some tables, and nothing else. This is my attempt:

    CREATE ROLE CustomDatabaseRole01
    GO
    GRANT SELECT ON OBJECT::dbo.Table1(Col1,Col2) TO CustomDatabaseRole01
    GO
    GRANT SELECT ON OBJECT::dbo.Table2(Col3,Col4) TO CustomDatabaseRole01
    GO
    ALTER ROLE CustomDatabaseRole01 ADD MEMBER UserTest01

The script executes, the role is created, and UserTest01 is added as a member. However, if I execute as UserTest01:

SELECT * FROM Table1;

I get all the columns, and if I execute:

SELECT * FROM Table3;

then again I can read Table3, even though nothing was granted on it.

Please, could someone explain why this happens?