array – Can my JavaScript code for pulling data from an API and working with it in Google Sheets be more efficient?

I’m trying to build out some DeFi position tracking pages in Google Sheets. I want to pull from the Zapper API (or other APIs) to track trades, positions, gains, losses, fees, etc. This is my first crack at tracking a position on SushiSwap.

I tried to make re-usable functions and organize the code so it’s easy to understand/work with.

I’d love some insight on better ways to write the functions and/or organize the code. I think this could be a big project and I want to make sure I’m using a good system to write code and organize things. Basically, am I on the right track? Where could I do better?

// Function to grab the API data and parse it.
// Returns the parsed JSON data.
function pullAndParseAPISushi(walletAddress, networkName) {
  var apiKey = "96e0cc51-a62e-42ca-acee-910ea7d2a241"; // API key from Zapper
  var url = "" +
    walletAddress + "&network=" + networkName + "&api_key=" + apiKey; // assembles the API URL from the wallet address and network name
  var response = UrlFetchApp.fetch(url); // pulls data from the API
  var parsedJsonData = JSON.parse(response.getContentText()); // parses the JSON response from the API

  return parsedJsonData; // returns the parsed data
}

// Accesses level 2 of the JSON data - the array of values after the wallet address.
/** This is the key line that breaks open the data. Needed for the next level of data. */
/** First level is the wallet address, second level is the big array, third level is the reward tokens, etc. */
function createTheArray(parsedDataFromJson, walletAddress) {

  var levelTwo = parsedDataFromJson[walletAddress][0]; // 'breaks through' the first array

  var lastArrayReturn = []; /** creates and loads an array with all the various pairs */
  for (const key in levelTwo) {
    //console.log(key + " -> " + levelTwo[key]);
    var tempArray = []; // a fresh array for each pair
    tempArray.push(key, levelTwo[key]); // stores each pair in a single array
    lastArrayReturn.push(tempArray); // loads the smaller arrays into one larger 2D array
  }

  // lastArrayReturn is an array of arrays for the level 2 data (i.e. the data after the wallet address in the JSON object)

  return lastArrayReturn;
}

// Transposes a given 2D array.
function transposeArray(theArray) {
  var result = new Array(theArray[0].length); // one row per column of the input
  for (var i = 0; i < result.length; i++) {
    result[i] = new Array(theArray.length);
    for (var j = 0; j < result[i].length; j++) {
      result[i][j] = theArray[j][i];
    }
  }
  console.log("the transposed array is: " + result);
  return result;
}
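As a side note on style, the same transpose can be written more compactly with map; this sketch behaves the same as the loop version for rectangular 2D arrays:

```javascript
// Compact transpose: for each column index of the first row,
// collect that index from every row.
function transposeArray(theArray) {
  return theArray[0].map((_, i) => theArray.map(row => row[i]));
}

// Example: a 2x3 array becomes a 3x2 array.
const t = transposeArray([[1, 2, 3], [4, 5, 6]]);
// t is [[1, 4], [2, 5], [3, 6]]
```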

// Puts the array from the API parse into the Sushi Data Pull sheet.
function placeSushiData(anArray) {

  var theSushiArray = transposeArray(anArray); // call a function to transpose the array
  let ss = SpreadsheetApp.getActiveSpreadsheet(); // get the active spreadsheet
  let targetSheet = ss.getSheetByName('Sushi Data Pull'); // the tab where the data is going
  let targetRange = targetSheet.getRange(1, 4, theSushiArray.length, theSushiArray[0].length); // set the target range of cells
  let targetDateRange = targetSheet.getRange(2, 3, 1, 1); // range for the timestamp; 2nd row, 3rd column

  targetRange.setValues(theSushiArray); // sets cells in the target range to the values in the array
  targetDateRange.setValue(new Date()); // puts a timestamp in the date column
}

// Function to run the data pull and placement.
function runTheProgram() {
  var walletAddress = "0x00000000000000000000000000";
  var networkName = "polygon";
  var dataArray = pullAndParseAPISushi(walletAddress, networkName);
  var adjustedArray = createTheArray(dataArray, walletAddress);
  placeSushiData(adjustedArray); // place the data in the sheet
}

Google Sheets pulling more data than the criteria specified

I have a tab in my Google Sheet for my raw data. I have another tab with the report form, which includes two criteria fields that I want to use to pull the appropriate data from the raw data tab. I am using INDEX and MATCH along with my two criteria fields to extract the records in my report. Everything appears to work on the first criteria field but fails on the second, in that it pulls all future dates greater than the date set in the criteria field. My formula is {=ArrayFormula(INDEX('Traffic Discrepancy Log'!$A1:$I,MATCH(1,('Traffic Discrepancy Log'!B:B=C$2) * ('Traffic Discrepancy Log'!D:D=C$1),0),0))}

My raw data looks like this:

The report returns this:

Note that although I have the date in my criteria set to 4/28/2021, the report lists 4/29/2021, 4/30/2021, and 5/01/2021. Ideally I only want to display the rows whose date aligns with the criteria in cell C1 of the report tab.

Any suggestions on how to resolve this would be appreciated.
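For reference, the MATCH(1, (range1=crit1)*(range2=crit2), 0) pattern returns the position of the first row where both criteria hold — the product is 1 only when both comparisons are true. A small JavaScript sketch of that matching logic (the sample rows and field names here are made up for illustration):

```javascript
// Sample rows standing in for the 'Traffic Discrepancy Log' columns B and D.
const rows = [
  { b: "Lane 1", d: "4/28/2021" },
  { b: "Lane 2", d: "4/28/2021" },
  { b: "Lane 1", d: "4/29/2021" },
];

// Mirrors MATCH(1, (B=crit1)*(D=crit2), 0): multiply the two boolean tests
// and return the index of the first row where the product is 1.
function matchTwoCriteria(rows, crit1, crit2) {
  return rows.findIndex(
    r => (r.b === crit1 ? 1 : 0) * (r.d === crit2 ? 1 : 0) === 1
  );
}

const idx = matchTwoCriteria(rows, "Lane 1", "4/28/2021"); // first row where BOTH match
```

Since this pattern only ever yields the first matching row, extra rows with later dates usually point at how the result is spilled into the report range rather than at the MATCH itself.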

How to print a sentence pulling information from a dictionary that has a list inside

I have the following dictionary. If I want to print output such as the following, how should I write it in Python?
John is 20 years old with GPA 3.3
Shannon is 21 years old with GPA 3.4
Eileen is 20 years old with GPA 3.5

students = {
    101: ["John", 20, 3.3],
    102: ["Shannon", 21, 3.4],
    103: ["Eileen", 20, 3.5]
}
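One straightforward approach (the variable names are just illustrative) is to iterate over the dictionary's values and unpack each three-element list:

```python
students = {
    101: ["John", 20, 3.3],
    102: ["Shannon", 21, 3.4],
    103: ["Eileen", 20, 3.5],
}

lines = []
for name, age, gpa in students.values():  # unpack [name, age, gpa]
    lines.append(f"{name} is {age} years old with GPA {gpa}")

print("\n".join(lines))
# John is 20 years old with GPA 3.3
# Shannon is 21 years old with GPA 3.4
# Eileen is 20 years old with GPA 3.5
```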

media library – Pulling images from a sub-directory

hope this is in the right place.

I have created a sub-domain, and in the database I have changed the required paths, inserting the root and the URL to the sub-domain. I have also changed these in options.php in the admin setup.

All my media is uploading nicely to the new subdomain, with the year folders and so on being created automatically, and when I am in the media library and click an image, it shows the link to the correct location of the image. But the image in the media library is just a blank grey box.

Am I missing something here with permissions, or do I need to enqueue something in functions.php?

Any help would be greatly appreciated.


scripting – Pulling data from websites using their API in Google Sheets?

I’m trying to use the Blizzard API to pull data into a Google Sheets document. I’m starting to dig into the documentation on scripting in Google Sheets, using a curl request as suggested by the Blizzard API. The initial issue I’m seeing is that I don’t understand how to make a curl request in this scripting environment. Are there any resources I can use to better understand this type of process?
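For what it's worth, Apps Script's equivalent of a curl request is UrlFetchApp.fetch, which takes the URL plus an options object for the method and headers. A rough sketch (the endpoint URL and token are placeholders, not the actual Blizzard API contract — check Blizzard's docs for the real endpoint and OAuth flow):

```javascript
// Sketch only: runs inside Google Apps Script, not Node.
function fetchBlizzardData() {
  var url = "https://..."; // hypothetical Blizzard endpoint, fill in from their docs
  var options = {
    method: "get",
    headers: {
      Authorization: "Bearer " + "YOUR_ACCESS_TOKEN", // token from Blizzard's OAuth flow
    },
    muteHttpExceptions: true, // return error responses instead of throwing
  };
  var response = UrlFetchApp.fetch(url, options); // Apps Script's curl equivalent
  return JSON.parse(response.getContentText());
}
```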

google sheets – Skip a specific number of rows while pulling data from another worksheet

I am pulling data from another worksheet in the same file, and I would like the formula to skip 3 rows instead of 1 row, as I drag the formula through the column.

For example, Worksheet1!A10=Worksheet2!A10, Worksheet1!A11=Worksheet2!A13, Worksheet1!A12=Worksheet2!A15, Worksheet1!A13=Worksheet2!A16

Is this possible to do in Google Sheets?

I tried linking the first few rows manually and then dragging the formula, hoping Google Sheets will understand the relationship, but it doesn’t.
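Dragging can't express a step of 3, but INDEX with ROW arithmetic can. Assuming the intent is a constant step of 3 (row 10 reads Worksheet2!A10, row 11 reads A13, row 12 reads A16, and so on), a common pattern, placed in Worksheet1!A10 and filled down, is (the cell anchors here are illustrative):

```
=INDEX(Worksheet2!$A:$A, 10 + (ROW() - 10) * 3)
```

ROW() returns the current row, so each row down advances the source row by 3 instead of 1.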

8 – Image Migration Failed – pulling the source ID instead

I have a migration that just failed. Error: File 'public://import/3' does not exist

The images I am pulling into the node are in that folder, public://import. How do I get the image (the image name) into the node? I am confused, as all the reading and code snippets I find show something different. I finally got this far; now I need to figure out the next step, but I am not sure what to do here. Please help if you can. Thanks!

This shows the message for the migration

The Excel doc


id: photo
migration_group: null
label: Photos
source:
  plugin: csv
  path: modules/custom/custom_migrate/assets/csv/profile.csv
  header_row_count: 1
  enclosure: '"'
  keys:
    - photo
  fields:
    -
      name: photo
      label: Photo
  constants:
    source_uri: 'public://import'
    dest_uri: 'public://photos'
process:
  file_name:
    plugin: callback
    callable: trim
    source: photo
  file_source:
    -
      plugin: concat
      delimiter: /
      source:
        - constants/source_uri
        - '@file_name'
    -
      plugin: urlencode
  file_dest:
    -
      plugin: concat
      delimiter: /
      source:
        - constants/dest_uri
        - '@file_name'
    -
      plugin: urlencode
  filename: '@file_name'
  uri:
    plugin: file_copy
    source:
      - '@file_source'
      - '@file_dest'
      - 'TRUE'
  status:
    plugin: default_value
    default_value: 1
  uid:
    plugin: default_value
    default_value: 1
destination:
  plugin: 'entity:file'
migration_dependencies:
  required: {  }
  optional: {  }

Here is the profile code to hopefully help explain the full picture. Thank you for any guidance!!!!

dependencies:
  enforced:
    module:
      - migrate_source_csv
id: profile
migration_tags:
  - CSV
migration_group: null
label: Profile
source:
  plugin: csv
  path: modules/custom/custom_migrate/assets/csv/profile.csv
  header_row_count: 1
  keys:
    - id
  #  0:
  #    name: image_file
  #    label: Image file
  #  - id
  fields:
    -
      name: id
      label: ID
    -
      name: first_name
      label: First Name
    -
      name: last_name
      label: Last Name
    -
      name: birthday
      label: Birthday
    -
      name: email
      label: Email
    -
      name: photo
      label: Photo
    -
      name: languages
      label: Language
process:
  type:
    plugin: default_value
    default_value: profile
  title:
    plugin: concat
    source:
      - first_name
      - last_name
    delimiter: ' '
  field_first_name: first_name
  field_last_name: last_name
  field_birthday: birthday
  field_email: email
  field_languages:
    plugin: entity_generate
    source: languages
  field_photo:
    -
      plugin: explode
      source: photo
      delimiter: ;
    -
      plugin: callback
      callable: trim
    -
      plugin: callback
      callable: strtoupper
    -
      plugin: migration
      migration: photo
      no_stub: true
destination:
  plugin: 'entity:node'
migration_dependencies:
  required:
    - photo

web development – Back-end solution for pulling from CSV files

I’m building a data visualization that displays COVID information for the United States, at the city, state, and county level.

The ultimate source of truth are three CSVs published by the New York Times on Github in this repo:

The CSVs are updated once per day with new data from the previous day.

The front-end involves selecting a state, county, and type of statistic (number of deaths, number of cases, etc.). Three line charts are then displayed, showing the rate of change over time – at the national, state, and county level.

Right now, the app is purely front-end. It downloads the set of three CSVs (which are quite large), then does a series of calculations on the data, and when the Promise completes, the visualization is finally displayed in the browser. It takes a good 5-10 seconds to complete on a good internet connection – hardly sustainable in production – and it also requires the user to download the entirety of the data, even though they might only be looking for a few combinations of states / counties.

Is there a solution that could speed this up, without requiring a back-end? Or is a formal database / backend structure needed?

Here is my general idea of what the back-end solution would entail (I would use a Node.js / Express REST API setup), but I'm looking for suggestions:

  1. Deploy a Node.js script that downloads the CSVs once per day and puts the data in a database. I could either download the entirety of the CSVs and rewrite the entire database, or download just the new data and add it to the database.

  2. Do some additional calculations on the data (for example, calculate the change from the previous day) and then send those to the database. These additional calculations could also be done client-side (this is how it currently works in my front-end solution).

  3. When the user loads the page, have the front-end query the back-end for a list of states and counties, so the front-end can load.

  4. When the user selects a state / county combination, send just that information to the back-end via a REST API. Have the back-end query the database and return just the requested information to the front-end.
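To make step 2 concrete, here is a minimal sketch of the kind of server-side precomputation described above: parse the CSV once, then derive the day-over-day change for one state. The column names (date, state, cases) are an assumption based on the NYT repo's published format:

```javascript
// Parse a simple CSV string into an array of objects keyed by header name.
// (No quoted-field handling; the NYT CSVs are plain comma-separated.)
function parseCsv(text) {
  const [header, ...rows] = text.trim().split("\n");
  const cols = header.split(",");
  return rows.map(r => {
    const cells = r.split(",");
    return Object.fromEntries(cols.map((c, i) => [c, cells[i]]));
  });
}

// Given rows for one state sorted by date, compute new cases per day
// (difference from the previous day's cumulative total).
function dailyChange(rows) {
  return rows.map((row, i) => ({
    date: row.date,
    newCases: Number(row.cases) - (i > 0 ? Number(rows[i - 1].cases) : 0),
  }));
}

// Tiny made-up sample in the assumed NYT shape.
const csv = "date,state,cases\n2021-04-28,Ohio,100\n2021-04-29,Ohio,130\n2021-04-30,Ohio,160";
const changes = dailyChange(parseCsv(csv));
// changes[1] is { date: "2021-04-29", newCases: 30 }
```

Running this once per day at ingest time, and storing the results, means the browser never has to download or diff the raw CSVs.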

Miscellaneous concerns:

a. Obviously, a no-backend solution would be preferred, but I can't think of a way to query these CSVs with just the user-supplied information without downloading them in their entirety.

b. From a database perspective, is it a big lift / cost to delete all the data and rewrite it entirely? Or would it be more cost-efficient (assuming this is a cloud-based solution) to only add the new data (assuming the old data does not change)?

c. I’ve been looking at GraphQL as an alternative to REST, but I’m not sure it will solve the problem of having to download the CSVs in their entirety and “store” them somewhere. There are several open-source APIs online already that provide a more convenient way to query the data:

But these all seem to be pulling from the CSVs, and they take a long time. Is this because they are accessing the data from a CSV instead of a database, which I’m assuming has much faster access?