Flask SQLAlchemy db.create_all() not creating tables in database

I'm trying to create a table in a database at Heroku, but when I run my code with 'python book.py' it won't create the desired table. I set the database URL via the cmd as well. The statement in main() isn't printed either, so I guess main() is the problem.

book.py:

import os

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

app = Flask(__name__)

# app.config is a dictionary, so use item assignment, not a call
app.config["SQLALCHEMY_DATABASE_URI"] = os.getenv("DATABASE_URL")
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False

db.init_app(app)


class Book(db.Model):
    __tablename__ = "books"
    id = db.Column(db.Integer, primary_key=True)
    isbn = db.Column(db.String, nullable=False)
    title = db.Column(db.String, nullable=False)
    author = db.Column(db.String, nullable=False)
    year = db.Column(db.Date, nullable=False)


def main():
    # With db.init_app(app), create_all() must run inside an app context
    with app.app_context():
        db.create_all()
    print("yeet")  # print is a function in Python 3


if __name__ == "__main__":
    main()
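One Heroku-specific pitfall worth ruling out (an assumption on my part, since your SQLAlchemy version isn't shown): Heroku sets DATABASE_URL with the legacy postgres:// scheme, which SQLAlchemy 1.4+ rejects. A minimal sketch of normalizing it before configuring the app:

import os

# Heroku's DATABASE_URL may use "postgres://", which SQLAlchemy 1.4+
# no longer accepts; rewrite the prefix before handing it to Flask.
uri = os.getenv("DATABASE_URL", "")
if uri.startswith("postgres://"):
    uri = uri.replace("postgres://", "postgresql://", 1)
app.config["SQLALCHEMY_DATABASE_URI"] = uri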

sql – How to use sqlalchemy and flask to construct multiple tables, or is it even necessary?

I'm new to databases. I want to build a class-rating website for my university. I want to list 500 courses on the main page, and each course will have a page for course reviews. I need to store course review information (the review text, post time, and number of likes) for each course. So I'm thinking I would need 500 tables, one per course, to store this information. Is that OK? If so, how can I implement it? I already know how to do it with one table, but I'm not sure how to automatically create 500.
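You don't need (or want) 500 tables; the relational approach is one courses table plus one reviews table with a foreign key back to the course. A minimal Flask-SQLAlchemy sketch (table and column names are illustrative, not from your project):

from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Course(db.Model):
    __tablename__ = "courses"
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, nullable=False)

class Review(db.Model):
    __tablename__ = "reviews"
    id = db.Column(db.Integer, primary_key=True)
    course_id = db.Column(db.Integer, db.ForeignKey("courses.id"), nullable=False)
    text = db.Column(db.Text, nullable=False)
    posted_at = db.Column(db.DateTime, nullable=False)
    likes = db.Column(db.Integer, default=0)

All 500 courses share these two tables; one course's reviews are just Review.query.filter_by(course_id=some_id).all().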

mysql – After changing engine from MyISAM to InnoDB, tables are larger in size; is this expected behaviour?

I updated a WordPress database from MyISAM to InnoDB on a server running Apache and MySQL, and I noticed that some tables, such as wp_posts, grew in size by about 20 MB after running the ALTER TABLE wp_posts ENGINE=InnoDB; command. This is the first time I've done this, and I'm wondering whether this is expected behaviour?

Thanks,
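For what it's worth, some growth is expected: InnoDB stores row data in a clustered index with 16 KB pages and keeps per-row transaction metadata, so both data and index footprints usually exceed MyISAM's. To see where the bytes went, a query along these lines against information_schema (the schema name 'wordpress' is an assumption; use your database's name):

SELECT table_name,
       engine,
       ROUND(data_length / 1024 / 1024, 1) AS data_mb,
       ROUND(index_length / 1024 / 1024, 1) AS index_mb
FROM information_schema.TABLES
WHERE table_schema = 'wordpress'
  AND table_name = 'wp_posts';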

dnd 5e – Are there alternative XP/Level Tables in 5e?

The tweet in question came out (07/14) before any of the official rulebooks (the PHB was 08/14 and the DMG was 12/14). So it would seem likely that WotC had planned for such a thing, but it got scrubbed before launch for unknown reasons. Perhaps a tweet to JC reminding him would answer that question.

Options?

If you want ultimate control over the speed of leveling, then use Milestones (DMG, p. 261). That way, you set the pace. This also gives the feeling of “chapters” to a character’s life.

Having rid all the basements in the town of rats, the party is thrown a celebration. Congrats, you are all now level 2.

This also helps if you have players missing sessions. If using straight XP, then you’ll have people fall behind. With milestones, they all stay at the same level. I personally prefer Milestones in my campaigns.

What else?

Adventurers League play uses a Milestone variant based on hours of play. The system changes from season to season, but the basics are that for every four hours of play you gain a level until you reach level 5. After that it’s eight hours of play. Players can opt not to level up if they want to stay in a lower tier, for instance if they are waiting for a friend to catch up. 1

But I want XP!

The only way to slow down or speed up leveling while still keeping XP numbers is either to change the spacing between levels (eww, lots of math) or to add a “multiplier” to the XP given (ehh, simpler math).

Since changing the rate at which characters level is so 1st-Edition 2, I would recommend using multipliers. It’s a similar concept to how some video games do it.

So let’s use a Gnoll as an example: Challenge 1/2 (100 XP)

When characters are lower level, Gnolls can be a threat, but at higher levels they become fodder. So while the APL (average party level) is close to the CR of the monsters, they get full value, a multiplier of (x1). After a while the party levels up and the multiplier becomes (x0.75), so each Gnoll is worth only 75 XP per kill. Then (x0.5) and they are worth 50 XP, and so on. There is a diminishing return for fighting Gnolls: the party still gets XP, but at a much slower rate.

So to slow the party’s leveling, apply a fractional multiplier to all their XP. To speed it up, make it a value between 1 and 2.
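To make the arithmetic concrete, here’s a small sketch of the multiplier idea (the level-gap thresholds are invented for illustration; tune them to your table):

def adjusted_xp(base_xp, party_level, monster_cr):
    # The further the average party level outgrows the monster's CR,
    # the smaller the slice of its listed XP the party earns.
    gap = party_level - monster_cr
    if gap <= 1:
        return int(base_xp * 1.0)   # still a real threat: full value
    if gap <= 3:
        return int(base_xp * 0.75)
    if gap <= 5:
        return int(base_xp * 0.5)
    return int(base_xp * 0.25)      # pure fodder

print(adjusted_xp(100, 1, 0.5))  # 100 -- Gnolls at full value
print(adjusted_xp(100, 5, 0.5))  # 50  -- diminishing returns
print(adjusted_xp(100, 9, 0.5))  # 25  -- fodder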

I’m no good at multiplying!

Then halve all monster XP, but add bonus XP for situations.

  • 5 Gnolls = 500 XP, halved = 250 XP
  • It was an ambush, so the party gets an extra 200 XP

Basically, it defaults to slow, but you can add bonuses to make up the difference and speed things up when needed.

Disclosure: I’ve only ever used standard XP and Milestone leveling because I’m okay with those systems. So I cannot attest to the other methods, other than to note that many RPG video games use a multiplier system and it seems to work for them.


1 This is just the basics. I’m not trying to explain the whole thing so please hold off on comments/downvotes explaining why my explanation is wrong.

2 Look it up. In 1st Edition, each class had different XP requirements per level. So a Fighter leveled faster than a Magic-User even if they had the same number of XP.

sql server – Cleaning replication tables before a snapshot is applied to the subscriber

We have a snapshot replication setup.

In some cases (e.g., when previous replications have failed), some subscribers get the following error:

The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "<FK_constraint>". The conflict occurred in database "", table "", column ''. (Source: MSSQLServer, Error number: 547)

I found out that executing the following clean-up SQL on the subscriber solves the issue:

delete dbo.MSsavedForeignKeys where constraint_name = N'<FK_constraint>'

Or simply executing the above SQL without any WHERE clause, cleaning out the whole dbo.MSsavedForeignKeys table, fixed all the problems. But this became a problem itself as the number of subscribers increased.

What I wonder is:

  1. Is it safe to add this DELETE statement (without a WHERE clause) to the pre-snapshot scripts to automate the process (see the sketch below)?
  2. Are there any other replication tables that could cause such problems and could be cleaned with pre-execution scripts?
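For reference, a guarded version of that clean-up as it might appear in a pre-snapshot script (a sketch only; whether it is safe for every topology is exactly the open question here):

-- Pre-snapshot clean-up on the subscriber (sketch)
-- Only touch the table if it exists in this database.
IF OBJECT_ID(N'dbo.MSsavedForeignKeys', N'U') IS NOT NULL
BEGIN
    DELETE dbo.MSsavedForeignKeys;  -- or keep a WHERE constraint_name = N'<FK_constraint>' filter
END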

postgresql – How to detect performance issues in a function working on temporary tables?

I have a function which creates many temporary tables and performs inserts, updates, deletes and selects on them.
Some columns are textual, and there are a lot of SIMILAR TO operations going on.

The function is taking too long to execute and I don’t know how to pin down the problem, because I can’t do an EXPLAIN ANALYZE on temporary tables to detect expensive operations.

What could possibly be done? Any suggestion would be appreciated.
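One option worth trying is the auto_explain module, which can log plans for statements executed inside functions, temporary tables included. Something along these lines in the session before calling the function (the parameters are real auto_explain settings; the zero threshold and the function name are illustrative):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log a plan for every statement
SET auto_explain.log_analyze = on;            -- include actual row counts and timings
SET auto_explain.log_nested_statements = on;  -- statements inside functions too

SELECT my_slow_function();  -- hypothetical name; plans land in the server log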

PostgreSQL 12 – Can’t see table details with \d (table) but can see all tables with \dt

I have a custom schema and one table in it. Following answers about searching through custom schemas, I have modified my search_path to include the new schema.

I can now see all objects in the new schema, so this one works:

xekata_dev=> \dt
                List of relations
 Schema |     Name     | Type  |      Owner
--------+--------------+-------+-----------------
 xekata | TestNodeBase | table | xekata_user_web
(1 row)

Now when I want to see table details I do:

xekata_dev=> \d TestNodeBase
Did not find any relation named "TestNodeBase".

which shows nothing… 🙁

I tried different ways too, but nothing works:

xekata_dev=> \d xekata.TestNodeBase
Did not find any relation named "xekata.TestNodeBase".

I am logged in as xekata_user_web and my search_path is:

xekata_dev=> show search_path;
  search_path
----------------
 xekata, public

What’s going on? Why can’t I see the table details?
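For what it’s worth, the usual culprit with a mixed-case name like TestNodeBase is case folding: psql lower-cases unquoted identifiers, so \d TestNodeBase actually searches for testnodebase. Double quotes preserve the case; a sketch:

xekata_dev=> \d "TestNodeBase"
xekata_dev=> \d xekata."TestNodeBase"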

data warehouse – How to deal with equally growing fact/dimension tables in data warehouse design?

I have a source data set with:

  1. customer
  2. customer_product_purchase
  3. customer_support_plan_purchase
  4. customer_support_request

All of them are related: a support request is raised against a support plan and a product purchase, and a customer buys a support plan for a product (which the customer also buys).

To design a data warehouse schema for this, I was initially thinking of creating a single fact table. I considered the following approaches:

  1. Having a consolidated fact table combining customer_product_purchase, customer_support_plan_purchase and customer_support_request, as they share a few common attributes (the uncommon ones can be left blank for the other row types) and, I believe, the same granularity (purchase of a product/support plan, raising a request against a support plan). This would mean losing some specific information to make the table generic, e.g., storing product warranty and support plan validity under the same name, validity.
  2. Creating a fact table from customer_product_purchase and customer_support_plan_purchase, which are both inherently purchases and can be kept together with some common and some uncommon attributes. customer_support_request would be treated separately.
  3. Creating a fact table around customer_support_request, as it ties to both of the other tables, which would become the dimensions. However, this means the dimensions would grow at the same rate as the fact table (which, I have read, is an indicator of bad design).

So how can I deal with a situation where the support plans, service requests and product purchases each grow independently? Is it best to just keep all of them separate? But since all (or at least two) of them have similar granularity, shouldn’t they be consolidated?
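To make option 2 concrete, here is a sketch of what the split might look like (all names and columns are illustrative, not from your source system):

-- Minimal customer dimension, just to make the sketch self-contained
CREATE TABLE dim_customer (
    customer_key BIGINT PRIMARY KEY,
    name         TEXT NOT NULL
);

-- Fact 1: purchases of either kind, one row per purchase
CREATE TABLE fact_purchase (
    purchase_id   BIGINT PRIMARY KEY,
    customer_key  BIGINT NOT NULL REFERENCES dim_customer (customer_key),
    purchase_type CHAR(1) NOT NULL,  -- 'P' = product, 'S' = support plan
    valid_until   DATE               -- warranty or plan validity; NULL if n/a
);

-- Fact 2: support requests kept at their own grain, tied back to purchases
CREATE TABLE fact_support_request (
    request_id          BIGINT PRIMARY KEY,
    customer_key        BIGINT NOT NULL REFERENCES dim_customer (customer_key),
    plan_purchase_id    BIGINT REFERENCES fact_purchase (purchase_id),
    product_purchase_id BIGINT REFERENCES fact_purchase (purchase_id),
    opened_at           TIMESTAMP NOT NULL
);

This way each fact table grows at its own rate, and neither has to serve as a dimension for the other.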