FastAPI running in conjunction with Alembic, but autogenerate does not detect the models
I'm fairly new to FastAPI, but decided to set up a project with Postgres and Alembic. I can get Alembic to create a new revision every time I run an autogenerate migration, but for some reason it never picks up any changes from my models, and alas, the revisions stay blank. I'm at a bit of a loss as to what's going wrong.
Main.py
from fastapi import FastAPI
import os

app = FastAPI()

@app.get("/")
async def root():
    return {"message": os.getenv("SQLALCHEMY_DATABASE_URL")}

@app.get("/hello/{name}")
async def say_hello(name: str):
    return {"message": f"Hello {name}"}
Database.py
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
import os

SQLALCHEMY_DATABASE_URL = os.getenv("SQLALCHEMY_DATABASE_URL")

engine = create_engine("postgresql://postgres:mysuperpassword@localhost/rodney")
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()

def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        # close the session on success as well, not only on error
        db.close()
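For context, a minimal sketch of how this get_db dependency is typically consumed; the /counterparties route is hypothetical, and CounterParty is the model shown below:

from fastapi import Depends
from sqlalchemy.orm import Session

# Hypothetical endpoint, not part of the original post:
# FastAPI runs get_db per request and closes the session afterwards.
@app.get("/counterparties")
def list_counterparties(db: Session = Depends(get_db)):
    rows = db.query(CounterParty).all()
    return [{"id": row.id, "Name": row.Name} for row in rows]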
My only model so far:
from sqlalchemy import Integer, String
from sqlalchemy.sql.schema import Column

from ..db.database import Base

class CounterParty(Base):
    __tablename__ = "Counterparty"

    id = Column(Integer, primary_key=True)
    Name = Column(String, nullable=False)
env.py (Alembic)
from logging.config import fileConfig

from sqlalchemy import engine_from_config
from sqlalchemy import pool

from alembic import context

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)

# add your model's MetaData object here
# for 'autogenerate' support
from app.db.database import Base

target_metadata = Base.metadata

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.

def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )

    with context.begin_transaction():
        context.run_migrations()

def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        context.configure(
            connection=connection, target_metadata=target_metadata
        )

        with context.begin_transaction():
            context.run_migrations()

if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
Now, when I run alembic revision --autogenerate -m "initial setup", Alembic creates revision after revision, but they all come out blank.
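Each generated revision file looks roughly like this (illustrative sketch of Alembic's empty output; the revision ID is made up):

"""initial setup

Revision ID: 28f1a2b3c4d5
Revises:
"""
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic (made-up ID for illustration)
revision = '28f1a2b3c4d5'
down_revision = None
branch_labels = None
depends_on = None

def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    pass
    # ### end Alembic commands ###

def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    pass
    # ### end Alembic commands ###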
My folder structure (screenshot):
If anyone has any ideas, I would greatly appreciate it. Cheers!
In my case, I was deploying a Transformer BERT model on FastAPI, and FastAPI likewise failed to recognize my model and would not accept the model's inputs and outputs.
The code I used for my case:
from fastapi import FastAPI
from pydantic import BaseModel

# Imports the original snippet relied on implicitly
# (the exact sources are assumptions based on the names used):
import pickle
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model
from transformers import DistilBertTokenizerFast

class Entities(BaseModel):
    text: str

class EntitesOut(BaseModel):
    headings: str
    Probability: str
    Prediction: str

model_load = load_model('BERT_HATESPEECH')
tokenizer = DistilBertTokenizerFast.from_pretrained('BERT_HATESPEECH_TOKENIZER')
file_to_read = open("label_encoder_bert_hatespeech.pkl", "rb")
label_encoder = pickle.load(file_to_read)

app = FastAPI()

@app.post('/predict', response_model=EntitesOut)
def prep_data(text: Entities):
    text = text.text
    tokens = tokenizer(text, max_length=150, truncation=True,
                       padding='max_length',
                       add_special_tokens=True,
                       return_tensors='tf')
    tokens = {'input_ids': tf.cast(tokens['input_ids'], tf.float64),
              'attention_mask': tf.cast(tokens['attention_mask'], tf.float64)}
    # single string listing the label names
    headings = '''Non-offensive', 'identity_hate', 'neither', 'obscene','offensive', 'sexism'''
    probs = model_load.predict(tokens)[0]
    pred = label_encoder.inverse_transform([np.argmax(probs)])
    return {"headings": headings,
            "Probability": str(np.round(probs, 3)),
            "Prediction": str(pred)}
The code above uses pydantic's BaseModel: I created BaseModel classes to take text: str as the input, with headings, Probability, and Prediction as the outputs in the EntitesOut class.
After that, the models were recognized and the endpoint returned a 200 status code along with its outputs.
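As a quick usage sketch (not from the original answer; the URL and example text are assumptions), the endpoint can then be exercised like this:

import requests

# assumes the app is running locally, e.g. via `uvicorn main:app`
resp = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"text": "an example sentence to classify"},  # matches the Entities model
)
print(resp.status_code)  # expect 200
print(resp.json())       # {"headings": ..., "Probability": ..., "Prediction": ...}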
The env.py file cannot find your models because you never import them. One solution is to simply import them all right there in env.py (use an absolute import, as with from app.db.database import Base above; env.py is not inside your package, so a relative import would fail):

from app.models import *

However, you need an __init__.py file in the models directory that imports all of the models.
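For illustration, a minimal app/models/__init__.py could look like this (the module and class names are taken from the question; treat the exact layout as an assumption):

# app/models/__init__.py
# Importing each model module here registers its tables on Base.metadata,
# so a single `from app.models import *` in env.py is enough for autogenerate.
from .counterPartyModel import CounterParty

__all__ = ["CounterParty"]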
Another way (not recommended, though): if you only have a single model, you can import it directly:

from app.models.counterPartyModel import *