Increase the 1000-row limit for AWS Lambda result counts from execute_sql using rds-data, or use a different package?
I've been using AWS Aurora with a Python Lambda function to run queries for our application.
The Lambda function works well, but it returns only the first 1000 results rather than all of them. I tried to raise the limit to 5000 with a paginator, but couldn't find a suitable solution:
    import boto3

    def lambda_handler(event, context):
        client = boto3.client('rds-data')
        readParam = event['query']      # e.g. 'select * from table;'
        database1 = event['database']   # database name
        response = client.execute_sql(
            awsSecretStoreArn='arn:aws:secretsmanager:us-east-1:xxxxx:secret:abc/read-XXXX',
            database=database1,
            dbClusterOrInstanceArn='arn:aws:rds:us-east-1:xxxxx:cluster:abcd-abc',
            sqlStatements=readParam
        )
        return {
            'statusCode': 200,
            'headers': {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Origin': '*',
                'Access-Control-Allow-Headers': 'Content-Type',
                'Access-Control-Allow-Methods': 'OPTIONS,POST'
            },
            'body': response
        }
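One workaround worth noting: `execute_sql` is the deprecated first-generation Data API call; the current boto3 `rds-data` client exposes `execute_statement` instead, and (to my understanding) its responses are also capped, so large result sets still need to be fetched in pages. A minimal sketch of that idea is below, paging with `LIMIT`/`OFFSET` appended to the query. The helper names (`paged_statements`, `fetch_all_rows`) and the injected `run_sql` callable are my own illustration, not part of boto3:

```python
import itertools


def paged_statements(base_sql, page_size):
    """Yield the base query with successive LIMIT/OFFSET clauses appended.

    base_sql is assumed to be a plain SELECT with no LIMIT of its own and a
    stable ORDER BY (OFFSET paging is only deterministic with an ordering).
    """
    base = base_sql.rstrip().rstrip(';')
    for page in itertools.count():
        yield f"{base} LIMIT {page_size} OFFSET {page * page_size}"


def fetch_all_rows(run_sql, base_sql, page_size=1000):
    """Collect every row by running one page at a time until a short page.

    run_sql is any callable that executes one SQL string and returns a list
    of rows -- e.g. a thin wrapper around rds-data's execute_statement.
    """
    rows = []
    for stmt in paged_statements(base_sql, page_size):
        page = run_sql(stmt)
        rows.extend(page)
        if len(page) < page_size:  # a short page means no more data
            break
    return rows
```

With boto3 this could plausibly be wired up as `run_sql = lambda sql: client.execute_statement(resourceArn=..., secretArn=..., database=..., sql=sql)['records']`, though whether OFFSET paging is acceptable depends on the table size, since each page re-scans the skipped rows.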
I also tried using SQLAlchemy with pydataapi and deploying an AWS deployment package to Lambda, but it didn't work: the Lambda function didn't pick up the Python file containing lambda_handler. The code is below:
    import pymysql.cursors
    from sqlalchemy.engine import create_engine, ResultProxy

    def lambda_handler(event, context):
        readParam = event['query']
        database1 = event['database']
        engine = create_engine(
            'mysql+pydataapi://',
            connect_args={
                'resource_arn': 'arn:aws:rds:us-east-1:xxxx:cluster:abcd-abc',
                'secret_arn': 'arn:aws:secretsmanager:us-east-1:xxxx:secret:abc/read-XXXX',
                'database': 'mimic_dev'
            }
        )
        result: ResultProxy = engine.execute(readParam)
        return {
            'statusCode': 200,
            'headers': {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Origin': '*',
                'Access-Control-Allow-Headers': 'Content-Type',
                'Access-Control-Allow-Methods': 'OPTIONS,POST'
            },
            'body': result.fetchall()  # note: rows still need JSON serialization
        }
Is there a better alternative to what I've been trying?
Any help is appreciated. Thanks.
This problem has been solved.
Resource: AWS Lambda deployment package in Python
And the following code:
    import json
    import pymysql.cursors
    from sqlalchemy.engine import create_engine, ResultProxy

    def lambda_handler(event, context):
        engine = create_engine(
            'mysql+pydataapi://',
            connect_args={
                'resource_arn': 'arn:aws:rds:us-east-1:xxxxx:cluster:xxxx',
                'secret_arn': 'arn:aws:secretsmanager:us-east-1:xxxx:secret:decima/abcd-abc',
                'database': 'mimic_dev'
            }
        )
        result: ResultProxy = engine.execute("select * from person limit 10000")
        resultValue = result.fetchall()
        return json.dumps([dict(r) for r in resultValue])
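One caveat about the accepted code: it returns a bare JSON string and drops the `statusCode`/CORS headers that the original handler had, which matters if the function sits behind an API Gateway proxy integration (where the body must be a string inside that response shape). A small sketch of restoring it, assuming the rows have already been converted to a list of dicts; `make_response` is a hypothetical helper, not an AWS API:

```python
import json


def make_response(rows, status=200):
    """Wrap already-serializable query rows in the API-Gateway-style
    response shape used in the question (string body, CORS headers)."""
    return {
        'statusCode': status,
        'headers': {
            'Content-Type': 'application/json',
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Headers': 'Content-Type',
            'Access-Control-Allow-Methods': 'OPTIONS,POST'
        },
        'body': json.dumps(rows)  # body must be a string, not a list of rows
    }
```

The handler's last line would then become something like `return make_response([dict(r) for r in resultValue])`.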