Using SqlReaderQuery in Data Pipeline
I am copying data from an Azure SQL database to a blob using a query.
Here is the activity script:
{
    "type": "Copy",
    "typeProperties": {
        "source": {
            "type": "SqlSource",
            "sqlReaderQuery": "select distinct a.*, b.Name from [dbo].[Transactxxxxxxx] a join dbo.Anxxxxx b on a.[Clixxxxx] = b.[Fixxxxxx] where b.Name = 'associations'"
        },
        "sink": {
            "type": "BlobSink",
            "writeBatchSize": 0,
            "writeBatchTimeout": "00:00:00"
        }
    },
    "inputs": [
        {
            "name": "Txnsxxxxxxxxxxx"
        }
    ],
    "outputs": [
        {
            "name": "Txnxxxxxxxxxxxx"
        }
    ],
    "policy": {
        "timeout": "01:00:00",
        "concurrency": 1,
        "retry": 3
    },
    "scheduler": {
        "frequency": "Hour",
        "interval": 1
    },
    "name": "Copyxxxxxxxxxx"
}
The activity appears to run successfully, but it does not put any files into the sink.
The dataset points to the correct container.
Based on the information you provided, I can see that the runs logged successfully in our service. I noticed the target blob is specified as "experimentinput/Inxxx_To_xx_Associations.csv/Inxxx_To_xx.csv". Because that blob name is static, every slice run overwrites the same blob file. You can use the partitionedBy property to generate a dynamic blob name per slice. For details, please refer to this article: https://azure.microsoft.com/en-us/documentation/articles/data-factory-azure-blob-connector/#azure-blob-dataset-type-properties.
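As a rough sketch of what the linked article describes, the output blob dataset could use partitionedBy with SliceStart to build a distinct folder/file per hourly slice. The folder layout, file name pattern, and the linked service name below are illustrative placeholders, not taken from the original pipeline:

```json
{
    "name": "Txnxxxxxxxxxxxx",
    "properties": {
        "type": "AzureBlob",
        "linkedServiceName": "AzureStorageLinkedService",
        "typeProperties": {
            "folderPath": "experimentinput/{Year}/{Month}/{Day}",
            "fileName": "Associations_{Hour}.csv",
            "partitionedBy": [
                { "name": "Year", "value": { "type": "DateTime", "date": "SliceStart", "format": "yyyy" } },
                { "name": "Month", "value": { "type": "DateTime", "date": "SliceStart", "format": "MM" } },
                { "name": "Day", "value": { "type": "DateTime", "date": "SliceStart", "format": "dd" } },
                { "name": "Hour", "value": { "type": "DateTime", "date": "SliceStart", "format": "HH" } }
            ],
            "format": { "type": "TextFormat" }
        },
        "availability": {
            "frequency": "Hour",
            "interval": 1
        }
    }
}
```

With this, an hourly slice starting at 2016-05-01 14:00 would be written to experimentinput/2016/05/01/Associations_14.csv instead of overwriting a single fixed blob.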
根据您提供的信息,我发现 运行 成功登录了我们的服务。我注意到目标 blob 被指定为 "experimentinput/Inxxx_To_xx_Associations.csv/Inxxx_To_xx.csv"。 blob名称是静态的,多个切片运行会覆盖同一个blob文件。您可以利用 partitionBy 属性 来获得动态 blob 名称。详情请参考这篇文章:https://azure.microsoft.com/en-us/documentation/articles/data-factory-azure-blob-connector/#azure-blob-dataset-type-properties.