Ignore first and last records using LogicApp
I have a very simple Logic App in which I want to ignore the first and last x records. The definition is below, and you should be able to see my results:
{
"definition": {
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
"actions": {
"Convert_Each_Row_into_Array": {
"inputs": "@split(variables('CSV Data'),'\n')",
"runAfter": {
"Initialize_CSV_Data": [
"Succeeded"
]
},
"type": "Compose"
},
"Initialize_CSV_Data": {
"inputs": {
"variables": [
{
"name": "CSV Data",
"type": "string",
"value": "rubbish1,rubbish2,rubbish3\nblank1,blank2,blank3\nheader1,header2,header3\ndata1,data2,data3\ndata4,data5,data6\ndata7,data8,data9"
}
]
},
"runAfter": {
"Parse_JSON": [
"Succeeded"
]
},
"type": "InitializeVariable"
},
"Parse_JSON": {
"inputs": {
"content": "@triggerBody()",
"schema": {
"properties": {
"NumberOfFooterRows": {
"type": "integer"
},
"NumberOfHeaderRows": {
"type": "integer"
}
},
"type": "object"
}
},
"runAfter": {},
"type": "ParseJson"
},
"Skip_Footer": {
"inputs": "@take(outputs('Skip_Header'),sub(length(outputs('Skip_Header')),body('Parse_JSON')?['NumberOfFooterRows']))",
"runAfter": {
"Skip_Header": [
"Succeeded"
]
},
"type": "Compose"
},
"Skip_Header": {
"inputs": "@take(skip(outputs('Convert_Each_Row_into_Array'),body('Parse_JSON')?['NumberOfHeaderRows']),sub(length(outputs('Convert_Each_Row_into_Array')),1))",
"runAfter": {
"Convert_Each_Row_into_Array": [
"Succeeded"
]
},
"type": "Compose"
}
},
"contentVersion": "1.0.0.0",
"outputs": {},
"parameters": {},
"triggers": {
"manual": {
"inputs": {},
"kind": "Http",
"type": "Request"
}
}
},
"parameters": {}
}
The payload is:
{
"NumberOfHeaderRows":3,
"NumberOfFooterRows":2
}
This works fine, but it was only for testing: the real data is stored as a CSV file on SFTP, so I added an extra step to get the file content and put it into the Initialise CSV Data variable.
The CSV file is literally identical to the initial CSV Data variable:
rubbish1,rubbish2,rubbish3
blank1,blank2,blank3
header1,header2,header3
data1,data2,data3
data4,data5,data6
What I am left with now is a result that successfully removes the first 3 rows but does not remove the last 2. It doesn't raise any errors, but if I click
Download (Alt/Option + click)
it just shows
[]
This is because Initialize CSV Data
is of type string, and skip() only skips the number of characters you specify, which is why you need to convert the string to an array and skip array items instead. Also, looking at the payload being supplied, you are sending string values. Instead, you can send integers by changing
{
"NumberOfHeaderRows":"3",
"NumberOfFooterRows":"2"
}
to
{
"NumberOfHeaderRows":3,
"NumberOfFooterRows":2
}
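The distinction matters because skip() and take() behave like slicing: on a string they operate per character, on an array per element. A minimal Python sketch of that behaviour (the `skip`/`take` helper names are illustrative stand-ins, not Logic App APIs):

```python
# skip()/take() semantics, sketched with Python slicing:
# on a string they act per character, on a list per element.
def skip(seq, n):
    return seq[n:]

def take(seq, n):
    return seq[:n]

csv_text = "h1,h2\ndata1\ndata2"

# Skipping 3 on the raw string drops 3 *characters*, not 3 rows:
print(skip(csv_text, 3))      # "h2\ndata1\ndata2"

# Skipping 1 on the split array drops a whole *row*:
rows = csv_text.split("\n")
print(skip(rows, 1))          # ['data1', 'data2']
```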
If using an array
After converting your CSV data from a string to an array, it works. I added an extra Parse_JSON
step just to retrieve NumberOfHeaderRows
and NumberOfFooterRows,
which makes things clearer. Here is a screenshot of my Logic App -
Results:
Expression used in Skip Headers:
@take(skip(variables('CSV Data'),body('Parse_JSON')?['NumberOfHeaderRows']),sub(length(variables('CSV Data')),1))
Expression used in Skip Footer:
@take(outputs('Skip_Header'),sub(length(outputs('Skip_Header')),body('Parse_JSON')?['NumberOfFooterRows']))
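Translated into Python list slicing (an illustrative sketch, not Logic App code), the two expressions above trim the rows like this:

```python
rows = [
    "rubbish1,rubbish2,rubbish3",
    "blank1,blank2,blank3",
    "header1,header2,header3",
    "data1,data2,data3",
    "data4,data5,data6",
    "data7,data8,data9",
]
headers, footers = 3, 2

# Skip_Header: skip the first `headers` rows, then take len(rows) - 1 of
# what remains (here the take() is just an upper bound, since fewer than
# len(rows) - 1 rows are left after the skip)
after_header = rows[headers:][:len(rows) - 1]

# Skip_Footer: keep all but the last `footers` rows
result = after_header[:len(after_header) - footers]
print(result)                 # ['data1,data2,data3']
```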
Below is my code view:
{
"definition": {
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
"actions": {
"Initialize_CSV_Data": {
"inputs": {
"variables": [
{
"name": "CSV Data",
"type": "array",
"value": [
"rubbish1,rubbish2,rubbish3",
"blank1,blank2,blank3",
"header1,header2,header3",
"data1,data2,data3",
"data4,data5,data6",
"data7,data8,data9"
]
}
]
},
"runAfter": {
"Parse_JSON": [
"Succeeded"
]
},
"type": "InitializeVariable"
},
"Parse_JSON": {
"inputs": {
"content": "@triggerBody()",
"schema": {
"properties": {
"NumberOfFooterRows": {
"type": "integer"
},
"NumberOfHeaderRows": {
"type": "integer"
}
},
"type": "object"
}
},
"runAfter": {},
"type": "ParseJson"
},
"Skip_Footer": {
"inputs": "@take(outputs('Skip_Header'),sub(length(outputs('Skip_Header')),body('Parse_JSON')?['NumberOfFooterRows']))",
"runAfter": {
"Skip_Header": [
"Succeeded"
]
},
"type": "Compose"
},
"Skip_Header": {
"inputs": "@take(skip(variables('CSV Data'),body('Parse_JSON')?['NumberOfHeaderRows']),sub(length(variables('CSV Data')),1))",
"runAfter": {
"Initialize_CSV_Data": [
"Succeeded"
]
},
"type": "Compose"
}
},
"contentVersion": "1.0.0.0",
"outputs": {},
"parameters": {},
"triggers": {
"manual": {
"inputs": {},
"kind": "Http",
"type": "Request"
}
}
},
"parameters": {}
}
If using a string
Considering you are left with a string, you can convert it to an array using the split function, which turns each line into an array item. Here is the Logic App.
Results:
Here is the expression in Convert Each Row into Array:
split(variables('CSV Data'),'\n')
Now you can use the output of the Convert Each Row into Array
action to achieve the requirement.
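An end-to-end sketch of this string path in Python (an illustrative analogy, not Logic App code): split first, then trim whole rows.

```python
# Split the newline-delimited string into one array item per line,
# then skip the header rows and drop the footer rows.
csv_data = ("rubbish1,rubbish2,rubbish3\nblank1,blank2,blank3\n"
            "header1,header2,header3\ndata1,data2,data3\n"
            "data4,data5,data6\ndata7,data8,data9")
headers, footers = 3, 2

rows = csv_data.split("\n")                          # Convert_Each_Row_into_Array
after_header = rows[headers:][:len(rows) - 1]        # Skip_Header
result = after_header[:len(after_header) - footers]  # Skip_Footer
print(result)                                        # ['data1,data2,data3']
```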
Below is the code view after making the above changes:
{
"definition": {
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
"actions": {
"Convert_Each_Row_into_Array": {
"inputs": "@split(variables('CSV Data'),'\n')",
"runAfter": {
"Initialize_CSV_Data": [
"Succeeded"
]
},
"type": "Compose"
},
"Initialize_CSV_Data": {
"inputs": {
"variables": [
{
"name": "CSV Data",
"type": "string",
"value": "rubbish1,rubbish2,rubbish3\nblank1,blank2,blank3\nheader1,header2,header3\ndata1,data2,data3\ndata4,data5,data6\ndata7,data8,data9"
}
]
},
"runAfter": {
"Parse_JSON": [
"Succeeded"
]
},
"type": "InitializeVariable"
},
"Parse_JSON": {
"inputs": {
"content": "@triggerBody()",
"schema": {
"properties": {
"NumberOfFooterRows": {
"type": "integer"
},
"NumberOfHeaderRows": {
"type": "integer"
}
},
"type": "object"
}
},
"runAfter": {},
"type": "ParseJson"
},
"Skip_Footer": {
"inputs": "@take(outputs('Skip_Header'),sub(length(outputs('Skip_Header')),body('Parse_JSON')?['NumberOfFooterRows']))",
"runAfter": {
"Skip_Header": [
"Succeeded"
]
},
"type": "Compose"
},
"Skip_Header": {
"inputs": "@take(skip(outputs('Convert_Each_Row_into_Array'),body('Parse_JSON')?['NumberOfHeaderRows']),sub(length(outputs('Convert_Each_Row_into_Array')),1))",
"runAfter": {
"Convert_Each_Row_into_Array": [
"Succeeded"
]
},
"type": "Compose"
}
},
"contentVersion": "1.0.0.0",
"outputs": {},
"parameters": {},
"triggers": {
"manual": {
"inputs": {},
"kind": "Http",
"type": "Request"
}
}
},
"parameters": {}
}