Using U-SQL to eliminate duplicate and null values in one specific column while keeping a second column properly aligned
I am trying to use U-SQL to remove duplicate, null, '' and NaN cells in a specific column named "Function" in a csv file. I also want the Product column to stay properly aligned with the Function column after the blank rows are removed, so I want to delete the same rows from the Product column that I delete from the Function column. Of the duplicated Function rows I want to keep only one, in this case the first occurrence. The Product column has no empty cells and all of its values are unique. Any help is greatly appreciated.

I know this could be done in a simpler way, but I want to automate the process with code because the data in the Data Lake will change over time. I think the code I currently have is close. The actual dataset is a very large file, and I am fairly sure the Function column contains at least 4 duplicate values that are not just empty cells. I need to eliminate both the duplicate values and the empty cells in the Function column, because the empty cells are also recognized as duplicates. In the next step of my school project, which does not include the Product column, I want to be able to use the Function values as a primary key.
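The intended transformation can be sketched in plain Python (the sample rows below are hypothetical, and this is only an illustration of the dedup rule, not the U-SQL solution):

```python
import csv
from io import StringIO

# Hypothetical rows mimicking the two relevant columns of Function.csv;
# the real file is much larger.
raw = """Function,Product
Sales,P001
,P002
Sales,P003
NaN,P004
Marketing,P005
"""

seen = set()
kept = []
for row in csv.DictReader(StringIO(raw)):
    f = row["Function"].strip()
    # Drop empty, blank, and "NaN" cells, and repeated Function values;
    # keeping only the first occurrence preserves the Function/Product pairing.
    if not f or f.lower() == "nan" or f in seen:
        continue
    seen.add(f)
    kept.append((f, row["Product"]))

print(kept)  # [('Sales', 'P001'), ('Marketing', 'P005')]
```

Each surviving Function value appears exactly once, so it could serve as a primary key in the next step.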
DECLARE @inputfile string = "/input/Function.csv";
//DECLARE @OutputUserFile string = "/output/Test_Function/UniqueFunction.csv";

@RawData =
    EXTRACT Function string,
            Product string
    FROM @inputfile
    USING Extractors.Csv(encoding: Encoding.[ASCII]);

// Query the Function data:
// set ROW_NUMBER() for each row within the window partitioned by the Function field
@RawDataDuplicates =
    SELECT ROW_NUMBER() OVER (PARTITION BY Function) AS RowNum,
           Function AS function
    FROM @RawData;

// ORDER BY Function to see duplicate rows next to one another
@RawDataDuplicates2 =
    SELECT *
    FROM @RawDataDuplicates
    ORDER BY function
    OFFSET 0 ROWS;

// Write to file
//OUTPUT @RawDataDuplicates2
//TO "/output/Test_Function/FunctionOver-Dups.csv"
//USING Outputters.Csv();

// GROUP BY and count the number of duplicates per Function
@groupBy =
    SELECT Function,
           COUNT(Function) AS FunctionCount
    FROM @RawData
    GROUP BY Function
    ORDER BY Function
    OFFSET 0 ROWS;

// Write to file
//OUTPUT @groupBy
//TO "/output/Test_Function/FunctionGroupBy-Dups.csv"
//USING Outputters.Csv();

@RawDataDuplicates3 =
    SELECT *
    FROM @RawDataDuplicates2
    WHERE RowNum == 1;

OUTPUT @RawDataDuplicates3
TO "/output/Test_Function/FunctionUniqueEmail.csv"
USING Outputters.Csv(outputHeader: true);

//OUTPUT @RawData
//TO @OutputUserFile
//USING Outputters.Csv(outputHeader: true);
I have also commented out some code that I don't necessarily need. When I run the code as is, I currently get this error: E_CSC_USER_REDUNDANTSTATEMENTINSCRIPT, with the message "This statement is dead code." It doesn't give a line number, but maybe it is the "Function AS function" line?
Here is a sample file; it is a small subset of the full spreadsheet and only contains data in the 2 relevant columns. The full spreadsheet has data in all columns.
https://www.dropbox.com/s/auu2aco4b037xn7/Function.csv?dl=0
Here is a screenshot of the output I get when I follow wBob's suggestion:
You can apply a series of transformations to your data, using string functions like .Length and ranking functions like ROW_NUMBER, to remove the records you want, for example:
@input =
    EXTRACT CompanyID string,
            division string,
            store_location string,
            International_Id string,
            Function string,
            office_location string,
            address string,
            Product string,
            Revenue string,
            sales_goal string,
            Manager string,
            Country string
    FROM "/input/input142.csv"
    USING Extractors.Csv(skipFirstNRows : 1);

// Remove rows where Function is null or empty
@working =
    SELECT *
    FROM @input
    WHERE Function != null && Function.Length > 0;

// Rank the rows by Function and keep only the first one per Function value
@working =
    SELECT CompanyID,
           division,
           store_location,
           International_Id,
           Function,
           office_location,
           address,
           Product,
           Revenue,
           sales_goal,
           Manager,
           Country
    FROM
    (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY Function ORDER BY Product) AS rn
        FROM @working
    ) AS x
    WHERE rn == 1;

@output = SELECT * FROM @working;

OUTPUT @output
TO "/output/output.csv"
USING Outputters.Csv(quoting : false);
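The core of the script above is the ROW_NUMBER() OVER (PARTITION BY Function ORDER BY Product) ... WHERE rn == 1 step. The same logic can be sketched in Python (hypothetical sample data, for illustration only):

```python
# Sample rows standing in for the extracted rowset; values are hypothetical.
rows = [
    {"Function": "Sales", "Product": "P003"},
    {"Function": "Sales", "Product": "P001"},
    {"Function": "", "Product": "P002"},
    {"Function": "Marketing", "Product": "P005"},
]

# Equivalent of WHERE Function.Length > 0: drop rows with an empty Function.
rows = [r for r in rows if len(r["Function"]) > 0]

# Equivalent of ROW_NUMBER() OVER (PARTITION BY Function ORDER BY Product)
# followed by WHERE rn == 1: within each Function group, keep only the row
# that comes first when ordered by Product.
best = {}
for r in sorted(rows, key=lambda r: r["Product"]):
    best.setdefault(r["Function"], r)

result = sorted(best.values(), key=lambda r: r["Product"])
print(result)
```

Note that ordering by Product makes "first occurrence" deterministic; without an ORDER BY inside the OVER clause, which row survives is arbitrary.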
My results: