Azure Python SDK & Machine Learning Studio Web Service Batch Execution Snippet: TypeError

The first problem has been solved; please scroll down to EDIT 2.

I am trying to access a web service deployed via Azure Machine Learning Studio, using the Python batch execution sample code from the bottom of this page:

https://studio.azureml.net/apihelp/workspaces/306bc1f050ba4cdba0dbc6cc561c6ab0/webservices/e4e3d2d32ec347ae9a829b200f7d31cd/endpoints/61670382104542bc9533a920830b263c/jobs

I have already fixed one issue based on this question (replacing BlobService with BlockBlobService etc.):

https://studio.azureml.net/apihelp/workspaces/306bc1f050ba4cdba0dbc6cc561c6ab0/webservices/e4e3d2d32ec347ae9a829b200f7d31cd/endpoints/61670382104542bc9533a920830b263c/jobs

I have also entered the API key, container name, URL, account_key and account_name as instructed.

However, the code snippet seems to be even more outdated today than it was back then, because I am now getting a different error:

File "C:/Users/Alex/Desktop/scripts/BatchExecution.py", line 80, in uploadFileToBlob
    blob_service = asb.BlockBlobService(account_name=storage_account_name, account_key=storage_account_key)

  File "C:\Users\Alex\Anaconda3\lib\site-packages\azure\storage\blob\blockblobservice.py", line 145, in __init__

  File "C:\Users\Alex\Anaconda3\lib\site-packages\azure\storage\blob\baseblobservice.py", line 205, in __init__

TypeError: get_service_parameters() got an unexpected keyword argument 'token_credential'

I also noticed that, when installing the Azure SDK for Python via pip, I got the following warnings at the end of the process (the installation succeeded, though):

azure-storage-queue 1.3.0 has requirement azure-storage-common<1.4.0,>=1.3.0, but you'll have azure-storage-common 1.1.0 which is incompatible.

azure-storage-file 1.3.0 has requirement azure-storage-common<1.4.0,>=1.3.0, but you'll have azure-storage-common 1.1.0 which is incompatible.

azure-storage-blob 1.3.0 has requirement azure-storage-common<1.4.0,>=1.3.0, but you'll have azure-storage-common 1.1.0 which is incompatible.

I cannot find anything about all of this in the latest documentation for the Python SDK (it does not even contain the term 'token_credential'):

https://media.readthedocs.org/pdf/azure-storage/latest/azure-storage.pdf

Does anyone know what went wrong during the installation, or why the 'token_credential' TypeError pops up during execution?

Or does anyone know how to install the necessary versions of azure-storage-common or azure-storage-blob?
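(If it is just a matter of pinning versions, I would guess something like the following, using the 1.3.0 versions the pip warnings ask for, though I am not sure these are the right ones:)

pip uninstall azure-storage-common
pip install azure-storage-common==1.3.0 azure-storage-blob==1.3.0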

EDIT: Here is my code (not reproducible, though, since I changed the keys before posting):

# How this works:
#
# 1. Assume the input is present in a local file (if the web service accepts input)
# 2. Upload the file to an Azure blob - you'd need an Azure storage account
# 3. Call BES to process the data in the blob.
# 4. The results get written to another Azure blob.
# 5. Download the output blob to a local file
#
# Note: You may need to download/install the Azure SDK for Python.
# See: http://azure.microsoft.com/en-us/documentation/articles/python-how-to-install/

import urllib.request
import urllib.error   # Python 3: urllib.request/urllib.error replace Python 2's urllib2

import json
import time
import azure.storage.blob as asb          # BlockBlobService replaces the old BlobService


def printHttpError(httpError):
    print("The request failed with status code: " + str(httpError.code))

    # Print the headers - they include the request ID and the timestamp, which are useful for debugging the failure
    print(httpError.info())

    print(json.loads(httpError.read()))
    return


def saveBlobToFile(blobUrl, resultsLabel):
    output_file = "myresults.csv" # Replace this with the location you would like to use for your output file
    print("Reading the result from " + blobUrl)
    try:
        response = urllib.request.urlopen(blobUrl)
    except urllib.error.HTTPError as error:
        # Bind the exception instance so its code/headers/body can be printed
        printHttpError(error)
        return

    # The response body is bytes, so write the output file in binary mode
    with open(output_file, "wb") as f:
        f.write(response.read())
    print(resultsLabel + " have been written to the file " + output_file)
    return


def processResults(result):
    # Download only the first output to a local file; the locations of the others are just printed
    first = True
    results = result["Results"]
    for outputName in results:
        result_blob_location = results[outputName]
        sas_token = result_blob_location["SasBlobToken"]
        base_url = result_blob_location["BaseLocation"]
        relative_url = result_blob_location["RelativeLocation"]

        print("The results for " + outputName + " are available at the following Azure Storage location:")
        print("BaseLocation: " + base_url)
        print("RelativeLocation: " + relative_url)
        print("SasBlobToken: " + sas_token)


        if (first):
            first = False
            url3 = base_url + relative_url + sas_token
            saveBlobToFile(url3, "The results for " + outputName)
    return



def uploadFileToBlob(input_file, input_blob_name, storage_container_name, storage_account_name, storage_account_key):
    blob_service = asb.BlockBlobService(account_name=storage_account_name, account_key=storage_account_key)

    print("Uploading the input to blob storage...")
    with open(input_file, "r") as f:
        data_to_upload = f.read()
    blob_service.put_blob(storage_container_name, input_blob_name, data_to_upload, x_ms_blob_type="BlockBlob")

def invokeBatchExecutionService():
    storage_account_name = "storage1" # Replace this with your Azure Storage Account name
    storage_account_key = "kOveEtQMoP5zbUGfFR47" # Replace this with your Azure Storage Key
    storage_container_name = "input" # Replace this with your Azure Storage Container name
    connection_string = "DefaultEndpointsProtocol=https;AccountName=" + storage_account_name + ";AccountKey=" + storage_account_key #"DefaultEndpointsProtocol=https;AccountName=mayatostorage1;AccountKey=aOYA2P5VQPR3ZQCl+aWhcGhDRJhsR225teGGBKtfXWwb2fNEo0CrhlwGWdfbYiBTTXPHYoKZyMaKuEAU8A/Fzw==;EndpointSuffix=core.windows.net"
    api_key = "5wUaln7n99rt9k+enRLG2OrhSsr9VLeoCfh0q3mfYo27hfTCh32f10PsRjJtuA==" # Replace this with the API key for the web service
    url = "https://ussouthcentral.services.azureml.net/workspaces/306bc1f050/services/61670382104542bc9533a920830b263c/jobs" #"https://ussouthcentral.services.azureml.net/workspaces/306bc1f050ba4cdba0dbc6cc561c6ab0/services/61670382104542bc9533a920830b263c/jobs/job_id/start?api-version=2.0"



    uploadFileToBlob(r"C:\Users\Alex\Desktop_da.csv", # Replace this with the location of your input file
                     "input1datablob.csv", # Replace this with the name you would like to use for your Azure blob; this needs to have the same extension as the input file 
                     storage_container_name, storage_account_name, storage_account_key)

    payload = {
        "Inputs": {
            "input1": { "ConnectionString": connection_string, "RelativeLocation": "/" + storage_container_name + "/input1datablob.csv" },
        },
        "Outputs": {
            "output1": { "ConnectionString": connection_string, "RelativeLocation": "/" + storage_container_name + "/output1results.csv" },
        },
        "GlobalParameters": {}
    }

    body = str.encode(json.dumps(payload))
    headers = { "Content-Type":"application/json", "Authorization":("Bearer " + api_key)}
    print("Submitting the job...")

    # submit the job
    req = urllib.request.Request(url + "?api-version=2.0", body, headers)
    try:
        response = urllib.request.urlopen(req)
    except urllib.error.HTTPError as error:
        printHttpError(error)
        return

    # The response body is bytes; decode it before doing string operations
    result = response.read().decode("utf-8")
    job_id = result[1:-1] # remove the enclosing double-quotes
    print("Job ID: " + job_id)


    # start the job
    print("Starting the job...")
    # The request body must be bytes in Python 3, so pass b"" rather than ""
    req = urllib.request.Request(url + "/" + job_id + "/start?api-version=2.0", b"", headers)
    try:
        response = urllib.request.urlopen(req)
    except urllib.error.HTTPError as error:
        printHttpError(error)
        return

    url2 = url + "/" + job_id + "?api-version=2.0"

    while True:
        print("Checking the job status...")
        req = urllib.request.Request(url2, headers = { "Authorization":("Bearer " + api_key) })
        try:
            response = urllib.request.urlopen(req)
        except urllib.error.HTTPError as error:
            printHttpError(error)
            return

        result = json.loads(response.read())
        status = result["StatusCode"]
        if (status == 0 or status == "NotStarted"):
            print("Job " + job_id + " not yet started...")
        elif (status == 1 or status == "Running"):
            print("Job " + job_id + " running...")
        elif (status == 2 or status == "Failed"):
            print("Job " + job_id + " failed!")
            print("Error details: " + result["Details"])
            break
        elif (status == 3 or status == "Cancelled"):
            print("Job " + job_id + " cancelled!")
            break
        elif (status == 4 or status == "Finished"):
            print("Job " + job_id + " finished!")

            processResults(result)
            break
        time.sleep(1) # wait one second
    return

invokeBatchExecutionService()

EDIT 2: Thanks to jon, the problem above has been resolved and the csv is now uploaded to blob storage.

But now I am getting an HTTPError when submitting the job at line 130:

   raise HTTPError(req.full_url, code, msg, hdrs, fp)
HTTPError: Bad Request
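(To see what the service actually objects to, I am now catching the error and printing its body, roughly like this:)

    try:
        response = urllib.request.urlopen(req)
    except urllib.error.HTTPError as error:
        # A 400 usually comes with a body explaining what was malformed
        print("Status: " + str(error.code))
        print(error.read())
        raise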

I think the code they provided is probably quite old at this point.

The latest version of azure-storage-blob is 1.3, while the pip warnings show you ended up with azure-storage-common 1.1.0; the newer blob package passes a token_credential argument that the older common package does not understand, which is exactly the TypeError you are seeing. So perhaps pip install azure-storage-blob --upgrade (which should also pull in a matching azure-storage-common), or simply uninstalling and reinstalling, will help.
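(A quick way to double-check that the upgrade took, assuming pip is on your PATH:)

pip show azure-storage-blob azure-storage-common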

Once you are on the latest version, try using the create_blob_from_text method to load the file into your storage container.

from azure.storage.blob import BlockBlobService

blobService = BlockBlobService(account_name="accountName", account_key="accountKey")

# csv_file holds the text content you want to upload
blobService.create_blob_from_text("containerName", "fileName", csv_file)
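(If you would rather upload straight from a file on disk, there is also create_blob_from_path; a minimal sketch with placeholder names:)

from azure.storage.blob import BlockBlobService

blobService = BlockBlobService(account_name="accountName", account_key="accountKey")

# Reads the local file and uploads its contents as a block blob
blobService.create_blob_from_path("containerName", "fileName.csv", r"C:\path\to\file.csv")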

Hopefully this helps steer you down the right path, but if not, we can work through it. :)