Not getting paused ads insights using Facebook Marketing API

I wrote a script that returns a list of ads with their statistics, but apparently I only get insights for active ads, not paused ones. For paused ads I just get the campaign name and its ID!

I tried using filtering as below, but it doesn't work:


first = "https://graph.facebook.com/v3.2/act_105433210/campaigns?filtering=[{'field':'effective_status','operator':'IN','value':['PAUSED']}]&fields=created_time,name,effective_status,insights{spend,impressions,clicks}&access_token=%s"% token

Then I check it with:

import requests
import json

result = requests.get(first)
content_dict = json.loads(result.content)
print(content_dict)

Here is an example of the output I get:

{'data': [{'created_time': '2019-02-15T17:24:29+0100', 'name': '20122301-FB-BOOST-EVENT-CC SDSDSD', 'effective_status': 'PAUSED', 'id': '6118169436761'}

Only the campaign name, no insights! Has anyone retrieved stats/insights for paused ads/campaigns before?

Thanks!

Please check my other post for my Python script: I can't fetch stats for all my facebook campaigns using Python and Facebook Marketing API

You can combine multiple filtering conditions. For example, to filter for paused campaigns whose name contains the string 'name' and that were created after March 1st, you can use:

act_105433210/campaigns?filtering=[{'field':'effective_status','operator':'IN','value':['PAUSED']},{'field':'name','operator':'CONTAIN','value':'name'},{'field':'created_time','operator':'GREATER_THAN','value':'1551444673'}]&fields=created_time,name,effective_status,insights{spend,impressions,clicks}

The timestamp should be an epoch timestamp; in this example:

Epoch timestamp: 1551444673
Human time (GMT): Friday, March 1, 2019 12:51:13 PM
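
If you are building this request from Python, here is a minimal sketch (reusing the question's act_105433210 account, with a placeholder token) that computes the epoch timestamp and lets requests URL-encode the JSON filter:

import calendar
import datetime
import json
import requests

# Midnight on March 1, 2019 (UTC) as an epoch timestamp for the created_time filter
since = calendar.timegm(datetime.datetime(2019, 3, 1).timetuple())

filtering = [
    {'field': 'effective_status', 'operator': 'IN', 'value': ['PAUSED']},
    {'field': 'name', 'operator': 'CONTAIN', 'value': 'name'},
    {'field': 'created_time', 'operator': 'GREATER_THAN', 'value': since},
]

params = {
    'filtering': json.dumps(filtering),  # the API expects a JSON-encoded list
    'fields': 'created_time,name,effective_status,insights{spend,impressions,clicks}',
    'access_token': 'my-token',  # placeholder token
}
result = requests.get('https://graph.facebook.com/v3.2/act_105433210/campaigns', params=params)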

After a few days of digging, I finally came up with a script that extracts 3 years of Facebook ads insights while staying under the Facebook API rate limits.

First, we import the libraries we need:

from facebookads.api import FacebookAdsApi
from facebookads.adobjects.adsinsights import AdsInsights
from facebookads.adobjects.adaccount import AdAccount
from facebookads.adobjects.business import Business
import datetime
import csv
import re 
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from google.colab import files
import time

Note that after extracting the insights, I save them to Google Cloud Storage and then to a BigQuery table.

access_token = 'my-token'
ad_account_id = 'act_id'
app_secret = 'app_s****'
app_id = 'app_id****'
FacebookAdsApi.init(app_id,app_secret, access_token=access_token, api_version='v3.2')
account = AdAccount(ad_account_id)
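
The main loop below also uploads each day's CSV to Google Cloud Storage and loads it into BigQuery, so it assumes bucket, bq_client and dataset client objects already exist. A minimal sketch of that setup (project, bucket and dataset names are placeholders; the bucket name must match the gs:// URI used below):

from google.cloud import bigquery, storage

storage_client = storage.Client(project='my-project')
bucket = storage_client.bucket('my-project')       # GCS bucket for the daily CSVs
bq_client = bigquery.Client(project='my-project')
dataset = bq_client.dataset('my_dataset')          # BigQuery dataset holding the daily tables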

Then, the following helper functions call the API and check how close we actually are to the rate limit:

import logging
import requests as rq

#Function to find the string between two strings or characters
def find_between( s, first, last ):
    try:
        start = s.index( first ) + len( first )
        end = s.index( last, start )
        return s[start:end]
    except ValueError:
        return ""

#Function to check how close you are to the FB Rate Limit
def check_limit():
    check=rq.get('https://graph.facebook.com/v3.1/'+ad_account_id+'/insights?access_token='+access_token)
    usage=float(find_between(check.headers['x-ad-account-usage'],':','}'))
    return usage
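
The x-ad-account-usage header is a small JSON blob along the lines of {"acc_id_util_pct":9.67}, so find_between() simply pulls the number out from between the colon and the closing brace. A quick sanity check before launching a long extraction:

usage = check_limit()
print('Currently at %s%% of the ad account rate limit' % usage)
if usage >= 75:  # same threshold the main loop below uses
    time.sleep(225)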

Now, here is the entire script that you can run to extract data for the past Y days!

Y = 1095  # number of days to go back (e.g. ~3 years)
for x in range(1, Y):

  date_0 = datetime.datetime.now() - datetime.timedelta(days=x )
  date_ = date_0.strftime('%Y-%m-%d')
  date_compact = date_.replace('-', '')
  filename = 'fb_%s.csv'%date_compact
  filelocation = "./"+ filename
    # Open or create new file 
  try:
      csvfile = open(filelocation, 'w+')
  except:
      print ("Cannot open file.")


  # To keep track of rows added to file
  rows = 0

  try:
      # Create file writer
      filewriter = csv.writer(csvfile, delimiter=',')
      filewriter.writerow(['date','ad_name', 'adset_id', 'adset_name', 'campaign_id', 'campaign_name', 'clicks', 'impressions', 'spend'])
  except Exception as err:
      print(err)
  # Pull ad-level insights for this ad account for the given day
  ads = account.get_insights(
      params={'time_range': {'since': date_, 'until': date_}, 'level': 'ad'},
      fields=[AdsInsights.Field.ad_name, AdsInsights.Field.adset_id,
              AdsInsights.Field.adset_name, AdsInsights.Field.campaign_id,
              AdsInsights.Field.campaign_name, AdsInsights.Field.clicks,
              AdsInsights.Field.impressions, AdsInsights.Field.spend])
  for ad in ads:

    # Set default values in case the insight info is empty
    date = date_
    adsetid = ""
    adname = ""
    adsetname = ""
    campaignid = ""
    campaignname = ""
    clicks = ""
    impressions = ""
    spend = ""

    # Set values from insight data
    if ('adset_id' in ad) :
        adsetid = ad[AdsInsights.Field.adset_id]
    if ('ad_name' in ad) :
        adname = ad[AdsInsights.Field.ad_name]
    if ('adset_name' in ad) :
        adsetname = ad[AdsInsights.Field.adset_name]
    if ('campaign_id' in ad) :
        campaignid = ad[AdsInsights.Field.campaign_id]
    if ('campaign_name' in ad) :
        campaignname = ad[AdsInsights.Field.campaign_name]
    if ('clicks' in ad) : # This is stored strangely, takes a few steps to break through the layers
        clicks = ad[AdsInsights.Field.clicks]
    if ('impressions' in ad) : # This is stored strangely, takes a few steps to break through the layers
        impressions = ad[AdsInsights.Field.impressions]
    if ('spend' in ad) :
        spend = ad[AdsInsights.Field.spend]

    # Write all ad info to the file, and increment the number of rows that will display
    filewriter.writerow([date_, adname, adsetid, adsetname, campaignid, campaignname, clicks, impressions, spend])
    rows += 1

  csvfile.close()

# Print report
  print (str(rows) + " rows added to the file " + filename)
  print(check_limit(), 'reached of rate limit')
## write to GCS and BQ
  blob = bucket.blob('fb_2/fb_%s.csv'%date_compact)
  blob.upload_from_filename(filelocation)
  load_job_config = bigquery.LoadJobConfig()
  table_name = '0_fb_ad_stats_%s' % date_compact
  load_job_config.write_disposition = 'WRITE_TRUNCATE'
  load_job_config.skip_leading_rows = 1

  # The source format defaults to CSV, so the line below is optional.
  load_job_config.source_format = bigquery.SourceFormat.CSV
  load_job_config.field_delimiter = ','
  load_job_config.autodetect = True
  uri = 'gs://my-project/fb_2/fb_%s.csv'%date_compact
  load_job = bq_client.load_table_from_uri(
    uri,
    dataset.table(table_name),
    job_config=load_job_config)  # API request
  print('Starting job {}'.format(load_job.job_id))
  load_job.result()  # Waits for table load to complete.
  print('Job finished.')

  if (check_limit()>=75):
    print('75% Rate Limit Reached. Cooling down for 225 seconds.')
    logging.debug('75% Rate Limit Reached. Cooling down for 225 seconds.')
    time.sleep(225)

This works perfectly well, but be aware that if you plan to extract 3 years of data, the script will take a long time to run!

I would like to thank LucyTurtle and Ashish Baid, whose scripts helped me in my work!

If you need more details, or need to extract one day of data for several different ad accounts, please refer to this post: