Using regex to extract information from a large SFrame or dataframe without using a loop
I have the following code, where I use a loop to extract some information and then use that information to build a new matrix. However, because I am using a loop, this code takes a very long time to finish.
I would like to know whether there is a better way to do this with GraphLab's SFrame or a pandas DataFrame. Thanks for your help!
import regex          # third-party `regex` module (the stdlib `re` would also work)
import graphlab as gl

# This is the regex pattern
pattern_topic_entry_read = r"\d{15}/discussion_topics/(?P<topic>\d{9})/entries/(?P<entry>\d{9})/read"

# Using the pattern, I filter my records
requests_topic_entry_read = requests[requests['url'].apply(
    lambda x: regex.match(pattern_topic_entry_read, x) is not None)]

# Then, for each record in the filtered set,
# I need to extract topic and entry info using match.group
for request in requests_topic_entry_read:
    for match in regex.finditer(pattern_topic_entry_read, request['url']):
        topic, entry = match.group('topic'), match.group('entry')
        # Then, I need to create a new SFrame (or dataframe, or anything suitable)
        newRow = gl.SFrame({'user_id': [request['user_id']],
                            'url': [request['url']],
                            'topic': [topic], 'entry': [entry]})
        # And append it to my existing SFrame (or dataframe)
        entry_read_matrix = entry_read_matrix.append(newRow)
Some sample data:
user_id | url
1000 | /123456832960900/discussion_topics/770000832912345/read
1001 | /123456832960900/discussion_topics/770000832923456/view?per_page=832945307
1002 | /123456832960900/discussion_topics/770000834562343/entries/832350330/read
1003 | /123456832960900/discussion_topics/770000534344444/entries/832350367/read
I would like to obtain this:
user_id | topic | entry
1002 | 770000834562343 | 832350330
1003 | 770000534344444 | 832350367
Here, let me reproduce it:
>>> import pandas as pd
>>> df = pd.DataFrame(columns=["user_id","url"])
>>> df.user_id = [1000,1001,1002,1003]
>>> df.url = ['/123456832960900/discussion_topics/770000832912345/read', '/123456832960900/discussion_topics/770000832923456/view?per_page=832945307', '/123456832960900/discussion_topics/770000834562343/entries/832350330/read','/123456832960900/discussion_topics/770000534344444/entries/832350367/read']
>>> df["entry"] = df.url.apply(lambda x: x.split("/")[-2] if "entries" in x.split("/") else "---")
>>> df["topic"] = df.url.apply(lambda x: x.split("/")[-4] if "entries" in x.split("/") else "---")
>>> df[df.entry!="---"]
gives you the DataFrame you want.
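To match the requested layout exactly (only user_id, topic and entry), the filtered frame can be narrowed to those three columns; a small follow-up in the same session, assuming the df built above:
>>> df[df.entry != "---"][["user_id", "topic", "entry"]]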
Pandas' Series has string functions. For example, with your data in df:
import io, re, pandas as pd   # assumes `data` holds the sample table above as a single string

pattern = re.compile(r'.*/discussion_topics/(?P<topic>\d+)(?:/entries/(?P<entry>\d+))?')
df = pd.read_table(io.StringIO(data), sep=r'\s*\|\s*', index_col='user_id')
df.url.str.extract(pattern, expand=True)
yields
topic entry
user_id
1000 770000832912345 NaN
1001 770000832923456 NaN
1002 770000834562343 832350330
1003 770000534344444 832350367
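If only the rows that actually contain an entry are wanted (as in the desired output above), the extraction can be followed by dropna. A minimal self-contained sketch, assuming the sample table is built directly as a DataFrame rather than parsed from a data string:
import pandas as pd

# Sample data from the question, built directly as a DataFrame
df = pd.DataFrame({
    'user_id': [1000, 1001, 1002, 1003],
    'url': ['/123456832960900/discussion_topics/770000832912345/read',
            '/123456832960900/discussion_topics/770000832923456/view?per_page=832945307',
            '/123456832960900/discussion_topics/770000834562343/entries/832350330/read',
            '/123456832960900/discussion_topics/770000534344444/entries/832350367/read']})

# Vectorized extraction: rows without an /entries/ segment get NaN for 'entry'
extracted = df['url'].str.extract(
    r'.*/discussion_topics/(?P<topic>\d+)(?:/entries/(?P<entry>\d+))?', expand=True)

# Keep user_id next to the captured groups and drop rows with no entry
result = df[['user_id']].join(extracted).dropna(subset=['entry'])
print(result)
This avoids the per-row append entirely: the regex runs once over the whole column, and the final frame contains only user_id 1002 and 1003 with their topic and entry values.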