Remove everything after .com python

I have a file, urls.tmp, that contains these 3 URLs:

https://site1.com.br/wp-content/uploads/2020/06/?SD
https://site2.com.br/wp-content/uploads/tp-datademo/home-4/data/tp-hotel-booking/?SD
https://site3.com.br/wp-content/uploads/revslider/hotel-home/?MD

I want to remove everything after "com.br/" in each of them.

I tried this code:

import sys
import urllib.parse

# open the file
sys.stdout = open("urls.tmp", "w")

# start remove
for i in "urls.tmp":
    url_parts = urllib.parse.urlparse(i)
    result = '{uri.scheme}://{uri.netloc}/'.format(uri=url_parts)
    print(result) #overwrite the file

# close the file
sys.stdout.close()

But the output gives me this weird thing:

:///
:///
:///
:///
:///
:///
:///
:///

I am a beginner, what am I doing wrong?

See the answers below to solve your problem.

You can use the string's split method, like:

url = r"https://site1.com.br/wp-content/uploads/2020/06/?SD"

split_by = "com.br/"

new_url = url.split(split_by)[0] + split_by
# this gives you the part before <split_by> and then we can attach it again
new_url == r"https://site1.com.br/"  # True
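
To apply the same idea to the whole urls.tmp file, here is a minimal sketch (assuming one URL per line and that every line contains "com.br/"):

split_by = "com.br/"

# read the URLs, keep only the part up to and including "com.br/",
# then write the shortened URLs back to the same file
with open("urls.tmp") as f:
    urls = [line.strip() for line in f if line.strip()]

shortened = [u.split(split_by)[0] + split_by for u in urls]

with open("urls.tmp", "w") as f:
    f.write("\n".join(shortened) + "\n")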

If you want to add some extra checks, you can look into regular expressions.
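
For example, a minimal regex sketch (the pattern is my own suggestion, adjust it to your domains):

import re

# keep everything up to and including "com.br/" and drop the rest;
# the non-greedy ".*?" stops at the first occurrence of "com.br/"
pattern = re.compile(r"^(.*?com\.br/)")

url = "https://site1.com.br/wp-content/uploads/2020/06/?SD"
match = pattern.match(url)
if match:
    print(match.group(1))  # https://site1.com.br/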

Something you did not ask about, but that might be helpful for a beginner: I recommend using

with open("urls.tmp", "w") as f:
    # do something with f

or

import pathlib

urls = pathlib.Path("urls.tmp").read_text()
# which gives you all lines in a single string

over plain open. If you want to know more, I suggest looking into context managers.

Also, f-strings (since Python 3.6) are, in my opinion, easier to read than "{}".format.
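
For example, both of these build the same string, but the f-string shows the variables in place:

scheme, netloc = "https", "site1.com.br"

old_style = "{}://{}/".format(scheme, netloc)
new_style = f"{scheme}://{netloc}/"

assert old_style == new_style == "https://site1.com.br/"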

You can also go with the string's find() method:

urllist = [
    'https://site1.com.br/wp-content/uploads/2020/06/?SD',
    'https://site2.com.br/wp-content/uploads/tp-datademo/home-4/data/tp-hotel-booking/?SD',
    'https://site3.com.br/wp-content/uploads/revslider/hotel-home/?MD']

newlist = []
breaktext = 'com.br/'
for item in urllist:
    # find() returns the index where 'com.br/' starts;
    # the slice keeps everything up to and including it
    position = item.find(breaktext)
    newlist.append(item[:position + len(breaktext)])

print(newlist)
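
If the shortened URLs should also replace the contents of urls.tmp, a small follow-up sketch (reusing newlist from above) could be:

# write the shortened URLs back to the file, one per line
with open("urls.tmp", "w") as f:
    for url in newlist:
        f.write(url + "\n")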

You are iterating over the string "urls.tmp" itself, but you want to iterate line by line over an opened file object.

So try this:

with open("urls.tmp", "r") as urls_file:
    for line in urls_file:
        url_parts = urllib.parse.urlparse(line)
        result = "{uri.scheme}://{uri.netloc}/".format(uri=url_parts)
        print(result)

Edit: the author updated the original question to mention that the source file should be rewritten with the processed URLs, so here is an example:

import urllib.parse

new_urls = []

with open("urls.tmp", "r") as urls_file:
    old_urls = urls_file.readlines()

for line in old_urls:
    url_parts = urllib.parse.urlparse(line)
    proc_url = "{uri.scheme}://{uri.netloc}/\n".format(uri=url_parts)
    new_urls.append(proc_url)

with open("urls.tmp", "w") as urls_file:
    urls_file.writelines(new_urls)
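
With the three URLs from the question, urls.tmp should afterwards contain:

https://site1.com.br/
https://site2.com.br/
https://site3.com.br/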