Anyone who spends time on Zhihu knows the site is full of good-looking wallpapers. In this post we'll scrape them all.
First, import the libraries we need and construct the request headers:

import os
from urllib import request   # for urlretrieve
import requests              # for calling the answers API
from lxml import html        # for XPath matching

header = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36',
    'cookie': ''  # fill in your own cookie value
}
Create the download folder:

path_file = 'F:/知乎壁纸/img'
if not os.path.exists(path_file):
    os.makedirs(path_file)  # create the folder on the first run
else:
    print('The folder already exists')
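On Python 3 the existence check can be folded into a single call with `exist_ok=True`, which also avoids a race between the check and the creation. A minimal sketch (using a hypothetical temporary path instead of the F: drive above):

```python
import os
import tempfile

# create the target folder, tolerating the case where it already exists
target = os.path.join(tempfile.gettempdir(), 'zhihu_wallpapers', 'img')
os.makedirs(target, exist_ok=True)  # no error if the folder is already there
os.makedirs(target, exist_ok=True)  # calling it again is harmless
print(os.path.isdir(target))
```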
Construct the URL:

k = 0
i = 3
while True:
    # note: the question ID appears to have been omitted here;
    # it belongs between 'questions/' and '/answers'
    url = 'https://www.zhihu.com/api/v4/questions//answers?include=data[*].' \
          'is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation' \
          '_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest' \
          '_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cvoteup_' \
          'count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated' \
          '_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Crelationship.' \
          'is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cis_labeled%3Bdata[*].' \
          'mark_infos[*].url%3Bdata[*].author.follower_count%2Cbadge[*].' \
          'topics&limit=5&offset={}&platform=desktop&sort_by=default'.format(str(i))
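The API pages through answers with `limit`/`offset` query parameters, so each request fetches `limit` answers starting at `offset`. A small helper (the name `build_url` is hypothetical, and the long `include` parameter is abbreviated to just `content`) makes the stepping explicit:

```python
# hypothetical helper: build the paginated answers URL for a given offset
# (the question ID slot between 'questions/' and '/answers' is left empty,
# matching the URL template above)
API = ('https://www.zhihu.com/api/v4/questions//answers'
       '?include=data%5B*%5D.content&limit={limit}&offset={offset}'
       '&platform=desktop&sort_by=default')

def build_url(offset, limit=5):
    return API.format(limit=limit, offset=offset)

# with limit=5, successive pages start at offsets 3, 8, 13, ...
urls = [build_url(off) for off in range(3, 18, 5)]
print(urls)
```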
Next, match the image links inside the answers and save them to the folder:

    try:
        # the original omits the fetch step; a typical approach is to request
        # the JSON and join the HTML content of every answer on this page
        response = requests.get(url, headers=header).json()
        pic_link_codes = ''.join(item['content'] for item in response['data'])
        etree_pic = html.etree
        codes = etree_pic.HTML(pic_link_codes)
        link = codes.xpath("//figure/noscript/img/@src")  # match all image links with XPath
        for lin in link:  # iterate over the links
            file_name_path = str(k) + '.jpg'
            request.urlretrieve(lin, filename=path_file + os.sep + file_name_path)  # save to the local folder
            k = k + 1
            print('Saving wallpaper No. ' + str(k))
            print(lin)
    except Exception as e:
        print(e)
        print('The requested content does not exist')
        break  # stop once no more answers come back
    else:
        i = i + 5  # advance the offset to the next page (limit=5)
        continue
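The XPath step can be checked offline without touching the Zhihu API. The sketch below swaps in the standard library's `xml.etree.ElementTree` instead of lxml, and runs the same `figure/noscript/img` match against a made-up fragment shaped like Zhihu's answer HTML (the URLs are sample values, not real answer content):

```python
import xml.etree.ElementTree as ET

# made-up fragment mimicking the <figure>/<noscript>/<img> nesting in answers
sample = (
    '<div>'
    '<figure><noscript><img src="https://pic1.zhimg.com/a.jpg"/></noscript></figure>'
    '<figure><noscript><img src="https://pic2.zhimg.com/b.jpg"/></noscript></figure>'
    '</div>'
)
root = ET.fromstring(sample)
# same path as the lxml XPath above, restricted to ElementTree's subset
links = [img.get('src') for img in root.findall('.//figure/noscript/img')]
print(links)
```

Note that ElementTree only supports a subset of XPath, which is enough for this structural match; the real scraper keeps lxml because answer HTML is rarely well-formed XML.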
Open the folder and you'll find all the wallpapers you wanted waiting inside.