
Crawler Series --- Multithreaded Crawling Examples

2020-01-01 20:54:49  Views: 257  Source: Internet



1. Crawling images from Chinaz

# Crawl all the classical-beauty images from
# http://sc.chinaz.com/tupian/gudianmeinvtupian.html
import os
import time
import random
import requests
from lxml import etree
from multiprocessing.dummy import Pool  # thread pool with the Pool API

# Build the URL list for every listing page (page 1 has no numeric suffix)
url = 'http://sc.chinaz.com/tupian/gudianmeinvtupian.html'
page_url_list = [f'http://sc.chinaz.com/tupian/gudianmeinvtupian_{i}.html' for i in range(2, 7)]
page_url_list.insert(0, url)

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.20 Safari/537.36',
}

pig_url_list = []
def get_pig_url(url):
    '''Collect the image URLs found on one listing page.'''
    response = requests.get(url=url, headers=headers)
    # Parse the page with XPath
    tree = etree.HTML(response.content.decode())
    div_list = tree.xpath('//div[@id="container"]/div')
    for div in div_list:
        # The site lazy-loads images, so the real URL sits in @src2, not @src
        url = div.xpath('.//img/@src2')[0]
        pig_url_list.append(url)

def download(url):
    '''Download the raw image bytes.'''
    return requests.get(url=url, headers=headers).content

def save_pig(data):
    '''Save one image to disk.'''
    # Random names can collide across threads; this should be improved
    name = str(random.randrange(0, 1000000)) + '.jpg'
    path = 'zhanzhangpig/' + name
    with open(path, 'wb') as f:
        f.write(data)

if not os.path.exists('zhanzhangpig'):
    os.makedirs('zhanzhangpig')

# Run the three stages through a thread pool
print('Multithreaded crawl started')
start_time = time.time()
pool = Pool(8)
pool.map(get_pig_url, page_url_list)          # collect image URLs
data_list = pool.map(download, pig_url_list)  # download each image
pool.map(save_pig, data_list)                 # write images to disk
# Close the pool and wait for all workers to finish
pool.close()
pool.join()
end_time = time.time()
print('Multithreaded crawl finished')
print('Elapsed:', end_time - start_time)
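The comment in `save_pig` notes that the random filename scheme needs improvement: two threads can draw the same random number and overwrite each other's file. One possible fix is a small helper (the name `unique_name` is hypothetical, not part of the original script) that derives the filename from the image URL and falls back to a UUID when the URL has no usable basename:

```python
import os
import uuid
from urllib.parse import urlparse

def unique_name(url):
    """Derive a filename from the image URL; fall back to a
    random UUID when the URL path has no usable basename."""
    base = os.path.basename(urlparse(url).path)
    return base if base else uuid.uuid4().hex + '.jpg'

print(unique_name('http://sc.chinaz.com/img/abc.jpg'))  # abc.jpg
```

Because the name now comes from the URL, images stay identifiable and re-running the crawler overwrites rather than duplicates files. Note this changes `save_pig` to need the URL, so the download stage would have to pass `(url, data)` pairs along.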

 

2. Crawling images from Mzitu (https://www.mzitu.com/tag/ugirls/)

import os
import time
import random
import requests
from lxml import etree
from multiprocessing.dummy import Pool  # thread pool with the Pool API

session = requests.session()
if not os.path.exists('meizitu'):
    os.makedirs('meizitu')

# Build the URL list for every listing page (page 1 has no /page/N/ suffix)
url = 'https://www.mzitu.com/tag/ugirls/'
page_url_list = [f'https://www.mzitu.com/tag/ugirls/page/{i}/' for i in range(2, 17)]
page_url_list.insert(0, url)

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36',
    'Upgrade-Insecure-Requests': '1',
    # Anti-crawling measure: the site rejects requests without a Referer header
    'Referer': 'https://www.mzitu.com/tag/ugirls/'
}

pig_url_list = []
def get_pig_url(url):
    '''Collect the image URLs found on one listing page.'''
    response = session.get(url=url, headers=headers)
    # Parse the page with XPath
    tree = etree.HTML(response.content.decode())
    li_list = tree.xpath('//ul[@id="pins"]/li')
    for li in li_list:
        # Thumbnails are lazy-loaded, so the real URL sits in @data-original
        url = li.xpath('.//img/@data-original')[0]
        pig_url_list.append(url)

def download(url):
    '''Download the raw image bytes.'''
    return session.get(url=url, headers=headers).content

def save_pig(data):
    '''Save one image to disk.'''
    # Random names can collide across threads; this should be improved
    name = str(random.randrange(0, 1000000)) + '.jpg'
    path = 'meizitu/' + name
    with open(path, 'wb') as f:
        f.write(data)

print('Multithreaded crawl started')
start_time = time.time()
# Start the thread pool
pool = Pool(10)
# pig_url_list = get_pig_url(url=url)  # single-page crawl
# Multi-page crawl
pool.map(get_pig_url, page_url_list)          # collect image URLs
data_list = pool.map(download, pig_url_list)  # download each image
pool.map(save_pig, data_list)                 # write images to disk
# Close the pool and wait for all workers to finish
pool.close()
pool.join()
end_time = time.time()
print('Multithreaded crawl finished')
print('Elapsed:', end_time - start_time)
# -------- Count the files saved to the folder --------
print(len(os.listdir('./meizitu')))
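Both scripts have the worker threads append into a shared global `pig_url_list`. Appending is safe under the GIL, but a cleaner pattern is to have each worker *return* its results and let `pool.map` collect them in order, then flatten. A minimal sketch (the `parse_page` body is a placeholder standing in for the real XPath parsing):

```python
from multiprocessing.dummy import Pool

def parse_page(url):
    # Placeholder for the real page parse: return the image URLs
    # found on one listing page instead of appending to a global.
    return [f'{url}#img{i}' for i in range(3)]

pages = ['page1', 'page2']
with Pool(4) as pool:
    nested = pool.map(parse_page, pages)       # one list per page, in order
img_urls = [u for sub in nested for u in sub]  # flatten into one list
print(img_urls)
```

This removes the shared mutable state entirely and guarantees a deterministic result order, since `pool.map` preserves the order of its input regardless of which thread finishes first.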

!!! 384 images are waiting for you
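Both scripts print the elapsed time, but never show what the thread pool actually buys you. A minimal benchmark (simulating network I/O with `time.sleep`; the numbers are illustrative, not from the original post) makes the speedup visible:

```python
import time
from multiprocessing.dummy import Pool

def fake_download(_):
    time.sleep(0.1)  # simulate one blocking network request

tasks = list(range(8))

t0 = time.time()
for t in tasks:          # one request after another
    fake_download(t)
serial = time.time() - t0

t0 = time.time()
with Pool(8) as pool:    # all eight requests in flight at once
    pool.map(fake_download, tasks)
pooled = time.time() - t0

print(f'serial {serial:.2f}s, pooled {pooled:.2f}s')
```

Since crawling is I/O-bound, the threads spend their time waiting on the network rather than competing for the GIL, which is why `multiprocessing.dummy.Pool` (a thread pool) helps here even though it does not parallelize CPU work.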

 

Source: https://www.cnblogs.com/abdm-989/p/12129839.html
