Python Crawler: Scraping Douban's Top 250 Movies and Saving the Data to Excel

2021-07-23 17:04:23



from bs4 import BeautifulSoup
import urllib.request,urllib.error
import re
import xlwt




# Regular expressions for extracting each field from one movie item
findLink = re.compile(r'<a href="(.*?)">')
findImgSrc = re.compile(r'<img.*src="(.*?)"',re.S)
findTitle = re.compile(r'<span class="title">(.*)</span>')
findRating = re.compile(r'<span class="rating_num" property="v:average">(.*)</span>')
findJudge = re.compile(r'<span>(\d*)</span>')
findInq = re.compile(r'<span class="inq">(.*?)</span>')
findBd = re.compile(r'<p class="">(.*?)</p>',re.S)
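
As a quick sanity check on these patterns (a minimal sketch using a hand-written snippet, not real Douban markup): `findTitle` captures both the Chinese and the foreign title when an item contains two title spans on separate lines, since `.` does not cross newlines without `re.S`.

```python
import re

findTitle = re.compile(r'<span class="title">(.*)</span>')

# Hand-written snippet mimicking one Douban item; the foreign title sits in a
# second <span class="title"> on its own line, prefixed with "&nbsp;/&nbsp;".
sample = ('<span class="title">肖申克的救赎</span>\n'
          '<span class="title">&nbsp;/&nbsp;The Shawshank Redemption</span>')

titles = re.findall(findTitle, sample)
print(titles[0])                      # 肖申克的救赎
print(titles[1].replace("/", ""))    # the "/" separator is stripped, as in getData below
```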

def main():
    baseurl = "https://movie.douban.com/top250?start="
    datalist = getData(baseurl)
    savepath = ".\\豆瓣电影Top250.xls"
    saveData(datalist, savepath)


def getData(baseurl):
    datalist = []
    for i in range(0, 10):              # fetch all 10 pages, 25 movies per page
        url = baseurl + str(i * 25)
        html = askURL(url)
        soup = BeautifulSoup(html, "html.parser")
        for item in soup.find_all('div', class_='item'):
            item = str(item)
            data = []
            link = re.findall(findLink, item)[0]
            data.append(link)
            imgSrc = re.findall(findImgSrc, item)[0]
            data.append(imgSrc)
            titles = re.findall(findTitle, item)
            if len(titles) == 2:        # the item has both a Chinese and a foreign title
                ctitle = titles[0]
                data.append(ctitle)
                otitle = titles[1].replace("/", "")
                data.append(otitle)
            else:
                data.append(titles[0])
                data.append(' ')
            rating = re.findall(findRating, item)[0]
            data.append(rating)
            judgeNum = re.findall(findJudge, item)
            # findall returns a list; append the count itself, not the list
            data.append(judgeNum[0] if judgeNum else " ")
            inq = re.findall(findInq, item)
            if len(inq) != 0:
                inq = inq[0].replace("。", "")           # drop the trailing Chinese full stop
                data.append(inq)
            else:
                data.append(" ")
            bd = re.findall(findBd, item)[0]
            bd = re.sub(r'<br(\s+)?/>(\s+)?', " ", bd)  # raw string avoids invalid-escape warnings
            bd = re.sub('/', ' ', bd)
            data.append(bd.strip())                     # strip leading/trailing whitespace

            datalist.append(data)                       # add this movie's finished record to datalist




    return datalist




def askURL(url):
    head = {
        # Pretend to be a regular browser so Douban does not reject the request
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36 Edg/91.0.864.67"
    }
    request = urllib.request.Request(url, headers=head)
    html = ''
    try:
        response = urllib.request.urlopen(request)
        html = response.read().decode('utf-8')
    except urllib.error.URLError as e:
        if hasattr(e, 'code'):
            print(e.code)
        if hasattr(e, 'reason'):
            print(e.reason)
    return html


def saveData(datalist, savepath):
    print("save......")
    book = xlwt.Workbook(encoding='utf-8')
    sheet = book.add_sheet('豆瓣电影Top250')
    # Columns: link, image, Chinese title, foreign title, rating, rating count, quote, info
    col = ("电影链接", "图片链接", "影片中文名", "影片外国名", "评分", "评价数", "概况", "相关信息")
    for i in range(0, 8):
        sheet.write(0, i, col[i])       # header row
    for i in range(0, 250):
        print("Row %d" % i)
        data = datalist[i]
        for j in range(0, 8):
            sheet.write(i + 1, j, data[j])
    book.save(savepath)                 # save to the path passed in, not a hard-coded file name



if __name__ == '__main__':
    main()
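
Note that xlwt only writes the legacy .xls format and is no longer actively maintained. If the Excel format itself is not essential, the standard-library csv module is a dependency-free alternative. A minimal sketch (`save_csv` is a hypothetical drop-in replacement for `saveData`, assuming `datalist` has the same eight-column layout):

```python
import csv

def save_csv(datalist, savepath):
    # Same eight columns as the xlwt version:
    # link, image, Chinese title, foreign title, rating, rating count, quote, info
    col = ("电影链接", "图片链接", "影片中文名", "影片外国名",
           "评分", "评价数", "概况", "相关信息")
    # utf-8-sig writes a BOM so Excel detects the encoding when opening the file
    with open(savepath, "w", newline="", encoding="utf-8-sig") as f:
        writer = csv.writer(f)
        writer.writerow(col)
        writer.writerows(datalist)
```

Calling `save_csv(datalist, "豆瓣电影Top250.csv")` in `main` would then replace the `saveData(...)` call.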







Source: https://blog.csdn.net/qq_53481873/article/details/119039501
