
In-Depth Systematic Guide to the New Nginx 1.17: An Essential Course for Developers and Ops

2021-02-16 13:02:44  Views: 321  Source: Internet

Tags: Nginx1.17, ops, get, house, soup, systematic, div, housename, append



Nginx is an indispensable part of web development. As a renowned high-performance static web server and reverse proxy, it is widely used by every major Internet company. This course is not a loose pile of knowledge points: it moves from basic usage to architectural thinking, and from hands-on scenarios to performance tuning, covering the complete Nginx ecosystem. Taking the two classic production scenarios of reverse proxying and load balancing as blueprints, it digs into the details to help you solve real problems in production. Whether you are a developer or an ops engineer, this course will help you master Nginx in a short time and genuinely sharpen your competitive edge, matching what employers are actually looking for.
Target audience
1. Back-end engineers (Java, Go, Python, etc.)
2. Front-end engineers
3. Ops engineers
Prerequisites
Basic Linux commands and HTTP fundamentals

#!/usr/bin/python
# Scrape Lianjia second-hand listings: title, link, neighborhood, layout,
# area and total price per listing, plus district and inner floor area.
from bs4 import BeautifulSoup
import requests

HEADERS = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER'}

def getHouseList(url):
    house = []
    # Fetch the listing page
    res = requests.get(url, headers=HEADERS)
    # Parse the response body
    soup = BeautifulSoup(res.content, 'lxml')
    # Listing titles
    housename_divs = soup.find_all('div', class_='title')
    for housename_div in housename_divs:
        housename_as = housename_div.find_all('a')
        for housename_a in housename_as:
            housename = []
            # Title
            housename.append(housename_a.get_text())
            # Link
            housename.append(housename_a['href'])
            house.append(housename)
    houseinfo_divs = soup.find_all('div', class_='houseInfo')
    for i in range(len(houseinfo_divs)):
        info = houseinfo_divs[i].get_text()
        infos = info.split('|')
        # Neighborhood name
        house[i].append(infos[0])
        # Layout
        house[i].append(infos[1])
        # Area in square metres
        house[i].append(infos[2])
    # Total price
    house_prices = soup.find_all('div', class_='totalPrice')
    for i in range(len(house_prices)):
        # Price
        price = house_prices[i].get_text()
        house[i].append(price)
    return house

# Scrape per-listing details: district and inner floor area
def houseinfo(url):
    res = requests.get(url, headers=HEADERS)
    soup = BeautifulSoup(res.content, 'lxml')
    msg = []
    # District: only the first usable <a> tag is needed
    areainfos = soup.find_all('span', class_='info')
    for areainfo in areainfos:
        area = areainfo.find('a')
        if not area:
            continue
        hrefStr = area['href']
        if hrefStr.startswith('javascript'):
            continue
        msg.append(area.get_text())
        break
    # Sum the per-room areas to get the inner floor area
    infolist = soup.find_all('div', id='infoList')
    num = []
    for info in infolist:
        cols = info.find_all('div', class_='col')
        for i in cols:
            pingmi = i.get_text()
            try:
                a = float(pingmi[:-2])  # strip the trailing unit characters
                num.append(a)
            except ValueError:
                continue
    msg.append(sum(num))
    return msg

# Append one listing's record to a text file
def writeFile(houseinfo):
    with open('d:/房源.txt', 'a', encoding='utf8') as f:
        f.write(houseinfo + '\n')

# Main: walk result pages 1..99
def main():
    for i in range(1, 100):
        print('-----separator', i, '-------')
        if i == 1:
            url = 'https://sjz.lianjia.com/ershoufang/hy1f2f5sf1l3l2l4a2a3a4/'
        else:
            url = 'https://sjz.lianjia.com/ershoufang/pg' + str(i) + 'hy1f2f5sf1l3l2l4a2a3a4/'
        houses = getHouseList(url)
        for house in houses:
            link = house[1]
            if not link.startswith('http'):
                continue
            mianji = houseinfo(link)
            # Append district and inner area to the listing record
            house.extend(mianji)
            print(house)
            info = " ".join(str(x) for x in house)
            writeFile(info)

if __name__ == '__main__':
    main()
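The `houseInfo` splitting step in `getHouseList` can be exercised offline on a sample string. The listing text below is made up for illustration, and the whitespace stripping is an addition for clean fields (the code above keeps the raw segments):

```python
# A made-up houseInfo text in the 'neighborhood | layout | area | ...'
# shape that getHouseList splits on '|'.
sample = '某某小区 | 2室1厅 | 89.5平米 | 南北 | 精装'
infos = [s.strip() for s in sample.split('|')]
neighborhood, layout, area = infos[0], infos[1], infos[2]

# The numeric area can be recovered the same way houseinfo() does it:
# drop the two trailing unit characters and parse the rest as a float.
area_value = float(area[:-2])
```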
The Lianjia query reports 8,849 matching listings, but the pages only expose 31 (listings per page) × 100 (total pages) = 3,100 of them; the rest cannot be reached.
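A common workaround for such a visibility cap (not from the original article) is to split one large query into several narrower sub-queries, each staying under the ~3,100 visible results, and to build per-page URLs for each sub-query the same way `main` does. The `bp0ep100` filter segment below is a hypothetical placeholder, not a documented Lianjia filter code:

```python
PAGE_SIZE = 31    # listings shown per page
MAX_PAGES = 100   # deepest page the site exposes
CAP = PAGE_SIZE * MAX_PAGES  # at most 3100 listings visible per query

def page_urls(base, filters, pages):
    """Build per-page URLs for one filtered sub-query.

    `filters` stands in for a site filter segment (e.g. a price band);
    the real codes are not documented here, so treat it as a placeholder.
    """
    urls = [base + filters + '/']
    for page in range(2, pages + 1):
        urls.append(base + 'pg' + str(page) + filters + '/')
    return urls

urls = page_urls('https://sjz.lianjia.com/ershoufang/', 'bp0ep100', 3)
```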

Version 2:

Fetch the listings of a specific neighborhood and write them to an Excel file.

#!/usr/bin/python
from bs4 import BeautifulSoup
import requests
import xlwt

HEADERS = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER'}

def getHouseList(url):
    house = []
    # Fetch the listing page
    res = requests.get(url, headers=HEADERS)
    # Parse the response body
    soup = BeautifulSoup(res.content, 'html.parser')
    # Listing titles
    housename_divs = soup.find_all('div', class_='title')
    for housename_div in housename_divs:
        housename_as = housename_div.find_all('a')
        for housename_a in housename_as:
            housename = []
            # Title
            housename.append(housename_a.get_text())
            # Link
            housename.append(housename_a.get('href'))
            house.append(housename)
    houseinfo_divs = soup.find_all('div', class_='houseInfo')
    for i in range(len(houseinfo_divs)):
        info = houseinfo_divs[i].get_text()
        infos = info.split('|')
        # Neighborhood name
        house[i].append(infos[0])
        # Layout
        house[i].append(infos[1])
        # Area in square metres
        house[i].append(infos[2])
    # Total price
    house_prices = soup.find_all('div', class_='totalPrice')
    for i in range(len(house_prices)):
        # Price
        price = house_prices[i].get_text()
        house[i].append(price)
    return house

# Scrape per-listing details: district and inner floor area
def houseinfo(url):
    res = requests.get(url, headers=HEADERS)
    soup = BeautifulSoup(res.content, 'html.parser')
    msg = []
    # District: only the first usable <a> tag is needed
    areainfos = soup.find_all('span', class_='info')
    for areainfo in areainfos:
        area = areainfo.find('a')
        if not area:
            continue
        hrefStr = area['href']
        if hrefStr.startswith('javascript'):
            continue
        msg.append(area.get_text())
        break
    # Sum the per-room areas to get the inner floor area
    infolist = soup.find_all('div', id='infoList')
    num = []
    for info in infolist:
        cols = info.find_all('div', class_='col')
        for i in cols:
            pingmi = i.get_text()
            try:
                a = float(pingmi[:-2])  # strip the trailing unit characters
                num.append(a)
            except ValueError:
                continue
    msg.append(sum(num))
    return msg

# Write the listings to an Excel file
def writeExcel(excelPath, houses):
    workbook = xlwt.Workbook()
    # Create the first sheet
    sheet = workbook.add_sheet('git')
    # Header row, in the order the fields are collected above
    row0 = ['Title', 'Link', 'Neighborhood', 'Layout', 'Area', 'Total price', 'District', 'Inner area']
    for i in range(len(row0)):
        sheet.write(0, i, row0[i])
    for i in range(len(houses)):
        house = houses[i]
        print(house)
        for j in range(len(house)):
            sheet.write(i + 1, j, house[j])
    workbook.save(excelPath)

# Main. The original article breaks off at this point; the body below is a
# reconstruction (an assumption) mirroring version 1, scraping one page and
# saving it to a hypothetical output path.
def main():
    url = 'https://sjz.lianjia.com/ershoufang/hy1f2f5sf1l3l2l4a2a3a4/'
    houses = getHouseList(url)
    for house in houses:
        link = house[1]
        if not link.startswith('http'):
            continue
        # Append district and inner area to the listing record
        house.extend(houseinfo(link))
    writeExcel('d:/房源.xls', houses)

if __name__ == '__main__':
    main()
Source: https://blog.51cto.com/15101018/2629461
