一、Crawling with requests

Official documentation: https://docs.scrapy.org/en/latest/topics/architecture.html

一 Introduction

    Scrapy is an open-source, collaborative framework originally designed for page scraping (more precisely, web scraping). It lets you extract the data you need from websites in a fast, simple and extensible way. Today Scrapy's uses go well beyond that: it is applied to data mining, monitoring and automated testing, to consuming data returned by APIs (such as Amazon Associates Web Services), and as a general-purpose web crawler.

    Scrapy is built on the Twisted framework, a popular event-driven Python networking framework, so Scrapy uses non-blocking (i.e. asynchronous) code to achieve concurrency. The overall architecture is roughly as follows:

(figure: Scrapy architecture diagram)

The data flow in Scrapy is controlled by the execution engine, and goes like this:

  1. The Engine gets the initial Requests to crawl from the Spider.
  2. The Engine schedules the Requests in the Scheduler and asks for the next Requests to crawl.
  3. The Scheduler returns the next Requests to the Engine.
  4. The Engine sends the Requests to the Downloader, passing through the Downloader Middlewares (see process_request()).
  5. Once the page finishes downloading the Downloader generates a Response (with that page) and sends it to the Engine, passing through the Downloader Middlewares (see process_response()).
  6. The Engine receives the Response from the Downloader and sends it to the Spider for processing, passing through the Spider Middleware (see process_spider_input()).
  7. The Spider processes the Response and returns scraped items and new Requests (to follow) to the Engine, passing through the Spider Middleware (see process_spider_output()).
  8. The Engine sends processed items to Item Pipelines, then sends processed Requests to the Scheduler and asks for possible next Requests to crawl.
  9. The process repeats (from step 1) until there are no more requests from the Scheduler.
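To make the flow concrete, here is a minimal spider sketch (not part of the original article): the start URL becomes the initial Request (steps 1-4), parse() receives the downloaded Response (step 6), and the dicts and follow-up Requests it yields feed steps 7-9. The quotes.toscrape.com URL and the CSS classes are assumptions chosen only for illustration; response.follow requires Scrapy 1.4+.

import scrapy


class QuotesFlowSpider(scrapy.Spider):
    name = 'quotes_flow_demo'
    start_urls = ['http://quotes.toscrape.com/']  # step 1: initial Requests come from here

    def parse(self, response):
        # step 6: the Engine hands the downloaded Response to the Spider
        for quote in response.css('div.quote'):
            # steps 7/8: scraped items travel through the Engine to the Item Pipelines
            yield {'text': quote.css('span.text::text').extract_first()}

        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page:
            # steps 7/2: new Requests go back through the Engine to the Scheduler
            yield response.follow(next_page, callback=self.parse)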

 

Components:

  1. Engine (ENGINE)
    The engine controls the data flow between all components of the system and triggers events when certain actions occur. See the data flow section above for details.

  2. Scheduler (SCHEDULER)
    Accepts requests from the engine and pushes them onto a queue, returning them when the engine asks again. Think of it as a priority queue of URLs: it decides which URL to crawl next and removes duplicate URLs.

  3. Downloader (DOWNLOADER)
    Downloads page content and returns it to the ENGINE. The downloader is built on Twisted's efficient asynchronous model.
  4. Spiders (SPIDERS)
    SPIDERS are classes written by the developer to parse responses, extract items, or issue new requests.
  5. Item Pipelines (ITEM PIPELINES)
    Process items after they have been extracted; this mainly covers cleaning, validation and persistence (e.g. saving to a database).
  6. Downloader Middlewares
    Sit between the Scrapy engine and the downloader; they mainly process requests passed from the ENGINE to the DOWNLOADER and responses passed from the DOWNLOADER back to the ENGINE. You can use this middleware to:
    1. process a request just before it is sent to the Downloader (i.e. right before Scrapy sends the request to the website);
    2. change received response before passing it to a spider;
    3. send a new Request instead of passing received response to a spider;
    4. pass response to a spider without fetching a web page;
    5. silently drop some requests.
  7. Spider Middlewares
    Sit between the ENGINE and the SPIDERS; their main job is processing the SPIDERS' input (responses) and output (requests).
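As a concrete illustration of the downloader-middleware hooks listed above, here is a minimal sketch (not from the original article; the class name and header are arbitrary). It would be enabled through the DOWNLOADER_MIDDLEWARES setting in settings.py.

class DemoDownloaderMiddleware(object):
    def process_request(self, request, spider):
        # called before the request reaches the DOWNLOADER;
        # returning None lets it continue through the remaining middlewares
        request.headers.setdefault('X-Demo', '1')
        return None

    def process_response(self, request, response, spider):
        # called before the response is passed back to the ENGINE / SPIDERS
        spider.logger.debug('%s returned %s', request.url, response.status)
        return response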

Official documentation: https://docs.scrapy.org/en/latest/topics/architecture.html

    a. Introduction: requests lets you simulate browser requests; compared with the older urllib, the requests module's API is much simpler (it is essentially a wrapper around urllib3).

Performance notes

When writing crawlers, most of the cost is in I/O: requesting URLs from a single process and a single thread inevitably blocks on each request and slows the whole run down.


import requests

def fetch_async(url):
    response = requests.get(url)
    return response


url_list = ['http://www.github.com', 'http://www.bing.com']

for url in url_list:
    fetch_async(url)

1. Serial (synchronous) execution


from concurrent.futures import ThreadPoolExecutor
import requests


def fetch_async(url):
    response = requests.get(url)
    return response


url_list = ['http://www.github.com', 'http://www.bing.com']
pool = ThreadPoolExecutor(5)
for url in url_list:
    pool.submit(fetch_async, url)
pool.shutdown(wait=True)

2. Multi-threaded execution


from concurrent.futures import ThreadPoolExecutor
import requests

def fetch_async(url):
    response = requests.get(url)
    return response


def callback(future):
    print(future.result())


url_list = ['http://www.github.com', 'http://www.bing.com']
pool = ThreadPoolExecutor(5)
for url in url_list:
    v = pool.submit(fetch_async, url)
    v.add_done_callback(callback)
pool.shutdown(wait=True)

2. Multi-threaded execution + callback


from concurrent.futures import ProcessPoolExecutor
import requests

def fetch_async(url):
    response = requests.get(url)
    return response


url_list = ['http://www.github.com', 'http://www.bing.com']
pool = ProcessPoolExecutor(5)
for url in url_list:
    pool.submit(fetch_async, url)
pool.shutdown(wait=True)

3. Multi-process execution


from concurrent.futures import ProcessPoolExecutor
import requests


def fetch_async(url):
    response = requests.get(url)
    return response


def callback(future):
    print(future.result())


url_list = ['http://www.github.com', 'http://www.bing.com']
pool = ProcessPoolExecutor(5)
for url in url_list:
    v = pool.submit(fetch_async, url)
    v.add_done_callback(callback)
pool.shutdown(wait=True)

3. Multi-process execution + callback

All of the above improve request throughput, but the drawback of multi-threading and multi-processing is that threads and processes are wasted while blocked on I/O. Asynchronous I/O is therefore the preferred approach:


import asyncio


@asyncio.coroutine
def func1():
    print('before...func1......')
    yield from asyncio.sleep(5)
    print('end...func1......')


tasks = [func1(), func1()]

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(*tasks))
loop.close()

1. asyncio example 1


import asyncio


@asyncio.coroutine
def fetch_async(host, url='/'):
    print(host, url)
    reader, writer = yield from asyncio.open_connection(host, 80)

    request_header_content = """GET %s HTTP/1.0\r\nHost: %s\r\n\r\n""" % (url, host,)
    request_header_content = bytes(request_header_content, encoding='utf-8')

    writer.write(request_header_content)
    yield from writer.drain()
    text = yield from reader.read()
    print(host, url, text)
    writer.close()

tasks = [
    fetch_async('www.cnblogs.com', '/wupeiqi/'),
    fetch_async('dig.chouti.com', '/pic/show?nid=4073644713430508&lid=10273091')
]

loop = asyncio.get_event_loop()
results = loop.run_until_complete(asyncio.gather(*tasks))
loop.close()

1. asyncio example 2


import aiohttp
import asyncio


@asyncio.coroutine
def fetch_async(url):
    print(url)
    response = yield from aiohttp.request('GET', url)
    # data = yield from response.read()
    # print(url, data)
    print(url, response)
    response.close()


tasks = [fetch_async('http://www.google.com/'), fetch_async('http://www.chouti.com/')]

event_loop = asyncio.get_event_loop()
results = event_loop.run_until_complete(asyncio.gather(*tasks))
event_loop.close()

2.asyncio + aiohttp


import asyncio
import requests


@asyncio.coroutine
def fetch_async(func, *args):
    loop = asyncio.get_event_loop()
    future = loop.run_in_executor(None, func, *args)
    response = yield from future
    print(response.url, response.content)


tasks = [
    fetch_async(requests.get, 'http://www.cnblogs.com/wupeiqi/'),
    fetch_async(requests.get, 'http://dig.chouti.com/pic/show?nid=4073644713430508&lid=10273091')
]

loop = asyncio.get_event_loop()
results = loop.run_until_complete(asyncio.gather(*tasks))
loop.close()

3.asyncio + requests


import gevent

import requests
from gevent import monkey

monkey.patch_all()


def fetch_async(method, url, req_kwargs):
    print(method, url, req_kwargs)
    response = requests.request(method=method, url=url, **req_kwargs)
    print(response.url, response.content)

# ##### 发送请求 #####
gevent.joinall([
    gevent.spawn(fetch_async, method='get', url='https://www.python.org/', req_kwargs={}),
    gevent.spawn(fetch_async, method='get', url='https://www.yahoo.com/', req_kwargs={}),
    gevent.spawn(fetch_async, method='get', url='https://github.com/', req_kwargs={}),
])

# ##### 发送请求(协程池控制最大协程数量) #####
# from gevent.pool import Pool
# pool = Pool(None)
# gevent.joinall([
#     pool.spawn(fetch_async, method='get', url='https://www.python.org/', req_kwargs={}),
#     pool.spawn(fetch_async, method='get', url='https://www.yahoo.com/', req_kwargs={}),
#     pool.spawn(fetch_async, method='get', url='https://www.github.com/', req_kwargs={}),
# ])

4.gevent + requests


import grequests


request_list = [
    grequests.get('http://httpbin.org/delay/1', timeout=0.001),
    grequests.get('http://fakedomain/'),
    grequests.get('http://httpbin.org/status/500')
]


# ##### 执行并获取响应列表 #####
# response_list = grequests.map(request_list)
# print(response_list)


# ##### 执行并获取响应列表(处理异常) #####
# def exception_handler(request, exception):
# print(request,exception)
#     print("Request failed")

# response_list = grequests.map(request_list, exception_handler=exception_handler)
# print(response_list)

5.grequests


from twisted.web.client import getPage, defer
from twisted.internet import reactor


def all_done(arg):
    reactor.stop()


def callback(contents):
    print(contents)


deferred_list = []

url_list = ['http://www.bing.com', 'http://www.baidu.com', ]
for url in url_list:
    deferred = getPage(bytes(url, encoding='utf8'))
    deferred.addCallback(callback)
    deferred_list.append(deferred)

dlist = defer.DeferredList(deferred_list)
dlist.addBoth(all_done)

reactor.run()

6. Twisted example


from tornado.httpclient import AsyncHTTPClient
from tornado.httpclient import HTTPRequest
from tornado import ioloop


def handle_response(response):
    """
    处理返回值内容(需要维护计数器,来停止IO循环),调用 ioloop.IOLoop.current().stop()
    :param response: 
    :return: 
    """
    if response.error:
        print("Error:", response.error)
    else:
        print(response.body)


def func():
    url_list = [
        'http://www.baidu.com',
        'http://www.bing.com',
    ]
    for url in url_list:
        print(url)
        http_client = AsyncHTTPClient()
        http_client.fetch(HTTPRequest(url), handle_response)


ioloop.IOLoop.current().add_callback(func)
ioloop.IOLoop.current().start()

7.Tornado


from twisted.internet import reactor
from twisted.web.client import getPage
import urllib.parse


def one_done(arg):
    print(arg)
    reactor.stop()

post_data = urllib.parse.urlencode({'check_data': 'adf'})
post_data = bytes(post_data, encoding='utf8')
headers = {b'Content-Type': b'application/x-www-form-urlencoded'}
response = getPage(bytes('http://dig.chouti.com/login', encoding='utf8'),
                   method=bytes('POST', encoding='utf8'),
                   postdata=post_data,
                   cookies={},
                   headers=headers)
response.addBoth(one_done)

reactor.run()

More on Twisted

All of the above are async I/O request modules, either built into Python or provided by third parties; they are easy to use and greatly improve efficiency. The essence of async I/O requests is [non-blocking sockets] + [I/O multiplexing]:


import select
import socket
import time


class AsyncTimeoutException(TimeoutError):
    """
    请求超时异常类
    """

    def __init__(self, msg):
        self.msg = msg
        super(AsyncTimeoutException, self).__init__(msg)


class HttpContext(object):
    """封装请求和相应的基本数据"""

    def __init__(self, sock, host, port, method, url, data, callback, timeout=5):
        """
        sock: 请求的客户端socket对象
        host: 请求的主机名
        port: 请求的端口
        method: 请求方式
        url: 请求的URL
        data: 请求时请求体中的数据
        callback: 请求完成后的回调函数
        timeout: 请求的超时时间
        """
        self.sock = sock
        self.callback = callback
        self.host = host
        self.port = port
        self.method = method
        self.url = url
        self.data = data

        self.timeout = timeout

        self.__start_time = time.time()
        self.__buffer = []

    def is_timeout(self):
        """当前请求是否已经超时"""
        current_time = time.time()
        if (self.__start_time + self.timeout) < current_time:
            return True

    def fileno(self):
        """请求sockect对象的文件描述符,用于select监听"""
        return self.sock.fileno()

    def write(self, data):
        """在buffer中写入响应内容"""
        self.__buffer.append(data)

    def finish(self, exc=None):
        """在buffer中写入响应内容完成,执行请求的回调函数"""
        if not exc:
            response = b''.join(self.__buffer)
            self.callback(self, response, exc)
        else:
            self.callback(self, None, exc)

    def send_request_data(self):
        content = """%s %s HTTP/1.0\r\nHost: %s\r\n\r\n%s""" % (
            self.method.upper(), self.url, self.host, self.data,)

        return content.encode(encoding='utf8')


class AsyncRequest(object):
    def __init__(self):
        self.fds = []
        self.connections = []

    def add_request(self, host, port, method, url, data, callback, timeout):
        """创建一个要请求"""
        client = socket.socket()
        client.setblocking(False)
        try:
            client.connect((host, port))
        except BlockingIOError as e:
            pass
            # print('已经向远程发送连接的请求')
        req = HttpContext(client, host, port, method, url, data, callback, timeout)
        self.connections.append(req)
        self.fds.append(req)

    def check_conn_timeout(self):
        """检查所有的请求,是否有已经连接超时,如果有则终止"""
        timeout_list = []
        for context in self.connections:
            if context.is_timeout():
                timeout_list.append(context)
        for context in timeout_list:
            context.finish(AsyncTimeoutException('请求超时'))
            self.fds.remove(context)
            self.connections.remove(context)

    def running(self):
        """事件循环,用于检测请求的socket是否已经就绪,从而执行相关操作"""
        while True:
            r, w, e = select.select(self.fds, self.connections, self.fds, 0.05)

            if not self.fds:
                return

            for context in r:
                sock = context.sock
                while True:
                    try:
                        data = sock.recv(8096)
                        if not data:
                            self.fds.remove(context)
                            context.finish()
                            break
                        else:
                            context.write(data)
                    except BlockingIOError as e:
                        break
                    except TimeoutError as e:
                        self.fds.remove(context)
                        self.connections.remove(context)
                        context.finish(e)
                        break

            for context in w:
                # 已经连接成功远程服务器,开始向远程发送请求数据
                if context in self.fds:
                    data = context.send_request_data()
                    context.sock.sendall(data)
                    self.connections.remove(context)

            self.check_conn_timeout()


if __name__ == '__main__':
    def callback_func(context, response, ex):
        """
        :param context: HttpContext对象,内部封装了请求相关信息
        :param response: 请求响应内容
        :param ex: 是否出现异常(如果有异常则值为异常对象;否则值为None)
        :return:
        """
        print(context, response, ex)

    obj = AsyncRequest()
    url_list = [
        {'host': 'www.google.com', 'port': 80, 'method': 'GET', 'url': '/', 'data': '', 'timeout': 5,
         'callback': callback_func},
        {'host': 'www.baidu.com', 'port': 80, 'method': 'GET', 'url': '/', 'data': '', 'timeout': 5,
         'callback': callback_func},
        {'host': 'www.bing.com', 'port': 80, 'method': 'GET', 'url': '/', 'data': '', 'timeout': 5,
         'callback': callback_func},
    ]
    for item in url_list:
        print(item)
        obj.add_request(**item)

    obj.running()

The most badass async I/O module ever (hand-rolled)

二 Installation

#Windows平台
    1、pip3 install wheel #安装后,便支持通过wheel文件安装软件,wheel文件官网:https://www.lfd.uci.edu/~gohlke/pythonlibs
    3、pip3 install lxml
    4、pip3 install pyopenssl
    5、下载并安装pywin32:https://sourceforge.net/projects/pywin32/files/pywin32/
    6、下载twisted的wheel文件:http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
    7、执行pip3 install 下载目录Twisted-17.9.0-cp36-cp36m-win_amd64.whl
    8、pip3 install scrapy

#Linux平台
    1、pip3 install scrapy

三 The command-line tool

#1 查看帮助
    scrapy -h
    scrapy <command> -h

#2 有两种命令:其中Project-only必须切到项目文件夹下才能执行,而Global的命令则不需要
    Global commands:
        startproject #创建项目
        genspider    #创建爬虫程序
        settings     #如果是在项目目录下,则得到的是该项目的配置
        runspider    #运行一个独立的python文件,不必创建项目
        shell        #scrapy shell url地址  在交互式调试,如选择器规则正确与否
        fetch        #独立于程单纯地爬取一个页面,可以拿到请求头
        view         #下载完毕后直接弹出浏览器,以此可以分辨出哪些数据是ajax请求
        version      #scrapy version 查看scrapy的版本,scrapy version -v查看scrapy依赖库的版本
    Project-only commands:
        crawl        #运行爬虫,必须创建项目才行,确保配置文件中ROBOTSTXT_OBEY = False
        check        #检测项目中有无语法错误
        list         #列出项目中所包含的爬虫名
        edit         #编辑器,一般不用
        parse        #scrapy parse url地址 --callback 回调函数  #以此可以验证我们的回调函数是否正确
        bench        #scrapy bench 压力测试

#3 官网链接
    https://docs.scrapy.org/en/latest/topics/commands.html

 

#1、执行全局命令:请确保不在某个项目的目录下,排除受该项目配置的影响
scrapy startproject MyProject

cd MyProject
scrapy genspider baidu www.baidu.com

scrapy settings --get XXX #如果切换到项目目录下,看到的则是该项目的配置

scrapy runspider baidu.py

scrapy shell https://www.baidu.com
    response
    response.status
    response.body
    view(response)

scrapy view https://www.taobao.com #如果页面显示内容不全,不全的内容则是ajax请求实现的,以此快速定位问题

scrapy fetch --nolog --headers https://www.taobao.com

scrapy version #scrapy的版本

scrapy version -v #依赖库的版本


#2、执行项目命令:切到项目目录下
scrapy crawl baidu
scrapy check
scrapy list
scrapy parse http://quotes.toscrape.com/ --callback parse
scrapy bench

四 Project layout and a quick look at a spider application

project_name/
   scrapy.cfg
   project_name/
       __init__.py
       items.py
       pipelines.py
       settings.py
       spiders/
           __init__.py
           爬虫1.py
           爬虫2.py
           爬虫3.py

 

File descriptions:

  • scrapy.cfg   The project's main configuration, used when deploying Scrapy; crawler-related settings live in settings.py.
  • items.py     Data storage templates for structured data, similar to Django's Model.
  • pipelines    Data-processing behaviour, e.g. persisting the structured data.
  • settings.py  Configuration, e.g. recursion depth, concurrency, download delay. Note: option names must be UPPERCASE or they are ignored; the correct form is USER_AGENT = 'xxxx'.
  • spiders      The spider directory: create files here and write your crawling rules.

Note: spider files are usually named after the target site's domain.

By default spiders are run from the command line; to run them from PyCharm, add the following:

#在项目目录下新建:entrypoint.py
from scrapy.cmdline import execute
execute(['scrapy', 'crawl', 'xiaohua'])

 Windows console encoding

import sys, io
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')

五 Spiders

1. Introduction

#1、Spiders是由一系列类(定义了一个网址或一组网址将被爬取)组成,具体包括如何执行爬取任务并且如何从页面中提取结构化的数据。

#2、换句话说,Spiders是你为了一个特定的网址或一组网址自定义爬取和解析页面行为的地方

2. What a spider does, in a loop

#1、生成初始的Requests来爬取第一个URLS,并且标识一个回调函数
第一个请求定义在start_requests()方法内默认从start_urls列表中获得url地址来生成Request请求,默认的回调函数是parse方法。回调函数在下载完成返回response时自动触发

#2、在回调函数中,解析response并且返回值
返回值可以4种:
        包含解析数据的字典
        Item对象
        新的Request对象(新的Requests也需要指定一个回调函数)
        或者是可迭代对象(包含Items或Request)

#3、在回调函数中解析页面内容
通常使用Scrapy自带的Selectors,但很明显你也可以使用Beutifulsoup,lxml或其他你爱用啥用啥。

#4、最后,针对返回的Items对象将会被持久化到数据库
通过Item Pipeline组件存到数据库:https://docs.scrapy.org/en/latest/topics/item-pipeline.html#topics-item-pipeline)
或者导出到不同的文件(通过Feed exports:https://docs.scrapy.org/en/latest/topics/feed-exports.html#topics-feed-exports)

 3. Spiders ships five spider classes in total:

1、scrapy.spiders.Spider #scrapy.Spider等同于scrapy.spiders.Spider
2、scrapy.spiders.CrawlSpider
3、scrapy.spiders.XMLFeedSpider
4、scrapy.spiders.CSVFeedSpider
5、scrapy.spiders.SitemapSpider

 4. Importing and using them

# -*- coding: utf-8 -*-
import scrapy
from scrapy.spiders import Spider,CrawlSpider,XMLFeedSpider,CSVFeedSpider,SitemapSpider

class AmazonSpider(scrapy.Spider): #自定义类,继承Spiders提供的基类
    name = 'amazon'
    allowed_domains = ['www.amazon.cn']
    start_urls = ['http://www.amazon.cn/']

    def parse(self, response):
        pass

5、class scrapy.spiders.Spider

    This is the simplest spider class; every other spider class must inherit from it (including any you define yourself).

    The class provides no special functionality. It only supplies a default start_requests() method that reads URLs from start_urls and sends requests, with parse() as the default callback.

class AmazonSpider(scrapy.Spider):
    name = 'amazon' 

    allowed_domains = ['www.amazon.cn'] 

    start_urls = ['http://www.amazon.cn/']

    custom_settings = {
        'BOT_NAME' : 'Egon_Spider_Amazon',
        'REQUEST_HEADERS' : {
          'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
          'Accept-Language': 'en',
        }
    }

    def parse(self, response):
        pass


#1、name = 'amazon' 
定义爬虫名,scrapy会根据该值定位爬虫程序
所以它必须要有且必须唯一(In Python 2 this must be ASCII only.)

#2、allowed_domains = ['www.amazon.cn'] 
定义允许爬取的域名,如果OffsiteMiddleware启动(默认就启动),
那么不属于该列表的域名及其子域名都不允许爬取
如果爬取的网址为:https://www.example.com/1.html,那就添加'example.com'到列表.

#3、start_urls = ['http://www.amazon.cn/']
如果没有指定url,就从该列表中读取url来生成第一个请求

#4、custom_settings
值为一个字典,定义一些配置信息,在运行爬虫程序时,这些配置会覆盖项目级别的配置
所以custom_settings必须被定义成一个类属性,由于settings会在类实例化前被加载

#5、settings
通过self.settings['配置项的名字']可以访问settings.py中的配置,如果自己定义了custom_settings还是以自己的为准

#6、logger
日志名默认为spider的名字
self.logger.debug('=============>%s' %self.settings['BOT_NAME'])

#5、crawler:了解
该属性必须被定义到类方法from_crawler中

#6、from_crawler(crawler, *args, **kwargs):了解
You probably won’t need to override this directly  because the default implementation acts as a proxy to the __init__() method, calling it with the given arguments args and named arguments kwargs.

#7、start_requests()
该方法用来发起第一个Requests请求,且必须返回一个可迭代的对象。它在爬虫程序打开时就被Scrapy调用,Scrapy只调用它一次。
默认从start_urls里取出每个url来生成Request(url, dont_filter=True)

#针对参数dont_filter,请看自定义去重规则

如果你想要改变起始爬取的Requests,你就需要覆盖这个方法,例如你想要起始发送一个POST请求,如下
class MySpider(scrapy.Spider):
    name = 'myspider'

    def start_requests(self):
        return [scrapy.FormRequest("http://www.example.com/login",
                                   formdata={'user': 'john', 'pass': 'secret'},
                                   callback=self.logged_in)]

    def logged_in(self, response):
        # here you would extract links to follow and return Requests for
        # each of them, with another callback
        pass

#8、parse(response)
这是默认的回调函数,所有的回调函数必须返回an iterable of Request and/or dicts or Item objects.

#9、log(message[, level, component]):了解
Wrapper that sends a log message through the Spider’s logger, kept for backwards compatibility. For more information see Logging from Spiders.

#10、closed(reason)
爬虫程序结束时自动触发

Customizing scrapy.Spider: attributes and methods in detail

Deduplication rules should be shared across spiders: once one spider has crawled a URL, no other spider should crawl it again. Possible implementations:

#方法一:
1、新增类属性
visited=set() #类属性

2、回调函数parse方法内:
def parse(self, response):
    if response.url in self.visited:
        return None
    .......

    self.visited.add(response.url) 

#方法一改进:针对url可能过长,所以我们存放url的hash值
def parse(self, response):
    url = md5(response.request.url)
    if url in self.visited:
        return None
    .......

    self.visited.add(url) 

#方法二:Scrapy自带去重功能
配置文件:
DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter' #默认的去重规则帮我们去重,去重规则在内存中
DUPEFILTER_DEBUG = False
JOBDIR = "保存范文记录的日志路径,如:/root/"  # 最终路径为 /root/requests.seen,去重规则放文件中

scrapy自带去重规则默认为RFPDupeFilter,只需要我们指定
Request(...,dont_filter=False) ,如果dont_filter=True则告诉Scrapy这个URL不参与去重。

#方法三:
我们也可以仿照RFPDupeFilter自定义去重规则,

from scrapy.dupefilter import RFPDupeFilter,看源码,仿照BaseDupeFilter

#步骤一:在项目目录下自定义去重文件dup.py
class UrlFilter(object):
    def __init__(self):
        self.visited = set() #或者放到数据库

    @classmethod
    def from_settings(cls, settings):
        return cls()

    def request_seen(self, request):
        if request.url in self.visited:
            return True
        self.visited.add(request.url)

    def open(self):  # can return deferred
        pass

    def close(self, reason):  # can return a deferred
        pass

    def log(self, request, spider):  # log that a request has been filtered
        pass

#步骤二:配置文件settings.py:
DUPEFILTER_CLASS = '项目名.dup.UrlFilter'


# 源码分析:
from scrapy.core.scheduler import Scheduler
见Scheduler下的enqueue_request方法:self.df.request_seen(request)

Deduplication: removing duplicate URLs

#例一:
import scrapy

class MySpider(scrapy.Spider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = [
        'http://www.example.com/1.html',
        'http://www.example.com/2.html',
        'http://www.example.com/3.html',
    ]

    def parse(self, response):
        self.logger.info('A response from %s just arrived!', response.url)


#例二:一个回调函数返回多个Requests和Items
import scrapy

class MySpider(scrapy.Spider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = [
        'http://www.example.com/1.html',
        'http://www.example.com/2.html',
        'http://www.example.com/3.html',
    ]

    def parse(self, response):
        for h3 in response.xpath('//h3').extract():
            yield {"title": h3}

        for url in response.xpath('//a/@href').extract():
            yield scrapy.Request(url, callback=self.parse)


#例三:在start_requests()内直接指定起始爬取的urls,start_urls就没有用了,

import scrapy
from myproject.items import MyItem

class MySpider(scrapy.Spider):
    name = 'example.com'
    allowed_domains = ['example.com']

    def start_requests(self):
        yield scrapy.Request('http://www.example.com/1.html', self.parse)
        yield scrapy.Request('http://www.example.com/2.html', self.parse)
        yield scrapy.Request('http://www.example.com/3.html', self.parse)

    def parse(self, response):
        for h3 in response.xpath('//h3').extract():
            yield MyItem(title=h3)

        for url in response.xpath('//a/@href').extract():
            yield scrapy.Request(url, callback=self.parse)

Examples

我们可能需要在命令行为爬虫程序传递参数,比如传递初始的url,像这样
#命令行执行
scrapy crawl myspider -a category=electronics

#在__init__方法中可以接收外部传进来的参数
import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    def __init__(self, category=None, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        self.start_urls = ['http://www.example.com/categories/%s' % category]
        #...


#注意接收的参数全都是字符串,如果想要结构化的数据,你需要用类似json.loads的方法

Passing parameters

6. Other generic spiders: https://docs.scrapy.org/en/latest/topics/spiders.html#generic-spiders

    b. Note: requests only downloads the page content; it does not execute JavaScript. You therefore have to analyse the target site yourself and issue additional requests for any data loaded via AJAX.
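A small sketch of that workflow (the API endpoint below is purely hypothetical; in practice you find it in the browser's network panel):

import requests

# 1. requests only downloads the static HTML; no JavaScript runs
page = requests.get('https://example.com/products')
print(page.status_code, len(page.text))

# 2. call the AJAX/JSON endpoint that the page's JS would normally request
#    ('https://example.com/api/products?page=1' is an assumed, illustrative URL)
api = requests.get('https://example.com/api/products?page=1')
print(api.json())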


六 Selectors

1 // vs /
2 text
3、extract and extract_first: pull the content out of a Selector object
4、Attributes: prefix the attribute name with @ in XPath
5、Nested lookups
6、Setting a default value
7、Lookup by attribute
8、Fuzzy attribute matching
9、Regular expressions
10、Relative XPath
11、XPath with variables


response.selector.css()
response.selector.xpath()
可简写为
response.css()
response.xpath()

#1 //与/
response.xpath('//body/a/')#
response.css('div a::text')

>>> response.xpath('//body/a') #开头的//代表从整篇文档中寻找,body之后的/代表body的儿子
[]
>>> response.xpath('//body//a') #开头的//代表从整篇文档中寻找,body之后的//代表body的子子孙孙
[<Selector xpath='//body//a' data='<a href="image1.html">Name: My image 1 <'>, <Selector xpath='//body//a' data='<a href="image2.html">Name: My image 2 <'>, <Selector xpath='//body//a' data='<a href="
image3.html">Name: My image 3 <'>, <Selector xpath='//body//a' data='<a href="image4.html">Name: My image 4 <'>, <Selector xpath='//body//a' data='<a href="image5.html">Name: My image 5 <'>]

#2 text
>>> response.xpath('//body//a/text()')
>>> response.css('body a::text')

#3、extract与extract_first:从selector对象中解出内容
>>> response.xpath('//div/a/text()').extract()
['Name: My image 1 ', 'Name: My image 2 ', 'Name: My image 3 ', 'Name: My image 4 ', 'Name: My image 5 ']
>>> response.css('div a::text').extract()
['Name: My image 1 ', 'Name: My image 2 ', 'Name: My image 3 ', 'Name: My image 4 ', 'Name: My image 5 ']

>>> response.xpath('//div/a/text()').extract_first()
'Name: My image 1 '
>>> response.css('div a::text').extract_first()
'Name: My image 1 '

#4、属性:xpath的属性加前缀@
>>> response.xpath('//div/a/@href').extract_first()
'image1.html'
>>> response.css('div a::attr(href)').extract_first()
'image1.html'

#4、嵌套查找
>>> response.xpath('//div').css('a').xpath('@href').extract_first()
'image1.html'

#5、设置默认值
>>> response.xpath('//div[@id="xxx"]').extract_first(default="not found")
'not found'

#4、按照属性查找
response.xpath('//div[@id="images"]/a[@href="image3.html"]/text()').extract()
response.css('#images a[href="image3.html"]::text').extract()

#5、按照属性模糊查找
response.xpath('//a[contains(@href,"image")]/@href').extract()
response.css('a[href*="image"]::attr(href)').extract()

response.xpath('//a[contains(@href,"image")]/img/@src').extract()
response.css('a[href*="imag"] img::attr(src)').extract()

response.xpath('//*[@href="image1.html"]')
response.css('*[href="image1.html"]')

#6、正则表达式
response.xpath('//a/text()').re(r'Name: (.*)')
response.xpath('//a/text()').re_first(r'Name: (.*)')

#7、xpath相对路径
>>> res=response.xpath('//a[contains(@href,"3")]')[0]
>>> res.xpath('img')
[<Selector xpath='img' data='<img src="image3_thumb.jpg">'>]
>>> res.xpath('./img')
[<Selector xpath='./img' data='<img src="image3_thumb.jpg">'>]
>>> res.xpath('.//img')
[<Selector xpath='.//img' data='<img src="image3_thumb.jpg">'>]
>>> res.xpath('//img') #这就是从头开始扫描
[<Selector xpath='//img' data='<img src="image1_thumb.jpg">'>, <Selector xpath='//img' data='<img src="image2_thumb.jpg">'>, <Selector xpath='//img' data='<img src="image3_thumb.jpg">'>, <Selector xpa
th='//img' data='<img src="image4_thumb.jpg">'>, <Selector xpath='//img' data='<img src="image5_thumb.jpg">'>]

#8、带变量的xpath
>>> response.xpath('//div[@id=$xxx]/a/text()',xxx='images').extract_first()
'Name: My image 1 '
>>> response.xpath('//div[count(a)=$yyy]/@id',yyy=5).extract_first() #求有5个a标签的div的id
'images'


    c. Installation: pip3 install requests

一 Introduction

    Scrapy is an application framework written to crawl websites and extract structured data. It can be used in a whole range of programs for data mining, information processing or storing historical data.

Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture is roughly as follows:

(figure: Scrapy architecture diagram)

Scrapy mainly comprises the following components:

  • Engine (Scrapy)
    Handles the data flow of the whole system and triggers events (the core of the framework).
  • Scheduler
    Accepts requests from the engine and pushes them onto a queue, returning them when the engine asks again. Think of it as a priority queue of URLs (the addresses, or links, to crawl): it decides which URL to crawl next and removes duplicate URLs.
  • Downloader
    Downloads page content and returns it to the spiders (the downloader is built on Twisted's efficient asynchronous model).
  • Spiders
    Where the main work happens: they extract the information you need, i.e. the entities (Items), from specific pages. You can also extract links from them so Scrapy keeps crawling the next pages.
  • Item Pipeline
    Processes the entities extracted by the spiders; its main jobs are persisting entities, validating them and discarding unneeded information. Once a page has been parsed, items are sent to the pipeline and processed through several specific steps.
  • Downloader Middlewares
    A framework sitting between the Scrapy engine and the downloader; it mainly handles the requests and responses passed between them.
  • Spider Middlewares
    A framework between the Scrapy engine and the spiders; its main job is handling the spiders' response input and request output.
  • Scheduler Middlewares
    Middleware between the Scrapy engine and the scheduler; it handles the requests and responses sent from the engine to the scheduler.

The Scrapy run flow is roughly:

  1. The engine takes a link (URL) from the scheduler for the next crawl.
  2. The engine wraps the URL in a Request and hands it to the downloader.
  3. The downloader fetches the resource and wraps it in a Response.
  4. The spider parses the Response.
  5. If an entity (Item) is parsed out, it is handed to the item pipeline for further processing.
  6. If a link (URL) is parsed out, the URL is handed to the scheduler to wait to be crawled.

七 Items
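As a minimal sketch of what goes here: an Item is declared in items.py as a set of Field objects and then filled in like a dict inside a spider callback (the field names below mirror the XiaoHuarItem example used later in this post):

import scrapy


class XiaoHuarItem(scrapy.Item):
    # each field is declared as a scrapy.Field(); values are assigned like dict keys
    name = scrapy.Field()
    school = scrapy.Field()
    url = scrapy.Field()


# inside a spider callback:
#     item = XiaoHuarItem(name=name, school=school, url=url)
#     yield item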

    d. Request methods: the most common are requests.get() and requests.post()
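A short illustrative sketch of both calls (httpbin.org is used only as a convenient echo service):

import requests

# GET with query-string parameters
r1 = requests.get('http://httpbin.org/get', params={'q': 'scrapy'})
print(r1.status_code, r1.json()['args'])

# POST with form data
r2 = requests.post('http://httpbin.org/post', data={'user': 'john', 'pass': 'secret'})
print(r2.status_code, r2.json()['form'])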


 

八 Item Pipeline


#一:可以写多个Pipeline类
#1、如果优先级高的Pipeline的process_item返回一个值或者None,会自动传给下一个pipline的process_item,
#2、如果只想让第一个Pipeline执行,那得让第一个pipline的process_item抛出异常raise DropItem()

#3、可以用spider.name == '爬虫名' 来控制哪些爬虫用哪些pipeline

二:示范
from scrapy.exceptions import DropItem

class CustomPipeline(object):
    def __init__(self,v):
        self.value = v

    @classmethod
    def from_crawler(cls, crawler):
        """
        Scrapy会先通过getattr判断我们是否自定义了from_crawler,有则调它来完
        成实例化
        """
        val = crawler.settings.getint('MMMM')
        return cls(val)

    def open_spider(self,spider):
        """
        爬虫刚启动时执行一次
        """
        print('000000')

    def close_spider(self,spider):
        """
        爬虫关闭时执行一次
        """
        print('111111')


    def process_item(self, item, spider):
        # 操作并进行持久化

        # return表示会被后续的pipeline继续处理
        return item

        # 表示将item丢弃,不会被后续pipeline处理
        # raise DropItem()

A custom pipeline
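Point #3 in the block above (routing items by spider) is not demonstrated there; a minimal sketch of that check could look like this, where 'xiaohua' is just an assumed spider name:

class XiaoHuaOnlyPipeline(object):
    def process_item(self, item, spider):
        # only handle items produced by the 'xiaohua' spider (assumed name);
        # items from other spiders are passed through untouched
        if spider.name != 'xiaohua':
            return item
        # ... persist or transform xiaohua items here ...
        return item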


#1、settings.py
HOST="127.0.0.1"
PORT=27017
USER="root"
PWD="123"
DB="amazon"
TABLE="goods"



ITEM_PIPELINES = {
   'Amazon.pipelines.CustomPipeline': 200,
}

#2、pipelines.py
from pymongo import MongoClient

class CustomPipeline(object):
    def __init__(self,host,port,user,pwd,db,table):
        self.host=host
        self.port=port
        self.user=user
        self.pwd=pwd
        self.db=db
        self.table=table

    @classmethod
    def from_crawler(cls, crawler):
        """
        Scrapy会先通过getattr判断我们是否自定义了from_crawler,有则调它来完
        成实例化
        """
        HOST = crawler.settings.get('HOST')
        PORT = crawler.settings.get('PORT')
        USER = crawler.settings.get('USER')
        PWD = crawler.settings.get('PWD')
        DB = crawler.settings.get('DB')
        TABLE = crawler.settings.get('TABLE')
        return cls(HOST,PORT,USER,PWD,DB,TABLE)

    def open_spider(self,spider):
        """
        爬虫刚启动时执行一次
        """
        self.client = MongoClient('mongodb://%s:%s@%s:%s' %(self.user,self.pwd,self.host,self.port))

    def close_spider(self,spider):
        """
        爬虫关闭时执行一次
        """
        self.client.close()


    def process_item(self, item, spider):
        # 操作并进行持久化

        self.client[self.db][self.table].save(dict(item))

Example

二、GET requests

二、Basic usage

九 Downloader Middleware

Uses of downloader middleware:
    1. Do the download yourself inside process_request instead of using Scrapy's downloader
    2. Post-process requests, for example:
        set request headers
        set cookies
        add a proxy
            Scrapy's built-in proxy component:
                from scrapy.downloadermiddlewares.httpproxy import HttpProxyMiddleware
                from urllib.request import getproxies


class DownMiddleware1(object):
    def process_request(self, request, spider):
        """
        请求需要被下载时,经过所有下载器中间件的process_request调用
        :param request: 
        :param spider: 
        :return:  
            None,继续后续中间件去下载;
            Response对象,停止process_request的执行,开始执行process_response
            Request对象,停止中间件的执行,将Request重新调度器
            raise IgnoreRequest异常,停止process_request的执行,开始执行process_exception
        """
        pass



    def process_response(self, request, response, spider):
        """
        spider处理完成,返回时调用
        :param response:
        :param result:
        :param spider:
        :return: 
            Response 对象:转交给其他中间件process_response
            Request 对象:停止中间件,request会被重新调度下载
            raise IgnoreRequest 异常:调用Request.errback
        """
        print('response1')
        return response

    def process_exception(self, request, exception, spider):
        """
        当下载处理器(download handler)或 process_request() (下载中间件)抛出异常
        :param response:
        :param exception:
        :param spider:
        :return: 
            None:继续交给后续中间件处理异常;
            Response对象:停止后续process_exception方法
            Request对象:停止中间件,request将会被重新调用下载
        """
        return None

Downloader middleware

#1、与middlewares.py同级目录下新建proxy_handle.py
import requests

def get_proxy():
    return requests.get("http://127.0.0.1:5010/get/").text

def delete_proxy(proxy):
    requests.get("http://127.0.0.1:5010/delete/?proxy={}".format(proxy))



#2、middlewares.py
from Amazon.proxy_handle import get_proxy,delete_proxy

class DownMiddleware1(object):
    def process_request(self, request, spider):
        """
        请求需要被下载时,经过所有下载器中间件的process_request调用
        :param request:
        :param spider:
        :return:
            None,继续后续中间件去下载;
            Response对象,停止process_request的执行,开始执行process_response
            Request对象,停止中间件的执行,将Request重新调度器
            raise IgnoreRequest异常,停止process_request的执行,开始执行process_exception
        """
        proxy="http://" + get_proxy()
        request.meta['download_timeout']=20
        request.meta["proxy"] = proxy
        print('为%s 添加代理%s ' % (request.url, proxy),end='')
        print('元数据为',request.meta)

    def process_response(self, request, response, spider):
        """
        spider处理完成,返回时调用
        :param response:
        :param result:
        :param spider:
        :return:
            Response 对象:转交给其他中间件process_response
            Request 对象:停止中间件,request会被重新调度下载
            raise IgnoreRequest 异常:调用Request.errback
        """
        print('返回状态吗',response.status)
        return response


    def process_exception(self, request, exception, spider):
        """
        当下载处理器(download handler)或 process_request() (下载中间件)抛出异常
        :param response:
        :param exception:
        :param spider:
        :return:
            None:继续交给后续中间件处理异常;
            Response对象:停止后续process_exception方法
            Request对象:停止中间件,request将会被重新调用下载
        """
        print('代理%s,访问%s出现异常:%s' %(request.meta['proxy'],request.url,exception))
        import time
        time.sleep(5)
        delete_proxy(request.meta['proxy'].split("//")[-1])
        request.meta['proxy']='http://'+get_proxy()

        return request

Configuring a proxy

    a. Basic requests

1. Core commands

#1 查看帮助
    scrapy -h
    scrapy <command> -h

#2 有两种命令:其中Project-only必须切到项目文件夹下才能执行,而Global的命令则不需要
    Global commands:
        startproject #创建项目
        genspider    #创建爬虫程序 
                            如:
                              scrapy gensipider -t basic oldboy oldboy.com
                              scrapy gensipider -t xmlfeed autohome autohome.com.cn
        settings     #如果是在项目目录下,则得到的是该项目的配置
        runspider    #运行一个独立的python文件,不必创建项目
        shell        #scrapy shell url地址  在交互式调试,如选择器规则正确与否
        fetch        #独立于程单纯地爬取一个页面,可以拿到请求头
        view         #下载完毕后直接弹出浏览器,以此可以分辨出哪些数据是ajax请求
        version      #scrapy version 查看scrapy的版本,scrapy version -v查看scrapy依赖库的版本
    Project-only commands:
        crawl        #运行爬虫,必须创建项目才行,确保配置文件中ROBOTSTXT_OBEY = False
        check        #检测项目中有无语法错误
        list         #列出项目中所包含的爬虫名
        edit         #编辑器,一般不用
        parse        #scrapy parse url地址 --callback 回调函数  #以此可以验证我们的回调函数是否正确
        bench        #scrapy bentch压力测试

#3 官网链接
    https://docs.scrapy.org/en/latest/topics/commands.html        


 

2. Project structure and a quick look at a spider application

project_name/
   scrapy.cfg
   project_name/
       __init__.py
       items.py
       pipelines.py
       settings.py
       spiders/
           __init__.py
           爬虫1.py
           爬虫2.py
           爬虫3.py

File descriptions:

  • scrapy.cfg   The project's main configuration (the real crawler-related settings live in settings.py).
  • items.py     Data storage templates for structured data, similar to Django's Model.
  • pipelines    Data-processing behaviour, e.g. persisting the structured data.
  • settings.py  Configuration, e.g. recursion depth, concurrency, download delay, etc.
  • spiders      The spider directory: create files here and write your crawling rules.

Note: spider files are usually named after the target site's domain.


import scrapy

class XiaoHuarSpider(scrapy.spiders.Spider):
    name = "xiaohuar"                            # 爬虫名称 *****
    allowed_domains = ["xiaohuar.com"]  # 允许的域名
    start_urls = [
        "http://www.xiaohuar.com/hua/",   # 其实URL
    ]

    def parse(self, response):
        # 访问起始URL并获取结果后的回调函数

爬虫1.py


import sys, io
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')

Windows console encoding

3. A quick first try

 

import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request


class DigSpider(scrapy.Spider):
    # 爬虫应用的名称,通过此名称启动爬虫命令
    name = "dig"

    # 允许的域名
    allowed_domains = ["chouti.com"]

    # 起始URL
    start_urls = [
        'http://dig.chouti.com/',
    ]

    has_request_set = {}

    def parse(self, response):
        print(response.url)

        hxs = HtmlXPathSelector(response)
        page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for page in page_list:
            page_url = 'http://dig.chouti.com%s' % page
            key = self.md5(page_url)
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = page_url
                obj = Request(url=page_url, method='GET', callback=self.parse)
                yield obj

    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key

  

To run this spider, open a terminal, cd into the project directory and execute:

scrapy crawl dig --nolog

The key points in the code above are:

  • Request is a class that wraps a request; yielding one from a callback tells Scrapy to keep crawling.
  • HtmlXPathSelector structures the HTML and provides selector functionality.

4. Selectors

#1 //与/
#2 text
#3、extract与extract_first:从selector对象中解出内容
#4、属性:xpath的属性加前缀@
#4、嵌套查找
#5、设置默认值
#4、按照属性查找
#5、按照属性模糊查找
#6、正则表达式
#7、xpath相对路径
#8、带变量的xpath


 


# -*- coding: utf-8 -*-
import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from scrapy.http.cookies import CookieJar
from scrapy import FormRequest


class ChouTiSpider(scrapy.Spider):
    # 爬虫应用的名称,通过此名称启动爬虫命令
    name = "chouti"
    # 允许的域名
    allowed_domains = ["chouti.com"]

    cookie_dict = {}
    has_request_set = {}

    def start_requests(self):
        url = 'http://dig.chouti.com/'
        # return [Request(url=url, callback=self.login)]
        yield Request(url=url, callback=self.login)

    def login(self, response):
        cookie_jar = CookieJar()
        cookie_jar.extract_cookies(response, response.request)
        for k, v in cookie_jar._cookies.items():
            for i, j in v.items():
                for m, n in j.items():
                    self.cookie_dict[m] = n.value

        req = Request(
            url='http://dig.chouti.com/login',
            method='POST',
            headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
            body='phone=8615131255089&password=pppppppp&oneMonth=1',
            cookies=self.cookie_dict,
            callback=self.check_login
        )
        yield req

    def check_login(self, response):
        req = Request(
            url='http://dig.chouti.com/',
            method='GET',
            callback=self.show,
            cookies=self.cookie_dict,
            dont_filter=True
        )
        yield req

    def show(self, response):
        # print(response)
        hxs = HtmlXPathSelector(response)
        news_list = hxs.select('//div[@id="content-list"]/div[@class="item"]')
        for new in news_list:
            # temp = new.xpath('div/div[@class="part2"]/@share-linkid').extract()
            link_id = new.xpath('*/div[@class="part2"]/@share-linkid').extract_first()
            yield Request(
                url='http://dig.chouti.com/link/vote?linksId=%s' %(link_id,),
                method='POST',
                cookies=self.cookie_dict,
                callback=self.do_favor
            )

        page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for page in page_list:

            page_url = 'http://dig.chouti.com%s' % page
            import hashlib
            hash = hashlib.md5()
            hash.update(bytes(page_url,encoding='utf-8'))
            key = hash.hexdigest()
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = page_url
                yield Request(
                    url=page_url,
                    method='GET',
                    callback=self.show
                )

    def do_favor(self, response):
        print(response.text)

Example: automatically logging in to Chouti and upvoting

Note: set DEPTH_LIMIT = 1 in settings.py to limit the "recursion" depth.
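For reference, a minimal hedged settings.py fragment for that option:

# settings.py
DEPTH_LIMIT = 1   # follow links at most one level deep from the start URLs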

十 Spider Middleware

1. Spider middleware methods


from scrapy import signals

class SpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened) #当前爬虫执行时触发spider_opened
        return s

    def spider_opened(self, spider):
        # spider.logger.info('我是egon派来的爬虫1: %s' % spider.name)
        print('我是egon派来的爬虫1: %s' % spider.name)

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn’t have a response associated.

        # Must return only requests (not items).
        print('start_requests1')
        for r in start_requests:
            yield r

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # 每个response经过爬虫中间件进入spider时调用

        # 返回值:Should return None or raise an exception.
        #1、None: 继续执行其他中间件的process_spider_input
        #2、抛出异常:
        # 一旦抛出异常则不再执行其他中间件的process_spider_input
        # 并且触发request绑定的errback
        # errback的返回值倒着传给中间件的process_spider_output
        # 如果未找到errback,则倒着执行中间件的process_spider_exception

        print("input1")
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        print('output1')

        # 用yield返回多次,与return返回一次是一个道理
        # 如果生成器掌握不好(函数内有yield执行函数得到的是生成器而并不会立刻执行),生成器的形式会容易误导你对中间件执行顺序的理解
        # for i in result:
        #     yield i
        return result

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Response, dict
        # or Item objects.
        print('exception1')

Spider middleware

 2. When the spider starts up and the initial requests are generated


#步骤一:
'''
打开注释:
SPIDER_MIDDLEWARES = {
   'Baidu.middlewares.SpiderMiddleware1': 200,
   'Baidu.middlewares.SpiderMiddleware2': 300,
   'Baidu.middlewares.SpiderMiddleware3': 400,
}

'''


#步骤二:middlewares.py
from scrapy import signals

class SpiderMiddleware1(object):
    @classmethod
    def from_crawler(cls, crawler):
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened) #当前爬虫执行时触发spider_opened
        return s

    def spider_opened(self, spider):
        print('我是egon派来的爬虫1: %s' % spider.name)

    def process_start_requests(self, start_requests, spider):
        # Must return only requests (not items).
        print('start_requests1')
        for r in start_requests:
            yield r




class SpiderMiddleware2(object):
    @classmethod
    def from_crawler(cls, crawler):
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)  # 当前爬虫执行时触发spider_opened
        return s

    def spider_opened(self, spider):
        print('我是egon派来的爬虫2: %s' % spider.name)

    def process_start_requests(self, start_requests, spider):
        print('start_requests2')
        for r in start_requests:
            yield r


class SpiderMiddleware3(object):
    @classmethod
    def from_crawler(cls, crawler):
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)  # 当前爬虫执行时触发spider_opened
        return s

    def spider_opened(self, spider):
        print('我是egon派来的爬虫3: %s' % spider.name)

    def process_start_requests(self, start_requests, spider):
        print('start_requests3')
        for r in start_requests:
            yield r


#步骤三:分析运行结果
#1、启动爬虫时则立刻执行:

我是egon派来的爬虫1: baidu
我是egon派来的爬虫2: baidu
我是egon派来的爬虫3: baidu


#2、然后产生一个初始的request请求,依次经过爬虫中间件1,2,3:
start_requests1
start_requests2
start_requests3


3. When process_spider_input returns None


#步骤一:打开注释:
'''
SPIDER_MIDDLEWARES = {
   'Baidu.middlewares.SpiderMiddleware1': 200,
   'Baidu.middlewares.SpiderMiddleware2': 300,
   'Baidu.middlewares.SpiderMiddleware3': 400,
}

'''

#步骤二:middlewares.py
from scrapy import signals

class SpiderMiddleware1(object):

    def process_spider_input(self, response, spider):
        print("input1")

    def process_spider_output(self, response, result, spider):
        print('output1')
        return result

    def process_spider_exception(self, response, exception, spider):
        print('exception1')


class SpiderMiddleware2(object):

    def process_spider_input(self, response, spider):
        print("input2")
        return None

    def process_spider_output(self, response, result, spider):
        print('output2')
        return result

    def process_spider_exception(self, response, exception, spider):
        print('exception2')


class SpiderMiddleware3(object):

    def process_spider_input(self, response, spider):
        print("input3")
        return None

    def process_spider_output(self, response, result, spider):
        print('output3')
        return result

    def process_spider_exception(self, response, exception, spider):
        print('exception3')


#步骤三:运行结果分析

#1、返回response时,依次经过爬虫中间件1,2,3
input1
input2
input3

#2、spider处理完毕后,依次经过爬虫中间件3,2,1
output3
output2
output1


4. When process_spider_input raises an exception


#步骤一:
'''
打开注释:
SPIDER_MIDDLEWARES = {
   'Baidu.middlewares.SpiderMiddleware1': 200,
   'Baidu.middlewares.SpiderMiddleware2': 300,
   'Baidu.middlewares.SpiderMiddleware3': 400,
}

'''

#步骤二:middlewares.py

from scrapy import signals

class SpiderMiddleware1(object):

    def process_spider_input(self, response, spider):
        print("input1")

    def process_spider_output(self, response, result, spider):
        print('output1')
        return result

    def process_spider_exception(self, response, exception, spider):
        print('exception1')


class SpiderMiddleware2(object):

    def process_spider_input(self, response, spider):
        print("input2")
        raise TypeError

    def process_spider_output(self, response, result, spider):
        print('output2')
        return result

    def process_spider_exception(self, response, exception, spider):
        print('exception2')


class SpiderMiddleware3(object):

    def process_spider_input(self, response, spider):
        print("input3")
        return None

    def process_spider_output(self, response, result, spider):
        print('output3')
        return result

    def process_spider_exception(self, response, exception, spider):
        print('exception3')



#运行结果        
input1
input2
exception3
exception2
exception1

#分析:
#1、当response经过中间件1的 process_spider_input返回None,继续交给中间件2的process_spider_input
#2、中间件2的process_spider_input抛出异常,则直接跳过后续的process_spider_input,将异常信息传递给Spiders里该请求的errback
#3、没有找到errback,则该response既没有被Spiders正常的callback执行,也没有被errback执行,即Spiders啥事也没有干,那么开始倒着执行process_spider_exception
#4、如果process_spider_exception返回None,代表该方法推卸掉责任,并没处理异常,而是直接交给下一个process_spider_exception,全都返回None,则异常最终交给Engine抛出


5. Specifying an errback


#步骤一:spider.py
import scrapy


class BaiduSpider(scrapy.Spider):
    name = 'baidu'
    allowed_domains = ['www.baidu.com']
    start_urls = ['http://www.baidu.com/']


    def start_requests(self):
        yield scrapy.Request(url='http://www.baidu.com/',
                             callback=self.parse,
                             errback=self.parse_err,
                             )

    def parse(self, response):
        pass

    def parse_err(self,res):
        #res 为异常信息,异常已经被该函数处理了,因此不会再抛给因此,于是开始走process_spider_output
        return [1,2,3,4,5] #提取异常信息中有用的数据以可迭代对象的形式存放于管道中,等待被process_spider_output取走



#步骤二:
'''
打开注释:
SPIDER_MIDDLEWARES = {
   'Baidu.middlewares.SpiderMiddleware1': 200,
   'Baidu.middlewares.SpiderMiddleware2': 300,
   'Baidu.middlewares.SpiderMiddleware3': 400,
}

'''

#步骤三:middlewares.py

from scrapy import signals

class SpiderMiddleware1(object):

    def process_spider_input(self, response, spider):
        print("input1")

    def process_spider_output(self, response, result, spider):
        print('output1',list(result))
        return result

    def process_spider_exception(self, response, exception, spider):
        print('exception1')


class SpiderMiddleware2(object):

    def process_spider_input(self, response, spider):
        print("input2")
        raise TypeError('input2 抛出异常')

    def process_spider_output(self, response, result, spider):
        print('output2',list(result))
        return result

    def process_spider_exception(self, response, exception, spider):
        print('exception2')


class SpiderMiddleware3(object):

    def process_spider_input(self, response, spider):
        print("input3")
        return None

    def process_spider_output(self, response, result, spider):
        print('output3',list(result))
        return result

    def process_spider_exception(self, response, exception, spider):
        print('exception3')



#步骤四:运行结果分析
input1
input2
output3 [1, 2, 3, 4, 5] #parse_err的返回值放入管道中,只能被取走一次,在output3的方法内可以根据异常信息封装一个新的request请求
output2 []
output1 []


import requests
response=requests.get('http://dig.chouti.com/')
print(response.text)

5 Spiders


#在项目目录下新建:entrypoint.py
from scrapy.cmdline import execute
execute(['scrapy', 'crawl', 'xiaohua'])

By default spiders can only be run from the command line; to run them from PyCharm you need the entrypoint.py shown above.

Reminder: options in the settings file must be UPPERCASE, e.g. X = '1'


# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class BaiduSpider(CrawlSpider):
    name = 'xiaohua'
    allowed_domains = ['www.xiaohuar.com']
    start_urls = ['http://www.xiaohuar.com/v/']
    # download_delay = 1

    rules = (
        Rule(LinkExtractor(allow=r'p-\d-\d+\.html$'), callback='parse_item', follow=True,),
    )


    def parse_item(self, response):
        # extract the video URL from the detail page; the XPath below is only
        # illustrative, the original expression was omitted in the source
        url = response.xpath('//video/source/@src').extract_first()
        if url:
            print('======下载视频==============================', url)
            yield scrapy.Request(url, callback=self.save)



    def save(self,response):
        print('======保存视频==============================',response.url,len(response.body))

        import time
        import hashlib
        m=hashlib.md5()
        m.update(str(time.time()).encode('utf-8'))
        m.update(response.url.encode('utf-8'))

        filename=r'E:\mv\%s.mp4' %m.hexdigest()
        with open(filename,'wb') as f:
            f.write(response.body)

模版:CrawlSpider

三. Formatting the scraped data (Items)

The example above does only simple processing, so everything is handled directly in the parse method. If you want to do more with the extracted data, use Scrapy's Items to structure it and then hand it to the pipelines for unified processing.


import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from scrapy.http.cookies import CookieJar
from scrapy import FormRequest


class XiaoHuarSpider(scrapy.Spider):
    # 爬虫应用的名称,通过此名称启动爬虫命令
    name = "xiaohuar"
    # 允许的域名
    allowed_domains = ["xiaohuar.com"]

    start_urls = [
        "http://www.xiaohuar.com/list-1-1.html",
    ]
    # custom_settings = {
    #     'ITEM_PIPELINES':{
    #         'spider1.pipelines.JsonPipeline': 100
    #     }
    # }
    has_request_set = {}

    def parse(self, response):
        # 分析页面
        # 找到页面中符合规则的内容(校花图片),保存
        # 找到所有的a标签,再访问其他a标签,一层一层的搞下去

        hxs = HtmlXPathSelector(response)

        items = hxs.select('//div[@class="item_list infinite_scroll"]/div')
        for item in items:
            src = item.select('.//div[@class="img"]/a/img/@src').extract_first()
            name = item.select('.//div[@class="img"]/span/text()').extract_first()
            school = item.select('.//div[@class="img"]/div[@class="btns"]/a/text()').extract_first()
            url = "http://www.xiaohuar.com%s" % src
            from ..items import XiaoHuarItem
            obj = XiaoHuarItem(name=name, school=school, url=url)
            yield obj

        urls = hxs.select('//a[re:test(@href, "http://www.xiaohuar.com/list-1-\d+.html")]/@href')
        for url in urls:
            key = self.md5(url)
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = url
                req = Request(url=url,method='GET',callback=self.parse)
                yield req

    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key

spiders/xiahuar.py


import scrapy


class XiaoHuarItem(scrapy.Item):
    name = scrapy.Field()
    school = scrapy.Field()
    url = scrapy.Field()

items


import json
import os
import requests


class JsonPipeline(object):
    def __init__(self):
        self.file = open('xiaohua.txt', 'w')

    def process_item(self, item, spider):
        v = json.dumps(dict(item), ensure_ascii=False)
        self.file.write(v)
        self.file.write('\n')
        self.file.flush()
        return item


class FilePipeline(object):
    def __init__(self):
        if not os.path.exists('imgs'):
            os.makedirs('imgs')

    def process_item(self, item, spider):
        response = requests.get(item['url'], stream=True)
        file_name = '%s_%s.jpg' % (item['name'], item['school'])
        with open(os.path.join('imgs', file_name), mode='wb') as f:
            f.write(response.content)
        return item

pipelines


ITEM_PIPELINES = {
   'spider1.pipelines.JsonPipeline': 100,
   'spider1.pipelines.FilePipeline': 300,
}
# 每行后面的整型值,确定了他们运行的顺序,item按数字从低到高的顺序,通过pipeline,通常将这些数字定义在0-1000范围内。

settings

A pipeline can do quite a bit more, as shown below:


from scrapy.exceptions import DropItem

class CustomPipeline(object):
    def __init__(self,v):
        self.value = v

    def process_item(self, item, spider):
        # 操作并进行持久化

        # return表示会被后续的pipeline继续处理
        return item

        # 表示将item丢弃,不会被后续pipeline处理
        # raise DropItem()


    @classmethod
    def from_crawler(cls, crawler):
        """
        初始化时候,用于创建pipeline对象
        :param crawler: 
        :return: 
        """
        val = crawler.settings.getint('MMMM')
        return cls(val)

    def open_spider(self,spider):
        """
        爬虫开始执行时,调用
        :param spider: 
        :return: 
        """
        print('000000')

    def close_spider(self,spider):
        """
        爬虫关闭时,被调用
        :param spider: 
        :return: 
        """
        print('111111')

自定义pipeline

四.中间件


class SpiderMiddleware(object):

    def process_spider_input(self,response, spider):
        """
        下载完成,执行,然后交给parse处理
        :param response: 
        :param spider: 
        :return: 
        """
        pass

    def process_spider_output(self,response, result, spider):
        """
        spider处理完成,返回时调用
        :param response:
        :param result:
        :param spider:
        :return: 必须返回包含 Request 或 Item 对象的可迭代对象(iterable)
        """
        return result

    def process_spider_exception(self,response, exception, spider):
        """
        异常调用
        :param response:
        :param exception:
        :param spider:
        :return: None,继续交给后续中间件处理异常;含 Response 或 Item 的可迭代对象(iterable),交给调度器或pipeline
        """
        return None


    def process_start_requests(self,start_requests, spider):
        """
        爬虫启动时调用
        :param start_requests:
        :param spider:
        :return: 包含 Request 对象的可迭代对象
        """
        return start_requests

爬虫中间件


class DownMiddleware1(object):
    def process_request(self, request, spider):
        """
        请求需要被下载时,经过所有下载器中间件的process_request调用
        :param request: 
        :param spider: 
        :return:  
            None,继续后续中间件去下载;
            Response对象,停止process_request的执行,开始执行process_response
            Request对象,停止中间件的执行,将Request重新调度器
            raise IgnoreRequest异常,停止process_request的执行,开始执行process_exception
        """
        pass



    def process_response(self, request, response, spider):
        """
        spider处理完成,返回时调用
        :param response:
        :param result:
        :param spider:
        :return: 
            Response 对象:转交给其他中间件process_response
            Request 对象:停止中间件,request会被重新调度下载
            raise IgnoreRequest 异常:调用Request.errback
        """
        print('response1')
        return response

    def process_exception(self, request, exception, spider):
        """
        当下载处理器(download handler)或 process_request() (下载中间件)抛出异常
        :param response:
        :param exception:
        :param spider:
        :return: 
            None:继续交给后续中间件处理异常;
            Response对象:停止后续process_exception方法
            Request对象:停止中间件,request将会被重新调用下载
        """
        return None

下载器中间件

五. Custom commands

  • Create a directory (any name, e.g. commands) at the same level as the spiders directory
  • Inside it create a crawlall.py file (the file name becomes the name of the custom command)
        from scrapy.commands import ScrapyCommand
        from scrapy.utils.project import get_project_settings
    
        class Command(ScrapyCommand):

            requires_project = True

            def syntax(self):
                return '[options]'

            def short_desc(self):
                return 'Runs all of the spiders'

            def run(self, args, opts):
                spider_list = self.crawler_process.spiders.list()
                for name in spider_list:
                    self.crawler_process.crawl(name, **opts.__dict__)
                self.crawler_process.start()

crawlall.py
  • In settings.py add: COMMANDS_MODULE = '<project name>.<directory name>'
  • Run the command from the project directory: scrapy crawlall

六. 自定义扩展

When writing a custom extension, signals are used to register the desired operation at a specific point in the crawl.


from scrapy import signals


class MyExtension(object):
    def __init__(self, value):
        self.value = value

    @classmethod
    def from_crawler(cls, crawler):
        val = crawler.settings.getint('MMMM')
        ext = cls(val)

        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)

        return ext

    def spider_opened(self, spider):
        print('open')

    def spider_closed(self, spider):
        print('close')


七. Avoiding duplicate requests

By default Scrapy deduplicates requests with scrapy.dupefilter.RFPDupeFilter; the related settings are:

DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
DUPEFILTER_DEBUG = False
JOBDIR = "path where the visited-request log is kept, e.g. /root/"  # the final file will be /root/requests.seen


class RepeatUrl:
    def __init__(self):
        self.visited_url = set()

    @classmethod
    def from_settings(cls, settings):
        """
        初始化时,调用
        :param settings: 
        :return: 
        """
        return cls()

    def request_seen(self, request):
        """
        检测当前请求是否已经被访问过
        :param request: 
        :return: True表示已经访问过;False表示未访问过
        """
        if request.url in self.visited_url:
            return True
        self.visited_url.add(request.url)
        return False

    def open(self):
        """
        开始爬去请求时,调用
        :return: 
        """
        print('open replication')

    def close(self, reason):
        """
        结束爬虫爬取时,调用
        :param reason: 
        :return: 
        """
        print('close replication')

    def log(self, request, spider):
        """
        记录日志
        :param request: 
        :param spider: 
        :return: 
        """
        print('repeat', request.url)

Custom URL dedup filter
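To have Scrapy use a custom filter class such as the one above instead of the default RFPDupeFilter, DUPEFILTER_CLASS in settings.py is pointed at it; the dotted path below is only an assumed location for the class.

# settings.py -- adjust the dotted path to wherever RepeatUrl is actually defined
DUPEFILTER_CLASS = 'myproject.duplication.RepeatUrl'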

八.其他


# -*- coding: utf-8 -*-

# Scrapy settings for step8_king project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

# 1. 爬虫名称
BOT_NAME = 'step8_king'

# 2. 爬虫应用路径
SPIDER_MODULES = ['step8_king.spiders']
NEWSPIDER_MODULE = 'step8_king.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# 3. 客户端 user-agent请求头
# USER_AGENT = 'step8_king (+http://www.yourdomain.com)'

# Obey robots.txt rules
# 4. 禁止爬虫配置
# ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# 5. 并发请求数
# CONCURRENT_REQUESTS = 4

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# 6. 延迟下载秒数
# DOWNLOAD_DELAY = 2


# The download delay setting will honor only one of:
# 7. 单域名访问并发数,并且延迟下次秒数也应用在每个域名
# CONCURRENT_REQUESTS_PER_DOMAIN = 2
# 单IP访问并发数,如果有值则忽略:CONCURRENT_REQUESTS_PER_DOMAIN,并且延迟下次秒数也应用在每个IP
# CONCURRENT_REQUESTS_PER_IP = 3

# Disable cookies (enabled by default)
# 8. 是否支持cookie,cookiejar进行操作cookie
# COOKIES_ENABLED = True
# COOKIES_DEBUG = True

# Disable Telnet Console (enabled by default)
# 9. Telnet用于查看当前爬虫的信息,操作爬虫等...
#    使用telnet ip port ,然后通过命令操作
# TELNETCONSOLE_ENABLED = True
# TELNETCONSOLE_HOST = '127.0.0.1'
# TELNETCONSOLE_PORT = [6023,]


# 10. 默认请求头
# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }


# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
# 11. 定义pipeline处理请求
# ITEM_PIPELINES = {
#    'step8_king.pipelines.JsonPipeline': 700,
#    'step8_king.pipelines.FilePipeline': 500,
# }



# 12. 自定义扩展,基于信号进行调用
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     # 'step8_king.extensions.MyExtension': 500,
# }


# 13. 爬虫允许的最大深度,可以通过meta查看当前深度;0表示无深度
# DEPTH_LIMIT = 3

# 14. 爬取时,0表示深度优先Lifo(默认);1表示广度优先FiFo

# 后进先出,深度优先
# DEPTH_PRIORITY = 0
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
# 先进先出,广度优先

# DEPTH_PRIORITY = 1
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

# 15. 调度器队列
# SCHEDULER = 'scrapy.core.scheduler.Scheduler'
# from scrapy.core.scheduler import Scheduler


# 16. 访问URL去重
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'


# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html

"""
17. 自动限速算法
    from scrapy.contrib.throttle import AutoThrottle
    自动限速设置
    1. 获取最小延迟 DOWNLOAD_DELAY
    2. 获取最大延迟 AUTOTHROTTLE_MAX_DELAY
    3. 设置初始下载延迟 AUTOTHROTTLE_START_DELAY
    4. 当请求下载完成后,获取其"连接"时间 latency,即:请求连接到接受到响应头之间的时间
    5. 用于计算的... AUTOTHROTTLE_TARGET_CONCURRENCY
    target_delay = latency / self.target_concurrency
    new_delay = (slot.delay + target_delay) / 2.0 # 表示上一次的延迟时间
    new_delay = max(target_delay, new_delay)
    new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
    slot.delay = new_delay
"""

# 开始自动限速
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# 初始下载延迟
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# 最大下载延迟
# AUTOTHROTTLE_MAX_DELAY = 10
# The average number of requests Scrapy should be sending in parallel to each remote server
# 平均每秒并发数
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0

# Enable showing throttling stats for every response received:
# 是否显示
# AUTOTHROTTLE_DEBUG = True

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings


"""
18. 启用缓存
    目的用于将已经发送的请求或相应缓存下来,以便以后使用

    from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
    from scrapy.extensions.httpcache import DummyPolicy
    from scrapy.extensions.httpcache import FilesystemCacheStorage
"""
# 是否启用缓存策略
# HTTPCACHE_ENABLED = True

# 缓存策略:所有请求均缓存,下次在请求直接访问原来的缓存即可
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
# 缓存策略:根据Http响应头:Cache-Control、Last-Modified 等进行缓存的策略
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"

# 缓存超时时间
# HTTPCACHE_EXPIRATION_SECS = 0

# 缓存保存路径
# HTTPCACHE_DIR = 'httpcache'

# 缓存忽略的Http状态码
# HTTPCACHE_IGNORE_HTTP_CODES = []

# 缓存存储的插件
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'


"""
19. 代理,需要在环境变量中设置
    from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware

    方式一:使用默认
        os.environ
        {
            http_proxy:http://root:woshiniba@192.168.11.11:9999/
            https_proxy:http://192.168.11.11:9999/
        }
    方式二:使用自定义下载中间件

    def to_bytes(text, encoding=None, errors='strict'):
        if isinstance(text, bytes):
            return text
        if not isinstance(text, six.string_types):
            raise TypeError('to_bytes must receive a unicode, str or bytes '
                            'object, got %s' % type(text).__name__)
        if encoding is None:
            encoding = 'utf-8'
        return text.encode(encoding, errors)

    class ProxyMiddleware(object):
        def process_request(self, request, spider):
            PROXIES = [
                {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
            ]
            proxy = random.choice(PROXIES)
            if proxy['user_pass'] is not None:
                request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                encoded_user_pass = base64.encodestring(to_bytes(proxy['user_pass']))
                request.headers['Proxy-Authorization'] = to_bytes('Basic ' + encoded_user_pass)
                print "**************ProxyMiddleware have pass************" + proxy['ip_port']
            else:
                print "**************ProxyMiddleware no pass************" + proxy['ip_port']
                request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])

    DOWNLOADER_MIDDLEWARES = {
       'step8_king.middlewares.ProxyMiddleware': 500,
    }

"""

"""
20. Https访问
    Https访问时有两种情况:
    1. 要爬取网站使用的可信任证书(默认支持)
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"

    2. 要爬取网站使用的自定义证书
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"

        # https.py
        from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
        from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)

        class MySSLFactory(ScrapyClientContextFactory):
            def getCertificateOptions(self):
                from OpenSSL import crypto
                v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                return CertificateOptions(
                    privateKey=v1,  # pKey对象
                    certificate=v2,  # X509对象
                    verify=False,
                    method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                )
    其他:
        相关类
            scrapy.core.downloader.handlers.http.HttpDownloadHandler
            scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
            scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
        相关配置
            DOWNLOADER_HTTPCLIENTFACTORY
            DOWNLOADER_CLIENTCONTEXTFACTORY

"""



"""
21. 爬虫中间件
    class SpiderMiddleware(object):

        def process_spider_input(self,response, spider):
            '''
            下载完成,执行,然后交给parse处理
            :param response: 
            :param spider: 
            :return: 
            '''
            pass

        def process_spider_output(self,response, result, spider):
            '''
            spider处理完成,返回时调用
            :param response:
            :param result:
            :param spider:
            :return: 必须返回包含 Request 或 Item 对象的可迭代对象(iterable)
            '''
            return result

        def process_spider_exception(self,response, exception, spider):
            '''
            异常调用
            :param response:
            :param exception:
            :param spider:
            :return: None,继续交给后续中间件处理异常;含 Response 或 Item 的可迭代对象(iterable),交给调度器或pipeline
            '''
            return None


        def process_start_requests(self,start_requests, spider):
            '''
            爬虫启动时调用
            :param start_requests:
            :param spider:
            :return: 包含 Request 对象的可迭代对象
            '''
            return start_requests

    内置爬虫中间件:
        'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
        'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
        'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
        'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
        'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,

"""
# from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
   # 'step8_king.middlewares.SpiderMiddleware': 543,
}


"""
22. 下载中间件
    class DownMiddleware1(object):
        def process_request(self, request, spider):
            '''
            请求需要被下载时,经过所有下载器中间件的process_request调用
            :param request:
            :param spider:
            :return:
                None,继续后续中间件去下载;
                Response对象,停止process_request的执行,开始执行process_response
                Request对象,停止中间件的执行,将Request重新调度器
                raise IgnoreRequest异常,停止process_request的执行,开始执行process_exception
            '''
            pass



        def process_response(self, request, response, spider):
            '''
            spider处理完成,返回时调用
            :param response:
            :param result:
            :param spider:
            :return:
                Response 对象:转交给其他中间件process_response
                Request 对象:停止中间件,request会被重新调度下载
                raise IgnoreRequest 异常:调用Request.errback
            '''
            print('response1')
            return response

        def process_exception(self, request, exception, spider):
            '''
            当下载处理器(download handler)或 process_request() (下载中间件)抛出异常
            :param response:
            :param exception:
            :param spider:
            :return:
                None:继续交给后续中间件处理异常;
                Response对象:停止后续process_exception方法
                Request对象:停止中间件,request将会被重新调用下载
            '''
            return None


    默认下载中间件
    {
        'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
        'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
        'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
        'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
        'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
        'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
        'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
        'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
        'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
        'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
        'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
        'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
        'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
        'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
    }

"""
# from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    'step8_king.middlewares.DownMiddleware1': 100,
#    'step8_king.middlewares.DownMiddleware2': 500,
# }

settings 

九.TinyScrapy


#!/usr/bin/env python
# -*- coding:utf-8 -*-
import types
from twisted.internet import defer
from twisted.web.client import getPage
from twisted.internet import reactor



class Request(object):
    def __init__(self, url, callback):
        self.url = url
        self.callback = callback
        self.priority = 0


class HttpResponse(object):
    def __init__(self, content, request):
        self.content = content
        self.request = request


class ChouTiSpider(object):

    def start_requests(self):
        url_list = ['http://www.cnblogs.com/', 'http://www.bing.com']
        for url in url_list:
            yield Request(url=url, callback=self.parse)

    def parse(self, response):
        print(response.request.url)
        # yield Request(url="http://www.baidu.com", callback=self.parse)




from queue import Queue
Q = Queue()


class CallLaterOnce(object):
    def __init__(self, func, *a, **kw):
        self._func = func
        self._a = a
        self._kw = kw
        self._call = None

    def schedule(self, delay=0):
        if self._call is None:
            self._call = reactor.callLater(delay, self)

    def cancel(self):
        if self._call:
            self._call.cancel()

    def __call__(self):
        self._call = None
        return self._func(*self._a, **self._kw)


class Engine(object):
    def __init__(self):
        self.nextcall = None
        self.crawlling = []
        self.max = 5
        self._closewait = None

    def get_response(self,content, request):
        response = HttpResponse(content, request)
        gen = request.callback(response)
        if isinstance(gen, types.GeneratorType):
            for req in gen:
                req.priority = request.priority + 1
                Q.put(req)


    def rm_crawlling(self,response,d):
        self.crawlling.remove(d)

    def _next_request(self,spider):
        if Q.qsize() == 0 and len(self.crawlling) == 0:
            self._closewait.callback(None)

        if len(self.crawlling) >= 5:
            return
        while len(self.crawlling) < 5:
            try:
                req = Q.get(block=False)
            except Exception as e:
                req = None
            if not req:
                return
            d = getPage(req.url.encode('utf-8'))
            self.crawlling.append(d)
            d.addCallback(self.get_response, req)
            d.addCallback(self.rm_crawlling,d)
            d.addCallback(lambda _: self.nextcall.schedule())


    @defer.inlineCallbacks
    def crawl(self):
        spider = ChouTiSpider()
        start_requests = iter(spider.start_requests())
        flag = True
        while flag:
            try:
                req = next(start_requests)
                Q.put(req)
            except StopIteration as e:
                flag = False

        self.nextcall = CallLaterOnce(self._next_request,spider)
        self.nextcall.schedule()

        self._closewait = defer.Deferred()
        yield self._closewait

    @defer.inlineCallbacks
    def pp(self):
        yield self.crawl()

_active = set()
obj = Engine()
d = obj.crawl()
_active.add(d)

li = defer.DeferredList(_active)
li.addBoth(lambda _,*a,**kw: reactor.stop())

reactor.run()

参考版


 

十一 Custom extensions

自定义扩展(与django的信号类似)
    1、django的信号是django是预留的扩展,信号一旦被触发,相应的功能就会执行
    2、scrapy自定义扩展的好处是可以在任意我们想要的位置添加功能,而其他组件中提供的功能只能在规定的位置执行


#1、在与settings同级目录下新建一个文件,文件名可以为extentions.py,内容如下
from scrapy import signals


class MyExtension(object):
    def __init__(self, value):
        self.value = value

    @classmethod
    def from_crawler(cls, crawler):
        val = crawler.settings.getint('MMMM')
        obj = cls(val)

        crawler.signals.connect(obj.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(obj.spider_closed, signal=signals.spider_closed)

        return obj

    def spider_opened(self, spider):
        print('=============>open')

    def spider_closed(self, spider):
        print('=============>close')

#2、配置生效
EXTENSIONS = {
    "Amazon.extentions.MyExtension":200
}


    b、GET requests with parameters-----》》params
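Query-string parameters are passed to requests.get through the params argument; a minimal hedged sketch (the keyword and URL are only examples):

import requests

response = requests.get('https://www.baidu.com/s',
                        params={'wd': 'python'},           # urlencoded into the final URL
                        headers={'User-Agent': 'Mozilla/5.0'})
print(response.url)             # the final URL contains ?wd=python
print(response.status_code)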

十、 Crawling Amazon product information


1、
scrapy startproject Amazon
cd Amazon
scrapy genspider spider_goods www.amazon.cn

2、settings.py
ROBOTSTXT_OBEY = False
#请求头
DEFAULT_REQUEST_HEADERS = {
    'Referer':'https://www.amazon.cn/',
    'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.75 Safari/537.36'
}
#打开注释
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 0
HTTPCACHE_DIR = 'httpcache'
HTTPCACHE_IGNORE_HTTP_CODES = []
HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

3、items.py
class GoodsItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    #商品名字
    goods_name = scrapy.Field()
    #价钱
    goods_price = scrapy.Field()
    #配送方式
    delivery_method=scrapy.Field()

4、spider_goods.py
# -*- coding: utf-8 -*-
import scrapy

from Amazon.items import  GoodsItem
from scrapy.http import Request
from urllib.parse import urlencode

class SpiderGoodsSpider(scrapy.Spider):
    name = 'spider_goods'
    allowed_domains = ['www.amazon.cn']
    # start_urls = ['http://www.amazon.cn/']


    def __init__(self,keyword=None,*args,**kwargs):
        super(SpiderGoodsSpider,self).__init__(*args,**kwargs)
        self.keyword=keyword

    def start_requests(self):
        url='https://www.amazon.cn/s/ref=nb_sb_noss_1?'
        paramas={
            '__mk_zh_CN': '亚马逊网站',
            'url': 'search - alias = aps',
            'field-keywords': self.keyword
        }
        url=url+urlencode(paramas,encoding='utf-8')
        yield Request(url,callback=self.parse_index)


    def parse_index(self, response):
        print('解析索引页:%s' %response.url)

        urls=response.xpath('//*[contains(@id,"result_")]/div/div[3]/div[1]/a/@href').extract()
        for url in urls:
            yield Request(url,callback=self.parse_detail)

        next_url=response.urljoin(response.xpath('//*[@id="pagnNextLink"]/@href').extract_first())
        print('下一页的url',next_url)
        yield Request(next_url,callback=self.parse_index)

    def parse_detail(self,response):
        print('解析详情页:%s' %(response.url))

        item=GoodsItem()
        # 商品名字
        item['goods_name'] = response.xpath('//*[@id="productTitle"]/text()').extract_first().strip()
        # 价钱
        item['goods_price'] = response.xpath('//*[@id="priceblock_ourprice"]/text()').extract_first().strip()
        # 配送方式
        item['delivery_method'] = ''.join(response.xpath('//*[@id="ddmMerchantMessage"]//text()').extract())
        return item

5、自定义pipelines
#sql.py
import pymysql
import settings


MYSQL_HOST=settings.MYSQL_HOST
MYSQL_PORT=settings.MYSQL_PORT
MYSQL_USER=settings.MYSQL_USER
MYSQL_PWD=settings.MYSQL_PWD
MYSQL_DB=settings.MYSQL_DB

conn=pymysql.connect(
    host=MYSQL_HOST,
    port=int(MYSQL_PORT),
    user=MYSQL_USER,
    password=MYSQL_PWD,
    db=MYSQL_DB,
    charset='utf8'
)
cursor=conn.cursor()

class Mysql(object):
    @staticmethod
    def insert_tables_goods(goods_name,goods_price,deliver_mode):
        sql='insert into goods(goods_name,goods_price,delivery_method) values(%s,%s,%s)'
        cursor.execute(sql,args=(goods_name,goods_price,deliver_mode))
        conn.commit()

    @staticmethod
    def is_repeat(goods_name):
        sql='select count(1) from goods where goods_name=%s'
        cursor.execute(sql,args=(goods_name,))
        if cursor.fetchone()[0] >= 1:
            return True

if __name__ == '__main__':
    cursor.execute('select * from goods;')
    print(cursor.fetchall())


#pipelines.py
from Amazon.mysqlpipelines.sql import Mysql


class AmazonPipeline(object):
    def process_item(self, item, spider):
        goods_name=item['goods_name']
        goods_price=item['goods_price']
        delivery_mode=item['delivery_method']
        if not Mysql.is_repeat(goods_name):
            Mysql.insert_tables_goods(goods_name,goods_price,delivery_mode)



6、创建数据库表
create database amazon charset utf8;
create table goods(
    id int primary key auto_increment,
    goods_name char(30),
    goods_price char(20),
    delivery_method varchar(50)
);

7、settings.py
MYSQL_HOST='localhost'
MYSQL_PORT='3306'
MYSQL_USER='root'
MYSQL_PWD='123'
MYSQL_DB='amazon'


#数字代表优先级程度(1-1000随意设置,数值越低,组件的优先级越高)
ITEM_PIPELINES = {
   'Amazon.mysqlpipelines.pipelines.AmazonPipeline': 1,
}


#8、在项目目录下新建:entrypoint.py
from scrapy.cmdline import execute
execute(['scrapy', 'crawl', 'spider_goods','-a','keyword=iphone8'])


 

 

 

 

十二 settings.py

#==>第一部分:基本配置<===
#1、项目名称,默认的USER_AGENT由它来构成,也作为日志记录的日志名
BOT_NAME = 'Amazon'

#2、爬虫应用路径
SPIDER_MODULES = ['Amazon.spiders']
NEWSPIDER_MODULE = 'Amazon.spiders'

#3、客户端User-Agent请求头
#USER_AGENT = 'Amazon (+http://www.yourdomain.com)'

#4、是否遵循爬虫协议
# Obey robots.txt rules
ROBOTSTXT_OBEY = False

#5、是否支持cookie,cookiejar进行操作cookie,默认开启
#COOKIES_ENABLED = False

#6、Telnet用于查看当前爬虫的信息,操作爬虫等...使用telnet ip port ,然后通过命令操作
#TELNETCONSOLE_ENABLED = False
#TELNETCONSOLE_HOST = '127.0.0.1'
#TELNETCONSOLE_PORT = [6023,]

#7、Scrapy发送HTTP请求默认使用的请求头
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}



#===>第二部分:并发与延迟<===
#1、下载器总共最大处理的并发请求数,默认值16
#CONCURRENT_REQUESTS = 32

#2、每个域名能够被执行的最大并发请求数目,默认值8
#CONCURRENT_REQUESTS_PER_DOMAIN = 16

#3、能够被单个IP处理的并发请求数,默认值0,代表无限制,需要注意两点
#I、如果不为零,那CONCURRENT_REQUESTS_PER_DOMAIN将被忽略,即并发数的限制是按照每个IP来计算,而不是每个域名
#II、该设置也影响DOWNLOAD_DELAY,如果该值不为零,那么DOWNLOAD_DELAY下载延迟是限制每个IP而不是每个域
#CONCURRENT_REQUESTS_PER_IP = 16

#4、如果没有开启智能限速,这个值就代表一个规定死的值,代表对同一网址延迟请求的秒数
#DOWNLOAD_DELAY = 3


#===>第三部分:智能限速/自动节流:AutoThrottle extension<===
#一:介绍
from scrapy.contrib.throttle import AutoThrottle #http://scrapy.readthedocs.io/en/latest/topics/autothrottle.html#topics-autothrottle
设置目标:
1、比使用默认的下载延迟对站点更好
2、自动调整scrapy到最佳的爬取速度,所以用户无需自己调整下载延迟到最佳状态。用户只需要定义允许最大并发的请求,剩下的事情由该扩展组件自动完成


#二:如何实现?
在Scrapy中,下载延迟是通过计算建立TCP连接到接收到HTTP包头(header)之间的时间来测量的。
注意,由于Scrapy可能在忙着处理spider的回调函数或者无法下载,因此在合作的多任务环境下准确测量这些延迟是十分苦难的。 不过,这些延迟仍然是对Scrapy(甚至是服务器)繁忙程度的合理测量,而这扩展就是以此为前提进行编写的。


#三:限速算法
自动限速算法基于以下规则调整下载延迟
#1、spiders开始时的下载延迟是基于AUTOTHROTTLE_START_DELAY的值
#2、当收到一个response,对目标站点的下载延迟=收到响应的延迟时间/AUTOTHROTTLE_TARGET_CONCURRENCY
#3、下一次请求的下载延迟就被设置成:对目标站点下载延迟时间和过去的下载延迟时间的平均值
#4、没有达到200个response则不允许降低延迟
#5、下载延迟不能变的比DOWNLOAD_DELAY更低或者比AUTOTHROTTLE_MAX_DELAY更高

#四:配置使用
#开启True,默认False
AUTOTHROTTLE_ENABLED = True
#起始的延迟
AUTOTHROTTLE_START_DELAY = 5
#最小延迟
DOWNLOAD_DELAY = 3
#最大延迟
AUTOTHROTTLE_MAX_DELAY = 10
#每秒并发请求数的平均值,不能高于 CONCURRENT_REQUESTS_PER_DOMAIN或CONCURRENT_REQUESTS_PER_IP,调高了则吞吐量增大强奸目标站点,调低了则对目标站点更加”礼貌“
#每个特定的时间点,scrapy并发请求的数目都可能高于或低于该值,这是爬虫视图达到的建议值而不是硬限制
AUTOTHROTTLE_TARGET_CONCURRENCY = 16.0
#调试
AUTOTHROTTLE_DEBUG = True
CONCURRENT_REQUESTS_PER_DOMAIN = 16
CONCURRENT_REQUESTS_PER_IP = 16



#===>第四部分:爬取深度与爬取方式<===
#1、爬虫允许的最大深度,可以通过meta查看当前深度;0表示无深度
# DEPTH_LIMIT = 3

#2、爬取时,0表示深度优先Lifo(默认);1表示广度优先FiFo

# 后进先出,深度优先
# DEPTH_PRIORITY = 0
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
# 先进先出,广度优先

# DEPTH_PRIORITY = 1
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'


#3、调度器队列
# SCHEDULER = 'scrapy.core.scheduler.Scheduler'
# from scrapy.core.scheduler import Scheduler

#4、访问URL去重
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'



#===>第五部分:中间件、Pipelines、扩展<===
#1、Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'Amazon.middlewares.AmazonSpiderMiddleware': 543,
#}

#2、Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
   # 'Amazon.middlewares.DownMiddleware1': 543,
}

#3、Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

#4、Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   # 'Amazon.pipelines.CustomPipeline': 200,
}



#===>第六部分:缓存<===
"""
1. 启用缓存
    目的用于将已经发送的请求或相应缓存下来,以便以后使用

    from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
    from scrapy.extensions.httpcache import DummyPolicy
    from scrapy.extensions.httpcache import FilesystemCacheStorage
"""
# 是否启用缓存策略
# HTTPCACHE_ENABLED = True

# 缓存策略:所有请求均缓存,下次在请求直接访问原来的缓存即可
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
# 缓存策略:根据Http响应头:Cache-Control、Last-Modified 等进行缓存的策略
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"

# 缓存超时时间
# HTTPCACHE_EXPIRATION_SECS = 0

# 缓存保存路径
# HTTPCACHE_DIR = 'httpcache'

# 缓存忽略的Http状态码
# HTTPCACHE_IGNORE_HTTP_CODES = []

# 缓存存储的插件
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'


#===>第七部分:线程池<===
REACTOR_THREADPOOL_MAXSIZE = 10

#Default: 10
#scrapy基于twisted异步IO框架,downloader是多线程的,线程数是Twisted线程池的默认大小(The maximum limit for Twisted Reactor thread pool size.)

#关于twisted线程池:
http://twistedmatrix.com/documents/10.1.0/core/howto/threading.html

#线程池实现:twisted.python.threadpool.ThreadPool
twisted调整线程池大小:
from twisted.internet import reactor
reactor.suggestThreadPoolSize(30)

#scrapy相关源码:
D:\python3.6\Lib\site-packages\scrapy\crawler.py

#补充:
windows下查看进程内线程数的工具:
    https://docs.microsoft.com/zh-cn/sysinternals/downloads/pslist
    或
    https://pan.baidu.com/s/1jJ0pMaM

    命令为:
    pslist |findstr python

linux下:top -p 进程id


#===>第八部分:其他默认配置参考<===
D:\python3.6\Lib\site-packages\scrapy\settings\default_settings.py

settings.py

    c、GET requests with parameters-----》》headers


 

#通常我们在发送请求时都需要带上请求头,请求头是将自身伪装成浏览器的关键,常见的有用的请求头如下
Host
Referer #大型网站通常都会根据该参数判断请求的来源
User-Agent #客户端
Cookie #Cookie信息虽然包含在请求头里,但requests模块有单独的参数来处理他,headers={}内就不要放它了
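A hedged sketch of a request carrying such headers (the header values are placeholders and should normally be copied from your own browser):

import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.75 Safari/537.36',
    'Referer': 'https://www.baidu.com/',
}
response = requests.get('https://www.baidu.com/', headers=headers)
print(response.status_code)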

     d、GET requests with parameters-----》》cookies
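Cookies can be passed through the dedicated cookies parameter instead of being stuffed into headers; a minimal sketch (the cookie names and values are made up):

import requests

response = requests.get('http://httpbin.org/cookies',
                        cookies={'sessionid': 'xxxx', 'token': 'yyyy'})
print(response.json())          # httpbin echoes back the cookies it received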

三、基于post请求

     a、介绍

#GET请求
HTTP默认的请求方法就是GET
     * 没有请求体
     * 数据必须在1K之内!
     * GET请求数据会暴露在浏览器的地址栏中

GET请求常用的操作:
       1. 在浏览器的地址栏中直接给出URL,那么就一定是GET请求
       2. 点击页面上的超链接也一定是GET请求
       3. 提交表单时,表单默认使用GET请求,但可以设置为POST


#POST请求
(1). 数据不会出现在地址栏中
(2). 数据的大小没有上限
(3). 有请求体
(4). 请求体中如果存在中文,会使用URL编码!


#!!!requests.post()用法与requests.get()完全一致,特殊的是requests.post()有一个data参数,用来存放请求体数据

    b、Sending a POST request to simulate a browser login
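A hedged sketch of such a login: requests.post sends the form fields through the data parameter, and a Session keeps the returned cookies for later requests. The URL and field names below are placeholders, every site differs.

import requests

login_url = 'https://www.example.com/login'         # placeholder URL
form_data = {
    'username': 'your_name',                        # placeholder field names
    'password': 'your_password',
}
session = requests.Session()                        # keeps cookies between requests
response = session.post(login_url, data=form_data,
                        headers={'User-Agent': 'Mozilla/5.0'})
print(response.status_code)
# requests made with `session` afterwards carry the login cookies automatically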

四、响应Response

    a、response属性


import requests
respone=requests.get('http://www.jianshu.com')
# respone属性
print(respone.text)
print(respone.content)

print(respone.status_code)
print(respone.headers)
print(respone.cookies)
print(respone.cookies.get_dict())
print(respone.cookies.items())

print(respone.url)
print(respone.history)

print(respone.encoding)

#关闭:response.close()
from contextlib import closing
with closing(requests.get('xxx',stream=True)) as response:
    for line in response.iter_content():
    pass


    b、Encoding issues

#编码问题
import requests
response=requests.get('http://www.autohome.com/news')
# response.encoding='gbk' #汽车之家网站返回的页面内容为gb2312编码的,而requests的默认编码为ISO-8859-1,如果不设置成gbk则中文乱码
print(response.text)

    c、获取二进制

#stream参数:一点一点的取,比如下载视频时,如果视频100G,用response.content然后一下子写到文件中是不合理的

import requests

response=requests.get('https://gss3.baidu.com/6LZ0ej3k1Qd3ote6lo7D0j9wehsv/tieba-smallvideo-transcode/1767502_56ec685f9c7ec542eeaf6eac93a65dc7_6fe25cd1347c_3.mp4',
                      stream=True)

with open('b.mp4','wb') as f:
    for line in response.iter_content():
        f.write(line)

    d、解析json

#解析json
import requests
response=requests.get('http://httpbin.org/get')

import json
res1=json.loads(response.text) #太麻烦

res2=response.json() #直接获取json数据


print(res1 == res2) #True

五、selenium模块

    a、介绍

selenium最初是一个自动化测试工具,而爬虫中使用它主要是为了解决requests无法直接执行JavaScript代码的问题

selenium本质是通过驱动浏览器,完全模拟浏览器的操作,比如跳转、输入、点击、下拉等,来拿到网页渲染之后的结果,可支持多种浏览器

from selenium import webdriver
browser=webdriver.Chrome()
browser=webdriver.Firefox()
browser=webdriver.PhantomJS()
browser=webdriver.Safari()
browser=webdriver.Edge()

   b、安装

#安装:selenium+chromedriver
pip3 install selenium
下载chromdriver.exe放到python安装路径的scripts目录中即可,注意最新版本是2.29,并非2.9
国内镜像网站地址:http://npm.taobao.org/mirrors/chromedriver/2.29/
最新的版本去官网找:https://sites.google.com/a/chromium.org/chromedriver/downloads

#注意:
selenium3默认支持的webdriver是Firfox,而Firefox需要安装geckodriver
下载链接:https://github.com/mozilla/geckodriver/releases

六、选择器

   a、基本选取


from selenium import webdriver
from selenium.webdriver import ActionChains
from selenium.webdriver.common.by import By #按照什么方式查找,By.ID,By.CSS_SELECTOR
from selenium.webdriver.common.keys import Keys #键盘按键操作
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait #等待页面加载某些元素
import time

driver=webdriver.Chrome()
driver.get('https://www.baidu.com')
wait=WebDriverWait(driver,10)

try:
    #===============所有方法===================
    # 1、find_element_by_id
    # 2、find_element_by_link_text
    # 3、find_element_by_partial_link_text
    # 4、find_element_by_tag_name
    # 5、find_element_by_class_name
    # 6、find_element_by_name
    # 7、find_element_by_css_selector
    # 8、find_element_by_xpath
    # 强调:
    # 1、上述均可以改写成find_element(By.ID,'kw')的形式
    # 2、find_elements_by_xxx的形式是查找到多个元素,结果为列表

    #===============示范用法===================
    # 1、find_element_by_id
    print(driver.find_element_by_id('kw'))

    # 2、find_element_by_link_text
    # login=driver.find_element_by_link_text('登录')
    # login.click()

    # 3、find_element_by_partial_link_text
    login=driver.find_elements_by_partial_link_text('录')[0]
    login.click()

    # 4、find_element_by_tag_name
    print(driver.find_element_by_tag_name('a'))

    # 5、find_element_by_class_name
    button=wait.until(EC.element_to_be_clickable((By.CLASS_NAME,'tang-pass-footerBarULogin')))
    button.click()

    # 6、find_element_by_name
    input_user=wait.until(EC.presence_of_element_located((By.NAME,'userName')))
    input_pwd=wait.until(EC.presence_of_element_located((By.NAME,'password')))
    commit=wait.until(EC.element_to_be_clickable((By.ID,'TANGRAM__PSP_10__submit')))

    input_user.send_keys('18611453110')
    input_pwd.send_keys('lhf@094573')
    commit.click()

    # 7、find_element_by_css_selector
    driver.find_element_by_css_selector('#kw')

    # 8、find_element_by_xpath

    time.sleep(5)

finally:
    driver.close()


   b、xpath
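A minimal hedged sketch of locating elements by XPath with selenium (the expressions target Baidu's search box and button purely as an example):

from selenium import webdriver
import time

driver = webdriver.Chrome()
try:
    driver.get('https://www.baidu.com')
    # locate by attribute -- both expressions are illustrative
    input_tag = driver.find_element_by_xpath('//input[@id="kw"]')
    input_tag.send_keys('selenium xpath')
    driver.find_element_by_xpath('//input[@id="su"]').click()
    time.sleep(3)
finally:
    driver.close()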

   c、获取标签属性

from selenium import webdriver
from selenium.webdriver import ActionChains
from selenium.webdriver.common.by import By #按照什么方式查找,By.ID,By.CSS_SELECTOR
from selenium.webdriver.common.keys import Keys #键盘按键操作
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait #等待页面加载某些元素

browser=webdriver.Chrome()

browser.get('https://www.amazon.cn/')

wait=WebDriverWait(browser,10)
wait.until(EC.presence_of_element_located((By.ID,'cc-lm-tcgShowImgContainer')))

tag=browser.find_element(By.CSS_SELECTOR,'#cc-lm-tcgShowImgContainer img')

#获取标签属性,
print(tag.get_attribute('src'))


#获取标签ID,位置,名称,大小(了解)
print(tag.id)
print(tag.location)
print(tag.tag_name)
print(tag.size)


browser.close()

七、Waiting for elements to load

      a、selenium only drives the browser, and the browser needs time to render a page (running its CSS and JS); some elements may appear only after a delay, so to make sure an element can be found you have to wait.

      b、There are two kinds of waits (a short sketch follows this list):

                implicit wait: set before browser.get('xxx'), and applies to every element lookup

         explicit wait: set after browser.get('xxx'), and applies only to a specific element
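A minimal sketch contrasting the two kinds of wait (the element id 'kw' is Baidu's search box, used only as an example):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait

driver = webdriver.Chrome()
try:
    # implicit wait: set once, before get(), applies to every element lookup
    driver.implicitly_wait(10)
    driver.get('https://www.baidu.com')

    # explicit wait: set after get(), applies only to the element waited for
    wait = WebDriverWait(driver, 10)
    kw = wait.until(EC.presence_of_element_located((By.ID, 'kw')))
    print(kw.tag_name)
finally:
    driver.close()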

八、爬虫之解析库----re,beautifulsoup、pyquery

      a、介绍

Beautiful Soup 是一个可以从HTML或XML文件中提取数据的Python库.它能够通过你喜欢的转换器实现惯用的文档导航,查找,
修改文档的方式.Beautiful Soup会帮你节省数小时甚至数天的工作时间.你可能在寻找 Beautiful Soup3 的文档,Beautiful Soup 3
 目前已经停止开发,官网推荐在现在的项目中使用Beautiful Soup 4, 移植到BS4

     b、安装

安装:Beautifulsoup4
      pip3 install beautifulsoup4
安装解释器:
      Beautiful Soup支持Python标准库中的HTML解析器,还支持一些第三方的解析器,其中一个是 lxml .根据操作系统不同,可以选择下列方法来安装lxml:

九、Traversing the document tree

#遍历文档树:即直接通过标签名字选择,特点是选择速度快,但如果存在多个相同的标签则只返回第一个
#1、用法
#2、获取标签的名称
#3、获取标签的属性
#4、获取标签的内容
#5、嵌套选择
#6、子节点、子孙节点
#7、父节点、祖先节点
#8、兄弟节点


#遍历文档树:即直接通过标签名字选择,特点是选择速度快,但如果存在多个相同的标签则只返回第一个
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p id="my p" class="title"><b id="bbb" class="boldest">The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""

#1、用法
from bs4 import BeautifulSoup
soup=BeautifulSoup(html_doc,'lxml')
# soup=BeautifulSoup(open('a.html'),'lxml')

print(soup.p) #存在多个相同的标签则只返回第一个
print(soup.a) #存在多个相同的标签则只返回第一个

#2、获取标签的名称
print(soup.p.name)

#3、获取标签的属性
print(soup.p.attrs)

#4、获取标签的内容
print(soup.p.string) # p下的文本只有一个时,取到,否则为None
print(soup.p.strings) #拿到一个生成器对象, 取到p下所有的文本内容
print(soup.p.text) #取到p下所有的文本内容
for line in soup.stripped_strings: #去掉空白
    print(line)


'''
如果tag包含了多个子节点,tag就无法确定 .string 方法应该调用哪个子节点的内容, .string 的输出结果是 None,如果只有一个子节点那么就输出该子节点的文本,比如下面的这种结构,soup.p.string 返回为None,但soup.p.strings就可以找到所有文本
<p id='list-1'>
    哈哈哈哈
    <a class='sss'>

            <h1>aaaa</h1>

    </a>
    <b>bbbbb</b>
</p>
'''

#5、嵌套选择
print(soup.head.title.string)
print(soup.body.a.string)


#6、子节点、子孙节点
print(soup.p.contents) #p下所有子节点
print(soup.p.children) #得到一个迭代器,包含p下所有子节点

for i,child in enumerate(soup.p.children):
    print(i,child)

print(soup.p.descendants) #获取子孙节点,p下所有的标签都会选择出来
for i,child in enumerate(soup.p.descendants):
    print(i,child)

#7、父节点、祖先节点
print(soup.a.parent) #获取a标签的父节点
print(soup.a.parents) #找到a标签所有的祖先节点,父亲的父亲,父亲的父亲的父亲...


#8、兄弟节点
print('=====>')
print(soup.a.next_sibling) #下一个兄弟
print(soup.a.previous_sibling) #上一个兄弟

print(list(soup.a.next_siblings)) #下面的兄弟们=>生成器对象
print(soup.a.previous_siblings) #上面的兄弟们=>生成器对象

用法


十、Searching the document tree

    a、The five kinds of filters

#1、五种过滤器: 字符串、正则表达式、列表、True、方法
#1.1、字符串:即标签名
print(soup.find_all('b'))

#1.2、正则表达式
import re
print(soup.find_all(re.compile('^b'))) #找出b开头的标签,结果有body和b标签

#1.3、列表:如果传入列表参数,Beautiful Soup会将与列表中任一元素匹配的内容返回.下面代码找到文档中所有<a>标签和<b>标签:
print(soup.find_all(['a','b']))

#1.4、True:可以匹配任何值,下面代码查找到所有的tag,但是不会返回字符串节点
print(soup.find_all(True))
for tag in soup.find_all(True):
    print(tag.name)

#1.5、方法:如果没有合适过滤器,那么还可以定义一个方法,方法只接受一个元素参数 ,如果这个方法返回 True 表示当前元素匹配并且被找到,如果不是则反回 False
def has_class_but_no_id(tag):
    return tag.has_attr('class') and not tag.has_attr('id')

     b、find_all


#2、find_all( name , attrs , recursive , text , **kwargs )
#2.1、name: 搜索name参数的值可以使任一类型的 过滤器 ,字符窜,正则表达式,列表,方法或是 True .
print(soup.find_all(name=re.compile('^t')))

#2.2、keyword: key=value的形式,value可以是过滤器:字符串 , 正则表达式 , 列表, True .
print(soup.find_all(id=re.compile('my')))
print(soup.find_all(href=re.compile('lacie'),id=re.compile('d'))) #注意类要用class_
print(soup.find_all(id=True)) #查找有id属性的标签

# 有些tag属性在搜索不能使用,比如HTML5中的 data-* 属性:
data_soup = BeautifulSoup('<div data-foo="value">foo!</div>','lxml')
# data_soup.find_all(data-foo="value") #报错:SyntaxError: keyword can't be an expression
# 但是可以通过 find_all() 方法的 attrs 参数定义一个字典参数来搜索包含特殊属性的tag:
print(data_soup.find_all(attrs={"data-foo": "value"}))
# [<div data-foo="value">foo!</div>]

#2.3、按照类名查找,注意关键字是class_,class_=value,value可以是五种选择器之一
print(soup.find_all('a',class_='sister')) #查找类为sister的a标签
print(soup.find_all('a',class_='sister ssss')) #查找类为sister和sss的a标签,顺序错误也匹配不成功
print(soup.find_all(class_=re.compile('^sis'))) #查找类为sister的所有标签

#2.4、attrs
print(soup.find_all('p',attrs={'class':'story'}))

#2.5、text: 值可以是:字符,列表,True,正则
print(soup.find_all(text='Elsie'))
print(soup.find_all('a',text='Elsie'))

#2.6、limit参数:如果文档树很大那么搜索会很慢.如果我们不需要全部结果,可以使用 limit 参数限制返回结果的数量.效果与SQL中的limit关键字类似,当搜索到的结果数量达到 limit 的限制时,就停止搜索返回结果
print(soup.find_all('a',limit=2))

#2.7、recursive:调用tag的 find_all() 方法时,Beautiful Soup会检索当前tag的所有子孙节点,如果只想搜索tag的直接子节点,可以使用参数 recursive=False .
print(soup.html.find_all('a'))
print(soup.html.find_all('a',recursive=False))

'''
像调用 find_all() 一样调用tag
find_all() 几乎是Beautiful Soup中最常用的搜索方法,所以我们定义了它的简写方法. BeautifulSoup 对象和 tag 对象可以被当作一个方法来使用,这个方法的执行结果与调用这个对象的 find_all() 方法相同,下面两行代码是等价的:
soup.find_all("a")
soup("a")
这两行代码也是等价的:
soup.title.find_all(text=True)
soup.title(text=True)
'''