1. Simple project

```shell
pip install scrapy
scrapy startproject appdemo
```
2. Project code

a. Project layout

```
├── Dockerfile
├── README.md
├── appdemo
│   ├── __init__.py
│   ├── __pycache__
│   ├── items.py
│   ├── middlewares.py
│   ├── pipelines.py
│   ├── settings.py
│   └── spiders
│       ├── __init__.py
│       ├── __pycache__
│       └── book_spider.py
└── scrapy.cfg
```

b. The main code is book_spider.py

```python
import scrapy


class BookSpider(scrapy.Spider):
    name = "appdemo"
    start_urls = ["http://books.toscrape.com/"]

    def parse(self, response):
        # Each book on the listing page is an <article class="product_pod">
        for book in response.css("article.product_pod"):
            name = book.xpath("./h3/a/@title").extract_first()
            price = book.css("p.price_color::text").extract_first()
            yield {
                "name": name,
                "price": price,
            }
        # Follow the "next" pagination link, if present
        next_url = response.css("ul.pager li.next a::attr(href)").extract_first()
        if next_url:
            next_url = response.urljoin(next_url)
            yield scrapy.Request(next_url, callback=self.parse)
```

c. Dockerfile

```dockerfile
FROM python:3.5
RUN pip install scrapy
VOLUME [ "/data" ]
WORKDIR /myapp
COPY . /myapp
ENTRYPOINT [ "scrapy", "crawl", "appdemo", "-o", "/data/appdemo.csv" ]
```

Note: for simplicity this uses the python:3.5 base image; the alpine image has package-dependency problems.
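The spider's pagination step relies on `response.urljoin`, which resolves the relative `href` of the "next" link against the URL of the current page. A minimal sketch of that behavior using only the standard library's `urllib.parse.urljoin` (the page URL and relative link below are illustrative values, not fetched from the site):

```python
from urllib.parse import urljoin

# Hypothetical current page URL and the relative "next" href
# a pagination link on books.toscrape.com might carry.
base = "http://books.toscrape.com/catalogue/page-1.html"
next_rel = "page-2.html"

# response.urljoin(next_rel) in Scrapy resolves the link the same way.
next_url = urljoin(base, next_rel)
print(next_url)  # http://books.toscrape.com/catalogue/page-2.html
```

This is why the spider can yield `scrapy.Request(next_url, ...)` directly: the joined URL is always absolute, regardless of how the site writes its pagination links.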
3. Run

a. Run from the command line

```shell
scrapy crawl appdemo -o myinfo.csv
```

b. Build and run with Docker

```shell
docker build -t myscrapy .
docker run -it -v $PWD/mydata:/data myscrapy
cat $PWD/mydata/appdemo.csv
```

c. Run directly from the Docker Hub image

```shell
docker run -it -v $PWD/mydata:/data dalongrong/scrapydockerdemo
```
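The `-o /data/appdemo.csv` feed export writes one row per yielded item, with a header derived from the dict keys (`name`, `price`). A hedged sketch of consuming that file with the standard library's `csv` module (the sample rows below are made up for illustration, not actual crawl output):

```python
import csv
import io

# Stand-in for the exported file; real output comes from
# cat $PWD/mydata/appdemo.csv after the container finishes.
sample = io.StringIO(
    "name,price\n"
    "A Light in the Attic,£51.77\n"
    "Tipping the Velvet,£53.74\n"
)

# DictReader maps each row to the spider's item keys.
books = list(csv.DictReader(sample))
print(books[0]["name"])   # A Light in the Attic
print(books[1]["price"])  # £53.74
```

Reading the feed back this way is a quick sanity check that the volume mount worked and the spider actually emitted items.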
4. References

https://docs.scrapy.org/en/latest/
https://github.com/rongfengliang/scrapydockerdemo