April 1, 2025

Xiaohongshu Data Scraping: Technical Principles and a Practical Guide

1. Introduction

As one of China's best-known social e-commerce platforms, Xiaohongshu hosts data of significant commercial and research value. This article walks through how to scrape Xiaohongshu data, covering the underlying techniques, concrete implementations, and points to watch out for.

2. Xiaohongshu's Anti-Scraping Mechanisms

2.1 Signature Mechanism

Xiaohongshu API requests must carry a set of signature parameters, chiefly:

  • X-s: the request signature
  • X-t: a millisecond timestamp
  • x-S-Common: a common signature blob
  • X-B3-Traceid: a trace ID

These parameters are generated dynamically by the site's front-end JavaScript, which makes them difficult to forge directly.
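
For orientation, a signed request ends up carrying headers shaped roughly like the sketch below (all values are placeholders, not real signatures):

signed_headers = {
    "X-s": "XYW_placeholder_signature",
    "X-t": "1712345678901",              # millisecond timestamp
    "x-S-Common": "base64-encoded-blob",
    "X-B3-Traceid": "a1b2c3d4e5f60718",
}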

2.2 Other Anti-Scraping Measures

  1. Cookie validation

    • A valid logged-in Cookie is required
    • The Cookie carries key parameters such as a1 and webId (a pre-flight check is sketched after this list)
  2. Request rate limiting

    • Per-IP request rate limits
    • Per-account request rate limits
  3. Behavior detection

    • Detection of anomalous request patterns
    • Browser fingerprint checks
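
For the cookie requirement in item 1, a crawler can pre-flight its cookie jar before issuing any API call; a minimal sketch, treating a1 and webId as the required keys:

REQUIRED_COOKIES = {"a1", "webId"}

def has_login_cookies(cookies: dict) -> bool:
    # True only if every cookie key the API expects is present
    return REQUIRED_COOKIES.issubset(cookies)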

3. Implementation Approaches

3.1 Browser Automation

Use a browser automation tool such as Playwright or Selenium and let the page's own JavaScript produce the signature parameters:

from playwright.async_api import async_playwright

async def get_signature(url, data):
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        page = await browser.new_page()
        
        # Load the Xiaohongshu home page so its scripts are available
        await page.goto("https://www.xiaohongshu.com")
        
        # Call the site's signature function exposed on window
        signature = await page.evaluate(
            "([url, data]) => window._webmsxyw(url,data)", 
            [url, data]
        )
        
        await browser.close()
        return signature
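
The object returned by window._webmsxyw is assumed here to expose the X-s and X-t fields named in section 2.1; a hypothetical usage sketch:

import asyncio

sig = asyncio.run(get_signature("/api/sns/web/v1/search/notes", {"keyword": "coffee"}))
headers = {"X-s": sig["X-s"], "X-t": str(sig["X-t"])}  # merge into request headers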

3.2 Reverse Engineering

Analyze the front-end code and re-implement the signature algorithm. The snippet below is a simplified sketch that only illustrates the shape of the output; the real algorithm is considerably more involved and may change as the site is updated:

import time
import hashlib
import base64

def generate_signature(url, data, a1, b1):
    # Millisecond timestamp
    timestamp = str(int(time.time() * 1000))

    # Simplified stand-in for the real signing logic:
    # concatenate the inputs and hash them
    sign_str = f"{url}{data}{timestamp}{a1}{b1}"
    sign_hash = hashlib.md5(sign_str.encode()).hexdigest()
    
    return {
        "X-s": sign_hash,
        "X-t": timestamp,
        "x-S-Common": base64.b64encode(sign_str.encode()).decode(),
        "X-B3-Traceid": f"{timestamp}-{sign_hash[:8]}"
    }
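
Calling the sketch: a1 is the cookie value mentioned in section 2.2, and b1 is a second client-side value; both are placeholders here:

headers = generate_signature(
    url="/api/sns/web/v1/search/notes",
    data='{"keyword": "coffee"}',
    a1="placeholder_a1_cookie",
    b1="placeholder_b1_value",
)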

3.3 Request Wrapper

import httpx
import json

class XiaoHongShuClient:
    def __init__(self):
        self.base_url = "https://edith.xiaohongshu.com"
        self.headers = {
            "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
            "Content-Type": "application/json",
            "Origin": "https://www.xiaohongshu.com",
            "Referer": "https://www.xiaohongshu.com"
        }
        
    async def request(self, method, url, data=None):
        # get_signature should delegate to one of the approaches from
        # section 3.1 or 3.2; it is not repeated here
        signature = await self.get_signature(url, data)
        self.headers.update(signature)
        
        async with httpx.AsyncClient() as client:
            response = await client.request(
                method=method,
                url=f"{self.base_url}{url}",
                headers=self.headers,
                json=data
            )
            return response.json()
            
    async def search_notes(self, keyword, page=1):
        """搜索笔记"""
        url = "/api/sns/web/v1/search/notes"
        data = {
            "keyword": keyword,
            "page": page,
            "page_size": 20,
            "sort": "general"
        }
        return await self.request("POST", url, data)
        
    async def get_note_detail(self, note_id):
        """获取笔记详情"""
        url = "/api/sns/web/v1/feed"
        data = {
            "source_note_id": note_id
        }
        return await self.request("POST", url, data)
        
    async def get_comments(self, note_id, cursor=""):
        """获取评论"""
        url = "/api/sns/web/v2/comment/page"
        data = {
            "note_id": note_id,
            "cursor": cursor,
            "top_comment_id": "",
            "image_formats": ["jpg", "webp", "avif"]
        }
        return await self.request("POST", url, data)
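
Assuming get_signature has been wired up (for example by delegating to the Playwright helper from section 3.1), driving the client looks like:

import asyncio

async def main():
    client = XiaoHongShuClient()
    results = await client.search_notes("coffee")  # first page of search results
    print(results)

asyncio.run(main())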

4. Login and Cookie Management

4.1 Login Methods

  1. QR-code login
import base64
from playwright.async_api import async_playwright

async def login_by_qrcode():
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        page = await browser.new_page()
        
        # Open the login page
        await page.goto("https://www.xiaohongshu.com/login")
        
        # Wait for the QR code to appear
        qrcode = await page.wait_for_selector(".qrcode-wrapper img")
        qrcode_src = await qrcode.get_attribute("src")
        
        # Save the QR code image (it is embedded as a base64 data URL)
        img_data = qrcode_src.split(",")[1]
        with open("qrcode.png", "wb") as f:
            f.write(base64.b64decode(img_data))
            
        # Wait for the user to scan and the page to navigate
        await page.wait_for_url("**xiaohongshu.com**", timeout=120000)
        
        # Collect the session cookies
        cookies = await page.context.cookies()
        await browser.close()
        return cookies
  2. Cookie login
async def login_by_cookies(cookies):
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        context = await browser.new_context()
        
        # Inject the saved cookies
        await context.add_cookies(cookies)
        
        # Open a page to verify the login state
        page = await context.new_page()
        await page.goto("https://www.xiaohongshu.com")
        
        # Check for the key login cookie
        is_logged_in = await page.evaluate("""() => {
            return document.cookie.includes('a1')
        }""")
        
        await browser.close()
        return is_logged_in
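
The cookies returned by login_by_qrcode are worth persisting, so later runs can go straight through login_by_cookies; a minimal sketch:

import json

def save_cookies(cookies, path="cookies.json"):
    # Playwright cookies are plain dicts, so JSON round-trips them cleanly
    with open(path, "w", encoding="utf-8") as f:
        json.dump(cookies, f, ensure_ascii=False, indent=2)

def load_cookies(path="cookies.json"):
    with open(path, encoding="utf-8") as f:
        return json.load(f)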

5. Crawling Strategies

5.1 Concurrency Control

import asyncio
from asyncio import Semaphore

class XiaoHongShuCrawler:
    def __init__(self, max_concurrency=3):
        self.semaphore = Semaphore(max_concurrency)
        self.client = XiaoHongShuClient()
        
    async def fetch_with_semaphore(self, coro):
        async with self.semaphore:
            return await coro
            
    async def batch_fetch_notes(self, note_ids):
        tasks = [
            self.fetch_with_semaphore(self.client.get_note_detail(note_id))
            for note_id in note_ids
        ]
        return await asyncio.gather(*tasks)
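
With max_concurrency=3, at most three note requests are in flight at once, and asyncio.gather returns results in input order. Usage (the note IDs are placeholders):

crawler = XiaoHongShuCrawler(max_concurrency=3)
note_ids = ["note_id_1", "note_id_2", "note_id_3"]  # placeholder IDs
results = asyncio.run(crawler.batch_fetch_notes(note_ids))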

5.2 Request Rate Control

import random
import asyncio

class RateLimiter:
    def __init__(self, min_delay=1, max_delay=3):
        self.min_delay = min_delay
        self.max_delay = max_delay
        
    async def wait(self):
        delay = random.uniform(self.min_delay, self.max_delay)
        await asyncio.sleep(delay)

class XiaoHongShuCrawler:
    def __init__(self):
        self.rate_limiter = RateLimiter()
        
    async def fetch_with_rate_limit(self, coro):
        await self.rate_limiter.wait()
        return await coro
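
In practice the semaphore from 5.1 and the rate limiter above belong in the same class; a combined sketch (reusing RateLimiter and XiaoHongShuClient from earlier):

from asyncio import Semaphore

class ThrottledCrawler:
    def __init__(self, max_concurrency=3):
        self.semaphore = Semaphore(max_concurrency)
        self.rate_limiter = RateLimiter()
        self.client = XiaoHongShuClient()

    async def fetch(self, coro):
        # Bound concurrency first, then add a randomized delay per request
        async with self.semaphore:
            await self.rate_limiter.wait()
            return await coro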

5.3 Error Handling and Retries

from tenacity import retry, stop_after_attempt, wait_exponential

class IPBlockError(Exception):
    """Raised when the API reports the current IP has been blocked."""

class XiaoHongShuClient:
    @retry(
        stop=stop_after_attempt(3),
        wait=wait_exponential(multiplier=1, min=4, max=10)
    )
    async def request(self, method, url, data=None):
        try:
            # _do_request is the raw HTTP call from section 3.3
            response = await self._do_request(method, url, data)
            if response.get("code") == 300012:  # IP blocked
                raise IPBlockError("IP has been blocked")
            return response
        except Exception as e:
            print(f"Request failed: {e}")
            raise

6. Data Storage

6.1 JSON Files

import json
import os

class DataStorage:
    def __init__(self, base_dir="./data"):
        self.base_dir = base_dir
        os.makedirs(base_dir, exist_ok=True)
        
    def save_note(self, note_data):
        note_id = note_data.get("note_id")
        file_path = os.path.join(self.base_dir, f"{note_id}.json")
        with open(file_path, "w", encoding="utf-8") as f:
            json.dump(note_data, f, ensure_ascii=False, indent=2)

6.2 Database Storage

import sqlite3
from datetime import datetime

class DatabaseStorage:
    def __init__(self, db_path="./xhs.db"):
        self.db_path = db_path
        self.init_db()
        
    def init_db(self):
        with sqlite3.connect(self.db_path) as conn:
            cursor = conn.cursor()
            cursor.execute("""
                CREATE TABLE IF NOT EXISTS notes (
                    note_id TEXT PRIMARY KEY,
                    title TEXT,
                    content TEXT,
                    user_id TEXT,
                    created_at TIMESTAMP,
                    updated_at TIMESTAMP
                )
            """)
            conn.commit()
            
    def save_note(self, note_data):
        with sqlite3.connect(self.db_path) as conn:
            cursor = conn.cursor()
            cursor.execute("""
                INSERT OR REPLACE INTO notes 
                (note_id, title, content, user_id, created_at, updated_at)
                VALUES (?, ?, ?, ?, ?, ?)
            """, (
                note_data["note_id"],
                note_data["title"],
                note_data["content"],
                note_data["user_id"],
                datetime.fromtimestamp(note_data["create_time"]),
                datetime.now()
            ))
            conn.commit()
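
Usage mirrors the schema above; the payload keys are exactly the ones save_note reads, and the values here are placeholders:

storage = DatabaseStorage()
storage.save_note({
    "note_id": "abc123",
    "title": "Sample title",
    "content": "Sample content",
    "user_id": "user_1",
    "create_time": 1712345678,  # Unix seconds, converted via fromtimestamp above
})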

7. Anti-Scraping Countermeasures

7.1 IP Proxy Pool

class ProxyPool:
    def __init__(self):
        self.proxies = []
        self.current_index = 0
        
    async def get_proxy(self):
        if not self.proxies:
            await self.refresh_proxies()
        proxy = self.proxies[self.current_index]
        self.current_index = (self.current_index + 1) % len(self.proxies)
        return proxy
        
    async def refresh_proxies(self):
        # Fetch fresh proxy IPs from your provider; the provider-specific
        # logic goes here (a sketch follows below)
        pass
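
What refresh_proxies returns depends entirely on the provider; a sketch against a hypothetical HTTP endpoint that returns one ip:port per line:

import httpx

async def refresh_proxies(self):
    # Hypothetical provider URL; substitute your vendor's API
    async with httpx.AsyncClient() as client:
        resp = await client.get("https://proxy-provider.example.com/fetch?count=10")
    self.proxies = [f"http://{line.strip()}" for line in resp.text.splitlines() if line.strip()]
    self.current_index = 0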

7.2 Browser Fingerprint Spoofing

import random

class BrowserFingerprint:
    def __init__(self):
        self.user_agents = [
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
            # ... more user agents
        ]
        
    def get_random_ua(self):
        return random.choice(self.user_agents)
        
    def get_browser_context(self):
        return {
            "viewport": {"width": 1920, "height": 1080},
            "user_agent": self.get_random_ua(),
            "locale": "zh-CN",
            "timezone_id": "Asia/Shanghai"
        }
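
The dict returned by get_browser_context maps directly onto the keyword arguments of Playwright's new_context, so applying the disguise is a single splat:

from playwright.async_api import async_playwright

async def open_disguised_page():
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        # viewport, user_agent, locale and timezone_id are all
        # keyword arguments accepted by new_context
        context = await browser.new_context(**BrowserFingerprint().get_browser_context())
        page = await context.new_page()
        await page.goto("https://www.xiaohongshu.com")
        await browser.close()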

8. Caveats and Best Practices

  1. Respect platform rules

    • Follow robots.txt
    • Throttle the crawl rate
    • Avoid large-scale scraping
  2. Data security

    • Encrypt sensitive data at rest
    • Back up data regularly
    • Comply with data-protection regulations
  3. Error handling

    • Build robust error-handling paths
    • Support checkpoint/resume (see the sketch after this list)
    • Keep detailed logs
  4. Performance

    • Use asynchronous I/O
    • Cache repeated requests
    • Optimize the storage layout
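
For the checkpoint/resume point above, the simplest workable scheme is to persist the set of already-fetched note IDs and skip them on restart; a minimal sketch:

import json
import os

class Checkpoint:
    def __init__(self, path="checkpoint.json"):
        self.path = path
        self.done = set()
        if os.path.exists(path):
            with open(path, encoding="utf-8") as f:
                self.done = set(json.load(f))

    def mark(self, note_id):
        # Record a finished note and flush immediately so a crash loses nothing
        self.done.add(note_id)
        with open(self.path, "w", encoding="utf-8") as f:
            json.dump(sorted(self.done), f)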

9. Summary

Scraping Xiaohongshu data involves several moving parts:

  • obtaining the signature parameters
  • managing login state
  • controlling concurrency
  • countering anti-scraping measures
  • storing the data

A working scraper has to address all of these with suitable strategies, and it must stay within the platform's rules so that the scraping remains lawful.

Note: this article is for learning and research purposes only. Comply with the platform's terms of service and robots.txt, and do not use these techniques for illegal purposes.