
Homepage (Playlist / Channel / Hashtag / Topic) Batch Extraction API


API Overview

This endpoint batch-extracts the content of a homepage (playlist, channel, hashtag, topic, etc.). Every platform supported by the 哼哼猫 "Homepage Batch Extraction" feature is supported by this endpoint.

🎯 Basic Information

| Item | Value |
| --- | --- |
| Endpoint | https://api.meowload.net/openapi/extract/playlist |
| Method | POST |
| Content-Type | application/json |

📋 Request Parameters

Request Headers

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| x-api-key | string | Yes | API key, obtained from the Developer Center |
| accept-language | string | No | Language for error messages, default en. Supported: zh, en, ja, es, de |
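When making repeated calls (for example while paging through a large channel), both headers can be set once on a reusable session instead of on every request. A minimal sketch with Python requests; the key value is a placeholder:

```python
import requests

# Reuse one session so the API key and language headers are attached to every call.
session = requests.Session()
session.headers.update({
    "x-api-key": "your-api-key-here",  # placeholder: use your own key
    "accept-language": "en",           # error-message language: zh / en / ja / es / de
})

resp = session.post(
    "https://api.meowload.net/openapi/extract/playlist",
    json={"url": "https://www.youtube.com/@Nike/videos"},
)
print(resp.status_code)
```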

Request Body

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| url | string | Yes | Homepage URL (playlist / channel / topic, etc.) |
| cursor | string | No | Pagination cursor. Omit it when fetching page 1; for later pages, pass the next_cursor returned by the previous page |

Request Example

```bash
# Fetch the first page
curl -X POST https://api.meowload.net/openapi/extract/playlist \
  -H "Content-Type: application/json" \
  -H "x-api-key: your-api-key-here" \
  -H "accept-language: zh" \
  -d '{
    "url": "https://www.youtube.com/@Nike/videos"
  }'

# Fetch the second page (using next_cursor from the previous page)
curl -X POST https://api.meowload.net/openapi/extract/playlist \
  -H "Content-Type: application/json" \
  -H "x-api-key: your-api-key-here" \
  -d '{
    "url": "https://www.youtube.com/@Nike/videos",
    "cursor": "DAABCgABG3SXo4g___MKAAIbQXHzaFrwNQgAAwAAAAIAAA"
  }'
```

🟢 Success Response

HTTP status code: 200 OK

Response Example

{ "next_cursor": "DAABCgABG3SXo4g___MKAAIbQXHzaFrwNQgAAwAAAAIAAA", "has_more": true, "posts": [ { "id": "4740201740502412", "created_at": "1645635626000", "text": "旧电脑千万不要扔,一招让它比新电脑还流畅!", "medias": [ { "media_type": "video", "resource_url": "http://f.video.weibocdn.com/o0/OKxyGGNvlx07TZHhMZlC01041200jdfp0E010.mp4?label=mp4_720p&template=720x1056.24.0", "preview_url": "http://wx1.sinaimg.cn/orj480/60c32e03ly1gznxqvfvs5j20k00qpad8.jpg" } ], "post_url": "https://x.com/nike/status/1981040559746699631" }, { "id": "4740187936000074", "created_at": "1645632335000", "text": "勤练3年的肚皮舞,大家觉得如何?", "medias": [ { "media_type": "image", "resource_url": "http://wx3.sinaimg.cn/orj480/60c32e03ly1gznw662vcxj20k00sw75q.jpg", "preview_url": null } ], "post_url": "https://x.com/nike/status/1981040559746699632" } ], "user": { "username": "Nike", "image_url": "https://pbs.twimg.com/profile_images/1970147674809667584/06urBNop_200x200.jpg" } }

Response Field Descriptions

Top-level Fields

| Field | Type | Always returned | Description |
| --- | --- | --- | --- |
| has_more | boolean | Yes | Whether more data is available: true (there is a next page), false (this is the last page) |
| next_cursor | string | No | Cursor for the next page; only returned when has_more is true |
| posts | array | No | List of posts; may be an empty array |
| user | object | No | Creator / channel / topic information |

posts Array Fields

| Field | Type | Always returned | Description |
| --- | --- | --- | --- |
| id | string | No | Unique post ID |
| created_at | string | No | Post creation (publish) time (a millisecond timestamp string in the sample response) |
| text | string | No | Post text content |
| medias | array | Yes | List of media resources (every post contains at least one media item) |
| post_url | string | No | Original post URL |

medias Sub-fields

| Field | Type | Always returned | Description |
| --- | --- | --- | --- |
| media_type | string | Yes | Media type: video, image, audio, live (live stream), file (unknown file type) |
| resource_url | string | Yes | Media download URL |
| preview_url | string | No | Cover / preview image URL |

user Object Fields

| Field | Type | Always returned | Description |
| --- | --- | --- | --- |
| username | string | No | Creator nickname / channel name / topic name |
| image_url | string | No | Creator avatar / channel cover URL |
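Putting the four tables together, the response can be modelled roughly as follows. This is an illustrative sketch using Python TypedDicts; the class names are not part of the API:

```python
from typing import List, Optional, TypedDict


class Media(TypedDict):
    media_type: str             # "video" | "image" | "audio" | "live" | "file"
    resource_url: str           # media download URL
    preview_url: Optional[str]  # cover/preview image, may be null


class Post(TypedDict, total=False):
    id: str
    created_at: str             # millisecond timestamp string in the sample response
    text: str
    medias: List[Media]         # at least one item per post
    post_url: str


class UserInfo(TypedDict, total=False):
    username: str
    image_url: str


class PlaylistResponse(TypedDict, total=False):
    has_more: bool              # always returned
    next_cursor: str            # only present when has_more is true
    posts: List[Post]           # may be empty
    user: UserInfo
```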

🔴 Error Response

HTTP status code: any non-200 code

Error Response Example

{ "message": "链接格式错误" }

HTTP Status Codes

| Status code | Meaning | Common cause | What to do |
| --- | --- | --- | --- |
| 200 | Success | - | - |
| 400 | Business failure | Extraction failed, or the link contains no valid media | Check that the link is correct and actually contains videos/images |
| 401 | Authentication failed | Invalid or expired API key | Check that x-api-key is correct |
| 402 | Quota exhausted | Call quota has been used up | Top up in the Developer Center |
| 422 | Parameter error | Malformed link | Check the format of the url parameter |
| 500 | Server error | Internal server error | Contact technical support |
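As a rough illustration of how the table above might be handled in client code, the sketch below raises on the non-retryable codes and retries only server errors; treating 500 as transient is an assumption, not documented behaviour:

```python
import time

import requests

API_URL = "https://api.meowload.net/openapi/extract/playlist"


def extract_page(url, api_key, cursor=None, retries=2):
    """Call the endpoint and map status codes to actions (illustrative only)."""
    payload = {"url": url}
    if cursor:
        payload["cursor"] = cursor

    for attempt in range(retries + 1):
        resp = requests.post(API_URL, json=payload, headers={"x-api-key": api_key})
        if resp.status_code == 200:
            return resp.json()

        message = resp.json().get("message", "")
        if resp.status_code in (400, 422):
            # Bad or unsupported link: retrying will not help
            raise ValueError(f"Check the url parameter: {message}")
        if resp.status_code == 401:
            raise PermissionError(f"Invalid or expired API key: {message}")
        if resp.status_code == 402:
            raise RuntimeError(f"Quota exhausted, top up first: {message}")

        # 500 (or anything unexpected): assumed transient, wait and retry
        time.sleep(2 * (attempt + 1))

    raise RuntimeError(f"Server error persisted after {retries + 1} attempts")
```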

💻 Code Examples

Python - Fetch a Single Page

```python
import requests

api_url = "https://api.meowload.net/openapi/extract/playlist"
api_key = "your-api-key-here"

payload = {
    "url": "https://www.youtube.com/@Nike/videos"
}

headers = {
    "x-api-key": api_key,
    "accept-language": "zh"
}

response = requests.post(api_url, json=payload, headers=headers)

if response.status_code == 200:
    data = response.json()
    print("✅ Success!")
    print(f"Creator: {data['user']['username']}")
    print(f"Posts on this page: {len(data['posts'])}")
    print(f"Has next page: {data['has_more']}")

    for idx, post in enumerate(data['posts'], 1):
        print(f"\n--- Post {idx} ---")
        print(f"Text: {post.get('text', 'N/A')}")
        print(f"Media count: {len(post['medias'])}")
        print(f"URL: {post.get('post_url', 'N/A')}")
else:
    error = response.json()
    print(f"❌ Request failed ({response.status_code}): {error['message']}")
```

Python - Fetch All Pages

```python
import requests
import time


def fetch_all_posts(url, api_key, delay=1):
    """
    Fetch the posts from every page, one page at a time.

    Args:
        url: homepage URL
        api_key: API key
        delay: seconds to wait between requests, to avoid calling too fast
    """
    api_url = "https://api.meowload.net/openapi/extract/playlist"
    all_posts = []
    user = None
    cursor = None
    page = 1

    headers = {
        "x-api-key": api_key,
        "accept-language": "zh"
    }

    while True:
        print(f"📥 Fetching page {page}...")

        payload = {"url": url}
        if cursor:
            payload["cursor"] = cursor

        response = requests.post(api_url, json=payload, headers=headers)

        if response.status_code != 200:
            error = response.json()
            print(f"❌ Request failed: {error['message']}")
            break

        data = response.json()
        user = data.get('user')

        # Collect the posts from this page
        all_posts.extend(data['posts'])
        print(f"✅ Page {page} done ({len(data['posts'])} posts)")

        # Stop when there is no next page
        if not data['has_more']:
            print(f"\n🎉 All pages fetched! {len(all_posts)} posts in total")
            break

        # Wait before requesting the next page
        cursor = data['next_cursor']
        page += 1
        time.sleep(delay)

    return all_posts, user


# Usage example
posts, user = fetch_all_posts(
    url="https://www.youtube.com/@Nike/videos",
    api_key="your-api-key-here",
    delay=2  # wait 2 seconds between requests
)

print(f"\nCreator: {user['username']}")
print(f"Total posts: {len(posts)}")
```

JavaScript - Fetch a Single Page

```javascript
const apiUrl = "https://api.meowload.net/openapi/extract/playlist";
const apiKey = "your-api-key-here";

const payload = {
  url: "https://www.youtube.com/@Nike/videos"
};

const headers = {
  "Content-Type": "application/json",
  "x-api-key": apiKey,
  "accept-language": "zh"
};

fetch(apiUrl, {
  method: "POST",
  headers: headers,
  body: JSON.stringify(payload)
})
  .then(response => {
    if (!response.ok) {
      return response.json().then(error => {
        throw new Error(`${response.status}: ${error.message}`);
      });
    }
    return response.json();
  })
  .then(data => {
    console.log("✅ Success!");
    console.log(`Creator: ${data.user.username}`);
    console.log(`Posts on this page: ${data.posts.length}`);
    console.log(`Has next page: ${data.has_more}`);

    data.posts.forEach((post, idx) => {
      console.log(`\n--- Post ${idx + 1} ---`);
      console.log(`Text: ${post.text || 'N/A'}`);
      console.log(`Media count: ${post.medias.length}`);
      console.log(`URL: ${post.post_url || 'N/A'}`);
    });
  })
  .catch(error => {
    console.error(`❌ Request failed: ${error.message}`);
  });
```

JavaScript - Fetch All Pages

```javascript
async function fetchAllPosts(url, apiKey, delay = 1000) {
  const apiUrl = "https://api.meowload.net/openapi/extract/playlist";
  const allPosts = [];
  let cursor = null;
  let page = 1;

  const headers = {
    "Content-Type": "application/json",
    "x-api-key": apiKey,
    "accept-language": "zh"
  };

  while (true) {
    console.log(`📥 Fetching page ${page}...`);

    const payload = { url };
    if (cursor) {
      payload.cursor = cursor;
    }

    try {
      const response = await fetch(apiUrl, {
        method: "POST",
        headers: headers,
        body: JSON.stringify(payload)
      });

      if (!response.ok) {
        const error = await response.json();
        console.error(`❌ Request failed: ${error.message}`);
        break;
      }

      const data = await response.json();
      allPosts.push(...data.posts);
      console.log(`✅ Page ${page} done (${data.posts.length} posts)`);

      if (!data.has_more) {
        console.log(`\n🎉 All pages fetched! ${allPosts.length} posts in total`);
        return { posts: allPosts, user: data.user };
      }

      cursor = data.next_cursor;
      page += 1;

      // Wait before requesting the next page
      await new Promise(resolve => setTimeout(resolve, delay));
    } catch (error) {
      console.error(`❌ Network error: ${error.message}`);
      break;
    }
  }

  return { posts: allPosts, user: null };
}

// Usage example
fetchAllPosts(
  "https://www.youtube.com/@Nike/videos",
  "your-api-key-here",
  2000 // 2-second delay
).then(({ posts, user }) => {
  console.log(`\nCreator: ${user?.username}`);
  console.log(`Total posts: ${posts.length}`);
});
```

Golang

```go
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "time"
)

type PlaylistRequest struct {
    URL    string `json:"url"`
    Cursor string `json:"cursor,omitempty"`
}

type Media struct {
    MediaType   string `json:"media_type"`
    ResourceURL string `json:"resource_url"`
    PreviewURL  string `json:"preview_url,omitempty"`
}

type Post struct {
    ID        string  `json:"id,omitempty"`
    CreatedAt string  `json:"created_at,omitempty"`
    Text      string  `json:"text,omitempty"`
    Medias    []Media `json:"medias"`
    PostURL   string  `json:"post_url,omitempty"`
}

type User struct {
    Username string `json:"username,omitempty"`
    ImageURL string `json:"image_url,omitempty"`
}

type PlaylistResponse struct {
    NextCursor string `json:"next_cursor,omitempty"`
    HasMore    bool   `json:"has_more"`
    Posts      []Post `json:"posts"`
    User       User   `json:"user"`
}

type ErrorResponse struct {
    Message string `json:"message"`
}

func fetchPlaylistPage(url, apiKey, cursor string) (*PlaylistResponse, error) {
    apiURL := "https://api.meowload.net/openapi/extract/playlist"

    requestBody := PlaylistRequest{
        URL:    url,
        Cursor: cursor,
    }

    jsonData, err := json.Marshal(requestBody)
    if err != nil {
        return nil, fmt.Errorf("failed to marshal JSON: %w", err)
    }

    req, err := http.NewRequest("POST", apiURL, bytes.NewBuffer(jsonData))
    if err != nil {
        return nil, fmt.Errorf("failed to create request: %w", err)
    }

    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("x-api-key", apiKey)
    req.Header.Set("accept-language", "zh")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        return nil, fmt.Errorf("failed to send request: %w", err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, fmt.Errorf("failed to read response: %w", err)
    }

    if resp.StatusCode != 200 {
        var errorResp ErrorResponse
        json.Unmarshal(body, &errorResp)
        return nil, fmt.Errorf("request failed (%d): %s", resp.StatusCode, errorResp.Message)
    }

    var result PlaylistResponse
    if err := json.Unmarshal(body, &result); err != nil {
        return nil, fmt.Errorf("failed to parse response: %w", err)
    }

    return &result, nil
}

func fetchAllPosts(url, apiKey string) ([]Post, *User, error) {
    allPosts := []Post{}
    var cursor string
    page := 1

    for {
        fmt.Printf("📥 Fetching page %d...\n", page)

        data, err := fetchPlaylistPage(url, apiKey, cursor)
        if err != nil {
            return nil, nil, err
        }

        allPosts = append(allPosts, data.Posts...)
        fmt.Printf("✅ Page %d done (%d posts)\n", page, len(data.Posts))

        if !data.HasMore {
            fmt.Printf("\n🎉 All pages fetched! %d posts in total\n", len(allPosts))
            return allPosts, &data.User, nil
        }

        cursor = data.NextCursor
        page++

        // Pause between requests to avoid calling too fast
        time.Sleep(2 * time.Second)
    }
}

func main() {
    posts, user, err := fetchAllPosts(
        "https://www.youtube.com/@Nike/videos",
        "your-api-key-here",
    )
    if err != nil {
        fmt.Printf("❌ Error: %v\n", err)
        return
    }

    fmt.Printf("\nCreator: %s\n", user.Username)
    fmt.Printf("Total posts: %d\n", len(posts))

    for idx, post := range posts {
        fmt.Printf("\n--- Post %d ---\n", idx+1)
        fmt.Printf("Text: %s\n", post.Text)
        fmt.Printf("Media count: %d\n", len(post.Medias))
    }
}
```

💡 Usage Tips

1. Pagination

For homepages with a large amount of content, use cursor-based pagination to fetch everything:

```python
import requests


def paginate_posts(url, api_key, batch_size=None):
    """
    Generator that yields posts page by page.

    Usage:
        for post in paginate_posts(url, api_key):
            process(post)
    """
    api_url = "https://api.meowload.net/openapi/extract/playlist"
    cursor = None
    count = 0

    while True:
        payload = {"url": url}
        if cursor:
            payload["cursor"] = cursor

        response = requests.post(
            api_url,
            json=payload,
            headers={"x-api-key": api_key}
        )

        if response.status_code != 200:
            break

        data = response.json()

        for post in data['posts']:
            yield post
            count += 1
            if batch_size and count >= batch_size:
                return

        if not data['has_more']:
            break

        cursor = data['next_cursor']
```

2. Filtering and Sorting

```python
from collections import Counter


# Sort posts by creation time
def sort_by_date(posts, reverse=True):
    return sorted(
        posts,
        key=lambda x: int(x.get('created_at', 0)),
        reverse=reverse
    )


# Keep only posts that contain at least one video
def filter_video_posts(posts):
    return [
        post for post in posts
        if any(m['media_type'] == 'video' for m in post.get('medias', []))
    ]


# Count media items by type
def count_media_types(posts):
    media_types = []
    for post in posts:
        for media in post.get('medias', []):
            media_types.append(media['media_type'])
    return Counter(media_types)
```

3. Batch Downloading

```python
import os
from concurrent.futures import ThreadPoolExecutor

import requests


def download_media(media, output_dir='downloads'):
    """Download a single media file."""
    os.makedirs(output_dir, exist_ok=True)

    url = media['resource_url']
    filename = os.path.basename(url.split('?')[0])
    filepath = os.path.join(output_dir, filename)

    if os.path.exists(filepath):
        print(f"⏭️ Already exists: {filename}")
        return

    try:
        response = requests.get(url, headers=media.get('headers', {}))
        with open(filepath, 'wb') as f:
            f.write(response.content)
        print(f"✅ Downloaded: {filename}")
    except Exception as e:
        print(f"❌ Download failed: {e}")


def batch_download(posts, output_dir='downloads', max_workers=4):
    """Download every media file from a list of posts."""
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        for post in posts:
            for media in post['medias']:
                executor.submit(download_media, media, output_dir)
```
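The helpers from the three tips can be chained. For example, using paginate_posts, filter_video_posts, sort_by_date, count_media_types and batch_download as defined above:

```python
# Example: fetch every post from a homepage, keep the ones with video,
# inspect the media mix, then download everything that survived the filter.
api_key = "your-api-key-here"
url = "https://www.youtube.com/@Nike/videos"

posts = list(paginate_posts(url, api_key))
video_posts = sort_by_date(filter_video_posts(posts))

print(count_media_types(video_posts))  # e.g. Counter({'video': 12, 'image': 3})
batch_download(video_posts, output_dir="downloads", max_workers=4)
```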