Profile (Playlist/Channel/Hashtag/Topic) Batch Extraction API
API Overview
This API batch-extracts profile content (playlists, channels, hashtags, topics, etc.). It supports every platform covered by the MeowLoad "Profile Batch Extraction" feature.
🎯 Basic Information
| Item | Details |
|---|---|
| Endpoint | https://api.meowload.net/openapi/extract/playlist |
| Method | POST |
| Content-Type | application/json |
📋 Request Parameters
Headers
| Parameter | Type | Required | Description |
|---|---|---|---|
| x-api-key | string | Yes | API key, obtained from the Developer Management Center |
| accept-language | string | - | Error-message language, defaults to `en`. Supported: `zh`, `en`, `ja`, `es`, `de`, etc. |
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | Profile page URL (playlist/channel/topic, etc.) |
| cursor | string | - | Pagination cursor. Not required for page 1; for subsequent pages, pass the `next_cursor` returned by the previous page |
Request Example
# Fetch the first page
curl -X POST https://api.meowload.net/openapi/extract/playlist \
-H "Content-Type: application/json" \
-H "x-api-key: your-api-key-here" \
-H "accept-language: zh" \
-d '{
"url": "https://www.youtube.com/@Nike/videos"
}'
# Fetch the second page (using next_cursor from the previous page)
curl -X POST https://api.meowload.net/openapi/extract/playlist \
-H "Content-Type: application/json" \
-H "x-api-key: your-api-key-here" \
-d '{
"url": "https://www.youtube.com/@Nike/videos",
"cursor": "DAABCgABG3SXo4g___MKAAIbQXHzaFrwNQgAAwAAAAIAAA"
  }'

🟢 Success Response
HTTP Status Code:
200 OK
Response Example
{
"next_cursor": "DAABCgABG3SXo4g___MKAAIbQXHzaFrwNQgAAwAAAAIAAA",
"has_more": true,
"posts": [
{
"id": "4740201740502412",
"created_at": "1645635626000",
      "text": "Don't throw away your old computer; one trick makes it run smoother than a new one!",
"medias": [
{
"media_type": "video",
"resource_url": "http://f.video.weibocdn.com/o0/OKxyGGNvlx07TZHhMZlC01041200jdfp0E010.mp4?label=mp4_720p&template=720x1056.24.0",
"preview_url": "http://wx1.sinaimg.cn/orj480/60c32e03ly1gznxqvfvs5j20k00qpad8.jpg"
}
],
"post_url": "https://x.com/nike/status/1981040559746699631"
},
{
"id": "4740187936000074",
"created_at": "1645632335000",
      "text": "Three years of belly dance practice; what do you all think?",
"medias": [
{
"media_type": "image",
"resource_url": "http://wx3.sinaimg.cn/orj480/60c32e03ly1gznw662vcxj20k00sw75q.jpg",
"preview_url": null
}
],
"post_url": "https://x.com/nike/status/1981040559746699632"
}
],
"user": {
"username": "Nike",
"image_url": "https://pbs.twimg.com/profile_images/1970147674809667584/06urBNop_200x200.jpg"
}
}

Response Field Descriptions
Top-Level Fields
| Field | Type | Always Returned | Description |
|---|---|---|---|
| has_more | boolean | Yes | Whether more data is available: `true` (has next page), `false` (last page) |
| next_cursor | string | - | Cursor for the next page; only returned when `has_more` is `true` |
| posts | array | - | Array of posts; may be empty |
| user | object | - | Creator/channel/topic information |
posts Array Fields
| Field | Type | Always Returned | Description |
|---|---|---|---|
| id | string | - | Unique post ID |
| created_at | string | - | Post creation (publish) time |
| text | string | - | Post caption content |
| medias | array | Yes | Media resource list (at least one media item per post) |
| post_url | string | - | Original post URL |
medias Subfields
| Field | Type | Always Returned | Description |
|---|---|---|---|
| media_type | string | Yes | Media type: `video`, `image`, `audio`, `live`, `file` (unknown file type) |
| resource_url | string | Yes | Media download URL |
| preview_url | string | - | Cover/preview image URL |
user Object Fields
| Field | Type | Always Returned | Description |
|---|---|---|---|
| username | string | - | Creator username / channel name / topic name |
| image_url | string | - | Creator avatar / channel cover image URL |
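Because most response fields above are optional, consumers should read them defensively rather than indexing directly. A minimal sketch in Python (the helper name and sample values are illustrative, not part of the API):

```python
def summarize_post(post: dict) -> dict:
    """Reduce a post object to the fields a downloader needs, tolerating absences.

    Per the tables above, only `medias` (with `media_type` and `resource_url`)
    is guaranteed; every other field may be missing.
    """
    return {
        "id": post.get("id"),
        "text": post.get("text", ""),
        "video_urls": [
            m["resource_url"]
            for m in post.get("medias", [])
            if m.get("media_type") == "video"
        ],
    }

# Hypothetical sample shaped like the response example above
sample = {
    "id": "4740201740502412",
    "medias": [
        {"media_type": "video", "resource_url": "http://example.com/a.mp4"},
        {"media_type": "image", "resource_url": "http://example.com/b.jpg"},
    ],
}
summary = summarize_post(sample)
```

Using `.get()` with defaults keeps the consumer working even when a platform omits `text` or `post_url` for a given post.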
🔴 Error Response
HTTP Status Code: non-200
Error Response Example
{
  "message": "Invalid link format"
}

HTTP Status Code Reference
| Status Code | Description | Common Cause | Solution |
|---|---|---|---|
| 200 | Success | - | - |
| 400 | Business Error | Extraction failed; link contains no valid media | Check that the link is correct and contains video/images, etc. |
| 401 | Authentication Failed | Invalid or expired API key | Verify that x-api-key is correct |
| 402 | Credits Exhausted | API call quota used up | Visit the Management Center to top up |
| 422 | Parameter Error | Incorrect link format | Check the url parameter format |
| 500 | Server Error | Internal server error | Contact technical support |
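The table above translates directly into a small dispatch helper, which keeps status handling in one place instead of scattering magic numbers through the request path. A minimal sketch (the action labels and fallback string are assumptions for illustration):

```python
def classify_status(code: int) -> str:
    """Map an API HTTP status code to a suggested next step, per the table above."""
    actions = {
        200: "success",
        400: "check link content",        # extraction failed / no valid media
        401: "verify x-api-key",          # invalid or expired API key
        402: "top up credits",            # call quota used up
        422: "fix url format",            # malformed link
        500: "contact technical support", # internal server error
    }
    return actions.get(code, "unexpected status")
```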
💻 Code Examples
Python - Single Page
import requests
api_url = "https://api.meowload.net/openapi/extract/playlist"
api_key = "your-api-key-here"
payload = {
"url": "https://www.youtube.com/@Nike/videos"
}
headers = {
"x-api-key": api_key,
"accept-language": "zh"
}
response = requests.post(api_url, json=payload, headers=headers)
if response.status_code == 200:
data = response.json()
print(f"✅ Fetch successful!")
print(f"Creator: {data['user']['username']}")
print(f"Posts on this page: {len(data['posts'])}")
print(f"Has next page: {data['has_more']}")
for idx, post in enumerate(data['posts'], 1):
print(f"\n--- Post {idx} ---")
print(f"Caption: {post.get('text', 'N/A')}")
print(f"Media count: {len(post['medias'])}")
print(f"Link: {post.get('post_url', 'N/A')}")
else:
error = response.json()
    print(f"- Request failed ({response.status_code}): {error['message']}")

Python - Fetch All Pages
import requests
import time
def fetch_all_posts(url, api_key, delay=1):
"""
    Fetch posts from all pages by following the pagination cursor
Args:
url: Profile page URL
api_key: API key
delay: Request interval (seconds) to avoid rate limiting
"""
api_url = "https://api.meowload.net/openapi/extract/playlist"
    all_posts = []
    cursor = None
    page = 1
    data = {}  # holds the last response so `user` is safe to return on early exit
headers = {
"x-api-key": api_key,
"accept-language": "zh"
}
while True:
print(f"📥 Fetching page {page}...")
payload = {"url": url}
if cursor:
payload["cursor"] = cursor
response = requests.post(api_url, json=payload, headers=headers)
if response.status_code != 200:
error = response.json()
print(f"- Request failed: {error['message']}")
break
data = response.json()
# Add posts from this page
all_posts.extend(data['posts'])
print(f"Page {page} complete ({len(data['posts'])} posts)")
# Check if there is a next page
if not data['has_more']:
print(f"\n🎉 All pages fetched! Total: {len(all_posts)} posts")
break
# Wait before requesting the next page
cursor = data['next_cursor']
page += 1
time.sleep(delay)
return all_posts, data.get('user')
# Usage example
posts, user = fetch_all_posts(
url="https://www.youtube.com/@Nike/videos",
api_key="your-api-key-here",
delay=2 # 2-second interval between requests
)
print(f"\nCreator: {user['username']}")
print(f"Total posts: {len(posts)}")

JavaScript - Single Page
const apiUrl = "https://api.meowload.net/openapi/extract/playlist";
const apiKey = "your-api-key-here";
const payload = {
url: "https://www.youtube.com/@Nike/videos"
};
const headers = {
"Content-Type": "application/json",
"x-api-key": apiKey,
"accept-language": "zh"
};
fetch(apiUrl, {
method: "POST",
headers: headers,
body: JSON.stringify(payload)
})
.then(response => {
if (!response.ok) {
return response.json().then(error => {
throw new Error(`${response.status}: ${error.message}`);
});
}
return response.json();
})
.then(data => {
console.log("✅ Fetch successful!");
console.log(`Creator: ${data.user.username}`);
console.log(`Posts on this page: ${data.posts.length}`);
console.log(`Has next page: ${data.has_more}`);
data.posts.forEach((post, idx) => {
console.log(`\n--- Post ${idx + 1} ---`);
console.log(`Caption: ${post.text || 'N/A'}`);
console.log(`Media count: ${post.medias.length}`);
console.log(`Link: ${post.post_url || 'N/A'}`);
});
})
.catch(error => {
console.error(`- Request failed: ${error.message}`);
});

JavaScript - Fetch All Pages
async function fetchAllPosts(url, apiKey, delay = 1000) {
const apiUrl = "https://api.meowload.net/openapi/extract/playlist";
const allPosts = [];
let cursor = null;
let page = 1;
const headers = {
"Content-Type": "application/json",
"x-api-key": apiKey,
"accept-language": "zh"
};
while (true) {
console.log(`📥 Fetching page ${page}...`);
const payload = { url };
if (cursor) {
payload.cursor = cursor;
}
try {
const response = await fetch(apiUrl, {
method: "POST",
headers: headers,
body: JSON.stringify(payload)
});
if (!response.ok) {
const error = await response.json();
console.error(`- Request failed: ${error.message}`);
break;
}
const data = await response.json();
allPosts.push(...data.posts);
console.log(`✅ Page ${page} complete (${data.posts.length} posts)`);
if (!data.has_more) {
console.log(`\n🎉 All pages fetched! Total: ${allPosts.length} posts`);
return { posts: allPosts, user: data.user };
}
cursor = data.next_cursor;
page += 1;
// Delay before requesting the next page
await new Promise(resolve => setTimeout(resolve, delay));
} catch (error) {
console.error(`- Network error: ${error.message}`);
break;
}
}
return { posts: allPosts, user: null };
}
// Usage example
fetchAllPosts(
"https://www.youtube.com/@Nike/videos",
"your-api-key-here",
2000 // 2-second delay
).then(({ posts, user }) => {
  console.log(`\nCreator: ${user ? user.username : 'N/A'}`);
console.log(`Total posts: ${posts.length}`);
});

Golang
package main
import (
"bytes"
"encoding/json"
"fmt"
"io"
"net/http"
"time"
)
type PlaylistRequest struct {
URL string `json:"url"`
Cursor string `json:"cursor,omitempty"`
}
type Media struct {
MediaType string `json:"media_type"`
ResourceURL string `json:"resource_url"`
PreviewURL string `json:"preview_url,omitempty"`
}
type Post struct {
ID string `json:"id,omitempty"`
CreatedAt string `json:"created_at,omitempty"`
Text string `json:"text,omitempty"`
Medias []Media `json:"medias"`
PostURL string `json:"post_url,omitempty"`
}
type User struct {
Username string `json:"username,omitempty"`
ImageURL string `json:"image_url,omitempty"`
}
type PlaylistResponse struct {
NextCursor string `json:"next_cursor,omitempty"`
HasMore bool `json:"has_more"`
Posts []Post `json:"posts"`
User User `json:"user"`
}
type ErrorResponse struct {
Message string `json:"message"`
}
func fetchPlaylistPage(url, apiKey, cursor string) (*PlaylistResponse, error) {
apiURL := "https://api.meowload.net/openapi/extract/playlist"
requestBody := PlaylistRequest{
URL: url,
Cursor: cursor,
}
jsonData, err := json.Marshal(requestBody)
if err != nil {
return nil, fmt.Errorf("JSON serialization failed: %w", err)
}
req, err := http.NewRequest("POST", apiURL, bytes.NewBuffer(jsonData))
if err != nil {
return nil, fmt.Errorf("failed to create request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("x-api-key", apiKey)
req.Header.Set("accept-language", "zh")
client := &http.Client{}
resp, err := client.Do(req)
if err != nil {
return nil, fmt.Errorf("failed to send request: %w", err)
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("failed to read response: %w", err)
}
if resp.StatusCode != 200 {
var errorResp ErrorResponse
json.Unmarshal(body, &errorResp)
return nil, fmt.Errorf("request failed (%d): %s", resp.StatusCode, errorResp.Message)
}
var result PlaylistResponse
if err := json.Unmarshal(body, &result); err != nil {
return nil, fmt.Errorf("failed to parse response: %w", err)
}
return &result, nil
}
func fetchAllPosts(url, apiKey string) ([]Post, *User, error) {
allPosts := []Post{}
var cursor string
page := 1
for {
fmt.Printf("📥 Fetching page %d...\n", page)
data, err := fetchPlaylistPage(url, apiKey, cursor)
if err != nil {
return nil, nil, err
}
allPosts = append(allPosts, data.Posts...)
fmt.Printf("✅ Page %d complete (%d posts)\n", page, len(data.Posts))
if !data.HasMore {
fmt.Printf("\n🎉 All pages fetched! Total: %d posts\n", len(allPosts))
return allPosts, &data.User, nil
}
cursor = data.NextCursor
page++
// Delay to avoid rate limiting
time.Sleep(2 * time.Second)
}
}
func main() {
posts, user, err := fetchAllPosts(
"https://www.youtube.com/@Nike/videos",
"your-api-key-here",
)
if err != nil {
fmt.Printf("- Error: %v\n", err)
return
}
fmt.Printf("\nCreator: %s\n", user.Username)
fmt.Printf("Total posts: %d\n", len(posts))
for idx, post := range posts {
fmt.Printf("\n--- Post %d ---\n", idx+1)
fmt.Printf("Caption: %s\n", post.Text)
fmt.Printf("Media count: %d\n", len(post.Medias))
}
}

💡 Tips
1. Pagination Handling
For profiles with large amounts of data, use cursor-based pagination to retrieve all content:
import requests

def paginate_posts(url, api_key, batch_size=None):
"""
Generator function that fetches posts page by page
Usage:
for post in paginate_posts(url, api_key):
process(post)
"""
api_url = "https://api.meowload.net/openapi/extract/playlist"
cursor = None
count = 0
while True:
payload = {"url": url}
if cursor:
payload["cursor"] = cursor
response = requests.post(
api_url,
json=payload,
headers={"x-api-key": api_key}
)
if response.status_code != 200:
break
data = response.json()
for post in data['posts']:
yield post
count += 1
if batch_size and count >= batch_size:
return
if not data['has_more']:
break
        cursor = data['next_cursor']

2. Filtering and Sorting
from datetime import datetime
# Sort by date
def sort_by_date(posts, reverse=True):
return sorted(
posts,
key=lambda x: int(x.get('created_at', 0)),
reverse=reverse
)
# Filter posts that contain videos only
def filter_video_posts(posts):
return [
post for post in posts
if any(m['media_type'] == 'video' for m in post.get('medias', []))
]
# Count by media type
from collections import Counter
def count_media_types(posts):
media_types = []
for post in posts:
for media in post.get('medias', []):
media_types.append(media['media_type'])
    return Counter(media_types)

3. Batch Download
import os
import requests
from concurrent.futures import ThreadPoolExecutor
def download_media(media, output_dir='downloads'):
"""Download a single media file"""
os.makedirs(output_dir, exist_ok=True)
url = media['resource_url']
filename = os.path.basename(url.split('?')[0])
filepath = os.path.join(output_dir, filename)
if os.path.exists(filepath):
        print(f"⏭️ Already exists: {filepath}")
return
try:
        response = requests.get(url, headers=media.get('headers', {}))
        response.raise_for_status()
        with open(filepath, 'wb') as f:
            f.write(response.content)
        print(f"✅ Download complete: {filepath}")
except Exception as e:
print(f"- Download failed: {e}")
def batch_download(posts, output_dir='downloads', max_workers=4):
"""Batch download all media"""
with ThreadPoolExecutor(max_workers=max_workers) as executor:
for post in posts:
for media in post['medias']:
executor.submit(download_media, media, output_dir)
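4. Resuming Interrupted Pagination
For long pagination runs, persisting the latest `next_cursor` lets a crashed or interrupted job pick up where it left off instead of re-fetching every page. A minimal sketch (the state-file name and layout are assumptions, not something the API prescribes):

```python
import json
import os

def save_cursor(cursor, state_file="pagination_state.json"):
    """Persist the latest next_cursor after each successfully processed page."""
    with open(state_file, "w") as f:
        json.dump({"cursor": cursor}, f)

def load_cursor(state_file="pagination_state.json"):
    """Return the saved cursor, or None to start from page 1."""
    if not os.path.exists(state_file):
        return None
    with open(state_file) as f:
        return json.load(f).get("cursor")

def clear_cursor(state_file="pagination_state.json"):
    """Remove the state file once has_more comes back false."""
    if os.path.exists(state_file):
        os.remove(state_file)
```

On startup, pass `load_cursor()` as the initial cursor; call `save_cursor(data['next_cursor'])` after each page and `clear_cursor()` once `has_more` is `false`.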