Compare commits


36 Commits

Author  SHA1  Message  Date
Mateusz Gruszczyński  e2017b8344  more conditions in automations  2026-05-06 21:58:49 +02:00
Mateusz Gruszczyński  21cbd5baa6  remove actions in peer  2026-05-06 13:57:26 +02:00
Mateusz Gruszczyński  c19ff17134  offline libs  2026-05-06 11:25:41 +02:00
Mateusz Gruszczyński  6587e74892  about and errorpages  2026-05-06 11:06:08 +02:00
Mateusz Gruszczyński  1baf4a8495  headers  2026-05-06 08:50:40 +02:00
Mateusz Gruszczyński  890e3d1564  headers  2026-05-06 08:47:20 +02:00
Mateusz Gruszczyński  dc1cac4e6f  add auth support  2026-05-06 08:38:07 +02:00
Mateusz Gruszczyński  aea3c92830  fix queue  2026-05-05 19:19:47 +02:00
Mateusz Gruszczyński  45cb6cbb3a  fix queue  2026-05-05 18:52:34 +02:00
Mateusz Gruszczyński  eedfce7207  fix queue  2026-05-05 17:48:21 +02:00
Mateusz Gruszczyński  fc5fedbde2  fix queue  2026-05-05 17:29:45 +02:00
Mateusz Gruszczyński  0f6f9d740c  smart queue fix  2026-05-05 16:01:00 +02:00
Mateusz Gruszczyński  0e0c3359ee  smart queue fix  2026-05-05 15:43:56 +02:00
Mateusz Gruszczyński  912d89abba  smart queue fix  2026-05-05 15:39:32 +02:00
Mateusz Gruszczyński  48bf3ca209  smart queue fix  2026-05-05 15:35:41 +02:00
Mateusz Gruszczyński  c3d12bde46  smart queue fix  2026-05-05 15:25:33 +02:00
Mateusz Gruszczyński  904f36e06f  smart queue fix  2026-05-05 14:43:58 +02:00
Mateusz Gruszczyński  df8750bacb  smart queue fix  2026-05-05 14:30:19 +02:00
Mateusz Gruszczyński  b74b32dd21  smart queue fix  2026-05-05 14:14:47 +02:00
Mateusz Gruszczyński  2d19481c4c  queue changes  2026-05-05 14:03:31 +02:00
Mateusz Gruszczyński  dc78f8fd38  fix in queue  2026-05-05 11:07:54 +02:00
Mateusz Gruszczyński  eb3d743500  fix in queue  2026-05-05 10:50:01 +02:00
Mateusz Gruszczyński  5874f8669d  fix in queue  2026-05-05 10:47:54 +02:00
Mateusz Gruszczyński  08772ddda5  smart queue fix  2026-05-05 09:53:14 +02:00
Mateusz Gruszczyński  3c14a6f510  new functions  2026-05-05 08:21:36 +02:00
Mateusz Gruszczyński  0b580f590e  new functions  2026-05-05 08:08:48 +02:00
Mateusz Gruszczyński  fa0d2f13fe  new functions  2026-05-05 08:00:32 +02:00
gru  4b236c21f8  Merge pull request 'gunicorn' (#1) from gunicorn into master (Reviewed-on: #1)  2026-05-05 07:23:23 +02:00
Mateusz Gruszczyński  0dcdf0e22b  filters and jobs finished date  2026-05-04 22:57:39 +02:00
Mateusz Gruszczyński  879c60d563  execute.capture fix  2026-05-04 21:37:02 +02:00
Mateusz Gruszczyński  d5b7d97528  badge colors  2026-05-04 21:14:17 +02:00
Mateusz Gruszczyński  1ff1525f0b  bulk-part-jobs, and scgi retries  2026-05-04 21:08:30 +02:00
Mateusz Gruszczyński  d55533d78a  bulk-part-jobs  2026-05-04 20:12:26 +02:00
Mateusz Gruszczyński  e9940bf16c  force kill job  2026-05-04 19:21:09 +02:00
Mateusz Gruszczyński  dca7389a1a  gunicorn  2026-05-04 10:57:03 +02:00
Mateusz Gruszczyński  31bba1269d  gunicorn  2026-05-04 10:43:31 +02:00
34 changed files with 4042 additions and 719 deletions


@@ -6,6 +6,19 @@ PYTORRENT_DEBUG=0
 PYTORRENT_POLL_INTERVAL=1.0
 PYTORRENT_WORKERS=16
 PYTORRENT_GEOIP_DB=data/GeoLite2-City.mmdb
+PYTORRENT_ALLOW_UNSAFE_WERKZEUG=0
+PYTORRENT_SCGI_RETRIES=8
+# css/js libs
+PYTORRENT_USE_OFFLINE_LIBS=false
+# python -m pytorrent.cli reset-password admin new_Pass
+PYTORRENT_AUTH_ENABLE=false
+# Reverse proxy / HTTPS
+PYTORRENT_PROXY_FIX_ENABLE=false
+PYTORRENT_SESSION_COOKIE_SECURE=false
+# PYTORRENT_SOCKETIO_CORS_ALLOWED_ORIGINS=https://your-domain.com

 # Retention / Smart Queue
 PYTORRENT_TRAFFIC_HISTORY_RETENTION_DAYS=90
@@ -13,4 +26,3 @@ PYTORRENT_JOBS_RETENTION_DAYS=30
 PYTORRENT_SMART_QUEUE_HISTORY_RETENTION_DAYS=30
 PYTORRENT_LOG_RETENTION_DAYS=30
 PYTORRENT_SMART_QUEUE_LABEL="Smart Queue Paused"

.gitignore (vendored)

@@ -37,3 +37,4 @@ data/*
 logs/*
 todo.txt
+pytorrent/static/libs/*

README.md

@@ -1,33 +1,33 @@
# pyTorrent

Single-page web UI for rTorrent inspired by the ruTorrent workflow.

## Features

- Flask + Flask-SocketIO.
- SQLite storage for preferences, SCGI profiles, the Bootstrap theme and the UI font.
- Multiple rTorrent profiles per user.
- Profiles can be added and edited from the UI; the remote profile flag hides local CPU/RAM usage to avoid confusing it with remote rTorrent host resources.
- Active rTorrent profile switching from the UI.
- Live torrent list over WebSocket.
- Application-side cache with patch updates instead of full table reloads.
- User operations executed through a ThreadPoolExecutor.
- `move` and `remove` actions are executed per profile in request order, so later deletes wait for earlier moves.
- The job log shows a short date/time in the table and the full timestamp in a tooltip.
- Bulk start, pause, stop, resume, recheck, remove and move.
- Move supports `move_data=true`; data is physically moved on the rTorrent side in the background and status is polled from a marker file, so long `mv` operations do not hit the SCGI timeout.
- Multi-magnet add modal.
- Bottom status bar with CPU, RAM, rTorrent version, speeds, limits, total DL/UP and port-check status when enabled.
- Torrent context menu.
- Keyboard shortcuts.
- Details tabs: General, Files, Peers, Trackers and Log.
- Smart Queue shows the last 10 operations by default and can expand the history to 100 rows.
- Peer GeoIP via MaxMind GeoLite2-City.mmdb, with an IP cache.
- Static cache busting with MD5 hashes and cache headers.
- Appearance preferences: default Bootstrap or the Bootswatch themes Flatly, Litera, Lumen, Minty, Sketchy, Solar, Spacelab, United and Zephyr.
- Font preferences: the default theme font, Adwaita Mono and additional matching fonts.

## Run locally

```bash
./install.sh
@@ -35,17 +35,54 @@
python app.py
```

Default URL: `http://127.0.0.1:8090`.

## Production run

Preferred mode, without the development Werkzeug server:

```bash
. venv/bin/activate
gunicorn --worker-class gthread --workers 1 --threads 32 --bind 0.0.0.0:8090 --access-logfile - --error-logfile - wsgi:app
```

Note: the app keeps `async_mode="threading"`, so WebSocket, `start_background_task`, the operation queues and the poller run in the same model as before.

Alternatives reviewed but not enabled by default:

- Gunicorn with `eventlet`: works with Flask-SocketIO, but requires green threads and monkey patching, which increases regression risk for file and SCGI operations.
- Gunicorn with `gevent`: a valid production option, but it needs extra dependencies and compatibility testing.
- Multiple Gunicorn workers: requires Redis, RabbitMQ or Kafka as the Socket.IO message queue, so it is not a drop-in replacement.

## Reverse proxy

When pyTorrent is served behind a reverse proxy, enable proxy header handling only when the proxy is trusted:

```env
PYTORRENT_PROXY_FIX_ENABLE=true
PYTORRENT_SESSION_COOKIE_SECURE=true
```

The proxy should forward at least:

```txt
X-Forwarded-For
X-Forwarded-Proto
X-Forwarded-Host
X-Forwarded-Port
```

This keeps login redirects, session cookies and same-origin API checks correct when HTTPS is terminated by the proxy. If pyTorrent is mounted under a sub-path, also forward `X-Forwarded-Prefix`.
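
Behind nginx, for example, those headers can be set explicitly; a minimal sketch (the server name and upstream port here are placeholders, not part of this change):

```nginx
server {
    listen 443 ssl;
    server_name torrents.example.com;

    location / {
        proxy_pass http://127.0.0.1:8090;
        proxy_http_version 1.1;
        # WebSocket upgrade for Flask-SocketIO
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Headers consumed by ProxyFix
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}
```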
## SCGI profile

Example:

```txt
scgi://127.0.0.1:5000/RPC2
```

On the rTorrent side:

```txt
network.scgi.open_port = 127.0.0.1:5000
@@ -53,22 +90,39 @@

## GeoIP

The installer downloads the GeoLite2-City database once to:

```txt
data/GeoLite2-City.mmdb
```

Manual download:

```bash
./scripts/download_geoip.sh
```

The script uses `https://git.io/GeoLite2-City.mmdb` as the primary source and `https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-City.mmdb` as the fallback. The `data` directory is set to `755` and the database file to `644`.

## API docs

OpenAPI documentation is available at `/docs`. `/api/profiles` supports `max_parallel_jobs` (default `5`) and `is_remote`; `PUT /api/profiles/{profile_id}` edits an existing profile. `/api/preferences` supports fields including `theme`, `bootstrap_theme`, `font_family`, `table_columns_json`, `peers_refresh_seconds` and `port_check_enabled`. `/api/port-check` returns the port status with `checked_at`; for remote profiles the public IP is read through rTorrent with the fallbacks `ifconfig.co`, `ifconfig.me` and `ipapi.linuxiarz.pl` when that rTorrent configuration supports remote commands, and `POST` forces a fresh check that bypasses the cache. `/api/system/status` returns `usage_available=false` for remote profiles and does not read local CPU/RAM.

`/api/openapi.json` includes reusable schemas for the main API responses, including `TorrentListResponse`, `TorrentSummary`, `TorrentFilterSummary`, `CleanupSummary` and `AppStatus`. `GET /api/torrents` now documents the `summary` field used by the sidebar filters.

## Admin CLI

Reset an existing user's password:

```bash
. venv/bin/activate
python -m pytorrent.cli reset-password admin new_password
```

Without the password argument, the CLI asks for it interactively:

```bash
python -m pytorrent.cli reset-password admin
```

The command uses the same database as the app and respects `PYTORRENT_DB_PATH` from `.env`. The reset changes only the password hash and leaves role and permissions unchanged.

app.py

@@ -1,7 +1,14 @@
 from pytorrent import create_app, socketio
-from pytorrent.config import HOST, PORT, DEBUG
+from pytorrent.config import ALLOW_UNSAFE_WERKZEUG, DEBUG, HOST, PORT

 app = create_app()

 if __name__ == "__main__":
-    socketio.run(app, host=HOST, port=PORT, debug=DEBUG, allow_unsafe_werkzeug=True)
+    # Note: This entrypoint is kept for local development; production should use gunicorn via wsgi:app.
+    socketio.run(
+        app,
+        host=HOST,
+        port=PORT,
+        debug=DEBUG,
+        allow_unsafe_werkzeug=ALLOW_UNSAFE_WERKZEUG,
+    )


@@ -8,20 +8,16 @@ Wants=network-online.target

 [Service]
 Type=simple
-#User=root
-#Group=root
 User=pytorrent
 Group=pytorrent
 WorkingDirectory=/opt/pyTorrent
 Environment="PYTHONUNBUFFERED=1"
 EnvironmentFile=/opt/pyTorrent/.env
-ExecStart=/opt/pyTorrent/venv/bin/python /opt/pyTorrent/app.py
+ExecStart=/opt/pyTorrent/venv/bin/gunicorn -c /opt/pyTorrent/gunicorn.conf.py --worker-class gthread --workers 1 --threads 32 --bind ${PYTORRENT_HOST}:${PYTORRENT_PORT} --access-logfile - --error-logfile - wsgi:app
 Restart=always
 RestartSec=3
 KillSignal=SIGINT
 TimeoutStopSec=20
-# opcjonalnie
 NoNewPrivileges=true
 PrivateTmp=true

gunicorn.conf.py (new file)

@@ -0,0 +1,3 @@
+import gunicorn.http.wsgi
+
+gunicorn.http.wsgi.SERVER = "pyTorrent"


@@ -5,6 +5,8 @@ python3 -m venv venv
 pip install --upgrade pip
 pip install -r requirements.txt
 cp -n .env.example .env || true
+grep -q '^PYTORRENT_USE_OFFLINE_LIBS=' .env || echo 'PYTORRENT_USE_OFFLINE_LIBS=true' >> .env
+./scripts/download_frontend_libs.py
 mkdir -p data
 chmod 755 data
 ./scripts/download_geoip.sh data/GeoLite2-City.mmdb


@@ -46,7 +46,7 @@ def make_zip(repo_path: Path, output_zip: Path) -> None:
             zf.write(abs_path, arcname=rel_path)
     print(f"Utworzono archiwum: {output_zip}")
-    print(f"Dodano plików: {len(files)}")
+    print(f"Added files: {len(files)}")


 def main():
@@ -60,7 +60,7 @@ def main():
     try:
         run_git_command(["rev-parse", "--show-toplevel"], repo_path)
     except subprocess.CalledProcessError:
-        print("Błąd: ten katalog nie jest repozytorium Git.", file=sys.stderr)
+        print("Error: this directory is not a Git repository.", file=sys.stderr)
         sys.exit(1)
     make_zip(repo_path, output_zip)


@@ -1,19 +1,79 @@
 from __future__ import annotations

 from pathlib import Path

-from flask import Flask, request, url_for
+from flask import Flask, jsonify, render_template, request, url_for
 from flask_socketio import SocketIO
+from werkzeug.middleware.proxy_fix import ProxyFix

-from .config import SECRET_KEY
+from .config import (
+    SECRET_KEY,
+    SESSION_COOKIE_SECURE,
+    PROXY_FIX_ENABLE,
+    PROXY_FIX_X_FOR,
+    PROXY_FIX_X_PROTO,
+    PROXY_FIX_X_HOST,
+    PROXY_FIX_X_PORT,
+    PROXY_FIX_X_PREFIX,
+    SOCKETIO_CORS_ALLOWED_ORIGINS,
+)
 from .db import init_db
+from .services.frontend_assets import asset_path, bootstrap_css_path, validate_offline_assets
 from .utils import file_md5

-socketio = SocketIO(cors_allowed_origins="*", ping_timeout=30, async_mode="threading")
+socketio = SocketIO(cors_allowed_origins=SOCKETIO_CORS_ALLOWED_ORIGINS, ping_timeout=30, async_mode="threading")
 _static_md5_cache: dict[tuple, str] = {}

+
+def _wants_json_response() -> bool:
+    """Return true for API/error clients that should not receive an HTML page."""
+    best = request.accept_mimetypes.best_match(["application/json", "text/html"])
+    return request.path.startswith("/api/") or best == "application/json"
+
+
+def register_error_pages(app: Flask) -> None:
+    # Note: custom error pages replace the generic 404/500 while keeping JSON responses for the API.
+    @app.errorhandler(404)
+    def not_found(error):
+        if _wants_json_response():
+            return jsonify({"ok": False, "error": "Not found"}), 404
+        return render_template(
+            "error.html",
+            code=404,
+            title="Page not found",
+            message="The requested pyTorrent view does not exist or is not available.",
+            icon="fa-compass-drafting",
+        ), 404
+
+    @app.errorhandler(500)
+    def server_error(error):
+        if _wants_json_response():
+            return jsonify({"ok": False, "error": "Internal server error"}), 500
+        return render_template(
+            "error.html",
+            code=500,
+            title="Application error",
+            message="pyTorrent hit an internal error while handling this request.",
+            icon="fa-bug",
+        ), 500
+

 def create_app() -> Flask:
+    validate_offline_assets()
     app = Flask(__name__)
+    if PROXY_FIX_ENABLE:
+        app.wsgi_app = ProxyFix(
+            app.wsgi_app,
+            x_for=PROXY_FIX_X_FOR,
+            x_proto=PROXY_FIX_X_PROTO,
+            x_host=PROXY_FIX_X_HOST,
+            x_port=PROXY_FIX_X_PORT,
+            x_prefix=PROXY_FIX_X_PREFIX,
+        )
     app.secret_key = SECRET_KEY
+    app.config.update(
+        SESSION_COOKIE_HTTPONLY=True,
+        SESSION_COOKIE_SAMESITE="Lax",
+        SESSION_COOKIE_SECURE=SESSION_COOKIE_SECURE,
+    )

     @app.context_processor
     def static_helpers():
@@ -30,7 +90,21 @@ def create_app() -> Flask:
                 return url_for("static", filename=filename, v=version)
             except OSError:
                 return url_for("static", filename=filename)
-        return {"static_url": static_url}
+
+        def frontend_asset_url(key: str) -> str:
+            # Note: this helper switches templates between the CDN and local files without duplicating logic.
+            path = asset_path(key)
+            return path if path.startswith("http") else static_url(path)
+
+        def bootstrap_theme_url(theme: str | None = None) -> str:
+            path = bootstrap_css_path(theme)
+            return path if path.startswith("http") else static_url(path)
+
+        return {
+            "static_url": static_url,
+            "frontend_asset_url": frontend_asset_url,
+            "bootstrap_theme_url": bootstrap_theme_url,
+        }

     @app.after_request
     def cache_headers(response):
@@ -39,16 +113,17 @@ def create_app() -> Flask:
         if request.endpoint == "static":
             response.headers["Cache-Control"] = "public, max-age=31536000, immutable"
         else:
-            response.headers["Cache-Control"] = "no-cache, no-store, must-revalidate"
-            response.headers["Pragma"] = "no-cache"
-            response.headers["Expires"] = "0"
+            response.headers["Cache-Control"] = "no-store, private"
         return response

     from .routes.main import bp as main_bp
     from .routes.api import bp as api_bp

     app.register_blueprint(main_bp)
     app.register_blueprint(api_bp)
+    register_error_pages(app)
     init_db()
+
+    from .services.auth import install_guards
+    install_guards(app)
     socketio.init_app(app)

     from .services.workers import set_socketio
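
The `_wants_json_response` helper in the hunk above keys off the request path and the Accept header. A standalone sketch of the same decision, using plain values instead of Flask's `request` object (the function and argument names here are illustrative, and the header matching is cruder than Flask's q-value parsing):

```python
def wants_json(path: str, accept: str) -> bool:
    # API routes always get JSON, regardless of the Accept header.
    if path.startswith("/api/"):
        return True
    # Otherwise prefer JSON only when it is listed before text/html.
    offers = [part.split(";")[0].strip() for part in accept.split(",")]
    for offer in offers:
        if offer == "application/json":
            return True
        if offer in ("text/html", "*/*"):
            return False
    return False

print(wants_json("/api/torrents", "text/html"))               # True: API path wins
print(wants_json("/torrents", "application/json"))            # True
print(wants_json("/torrents", "text/html,application/json"))  # False
```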

pytorrent/cli.py (new file)

@@ -0,0 +1,79 @@
+from __future__ import annotations
+
+import argparse
+import getpass
+import sys
+
+from .db import connect, init_db, utcnow
+from .services.auth import password_hash
+
+
+def reset_password(username: str, password: str) -> bool:
+    """Note: Reset the selected user password hash without changing role or permissions."""
+    username = (username or "").strip()
+    if not username:
+        raise ValueError("Username is required")
+    if password is None or password == "":
+        raise ValueError("Password cannot be empty")
+    init_db()
+    now = utcnow()
+    hashed = password_hash(password)
+    with connect() as conn:
+        row = conn.execute("SELECT id FROM users WHERE username=?", (username,)).fetchone()
+        if not row:
+            return False
+        conn.execute(
+            "UPDATE users SET password_hash=?, updated_at=? WHERE username=?",
+            (hashed, now, username),
+        )
+    return True
+
+
+def _password_from_args(args: argparse.Namespace) -> str:
+    """Note: Allow the password to be passed as an argument or entered securely in interactive mode."""
+    if args.password is not None:
+        return args.password
+    first = getpass.getpass("New password: ")
+    second = getpass.getpass("Repeat password: ")
+    if first != second:
+        raise ValueError("Passwords do not match")
+    return first
+
+
+def build_parser() -> argparse.ArgumentParser:
+    """Note: Define simple administrative commands launched with python -m pytorrent.cli."""
+    parser = argparse.ArgumentParser(description="pyTorrent CLI")
+    sub = parser.add_subparsers(dest="command", required=True)
+    reset = sub.add_parser("reset-password", help="Reset password for an existing user")
+    reset.add_argument("username", help="User login")
+    reset.add_argument("password", nargs="?", help="New password; omit to type it interactively")
+    reset.set_defaults(func=_cmd_reset_password)
+    return parser
+
+
+def _cmd_reset_password(args: argparse.Namespace) -> int:
+    """Note: Run the password reset and return a readable terminal status."""
+    password = _password_from_args(args)
+    if reset_password(args.username, password):
+        print(f"Password reset for user: {args.username}")
+        return 0
+    print(f"User not found: {args.username}", file=sys.stderr)
+    return 1
+
+
+def main(argv: list[str] | None = None) -> int:
+    """Note: Main CLI entrypoint with error handling and without starting the web app."""
+    parser = build_parser()
+    args = parser.parse_args(argv)
+    try:
+        return int(args.func(args) or 0)
+    except Exception as exc:
+        print(f"Error: {exc}", file=sys.stderr)
+        return 1
+
+
+if __name__ == "__main__":
+    raise SystemExit(main())


@@ -1,20 +1,46 @@
 from __future__ import annotations

 import os
+import secrets
 from pathlib import Path

 from dotenv import load_dotenv

 BASE_DIR = Path(__file__).resolve().parent.parent
 load_dotenv(BASE_DIR / ".env")

-SECRET_KEY = os.getenv("PYTORRENT_SECRET_KEY", "dev-change-me")
+
+def _env_bool(name: str, default: bool = False) -> bool:
+    value = os.getenv(name)
+    if value is None:
+        return default
+    return value.strip().lower() in {"1", "true", "yes", "on"}
+
+
+_SECRET_KEY_ENV = os.getenv("PYTORRENT_SECRET_KEY")
+SECRET_KEY = _SECRET_KEY_ENV or "dev-change-me"
 DB_PATH = Path(os.getenv("PYTORRENT_DB_PATH", str(BASE_DIR / "data" / "pytorrent.sqlite3")))
 if not DB_PATH.is_absolute():
     DB_PATH = BASE_DIR / DB_PATH
 HOST = os.getenv("PYTORRENT_HOST", "0.0.0.0")
 PORT = int(os.getenv("PYTORRENT_PORT", "8090"))
-DEBUG = os.getenv("PYTORRENT_DEBUG", "0") == "1"
+DEBUG = _env_bool("PYTORRENT_DEBUG", False)
+# Note: offline mode forces local JS/CSS and removes the dependency on a CDN.
+USE_OFFLINE_LIBS = _env_bool("PYTORRENT_USE_OFFLINE_LIBS", False)
+# Note: Optional authentication remains disabled unless explicitly enabled in .env.
+AUTH_ENABLE = _env_bool("PYTORRENT_AUTH_ENABLE", False)
+if AUTH_ENABLE and (not _SECRET_KEY_ENV or SECRET_KEY == "dev-change-me"):
+    # Note: Auth mode cannot use Flask's development secret; persist a local random session key instead.
+    _secret_file = BASE_DIR / "data" / ".session_secret"
+    _secret_file.parent.mkdir(parents=True, exist_ok=True)
+    if _secret_file.exists():
+        SECRET_KEY = _secret_file.read_text(encoding="utf-8").strip()
+    if not SECRET_KEY or SECRET_KEY == "dev-change-me":
+        SECRET_KEY = secrets.token_urlsafe(48)
+        _secret_file.write_text(SECRET_KEY, encoding="utf-8")
+SESSION_COOKIE_SECURE = _env_bool("PYTORRENT_SESSION_COOKIE_SECURE", False)
+# Note: Keep Werkzeug opt-in only for explicit local/dev use, never by default in services.
+ALLOW_UNSAFE_WERKZEUG = _env_bool("PYTORRENT_ALLOW_UNSAFE_WERKZEUG", DEBUG)
 POLL_INTERVAL = float(os.getenv("PYTORRENT_POLL_INTERVAL", "1.0"))
 WORKERS = int(os.getenv("PYTORRENT_WORKERS", "16"))
 GEOIP_DB = Path(os.getenv("PYTORRENT_GEOIP_DB", str(BASE_DIR / "data" / "GeoLite2-City.mmdb")))
@@ -29,6 +55,17 @@ def _env_int(name: str, default: int, minimum: int = 0) -> int:
     return default

+
+PROXY_FIX_ENABLE = _env_bool("PYTORRENT_PROXY_FIX_ENABLE", False)
+PROXY_FIX_X_FOR = _env_int("PYTORRENT_PROXY_FIX_X_FOR", 1, 0)
+PROXY_FIX_X_PROTO = _env_int("PYTORRENT_PROXY_FIX_X_PROTO", 1, 0)
+PROXY_FIX_X_HOST = _env_int("PYTORRENT_PROXY_FIX_X_HOST", 1, 0)
+PROXY_FIX_X_PORT = _env_int("PYTORRENT_PROXY_FIX_X_PORT", 1, 0)
+PROXY_FIX_X_PREFIX = _env_int("PYTORRENT_PROXY_FIX_X_PREFIX", 1, 0)
+
+_SOCKETIO_CORS = os.getenv("PYTORRENT_SOCKETIO_CORS_ALLOWED_ORIGINS", "").strip()
+SOCKETIO_CORS_ALLOWED_ORIGINS = None if not _SOCKETIO_CORS else [item.strip() for item in _SOCKETIO_CORS.split(",") if item.strip()]
+
 TRAFFIC_HISTORY_RETENTION_DAYS = _env_int("PYTORRENT_TRAFFIC_HISTORY_RETENTION_DAYS", 90, 1)
 JOBS_RETENTION_DAYS = _env_int("PYTORRENT_JOBS_RETENTION_DAYS", 30, 1)
 SMART_QUEUE_HISTORY_RETENTION_DAYS = _env_int("PYTORRENT_SMART_QUEUE_HISTORY_RETENTION_DAYS", 30, 1)


@@ -10,7 +10,20 @@ CREATE TABLE IF NOT EXISTS users (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
     username TEXT UNIQUE NOT NULL,
     password_hash TEXT,
-    created_at TEXT NOT NULL
+    role TEXT DEFAULT 'user',
+    is_active INTEGER DEFAULT 1,
+    created_at TEXT NOT NULL,
+    updated_at TEXT
+);
+
+CREATE TABLE IF NOT EXISTS user_profile_permissions (
+    user_id INTEGER NOT NULL,
+    profile_id INTEGER NOT NULL DEFAULT 0,
+    access_level TEXT NOT NULL DEFAULT 'ro',
+    created_at TEXT NOT NULL,
+    updated_at TEXT NOT NULL,
+    PRIMARY KEY(user_id, profile_id),
+    FOREIGN KEY(user_id) REFERENCES users(id)
 );

 CREATE TABLE IF NOT EXISTS user_preferences (
@@ -126,6 +139,7 @@ CREATE TABLE IF NOT EXISTS smart_queue_settings (
     stalled_seconds INTEGER DEFAULT 300,
     min_speed_bytes INTEGER DEFAULT 1024,
     min_seeds INTEGER DEFAULT 1,
+    manage_stopped INTEGER DEFAULT 0,
     updated_at TEXT NOT NULL,
     PRIMARY KEY(user_id, profile_id)
 );
@@ -234,9 +248,20 @@ CREATE TABLE IF NOT EXISTS app_settings (
     key TEXT PRIMARY KEY,
     value TEXT
 );
+
+CREATE TABLE IF NOT EXISTS torrent_stats_cache (
+    profile_id INTEGER PRIMARY KEY,
+    payload_json TEXT NOT NULL,
+    created_at TEXT NOT NULL,
+    updated_at TEXT NOT NULL,
+    updated_epoch REAL DEFAULT 0
+);
 """

 MIGRATIONS = [
+    "ALTER TABLE users ADD COLUMN role TEXT DEFAULT 'user'",
+    "ALTER TABLE users ADD COLUMN is_active INTEGER DEFAULT 1",
+    "ALTER TABLE users ADD COLUMN updated_at TEXT",
     "ALTER TABLE user_preferences ADD COLUMN mobile_mode INTEGER DEFAULT 0",
     "ALTER TABLE user_preferences ADD COLUMN peers_refresh_seconds INTEGER DEFAULT 0",
     "ALTER TABLE user_preferences ADD COLUMN port_check_enabled INTEGER DEFAULT 0",
@@ -253,6 +278,8 @@ MIGRATIONS = [
     "ALTER TABLE automation_rules ADD COLUMN cooldown_minutes INTEGER DEFAULT 60",
     "ALTER TABLE rtorrent_config_overrides ADD COLUMN apply_on_start INTEGER DEFAULT 0",
     "ALTER TABLE rtorrent_config_overrides ADD COLUMN baseline_value TEXT",
+    "ALTER TABLE torrent_stats_cache ADD COLUMN updated_epoch REAL DEFAULT 0",
+    "ALTER TABLE smart_queue_settings ADD COLUMN manage_stopped INTEGER DEFAULT 0",
 ]
@@ -288,15 +315,21 @@ def init_db():
             pass
     now = utcnow()
     conn.execute(
-        "INSERT OR IGNORE INTO users(id, username, password_hash, created_at) VALUES(1, 'default', NULL, ?)",
-        (now,),
+        "INSERT OR IGNORE INTO users(id, username, password_hash, role, is_active, created_at, updated_at) VALUES(1, 'default', NULL, 'admin', 1, ?, ?)",
+        (now, now),
     )
+    conn.execute("UPDATE users SET role=COALESCE(role, 'admin'), is_active=COALESCE(is_active, 1), updated_at=COALESCE(updated_at, ?) WHERE id=1", (now,))
     pref = conn.execute("SELECT id FROM user_preferences WHERE user_id=1").fetchone()
     if not pref:
         conn.execute(
             "INSERT INTO user_preferences(user_id, theme, created_at, updated_at) VALUES(1, 'dark', ?, ?)",
             (now, now),
         )
+    try:
+        from .services.auth import ensure_admin_user
+        ensure_admin_user()
+    except Exception:
+        pass


 def default_user_id() -> int:
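
A `MIGRATIONS` list of plain `ALTER TABLE` statements like the one above is typically applied by attempting each statement and swallowing the "duplicate column" error, which makes reruns idempotent. A minimal self-contained sketch of that pattern (the table and statements here are illustrative; this is not pyTorrent's actual migration runner):

```python
import sqlite3

MIGRATIONS = [
    "ALTER TABLE users ADD COLUMN role TEXT DEFAULT 'user'",
    "ALTER TABLE users ADD COLUMN is_active INTEGER DEFAULT 1",
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
for stmt in MIGRATIONS * 2:  # running the list twice demonstrates idempotence
    try:
        conn.execute(stmt)
    except sqlite3.OperationalError:
        pass  # column already exists -> this migration already ran
cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
print(cols)  # ['id', 'username', 'role', 'is_active']
```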


@@ -13,17 +13,91 @@ import socket
import json
import psutil
import xml.etree.ElementTree as ET
from flask import Blueprint, jsonify, request, abort
from ..config import DB_PATH, JOBS_RETENTION_DAYS, SMART_QUEUE_HISTORY_RETENTION_DAYS, WORKERS
from ..db import connect, utcnow
from ..services.auth import current_user_id as default_user_id, current_user, list_users, save_user, delete_user, login_user, logout_user, enabled as auth_enabled, require_profile_write
from ..services import preferences, rtorrent, torrent_stats
from ..services.torrent_cache import torrent_cache
from ..services.torrent_summary import cached_summary
from ..services.workers import enqueue, list_jobs, cancel_job, retry_job, clear_jobs, emergency_clear_jobs
from ..services.geoip import lookup_ip
bp = Blueprint("api", __name__, url_prefix="/api")
MOVE_BULK_MAX_HASHES = 100
@bp.post("/auth/login")
def auth_login():
# Note: Auth API is hidden when optional authentication is disabled.
if not auth_enabled():
abort(404)
data = request.get_json(silent=True) or {}
user = login_user(str(data.get("username") or ""), str(data.get("password") or ""))
if not user:
return jsonify({"ok": False, "error": "Invalid username or password"}), 401
return ok({"user": user, "auth_enabled": auth_enabled()})
@bp.get("/auth/me")
def auth_me():
if not auth_enabled():
abort(404)
return ok({"user": current_user(), "auth_enabled": auth_enabled()})
@bp.post("/auth/logout")
def auth_logout():
if not auth_enabled():
abort(404)
logout_user()
return ok()
@bp.get("/auth/users")
def auth_users_list():
if not auth_enabled():
abort(404)
return ok({"users": list_users()})
@bp.post("/auth/users")
def auth_users_create():
if not auth_enabled():
abort(404)
try:
return ok({"user": save_user(request.get_json(silent=True) or {})})
except Exception as exc:
return jsonify({"ok": False, "error": str(exc)}), 400
@bp.put("/auth/users/<int:user_id>")
def auth_users_update(user_id: int):
if not auth_enabled():
abort(404)
try:
return ok({"user": save_user(request.get_json(silent=True) or {}, user_id)})
except Exception as exc:
return jsonify({"ok": False, "error": str(exc)}), 400
@bp.delete("/auth/users/<int:user_id>")
def auth_users_delete(user_id: int):
if not auth_enabled():
abort(404)
try:
delete_user(user_id)
return ok({"users": list_users()})
except Exception as exc:
return jsonify({"ok": False, "error": str(exc)}), 400
def _job_profile_id(job_id: str) -> int | None:
with connect() as conn:
row = conn.execute("SELECT profile_id FROM jobs WHERE id=?", (job_id,)).fetchone()
return int(row.get("profile_id") or 0) if row else None
def ok(payload=None):
data = {"ok": True}
@@ -303,6 +377,52 @@ def enrich_bulk_payload(profile: dict, action_name: str, data: dict) -> dict:
return payload
def _chunk_hashes(hashes: list[str], size: int = MOVE_BULK_MAX_HASHES) -> list[list[str]]:
# Note: Splits very large torrent selections into predictable chunks so each queued job stays small and recoverable.
safe_size = max(1, int(size or MOVE_BULK_MAX_HASHES))
return [hashes[index:index + safe_size] for index in range(0, len(hashes), safe_size)]
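The chunking is plain list slicing; its boundary behavior can be sketched standalone (the default of 100 mirrors MOVE_BULK_MAX_HASHES, and the falsy-size fallback matches `size or MOVE_BULK_MAX_HASHES` above):

```python
def chunk_hashes(hashes, size=100):
    # Falsy size falls back to the default; max(1, ...) guards a negative value.
    safe_size = max(1, int(size or 100))
    return [hashes[i:i + safe_size] for i in range(0, len(hashes), safe_size)]

hashes = [f"HASH{i:03d}" for i in range(250)]
parts = chunk_hashes(hashes)
print([len(p) for p in parts])      # [100, 100, 50]
print(chunk_hashes([]))             # [] -> no jobs enqueued for an empty selection
print(chunk_hashes(["A"], size=0))  # [['A']] -> size=0 falls back to the default
```

A 250-hash move therefore becomes three ordered jobs labelled bulk-1 through bulk-3, each small enough to retry independently.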
def enqueue_bulk_parts(profile: dict, action_name: str, data: dict) -> list[dict]:
# Note: One shared helper splits large move/remove operations into small ordered parts without changing other actions.
base_payload = enrich_bulk_payload(profile, action_name, data)
hashes = base_payload.get("hashes") or []
chunks = _chunk_hashes(hashes)
if len(chunks) <= 1:
job_id = enqueue(action_name, profile["id"], base_payload)
return [{"job_id": job_id, "label": "bulk-1", "part": 1, "parts": 1, "hashes": hashes, "hash_count": len(hashes)}]
jobs = []
items_by_hash = {str(item.get("hash")): item for item in (base_payload.get("job_context") or {}).get("items") or []}
for index, chunk in enumerate(chunks, start=1):
payload = dict(base_payload)
payload["hashes"] = chunk
context = dict(base_payload.get("job_context") or {})
context.update({
"bulk": True,
"bulk_label": f"bulk-{index}",
"bulk_part": index,
"bulk_parts": len(chunks),
"hash_count": len(chunk),
"parent_hash_count": len(hashes),
"items": [items_by_hash[h] for h in chunk if h in items_by_hash],
})
payload["job_context"] = context
job_id = enqueue(action_name, profile["id"], payload)
jobs.append({"job_id": job_id, "label": context["bulk_label"], "part": index, "parts": len(chunks), "hashes": chunk, "hash_count": len(chunk)})
return jobs
def enqueue_move_bulk_parts(profile: dict, data: dict) -> list[dict]:
# Note: Keep the old public move helper while using the same partitioning logic.
return enqueue_bulk_parts(profile, "move", data)
def enqueue_remove_bulk_parts(profile: dict, data: dict) -> list[dict]:
# Note: Remove/rm uses the same partitioning as move, which lowers rTorrent load.
return enqueue_bulk_parts(profile, "remove", data)
@bp.get("/profiles") @bp.get("/profiles")
def profiles_list(): def profiles_list():
return ok({"profiles": preferences.list_profiles(), "active": preferences.active_profile()}) return ok({"profiles": preferences.list_profiles(), "active": preferences.active_profile()})
@@ -362,6 +482,19 @@ def torrents():
})
@bp.get("/torrent-stats")
def torrent_stats_get():
profile = preferences.active_profile()
if not profile:
return ok({"stats": {}, "error": "No profile"})
force = str(request.args.get("force") or "").lower() in {"1", "true", "yes"}
try:
# Note: Heavy file metadata is served from a 15-minute DB cache unless the user explicitly refreshes it.
return ok({"stats": torrent_stats.get(profile, force=force)})
except Exception as exc:
return jsonify({"ok": False, "error": str(exc)}), 500
@bp.get("/torrents/<torrent_hash>/files") @bp.get("/torrents/<torrent_hash>/files")
def torrent_files(torrent_hash: str): def torrent_files(torrent_hash: str):
profile = preferences.active_profile() profile = preferences.active_profile()
@@ -395,19 +528,6 @@ def torrent_peers(torrent_hash: str):
return ok({"peers": peers}) return ok({"peers": peers})
@bp.post("/torrents/<torrent_hash>/peers/action")
def torrent_peer_action(torrent_hash: str):
profile = preferences.active_profile()
if not profile:
return jsonify({"ok": False, "error": "No profile"}), 400
data = request.get_json(silent=True) or {}
try:
result = rtorrent.peer_action(profile, torrent_hash, int(data.get("peer_index")), str(data.get("action") or ""))
return ok({"result": result, "message": f"Peer {result['action']} via {result['method']}"})
except Exception as exc:
return jsonify({"ok": False, "error": str(exc)}), 400
@bp.get("/torrents/<torrent_hash>/trackers") @bp.get("/torrents/<torrent_hash>/trackers")
def torrent_trackers(torrent_hash: str): def torrent_trackers(torrent_hash: str):
profile = preferences.active_profile() profile = preferences.active_profile()
@@ -434,9 +554,23 @@ def torrent_action(action_name: str):
if not profile:
return jsonify({"ok": False, "error": "No profile"}), 400
data = request.get_json(silent=True) or {}
allowed = {"start", "pause", "unpause", "stop", "resume", "recheck", "reannounce", "remove", "move", "set_label", "set_ratio_group"}
if action_name not in allowed:
return jsonify({"ok": False, "error": "Unknown action"}), 400
if action_name in {"move", "remove"}:
# Note: Large move/remove requests are split into ordered bulk parts; smaller requests keep the old single-job response shape.
jobs = enqueue_bulk_parts(profile, action_name, data)
first_job_id = jobs[0]["job_id"] if jobs else None
total_hashes = sum(int(job.get("hash_count") or 0) for job in jobs)
return ok({
"job_id": first_job_id,
"job_ids": [job["job_id"] for job in jobs],
"jobs": jobs,
"hash_count": total_hashes,
"bulk": total_hashes > 1,
"bulk_parts": len(jobs),
"chunk_size": MOVE_BULK_MAX_HASHES,
})
payload = enrich_bulk_payload(profile, action_name, data)
job_id = enqueue(action_name, profile["id"], payload)
return ok({"job_id": job_id, "hash_count": len(payload.get("hashes") or []), "bulk": len(payload.get("hashes") or []) > 1})
@@ -566,8 +700,12 @@ def jobs_list():
@bp.post("/jobs/clear") @bp.post("/jobs/clear")
def jobs_clear(): def jobs_clear():
if str(request.args.get("force") or "").lower() in {"1", "true", "yes"}:
# Note: Emergency cleanup keeps the endpoint behavior unchanged, while force=1 enables rescue mode.
deleted = emergency_clear_jobs()
return ok({"deleted": deleted, "emergency": True})
deleted = clear_jobs()
return ok({"deleted": deleted, "emergency": False})
@bp.get("/cleanup/summary")
@@ -608,13 +746,15 @@ def cleanup_all():
@bp.post("/jobs/<job_id>/cancel") @bp.post("/jobs/<job_id>/cancel")
def jobs_cancel(job_id: str): def jobs_cancel(job_id: str):
require_profile_write(_job_profile_id(job_id))
if not cancel_job(job_id):
return jsonify({"ok": False, "error": "Only unfinished jobs can be cancelled"}), 400
return ok()
@bp.post("/jobs/<job_id>/retry") @bp.post("/jobs/<job_id>/retry")
def jobs_retry(job_id: str): def jobs_retry(job_id: str):
require_profile_write(_job_profile_id(job_id))
if not retry_job(job_id):
return jsonify({"ok": False, "error": "Only failed or cancelled jobs can be retried"}), 400
return ok()
@@ -832,7 +972,11 @@ def smart_queue_check():
if not profile:
return ok({'result': {'ok': False, 'error': 'No profile'}})
try:
result = smart_queue.check(profile, force=True)
# Note: Manual check immediately returns a fresh snapshot so the UI shows the real Downloading count after the action.
diff = torrent_cache.refresh(profile)
rows = torrent_cache.snapshot(profile['id'])
return ok({'result': result, 'torrent_patch': {**diff, 'summary': cached_summary(profile['id'], rows, force=True)}})
except Exception as exc:
return jsonify({'ok': False, 'error': str(exc)}), 500


@@ -1,11 +1,42 @@
from __future__ import annotations
from flask import Blueprint, render_template, jsonify, Response, request, redirect, url_for, abort
from ..services.preferences import get_preferences, list_profiles, active_profile, BOOTSTRAP_THEMES, FONT_FAMILIES
from ..services import auth
from ..services.frontend_assets import asset_path
bp = Blueprint("main", __name__) bp = Blueprint("main", __name__)
def _asset_url(key: str) -> str:
# Note: The API docs page uses the same CDN/offline asset switch as the rest of the app.
path = asset_path(key)
return path if path.startswith("http") else url_for("static", filename=path)
@bp.route("/login", methods=["GET", "POST"])
def login():
# Note: When optional authentication is disabled, /login is intentionally unavailable.
if not auth.enabled():
abort(404)
error = ""
if request.method == "POST":
user = auth.login_user(request.form.get("username", ""), request.form.get("password", ""))
if user:
return redirect(request.args.get("next") or url_for("main.index"))
error = "Invalid username or password"
return render_template("login.html", error=error)
@bp.get("/logout")
def logout():
auth.logout_user()
if not auth.enabled():
return redirect(url_for("main.index"))
return redirect(url_for("main.login"))
@bp.get("/") @bp.get("/")
def index(): def index():
prefs = get_preferences() prefs = get_preferences()
@@ -16,13 +47,14 @@ def index():
active_profile=active_profile(),
bootstrap_themes=BOOTSTRAP_THEMES,
font_families=FONT_FAMILIES,
auth_enabled=auth.enabled(),
current_user=auth.current_user(),
)
@bp.get("/docs") @bp.get("/docs")
def docs(): def docs():
html = """<!doctype html><html lang=\"en\"><head><meta charset=\"utf-8\"><meta name=\"viewport\" content=\"width=device-width, initial-scale=1\"><title>pyTorrent API Docs</title><link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui.css\"></head><body><div id=\"swagger-ui\"></div><script src=\"https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui-bundle.js\"></script><script>window.onload=()=>SwaggerUIBundle({url:'/api/openapi.json',dom_id:'#swagger-ui',deepLinking:true,persistAuthorization:true});</script></body></html>""" html = f"""<!doctype html><html lang="en"><head><meta charset="utf-8"><meta name="viewport" content="width=device-width, initial-scale=1"><title>pyTorrent API Docs</title><link rel="stylesheet" href="{_asset_url('swagger_css')}"></head><body><div id="swagger-ui"></div><script src="{_asset_url('swagger_js')}"></script><script>window.onload=()=>SwaggerUIBundle({{url:'/api/openapi.json',dom_id:'#swagger-ui',deepLinking:true,persistAuthorization:true}});</script></body></html>"""
return Response(html, mimetype="text/html") return Response(html, mimetype="text/html")
@@ -55,7 +87,7 @@ def openapi():
},
},
"/api/torrents": {"get": {"summary": "Get cached torrent snapshot", "responses": {"200": {"description": "Torrent list"}}}},
"/api/torrents/{action_name}": {"post": {"summary": "Queue torrent action", "description": "For move, path is the target directory; move_data=true physically moves data on the rTorrent host using a detached shell move with status polling, force-overwrites an existing destination, tolerates rTorrent execute timeouts around mkdir/start/polling, handles retries after a partially completed move, avoids SCGI timeout on long mv operations, and recheck defaults to move_data. Move and remove jobs are ordered per profile, so a later remove waits for earlier move/remove jobs to finish.", "parameters": [{"name": "action_name", "in": "path", "required": True, "schema": {"type": "string", "enum": ["start", "pause", "stop", "resume", "recheck", "remove", "move", "set_label", "set_ratio_group"]}}], "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"hashes": {"type": "array", "items": {"type": "string"}}, "path": {"type": "string", "description": "Target directory for move"}, "move_data": {"type": "boolean", "description": "Physically move data before setting torrent directory"}, "recheck": {"type": "boolean", "description": "Run hash check after physical move; defaults to move_data"}, "label": {"type": "string"}, "ratio_group": {"type": "string"}, "remove_data": {"type": "boolean"}}}}}}, "responses": {"200": {"description": "Job queued"}}}}, "/api/torrents/{action_name}": {"post": {"summary": "Queue torrent action", "description": "For move, path is the target directory; move_data=true physically moves data on the rTorrent host using a detached shell move with status polling, force-overwrites an existing destination, tolerates rTorrent execute timeouts around mkdir/start/polling, handles retries after a partially completed move, avoids SCGI timeout on long mv operations, and recheck defaults to move_data. Large move selections are split into ordered bulk parts of up to 100 hashes. 
Move and remove jobs are ordered per profile, so a later remove waits for earlier move/remove jobs to finish.", "parameters": [{"name": "action_name", "in": "path", "required": True, "schema": {"type": "string", "enum": ["start", "pause", "stop", "resume", "recheck", "remove", "move", "set_label", "set_ratio_group"]}}], "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"hashes": {"type": "array", "items": {"type": "string"}}, "path": {"type": "string", "description": "Target directory for move"}, "move_data": {"type": "boolean", "description": "Physically move data before setting torrent directory"}, "recheck": {"type": "boolean", "description": "Run hash check after physical move; defaults to move_data"}, "label": {"type": "string"}, "ratio_group": {"type": "string"}, "remove_data": {"type": "boolean"}}}}}}, "responses": {"200": {"description": "Job queued"}}}},
"/api/torrents/add": {"post": {"summary": "Add magnet links or torrent files", "requestBody": {"content": {"multipart/form-data": {"schema": {"type": "object", "properties": {"uris": {"type": "string"}, "directory": {"type": "string"}, "label": {"type": "string"}, "start": {"type": "boolean"}, "files": {"type": "array", "items": {"type": "string", "format": "binary"}}}}}, "application/json": {"schema": {"type": "object"}}}}, "responses": {"200": {"description": "Jobs queued"}}}}, "/api/torrents/add": {"post": {"summary": "Add magnet links or torrent files", "requestBody": {"content": {"multipart/form-data": {"schema": {"type": "object", "properties": {"uris": {"type": "string"}, "directory": {"type": "string"}, "label": {"type": "string"}, "start": {"type": "boolean"}, "files": {"type": "array", "items": {"type": "string", "format": "binary"}}}}}, "application/json": {"schema": {"type": "object"}}}}, "responses": {"200": {"description": "Jobs queued"}}}},
"/api/torrents/{torrent_hash}/files": {"get": {"summary": "Torrent files", "parameters": [{"name": "torrent_hash", "in": "path", "required": True, "schema": {"type": "string"}}], "responses": {"200": {"description": "Files"}}}}, "/api/torrents/{torrent_hash}/files": {"get": {"summary": "Torrent files", "parameters": [{"name": "torrent_hash", "in": "path", "required": True, "schema": {"type": "string"}}], "responses": {"200": {"description": "Files"}}}},
"/api/torrents/{torrent_hash}/peers": {"get": {"summary": "Torrent peers with GeoIP", "parameters": [{"name": "torrent_hash", "in": "path", "required": True, "schema": {"type": "string"}}], "responses": {"200": {"description": "Peers"}}}}, "/api/torrents/{torrent_hash}/peers": {"get": {"summary": "Torrent peers with GeoIP", "parameters": [{"name": "torrent_hash", "in": "path", "required": True, "schema": {"type": "string"}}], "responses": {"200": {"description": "Peers"}}}},
@@ -80,10 +112,15 @@ def openapi():
"/api/traffic/history": {"get": {"summary": "Transfer history for charts", "parameters": [{"name": "range", "in": "query", "schema": {"type": "string", "enum": ["15m", "1h", "3h", "6h", "24h", "7d", "30d", "90d"]}}], "responses": {"200": {"description": "Aggregated traffic history"}}}} "/api/traffic/history": {"get": {"summary": "Transfer history for charts", "parameters": [{"name": "range", "in": "query", "schema": {"type": "string", "enum": ["15m", "1h", "3h", "6h", "24h", "7d", "30d", "90d"]}}], "responses": {"200": {"description": "Aggregated traffic history"}}}}
} }
paths.update({ paths.update({
"/api/auth/login": {"post": {"summary": "Log in with username and password when authentication is enabled", "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"username": {"type": "string"}, "password": {"type": "string", "format": "password"}}, "required": ["username", "password"]}}}}, "responses": {"200": {"description": "Logged in"}, "401": {"description": "Invalid credentials"}, "404": {"description": "Authentication disabled"}}}},
"/api/auth/me": {"get": {"summary": "Return current authenticated user", "responses": {"200": {"description": "Current user"}, "404": {"description": "Authentication disabled"}}}},
"/api/auth/logout": {"post": {"summary": "Log out current user", "responses": {"200": {"description": "Logged out"}, "404": {"description": "Authentication disabled"}}}},
"/api/auth/users": {"get": {"summary": "List users, admin only", "responses": {"200": {"description": "Users"}, "403": {"description": "Admin only"}}}, "post": {"summary": "Create user, admin only", "requestBody": {"content": {"application/json": {"schema": {"$ref": "#/components/schemas/AuthUserInput"}}}}, "responses": {"200": {"description": "User created"}, "403": {"description": "Admin only"}}}},
"/api/auth/users/{user_id}": {"put": {"summary": "Update user, admin only", "parameters": [{"name": "user_id", "in": "path", "required": True, "schema": {"type": "integer"}}], "requestBody": {"content": {"application/json": {"schema": {"$ref": "#/components/schemas/AuthUserInput"}}}}, "responses": {"200": {"description": "User updated"}, "403": {"description": "Admin only"}}}, "delete": {"summary": "Delete user, admin only", "parameters": [{"name": "user_id", "in": "path", "required": True, "schema": {"type": "integer"}}], "responses": {"200": {"description": "User deleted"}, "403": {"description": "Admin only"}}}},
"/api/profiles/{profile_id}": {"delete": {"summary": "Delete rTorrent profile", "parameters": [{"name": "profile_id", "in": "path", "required": True, "schema": {"type": "integer"}}], "responses": {"200": {"description": "Deleted"}}}}, "/api/profiles/{profile_id}": {"delete": {"summary": "Delete rTorrent profile", "parameters": [{"name": "profile_id", "in": "path", "required": True, "schema": {"type": "integer"}}], "responses": {"200": {"description": "Deleted"}}}},
"/api/torrent-stats": {"get": {"summary": "Torrent statistics and cached file metadata", "parameters": [{"name": "force", "in": "query", "schema": {"type": "boolean", "default": False}}], "responses": {"200": {"description": "Torrent statistics"}}}},
"/api/path/default": {"get": {"summary": "Read active rTorrent default download path", "responses": {"200": {"description": "Default path"}}}}, "/api/path/default": {"get": {"summary": "Read active rTorrent default download path", "responses": {"200": {"description": "Default path"}}}},
"/api/torrents/{torrent_hash}/files/priority": {"post": {"summary": "Set file priorities", "parameters": [{"name": "torrent_hash", "in": "path", "required": True, "schema": {"type": "string"}}], "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"files": {"type": "array", "items": {"type": "object", "properties": {"index": {"type": "integer"}, "priority": {"type": "integer", "enum": [0, 1, 2]}}}}}}}}}, "responses": {"200": {"description": "Updated priorities"}, "207": {"description": "Partial update"}}}}, "/api/torrents/{torrent_hash}/files/priority": {"post": {"summary": "Set file priorities", "parameters": [{"name": "torrent_hash", "in": "path", "required": True, "schema": {"type": "string"}}], "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"files": {"type": "array", "items": {"type": "object", "properties": {"index": {"type": "integer"}, "priority": {"type": "integer", "enum": [0, 1, 2]}}}}}}}}}, "responses": {"200": {"description": "Updated priorities"}, "207": {"description": "Partial update"}}}},
"/api/torrents/{torrent_hash}/peers/action": {"post": {"summary": "Run peer action", "parameters": [{"name": "torrent_hash", "in": "path", "required": True, "schema": {"type": "string"}}], "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"peer_index": {"type": "integer"}, "action": {"type": "string", "enum": ["disconnect", "kick", "snub", "unsnub", "ban"]}}}}}}, "responses": {"200": {"description": "Peer action result"}}}},
"/api/labels/{label_id}": {"delete": {"summary": "Delete saved label", "parameters": [{"name": "label_id", "in": "path", "required": True, "schema": {"type": "integer"}}], "responses": {"200": {"description": "Labels"}}}}, "/api/labels/{label_id}": {"delete": {"summary": "Delete saved label", "parameters": [{"name": "label_id", "in": "path", "required": True, "schema": {"type": "integer"}}], "responses": {"200": {"description": "Labels"}}}},
"/api/rtorrent-config": {"get": {"summary": "Read supported rTorrent config fields", "responses": {"200": {"description": "Config fields"}}}, "post": {"summary": "Save supported rTorrent config fields", "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"values": {"type": "object"}}}}}}, "responses": {"200": {"description": "Save result"}}}}, "/api/rtorrent-config": {"get": {"summary": "Read supported rTorrent config fields", "responses": {"200": {"description": "Config fields"}}}, "post": {"summary": "Save supported rTorrent config fields", "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"values": {"type": "object"}}}}}}, "responses": {"200": {"description": "Save result"}}}},
"/api/rtorrent-config/generate": {"post": {"summary": "Generate rTorrent config text from provided values", "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"values": {"type": "object"}}}}}}, "responses": {"200": {"description": "Generated config text"}}}}, "/api/rtorrent-config/generate": {"post": {"summary": "Generate rTorrent config text from provided values", "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"values": {"type": "object"}}}}}}, "responses": {"200": {"description": "Generated config text"}}}},
@@ -98,6 +135,17 @@ def openapi():
"properties": {"ok": {"type": "boolean"}}, "properties": {"ok": {"type": "boolean"}},
"required": ["ok"], "required": ["ok"],
}, },
"AuthUserInput": {
"type": "object",
"properties": {
"username": {"type": "string"},
"password": {"type": "string", "format": "password", "description": "Optional on update"},
"role": {"type": "string", "enum": ["admin", "user"]},
"is_active": {"type": "boolean"},
"permissions": {"type": "array", "items": {"type": "object", "properties": {"profile_id": {"type": "integer", "description": "0 means all profiles"}, "access_level": {"type": "string", "enum": ["ro", "full"]}}}},
},
"required": ["username"],
},
"Profile": { "Profile": {
"type": "object", "type": "object",
"additionalProperties": True, "additionalProperties": True,
@@ -278,4 +326,9 @@ def openapi():
},
})
return jsonify({"openapi": "3.0.3", "info": {"title": "pyTorrent API", "version": "0.2.0"}, "paths": paths, "components": components}) components.setdefault("securitySchemes", {})["sessionCookie"] = {"type": "apiKey", "in": "cookie", "name": "session"}
for path, methods in paths.items():
if path != "/api/auth/login":
for operation in methods.values():
operation.setdefault("security", [{"sessionCookie": []}])
return jsonify({"openapi": "3.0.3", "info": {"title": "pyTorrent API", "version": "0.0.1"}, "paths": paths, "components": components})

pytorrent/services/auth.py Normal file

@@ -0,0 +1,344 @@
from __future__ import annotations
from functools import wraps
from typing import Any
from urllib.parse import urlparse
from flask import abort, jsonify, redirect, request, session, url_for
from werkzeug.security import check_password_hash, generate_password_hash
from ..config import AUTH_ENABLE
from ..db import connect, default_user_id, utcnow
PUBLIC_ENDPOINTS = {"main.login", "main.logout", "api.auth_login", "api.auth_me", "static"}
RTORRENT_WRITE_PREFIXES = (
"/api/torrents/",
"/api/speed/limits",
"/api/labels",
"/api/ratio-groups",
"/api/rss",
"/api/smart-queue",
"/api/automations",
"/api/jobs",
)
RTORRENT_CONFIG_PREFIXES = ("/api/rtorrent-config",)
ADMIN_PREFIXES = ("/api/auth/users", "/api/profiles")
# Note: API reads that expose rTorrent/profile data must also respect profile permissions.
PROFILE_READ_PREFIXES = (
"/api/torrents",
"/api/torrent-stats",
"/api/system/status",
"/api/app/status",
"/api/port-check",
"/api/path",
"/api/labels",
"/api/ratio-groups",
"/api/rss",
"/api/rtorrent-config",
"/api/smart-queue",
"/api/traffic/history",
"/api/automations",
)
def enabled() -> bool:
return bool(AUTH_ENABLE)
def password_hash(password: str) -> str:
return generate_password_hash(password or "")
def current_user_id() -> int:
if not enabled():
return default_user_id()
try:
return int(session.get("user_id") or 0)
except Exception:
return 0
def current_user() -> dict[str, Any] | None:
uid = current_user_id()
if not uid:
return None
with connect() as conn:
return conn.execute(
"SELECT id, username, role, is_active, created_at, updated_at FROM users WHERE id=?",
(uid,),
).fetchone()
def is_admin(user: dict[str, Any] | None = None) -> bool:
if not enabled():
return True
user = user or current_user()
return bool(user and user.get("role") == "admin" and int(user.get("is_active") or 0))
def _permissions(user_id: int | None = None) -> list[dict[str, Any]]:
if not enabled():
return [{"profile_id": 0, "access_level": "full"}]
uid = user_id or current_user_id()
if not uid:
return []
with connect() as conn:
return conn.execute(
"SELECT profile_id, access_level FROM user_profile_permissions WHERE user_id=?",
(uid,),
).fetchall()
def can_access_profile(profile_id: int | None, user_id: int | None = None) -> bool:
if not enabled():
return True
uid = user_id or current_user_id()
if not uid:
return False
with connect() as conn:
user = conn.execute("SELECT role, is_active FROM users WHERE id=?", (uid,)).fetchone()
if not user or not int(user.get("is_active") or 0):
return False
if user.get("role") == "admin":
return True
pid = int(profile_id or 0)
row = conn.execute(
"SELECT 1 FROM user_profile_permissions WHERE user_id=? AND (profile_id=0 OR profile_id=?) LIMIT 1",
(uid, pid),
).fetchone()
return bool(row)
def can_write_profile(profile_id: int | None, user_id: int | None = None) -> bool:
if not enabled():
return True
uid = user_id or current_user_id()
if not uid:
return False
with connect() as conn:
user = conn.execute("SELECT role, is_active FROM users WHERE id=?", (uid,)).fetchone()
if not user or not int(user.get("is_active") or 0):
return False
if user.get("role") == "admin":
return True
pid = int(profile_id or 0)
row = conn.execute(
"SELECT access_level FROM user_profile_permissions WHERE user_id=? AND (profile_id=0 OR profile_id=?) ORDER BY profile_id DESC LIMIT 1",
(uid, pid),
).fetchone()
return bool(row and row.get("access_level") == "full")
def visible_profile_ids(user_id: int | None = None) -> set[int] | None:
if not enabled():
return None
uid = user_id or current_user_id()
if not uid:
return set()
with connect() as conn:
user = conn.execute("SELECT role, is_active FROM users WHERE id=?", (uid,)).fetchone()
if not user or not int(user.get("is_active") or 0):
return set()
if user.get("role") == "admin":
return None
rows = conn.execute("SELECT profile_id FROM user_profile_permissions WHERE user_id=?", (uid,)).fetchall()
if any(int(row.get("profile_id") or 0) == 0 for row in rows):
return None
return {int(row.get("profile_id") or 0) for row in rows}
def same_origin_request() -> bool:
"""Return False only when an unsafe request clearly comes from another origin."""
origin = request.headers.get("Origin") or request.headers.get("Referer")
if not origin:
return True
try:
parsed = urlparse(origin)
return parsed.scheme == request.scheme and parsed.netloc == request.host
except Exception:
return False
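As the docstring says, the check fails open when no Origin/Referer header is present (CLI clients, some same-site navigations) and fails closed only on a clear cross-origin mismatch. A standalone sketch of the comparison, using a hypothetical host in place of `request.host`:

```python
from urllib.parse import urlparse

def same_origin(origin_header, scheme="https", host="pytorrent.example.com"):
    # No header at all: assume same-origin (fail open).
    if not origin_header:
        return True
    try:
        parsed = urlparse(origin_header)
        # Both scheme and host:port must match the serving origin.
        return parsed.scheme == scheme and parsed.netloc == host
    except Exception:
        return False

print(same_origin(None))                                  # True
print(same_origin("https://pytorrent.example.com/page"))  # True (Referer with a path)
print(same_origin("https://evil.example.net"))            # False
print(same_origin("http://pytorrent.example.com"))        # False (scheme mismatch)
```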
def writable_profile_ids(user_id: int | None = None) -> set[int] | None:
if not enabled():
return None
uid = user_id or current_user_id()
if not uid:
return set()
with connect() as conn:
user = conn.execute("SELECT role, is_active FROM users WHERE id=?", (uid,)).fetchone()
if not user or not int(user.get("is_active") or 0):
return set()
if user.get("role") == "admin":
return None
rows = conn.execute("SELECT profile_id FROM user_profile_permissions WHERE user_id=? AND access_level='full'", (uid,)).fetchall()
if any(int(row.get("profile_id") or 0) == 0 for row in rows):
return None
return {int(row.get("profile_id") or 0) for row in rows}
def require_admin() -> None:
    if enabled() and not is_admin():
        abort(403)
def require_profile_read(profile_id: int | None) -> None:
    if enabled() and not can_access_profile(profile_id):
        abort(403)
def require_profile_write(profile_id: int | None) -> None:
    if enabled() and not can_write_profile(profile_id):
        abort(403)
def login_user(username: str, password: str) -> dict[str, Any] | None:
    if not enabled():
        return {"id": default_user_id(), "username": "default", "role": "admin", "is_active": 1}
    with connect() as conn:
        user = conn.execute("SELECT * FROM users WHERE username=?", (username.strip(),)).fetchone()
    if not user or not int(user.get("is_active") or 0):
        return None
    if not user.get("password_hash") or not check_password_hash(user.get("password_hash"), password or ""):
        return None
    session.clear()
    session["user_id"] = int(user["id"])
    session["username"] = user["username"]
    session["role"] = user.get("role") or "user"
    return current_user()
def logout_user() -> None:
    session.clear()
def ensure_admin_user() -> None:
    if not enabled():
        return
    now = utcnow()
    with connect() as conn:
        row = conn.execute("SELECT id FROM users WHERE username='admin'").fetchone()
        if not row:
            conn.execute(
                "INSERT INTO users(username,password_hash,role,is_active,created_at,updated_at) VALUES(?,?,?,?,?,?)",
                ("admin", password_hash("admin"), "admin", 1, now, now),
            )
        else:
            conn.execute("UPDATE users SET role='admin', is_active=1, updated_at=? WHERE username='admin'", (now,))
def list_users() -> list[dict[str, Any]]:
    require_admin()
    with connect() as conn:
        users = conn.execute(
            "SELECT id, username, role, is_active, created_at, updated_at FROM users ORDER BY username COLLATE NOCASE"
        ).fetchall()
        perms = conn.execute(
            "SELECT user_id, profile_id, access_level FROM user_profile_permissions ORDER BY user_id, profile_id"
        ).fetchall()
    by_user: dict[int, list[dict[str, Any]]] = {}
    for perm in perms:
        by_user.setdefault(int(perm["user_id"]), []).append({
            "profile_id": int(perm.get("profile_id") or 0),
            "access_level": perm.get("access_level") or "ro",
        })
    for user in users:
        user["permissions"] = by_user.get(int(user["id"]), [])
    return users
def save_user(data: dict[str, Any], user_id: int | None = None) -> dict[str, Any]:
    require_admin()
    now = utcnow()
    username = str(data.get("username") or "").strip()
    role = "admin" if data.get("role") == "admin" else "user"
    is_active = 1 if data.get("is_active", True) else 0
    if not username:
        raise ValueError("Username is required")
    with connect() as conn:
        if user_id:
            row = conn.execute("SELECT id FROM users WHERE id=?", (user_id,)).fetchone()
            if not row:
                raise ValueError("User does not exist")
            conn.execute(
                "UPDATE users SET username=?, role=?, is_active=?, updated_at=? WHERE id=?",
                (username, role, is_active, now, user_id),
            )
        else:
            cur = conn.execute(
                "INSERT INTO users(username,password_hash,role,is_active,created_at,updated_at) VALUES(?,?,?,?,?,?)",
                (username, password_hash(str(data.get("password") or username)), role, is_active, now, now),
            )
            user_id = int(cur.lastrowid)
        if data.get("password"):
            conn.execute("UPDATE users SET password_hash=?, updated_at=? WHERE id=?", (password_hash(str(data.get("password"))), now, user_id))
        if role != "admin":
            conn.execute("DELETE FROM user_profile_permissions WHERE user_id=?", (user_id,))
            for item in data.get("permissions") or []:
                profile_id = int(item.get("profile_id") or 0)
                access = "full" if item.get("access_level") == "full" else "ro"
                conn.execute(
                    "INSERT OR REPLACE INTO user_profile_permissions(user_id,profile_id,access_level,created_at,updated_at) VALUES(?,?,?,?,?)",
                    (user_id, profile_id, access, now, now),
                )
        else:
            conn.execute("DELETE FROM user_profile_permissions WHERE user_id=?", (user_id,))
        return conn.execute("SELECT id, username, role, is_active, created_at, updated_at FROM users WHERE id=?", (user_id,)).fetchone()
def delete_user(user_id: int) -> None:
    require_admin()
    if int(user_id) == current_user_id():
        raise ValueError("Cannot delete current user")
    with connect() as conn:
        conn.execute("DELETE FROM user_profile_permissions WHERE user_id=?", (user_id,))
        conn.execute("DELETE FROM users WHERE id=? AND username <> 'admin'", (user_id,))
def install_guards(app) -> None:
    @app.before_request
    def _auth_guard():
        if not enabled():
            return None
        endpoint = request.endpoint or ""
        if endpoint in PUBLIC_ENDPOINTS or endpoint.startswith("static"):
            return None
        if not current_user_id():
            if request.path.startswith("/api/"):
                return jsonify({"ok": False, "error": "Authentication required"}), 401
            return redirect(url_for("main.login", next=request.full_path if request.query_string else request.path))
        user = current_user()
        if not user or not int(user.get("is_active") or 0):
            logout_user()
            if request.path.startswith("/api/"):
                return jsonify({"ok": False, "error": "Authentication required"}), 401
            return redirect(url_for("main.login"))
        if request.path.startswith("/api/auth/users") and not is_admin(user):
            return jsonify({"ok": False, "error": "Admin only"}), 403
        if request.path.startswith(PROFILE_READ_PREFIXES):
            profile_id = _request_profile_id()
            if profile_id and not can_access_profile(profile_id):
                return jsonify({"ok": False, "error": "Profile access denied"}), 403
        if request.method not in {"GET", "HEAD", "OPTIONS"}:
            if request.path.startswith("/api/") and not same_origin_request():
                return jsonify({"ok": False, "error": "Cross-origin API request blocked"}), 403
            if request.path.startswith("/api/profiles") and not request.path.endswith("/activate") and not is_admin(user):
                return jsonify({"ok": False, "error": "Admin only"}), 403
            profile_id = _request_profile_id()
            if request.path.startswith(RTORRENT_CONFIG_PREFIXES) and not can_write_profile(profile_id):
                return jsonify({"ok": False, "error": "Read-only profile access"}), 403
            if request.path.startswith(RTORRENT_WRITE_PREFIXES) and not can_write_profile(profile_id):
                return jsonify({"ok": False, "error": "Read-only profile access"}), 403
        return None
def _request_profile_id() -> int | None:
    if request.view_args and request.view_args.get("profile_id"):
        return int(request.view_args["profile_id"])
    try:
        payload = request.get_json(silent=True) or {}
        if payload.get("profile_id"):
            return int(payload.get("profile_id"))
    except Exception:
        pass
    from . import preferences
    profile = preferences.active_profile()
    return int(profile["id"]) if profile else None

View File

@@ -137,8 +137,11 @@ def _apply_effects(c: Any, profile: dict[str, Any], torrent: dict[str, Any], eff
     for eff in effects:
         typ = str(eff.get('type') or '')
         if typ == 'move':
+            # Note: Automation move-to-path now uses the same move implementation as the main app action.
             path = str(eff.get('path') or '').strip() or rtorrent.default_download_path(profile)
-            if path: c.call('d.directory.set', h, path); applied.append({'type': 'move', 'path': path})
+            move_payload = {'path': path, 'move_data': bool(eff.get('move_data')), 'recheck': bool(eff.get('recheck', eff.get('move_data'))), 'keep_seeding': bool(eff.get('keep_seeding'))}
+            result = rtorrent.move_torrents(profile, [h], move_payload) if path else None
+            if path: applied.append({'type': 'move', 'path': path, 'move_data': bool(eff.get('move_data')), 'recheck': bool(move_payload['recheck']), 'keep_seeding': bool(eff.get('keep_seeding')), 'result': result})
         elif typ == 'add_label':
             label = str(eff.get('label') or '').strip()
             if label and label not in labels: labels.append(label); c.call('d.custom1.set', h, _label_value(labels))
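The hunk above builds the move payload from the effect dict, with `recheck` defaulting to `move_data` when the effect does not set it explicitly. A standalone sketch of that defaulting rule (`build_move_payload` is a hypothetical name, not the app's function):

```python
def build_move_payload(eff: dict) -> dict:
    # recheck falls back to move_data when the effect does not set it explicitly
    return {
        "path": str(eff.get("path") or "").strip(),
        "move_data": bool(eff.get("move_data")),
        "recheck": bool(eff.get("recheck", eff.get("move_data"))),
        "keep_seeding": bool(eff.get("keep_seeding")),
    }

payload = build_move_payload({"type": "move", "path": "/downloads/done", "move_data": True})
assert payload["recheck"] is True   # inherited from move_data
assert build_move_payload({"path": "/x", "move_data": True, "recheck": False})["recheck"] is False
```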

View File

@@ -0,0 +1,111 @@
from __future__ import annotations
from pathlib import Path
from ..config import BASE_DIR, USE_OFFLINE_LIBS
# Note: a single manifest keeps CDN URLs and local paths consistent for offline mode.
LIBS_STATIC_DIR = "libs"
LIBS_DIR = BASE_DIR / "pytorrent" / "static" / LIBS_STATIC_DIR
BOOTSTRAP_VERSION = "5.3.3"
BOOTSWATCH_VERSION = "5.3.3"
FONTAWESOME_VERSION = "6.5.2"
FLAG_ICONS_VERSION = "7.2.3"
SWAGGER_UI_VERSION = "5"
SOCKET_IO_VERSION = "4.7.5"
BOOTSTRAP_THEMES = (
    "default",
    "flatly",
    "litera",
    "lumen",
    "minty",
    "sketchy",
    "solar",
    "spacelab",
    "united",
    "zephyr",
)
STATIC_ASSETS = {
    "bootstrap_js": {
        "local": f"{LIBS_STATIC_DIR}/bootstrap/{BOOTSTRAP_VERSION}/js/bootstrap.bundle.min.js",
        "cdn": f"https://cdn.jsdelivr.net/npm/bootstrap@{BOOTSTRAP_VERSION}/dist/js/bootstrap.bundle.min.js",
    },
    "fontawesome_css": {
        "local": f"{LIBS_STATIC_DIR}/fontawesome/{FONTAWESOME_VERSION}/css/all.min.css",
        "cdn": f"https://cdnjs.cloudflare.com/ajax/libs/font-awesome/{FONTAWESOME_VERSION}/css/all.min.css",
    },
    "flag_icons_css": {
        "local": f"{LIBS_STATIC_DIR}/flag-icons/{FLAG_ICONS_VERSION}/css/flag-icons.min.css",
        "cdn": f"https://cdn.jsdelivr.net/gh/lipis/flag-icons@{FLAG_ICONS_VERSION}/css/flag-icons.min.css",
    },
    "socket_io_js": {
        "local": f"{LIBS_STATIC_DIR}/socket.io/{SOCKET_IO_VERSION}/socket.io.min.js",
        "cdn": f"https://cdn.socket.io/{SOCKET_IO_VERSION}/socket.io.min.js",
    },
    "swagger_css": {
        "local": f"{LIBS_STATIC_DIR}/swagger-ui/{SWAGGER_UI_VERSION}/swagger-ui.css",
        "cdn": f"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{SWAGGER_UI_VERSION}/swagger-ui.css",
    },
    "swagger_js": {
        "local": f"{LIBS_STATIC_DIR}/swagger-ui/{SWAGGER_UI_VERSION}/swagger-ui-bundle.js",
        "cdn": f"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{SWAGGER_UI_VERSION}/swagger-ui-bundle.js",
    },
}
def bootstrap_css_asset(theme: str | None = None) -> dict[str, str]:
    theme = theme if theme in BOOTSTRAP_THEMES else "default"
    if theme == "default":
        return {
            "local": f"{LIBS_STATIC_DIR}/bootstrap/{BOOTSTRAP_VERSION}/css/bootstrap.min.css",
            "cdn": f"https://cdn.jsdelivr.net/npm/bootstrap@{BOOTSTRAP_VERSION}/dist/css/bootstrap.min.css",
        }
    return {
        "local": f"{LIBS_STATIC_DIR}/bootswatch/{BOOTSWATCH_VERSION}/{theme}/bootstrap.min.css",
        "cdn": f"https://cdn.jsdelivr.net/npm/bootswatch@{BOOTSWATCH_VERSION}/dist/{theme}/bootstrap.min.css",
    }
def asset_path(key: str) -> str:
    return STATIC_ASSETS[key]["local" if USE_OFFLINE_LIBS else "cdn"]
def bootstrap_css_path(theme: str | None = None) -> str:
    return bootstrap_css_asset(theme)["local" if USE_OFFLINE_LIBS else "cdn"]
def required_offline_paths() -> list[Path]:
    paths = [LIBS_DIR.parent / item["local"] for item in STATIC_ASSETS.values()]
    paths.extend(LIBS_DIR.parent / bootstrap_css_asset(theme)["local"] for theme in BOOTSTRAP_THEMES)
    return paths
def missing_offline_paths() -> list[Path]:
    missing = [path for path in required_offline_paths() if not path.is_file() or path.stat().st_size <= 0]
    # Note: assets referenced by the CSS are checked too, e.g. icon fonts and flag files.
    required_dirs = [
        LIBS_DIR / f"fontawesome/{FONTAWESOME_VERSION}/webfonts",
        LIBS_DIR / f"flag-icons/{FLAG_ICONS_VERSION}/flags/4x3",
        LIBS_DIR / f"flag-icons/{FLAG_ICONS_VERSION}/flags/1x1",
    ]
    for directory in required_dirs:
        if not directory.is_dir() or not any(directory.iterdir()):
            missing.append(directory)
    return missing
def validate_offline_assets() -> None:
    # Note: the app refuses to start when offline mode is active but the files are not installed.
    if not USE_OFFLINE_LIBS:
        return
    missing = missing_offline_paths()
    if missing:
        preview = "\n".join(f"- {path.relative_to(BASE_DIR)}" for path in missing[:20])
        extra = "" if len(missing) <= 20 else f"\n- ... and {len(missing) - 20} more"
        raise RuntimeError(
            "PYTORRENT_USE_OFFLINE_LIBS=true, but frontend libraries are missing. "
            "Run: ./scripts/download_frontend_libs.py or ./install.sh\n"
            f"Missing files:\n{preview}{extra}"
        )
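Each manifest entry pairs a `local` static path with a `cdn` URL, and `asset_path()` picks one based on `USE_OFFLINE_LIBS`. The selection rule in isolation (the flag is passed explicitly here for the sketch; the module reads it from config):

```python
STATIC_ASSETS = {
    "socket_io_js": {
        "local": "libs/socket.io/4.7.5/socket.io.min.js",
        "cdn": "https://cdn.socket.io/4.7.5/socket.io.min.js",
    },
}

def asset_path(key: str, use_offline_libs: bool) -> str:
    # "local" is served from the app's static dir; "cdn" points at the upstream host
    return STATIC_ASSETS[key]["local" if use_offline_libs else "cdn"]

assert asset_path("socket_io_js", True).startswith("libs/")
assert asset_path("socket_io_js", False).startswith("https://")
```

Because both variants live in one entry, templates never hardcode a host, and flipping the flag swaps every asset at once.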

View File

@@ -3,6 +3,7 @@ from __future__ import annotations
 import json
 from ..db import connect, utcnow, default_user_id
+from . import auth
 BOOTSTRAP_THEMES = {
     "default": "Default Bootstrap",
@@ -27,50 +28,50 @@ FONT_FAMILIES = {
 }
 def bootstrap_css_url(theme: str | None) -> str:
-    theme = theme if theme in BOOTSTRAP_THEMES else "default"
-    if theme == "default":
-        return "https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/css/bootstrap.min.css"
-    return f"https://cdn.jsdelivr.net/npm/bootswatch@5.3.3/dist/{theme}/bootstrap.min.css"
+    # Note: the function is kept and returns the current theme URL, but the offline configuration picks the source.
+    from .frontend_assets import bootstrap_css_path
+    return bootstrap_css_path(theme)
 def list_profiles(user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
+    visible = auth.visible_profile_ids(user_id)
     with connect() as conn:
+        if visible is None:
+            return conn.execute(
+                "SELECT * FROM rtorrent_profiles ORDER BY is_default DESC, name COLLATE NOCASE"
+            ).fetchall()
+        if not visible:
+            return []
+        placeholders = ",".join("?" for _ in visible)
         return conn.execute(
-            "SELECT * FROM rtorrent_profiles WHERE user_id=? ORDER BY is_default DESC, name COLLATE NOCASE",
-            (user_id,),
+            f"SELECT * FROM rtorrent_profiles WHERE id IN ({placeholders}) ORDER BY is_default DESC, name COLLATE NOCASE",
+            tuple(visible),
         ).fetchall()
 def get_profile(profile_id: int, user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
+    if not auth.can_access_profile(profile_id, user_id):
+        return None
     with connect() as conn:
-        return conn.execute(
-            "SELECT * FROM rtorrent_profiles WHERE id=? AND user_id=?",
-            (profile_id, user_id),
-        ).fetchone()
+        return conn.execute("SELECT * FROM rtorrent_profiles WHERE id=?", (profile_id,)).fetchone()
 def active_profile(user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
     with connect() as conn:
         pref = conn.execute("SELECT active_rtorrent_id FROM user_preferences WHERE user_id=?", (user_id,)).fetchone()
-        if pref and pref.get("active_rtorrent_id"):
-            row = conn.execute(
-                "SELECT * FROM rtorrent_profiles WHERE id=? AND user_id=?",
-                (pref["active_rtorrent_id"], user_id),
-            ).fetchone()
+        if pref and pref.get("active_rtorrent_id") and auth.can_access_profile(int(pref["active_rtorrent_id"]), user_id):
+            row = conn.execute("SELECT * FROM rtorrent_profiles WHERE id=?", (pref["active_rtorrent_id"],)).fetchone()
             if row:
                 return row
-        row = conn.execute(
-            "SELECT * FROM rtorrent_profiles WHERE user_id=? ORDER BY is_default DESC, id ASC LIMIT 1",
-            (user_id,),
-        ).fetchone()
-        return row
+    profiles = list_profiles(user_id)
+    return profiles[0] if profiles else None
 def save_profile(data: dict, user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
     now = utcnow()
     name = str(data.get("name") or "rTorrent").strip()
     scgi_url = str(data.get("scgi_url") or "").strip()
@@ -79,7 +80,7 @@ def save_profile(data: dict, user_id: int | None = None):
     is_remote = 1 if data.get("is_remote") else 0
     is_default = 1 if data.get("is_default") else 0
     if not scgi_url.startswith("scgi://"):
-        raise ValueError("SCGI URL musi zaczynać się od scgi://")
+        raise ValueError("SCGI URL must start with scgi://")
     with connect() as conn:
         if is_default:
             conn.execute("UPDATE rtorrent_profiles SET is_default=0 WHERE user_id=?", (user_id,))
@@ -94,11 +95,11 @@ def save_profile(data: dict, user_id: int | None = None):
"UPDATE user_preferences SET active_rtorrent_id=?, updated_at=? WHERE user_id=?", "UPDATE user_preferences SET active_rtorrent_id=?, updated_at=? WHERE user_id=?",
(profile_id, now, user_id), (profile_id, now, user_id),
) )
return conn.execute("SELECT * FROM rtorrent_profiles WHERE id=? AND user_id=?", (profile_id, user_id)).fetchone() return conn.execute("SELECT * FROM rtorrent_profiles WHERE id=?", (profile_id,)).fetchone()
 def update_profile(profile_id: int, data: dict, user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
     now = utcnow()
     name = str(data.get("name") or "rTorrent").strip()
     scgi_url = str(data.get("scgi_url") or "").strip()
@@ -107,24 +108,25 @@ def update_profile(profile_id: int, data: dict, user_id: int | None = None):
     is_remote = 1 if data.get("is_remote") else 0
     is_default = 1 if data.get("is_default") else 0
     if not scgi_url.startswith("scgi://"):
-        raise ValueError("SCGI URL musi zaczynać się od scgi://")
+        raise ValueError("SCGI URL must start with scgi://")
     with connect() as conn:
-        row = conn.execute("SELECT id FROM rtorrent_profiles WHERE id=? AND user_id=?", (profile_id, user_id)).fetchone()
-        if not row:
+        row = conn.execute("SELECT id FROM rtorrent_profiles WHERE id=?", (profile_id,)).fetchone()
+        if not row or not auth.can_write_profile(profile_id, user_id):
             raise ValueError("Profile does not exist")
         if is_default:
             conn.execute("UPDATE rtorrent_profiles SET is_default=0 WHERE user_id=?", (user_id,))
         conn.execute(
-            "UPDATE rtorrent_profiles SET name=?, scgi_url=?, is_default=?, timeout_seconds=?, max_parallel_jobs=?, is_remote=?, updated_at=? WHERE id=? AND user_id=?",
-            (name, scgi_url, is_default, timeout, max_parallel, is_remote, now, profile_id, user_id),
+            "UPDATE rtorrent_profiles SET name=?, scgi_url=?, is_default=?, timeout_seconds=?, max_parallel_jobs=?, is_remote=?, updated_at=? WHERE id=?",
+            (name, scgi_url, is_default, timeout, max_parallel, is_remote, now, profile_id),
         )
-        return conn.execute("SELECT * FROM rtorrent_profiles WHERE id=? AND user_id=?", (profile_id, user_id)).fetchone()
+        return conn.execute("SELECT * FROM rtorrent_profiles WHERE id=?", (profile_id,)).fetchone()
 def delete_profile(profile_id: int, user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
+    auth.require_profile_write(profile_id)
     with connect() as conn:
-        conn.execute("DELETE FROM rtorrent_profiles WHERE id=? AND user_id=?", (profile_id, user_id))
+        conn.execute("DELETE FROM rtorrent_profiles WHERE id=?", (profile_id,))
         active = active_profile(user_id)
         conn.execute(
             "UPDATE user_preferences SET active_rtorrent_id=?, updated_at=? WHERE user_id=?",
@@ -133,10 +135,10 @@ def delete_profile(profile_id: int, user_id: int | None = None):
 def activate_profile(profile_id: int, user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
     with connect() as conn:
-        row = conn.execute("SELECT id FROM rtorrent_profiles WHERE id=? AND user_id=?", (profile_id, user_id)).fetchone()
-        if not row:
+        row = conn.execute("SELECT id FROM rtorrent_profiles WHERE id=?", (profile_id,)).fetchone()
+        if not row or not auth.can_access_profile(profile_id, user_id):
             raise ValueError("Profile does not exist")
         conn.execute(
             "UPDATE user_preferences SET active_rtorrent_id=?, updated_at=? WHERE user_id=?",
@@ -146,13 +148,18 @@ def activate_profile(profile_id: int, user_id: int | None = None):
 def get_preferences(user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
     with connect() as conn:
-        return conn.execute("SELECT * FROM user_preferences WHERE user_id=?", (user_id,)).fetchone()
+        pref = conn.execute("SELECT * FROM user_preferences WHERE user_id=?", (user_id,)).fetchone()
+        if not pref:
+            now = utcnow()
+            conn.execute("INSERT INTO user_preferences(user_id, theme, created_at, updated_at) VALUES(?, 'dark', ?, ?)", (user_id, now, now))
+            pref = conn.execute("SELECT * FROM user_preferences WHERE user_id=?", (user_id,)).fetchone()
+        return pref
 def save_preferences(data: dict, user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
     allowed_theme = data.get("theme") if data.get("theme") in {"light", "dark"} else None
     bootstrap_theme = data.get("bootstrap_theme") if data.get("bootstrap_theme") in BOOTSTRAP_THEMES else None
     font_family = data.get("font_family") if data.get("font_family") in FONT_FAMILIES else None
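The reworked `list_profiles` builds an `IN (...)` clause with one `?` placeholder per visible id, so the values stay parameterized rather than interpolated into the SQL. The pattern on its own, against an illustrative in-memory table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rtorrent_profiles (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO rtorrent_profiles VALUES (?, ?)",
                 [(1, "local"), (2, "seedbox"), (3, "remote")])

visible = {1, 3}
placeholders = ",".join("?" for _ in visible)  # one "?" per id; values bound, never interpolated
rows = conn.execute(
    f"SELECT id FROM rtorrent_profiles WHERE id IN ({placeholders}) ORDER BY id",
    tuple(visible),
).fetchall()
assert [r[0] for r in rows] == [1, 3]
```

Only the placeholder string is built with an f-string; the ids themselves always travel through the parameter tuple.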

View File

@@ -1,5 +1,6 @@
 from __future__ import annotations
+import errno
 import os
 import posixpath
 import socket
@@ -53,24 +54,57 @@ class ScgiRtorrentClient:
         }
         header_blob = b"".join(k.encode() + b"\0" + v.encode() + b"\0" for k, v in headers.items())
         payload = str(len(header_blob)).encode("ascii") + b":" + header_blob + b"," + body
-        with socket.create_connection((self.host, self.port), timeout=self.timeout) as sock:
-            sock.settimeout(self.timeout)
-            sock.sendall(payload)
-            chunks: list[bytes] = []
-            while True:
-                chunk = sock.recv(65536)
-                if not chunk:
-                    break
-                chunks.append(chunk)
-            response = b"".join(chunks)
-            if not response:
-                raise ConnectionError("Empty response from rTorrent SCGI")
-            if b"\r\n\r\n" in response:
-                response = response.split(b"\r\n\r\n", 1)[1]
-            elif b"\n\n" in response:
-                response = response.split(b"\n\n", 1)[1]
-            result, _ = loads(response)
-            return result[0] if len(result) == 1 else result
+        attempts = _scgi_retry_attempts()
+        last_exc = None
+        for attempt in range(1, attempts + 1):
+            try:
+                with socket.create_connection((self.host, self.port), timeout=self.timeout) as sock:
+                    sock.settimeout(self.timeout)
+                    sock.sendall(payload)
+                    chunks: list[bytes] = []
+                    while True:
+                        chunk = sock.recv(65536)
+                        if not chunk:
+                            break
+                        chunks.append(chunk)
+                response = b"".join(chunks)
+                if not response:
+                    raise ConnectionError("Empty response from rTorrent SCGI")
+                if b"\r\n\r\n" in response:
+                    response = response.split(b"\r\n\r\n", 1)[1]
+                elif b"\n\n" in response:
+                    response = response.split(b"\n\n", 1)[1]
+                result, _ = loads(response)
+                return result[0] if len(result) == 1 else result
+            except Exception as exc:
+                last_exc = exc
+                if attempt >= attempts or not _is_transient_scgi_error(exc):
+                    raise
+                time.sleep(_scgi_retry_delay(attempt))
+        raise last_exc or ConnectionError("rTorrent SCGI call failed")
+def _scgi_retry_attempts() -> int:
+    # Note: Short retry/backoff protects bulk operations from temporary Errno 111 during high rTorrent load.
+    try:
+        return max(1, min(10, int(os.environ.get("PYTORRENT_SCGI_RETRIES", "5"))))
+    except Exception:
+        return 5
+def _scgi_retry_delay(attempt: int) -> float:
+    return min(5.0, 0.35 * (2 ** max(0, attempt - 1)))
+def _is_transient_scgi_error(exc: Exception) -> bool:
+    # Note: Retry covers common temporary SCGI/socket errors but does not hide semantic XML-RPC errors.
+    if isinstance(exc, (ConnectionRefusedError, ConnectionResetError, TimeoutError, socket.timeout)):
+        return True
+    err_no = getattr(exc, "errno", None)
+    if err_no in {errno.ECONNREFUSED, errno.ECONNRESET, errno.ETIMEDOUT, errno.EHOSTUNREACH, errno.ENETUNREACH}:
+        return True
+    msg = str(exc).lower()
+    return any(text in msg for text in ("connection refused", "connection reset", "timed out", "timeout", "empty response", "pipe creation failed", "resource temporarily unavailable", "try again", "temporarily unavailable"))
 def client_for(profile: dict) -> ScgiRtorrentClient:
@@ -78,32 +112,78 @@ def client_for(profile: dict) -> ScgiRtorrentClient:
 _UNSUPPORTED_EXEC_METHODS: set[str] = set()
+_EXEC_TARGET_STYLE: dict[str, int] = {}
+def _rt_execute_preview(method_name: str, call_args: tuple) -> str:
+    # Note: The compact RPC summary removes long scripts from error messages while keeping the method and first arguments for diagnostics.
+    preview = ", ".join(repr(x) for x in call_args[:3])
+    if len(call_args) > 3:
+        preview += ", ..."
+    return f"{method_name}({preview})"
+def _rt_execute_target_variants(method: str, args: tuple) -> list[tuple]:
+    # Note: Depending on version, rTorrent XML-RPC either requires or rejects an empty target; cache the working variant per method.
+    variants = [("", *args), args]
+    preferred = _EXEC_TARGET_STYLE.get(method)
+    if preferred is not None and 0 <= preferred < len(variants):
+        return [variants[preferred]] + [v for i, v in enumerate(variants) if i != preferred]
+    return variants
+def _is_rt_method_missing(exc: Exception) -> bool:
+    msg = str(exc).lower()
+    return "not defined" in msg or "no such method" in msg or "unknown method" in msg
+def _rt_execute_methods(method: str) -> list[str]:
+    # Note: execute2.* is tried only when the base execute.* method does not exist to avoid false retry errors.
+    methods = [method]
+    if method.startswith("execute."):
+        fallback = method.replace("execute.", "execute2.", 1)
+        if fallback not in _UNSUPPORTED_EXEC_METHODS:
+            methods.append(fallback)
+    return methods
 def _rt_execute(c: ScgiRtorrentClient, method: str, *args):
     """Run rTorrent execute.* as the rTorrent user across XML-RPC variants."""
-    method_names = [method]
-    if method.startswith("execute."):
-        execute2 = method.replace("execute.", "execute2.", 1)
-        if execute2 not in _UNSUPPORTED_EXEC_METHODS:
-            method_names.append(execute2)
-    errors = []
-    for method_name in method_names:
-        for call_args in (("", *args), args):
-            try:
-                return c.call(method_name, *call_args)
-            except Exception as exc:
-                message = str(exc)
-                if "not defined" in message.lower():
-                    _UNSUPPORTED_EXEC_METHODS.add(method_name)
-                preview = ", ".join(repr(x) for x in call_args[:3])
-                if len(call_args) > 3:
-                    preview += ", ..."
-                errors.append(f"{method_name}({preview}): {exc}")
+    errors: list[str] = []
+    attempts = _scgi_retry_attempts()
+    for attempt in range(1, attempts + 1):
+        errors.clear()
+        transient_seen = False
+        primary_missing = False
+        for method_index, method_name in enumerate(_rt_execute_methods(method)):
+            if method_name in _UNSUPPORTED_EXEC_METHODS:
+                continue
+            if method_index > 0 and not primary_missing:
+                continue
+            for call_args in _rt_execute_target_variants(method_name, args):
+                try:
+                    result = c.call(method_name, *call_args)
+                    if method_name == method:
+                        _EXEC_TARGET_STYLE[method_name] = 0 if call_args and call_args[0] == "" else 1
+                    return result
+                except Exception as exc:
+                    if _is_rt_method_missing(exc):
+                        _UNSUPPORTED_EXEC_METHODS.add(method_name)
+                        if method_name == method:
+                            primary_missing = True
+                        errors.append(f"{method_name}: method not defined")
+                        break
+                    transient_seen = transient_seen or _is_transient_scgi_error(exc)
+                    errors.append(f"{_rt_execute_preview(method_name, call_args)}: {exc}")
+        if transient_seen and attempt < attempts:
+            time.sleep(_scgi_retry_delay(attempt))
+            continue
+        break
     raise RuntimeError("rTorrent execute failed: " + "; ".join(errors))
 def _is_rt_timeout_error(exc: Exception) -> bool:
-    return isinstance(exc, (TimeoutError, socket.timeout)) or "timed out" in str(exc).lower()
+    msg = str(exc).lower()
+    return isinstance(exc, (TimeoutError, socket.timeout)) or "timed out" in msg or "timeout" in msg
 def _rt_execute_allow_timeout(c: ScgiRtorrentClient, method: str, *args):
@@ -159,7 +239,8 @@ def _run_remote_move(c: ScgiRtorrentClient, src: str, dst: str, poll_interval: f
         try:
             output = str(_rt_execute(c, "execute.capture", "sh", "-c", poll_script, "pytorrent-move-poll", status_path) or "").strip()
         except Exception as exc:
-            if _is_rt_timeout_error(exc):
+            # Note: During bulk moves, rTorrent may briefly not create the execute.capture pipe; polling waits and retries.
+            if _is_rt_timeout_error(exc) or _is_transient_scgi_error(exc):
                 continue
             raise
         if not output:
@@ -207,6 +288,47 @@ def _safe_rm_rf_path(path: str) -> str:
     return path
+def _run_remote_rm(c: ScgiRtorrentClient, path: str, poll_interval: float = 2.0) -> None:
+    # Note: rm -rf runs in the background on the rTorrent side, so long deletes do not hold a single SCGI connection.
+    token = uuid.uuid4().hex
+    status_path = f"/tmp/pytorrent-rm-{token}.status"
+    script = (
+        'target=$1; status=$2; tmp=${status}.tmp; '
+        'rm -f "$status" "$tmp"; '
+        '( rc=0; '
+        'if [ -z "$target" ] || [ "$target" = "/" ] || [ "$target" = "." ]; then echo "unsafe remove target: $target" >&2; rc=5; '
+        'else rm -rf -- "$target" || rc=$?; fi; '
+        'if [ $rc -eq 0 ]; then printf "OK\n" > "$status"; else printf "ERR %s\n" "$rc" > "$status"; fi; '
+        'if [ -s "$tmp" ]; then cat "$tmp" >> "$status"; fi; '
+        'rm -f "$tmp" ) > "$tmp" 2>&1 &'
+    )
+    poll_script = 'status=$1; [ -f "$status" ] && cat "$status" || true'
+    cleanup_script = 'rm -f "$1"'
+    _rt_execute_allow_timeout(c, "execute.throw", "sh", "-c", script, "pytorrent-rm-start", path, status_path)
+    while True:
+        time.sleep(max(0.25, poll_interval))
+        try:
+            output = str(_rt_execute(c, "execute.capture", "sh", "-c", poll_script, "pytorrent-rm-poll", status_path) or "").strip()
+        except Exception as exc:
+            # Note: Remove uses the same safe polling as move, so a temporary missing pipe does not fail the whole queue.
+            if _is_rt_timeout_error(exc) or _is_transient_scgi_error(exc):
+                continue
+            raise
+        if not output:
+            continue
+        try:
+            _rt_execute(c, "execute.throw", "sh", "-c", cleanup_script, "pytorrent-rm-clean", status_path)
+        except Exception:
+            pass
+        first_line = output.splitlines()[0].strip()
+        if first_line == "OK":
+            return
+        if first_line.startswith("ERR"):
+            details = "\n".join(output.splitlines()[1:]).strip()
+            raise RuntimeError(details or first_line)
+        raise RuntimeError(output)
 def _remove_torrent_data(c: ScgiRtorrentClient, torrent_hash: str) -> dict:
     data_path = _safe_rm_rf_path(_torrent_data_path(c, torrent_hash))
     try:
@@ -217,7 +339,7 @@ def _remove_torrent_data(c: ScgiRtorrentClient, torrent_hash: str) -> dict:
         c.call("d.close", torrent_hash)
     except Exception:
         pass
-    _rt_execute(c, "execute.throw", "rm", "-rf", data_path)
+    _run_remote_rm(c, data_path)
     return {"hash": torrent_hash, "removed_path": data_path}
@@ -249,6 +371,86 @@ def browse_path(profile: dict, path: str | None = None) -> dict:
return {"path": base, "parent": parent, "dirs": dirs[:300], "source": "rtorrent"} return {"path": base, "parent": parent, "dirs": dirs[:300], "source": "rtorrent"}
POST_CHECK_DOWNLOAD_LABEL = "To download after check"
def _label_names(value: str) -> list[str]:
names: list[str] = []
for part in str(value or "").replace(";", ",").replace("|", ",").split(","):
label = part.strip()
if label and label not in names:
names.append(label)
return names
def _label_value(labels: list[str]) -> str:
return ", ".join([label for label in labels if str(label or "").strip()])
def _row_progress_complete(row: dict) -> bool:
size = int(row.get("size") or 0)
completed = int(row.get("completed_bytes") or 0)
return bool(row.get("complete")) or (size > 0 and completed >= size) or float(row.get("progress") or 0) >= 100.0
def _remove_post_check_label_if_finished(c: ScgiRtorrentClient, row: dict) -> bool:
labels = _label_names(str(row.get("label") or ""))
if POST_CHECK_DOWNLOAD_LABEL not in labels:
return False
status = str(row.get("status") or "").lower()
if not (_row_progress_complete(row) or status == "seeding"):
return False
labels = [label for label in labels if label != POST_CHECK_DOWNLOAD_LABEL]
value = _label_value(labels)
# Note: Clean the temporary label after reaching 100% or entering seeding, even when the state no longer comes directly from recheck.
c.call("d.custom1.set", str(row.get("hash") or ""), value)
row["label"] = value
return True
def apply_post_check_policy(profile: dict, rows: list[dict], previous_rows: dict[str, dict] | None = None) -> list[dict]:
"""Start complete torrents after check; pause and label incomplete ones."""
previous_rows = previous_rows or {}
c = client_for(profile)
changes: list[dict] = []
for row in rows:
h = str(row.get("hash") or "")
prev = previous_rows.get(h) or {}
try:
if h and _remove_post_check_label_if_finished(c, row):
changes.append({"hash": h, "action": "remove_post_check_label", "complete": True})
except Exception as exc:
changes.append({"hash": h, "action": "remove_post_check_label_failed", "error": str(exc)})
was_checking = str(prev.get("status") or "") == "Checking" or int(prev.get("hashing") or 0) > 0
is_checking = str(row.get("status") or "") == "Checking" or int(row.get("hashing") or 0) > 0
if not h or not was_checking or is_checking:
continue
complete = _row_progress_complete(row)
try:
if complete:
# Note: After a completed check, a complete torrent is started automatically so it can seed immediately.
c.call("d.start", h)
labels = [label for label in _label_names(str(row.get("label") or "")) if label != POST_CHECK_DOWNLOAD_LABEL]
if _label_value(labels) != str(row.get("label") or ""):
c.call("d.custom1.set", h, _label_value(labels))
row["label"] = _label_value(labels)
row.update({"state": 1, "active": 1, "paused": False, "status": "Seeding"})
changes.append({"hash": h, "action": "start", "complete": True})
else:
# Note: After check, an incomplete torrent is paused and labeled to show that it needs more downloading.
c.call("d.start", h)
c.call("d.pause", h)
labels = _label_names(str(row.get("label") or ""))
if POST_CHECK_DOWNLOAD_LABEL not in labels:
labels.append(POST_CHECK_DOWNLOAD_LABEL)
c.call("d.custom1.set", h, _label_value(labels))
row.update({"state": 1, "active": 0, "paused": True, "status": "Paused", "label": _label_value(labels)})
changes.append({"hash": h, "action": "pause_and_label", "complete": False, "label": POST_CHECK_DOWNLOAD_LABEL})
except Exception as exc:
changes.append({"hash": h, "action": "post_check_policy_failed", "error": str(exc)})
return changes
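The policy above fires only on the transition out of checking: a torrent counts as "just finished its hash check" when the previous snapshot was checking and the current one is not. A reduced sketch of that trigger, assuming row dicts shaped like the snapshots used here (`status`, `hashing`); the helper name is illustrative:

```python
def check_just_finished(prev: dict, now: dict) -> bool:
    """True only on the Checking -> not-Checking transition between two snapshots."""
    was_checking = str(prev.get('status') or '') == 'Checking' or int(prev.get('hashing') or 0) > 0
    is_checking = str(now.get('status') or '') == 'Checking' or int(now.get('hashing') or 0) > 0
    return was_checking and not is_checking

print(check_just_finished({'status': 'Checking'}, {'status': 'Paused'}))  # True
print(check_just_finished({'status': 'Paused'}, {'status': 'Paused'}))    # False
```

Comparing two snapshots instead of a single state read is what keeps the policy from re-running on torrents that merely stay paused or seeding.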
TORRENT_FIELDS = [
    "d.hash=", "d.name=", "d.state=", "d.complete=", "d.size_bytes=", "d.completed_bytes=",
    "d.ratio=", "d.up.rate=", "d.down.rate=", "d.up.total=", "d.down.total=", "d.peers_connected=",
@@ -646,31 +848,6 @@ def torrent_peers(profile: dict, torrent_hash: str) -> list[dict]:
    return peers
def peer_action(profile: dict, torrent_hash: str, peer_index: int, action_name: str) -> dict:
if peer_index < 0:
raise ValueError("Invalid peer index")
methods = {
"disconnect": ["p.disconnect", "p.close"],
"kick": ["p.disconnect", "p.close"],
"snub": ["p.snub"],
"unsnub": ["p.unsnub"],
"ban": ["p.ban", "p.disconnect"],
}
candidates = methods.get(action_name)
if not candidates:
raise ValueError(f"Unknown peer action: {action_name}")
c = client_for(profile)
target = f"{torrent_hash}:p{int(peer_index)}"
errors = []
for method in candidates:
try:
c.call(method, target)
return {"ok": True, "action": action_name, "method": method, "peer_index": peer_index}
except Exception as exc:
errors.append(f"{method}: {exc}")
raise RuntimeError("; ".join(errors))
def _call_first(c: ScgiRtorrentClient, candidates: list[tuple[str, tuple]]) -> dict:
@@ -984,14 +1161,191 @@ def apply_startup_overrides(profile: dict) -> dict:
        return {"ok": True, "updated": [], "errors": [], "skipped": True}
    return set_config(profile, values, apply_now=True, apply_on_start=True)
def _int_rpc(c: ScgiRtorrentClient, method: str, h: str, default: int = 0) -> int:
try:
return int(c.call(method, h) or 0)
except Exception:
return default
def _str_rpc(c: ScgiRtorrentClient, method: str, h: str, default: str = '') -> str:
try:
return str(c.call(method, h) or '')
except Exception:
return default
def _download_runtime_state(c: ScgiRtorrentClient, h: str) -> dict:
"""Read rTorrent state using the native pause model: stopped, paused or active."""
state = _int_rpc(c, 'd.state', h)
active = _int_rpc(c, 'd.is_active', h)
opened = _int_rpc(c, 'd.is_open', h)
# Note: In rTorrent, pause does not change d.state. Paused means state=1, open=1, active=0.
return {
'state': state,
'open': opened,
'active': active,
'paused': bool(state and opened and not active),
'stopped': not bool(state),
'message': _str_rpc(c, 'd.message', h),
}
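The pause model above reduces to three canonical triples of `d.state` / `d.is_open` / `d.is_active`. A sketch of the classification (mirroring `_download_runtime_state`'s booleans, with the same precedence: stopped when state=0, paused when open without active):

```python
def classify(state: int, opened: int, active: int) -> str:
    """Map rTorrent's three flags onto the native stopped/paused/active model."""
    if not state:
        return 'stopped'
    if opened and not active:
        return 'paused'
    return 'active'

assert classify(0, 0, 0) == 'stopped'   # after d.stop / d.close
assert classify(1, 1, 0) == 'paused'    # d.pause keeps state=1 and open=1
assert classify(1, 1, 1) == 'active'
```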
def pause_hash(c: ScgiRtorrentClient, torrent_hash: str) -> dict:
"""Pause an active rTorrent item without stopping or closing it."""
h = str(torrent_hash or '')
if not h:
return {'hash': h, 'ok': False, 'error': 'missing hash'}
before = _download_runtime_state(c, h)
result = {'hash': h, 'before': before, 'commands': []}
try:
# Note: Smart Queue frees a slot with d.pause, not d.stop, so later d.resume behaves like ruTorrent.
c.call('d.pause', h)
result['commands'].append('d.pause')
result['after'] = _download_runtime_state(c, h)
result['ok'] = True
except Exception as exc:
result.update({'ok': False, 'error': str(exc), 'after': _download_runtime_state(c, h)})
return result
def resume_paused_hash(c: ScgiRtorrentClient, torrent_hash: str) -> dict:
"""Resume only a paused rTorrent item; never convert it through stop/start."""
h = str(torrent_hash or '')
if not h:
return {'hash': h, 'ok': False, 'error': 'missing hash'}
before = _download_runtime_state(c, h)
result: dict = {'hash': h, 'before': before, 'commands': []}
if before.get('stopped'):
result.update({'ok': False, 'skipped': 'stopped_not_paused', 'after': before})
return result
if before.get('active'):
result.update({'ok': True, 'skipped': 'already_active', 'after': before})
return result
try:
# Note: ruTorrent unpauses with the equivalent of d.resume. Do not add d.start/d.open,
# because those commands belong to Stopped/Open state, not a clean Paused state.
c.call('d.resume', h)
result['commands'].append('d.resume')
result['after'] = _download_runtime_state(c, h)
result['ok'] = True
except Exception as exc:
result.update({'ok': False, 'error': str(exc), 'after': _download_runtime_state(c, h)})
return result
def start_or_resume_hash(c: ScgiRtorrentClient, torrent_hash: str) -> dict:
"""Start stopped torrents or resume torrents paused with d.pause, without mixing both paths."""
h = str(torrent_hash or '')
if not h:
return {'hash': h, 'ok': False, 'error': 'missing hash'}
before = _download_runtime_state(c, h)
result: dict = {'hash': h, 'before': before, 'commands': []}
if before.get('active'):
result.update({'ok': True, 'skipped': 'already_active', 'after': before})
return result
if before.get('paused') or (before.get('state') and not before.get('active')):
# Note: Paused rTorrent items are resumed only with d.resume; d.start is intentionally skipped here.
resumed = resume_paused_hash(c, h)
resumed['mode'] = 'resume_paused'
return resumed
    try:
        # Note: d.start remains only for Stopped/closed items, not for the pause-to-resume path.
        c.call('d.open', h)
        result['commands'].append('d.open')
    except Exception as exc:
        result.setdefault('ignored_errors', []).append(f'd.open: {exc}')
    try:
        c.call('d.start', h)
        result['commands'].append('d.start')
    except Exception as exc:
        result.setdefault('ignored_errors', []).append(f'd.start: {exc}')
        try:
            c.call('d.try_start', h)
            result['commands'].append('d.try_start')
        except Exception as exc2:
            result.setdefault('ignored_errors', []).append(f'd.try_start: {exc2}')
            result['ok'] = False
    result['after'] = _download_runtime_state(c, h)
    result['ok'] = result.get('ok', True)
    return result
def move_torrents(profile: dict, torrent_hashes: list[str], payload: dict | None = None) -> dict:
# Note: Shared move implementation keeps API move and automation move-to-path identical.
payload = payload or {}
c = client_for(profile)
path = _remote_clean_path(payload.get("path") or "")
move_data = bool(payload.get("move_data"))
recheck = bool(payload.get("recheck", move_data))
keep_seeding = bool(payload.get("keep_seeding"))
# Note: keep_seeding lets automation move completed data to another path and force the torrent back into seeding.
if not path:
raise ValueError("Missing path")
results = []
if move_data:
_rt_execute_allow_timeout(c, "execute.throw", "mkdir", "-p", path)
for h in torrent_hashes:
item = {"hash": h, "path": path, "move_data": move_data, "keep_seeding": keep_seeding}
try:
was_state = int(c.call("d.state", h) or 0)
except Exception:
was_state = 0
try:
was_active = int(c.call("d.is_active", h) or 0)
except Exception:
was_active = was_state
if move_data:
src = _remote_clean_path(_torrent_data_path(c, h))
if not src:
raise ValueError(f"Cannot determine source path for {h}")
dst = _remote_join(path, posixpath.basename(src.rstrip("/")))
if src != dst:
try:
c.call("d.stop", h)
except Exception:
pass
try:
c.call("d.close", h)
except Exception:
pass
_run_remote_move(c, src, dst)
item["moved_from"] = src
item["moved_to"] = dst
else:
item["skipped"] = "source and destination are the same"
c.call("d.directory.set", h, path)
if recheck:
try:
c.call("d.check_hash", h)
except Exception as exc:
item["recheck_error"] = str(exc)
if keep_seeding or was_state or was_active:
try:
c.call("d.start", h)
item["started_after_move"] = True
except Exception as exc:
item["start_error"] = str(exc)
else:
c.call("d.directory.set", h, path)
if keep_seeding:
try:
c.call("d.start", h)
item["started_after_path_change"] = True
except Exception as exc:
item["start_error"] = str(exc)
results.append(item)
return {"ok": True, "count": len(torrent_hashes), "move_data": move_data, "keep_seeding": keep_seeding, "results": results}
def action(profile: dict, torrent_hashes: list[str], name: str, payload: dict | None = None) -> dict:
    payload = payload or {}
    c = client_for(profile)
    methods = {
        "stop": "d.stop",
        "recheck": "d.check_hash",
        "reannounce": "d.tracker_announce",
        "remove": "d.erase",
@@ -1007,58 +1361,21 @@ def action(profile: dict, torrent_hashes: list[str], name: str, payload: dict |
            c.call("d.custom.set", h, "py_ratio_group", group)
        return {"ok": True, "count": len(torrent_hashes), "ratio_group": group}
    if name == "move":
        # Note: Main move delegates to the shared helper used by automations.
        return move_torrents(profile, torrent_hashes, payload)
    if name == "pause":
        # Note: The app pause action is now a pure d.pause so later resume works without stop/start.
        results = [pause_hash(c, h) for h in torrent_hashes]
        return {"ok": True, "count": len(torrent_hashes), "remove_data": False, "results": results}
    if name in {"resume", "unpause"}:
        # Note: Resume/Unpause uses only d.resume for Paused state.
        results = [resume_paused_hash(c, h) for h in torrent_hashes]
        return {"ok": True, "count": len(torrent_hashes), "remove_data": False, "results": results}
    if name == "start":
        # Note: Start separates Stopped from Paused; paused items go through d.resume, stopped items through d.start.
        results = [start_or_resume_hash(c, h) for h in torrent_hashes]
        return {"ok": True, "count": len(torrent_hashes), "remove_data": False, "results": results}
    method = methods.get(name)
    if not method:
        raise ValueError(f"Unknown action: {name}")


@@ -29,6 +29,7 @@ def _default_settings(user_id: int, profile_id: int) -> dict[str, Any]:
        'stalled_seconds': 300,
        'min_speed_bytes': 1024,
        'min_seeds': 1,
        'manage_stopped': 0,
        'updated_at': utcnow(),
    }
@@ -52,20 +53,23 @@ def save_settings(profile_id: int, data: dict[str, Any], user_id: int | None = N
        'stalled_seconds': max(30, int(data.get('stalled_seconds') or current.get('stalled_seconds') or 300)),
        'min_speed_bytes': max(0, int(data.get('min_speed_bytes') or current.get('min_speed_bytes') or 0)),
        'min_seeds': max(0, int(data.get('min_seeds') or current.get('min_seeds') or 0)),
        # Note: This switch protects fully stopped torrents from automatic starts; by default Smart Queue manages only paused items.
        'manage_stopped': 1 if data.get('manage_stopped', current.get('manage_stopped')) else 0,
    }
    now = utcnow()
    with connect() as conn:
        conn.execute(
            '''INSERT INTO smart_queue_settings(user_id,profile_id,enabled,max_active_downloads,stalled_seconds,min_speed_bytes,min_seeds,manage_stopped,updated_at)
               VALUES(?,?,?,?,?,?,?,?,?)
               ON CONFLICT(user_id, profile_id) DO UPDATE SET
                   enabled=excluded.enabled,
                   max_active_downloads=excluded.max_active_downloads,
                   stalled_seconds=excluded.stalled_seconds,
                   min_speed_bytes=excluded.min_speed_bytes,
                   min_seeds=excluded.min_seeds,
                   manage_stopped=excluded.manage_stopped,
                   updated_at=excluded.updated_at''',
            (user_id, profile_id, settings['enabled'], settings['max_active_downloads'], settings['stalled_seconds'], settings['min_speed_bytes'], settings['min_seeds'], settings['manage_stopped'], now),
        )
    return get_settings(profile_id, user_id)
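The `ON CONFLICT(user_id, profile_id) DO UPDATE` statement above can be exercised against an in-memory SQLite database. The table and column names below mirror the statement; the schema details and values are made up for the demo (the real migration is not shown in this diff):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# A UNIQUE constraint on (user_id, profile_id) is required as the conflict target.
conn.execute('''CREATE TABLE smart_queue_settings(
    user_id INTEGER, profile_id INTEGER, enabled INTEGER,
    max_active_downloads INTEGER, stalled_seconds INTEGER,
    min_speed_bytes INTEGER, min_seeds INTEGER,
    manage_stopped INTEGER, updated_at TEXT,
    UNIQUE(user_id, profile_id))''')
sql = '''INSERT INTO smart_queue_settings(user_id,profile_id,enabled,max_active_downloads,stalled_seconds,min_speed_bytes,min_seeds,manage_stopped,updated_at)
         VALUES(?,?,?,?,?,?,?,?,?)
         ON CONFLICT(user_id, profile_id) DO UPDATE SET
             manage_stopped=excluded.manage_stopped,
             updated_at=excluded.updated_at'''
conn.execute(sql, (1, 1, 1, 4, 300, 1024, 1, 0, 't0'))
conn.execute(sql, (1, 1, 1, 4, 300, 1024, 1, 1, 't1'))  # same key: updates in place
row = conn.execute('SELECT manage_stopped, updated_at FROM smart_queue_settings').fetchone()
print(row)  # (1, 't1') — still one row; the upsert flipped manage_stopped
```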
@@ -147,17 +151,33 @@ def _remember_auto_label(profile_id: int, torrent_hash: str, previous_label: str
    )
def _read_label(client: Any, torrent_hash: str, fallback: str = '') -> str:
try:
return str(client.call('d.custom1', torrent_hash) or '')
except Exception:
return fallback
def _restore_auto_label(client: Any, profile_id: int, torrent_hash: str, current_label: str | None = None) -> bool:
    with connect() as conn:
        row = conn.execute(
            'SELECT previous_label FROM smart_queue_auto_labels WHERE profile_id=? AND torrent_hash=?',
            (profile_id, torrent_hash),
        ).fetchone()
        live_label = _read_label(client, torrent_hash, current_label or '')
        if not row:
            if live_label != SMART_QUEUE_LABEL:
                return False
            try:
                # Note: Clear the Smart Queue label even when the torrent was marked earlier but no previous-label entry remains.
                client.call('d.custom1.set', torrent_hash, '')
                return True
            except Exception:
                return False
        previous = row.get('previous_label') or ''
        try:
            # Note: On resume, Smart Queue restores the previous label only while it still sees its own technical label.
            if live_label == SMART_QUEUE_LABEL or current_label is None:
                client.call('d.custom1.set', torrent_hash, previous)
            conn.execute('DELETE FROM smart_queue_auto_labels WHERE profile_id=? AND torrent_hash=?', (profile_id, torrent_hash))
            return True
@@ -165,6 +185,103 @@ def _restore_auto_label(client: Any, profile_id: int, torrent_hash: str, current
            return False
def _call_rtorrent_setter(client: Any, method: str, value: int) -> bool:
"""Set a scalar rTorrent setting while tolerating XMLRPC signature differences."""
for args in ((int(value),), ('', int(value))):
try:
client.call(method, *args)
return True
except Exception:
continue
return False
def _ensure_rtorrent_download_cap(client: Any, max_active: int) -> dict[str, Any]:
"""Raise rTorrent download caps that can silently limit Smart Queue to one item."""
result: dict[str, Any] = {'checked': False, 'updated': False, 'items': []}
# Note: rTorrent may have separate global and per-throttle limits. When div=1,
# starts can effectively stop at one active torrent even when the target is 100.
for key in ('throttle.max_downloads.global', 'throttle.max_downloads.div'):
item: dict[str, Any] = {'key': key, 'checked': False, 'updated': False}
try:
current = int(client.call(key) or 0)
item.update({'checked': True, 'current': current, 'target': int(max_active)})
result['checked'] = True
# Note: 0 means unlimited; raise only positive limits lower than the target.
if 0 < current < max_active:
ok = _call_rtorrent_setter(client, f'{key}.set', int(max_active))
item['updated'] = ok
if ok:
result['updated'] = True
item['new'] = int(max_active)
result.setdefault('current', current)
result['new'] = int(max_active)
except Exception as exc:
item.update({'error': str(exc)})
result['items'].append(item)
return result
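The cap logic above never lowers a limit and never touches the unlimited sentinel. Reduced to a pure function (a hypothetical helper, not part of the module), the rule is:

```python
def next_cap(current: int, target: int) -> int:
    """0 means unlimited and stays; only positive limits below the target are raised."""
    return target if 0 < current < target else current

assert next_cap(0, 100) == 0      # unlimited stays unlimited
assert next_cap(1, 100) == 100    # the div=1 trap is lifted
assert next_cap(200, 100) == 200  # an already-higher limit is never lowered
```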
def _start_download(client: Any, torrent: dict[str, Any]) -> dict[str, Any]:
"""Resume paused torrents through rTorrent's pause model."""
h = str(torrent.get('hash') or '')
if not h:
return {'hash': h, 'ok': False, 'error': 'missing hash'}
if bool(torrent.get('paused')) or str(torrent.get('status') or '').lower() == 'paused' or int(torrent.get('state') or 0):
# Note: Smart Queue candidates paused with d.pause must be resumed with d.resume, without d.start/d.stop.
return rtorrent.resume_paused_hash(client, h)
# Note: Only optional manage_stopped uses the start path for fully stopped torrents.
return rtorrent.start_or_resume_hash(client, h)
def _verify_started_downloads(client: Any, hashes: list[str], attempts: int = 10, delay: float = 0.5) -> tuple[list[str], list[dict[str, Any]]]:
"""Verify starts after rTorrent has time to process resume/start commands."""
pending = [h for h in hashes if h]
started: list[str] = []
no_effect: list[dict[str, Any]] = []
seen_started: set[str] = set()
last_state: dict[str, dict[str, Any]] = {}
for attempt in range(max(1, attempts)):
if attempt:
time.sleep(delay)
for h in list(pending):
live = _read_live_start_state(client, h)
last_state[h] = live
if live.get('started'):
seen_started.add(h)
pending.remove(h)
if not pending:
break
started = [h for h in hashes if h in seen_started]
no_effect = [last_state.get(h, {'hash': h, 'started': False}) for h in hashes if h and h not in seen_started]
return started, no_effect
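`_verify_started_downloads` follows a generic poll-until pattern: retry a read with a delay until the condition holds or attempts run out. A self-contained sketch with a hypothetical predicate instead of a real rTorrent client:

```python
import time

def poll_until(predicate, attempts: int = 10, delay: float = 0.5) -> bool:
    """Retry predicate up to `attempts` times, sleeping `delay` between tries."""
    for attempt in range(max(1, attempts)):
        if attempt:
            time.sleep(delay)
        if predicate():
            return True
    return False

calls = {'n': 0}
def becomes_true():
    calls['n'] += 1
    return calls['n'] >= 3

print(poll_until(becomes_true, attempts=5, delay=0.0))  # True, on the third attempt
```

Sleeping only from the second attempt onward (as the verify loop does) makes the common fast path return without any delay.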
def _read_live_start_state(client: Any, torrent_hash: str) -> dict[str, Any]:
result: dict[str, Any] = {'hash': torrent_hash}
fields = (
('state', 'd.state'),
('active', 'd.is_active'),
('open', 'd.is_open'),
('priority', 'd.priority'),
('message', 'd.message'),
('label', 'd.custom1'),
)
for key, method in fields:
try:
value = client.call(method, torrent_hash)
result[key] = int(value or 0) if key in {'state', 'active', 'open', 'priority'} else str(value or '')
except Exception as exc:
result[f'{key}_error'] = str(exc)
# Note: Do not treat d.is_open or state=1 as resumed; Paused can also have those values.
# Smart Queue counts a start only after d.is_active=1, meaning the pause was actually removed.
result['started'] = bool(int(result.get('active') or 0))
return result
def _set_smart_queue_label(client: Any, torrent_hash: str, attempts: int = 3) -> bool:
    for attempt in range(max(1, attempts)):
        try:
@@ -186,31 +303,87 @@ def _mark_auto_paused(client: Any, profile_id: int, torrent: dict[str, Any]) ->
    return _set_smart_queue_label(client, torrent_hash)
def _is_smart_queue_hold(torrent: dict[str, Any] | None, manage_stopped: bool = True) -> bool:
    if not torrent or int(torrent.get('complete') or 0):
        return False
    if str(torrent.get('label') or '') == SMART_QUEUE_LABEL:
        return True
    # Note: Paused in rTorrent usually has state=1 and active=0, so state=0 must not be required.
    # This lets Smart Queue treat paused torrents as pending and fill the queue target later.
    if bool(torrent.get('paused')):
        return True
    # Note: Fully stopped items are managed only when Use stopped torrents is enabled.
    if not manage_stopped:
        return False
    return not int(torrent.get('state') or 0)

def _clear_untracked_smart_queue_label(client: Any, torrent_hash: str, current_label: str) -> bool:
    if current_label != SMART_QUEUE_LABEL:
        return False
    try:
        # Note: Clear an orphaned Smart Queue label when no previous-label entry exists in the database.
        client.call('d.custom1.set', torrent_hash, '')
        return True
    except Exception:
        return False

def _cleanup_auto_labels(client: Any, profile_id: int, torrents: list[dict[str, Any]], keep_hashes: set[str], manage_stopped: bool = True) -> list[str]:
    by_hash = {str(t.get('hash') or ''): t for t in torrents}
    restored: list[str] = []
    with connect() as conn:
        rows = conn.execute('SELECT torrent_hash FROM smart_queue_auto_labels WHERE profile_id=?', (profile_id,)).fetchall()
        tracked_hashes = {str(row.get('torrent_hash') or '') for row in rows if row.get('torrent_hash')}
        for row in rows:
            h = str(row.get('torrent_hash') or '')
            t = by_hash.get(h)
            if not h or h in keep_hashes:
                continue
            current_label = '' if t is None else str(t.get('label') or '')
            if not _is_smart_queue_hold(t, manage_stopped):
                if _restore_auto_label(client, profile_id, h, None if t is None else current_label):
                    restored.append(h)
                continue
            if current_label != SMART_QUEUE_LABEL:
                _set_smart_queue_label(client, h)
        for h, t in by_hash.items():
            if not h or h in keep_hashes or h in tracked_hashes or _is_smart_queue_hold(t, manage_stopped):
                continue
            if _clear_untracked_smart_queue_label(client, h, str(t.get('label') or '')):
                restored.append(h)
    return restored
def _is_running_download_slot(t: dict[str, Any]) -> bool:
"""Return True for incomplete torrents that already occupy a Smart Queue slot."""
# Note: The Smart Queue limit means the target number of actually active slots.
# Paused can have state=1/open=1, so a slot is counted only after d.is_active=1.
if int(t.get('complete') or 0):
return False
if str(t.get('label') or '') == SMART_QUEUE_LABEL:
return False
status = str(t.get('status') or '').lower()
if status == 'checking' or status == 'paused' or bool(t.get('paused')):
return False
return bool(int(t.get('active') or 0))
def _is_waiting_download_candidate(t: dict[str, Any], manage_stopped: bool) -> bool:
"""Return True for paused/held torrents Smart Queue may resume later."""
if int(t.get('complete') or 0):
return False
if str(t.get('label') or '') == SMART_QUEUE_LABEL:
return True
# Note: Paused items are the primary source for filling the queue, regardless of manage_stopped.
if bool(t.get('paused')) or str(t.get('status') or '').lower() == 'paused':
return True
# Note: Stopped items are added only when the user enabled Use stopped torrents.
return bool(manage_stopped) and not int(t.get('state') or 0)
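The two predicates above partition incomplete torrents into occupied slots and resumable candidates. A self-contained sketch with simplified copies of both checks (the string `'SmartQueue'` stands in for `SMART_QUEUE_LABEL`, and the row dicts are made up in the shape of `list_torrents` output):

```python
def is_running_download_slot(t: dict) -> bool:
    if int(t.get('complete') or 0) or str(t.get('label') or '') == 'SmartQueue':
        return False
    status = str(t.get('status') or '').lower()
    if status in ('checking', 'paused') or bool(t.get('paused')):
        return False
    # Only d.is_active=1 counts toward the slot limit.
    return bool(int(t.get('active') or 0))

def is_waiting_download_candidate(t: dict, manage_stopped: bool) -> bool:
    if int(t.get('complete') or 0):
        return False
    if str(t.get('label') or '') == 'SmartQueue':
        return True
    if bool(t.get('paused')) or str(t.get('status') or '').lower() == 'paused':
        return True
    # Stopped items join the queue only when the user opted in.
    return bool(manage_stopped) and not int(t.get('state') or 0)

running = {'complete': 0, 'label': '', 'status': 'Downloading', 'paused': False, 'active': 1, 'state': 1}
paused_row = {'complete': 0, 'label': '', 'status': 'Paused', 'paused': True, 'active': 0, 'state': 1}
stopped_row = {'complete': 0, 'label': '', 'status': 'Stopped', 'paused': False, 'active': 0, 'state': 0}

assert is_running_download_slot(running) and not is_waiting_download_candidate(running, False)
assert is_waiting_download_candidate(paused_row, False)       # paused: always a candidate
assert not is_waiting_download_candidate(stopped_row, False)  # stopped: only with manage_stopped
assert is_waiting_download_candidate(stopped_row, True)
```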
def check(profile: dict | None = None, user_id: int | None = None, force: bool = False) -> dict[str, Any]:
    profile = profile or active_profile()
    if not profile:
@@ -219,13 +392,37 @@ def check(profile: dict | None = None, user_id: int | None = None, force: bool =
    profile_id = int(profile['id'])
    settings = get_settings(profile_id, user_id)
    if not force and not int(settings.get('enabled') or 0):
        restored: list[str] = []
        try:
            # Note: When Smart Queue is disabled, only technical labels are cleaned up, without starting or pausing torrents.
            torrents = rtorrent.list_torrents(profile)
            restored = _cleanup_auto_labels(rtorrent.client_for(profile), profile_id, torrents, set(), bool(settings.get('manage_stopped')))
        except Exception:
            restored = []
        add_history(profile_id, 'skipped_disabled', [], [], 0, {'enabled': False, 'labels_restored': restored}, user_id)
        return {'ok': True, 'enabled': False, 'paused': [], 'resumed': [], 'labels_restored': restored, 'message': 'Smart Queue disabled'}
    torrents = rtorrent.list_torrents(profile)
    excluded = _excluded_hashes(profile_id, user_id)
    manage_stopped = bool(settings.get('manage_stopped'))

    def is_managed_hold(t: dict[str, Any]) -> bool:
        return str(t.get('label') or '') == SMART_QUEUE_LABEL

    # Note: Count Smart Queue slots by d.is_active because Paused can have state=1/open=1 and must not occupy the limit.
    downloading = [
        t for t in torrents
        if _is_running_download_slot(t)
        and not is_managed_hold(t)
        and t.get('hash') not in excluded
    ]
    # Note: Candidates also include regular Paused items without a label. Otherwise the queue sees only one or two items
    # and cannot fill the configured target of 100.
    stopped = [
        t for t in torrents
        if t.get('hash') not in excluded
        and _is_waiting_download_candidate(t, manage_stopped)
        and not _is_running_download_slot(t)
    ]
    min_speed = int(settings.get('min_speed_bytes') or 0)
    min_seeds = int(settings.get('min_seeds') or 0)
    stalled_seconds = int(settings.get('stalled_seconds') or 300)
@@ -275,9 +472,9 @@ def check(profile: dict | None = None, user_id: int | None = None, force: bool =
    to_pause: list[dict[str, Any]] = pause_rank[:max(0, len(downloading) - max_active)]
    pause_hashes = {str(t.get('hash') or '') for t in to_pause}
    # Note: Stalled rotation runs only when the queue is full. When slots are missing, Smart Queue should
    # first add missing items instead of pausing existing or incorrectly detected stalled items.
    if candidates and len(downloading) >= max_active:
        replaceable_stalled = [t for t in stalled if str(t.get('hash') or '') not in pause_hashes]
        for t in replaceable_stalled[:max(0, len(candidates) - len(to_pause))]:
            to_pause.append(t)
@@ -286,27 +483,64 @@ def check(profile: dict | None = None, user_id: int | None = None, force: bool =
active_after_pause = max(0, len(downloading) - len(to_pause)) active_after_pause = max(0, len(downloading) - len(to_pause))
available_slots = max(0, max_active - active_after_pause) available_slots = max(0, max_active - active_after_pause)
to_resume = candidates[:available_slots] to_resume = candidates[:available_slots]
# Note: Items outside the current start batch are explicitly marked as pending Smart Queue items.
to_label_waiting = candidates[available_slots:]
c = rtorrent.client_for(profile) c = rtorrent.client_for(profile)
rtorrent_cap = _ensure_rtorrent_download_cap(c, max_active)
paused: list[str] = [] paused: list[str] = []
resumed: list[str] = [] resumed: list[str] = []
label_failed: list[str] = [] label_failed: list[str] = []
start_failed: list[dict[str, str]] = []
start_no_effect: list[dict[str, Any]] = []
resume_requested: list[str] = []
start_results: list[dict[str, Any]] = []
for t in to_pause: for t in to_pause:
try: try:
c.call('d.pause', t['hash']) pause_result = rtorrent.pause_hash(c, t['hash'])
if not pause_result.get('ok'):
raise RuntimeError(pause_result.get('error') or 'pause failed')
if not _mark_auto_paused(c, profile_id, t): if not _mark_auto_paused(c, profile_id, t):
label_failed.append(t['hash']) label_failed.append(t['hash'])
paused.append(t['hash']) paused.append(t['hash'])
except Exception: except Exception:
pass pass
for t in to_resume:
for t in to_label_waiting:
h = str(t.get('hash') or '')
if not h or h in pause_hashes:
continue
try: try:
_restore_auto_label(c, profile_id, t['hash'], str(t.get('label') or '')) if not _mark_auto_paused(c, profile_id, t):
c.call('d.resume', t['hash']) label_failed.append(h)
c.call('d.start', t['hash'])
resumed.append(t['hash'])
except Exception: except Exception:
pass label_failed.append(h)
restored = _cleanup_auto_labels(c, profile_id, torrents, set(paused))
add_history(profile_id, 'force_check' if force else 'auto_check', paused, resumed, len(torrents), {'excluded': len(excluded), 'enabled': bool(settings.get('enabled')), 'auto_label': SMART_QUEUE_LABEL, 'labels_restored': restored, 'labels_failed': label_failed, 'max_active_downloads': max_active, 'active_before': len(downloading), 'active_after': active_after_pause + len(resumed)}, user_id) # Note: Start the whole candidate batch in one round. Remove the label after an accepted RPC,
return {'ok': True, 'enabled': bool(settings.get('enabled')), 'paused': paused, 'resumed': resumed, 'labels_restored': restored, 'labels_failed': label_failed, 'checked': len(torrents), 'excluded': len(excluded), 'settings': settings} # because rTorrent may keep some items in its own queue with active=0 despite a valid d.start/d.resume.
for t in to_resume:
h = str(t.get('hash') or '')
if not h:
continue
try:
result = _start_download(c, t)
start_results.append(result)
resume_requested.append(h)
except Exception as exc:
start_failed.append({'hash': h, 'error': str(exc)})
active_verified, start_no_effect = _verify_started_downloads(c, resume_requested)
for h in active_verified:
_restore_auto_label(c, profile_id, h, None)
# Note: History shows only torrents actually unpaused, not just the number of sent commands.
resumed = list(active_verified)
keep_labels = (
set(paused)
| {str(t.get('hash') or '') for t in to_label_waiting}
| {str(t.get('hash') or '') for t in stopped if str(t.get('label') or '') == SMART_QUEUE_LABEL and str(t.get('hash') or '') not in set(resumed)}
)
restored = _cleanup_auto_labels(c, profile_id, torrents, keep_labels, manage_stopped)
details = {'excluded': len(excluded), 'enabled': bool(settings.get('enabled')), 'auto_label': SMART_QUEUE_LABEL, 'labels_restored': restored, 'labels_failed': label_failed, 'start_failed': start_failed, 'start_no_effect': start_no_effect, 'start_results': start_results, 'resume_requested': resume_requested, 'active_verified': active_verified, 'waiting_labeled': len(to_label_waiting), 'manage_stopped': manage_stopped, 'max_active_downloads': max_active, 'active_before': len(downloading), 'active_after_expected': active_after_pause + len(resumed), 'paused_planned': len(to_pause), 'resumed_planned': len(to_resume), 'rtorrent_cap': rtorrent_cap}
add_history(profile_id, 'force_check' if force else 'auto_check', paused, resumed, len(torrents), details, user_id)
return {'ok': True, 'enabled': bool(settings.get('enabled')), 'paused': paused, 'resumed': resumed, 'resume_requested': resume_requested, 'waiting_labeled': len(to_label_waiting), 'labels_restored': restored, 'labels_failed': label_failed, 'start_failed': start_failed, 'start_no_effect': start_no_effect, 'active_verified': active_verified, 'rtorrent_cap': rtorrent_cap, 'checked': len(torrents), 'excluded': len(excluded), 'settings': settings}
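The slot arithmetic in this hunk (pause down to the cap, fill freed slots from candidates, label the overflow as waiting) can be sketched in isolation. This is a hypothetical standalone helper illustrating the same arithmetic, not code from the repository:

```python
def plan_slots(downloading: list, candidates: list, max_active: int):
    # Pause just enough running items to get back under max_active.
    to_pause = downloading[:max(0, len(downloading) - max_active)]
    active_after_pause = max(0, len(downloading) - len(to_pause))
    # Freed capacity is filled from the waiting candidates, in order.
    available_slots = max(0, max_active - active_after_pause)
    to_resume = candidates[:available_slots]
    # Everything past the batch is only labeled as pending, not started.
    to_label_waiting = candidates[available_slots:]
    return to_pause, to_resume, to_label_waiting
```

With 7 running items and a cap of 5, two get paused and no candidate starts; with 3 running, two candidates start and the rest wait.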

View File

@@ -26,9 +26,11 @@ class TorrentCache:
         profile_id = int(profile["id"])
         try:
             rows = rtorrent.list_torrents(profile)
+            with self._lock:
+                old = dict(self._data.get(profile_id, {}))
+            post_check_changes = rtorrent.apply_post_check_policy(profile, rows, old)
             fresh = {t["hash"]: t for t in rows}
             with self._lock:
-                old = self._data.get(profile_id, {})
                 added = [v for h, v in fresh.items() if h not in old]
                 removed = [h for h in old.keys() if h not in fresh]
                 updated = []
@@ -45,7 +47,7 @@
                 self._data[profile_id] = fresh
                 self._errors[profile_id] = ""
                 self._updated_at[profile_id] = time()
-            return {"ok": True, "profile_id": profile_id, "added": added, "updated": updated, "removed": removed}
+            return {"ok": True, "profile_id": profile_id, "added": added, "updated": updated, "removed": removed, "post_check_changes": post_check_changes}
         except Exception as exc:
             with self._lock:
                 self._errors[profile_id] = str(exc)
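The cache diff compares the previous hash-to-row map against the fresh one to emit `added`/`updated`/`removed` patches. A minimal standalone sketch of that comparison (hypothetical helper, assuming plain dicts keyed by hash):

```python
def diff_snapshots(old: dict, fresh: dict) -> dict:
    # Hashes present only in the fresh map are new torrents.
    added = [v for h, v in fresh.items() if h not in old]
    # Hashes that disappeared were removed from the client.
    removed = [h for h in old if h not in fresh]
    # Hashes present in both but with changed rows become patch updates.
    updated = [v for h, v in fresh.items() if h in old and old[h] != v]
    return {"added": added, "removed": removed, "updated": updated}
```

Emitting only this diff over the socket keeps patches small compared to resending the whole snapshot.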

View File

@@ -0,0 +1,209 @@
from __future__ import annotations

import json
import threading
import time
from typing import Any

from ..db import connect, utcnow
from . import rtorrent
from .torrent_cache import torrent_cache

CACHE_SECONDS = 15 * 60
_STARTUP_DELAY_SECONDS = 3 * 60
_STARTED_AT = time.monotonic()
_LOCK = threading.Lock()
_BACKGROUND_LOCK = threading.Lock()
_BACKGROUND_PROFILE_IDS: set[int] = set()


def _human_size(value: int | float) -> str:
    size = float(value or 0)
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB"):
        if abs(size) < 1024 or unit == "PiB":
            return f"{size:.1f} {unit}" if unit != "B" else f"{int(size)} B"
        size /= 1024
    return f"{size:.1f} PiB"


def _empty(profile_id: int, error: str = "") -> dict[str, Any]:
    now = utcnow()
    return {
        "profile_id": profile_id,
        "torrent_count": 0,
        "complete_count": 0,
        "incomplete_count": 0,
        "total_torrent_size": 0,
        "total_torrent_size_h": _human_size(0),
        "total_file_size": 0,
        "total_file_size_h": _human_size(0),
        "file_count": 0,
        "seeds_total": 0,
        "peers_total": 0,
        "down_rate_total": 0,
        "up_rate_total": 0,
        "down_rate_total_h": "0 B/s",
        "up_rate_total_h": "0 B/s",
        "sampled_torrents": 0,
        "errors": [],
        "error": error,
        "created_at": now,
        "updated_at": now,
        "age_seconds": 0,
        "stale": True,
    }


def _load_cached(profile_id: int) -> dict[str, Any] | None:
    with connect() as conn:
        row = conn.execute("SELECT * FROM torrent_stats_cache WHERE profile_id=?", (profile_id,)).fetchone()
    if not row:
        return None
    payload = json.loads(row.get("payload_json") or "{}")
    payload["created_at"] = row.get("created_at")
    payload["updated_at"] = row.get("updated_at")
    try:
        payload["age_seconds"] = max(0, int(time.time() - float(row.get("updated_epoch") or 0)))
    except Exception:
        payload["age_seconds"] = 0
    payload["stale"] = payload["age_seconds"] >= CACHE_SECONDS
    return payload


def _save(profile_id: int, payload: dict[str, Any]) -> dict[str, Any]:
    now = utcnow()
    payload = dict(payload)
    payload["updated_at"] = now
    payload["age_seconds"] = 0
    payload["stale"] = False
    with connect() as conn:
        conn.execute(
            """
            INSERT INTO torrent_stats_cache(profile_id,payload_json,created_at,updated_at,updated_epoch)
            VALUES(?,?,?,?,?)
            ON CONFLICT(profile_id) DO UPDATE SET
                payload_json=excluded.payload_json,
                updated_at=excluded.updated_at,
                updated_epoch=excluded.updated_epoch
            """,
            (profile_id, json.dumps(payload), now, now, time.time()),
        )
    return payload


def collect(profile: dict) -> dict[str, Any]:
    """Collect heavier torrent/file statistics on demand or every cache window."""
    profile_id = int(profile.get("id") or 0)
    torrents = rtorrent.list_torrents(profile)
    total_torrent_size = sum(int(t.get("size") or 0) for t in torrents)
    seeds_total = sum(int(t.get("seeds") or 0) for t in torrents)
    peers_total = sum(int(t.get("peers") or 0) for t in torrents)
    down_rate_total = sum(int(t.get("down_rate") or 0) for t in torrents)
    up_rate_total = sum(int(t.get("up_rate") or 0) for t in torrents)
    total_file_size = 0
    file_count = 0
    errors: list[dict[str, str]] = []
    # Note: File metadata is queried per torrent only during cached statistics refresh, not during every UI poll.
    for torrent in torrents:
        h = str(torrent.get("hash") or "")
        if not h:
            continue
        try:
            files = rtorrent.torrent_files(profile, h)
            file_count += len(files)
            total_file_size += sum(int(f.get("size") or 0) for f in files)
        except Exception as exc:
            errors.append({"hash": h, "name": str(torrent.get("name") or ""), "error": str(exc)})
    torrent_cache.refresh(profile)
    payload = {
        "profile_id": profile_id,
        "torrent_count": len(torrents),
        "complete_count": sum(1 for t in torrents if int(t.get("complete") or 0)),
        "incomplete_count": sum(1 for t in torrents if not int(t.get("complete") or 0)),
        "total_torrent_size": total_torrent_size,
        "total_torrent_size_h": _human_size(total_torrent_size),
        "total_file_size": total_file_size,
        "total_file_size_h": _human_size(total_file_size),
        "file_count": file_count,
        "seeds_total": seeds_total,
        "peers_total": peers_total,
        "down_rate_total": down_rate_total,
        "up_rate_total": up_rate_total,
        "down_rate_total_h": rtorrent.human_rate(down_rate_total),
        "up_rate_total_h": rtorrent.human_rate(up_rate_total),
        "sampled_torrents": len(torrents),
        "errors": errors[:25],
        "error": "" if not errors else f"File metadata failed for {len(errors)} torrent(s)",
        "created_at": utcnow(),
    }
    return _save(profile_id, payload)


def get(profile: dict | None, force: bool = False) -> dict[str, Any]:
    if not profile:
        return _empty(0, "No active rTorrent profile")
    profile_id = int(profile.get("id") or 0)
    cached = _load_cached(profile_id)
    if cached and not force and not cached.get("stale"):
        return cached
    if cached and not force:
        return cached
    with _LOCK:
        cached = _load_cached(profile_id)
        if cached and not force and not cached.get("stale"):
            return cached
        return collect(profile)


def maybe_refresh(profile: dict | None, force: bool = False) -> dict[str, Any] | None:
    if not profile:
        return None
    if not force and time.monotonic() - _STARTED_AT < _STARTUP_DELAY_SECONDS:
        return None
    cached = _load_cached(int(profile.get("id") or 0))
    if cached and not cached.get("stale") and not force:
        return cached
    try:
        return get(profile, force=True)
    except Exception:
        return cached


def queue_refresh(socketio, profile: dict | None, force: bool = False, emit_update: bool = True, room: str | None = None) -> dict[str, Any] | None:
    """Schedule heavier statistics refresh outside the main WebSocket/system poller."""
    if not profile:
        return None
    if not force and time.monotonic() - _STARTED_AT < _STARTUP_DELAY_SECONDS:
        return _load_cached(int(profile.get("id") or 0))
    profile_id = int(profile.get("id") or 0)
    cached = _load_cached(profile_id)
    if cached and not cached.get("stale") and not force:
        return cached
    with _BACKGROUND_LOCK:
        if profile_id in _BACKGROUND_PROFILE_IDS:
            return cached
        _BACKGROUND_PROFILE_IDS.add(profile_id)
    profile_snapshot = dict(profile)

    def runner():
        try:
            # Note: This can query file metadata per torrent, so it never runs inside the fast CPU/RAM/disk poller.
            stats = get(profile_snapshot, force=True)
            if emit_update and stats:
                payload = {"profile_id": profile_id, "stats": stats}
                socketio.emit("torrent_stats_update", payload, to=room) if room else socketio.emit("torrent_stats_update", payload)
        except Exception as exc:
            if emit_update:
                payload = {"profile_id": profile_id, "ok": False, "error": str(exc)}
                socketio.emit("torrent_stats_update", payload, to=room) if room else socketio.emit("torrent_stats_update", payload)
        finally:
            with _BACKGROUND_LOCK:
                _BACKGROUND_PROFILE_IDS.discard(profile_id)

    socketio.start_background_task(runner)
    return cached
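The `_human_size` helper in the new file walks binary units, printing whole bytes without a decimal and everything else with one. A standalone copy (mirroring the logic above, so its rounding can be checked in isolation):

```python
def human_size(value):
    # Mirrors _human_size above: divide by 1024 per unit; "B" prints as an
    # integer, larger units with one decimal place.
    size = float(value or 0)
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB"):
        if abs(size) < 1024 or unit == "PiB":
            return f"{size:.1f} {unit}" if unit != "B" else f"{int(size)} B"
        size /= 1024

print(human_size(1536))  # → 1.5 KiB
```

Because the loop returns once `unit == "PiB"` is reached, any value of a petabyte or more is reported in PiB rather than overflowing into a larger unit.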

View File

@@ -36,22 +36,28 @@ def _has_error(row: dict) -> bool:
     return bool(message and any(pattern in message for pattern in _ERROR_PATTERNS))
+def _is_checking(row: dict) -> bool:
+    return str(row.get("status") or "") == "Checking" or _number(row, "hashing") > 0
 def _matches(row: dict, summary_type: str) -> bool:
     status = str(row.get("status") or "")
+    checking = _is_checking(row)
     if summary_type == "all":
         return True
     if summary_type == "downloading":
-        return not bool(row.get("complete")) and bool(row.get("state")) and not bool(row.get("paused"))
+        return not checking and not bool(row.get("complete")) and bool(row.get("state")) and not bool(row.get("paused"))
     if summary_type == "seeding":
-        return status != "Checking" and bool(row.get("complete")) and bool(row.get("state")) and not bool(row.get("paused"))
+        return not checking and bool(row.get("complete")) and bool(row.get("state")) and not bool(row.get("paused"))
     if summary_type == "paused":
-        return bool(row.get("paused")) or status == "Paused"
+        return not checking and (bool(row.get("paused")) or status == "Paused")
     if summary_type == "checking":
-        return status == "Checking" or _number(row, "hashing") > 0
+        return checking
     if summary_type == "error":
         return _has_error(row)
     if summary_type == "stopped":
-        return not bool(row.get("state"))
+        # Note: Stopped count follows the UI filter exactly, so torrents being hash-checked do not inflate an empty Stopped list.
+        return not checking and not bool(row.get("state"))
     return False
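After this change, "checking" takes precedence over every other status bucket: a torrent being hash-checked matches only the checking filter. A hypothetical standalone sketch of that precedence order (not the repository's `_matches`, which also handles "all" and "error"):

```python
def bucket(row: dict) -> str:
    # A hash-checking torrent never counts as downloading/seeding/paused/stopped.
    checking = row.get("status") == "Checking" or row.get("hashing", 0) > 0
    if checking:
        return "checking"
    if row.get("paused"):
        return "paused"
    if not row.get("state"):
        return "stopped"
    return "seeding" if row.get("complete") else "downloading"
```

Making the buckets mutually exclusive keeps the summary counts adding up to the torrent total.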

View File

@@ -1,30 +1,52 @@
 from __future__ import annotations
+import threading
 import psutil
-from flask_socketio import emit
+from flask_socketio import emit, join_room, leave_room, disconnect
 from ..config import POLL_INTERVAL
 from .preferences import active_profile, get_profile
 from .torrent_cache import torrent_cache
 from .torrent_summary import cached_summary
-from . import rtorrent, smart_queue, traffic_history, automation_rules
+from . import rtorrent, smart_queue, traffic_history, automation_rules, torrent_stats, auth
+def _profile_room(profile_id: int) -> str:
+    return f"profile:{int(profile_id)}"
+def _poller_profiles() -> list[dict]:
+    # Note: Background polling has no browser session, so auth-enabled mode refreshes all profiles and emits only to per-profile rooms.
+    if not auth.enabled():
+        profile = active_profile()
+        return [profile] if profile else []
+    from ..db import connect
+    with connect() as conn:
+        return conn.execute("SELECT * FROM rtorrent_profiles ORDER BY id").fetchall()
+def _emit_profile(socketio, event: str, payload: dict, profile_id: int) -> None:
+    target = _profile_room(profile_id) if auth.enabled() else None
+    socketio.emit(event, payload, to=target) if target else socketio.emit(event, payload)
 _started = False
+_start_lock = threading.Lock()
 def register_socketio_handlers(socketio):
+    global _started
     def poller():
         tick = 0
         while True:
-            profile = active_profile()
-            if profile:
+            for profile in _poller_profiles():
+                if not profile:
+                    continue
+                pid = int(profile["id"])
                 diff = torrent_cache.refresh(profile)
-                heartbeat = {"ok": bool(diff.get("ok")), "profile_id": profile["id"], "tick": tick, "error": diff.get("error", "")}
+                heartbeat = {"ok": bool(diff.get("ok")), "profile_id": pid, "tick": tick, "error": diff.get("error", "")}
                 if diff.get("ok") and (diff["added"] or diff["updated"] or diff["removed"]):
-                    socketio.emit("torrent_patch", {**diff, "summary": cached_summary(profile["id"], torrent_cache.snapshot(profile["id"]), force=True)})
+                    _emit_profile(socketio, "torrent_patch", {**diff, "summary": cached_summary(pid, torrent_cache.snapshot(pid), force=True)}, pid)
                 elif not diff.get("ok"):
-                    socketio.emit("rtorrent_error", diff)
+                    _emit_profile(socketio, "rtorrent_error", diff, pid)
                 try:
                     status = rtorrent.system_status(profile)
                     if bool(profile.get("is_remote")):
@@ -35,41 +57,62 @@ def register_socketio_handlers(socketio):
                     status["ram"] = psutil.virtual_memory().percent
                     status["usage_source"] = "local"
                     status["usage_available"] = True
-                    status["profile_id"] = profile["id"]
-                    traffic_history.record(profile["id"], status.get("down_rate", 0), status.get("up_rate", 0), status.get("total_down", 0), status.get("total_up", 0))
-                    socketio.emit("system_stats", status)
+                    status["profile_id"] = pid
+                    traffic_history.record(pid, status.get("down_rate", 0), status.get("up_rate", 0), status.get("total_down", 0), status.get("total_up", 0))
+                    _emit_profile(socketio, "system_stats", status, pid)
                     heartbeat["ok"] = True
                 except Exception as exc:
                     heartbeat["ok"] = False
                     heartbeat["error"] = str(exc)
-                    socketio.emit("rtorrent_error", {"profile_id": profile["id"], "error": str(exc)})
+                    _emit_profile(socketio, "rtorrent_error", {"profile_id": pid, "error": str(exc)}, pid)
+                if tick % max(1, int(15 * 60 / POLL_INTERVAL)) == 0:
+                    # Note: Queue heavier torrent statistics outside the fast system_stats poller.
+                    torrent_stats.queue_refresh(socketio, profile, force=False, room=_profile_room(pid) if auth.enabled() else None)
                 if tick % max(1, int(30 / POLL_INTERVAL)) == 0:
                     try:
                         result = smart_queue.check(profile, force=False)
                         if result.get("enabled"):
-                            socketio.emit("smart_queue_update", result)
+                            _emit_profile(socketio, "smart_queue_update", result, pid)
+                        if result.get("paused") or result.get("resumed") or result.get("resume_requested"):
+                            # Note: After Smart Queue changes, refresh cache immediately so the Downloading list does not wait for the next poller cycle.
+                            queue_diff = torrent_cache.refresh(profile)
+                            if queue_diff.get("ok"):
+                                _emit_profile(socketio, "torrent_patch", {**queue_diff, "summary": cached_summary(pid, torrent_cache.snapshot(pid), force=True)}, pid)
                     except Exception as exc:
-                        socketio.emit("smart_queue_update", {"ok": False, "error": str(exc)})
+                        _emit_profile(socketio, "smart_queue_update", {"ok": False, "error": str(exc)}, pid)
                     try:
                         auto_result = automation_rules.check(profile, force=False)
                         if auto_result.get("applied"):
-                            socketio.emit("automation_update", auto_result)
+                            _emit_profile(socketio, "automation_update", auto_result, pid)
                     except Exception as exc:
-                        socketio.emit("automation_update", {"ok": False, "error": str(exc)})
+                        _emit_profile(socketio, "automation_update", {"ok": False, "error": str(exc)}, pid)
-                socketio.emit("heartbeat", heartbeat)
+                _emit_profile(socketio, "heartbeat", heartbeat, pid)
             tick += 1
             socketio.sleep(POLL_INTERVAL)
+    def ensure_poller_started():
+        global _started
+        with _start_lock:
+            if not _started:
+                # Note: The poller starts with the app, so Smart Queue and automations work without an open UI.
+                socketio.start_background_task(poller)
+                _started = True
+    ensure_poller_started()
     @socketio.on("connect")
     def handle_connect():
-        global _started
-        if not _started:
-            socketio.start_background_task(poller)
-            _started = True
+        ensure_poller_started()
+        if auth.enabled() and not auth.current_user_id():
+            # Note: Socket.IO uses the same session auth as REST API; unauthenticated clients are disconnected.
+            disconnect()
+            return False
         profile = active_profile()
+        if profile:
+            join_room(_profile_room(profile["id"]))
         emit("connected", {"ok": True, "profile": profile})
         if not profile:
-            # Note: Fresh installs have no rTorrent yet; tell the client to show setup instead of waiting for a snapshot.
+            # Note: Fresh installs or users without profile access get setup state, not another user's snapshot.
             emit("profile_required", {"ok": True, "profiles": []})
             return
         rows = torrent_cache.snapshot(profile["id"])
@@ -77,6 +120,12 @@ def register_socketio_handlers(socketio):
     @socketio.on("select_profile")
     def handle_select_profile(data):
+        if auth.enabled() and not auth.current_user_id():
+            disconnect()
+            return
+        old_profile = active_profile()
+        if old_profile:
+            leave_room(_profile_room(old_profile["id"]))
         profile_id = int((data or {}).get("profile_id") or 0)
         if not profile_id:
             # Note: Ignore empty profile selections created before the first rTorrent profile exists.
@@ -84,8 +133,9 @@ def register_socketio_handlers(socketio):
             return
         profile = get_profile(profile_id)
         if not profile:
-            emit("rtorrent_error", {"error": "Profile does not exist"})
+            emit("rtorrent_error", {"error": "Profile access denied or profile does not exist"})
             return
+        join_room(_profile_room(profile_id))
         diff = torrent_cache.refresh(profile)
         rows = torrent_cache.snapshot(profile_id)
         emit("torrent_snapshot", {"profile_id": profile_id, "torrents": rows, "summary": cached_summary(profile_id, rows, force=True), "error": diff.get("error", "")})
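The `_emit_profile` pattern above scopes events to a `profile:<id>` room when auth is on and broadcasts otherwise. A hypothetical standalone sketch with a recording stub in place of a real Socket.IO server:

```python
def profile_room(profile_id: int) -> str:
    return f"profile:{int(profile_id)}"

def emit_profile(emit_fn, event: str, payload: dict, profile_id: int, auth_enabled: bool) -> None:
    # With auth on, only clients joined to the profile's room receive the event;
    # without auth, fall back to a plain broadcast.
    target = profile_room(profile_id) if auth_enabled else None
    if target:
        emit_fn(event, payload, to=target)
    else:
        emit_fn(event, payload)

sent = []
def fake_emit(event, payload, to=None):
    # Stand-in for socketio.emit; records where each event was sent.
    sent.append((event, to))

emit_profile(fake_emit, "heartbeat", {}, 7, auth_enabled=True)
emit_profile(fake_emit, "heartbeat", {}, 7, auth_enabled=False)
```

Keeping the room name derivable from the profile id means the poller, the job queue, and the connect handler can all address the same audience without shared state.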

View File

@@ -5,7 +5,7 @@ import threading
 import time
 import uuid
 from concurrent.futures import ThreadPoolExecutor
-from . import rtorrent
+from . import rtorrent, auth
 from .preferences import get_profile
 from ..config import WORKERS
 from ..db import connect, utcnow, default_user_id
@@ -23,7 +23,13 @@ def set_socketio(socketio):
 def _emit(name: str, payload: dict):
-    if _socketio:
-        _socketio.emit(name, payload)
+    if not _socketio:
+        return
+    profile_id = payload.get("profile_id")
+    if auth.enabled() and profile_id:
+        # Note: Job/socket events are sent only to clients joined to the affected profile room.
+        _socketio.emit(name, payload, to=f"profile:{int(profile_id)}")
+    else:
+        _socketio.emit(name, payload)
@@ -97,7 +103,7 @@ def _set_job(job_id: str, status: str, error: str = "", result: dict | None = No
 def enqueue(action_name: str, profile_id: int, payload: dict, user_id: int | None = None, max_attempts: int = 2) -> str:
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
     job_id = uuid.uuid4().hex
     now = utcnow()
     with connect() as conn:
@@ -130,7 +136,7 @@ def _run(job_id: str):
     profile = get_profile(int(job["profile_id"]), int(job["user_id"]))
     if not profile:
         _set_job(job_id, "failed", "rTorrent profile does not exist", finished=True)
-        _emit("job_update", {"id": job_id, "status": "failed", "error": "profile not found"})
+        _emit("job_update", {"id": job_id, "profile_id": job.get("profile_id"), "status": "failed", "error": "profile not found"})
         return
     profile_id = int(profile["id"])
     ordered_lock = None
@@ -150,19 +156,26 @@ def _run(job_id: str):
         with connect() as conn:
             conn.execute("UPDATE jobs SET status='running', attempts=?, started_at=COALESCE(started_at, ?), updated_at=? WHERE id=?", (attempts, utcnow(), utcnow(), job_id))
         _emit("operation_started", {"job_id": job_id, "action": job["action"], "profile_id": profile["id"], "hashes": payload.get("hashes") or [], "hash_count": len(payload.get("hashes") or []), "bulk": len(payload.get("hashes") or []) > 1})
-        _emit("job_update", {"id": job_id, "status": "running", "attempts": attempts})
+        _emit("job_update", {"id": job_id, "profile_id": profile["id"], "status": "running", "attempts": attempts})
         result = _execute(profile, job["action"], payload)
+        fresh = _job_row(job_id)
+        # Note: Emergency cancel keeps a cancelled job from being overwritten when work finishes later.
+        if fresh and fresh["status"] == "cancelled":
+            return
         _set_job(job_id, "done", result=result, finished=True)
         _emit("operation_finished", {"job_id": job_id, "action": job["action"], "profile_id": profile["id"], "hashes": payload.get("hashes") or [], "hash_count": len(payload.get("hashes") or []), "bulk": len(payload.get("hashes") or []) > 1, "result": result})
-        _emit("job_update", {"id": job_id, "status": "done", "result": result})
+        _emit("job_update", {"id": job_id, "profile_id": profile["id"], "status": "done", "result": result})
     except Exception as exc:
         fresh = _job_row(job_id) or {}
         attempts = int(fresh.get("attempts") or 1)
         max_attempts = int(fresh.get("max_attempts") or 2)
+        # Note: Emergency cancel keeps an exception from a cancelled job from moving it back to retry or failed.
+        if fresh and fresh.get("status") == "cancelled":
+            return
         status = "pending" if attempts < max_attempts else "failed"
         _set_job(job_id, status, str(exc), finished=(status == "failed"))
         _emit("operation_failed", {"job_id": job_id, "action": job.get("action"), "profile_id": job.get("profile_id"), "hashes": payload.get("hashes") or [], "error": str(exc)})
-        _emit("job_update", {"id": job_id, "status": status, "error": str(exc), "attempts": attempts})
+        _emit("job_update", {"id": job_id, "profile_id": job.get("profile_id"), "status": status, "error": str(exc), "attempts": attempts})
         if status == "pending":
             _executor.submit(_run, job_id)
     finally:
@@ -182,6 +195,9 @@ def _job_summary(row: dict, payload: dict, result: dict) -> str:
     ctx = payload.get("job_context") or {}
     count = int(ctx.get("hash_count") or len(payload.get("hashes") or []) or result.get("count") or 0)
     parts = []
+    if ctx.get("bulk_label"):
+        # Note: Shows which generated bulk part is being displayed in the job queue.
+        parts.append(f"{ctx.get('bulk_label')} of {ctx.get('bulk_parts')}")
     if count:
         parts.append(("bulk " if count > 1 else "single ") + f"{count} torrent(s)")
     if ctx.get("target_path"):
@@ -215,36 +231,65 @@ def _public_job(row) -> dict:
     return d
+def _job_scope_sql(writable: bool = False) -> tuple[str, tuple]:
+    visible = auth.writable_profile_ids() if writable else auth.visible_profile_ids()
+    if visible is None:
+        return "", ()
+    if not visible:
+        return " WHERE 1=0", ()
+    placeholders = ",".join("?" for _ in visible)
+    return f" WHERE profile_id IN ({placeholders})", tuple(visible)
 def list_jobs(limit: int = 200, offset: int = 0):
     limit = max(1, min(int(limit or 50), 500))
     offset = max(0, int(offset or 0))
+    where, params = _job_scope_sql()
     with connect() as conn:
-        rows = conn.execute("SELECT * FROM jobs ORDER BY created_at DESC LIMIT ? OFFSET ?", (limit, offset)).fetchall()
-        total = conn.execute("SELECT COUNT(*) AS n FROM jobs").fetchone()["n"]
+        rows = conn.execute(f"SELECT * FROM jobs{where} ORDER BY created_at DESC LIMIT ? OFFSET ?", (*params, limit, offset)).fetchall()
+        total = conn.execute(f"SELECT COUNT(*) AS n FROM jobs{where}", params).fetchone()["n"]
     return {"rows": [_public_job(r) for r in rows], "total": total, "limit": limit, "offset": offset}
 def cancel_job(job_id: str) -> bool:
     row = _job_row(job_id)
-    if not row or row["status"] not in {"pending", "failed"}:
+    if not row or row["status"] not in {"pending", "running"}:
         return False
+    # Note: Emergency cancel is useful only for unfinished jobs; failed/done entries stay available for retry or log cleanup.
     _set_job(job_id, "cancelled", finished=True)
-    _emit("job_update", {"id": job_id, "status": "cancelled"})
+    _emit("job_update", {"id": job_id, "profile_id": row.get("profile_id"), "status": "cancelled"})
     return True
 def clear_jobs() -> int:
+    where, params = _job_scope_sql(writable=True)
+    status_clause = "status NOT IN ('pending', 'running')"
+    sql = f"DELETE FROM jobs{where} AND {status_clause}" if where else f"DELETE FROM jobs WHERE {status_clause}"
     with connect() as conn:
-        cur = conn.execute("DELETE FROM jobs WHERE status NOT IN ('pending', 'running')")
+        cur = conn.execute(sql, params)
     return int(cur.rowcount or 0)
+def emergency_clear_jobs() -> int:
+    # Note: Emergency cleanup first marks active jobs as cancelled, then clears the whole job log list.
+    now = utcnow()
+    where, params = _job_scope_sql(writable=True)
+    status_clause = "status IN ('pending', 'running')"
+    update_sql = f"UPDATE jobs SET status='cancelled', error='Emergency cancelled by user', finished_at=COALESCE(finished_at, ?), updated_at=?{where} AND {status_clause}" if where else "UPDATE jobs SET status='cancelled', error='Emergency cancelled by user', finished_at=COALESCE(finished_at, ?), updated_at=? WHERE status IN ('pending', 'running')"
+    with connect() as conn:
+        conn.execute(update_sql, (now, now, *params) if where else (now, now))
+        cur = conn.execute(f"DELETE FROM jobs{where}", params) if where else conn.execute("DELETE FROM jobs")
+        deleted = int(cur.rowcount or 0)
+    _emit("job_update", {"status": "cleared", "emergency": True})
+    return deleted
 def retry_job(job_id: str) -> bool:
     row = _job_row(job_id)
     if not row or row["status"] not in {"failed", "cancelled"}:
         return False
     with connect() as conn:
         conn.execute("UPDATE jobs SET status='pending', error='', finished_at=NULL, updated_at=? WHERE id=?", (utcnow(), job_id))
_emit("job_update", {"id": job_id, "status": "pending"}) _emit("job_update", {"id": job_id, "profile_id": row.get("profile_id"), "status": "pending"})
_executor.submit(_run, job_id) _executor.submit(_run, job_id)
return True return True
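The `_job_scope_sql` helper builds a `WHERE profile_id IN (...)` clause from the caller's visible profiles and is then spliced into every jobs query. A minimal standalone sketch of the same pattern, using an in-memory SQLite table and a hard-coded visibility list in place of the real `auth` module (the names `job_scope_sql` and the sample data are illustrative, not from the codebase):

```python
import sqlite3

def job_scope_sql(visible):
    # None means "no restriction" (e.g. an admin); an empty list means "match nothing".
    if visible is None:
        return "", ()
    if not visible:
        return " WHERE 1=0", ()
    placeholders = ",".join("?" for _ in visible)
    return f" WHERE profile_id IN ({placeholders})", tuple(visible)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id TEXT, profile_id INTEGER)")
conn.executemany("INSERT INTO jobs VALUES (?, ?)", [("a", 1), ("b", 2), ("c", 2)])

# The clause and its parameters are appended ahead of LIMIT/OFFSET parameters.
where, params = job_scope_sql([2])
rows = conn.execute(f"SELECT id FROM jobs{where}", params).fetchall()
print([r[0] for r in rows])  # → ['b', 'c']
```

Using `?` placeholders for the profile ids (rather than interpolating them) keeps the dynamically built clause safe against SQL injection; only the placeholder count varies.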


@@ -0,0 +1,24 @@
<!doctype html>
<html lang="en" data-bs-theme="dark">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>pyTorrent {{ code }}</title>
  <link href="{{ bootstrap_theme_url('default') }}" rel="stylesheet">
  <link href="{{ frontend_asset_url('fontawesome_css') }}" rel="stylesheet">
  <link href="{{ static_url('styles.css') }}" rel="stylesheet">
</head>
<body class="error-page">
  <main class="error-card" role="alert">
    <div class="error-brand"><i class="fa-solid fa-robot"></i> pyTorrent</div>
    <div class="error-icon" aria-hidden="true"><i class="fa-solid {{ icon }}"></i></div>
    <p class="error-code">{{ code }}</p>
    <h1>{{ title }}</h1>
    <p>{{ message }}</p>
    <div class="error-actions">
      <a class="btn btn-primary" href="{{ url_for('main.index') }}"><i class="fa-solid fa-house"></i> Back to dashboard</a>
      <a class="btn btn-outline-secondary" href="{{ url_for('main.docs') }}"><i class="fa-solid fa-book"></i> API docs</a>
    </div>
  </main>
</body>
</html>


@@ -0,0 +1,27 @@
<!doctype html>
<html lang="en" data-bs-theme="dark">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>pyTorrent login</title>
  <link href="{{ bootstrap_theme_url('default') }}" rel="stylesheet">
  <link href="{{ frontend_asset_url('fontawesome_css') }}" rel="stylesheet">
  <link href="{{ static_url('styles.css') }}" rel="stylesheet">
</head>
<body class="auth-page">
  <main class="initial-loader-card auth-card">
    <div class="initial-loader-brand"><i class="fa-solid fa-robot"></i> pyTorrent</div>
    <div class="auth-lock" aria-hidden="true"><i class="fa-solid fa-lock"></i></div>
    <h1 class="initial-loader-title">Sign in</h1>
    <p class="initial-loader-text">Authentication is enabled for this pyTorrent instance.</p>
    {% if error %}<div class="alert alert-danger auth-alert">{{ error }}</div>{% endif %}
    <form class="auth-form" method="post">
      <label class="form-label" for="username">User</label>
      <input id="username" class="form-control" name="username" autocomplete="username" autofocus>
      <label class="form-label" for="password">Password</label>
      <input id="password" class="form-control" name="password" type="password" autocomplete="current-password">
      <button class="btn btn-primary w-100" type="submit"><i class="fa-solid fa-right-to-bracket"></i> Log in</button>
    </form>
  </main>
</body>
</html>


@@ -4,3 +4,4 @@ python-dotenv>=1.0
 geoip2>=4.8
 psutil>=5.9
 simple-websocket>=1.0
+gunicorn>=22.0

scripts/download_frontend_libs.py (new executable file, 113 lines)

@@ -0,0 +1,113 @@
#!/usr/bin/env python3
from __future__ import annotations

import re
from pathlib import Path
from urllib.parse import urljoin
from urllib.request import Request, urlopen

ROOT = Path(__file__).resolve().parents[1]
LIBS_STATIC_DIR = "libs"

BOOTSTRAP_VERSION = "5.3.3"
BOOTSWATCH_VERSION = "5.3.3"
FONTAWESOME_VERSION = "6.5.2"
FLAG_ICONS_VERSION = "7.2.3"
SWAGGER_UI_VERSION = "5"
SOCKET_IO_VERSION = "4.7.5"

BOOTSTRAP_THEMES = (
    "default",
    "flatly",
    "litera",
    "lumen",
    "minty",
    "sketchy",
    "solar",
    "spacelab",
    "united",
    "zephyr",
)

STATIC_ASSETS = {
    "bootstrap_js": {
        "local": f"{LIBS_STATIC_DIR}/bootstrap/{BOOTSTRAP_VERSION}/js/bootstrap.bundle.min.js",
        "cdn": f"https://cdn.jsdelivr.net/npm/bootstrap@{BOOTSTRAP_VERSION}/dist/js/bootstrap.bundle.min.js",
    },
    "socket_io_js": {
        "local": f"{LIBS_STATIC_DIR}/socket.io/{SOCKET_IO_VERSION}/socket.io.min.js",
        "cdn": f"https://cdn.socket.io/{SOCKET_IO_VERSION}/socket.io.min.js",
    },
    "fontawesome_css": {
        "local": f"{LIBS_STATIC_DIR}/fontawesome/{FONTAWESOME_VERSION}/css/all.min.css",
        "cdn": f"https://cdnjs.cloudflare.com/ajax/libs/font-awesome/{FONTAWESOME_VERSION}/css/all.min.css",
    },
    "flag_icons_css": {
        "local": f"{LIBS_STATIC_DIR}/flag-icons/{FLAG_ICONS_VERSION}/css/flag-icons.min.css",
        "cdn": f"https://cdn.jsdelivr.net/gh/lipis/flag-icons@{FLAG_ICONS_VERSION}/css/flag-icons.min.css",
    },
    "swagger_css": {
        "local": f"{LIBS_STATIC_DIR}/swagger-ui/{SWAGGER_UI_VERSION}/swagger-ui.css",
        "cdn": f"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{SWAGGER_UI_VERSION}/swagger-ui.css",
    },
    "swagger_js": {
        "local": f"{LIBS_STATIC_DIR}/swagger-ui/{SWAGGER_UI_VERSION}/swagger-ui-bundle.js",
        "cdn": f"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{SWAGGER_UI_VERSION}/swagger-ui-bundle.js",
    },
}

URL_RE = re.compile(r"url\((['\"]?)(?!data:)(?!https?:)([^)'\"]+)\1\)")


def bootstrap_css_asset(theme: str) -> dict[str, str]:
    if theme == "default":
        return {
            "local": f"{LIBS_STATIC_DIR}/bootstrap/{BOOTSTRAP_VERSION}/css/bootstrap.min.css",
            "cdn": f"https://cdn.jsdelivr.net/npm/bootstrap@{BOOTSTRAP_VERSION}/dist/css/bootstrap.min.css",
        }
    return {
        "local": f"{LIBS_STATIC_DIR}/bootswatch/{BOOTSWATCH_VERSION}/{theme}/bootstrap.min.css",
        "cdn": f"https://cdn.jsdelivr.net/npm/bootswatch@{BOOTSWATCH_VERSION}/dist/{theme}/bootstrap.min.css",
    }


def download(url: str, dest: Path) -> None:
    dest.parent.mkdir(parents=True, exist_ok=True)
    req = Request(url, headers={"User-Agent": "pyTorrent installer"})
    with urlopen(req, timeout=60) as response:
        data = response.read()
    if not data:
        raise RuntimeError(f"Empty response for {url}")
    tmp = dest.with_suffix(dest.suffix + ".tmp")
    tmp.write_bytes(data)
    tmp.replace(dest)
    print(f"OK {dest.relative_to(ROOT)}")


def download_css_with_assets(url: str, dest: Path) -> None:
    download(url, dest)
    text = dest.read_text(encoding="utf-8", errors="ignore")
    for match in URL_RE.finditer(text):
        rel = match.group(2).split("#", 1)[0].split("?", 1)[0]
        if not rel:
            continue
        asset_url = urljoin(url, rel)
        asset_dest = (dest.parent / rel).resolve()
        try:
            asset_dest.relative_to(ROOT)
        except ValueError:
            continue
        if not asset_dest.exists():
            download(asset_url, asset_dest)


def main() -> None:
    items = list(STATIC_ASSETS.values())
    items.extend(bootstrap_css_asset(theme) for theme in BOOTSTRAP_THEMES)
    for item in items:
        url = item["cdn"]
        dest = ROOT / "pytorrent" / "static" / item["local"]
        if dest.suffix == ".css":
            download_css_with_assets(url, dest)
        else:
            download(url, dest)


if __name__ == "__main__":
    main()
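The `URL_RE` pattern is what lets the script mirror a stylesheet's fonts and sprites, not just the CSS file itself: it finds relative `url(...)` references while skipping `data:` URIs and absolute `http(s)` URLs, then resolves each against the CSS file's CDN URL. A small demonstration (the CSS snippet is made up for illustration; the base URL matches the script's Font Awesome entry):

```python
import re
from urllib.parse import urljoin

# Same pattern as the script: relative url(...) references only.
URL_RE = re.compile(r"url\((['\"]?)(?!data:)(?!https?:)([^)'\"]+)\1\)")

css = (
    "@font-face { src: url('../webfonts/fa-solid-900.woff2') format('woff2'); }\n"
    ".bg { background: url(data:image/png;base64,AAAA); }\n"       # skipped: data URI
    ".ext { background: url(https://example.com/x.png); }\n"       # skipped: absolute URL
)

base = "https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.5.2/css/all.min.css"
resolved = []
for m in URL_RE.finditer(css):
    # Strip fragments and query strings, as download_css_with_assets does.
    rel = m.group(2).split("#", 1)[0].split("?", 1)[0]
    resolved.append(urljoin(base, rel))
print(resolved)
```

Only the woff2 reference survives, resolving to `.../font-awesome/6.5.2/webfonts/fa-solid-900.woff2` next to the downloaded CSS, which mirrors how the script computes `asset_dest` relative to the stylesheet's directory.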


@@ -1,16 +1,25 @@
 [Unit]
-Description=pyTorrent web UI for rTorrent
+Description=pyTorrent Web UI
-After=network.target
+After=network-online.target
+Wants=network-online.target
 
 [Service]
 Type=simple
-WorkingDirectory=/opt/pytorrent
-EnvironmentFile=/opt/pytorrent/.env
-ExecStart=/opt/pytorrent/venv/bin/python /opt/pytorrent/app.py
+#User=root
+#Group=root
+User=pytorrent
+Group=pytorrent
+WorkingDirectory=/opt/pyTorrent
+Environment="PYTHONUNBUFFERED=1"
+EnvironmentFile=/opt/pyTorrent/.env
+# Note: threaded Gunicorn preserves Flask-SocketIO background tasks without running Werkzeug in production.
+ExecStart=/opt/pyTorrent/venv/bin/gunicorn -c /opt/pyTorrent/gunicorn.conf.py --worker-class gthread --workers 1 --threads 32 --bind ${PYTORRENT_HOST}:${PYTORRENT_PORT} --access-logfile - --error-logfile - wsgi:app
 Restart=always
 RestartSec=3
-User=www-data
-Group=www-data
+KillSignal=SIGINT
+TimeoutStopSec=20
+NoNewPrivileges=true
+PrivateTmp=true
 
 [Install]
 WantedBy=multi-user.target
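The `ExecStart` line references a `gunicorn.conf.py` that is not part of this diff. A minimal sketch of what such a file could contain is below; the setting names are standard Gunicorn configuration keys, but the actual file's contents are an assumption, and the command-line flags in the unit file override anything set here anyway:

```python
# Hypothetical gunicorn.conf.py sketch -- the real file is not shown in this diff.
worker_class = "gthread"   # threaded workers, matching --worker-class gthread
workers = 1                # single worker so in-process job/queue state stays consistent
threads = 32               # matches --threads 32
graceful_timeout = 20      # aligns with the unit's TimeoutStopSec=20
```

Keeping `workers = 1` matters here: the job executor and Socket.IO state live in process memory, and multiple Gunicorn workers would each hold their own copy.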

wsgi.py (new file, 4 lines)

@@ -0,0 +1,4 @@
from pytorrent import create_app
# Note: Gunicorn imports this object; background Socket.IO tasks still start through create_app().
app = create_app()