add auth support

Mateusz Gruszczyński
2026-05-06 08:38:07 +02:00
parent aea3c92830
commit dc1cac4e6f
20 changed files with 1185 additions and 220 deletions


@@ -9,10 +9,17 @@ PYTORRENT_GEOIP_DB=data/GeoLite2-City.mmdb
PYTORRENT_ALLOW_UNSAFE_WERKZEUG=0
PYTORRENT_SCGI_RETRIES=8
# python -m pytorrent.cli reset-password admin new_Pass
PYTORRENT_AUTH_ENABLE=false
# Reverse proxy / HTTPS
PYTORRENT_PROXY_FIX_ENABLE=false
PYTORRENT_SESSION_COOKIE_SECURE=false
# PYTORRENT_SOCKETIO_CORS_ALLOWED_ORIGINS=https://your-domain.com
# Retention / Smart Queue
PYTORRENT_TRAFFIC_HISTORY_RETENTION_DAYS=90
PYTORRENT_JOBS_RETENTION_DAYS=30
PYTORRENT_SMART_QUEUE_HISTORY_RETENTION_DAYS=30
PYTORRENT_LOG_RETENTION_DAYS=30
PYTORRENT_SMART_QUEUE_LABEL="Smart Queue Paused"

README.md

@@ -1,33 +1,33 @@
# pyTorrent
Single-page web UI for rTorrent inspired by the ruTorrent workflow.
## Features
- Flask + Flask-SocketIO.
- SQLite storage for preferences, SCGI profiles, Bootstrap theme and UI font.
- Multiple rTorrent profiles per user.
- Profiles can be added and edited from the UI; the remote profile flag hides local CPU/RAM usage to avoid confusing it with remote rTorrent host resources.
- Active rTorrent profile switching from the UI.
- Live torrent list over WebSocket.
- Application-side cache with patch updates instead of full table reloads.
- User operations executed through ThreadPoolExecutor.
- `move` and `remove` actions are executed per profile in request order, so later deletes wait for earlier moves.
- Job log shows a short date/time in the table and the full timestamp in the tooltip.
- Bulk start, pause, stop, resume, recheck, remove and move.
- Move supports `move_data=true`; data is physically moved on the rTorrent side in the background and status is polled from a marker file, so long `mv` operations do not hit the SCGI timeout.
- Multi-magnet add modal.
- Bottom status bar with CPU, RAM, rTorrent version, speeds, limits, total DL/UP and port-check status when enabled.
- Torrent context menu.
- Keyboard shortcuts.
- Details tabs: General, Files, Peers, Trackers and Log.
- Smart Queue shows the last 10 operations by default and can expand history to 100 rows.
- Peer GeoIP with MaxMind GeoLite2-City.mmdb and IP cache.
- Static cache busting with MD5 and cache headers.
- Appearance preferences: default Bootstrap or Bootswatch themes Flatly, Litera, Lumen, Minty, Sketchy, Solar, Spacelab, United and Zephyr.
- Font preferences: default theme font, Adwaita Mono and additional matching fonts.
## Run locally
```bash
./install.sh
@@ -35,34 +35,54 @@ Single-page web UI for rTorrent inspired by the ruTorrent workflow.
python app.py
```
Default URL: `http://127.0.0.1:8090`.
## Production run
Preferred mode without development Werkzeug:
```bash
. venv/bin/activate
gunicorn --worker-class gthread --workers 1 --threads 32 --bind 0.0.0.0:8090 --access-logfile - --error-logfile - wsgi:app
```
Note: the app keeps `async_mode="threading"`, so WebSocket, `start_background_task`, operation queues and the poller run in the same model as before.
Alternatives reviewed but not enabled by default:
- Gunicorn with `eventlet`: works with Flask-SocketIO, but requires green threads and monkey patching, which increases regression risk for file and SCGI operations.
- Gunicorn with `gevent`: a valid production option, but it needs extra dependencies and compatibility testing.
- Multiple Gunicorn workers: requires Redis, RabbitMQ or Kafka as the Socket.IO message queue, so it is not a drop-in replacement.
## Reverse proxy
When pyTorrent is served behind a reverse proxy, enable proxy header handling only when the proxy is trusted:
```env
PYTORRENT_PROXY_FIX_ENABLE=true
PYTORRENT_SESSION_COOKIE_SECURE=true
```
The proxy should forward at least:
```txt
X-Forwarded-For
X-Forwarded-Proto
X-Forwarded-Host
X-Forwarded-Port
```
This keeps login redirects, session cookies and same-origin API checks correct when HTTPS is terminated by the proxy. If pyTorrent is mounted under a sub-path, also forward `X-Forwarded-Prefix`.
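For reference, a hypothetical nginx site along these lines would forward the headers listed above; the domain, TLS setup and upstream port are placeholders, not part of the project:

```txt
# Hypothetical nginx server block; adjust server_name, certificates and upstream port.
server {
    listen 443 ssl;
    server_name your-domain.com;

    location / {
        proxy_pass http://127.0.0.1:8090;
        proxy_http_version 1.1;
        # WebSocket upgrade for Flask-SocketIO
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Headers consumed by ProxyFix
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}
```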
## SCGI profile
Example:
```txt
scgi://127.0.0.1:5000/RPC2
```
On the rTorrent side:
```txt
network.scgi.open_port = 127.0.0.1:5000
```
@@ -70,22 +90,39 @@ network.scgi.open_port = 127.0.0.1:5000
## GeoIP
The installer downloads GeoLite2-City once to:
```txt
data/GeoLite2-City.mmdb
```
Manual download:
```bash
./scripts/download_geoip.sh
```
The script uses `https://git.io/GeoLite2-City.mmdb` as the primary source and `https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-City.mmdb` as fallback. The `data` directory is set to `755`, and the database file is set to `644`.
## API docs
OpenAPI documentation is available at `/docs`. `/api/profiles` supports `max_parallel_jobs` with default value `5` and `is_remote`; `PUT /api/profiles/{profile_id}` edits an existing profile. `/api/preferences` supports fields including `theme`, `bootstrap_theme`, `font_family`, `table_columns_json`, `peers_refresh_seconds` and `port_check_enabled`. `/api/port-check` returns port status with `checked_at`; for remote profiles the public IP is read through rTorrent with fallbacks when supported. `/api/system/status` returns `usage_available=false` for remote profiles and does not read local CPU/RAM.
`/api/openapi.json` includes reusable schemas for main API responses, including `TorrentListResponse`, `TorrentSummary`, `TorrentFilterSummary`, `CleanupSummary` and `AppStatus`. `GET /api/torrents` documents the `summary` field used by sidebar filters.
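The mirror-fallback behaviour of the script can be sketched as a small Python helper; `fetch` and `download_with_fallback` are illustrative names, not functions from the repository:

```python
def download_with_fallback(urls, fetch):
    """Try each URL in order and return the first successful payload."""
    last_error = None
    for url in urls:
        try:
            return fetch(url)
        except Exception as exc:  # a failed download falls through to the next mirror
            last_error = exc
    raise RuntimeError(f"all sources failed: {last_error}")

SOURCES = [
    "https://git.io/GeoLite2-City.mmdb",
    "https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-City.mmdb",
]
```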
## Admin CLI
Reset an existing user's password:
```bash
. venv/bin/activate
python -m pytorrent.cli reset-password admin new_password
```
Without the password argument, the CLI asks for it interactively:
```bash
python -m pytorrent.cli reset-password admin
```
The command uses the same database as the app and respects `PYTORRENT_DB_PATH` from `.env`. The reset changes only the password hash and leaves role and permissions unchanged.


@@ -46,7 +46,7 @@ def make_zip(repo_path: Path, output_zip: Path) -> None:
zf.write(abs_path, arcname=rel_path)
print(f"Created archive: {output_zip}")
print(f"Added files: {len(files)}")
def main():
@@ -60,7 +60,7 @@ def main():
try:
run_git_command(["rev-parse", "--show-toplevel"], repo_path)
except subprocess.CalledProcessError:
print("Error: this directory is not a Git repository.", file=sys.stderr)
sys.exit(1)
make_zip(repo_path, output_zip)


@@ -3,17 +3,42 @@ from __future__ import annotations
from pathlib import Path
from flask import Flask, request, url_for
from flask_socketio import SocketIO
from werkzeug.middleware.proxy_fix import ProxyFix
from .config import (
SECRET_KEY,
SESSION_COOKIE_SECURE,
PROXY_FIX_ENABLE,
PROXY_FIX_X_FOR,
PROXY_FIX_X_PROTO,
PROXY_FIX_X_HOST,
PROXY_FIX_X_PORT,
PROXY_FIX_X_PREFIX,
SOCKETIO_CORS_ALLOWED_ORIGINS,
)
from .db import init_db
from .utils import file_md5
socketio = SocketIO(cors_allowed_origins=SOCKETIO_CORS_ALLOWED_ORIGINS, ping_timeout=30, async_mode="threading")
_static_md5_cache: dict[tuple, str] = {}
def create_app() -> Flask:
app = Flask(__name__)
if PROXY_FIX_ENABLE:
app.wsgi_app = ProxyFix(
app.wsgi_app,
x_for=PROXY_FIX_X_FOR,
x_proto=PROXY_FIX_X_PROTO,
x_host=PROXY_FIX_X_HOST,
x_port=PROXY_FIX_X_PORT,
x_prefix=PROXY_FIX_X_PREFIX,
)
app.secret_key = SECRET_KEY
app.config.update(
SESSION_COOKIE_HTTPONLY=True,
SESSION_COOKIE_SAMESITE="Lax",
SESSION_COOKIE_SECURE=SESSION_COOKIE_SECURE,
)
@app.context_processor
def static_helpers():
@@ -49,6 +74,8 @@ def create_app() -> Flask:
app.register_blueprint(main_bp)
app.register_blueprint(api_bp)
init_db()
from .services.auth import install_guards
install_guards(app)
socketio.init_app(app)
from .services.workers import set_socketio

pytorrent/cli.py

@@ -0,0 +1,79 @@
from __future__ import annotations
import argparse
import getpass
import sys
from .db import connect, init_db, utcnow
from .services.auth import password_hash
def reset_password(username: str, password: str) -> bool:
"""Note: Reset the selected user password hash without changing role or permissions."""
username = (username or "").strip()
if not username:
raise ValueError("Username is required")
if password is None or password == "":
raise ValueError("Password cannot be empty")
init_db()
now = utcnow()
hashed = password_hash(password)
with connect() as conn:
row = conn.execute("SELECT id FROM users WHERE username=?", (username,)).fetchone()
if not row:
return False
conn.execute(
"UPDATE users SET password_hash=?, updated_at=? WHERE username=?",
(hashed, now, username),
)
return True
def _password_from_args(args: argparse.Namespace) -> str:
"""Note: Allow the password to be passed as an argument or entered securely in interactive mode."""
if args.password is not None:
return args.password
first = getpass.getpass("New password: ")
second = getpass.getpass("Repeat password: ")
if first != second:
raise ValueError("Passwords do not match")
return first
def build_parser() -> argparse.ArgumentParser:
"""Note: Define simple administrative commands launched with python -m pytorrent.cli."""
parser = argparse.ArgumentParser(description="pyTorrent CLI")
sub = parser.add_subparsers(dest="command", required=True)
reset = sub.add_parser("reset-password", help="Reset password for an existing user")
reset.add_argument("username", help="User login")
reset.add_argument("password", nargs="?", help="New password; omit to type it interactively")
reset.set_defaults(func=_cmd_reset_password)
return parser
def _cmd_reset_password(args: argparse.Namespace) -> int:
"""Note: Run the password reset and return a readable terminal status."""
password = _password_from_args(args)
if reset_password(args.username, password):
print(f"Password reset for user: {args.username}")
return 0
print(f"User not found: {args.username}", file=sys.stderr)
return 1
def main(argv: list[str] | None = None) -> int:
"""Note: Main CLI entrypoint with error handling and without starting the web app."""
parser = build_parser()
args = parser.parse_args(argv)
try:
return int(args.func(args) or 0)
except Exception as exc:
print(f"Error: {exc}", file=sys.stderr)
return 1
if __name__ == "__main__":
raise SystemExit(main())


@@ -1,6 +1,7 @@
from __future__ import annotations
import os
import secrets
from pathlib import Path
from dotenv import load_dotenv
@@ -15,7 +16,8 @@ def _env_bool(name: str, default: bool = False) -> bool:
return value.strip().lower() in {"1", "true", "yes", "on"}
_SECRET_KEY_ENV = os.getenv("PYTORRENT_SECRET_KEY")
SECRET_KEY = _SECRET_KEY_ENV or "dev-change-me"
DB_PATH = Path(os.getenv("PYTORRENT_DB_PATH", str(BASE_DIR / "data" / "pytorrent.sqlite3")))
if not DB_PATH.is_absolute():
DB_PATH = BASE_DIR / DB_PATH
@@ -23,6 +25,18 @@ if not DB_PATH.is_absolute():
HOST = os.getenv("PYTORRENT_HOST", "0.0.0.0")
PORT = int(os.getenv("PYTORRENT_PORT", "8090"))
DEBUG = _env_bool("PYTORRENT_DEBUG", False)
# Note: Optional authentication remains disabled unless explicitly enabled in .env.
AUTH_ENABLE = _env_bool("PYTORRENT_AUTH_ENABLE", False)
if AUTH_ENABLE and (not _SECRET_KEY_ENV or SECRET_KEY == "dev-change-me"):
# Note: Auth mode cannot use Flask's development secret; persist a local random session key instead.
_secret_file = BASE_DIR / "data" / ".session_secret"
_secret_file.parent.mkdir(parents=True, exist_ok=True)
if _secret_file.exists():
SECRET_KEY = _secret_file.read_text(encoding="utf-8").strip()
if not SECRET_KEY or SECRET_KEY == "dev-change-me":
SECRET_KEY = secrets.token_urlsafe(48)
_secret_file.write_text(SECRET_KEY, encoding="utf-8")
SESSION_COOKIE_SECURE = _env_bool("PYTORRENT_SESSION_COOKIE_SECURE", False)
# Note: Keep Werkzeug opt-in only for explicit local/dev use, never by default in services.
ALLOW_UNSAFE_WERKZEUG = _env_bool("PYTORRENT_ALLOW_UNSAFE_WERKZEUG", DEBUG)
POLL_INTERVAL = float(os.getenv("PYTORRENT_POLL_INTERVAL", "1.0"))
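The persisted-secret behaviour above reduces to a load-or-create helper; this standalone sketch mirrors the logic under the assumption that only the file path differs from the module's inline version:

```python
import secrets
from pathlib import Path

def load_or_create_secret(path: Path) -> str:
    """Return a stable random session key, creating the file on first use."""
    path.parent.mkdir(parents=True, exist_ok=True)
    if path.exists():
        value = path.read_text(encoding="utf-8").strip()
        if value and value != "dev-change-me":
            return value
    value = secrets.token_urlsafe(48)
    path.write_text(value, encoding="utf-8")
    return value
```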
@@ -39,6 +53,17 @@ def _env_int(name: str, default: int, minimum: int = 0) -> int:
return default
PROXY_FIX_ENABLE = _env_bool("PYTORRENT_PROXY_FIX_ENABLE", False)
PROXY_FIX_X_FOR = _env_int("PYTORRENT_PROXY_FIX_X_FOR", 1, 0)
PROXY_FIX_X_PROTO = _env_int("PYTORRENT_PROXY_FIX_X_PROTO", 1, 0)
PROXY_FIX_X_HOST = _env_int("PYTORRENT_PROXY_FIX_X_HOST", 1, 0)
PROXY_FIX_X_PORT = _env_int("PYTORRENT_PROXY_FIX_X_PORT", 1, 0)
PROXY_FIX_X_PREFIX = _env_int("PYTORRENT_PROXY_FIX_X_PREFIX", 1, 0)
_SOCKETIO_CORS = os.getenv("PYTORRENT_SOCKETIO_CORS_ALLOWED_ORIGINS", "").strip()
SOCKETIO_CORS_ALLOWED_ORIGINS = None if not _SOCKETIO_CORS else [item.strip() for item in _SOCKETIO_CORS.split(",") if item.strip()]
TRAFFIC_HISTORY_RETENTION_DAYS = _env_int("PYTORRENT_TRAFFIC_HISTORY_RETENTION_DAYS", 90, 1)
JOBS_RETENTION_DAYS = _env_int("PYTORRENT_JOBS_RETENTION_DAYS", 30, 1)
SMART_QUEUE_HISTORY_RETENTION_DAYS = _env_int("PYTORRENT_SMART_QUEUE_HISTORY_RETENTION_DAYS", 30, 1)


@@ -10,7 +10,20 @@ CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username TEXT UNIQUE NOT NULL,
password_hash TEXT,
role TEXT DEFAULT 'user',
is_active INTEGER DEFAULT 1,
created_at TEXT NOT NULL,
updated_at TEXT
);
CREATE TABLE IF NOT EXISTS user_profile_permissions (
user_id INTEGER NOT NULL,
profile_id INTEGER NOT NULL DEFAULT 0,
access_level TEXT NOT NULL DEFAULT 'ro',
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL,
PRIMARY KEY(user_id, profile_id),
FOREIGN KEY(user_id) REFERENCES users(id)
);
CREATE TABLE IF NOT EXISTS user_preferences (
@@ -246,6 +259,9 @@ CREATE TABLE IF NOT EXISTS torrent_stats_cache (
""" """
MIGRATIONS = [ MIGRATIONS = [
"ALTER TABLE users ADD COLUMN role TEXT DEFAULT 'user'",
"ALTER TABLE users ADD COLUMN is_active INTEGER DEFAULT 1",
"ALTER TABLE users ADD COLUMN updated_at TEXT",
"ALTER TABLE user_preferences ADD COLUMN mobile_mode INTEGER DEFAULT 0", "ALTER TABLE user_preferences ADD COLUMN mobile_mode INTEGER DEFAULT 0",
"ALTER TABLE user_preferences ADD COLUMN peers_refresh_seconds INTEGER DEFAULT 0", "ALTER TABLE user_preferences ADD COLUMN peers_refresh_seconds INTEGER DEFAULT 0",
"ALTER TABLE user_preferences ADD COLUMN port_check_enabled INTEGER DEFAULT 0", "ALTER TABLE user_preferences ADD COLUMN port_check_enabled INTEGER DEFAULT 0",
@@ -299,15 +315,21 @@ def init_db():
pass
now = utcnow()
conn.execute(
"INSERT OR IGNORE INTO users(id, username, password_hash, role, is_active, created_at, updated_at) VALUES(1, 'default', NULL, 'admin', 1, ?, ?)",
(now, now),
)
conn.execute("UPDATE users SET role=COALESCE(role, 'admin'), is_active=COALESCE(is_active, 1), updated_at=COALESCE(updated_at, ?) WHERE id=1", (now,))
pref = conn.execute("SELECT id FROM user_preferences WHERE user_id=1").fetchone()
if not pref:
conn.execute(
"INSERT INTO user_preferences(user_id, theme, created_at, updated_at) VALUES(1, 'dark', ?, ?)",
(now, now),
)
try:
from .services.auth import ensure_admin_user
ensure_admin_user()
except Exception:
pass
def default_user_id() -> int:


@@ -13,9 +13,10 @@ import socket
import json
import psutil
import xml.etree.ElementTree as ET
from flask import Blueprint, jsonify, request, abort
from ..config import DB_PATH, JOBS_RETENTION_DAYS, SMART_QUEUE_HISTORY_RETENTION_DAYS, WORKERS
from ..db import connect, utcnow
from ..services.auth import current_user_id as default_user_id, current_user, list_users, save_user, delete_user, login_user, logout_user, enabled as auth_enabled, require_profile_write
from ..services import preferences, rtorrent, torrent_stats
from ..services.torrent_cache import torrent_cache
from ..services.torrent_summary import cached_summary
@@ -27,6 +28,77 @@ bp = Blueprint("api", __name__, url_prefix="/api")
MOVE_BULK_MAX_HASHES = 100
@bp.post("/auth/login")
def auth_login():
# Note: Auth API is hidden when optional authentication is disabled.
if not auth_enabled():
abort(404)
data = request.get_json(silent=True) or {}
user = login_user(str(data.get("username") or ""), str(data.get("password") or ""))
if not user:
return jsonify({"ok": False, "error": "Invalid username or password"}), 401
return ok({"user": user, "auth_enabled": auth_enabled()})
@bp.get("/auth/me")
def auth_me():
if not auth_enabled():
abort(404)
return ok({"user": current_user(), "auth_enabled": auth_enabled()})
@bp.post("/auth/logout")
def auth_logout():
if not auth_enabled():
abort(404)
logout_user()
return ok()
@bp.get("/auth/users")
def auth_users_list():
if not auth_enabled():
abort(404)
return ok({"users": list_users()})
@bp.post("/auth/users")
def auth_users_create():
if not auth_enabled():
abort(404)
try:
return ok({"user": save_user(request.get_json(silent=True) or {})})
except Exception as exc:
return jsonify({"ok": False, "error": str(exc)}), 400
@bp.put("/auth/users/<int:user_id>")
def auth_users_update(user_id: int):
if not auth_enabled():
abort(404)
try:
return ok({"user": save_user(request.get_json(silent=True) or {}, user_id)})
except Exception as exc:
return jsonify({"ok": False, "error": str(exc)}), 400
@bp.delete("/auth/users/<int:user_id>")
def auth_users_delete(user_id: int):
if not auth_enabled():
abort(404)
try:
delete_user(user_id)
return ok({"users": list_users()})
except Exception as exc:
return jsonify({"ok": False, "error": str(exc)}), 400
def _job_profile_id(job_id: str) -> int | None:
with connect() as conn:
row = conn.execute("SELECT profile_id FROM jobs WHERE id=?", (job_id,)).fetchone()
return int(row.get("profile_id") or 0) if row else None
def ok(payload=None):
data = {"ok": True}
if payload:
@@ -312,7 +384,7 @@ def _chunk_hashes(hashes: list[str], size: int = MOVE_BULK_MAX_HASHES) -> list[l
def enqueue_bulk_parts(profile: dict, action_name: str, data: dict) -> list[dict]:
# Note: One shared helper splits large move/remove operations into small ordered parts without changing other actions.
base_payload = enrich_bulk_payload(profile, action_name, data)
hashes = base_payload.get("hashes") or []
chunks = _chunk_hashes(hashes)
@@ -342,12 +414,12 @@ def enqueue_bulk_parts(profile: dict, action_name: str, data: dict) -> list[dict
def enqueue_move_bulk_parts(profile: dict, data: dict) -> list[dict]:
# Note: Keep the old public move helper while using the same partitioning logic.
return enqueue_bulk_parts(profile, "move", data)
def enqueue_remove_bulk_parts(profile: dict, data: dict) -> list[dict]:
# Note: Remove/rm uses the same partitioning as move, which lowers rTorrent load.
return enqueue_bulk_parts(profile, "remove", data)
@@ -413,6 +485,8 @@ def torrents():
@bp.get("/torrent-stats") @bp.get("/torrent-stats")
def torrent_stats_get(): def torrent_stats_get():
profile = preferences.active_profile() profile = preferences.active_profile()
if not profile:
return ok({"stats": {}, "error": "No profile"})
force = str(request.args.get("force") or "").lower() in {"1", "true", "yes"} force = str(request.args.get("force") or "").lower() in {"1", "true", "yes"}
try: try:
# Note: Heavy file metadata is served from a 15-minute DB cache unless the user explicitly refreshes it. # Note: Heavy file metadata is served from a 15-minute DB cache unless the user explicitly refreshes it.
@@ -640,7 +714,7 @@ def jobs_list():
@bp.post("/jobs/clear") @bp.post("/jobs/clear")
def jobs_clear(): def jobs_clear():
if str(request.args.get("force") or "").lower() in {"1", "true", "yes"}: if str(request.args.get("force") or "").lower() in {"1", "true", "yes"}:
# Awaryjne czyszczenie: endpoint zachowuje standardowe działanie, a force=1 uruchamia tryb ratunkowy. # Note: Emergency cleanup keeps the endpoint behavior unchanged, while force=1 enables rescue mode.
deleted = emergency_clear_jobs() deleted = emergency_clear_jobs()
return ok({"deleted": deleted, "emergency": True}) return ok({"deleted": deleted, "emergency": True})
deleted = clear_jobs() deleted = clear_jobs()
@@ -685,6 +759,7 @@ def cleanup_all():
@bp.post("/jobs/<job_id>/cancel") @bp.post("/jobs/<job_id>/cancel")
def jobs_cancel(job_id: str): def jobs_cancel(job_id: str):
require_profile_write(_job_profile_id(job_id))
if not cancel_job(job_id): if not cancel_job(job_id):
return jsonify({"ok": False, "error": "Only unfinished jobs can be cancelled"}), 400 return jsonify({"ok": False, "error": "Only unfinished jobs can be cancelled"}), 400
return ok({"emergency": True}) return ok({"emergency": True})
@@ -692,6 +767,7 @@ def jobs_cancel(job_id: str):
@bp.post("/jobs/<job_id>/retry") @bp.post("/jobs/<job_id>/retry")
def jobs_retry(job_id: str): def jobs_retry(job_id: str):
require_profile_write(_job_profile_id(job_id))
if not retry_job(job_id): if not retry_job(job_id):
return jsonify({"ok": False, "error": "Only failed or cancelled jobs can be retried"}), 400 return jsonify({"ok": False, "error": "Only failed or cancelled jobs can be retried"}), 400
return ok() return ok()
@@ -910,7 +986,7 @@ def smart_queue_check():
return ok({'result': {'ok': False, 'error': 'No profile'}})
try:
result = smart_queue.check(profile, force=True)
# Note: Manual check immediately returns a fresh snapshot so the UI shows the real Downloading count after the action.
diff = torrent_cache.refresh(profile)
rows = torrent_cache.snapshot(profile['id'])
return ok({'result': result, 'torrent_patch': {**diff, 'summary': cached_summary(profile['id'], rows, force=True)}})


@@ -1,11 +1,34 @@
from __future__ import annotations
from flask import Blueprint, render_template, jsonify, Response, request, redirect, url_for, abort
from ..services.preferences import get_preferences, list_profiles, active_profile, BOOTSTRAP_THEMES, FONT_FAMILIES, bootstrap_css_url
from ..services import auth
bp = Blueprint("main", __name__)
@bp.route("/login", methods=["GET", "POST"])
def login():
# Note: When optional authentication is disabled, /login is intentionally unavailable.
if not auth.enabled():
abort(404)
error = ""
if request.method == "POST":
user = auth.login_user(request.form.get("username", ""), request.form.get("password", ""))
if user:
return redirect(request.args.get("next") or url_for("main.index"))
error = "Invalid username or password"
return render_template("login.html", error=error)
@bp.get("/logout")
def logout():
auth.logout_user()
if not auth.enabled():
return redirect(url_for("main.index"))
return redirect(url_for("main.login"))
@bp.get("/") @bp.get("/")
def index(): def index():
prefs = get_preferences() prefs = get_preferences()
@@ -17,6 +40,8 @@ def index():
bootstrap_themes=BOOTSTRAP_THEMES,
font_families=FONT_FAMILIES,
bootstrap_css_url=bootstrap_css_url((prefs or {}).get("bootstrap_theme")),
auth_enabled=auth.enabled(),
current_user=auth.current_user(),
)
@@ -80,7 +105,13 @@ def openapi():
        "/api/traffic/history": {"get": {"summary": "Transfer history for charts", "parameters": [{"name": "range", "in": "query", "schema": {"type": "string", "enum": ["15m", "1h", "3h", "6h", "24h", "7d", "30d", "90d"]}}], "responses": {"200": {"description": "Aggregated traffic history"}}}}
    }
    paths.update({
        "/api/auth/login": {"post": {"summary": "Log in with username and password when authentication is enabled", "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"username": {"type": "string"}, "password": {"type": "string", "format": "password"}}, "required": ["username", "password"]}}}}, "responses": {"200": {"description": "Logged in"}, "401": {"description": "Invalid credentials"}, "404": {"description": "Authentication disabled"}}}},
        "/api/auth/me": {"get": {"summary": "Return current authenticated user", "responses": {"200": {"description": "Current user"}, "404": {"description": "Authentication disabled"}}}},
        "/api/auth/logout": {"post": {"summary": "Log out current user", "responses": {"200": {"description": "Logged out"}, "404": {"description": "Authentication disabled"}}}},
        "/api/auth/users": {"get": {"summary": "List users, admin only", "responses": {"200": {"description": "Users"}, "403": {"description": "Admin only"}}}, "post": {"summary": "Create user, admin only", "requestBody": {"content": {"application/json": {"schema": {"$ref": "#/components/schemas/AuthUserInput"}}}}, "responses": {"200": {"description": "User created"}, "403": {"description": "Admin only"}}}},
        "/api/auth/users/{user_id}": {"put": {"summary": "Update user, admin only", "parameters": [{"name": "user_id", "in": "path", "required": True, "schema": {"type": "integer"}}], "requestBody": {"content": {"application/json": {"schema": {"$ref": "#/components/schemas/AuthUserInput"}}}}, "responses": {"200": {"description": "User updated"}, "403": {"description": "Admin only"}}}, "delete": {"summary": "Delete user, admin only", "parameters": [{"name": "user_id", "in": "path", "required": True, "schema": {"type": "integer"}}], "responses": {"200": {"description": "User deleted"}, "403": {"description": "Admin only"}}}},
        "/api/profiles/{profile_id}": {"delete": {"summary": "Delete rTorrent profile", "parameters": [{"name": "profile_id", "in": "path", "required": True, "schema": {"type": "integer"}}], "responses": {"200": {"description": "Deleted"}}}},
        "/api/torrent-stats": {"get": {"summary": "Torrent statistics and cached file metadata", "parameters": [{"name": "force", "in": "query", "schema": {"type": "boolean", "default": False}}], "responses": {"200": {"description": "Torrent statistics"}}}},
        "/api/path/default": {"get": {"summary": "Read active rTorrent default download path", "responses": {"200": {"description": "Default path"}}}},
        "/api/torrents/{torrent_hash}/files/priority": {"post": {"summary": "Set file priorities", "parameters": [{"name": "torrent_hash", "in": "path", "required": True, "schema": {"type": "string"}}], "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"files": {"type": "array", "items": {"type": "object", "properties": {"index": {"type": "integer"}, "priority": {"type": "integer", "enum": [0, 1, 2]}}}}}}}}}, "responses": {"200": {"description": "Updated priorities"}, "207": {"description": "Partial update"}}}},
        "/api/torrents/{torrent_hash}/peers/action": {"post": {"summary": "Run peer action", "parameters": [{"name": "torrent_hash", "in": "path", "required": True, "schema": {"type": "string"}}], "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"peer_index": {"type": "integer"}, "action": {"type": "string", "enum": ["disconnect", "kick", "snub", "unsnub", "ban"]}}}}}}, "responses": {"200": {"description": "Peer action result"}}}},
@@ -98,6 +129,17 @@ def openapi():
            "properties": {"ok": {"type": "boolean"}},
            "required": ["ok"],
        },
        "AuthUserInput": {
            "type": "object",
            "properties": {
                "username": {"type": "string"},
                "password": {"type": "string", "format": "password", "description": "Optional on update"},
                "role": {"type": "string", "enum": ["admin", "user"]},
                "is_active": {"type": "boolean"},
                "permissions": {"type": "array", "items": {"type": "object", "properties": {"profile_id": {"type": "integer", "description": "0 means all profiles"}, "access_level": {"type": "string", "enum": ["ro", "full"]}}}},
            },
            "required": ["username"],
        },
        "Profile": {
            "type": "object",
            "additionalProperties": True,
@@ -278,4 +320,9 @@ def openapi():
        },
    })
    components.setdefault("securitySchemes", {})["sessionCookie"] = {"type": "apiKey", "in": "cookie", "name": "session"}
    for path, methods in paths.items():
        if path != "/api/auth/login":
            for operation in methods.values():
                operation.setdefault("security", [{"sessionCookie": []}])
    return jsonify({"openapi": "3.0.3", "info": {"title": "pyTorrent API", "version": "0.2.0"}, "paths": paths, "components": components})
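The loop above stamps a default session-cookie requirement onto every OpenAPI operation except the login path. The effect can be exercised in isolation; the two sample paths below are illustrative stand-ins, not the real spec:

```python
# Minimal sketch of applying a default security requirement to an OpenAPI
# paths object, mirroring the loop above. Sample paths are illustrative.
def apply_default_security(paths: dict, public: set[str]) -> dict:
    for path, methods in paths.items():
        if path in public:
            continue
        for operation in methods.values():
            # setdefault keeps any security requirement an operation already declares
            operation.setdefault("security", [{"sessionCookie": []}])
    return paths

paths = {
    "/api/auth/login": {"post": {"summary": "login"}},
    "/api/torrents": {"get": {"summary": "list torrents"}},
}
apply_default_security(paths, {"/api/auth/login"})
# login stays public; every other operation now requires the session cookie
```

Using `setdefault` (as the route code does) means an operation that already declares its own `security` list is left untouched.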

pytorrent/services/auth.py (new file)

@@ -0,0 +1,344 @@
from __future__ import annotations

from functools import wraps
from typing import Any
from urllib.parse import urlparse

from flask import abort, jsonify, redirect, request, session, url_for
from werkzeug.security import check_password_hash, generate_password_hash

from ..config import AUTH_ENABLE
from ..db import connect, default_user_id, utcnow

PUBLIC_ENDPOINTS = {"main.login", "main.logout", "api.auth_login", "api.auth_me", "static"}
RTORRENT_WRITE_PREFIXES = (
    "/api/torrents/",
    "/api/speed/limits",
    "/api/labels",
    "/api/ratio-groups",
    "/api/rss",
    "/api/smart-queue",
    "/api/automations",
    "/api/jobs",
)
RTORRENT_CONFIG_PREFIXES = ("/api/rtorrent-config",)
ADMIN_PREFIXES = ("/api/auth/users", "/api/profiles")
# Note: API reads that expose rTorrent/profile data must also respect profile permissions.
PROFILE_READ_PREFIXES = (
    "/api/torrents",
    "/api/torrent-stats",
    "/api/system/status",
    "/api/app/status",
    "/api/port-check",
    "/api/path",
    "/api/labels",
    "/api/ratio-groups",
    "/api/rss",
    "/api/rtorrent-config",
    "/api/smart-queue",
    "/api/traffic/history",
    "/api/automations",
)
def enabled() -> bool:
    return bool(AUTH_ENABLE)


def password_hash(password: str) -> str:
    return generate_password_hash(password or "")


def current_user_id() -> int:
    if not enabled():
        return default_user_id()
    try:
        return int(session.get("user_id") or 0)
    except Exception:
        return 0


def current_user() -> dict[str, Any] | None:
    uid = current_user_id()
    if not uid:
        return None
    with connect() as conn:
        return conn.execute(
            "SELECT id, username, role, is_active, created_at, updated_at FROM users WHERE id=?",
            (uid,),
        ).fetchone()


def is_admin(user: dict[str, Any] | None = None) -> bool:
    if not enabled():
        return True
    user = user or current_user()
    return bool(user and user.get("role") == "admin" and int(user.get("is_active") or 0))


def _permissions(user_id: int | None = None) -> list[dict[str, Any]]:
    if not enabled():
        return [{"profile_id": 0, "access_level": "full"}]
    uid = user_id or current_user_id()
    if not uid:
        return []
    with connect() as conn:
        return conn.execute(
            "SELECT profile_id, access_level FROM user_profile_permissions WHERE user_id=?",
            (uid,),
        ).fetchall()


def can_access_profile(profile_id: int | None, user_id: int | None = None) -> bool:
    if not enabled():
        return True
    uid = user_id or current_user_id()
    if not uid:
        return False
    with connect() as conn:
        user = conn.execute("SELECT role, is_active FROM users WHERE id=?", (uid,)).fetchone()
        if not user or not int(user.get("is_active") or 0):
            return False
        if user.get("role") == "admin":
            return True
        pid = int(profile_id or 0)
        row = conn.execute(
            "SELECT 1 FROM user_profile_permissions WHERE user_id=? AND (profile_id=0 OR profile_id=?) LIMIT 1",
            (uid, pid),
        ).fetchone()
        return bool(row)
def can_write_profile(profile_id: int | None, user_id: int | None = None) -> bool:
    if not enabled():
        return True
    uid = user_id or current_user_id()
    if not uid:
        return False
    with connect() as conn:
        user = conn.execute("SELECT role, is_active FROM users WHERE id=?", (uid,)).fetchone()
        if not user or not int(user.get("is_active") or 0):
            return False
        if user.get("role") == "admin":
            return True
        pid = int(profile_id or 0)
        row = conn.execute(
            "SELECT access_level FROM user_profile_permissions WHERE user_id=? AND (profile_id=0 OR profile_id=?) ORDER BY profile_id DESC LIMIT 1",
            (uid, pid),
        ).fetchone()
        return bool(row and row.get("access_level") == "full")
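The `ORDER BY profile_id DESC LIMIT 1` lookup gives a per-profile row precedence over the `profile_id=0` wildcard, so a read-only grant on one profile can carve an exception out of a "full everywhere" wildcard. A database-free sketch of that precedence (sample rows and IDs are made up):

```python
# Pure-Python sketch of the write-permission lookup above: among matching
# rows (exact profile or the 0 wildcard), the highest profile_id wins,
# which lets a per-profile grant override the wildcard.
def can_write(rows: list[dict], profile_id: int) -> bool:
    matches = [r for r in rows if r["profile_id"] in (0, profile_id)]
    if not matches:
        return False
    best = max(matches, key=lambda r: r["profile_id"])
    return best["access_level"] == "full"

rows = [
    {"profile_id": 0, "access_level": "full"},  # wildcard: write everywhere...
    {"profile_id": 3, "access_level": "ro"},    # ...except profile 3, read-only
]
# can_write(rows, 1) -> True, can_write(rows, 3) -> False
```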
def visible_profile_ids(user_id: int | None = None) -> set[int] | None:
    if not enabled():
        return None
    uid = user_id or current_user_id()
    if not uid:
        return set()
    with connect() as conn:
        user = conn.execute("SELECT role, is_active FROM users WHERE id=?", (uid,)).fetchone()
        if not user or not int(user.get("is_active") or 0):
            return set()
        if user.get("role") == "admin":
            return None
        rows = conn.execute("SELECT profile_id FROM user_profile_permissions WHERE user_id=?", (uid,)).fetchall()
        if any(int(row.get("profile_id") or 0) == 0 for row in rows):
            return None
        return {int(row.get("profile_id") or 0) for row in rows}


def same_origin_request() -> bool:
    """Return False only when an unsafe request clearly comes from another origin."""
    origin = request.headers.get("Origin") or request.headers.get("Referer")
    if not origin:
        return True
    try:
        parsed = urlparse(origin)
        return parsed.scheme == request.scheme and parsed.netloc == request.host
    except Exception:
        return False
def writable_profile_ids(user_id: int | None = None) -> set[int] | None:
    if not enabled():
        return None
    uid = user_id or current_user_id()
    if not uid:
        return set()
    with connect() as conn:
        user = conn.execute("SELECT role, is_active FROM users WHERE id=?", (uid,)).fetchone()
        if not user or not int(user.get("is_active") or 0):
            return set()
        if user.get("role") == "admin":
            return None
        rows = conn.execute("SELECT profile_id FROM user_profile_permissions WHERE user_id=? AND access_level='full'", (uid,)).fetchall()
        if any(int(row.get("profile_id") or 0) == 0 for row in rows):
            return None
        return {int(row.get("profile_id") or 0) for row in rows}
def require_admin() -> None:
    if enabled() and not is_admin():
        abort(403)


def require_profile_read(profile_id: int | None) -> None:
    if enabled() and not can_access_profile(profile_id):
        abort(403)


def require_profile_write(profile_id: int | None) -> None:
    if enabled() and not can_write_profile(profile_id):
        abort(403)


def login_user(username: str, password: str) -> dict[str, Any] | None:
    if not enabled():
        return {"id": default_user_id(), "username": "default", "role": "admin", "is_active": 1}
    with connect() as conn:
        user = conn.execute("SELECT * FROM users WHERE username=?", (username.strip(),)).fetchone()
    if not user or not int(user.get("is_active") or 0):
        return None
    if not user.get("password_hash") or not check_password_hash(user.get("password_hash"), password or ""):
        return None
    session.clear()
    session["user_id"] = int(user["id"])
    session["username"] = user["username"]
    session["role"] = user.get("role") or "user"
    return current_user()


def logout_user() -> None:
    session.clear()


def ensure_admin_user() -> None:
    if not enabled():
        return
    now = utcnow()
    with connect() as conn:
        row = conn.execute("SELECT id FROM users WHERE username='admin'").fetchone()
        if not row:
            conn.execute(
                "INSERT INTO users(username,password_hash,role,is_active,created_at,updated_at) VALUES(?,?,?,?,?,?)",
                ("admin", password_hash("admin"), "admin", 1, now, now),
            )
        else:
            conn.execute("UPDATE users SET role='admin', is_active=1, updated_at=? WHERE username='admin'", (now,))
def list_users() -> list[dict[str, Any]]:
    require_admin()
    with connect() as conn:
        users = conn.execute(
            "SELECT id, username, role, is_active, created_at, updated_at FROM users ORDER BY username COLLATE NOCASE"
        ).fetchall()
        perms = conn.execute(
            "SELECT user_id, profile_id, access_level FROM user_profile_permissions ORDER BY user_id, profile_id"
        ).fetchall()
    by_user: dict[int, list[dict[str, Any]]] = {}
    for perm in perms:
        by_user.setdefault(int(perm["user_id"]), []).append({
            "profile_id": int(perm.get("profile_id") or 0),
            "access_level": perm.get("access_level") or "ro",
        })
    for user in users:
        user["permissions"] = by_user.get(int(user["id"]), [])
    return users


def save_user(data: dict[str, Any], user_id: int | None = None) -> dict[str, Any]:
    require_admin()
    now = utcnow()
    username = str(data.get("username") or "").strip()
    role = "admin" if data.get("role") == "admin" else "user"
    is_active = 1 if data.get("is_active", True) else 0
    if not username:
        raise ValueError("Username is required")
    with connect() as conn:
        if user_id:
            row = conn.execute("SELECT id FROM users WHERE id=?", (user_id,)).fetchone()
            if not row:
                raise ValueError("User does not exist")
            conn.execute(
                "UPDATE users SET username=?, role=?, is_active=?, updated_at=? WHERE id=?",
                (username, role, is_active, now, user_id),
            )
        else:
            cur = conn.execute(
                "INSERT INTO users(username,password_hash,role,is_active,created_at,updated_at) VALUES(?,?,?,?,?,?)",
                (username, password_hash(str(data.get("password") or username)), role, is_active, now, now),
            )
            user_id = int(cur.lastrowid)
        if data.get("password"):
            conn.execute("UPDATE users SET password_hash=?, updated_at=? WHERE id=?", (password_hash(str(data.get("password"))), now, user_id))
        if role != "admin":
            conn.execute("DELETE FROM user_profile_permissions WHERE user_id=?", (user_id,))
            for item in data.get("permissions") or []:
                profile_id = int(item.get("profile_id") or 0)
                access = "full" if item.get("access_level") == "full" else "ro"
                conn.execute(
                    "INSERT OR REPLACE INTO user_profile_permissions(user_id,profile_id,access_level,created_at,updated_at) VALUES(?,?,?,?,?)",
                    (user_id, profile_id, access, now, now),
                )
        else:
            conn.execute("DELETE FROM user_profile_permissions WHERE user_id=?", (user_id,))
        return conn.execute("SELECT id, username, role, is_active, created_at, updated_at FROM users WHERE id=?", (user_id,)).fetchone()
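The `INSERT OR REPLACE` upsert for permissions only deduplicates if the table has a uniqueness constraint over `(user_id, profile_id)`. An in-memory stand-in shows the effect; the trimmed schema below is an assumption for illustration, not the real migration:

```python
import sqlite3

# Trimmed stand-in for user_profile_permissions (timestamps omitted),
# exercising the INSERT OR REPLACE upsert used above: re-saving a user's
# grant for the same profile replaces the old access level instead of
# accumulating duplicate rows.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE user_profile_permissions("
    "user_id INTEGER, profile_id INTEGER, access_level TEXT, "
    "PRIMARY KEY(user_id, profile_id))"
)
conn.execute("INSERT OR REPLACE INTO user_profile_permissions VALUES(?,?,?)", (1, 3, "ro"))
conn.execute("INSERT OR REPLACE INTO user_profile_permissions VALUES(?,?,?)", (1, 3, "full"))
rows = conn.execute("SELECT access_level FROM user_profile_permissions WHERE user_id=1").fetchall()
# rows == [("full",)] — one row, latest access level wins
```

Without the primary key, `INSERT OR REPLACE` degrades to a plain insert, which is why `save_user` also deletes the user's old rows before re-inserting.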
def delete_user(user_id: int) -> None:
    require_admin()
    if int(user_id) == current_user_id():
        raise ValueError("Cannot delete current user")
    with connect() as conn:
        conn.execute("DELETE FROM user_profile_permissions WHERE user_id=?", (user_id,))
        conn.execute("DELETE FROM users WHERE id=? AND username <> 'admin'", (user_id,))


def install_guards(app) -> None:
    @app.before_request
    def _auth_guard():
        if not enabled():
            return None
        endpoint = request.endpoint or ""
        if endpoint in PUBLIC_ENDPOINTS or endpoint.startswith("static"):
            return None
        if not current_user_id():
            if request.path.startswith("/api/"):
                return jsonify({"ok": False, "error": "Authentication required"}), 401
            return redirect(url_for("main.login", next=request.full_path if request.query_string else request.path))
        user = current_user()
        if not user or not int(user.get("is_active") or 0):
            logout_user()
            if request.path.startswith("/api/"):
                return jsonify({"ok": False, "error": "Authentication required"}), 401
            return redirect(url_for("main.login"))
        if request.path.startswith("/api/auth/users") and not is_admin(user):
            return jsonify({"ok": False, "error": "Admin only"}), 403
        if request.path.startswith(PROFILE_READ_PREFIXES):
            profile_id = _request_profile_id()
            if profile_id and not can_access_profile(profile_id):
                return jsonify({"ok": False, "error": "Profile access denied"}), 403
        if request.method not in {"GET", "HEAD", "OPTIONS"}:
            if request.path.startswith("/api/") and not same_origin_request():
                return jsonify({"ok": False, "error": "Cross-origin API request blocked"}), 403
            if request.path.startswith("/api/profiles") and not request.path.endswith("/activate") and not is_admin(user):
                return jsonify({"ok": False, "error": "Admin only"}), 403
            profile_id = _request_profile_id()
            if request.path.startswith(RTORRENT_CONFIG_PREFIXES) and not can_write_profile(profile_id):
                return jsonify({"ok": False, "error": "Read-only profile access"}), 403
            if request.path.startswith(RTORRENT_WRITE_PREFIXES) and not can_write_profile(profile_id):
                return jsonify({"ok": False, "error": "Read-only profile access"}), 403
        return None


def _request_profile_id() -> int | None:
    if request.view_args and request.view_args.get("profile_id"):
        return int(request.view_args["profile_id"])
    try:
        payload = request.get_json(silent=True) or {}
        if payload.get("profile_id"):
            return int(payload.get("profile_id"))
    except Exception:
        pass
    from . import preferences

    profile = preferences.active_profile()
    return int(profile["id"]) if profile else None


@@ -3,6 +3,7 @@ from __future__ import annotations
import json

from ..db import connect, utcnow, default_user_id
from . import auth

BOOTSTRAP_THEMES = {
    "default": "Default Bootstrap",
@@ -34,43 +35,44 @@ def bootstrap_css_url(theme: str | None) -> str:
def list_profiles(user_id: int | None = None):
    user_id = user_id or auth.current_user_id() or default_user_id()
    visible = auth.visible_profile_ids(user_id)
    with connect() as conn:
        if visible is None:
            return conn.execute(
                "SELECT * FROM rtorrent_profiles ORDER BY is_default DESC, name COLLATE NOCASE"
            ).fetchall()
        if not visible:
            return []
        placeholders = ",".join("?" for _ in visible)
        return conn.execute(
            f"SELECT * FROM rtorrent_profiles WHERE id IN ({placeholders}) ORDER BY is_default DESC, name COLLATE NOCASE",
            tuple(visible),
        ).fetchall()


def get_profile(profile_id: int, user_id: int | None = None):
    user_id = user_id or auth.current_user_id() or default_user_id()
    if not auth.can_access_profile(profile_id, user_id):
        return None
    with connect() as conn:
        return conn.execute("SELECT * FROM rtorrent_profiles WHERE id=?", (profile_id,)).fetchone()


def active_profile(user_id: int | None = None):
    user_id = user_id or auth.current_user_id() or default_user_id()
    with connect() as conn:
        pref = conn.execute("SELECT active_rtorrent_id FROM user_preferences WHERE user_id=?", (user_id,)).fetchone()
        if pref and pref.get("active_rtorrent_id") and auth.can_access_profile(int(pref["active_rtorrent_id"]), user_id):
            row = conn.execute("SELECT * FROM rtorrent_profiles WHERE id=?", (pref["active_rtorrent_id"],)).fetchone()
            if row:
                return row
    profiles = list_profiles(user_id)
    return profiles[0] if profiles else None


def save_profile(data: dict, user_id: int | None = None):
    user_id = user_id or auth.current_user_id() or default_user_id()
    now = utcnow()
    name = str(data.get("name") or "rTorrent").strip()
    scgi_url = str(data.get("scgi_url") or "").strip()
@@ -79,7 +81,7 @@ def save_profile(data: dict, user_id: int | None = None):
    is_remote = 1 if data.get("is_remote") else 0
    is_default = 1 if data.get("is_default") else 0
    if not scgi_url.startswith("scgi://"):
        raise ValueError("SCGI URL must start with scgi://")
    with connect() as conn:
        if is_default:
            conn.execute("UPDATE rtorrent_profiles SET is_default=0 WHERE user_id=?", (user_id,))
@@ -94,11 +96,11 @@ def save_profile(data: dict, user_id: int | None = None):
            "UPDATE user_preferences SET active_rtorrent_id=?, updated_at=? WHERE user_id=?",
            (profile_id, now, user_id),
        )
        return conn.execute("SELECT * FROM rtorrent_profiles WHERE id=?", (profile_id,)).fetchone()


def update_profile(profile_id: int, data: dict, user_id: int | None = None):
    user_id = user_id or auth.current_user_id() or default_user_id()
    now = utcnow()
    name = str(data.get("name") or "rTorrent").strip()
    scgi_url = str(data.get("scgi_url") or "").strip()
@@ -107,24 +109,25 @@ def update_profile(profile_id: int, data: dict, user_id: int | None = None):
    is_remote = 1 if data.get("is_remote") else 0
    is_default = 1 if data.get("is_default") else 0
    if not scgi_url.startswith("scgi://"):
        raise ValueError("SCGI URL must start with scgi://")
    with connect() as conn:
        row = conn.execute("SELECT id FROM rtorrent_profiles WHERE id=?", (profile_id,)).fetchone()
        if not row or not auth.can_write_profile(profile_id, user_id):
            raise ValueError("Profile does not exist")
        if is_default:
            conn.execute("UPDATE rtorrent_profiles SET is_default=0 WHERE user_id=?", (user_id,))
        conn.execute(
            "UPDATE rtorrent_profiles SET name=?, scgi_url=?, is_default=?, timeout_seconds=?, max_parallel_jobs=?, is_remote=?, updated_at=? WHERE id=?",
            (name, scgi_url, is_default, timeout, max_parallel, is_remote, now, profile_id),
        )
        return conn.execute("SELECT * FROM rtorrent_profiles WHERE id=?", (profile_id,)).fetchone()


def delete_profile(profile_id: int, user_id: int | None = None):
    user_id = user_id or auth.current_user_id() or default_user_id()
    auth.require_profile_write(profile_id)
    with connect() as conn:
        conn.execute("DELETE FROM rtorrent_profiles WHERE id=?", (profile_id,))
        active = active_profile(user_id)
        conn.execute(
            "UPDATE user_preferences SET active_rtorrent_id=?, updated_at=? WHERE user_id=?",
@@ -133,10 +136,10 @@ def delete_profile(profile_id: int, user_id: int | None = None):
def activate_profile(profile_id: int, user_id: int | None = None):
    user_id = user_id or auth.current_user_id() or default_user_id()
    with connect() as conn:
        row = conn.execute("SELECT id FROM rtorrent_profiles WHERE id=?", (profile_id,)).fetchone()
        if not row or not auth.can_access_profile(profile_id, user_id):
            raise ValueError("Profile does not exist")
        conn.execute(
            "UPDATE user_preferences SET active_rtorrent_id=?, updated_at=? WHERE user_id=?",
@@ -146,13 +149,18 @@ def activate_profile(profile_id: int, user_id: int | None = None):
def get_preferences(user_id: int | None = None):
    user_id = user_id or auth.current_user_id() or default_user_id()
    with connect() as conn:
        pref = conn.execute("SELECT * FROM user_preferences WHERE user_id=?", (user_id,)).fetchone()
        if not pref:
            now = utcnow()
            conn.execute("INSERT INTO user_preferences(user_id, theme, created_at, updated_at) VALUES(?, 'dark', ?, ?)", (user_id, now, now))
            pref = conn.execute("SELECT * FROM user_preferences WHERE user_id=?", (user_id,)).fetchone()
        return pref


def save_preferences(data: dict, user_id: int | None = None):
    user_id = user_id or auth.current_user_id() or default_user_id()
    allowed_theme = data.get("theme") if data.get("theme") in {"light", "dark"} else None
    bootstrap_theme = data.get("bootstrap_theme") if data.get("bootstrap_theme") in BOOTSTRAP_THEMES else None
    font_family = data.get("font_family") if data.get("font_family") in FONT_FAMILIES else None


@@ -85,7 +85,7 @@ class ScgiRtorrentClient:
def _scgi_retry_attempts() -> int:
    # Note: Short retry/backoff protects bulk operations from temporary Errno 111 during high rTorrent load.
    try:
        return max(1, min(10, int(os.environ.get("PYTORRENT_SCGI_RETRIES", "5"))))
    except Exception:
@@ -97,7 +97,7 @@ def _scgi_retry_delay(attempt: int) -> float:
def _is_transient_scgi_error(exc: Exception) -> bool:
    # Note: Retry covers common temporary SCGI/socket errors but does not hide semantic XML-RPC errors.
    if isinstance(exc, (ConnectionRefusedError, ConnectionResetError, TimeoutError, socket.timeout)):
        return True
    err_no = getattr(exc, "errno", None)
@@ -115,7 +115,7 @@ _UNSUPPORTED_EXEC_METHODS: set[str] = set()
_EXEC_TARGET_STYLE: dict[str, int] = {}


def _rt_execute_preview(method_name: str, call_args: tuple) -> str:
    # Note: The compact RPC summary removes long scripts from error messages while keeping the method and first arguments for diagnostics.
    preview = ", ".join(repr(x) for x in call_args[:3])
    if len(call_args) > 3:
        preview += ", ..."
@@ -123,7 +123,7 @@ def _rt_execute_preview(method_name: str, call_args: tuple) -> str:
def _rt_execute_target_variants(method: str, args: tuple) -> list[tuple]:
    # Note: Depending on the version, rTorrent XML-RPC either requires or rejects an empty target; cache the working variant per method.
    variants = [("", *args), args]
    preferred = _EXEC_TARGET_STYLE.get(method)
    if preferred is not None and 0 <= preferred < len(variants):
@@ -137,7 +137,7 @@ def _is_rt_method_missing(exc: Exception) -> bool:
def _rt_execute_methods(method: str) -> list[str]:
    # Note: execute2.* is tried only when the base execute.* method does not exist, to avoid false retry errors.
    methods = [method]
    if method.startswith("execute."):
        fallback = method.replace("execute.", "execute2.", 1)
@@ -239,7 +239,7 @@ def _run_remote_move(c: ScgiRtorrentClient, src: str, dst: str, poll_interval: f
    try:
        output = str(_rt_execute(c, "execute.capture", "sh", "-c", poll_script, "pytorrent-move-poll", status_path) or "").strip()
    except Exception as exc:
        # Note: During bulk moves, rTorrent may briefly fail to create the pipe for execute.capture; polling waits and retries.
        if _is_rt_timeout_error(exc) or _is_transient_scgi_error(exc):
            continue
        raise
@@ -289,7 +289,7 @@ def _safe_rm_rf_path(path: str) -> str:
def _run_remote_rm(c: ScgiRtorrentClient, path: str, poll_interval: float = 2.0) -> None:
# Note: rm -rf runs in the background on the rTorrent side, so long deletes do not hold a single SCGI connection.
token = uuid.uuid4().hex
status_path = f"/tmp/pytorrent-rm-{token}.status"
script = (
@@ -310,7 +310,7 @@ def _run_remote_rm(c: ScgiRtorrentClient, path: str, poll_interval: float = 2.0)
try:
output = str(_rt_execute(c, "execute.capture", "sh", "-c", poll_script, "pytorrent-rm-poll", status_path) or "").strip()
except Exception as exc:
# Note: Remove uses the same safe polling as move, so a temporary missing pipe does not fail the whole queue.
if _is_rt_timeout_error(exc) or _is_transient_scgi_error(exc):
continue
raise
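Both the move and remove paths above share one retry pattern: transient SCGI or pipe errors keep the polling loop alive, while any other error propagates immediately. A minimal sketch of that pattern, with hypothetical `read_status` and `is_transient` callables standing in for the real `_rt_execute` call and error classifiers:

```python
import time


def poll_status(read_status, is_transient, interval: float = 2.0, attempts: int = 5) -> str:
    """Poll a remote status source, retrying only on transient errors (sketch)."""
    for _ in range(attempts):
        try:
            out = read_status()
            if out:  # non-empty output means the background job reported its status
                return out
        except Exception as exc:
            if not is_transient(exc):
                raise  # real errors abort the poll instead of looping forever
        time.sleep(interval)  # wait before the next probe, like the real poll loop
    raise TimeoutError("status file never appeared")
```

The real code additionally parses the status file contents; this sketch only shows the retry skeleton.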
@@ -393,6 +393,21 @@ def _row_progress_complete(row: dict) -> bool:
return bool(row.get("complete")) or (size > 0 and completed >= size) or float(row.get("progress") or 0) >= 100.0
def _remove_post_check_label_if_finished(c: ScgiRtorrentClient, row: dict) -> bool:
labels = _label_names(str(row.get("label") or ""))
if POST_CHECK_DOWNLOAD_LABEL not in labels:
return False
status = str(row.get("status") or "").lower()
if not (_row_progress_complete(row) or status == "seeding"):
return False
labels = [label for label in labels if label != POST_CHECK_DOWNLOAD_LABEL]
value = _label_value(labels)
# Note: Clean the temporary label after reaching 100% or entering seeding, even when the state no longer comes directly from recheck.
c.call("d.custom1.set", str(row.get("hash") or ""), value)
row["label"] = value
return True
def apply_post_check_policy(profile: dict, rows: list[dict], previous_rows: dict[str, dict] | None = None) -> list[dict]:
"""Start complete torrents after check; pause and label incomplete ones."""
previous_rows = previous_rows or {}
@@ -401,6 +416,11 @@ def apply_post_check_policy(profile: dict, rows: list[dict], previous_rows: dict
for row in rows:
h = str(row.get("hash") or "")
prev = previous_rows.get(h) or {}
try:
if h and _remove_post_check_label_if_finished(c, row):
changes.append({"hash": h, "action": "remove_post_check_label", "complete": True})
except Exception as exc:
changes.append({"hash": h, "action": "remove_post_check_label_failed", "error": str(exc)})
was_checking = str(prev.get("status") or "") == "Checking" or int(prev.get("hashing") or 0) > 0
is_checking = str(row.get("status") or "") == "Checking" or int(row.get("hashing") or 0) > 0
if not h or not was_checking or is_checking:
@@ -408,7 +428,7 @@ def apply_post_check_policy(profile: dict, rows: list[dict], previous_rows: dict
complete = _row_progress_complete(row)
try:
if complete:
# Note: After a completed check, a complete torrent is started automatically so it can seed immediately.
c.call("d.start", h)
labels = [label for label in _label_names(str(row.get("label") or "")) if label != POST_CHECK_DOWNLOAD_LABEL]
if _label_value(labels) != str(row.get("label") or ""):
@@ -417,7 +437,7 @@ def apply_post_check_policy(profile: dict, rows: list[dict], previous_rows: dict
row.update({"state": 1, "active": 1, "paused": False, "status": "Seeding"})
changes.append({"hash": h, "action": "start", "complete": True})
else:
# Note: After check, an incomplete torrent is paused and labeled to show that it needs more downloading.
c.call("d.start", h)
c.call("d.pause", h)
labels = _label_names(str(row.get("label") or ""))
@@ -1186,7 +1206,7 @@ def _download_runtime_state(c: ScgiRtorrentClient, h: str) -> dict:
state = _int_rpc(c, 'd.state', h)
active = _int_rpc(c, 'd.is_active', h)
opened = _int_rpc(c, 'd.is_open', h)
# Note: In rTorrent, pause does not change d.state. Paused means state=1, open=1, active=0.
return {
'state': state,
'open': opened,
@@ -1205,7 +1225,7 @@ def pause_hash(c: ScgiRtorrentClient, torrent_hash: str) -> dict:
before = _download_runtime_state(c, h)
result = {'hash': h, 'before': before, 'commands': []}
try:
# Note: Smart Queue frees a slot with d.pause, not d.stop, so later d.resume behaves like ruTorrent.
c.call('d.pause', h)
result['commands'].append('d.pause')
result['after'] = _download_runtime_state(c, h)
@@ -1229,8 +1249,8 @@ def resume_paused_hash(c: ScgiRtorrentClient, torrent_hash: str) -> dict:
result.update({'ok': True, 'skipped': 'already_active', 'after': before})
return result
try:
# Note: ruTorrent unpauses with the equivalent of d.resume. Do not add d.start/d.open,
# because those commands belong to Stopped/Open state, not a clean Paused state.
c.call('d.resume', h)
result['commands'].append('d.resume')
result['after'] = _download_runtime_state(c, h)
@@ -1253,13 +1273,13 @@ def start_or_resume_hash(c: ScgiRtorrentClient, torrent_hash: str) -> dict:
return result
if before.get('paused') or (before.get('state') and not before.get('active')):
# Note: Paused rTorrent items are resumed only with d.resume; d.start is intentionally skipped here.
resumed = resume_paused_hash(c, h)
resumed['mode'] = 'resume_paused'
return resumed
try:
# Note: d.start remains only for Stopped/closed items, not for the pause-to-resume path.
c.call('d.open', h)
result['commands'].append('d.open')
except Exception as exc:
@@ -1352,15 +1372,15 @@ def action(profile: dict, torrent_hashes: list[str], name: str, payload: dict |
results.append(item)
return {"ok": True, "count": len(torrent_hashes), "move_data": move_data, "results": results}
if name == "pause":
# Note: The app pause action is now a pure d.pause so later resume works without stop/start.
results = [pause_hash(c, h) for h in torrent_hashes]
return {"ok": True, "count": len(torrent_hashes), "remove_data": False, "results": results}
if name in {"resume", "unpause"}:
# Note: Resume/Unpause uses only d.resume for Paused state.
results = [resume_paused_hash(c, h) for h in torrent_hashes]
return {"ok": True, "count": len(torrent_hashes), "remove_data": False, "results": results}
if name == "start":
# Note: Start separates Stopped from Paused; paused items go through d.resume, stopped items through d.start.
results = [start_or_resume_hash(c, h) for h in torrent_hashes]
return {"ok": True, "count": len(torrent_hashes), "remove_data": False, "results": results}
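The notes in this file encode a small state table: `d.state`, `d.is_open`, and `d.is_active` together distinguish Stopped, Paused, and Active, and Paused keeps state=1/open=1 so only the active flag tells it apart. A sketch of that classification (the `classify` helper is illustrative, not part of the codebase):

```python
def classify(state: int, opened: int, active: int) -> str:
    """Map rTorrent runtime flags to a status name (illustrative helper)."""
    if state and active:
        return "active"   # running: d.resume already took effect
    if state and opened and not active:
        return "paused"   # d.pause leaves state=1/open=1; only active drops to 0
    return "stopped"      # d.stop/d.close path; restart needs d.open + d.start
```

This is why the action handlers above route Paused through `d.resume` and reserve `d.start` for the Stopped branch.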

View File

@@ -53,7 +53,7 @@ def save_settings(profile_id: int, data: dict[str, Any], user_id: int | None = N
'stalled_seconds': max(30, int(data.get('stalled_seconds') or current.get('stalled_seconds') or 300)),
'min_speed_bytes': max(0, int(data.get('min_speed_bytes') or current.get('min_speed_bytes') or 0)),
'min_seeds': max(0, int(data.get('min_seeds') or current.get('min_seeds') or 0)),
# Note: This switch protects fully stopped torrents from automatic starts; by default Smart Queue manages only paused items.
'manage_stopped': 1 if data.get('manage_stopped', current.get('manage_stopped')) else 0,
}
now = utcnow()
@@ -169,14 +169,14 @@ def _restore_auto_label(client: Any, profile_id: int, torrent_hash: str, current
if live_label != SMART_QUEUE_LABEL:
return False
try:
# Note: Clear the Smart Queue label even when the torrent was marked earlier but no previous-label entry remains.
client.call('d.custom1.set', torrent_hash, '')
return True
except Exception:
return False
previous = row.get('previous_label') or ''
try:
# Note: On resume, Smart Queue restores the previous label only while it still sees its own technical label.
if live_label == SMART_QUEUE_LABEL or current_label is None:
client.call('d.custom1.set', torrent_hash, previous)
conn.execute('DELETE FROM smart_queue_auto_labels WHERE profile_id=? AND torrent_hash=?', (profile_id, torrent_hash))
@@ -202,15 +202,15 @@ def _call_rtorrent_setter(client: Any, method: str, value: int) -> bool:
def _ensure_rtorrent_download_cap(client: Any, max_active: int) -> dict[str, Any]:
"""Raise rTorrent download caps that can silently limit Smart Queue to one item."""
result: dict[str, Any] = {'checked': False, 'updated': False, 'items': []}
# Note: rTorrent may have separate global and per-throttle limits. When div=1,
# starts can effectively stop at one active torrent even when the target is 100.
for key in ('throttle.max_downloads.global', 'throttle.max_downloads.div'):
item: dict[str, Any] = {'key': key, 'checked': False, 'updated': False}
try:
current = int(client.call(key) or 0)
item.update({'checked': True, 'current': current, 'target': int(max_active)})
result['checked'] = True
# Note: 0 means unlimited; raise only positive limits lower than the target.
if 0 < current < max_active:
ok = _call_rtorrent_setter(client, f'{key}.set', int(max_active))
item['updated'] = ok
@@ -231,9 +231,9 @@ def _start_download(client: Any, torrent: dict[str, Any]) -> dict[str, Any]:
if not h:
return {'hash': h, 'ok': False, 'error': 'missing hash'}
if bool(torrent.get('paused')) or str(torrent.get('status') or '').lower() == 'paused' or int(torrent.get('state') or 0):
# Note: Smart Queue candidates paused with d.pause must be resumed with d.resume, without d.start/d.stop.
return rtorrent.resume_paused_hash(client, h)
# Note: Only optional manage_stopped uses the start path for fully stopped torrents.
return rtorrent.start_or_resume_hash(client, h)
@@ -277,8 +277,8 @@ def _read_live_start_state(client: Any, torrent_hash: str) -> dict[str, Any]:
result[key] = int(value or 0) if key in {'state', 'active', 'open', 'priority'} else str(value or '')
except Exception as exc:
result[f'{key}_error'] = str(exc)
# Note: Do not treat d.is_open or state=1 as resumed; Paused can also have those values.
# Smart Queue counts a start only after d.is_active=1, meaning the pause was actually removed.
result['started'] = bool(int(result.get('active') or 0))
return result
@@ -308,11 +308,11 @@ def _is_smart_queue_hold(torrent: dict[str, Any] | None, manage_stopped: bool =
return False
if str(torrent.get('label') or '') == SMART_QUEUE_LABEL:
return True
# Note: Paused in rTorrent usually has state=1 and active=0, so state=0 must not be required.
# This lets Smart Queue treat paused torrents as pending and fill the queue target later.
if bool(torrent.get('paused')):
return True
# Note: Fully stopped items are managed only when Use stopped torrents is enabled.
if not manage_stopped:
return False
return not int(torrent.get('state') or 0)
@@ -322,7 +322,7 @@ def _clear_untracked_smart_queue_label(client: Any, torrent_hash: str, current_l
if current_label != SMART_QUEUE_LABEL:
return False
try:
# Note: Clear an orphaned Smart Queue label when no previous-label entry exists in the database.
client.call('d.custom1.set', torrent_hash, '')
return True
except Exception:
@@ -359,8 +359,8 @@ def _cleanup_auto_labels(client: Any, profile_id: int, torrents: list[dict[str,
def _is_running_download_slot(t: dict[str, Any]) -> bool:
"""Return True for incomplete torrents that already occupy a Smart Queue slot."""
# Note: The Smart Queue limit means the target number of actually active slots.
# Paused can have state=1/open=1, so a slot is counted only after d.is_active=1.
if int(t.get('complete') or 0):
return False
if str(t.get('label') or '') == SMART_QUEUE_LABEL:
@@ -377,10 +377,10 @@ def _is_waiting_download_candidate(t: dict[str, Any], manage_stopped: bool) -> b
return False
if str(t.get('label') or '') == SMART_QUEUE_LABEL:
return True
# Note: Paused items are the primary source for filling the queue, regardless of manage_stopped.
if bool(t.get('paused')) or str(t.get('status') or '').lower() == 'paused':
return True
# Note: Stopped items are added only when the user enabled Use stopped torrents.
return bool(manage_stopped) and not int(t.get('state') or 0)
@@ -394,7 +394,7 @@ def check(profile: dict | None = None, user_id: int | None = None, force: bool =
if not force and not int(settings.get('enabled') or 0):
restored: list[str] = []
try:
# Note: When Smart Queue is disabled, only technical labels are cleaned up, without starting or pausing torrents.
torrents = rtorrent.list_torrents(profile)
restored = _cleanup_auto_labels(rtorrent.client_for(profile), profile_id, torrents, set(), bool(settings.get('manage_stopped')))
except Exception:
@@ -408,15 +408,15 @@ def check(profile: dict | None = None, user_id: int | None = None, force: bool =
def is_managed_hold(t: dict[str, Any]) -> bool:
return str(t.get('label') or '') == SMART_QUEUE_LABEL
# Note: Count Smart Queue slots by d.is_active because Paused can have state=1/open=1 and must not occupy the limit.
downloading = [
t for t in torrents
if _is_running_download_slot(t)
and not is_managed_hold(t)
and t.get('hash') not in excluded
]
# Note: Candidates also include regular Paused items without a label. Otherwise the queue sees only one or two items
# and cannot fill the configured target of 100.
stopped = [
t for t in torrents
if t.get('hash') not in excluded
@@ -472,8 +472,8 @@ def check(profile: dict | None = None, user_id: int | None = None, force: bool =
to_pause: list[dict[str, Any]] = pause_rank[:max(0, len(downloading) - max_active)]
pause_hashes = {str(t.get('hash') or '') for t in to_pause}
# Note: Stalled rotation runs only when the queue is full. When slots are missing, Smart Queue should
# first add missing items instead of pausing existing or incorrectly detected stalled items.
if candidates and len(downloading) >= max_active:
replaceable_stalled = [t for t in stalled if str(t.get('hash') or '') not in pause_hashes]
for t in replaceable_stalled[:max(0, len(candidates) - len(to_pause))]:
@@ -483,7 +483,7 @@ def check(profile: dict | None = None, user_id: int | None = None, force: bool =
active_after_pause = max(0, len(downloading) - len(to_pause))
available_slots = max(0, max_active - active_after_pause)
to_resume = candidates[:available_slots]
# Note: Items outside the current start batch are explicitly marked as pending Smart Queue items.
to_label_waiting = candidates[available_slots:]
c = rtorrent.client_for(profile)
@@ -517,8 +517,8 @@ def check(profile: dict | None = None, user_id: int | None = None, force: bool =
except Exception:
label_failed.append(h)
# Note: Start the whole candidate batch in one round. Remove the label after an accepted RPC,
# because rTorrent may keep some items in its own queue with active=0 despite a valid d.start/d.resume.
for t in to_resume:
h = str(t.get('hash') or '')
if not h:
@@ -533,7 +533,7 @@ def check(profile: dict | None = None, user_id: int | None = None, force: bool =
active_verified, start_no_effect = _verify_started_downloads(c, resume_requested)
for h in active_verified:
_restore_auto_label(c, profile_id, h, None)
# Note: History shows only torrents actually unpaused, not just the number of sent commands.
resumed = list(active_verified)
keep_labels = (
set(paused)
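The slot arithmetic in `check` (`to_pause`, `available_slots`, `to_resume`) condenses into one round of bookkeeping. A sketch, assuming the counts are already filtered the way the comprehensions above filter torrents; the function name is illustrative:

```python
def plan_round(active_count: int, candidate_count: int, max_active: int) -> tuple[int, int]:
    """Return (pause_n, resume_n) for one Smart Queue round (illustrative)."""
    pause_n = max(0, active_count - max_active)          # shed overflow first
    active_after_pause = active_count - pause_n
    available = max(0, max_active - active_after_pause)  # free slots after shedding
    return pause_n, min(candidate_count, available)      # fill only what fits
```

With seven active slots and a limit of five, two get paused and nothing resumes; with three active, two candidates fill the remaining slots.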

View File

@@ -171,7 +171,7 @@ def maybe_refresh(profile: dict | None, force: bool = False) -> dict[str, Any] |
return cached
def queue_refresh(socketio, profile: dict | None, force: bool = False, emit_update: bool = True, room: str | None = None) -> dict[str, Any] | None:
"""Schedule heavier statistics refresh outside the main WebSocket/system poller."""
if not profile:
return None
@@ -195,10 +195,12 @@ def queue_refresh(socketio, profile: dict | None, force: bool = False, emit_upda
# Note: This can query file metadata per torrent, so it never runs inside the fast CPU/RAM/disk poller.
stats = get(profile_snapshot, force=True)
if emit_update and stats:
payload = {"profile_id": profile_id, "stats": stats}
socketio.emit("torrent_stats_update", payload, to=room) if room else socketio.emit("torrent_stats_update", payload)
except Exception as exc:
if emit_update:
payload = {"profile_id": profile_id, "ok": False, "error": str(exc)}
socketio.emit("torrent_stats_update", payload, to=room) if room else socketio.emit("torrent_stats_update", payload)
finally:
with _BACKGROUND_LOCK:
_BACKGROUND_PROFILE_IDS.discard(profile_id)

View File

@@ -2,12 +2,31 @@ from __future__ import annotations
import threading
import psutil
from flask_socketio import emit, join_room, leave_room, disconnect
from ..config import POLL_INTERVAL
from .preferences import active_profile, get_profile
from .torrent_cache import torrent_cache
from .torrent_summary import cached_summary
from . import rtorrent, smart_queue, traffic_history, automation_rules, torrent_stats, auth
def _profile_room(profile_id: int) -> str:
return f"profile:{int(profile_id)}"
def _poller_profiles() -> list[dict]:
# Note: Background polling has no browser session, so auth-enabled mode refreshes all profiles and emits only to per-profile rooms.
if not auth.enabled():
profile = active_profile()
return [profile] if profile else []
from ..db import connect
with connect() as conn:
return conn.execute("SELECT * FROM rtorrent_profiles ORDER BY id").fetchall()
def _emit_profile(socketio, event: str, payload: dict, profile_id: int) -> None:
target = _profile_room(profile_id) if auth.enabled() else None
socketio.emit(event, payload, to=target) if target else socketio.emit(event, payload)
_started = False
_start_lock = threading.Lock()
@@ -18,14 +37,16 @@ def register_socketio_handlers(socketio):
def poller():
tick = 0
while True:
for profile in _poller_profiles():
if not profile:
continue
pid = int(profile["id"])
diff = torrent_cache.refresh(profile)
heartbeat = {"ok": bool(diff.get("ok")), "profile_id": pid, "tick": tick, "error": diff.get("error", "")}
if diff.get("ok") and (diff["added"] or diff["updated"] or diff["removed"]):
_emit_profile(socketio, "torrent_patch", {**diff, "summary": cached_summary(pid, torrent_cache.snapshot(pid), force=True)}, pid)
elif not diff.get("ok"):
_emit_profile(socketio, "rtorrent_error", diff, pid)
try:
status = rtorrent.system_status(profile)
if bool(profile.get("is_remote")):
@@ -36,36 +57,36 @@ def register_socketio_handlers(socketio):
status["ram"] = psutil.virtual_memory().percent
status["usage_source"] = "local"
status["usage_available"] = True
status["profile_id"] = pid
traffic_history.record(pid, status.get("down_rate", 0), status.get("up_rate", 0), status.get("total_down", 0), status.get("total_up", 0))
_emit_profile(socketio, "system_stats", status, pid)
heartbeat["ok"] = True
except Exception as exc:
heartbeat["ok"] = False
heartbeat["error"] = str(exc)
_emit_profile(socketio, "rtorrent_error", {"profile_id": pid, "error": str(exc)}, pid)
if tick % max(1, int(15 * 60 / POLL_INTERVAL)) == 0:
# Note: Queue heavier torrent statistics outside the fast system_stats poller.
torrent_stats.queue_refresh(socketio, profile, force=False, room=_profile_room(pid) if auth.enabled() else None)
if tick % max(1, int(30 / POLL_INTERVAL)) == 0:
try:
result = smart_queue.check(profile, force=False)
if result.get("enabled"):
_emit_profile(socketio, "smart_queue_update", result, pid)
if result.get("paused") or result.get("resumed") or result.get("resume_requested"):
# Note: After Smart Queue changes, refresh cache immediately so the Downloading list does not wait for the next poller cycle.
queue_diff = torrent_cache.refresh(profile)
if queue_diff.get("ok"):
_emit_profile(socketio, "torrent_patch", {**queue_diff, "summary": cached_summary(pid, torrent_cache.snapshot(pid), force=True)}, pid)
except Exception as exc:
_emit_profile(socketio, "smart_queue_update", {"ok": False, "error": str(exc)}, pid)
try:
auto_result = automation_rules.check(profile, force=False)
if auto_result.get("applied"):
_emit_profile(socketio, "automation_update", auto_result, pid)
except Exception as exc:
_emit_profile(socketio, "automation_update", {"ok": False, "error": str(exc)}, pid)
_emit_profile(socketio, "heartbeat", heartbeat, pid)
tick += 1
socketio.sleep(POLL_INTERVAL)
@@ -73,7 +94,7 @@ def register_socketio_handlers(socketio):
global _started global _started
with _start_lock: with _start_lock:
if not _started: if not _started:
# Note: Poller startuje przy starcie aplikacji, więc Smart Queue i automatyzacje działają bez otwartego UI. # Note: The poller starts with the app, so Smart Queue and automations work without an open UI.
socketio.start_background_task(poller) socketio.start_background_task(poller)
_started = True _started = True
@@ -82,10 +103,16 @@ def register_socketio_handlers(socketio):
@socketio.on("connect") @socketio.on("connect")
def handle_connect(): def handle_connect():
ensure_poller_started() ensure_poller_started()
if auth.enabled() and not auth.current_user_id():
# Note: Socket.IO uses the same session auth as REST API; unauthenticated clients are disconnected.
disconnect()
return False
profile = active_profile() profile = active_profile()
if profile:
join_room(_profile_room(profile["id"]))
emit("connected", {"ok": True, "profile": profile}) emit("connected", {"ok": True, "profile": profile})
if not profile: if not profile:
# Note: Fresh installs have no rTorrent yet; tell the client to show setup instead of waiting for a snapshot. # Note: Fresh installs or users without profile access get setup state, not another user's snapshot.
emit("profile_required", {"ok": True, "profiles": []}) emit("profile_required", {"ok": True, "profiles": []})
return return
rows = torrent_cache.snapshot(profile["id"]) rows = torrent_cache.snapshot(profile["id"])
@@ -93,6 +120,12 @@ def register_socketio_handlers(socketio):
@socketio.on("select_profile") @socketio.on("select_profile")
def handle_select_profile(data): def handle_select_profile(data):
if auth.enabled() and not auth.current_user_id():
disconnect()
return
old_profile = active_profile()
if old_profile:
leave_room(_profile_room(old_profile["id"]))
profile_id = int((data or {}).get("profile_id") or 0) profile_id = int((data or {}).get("profile_id") or 0)
if not profile_id: if not profile_id:
# Note: Ignore empty profile selections created before the first rTorrent profile exists. # Note: Ignore empty profile selections created before the first rTorrent profile exists.
@@ -100,8 +133,9 @@ def register_socketio_handlers(socketio):
return return
profile = get_profile(profile_id) profile = get_profile(profile_id)
if not profile: if not profile:
emit("rtorrent_error", {"error": "Profile does not exist"}) emit("rtorrent_error", {"error": "Profile access denied or profile does not exist"})
return return
join_room(_profile_room(profile_id))
diff = torrent_cache.refresh(profile) diff = torrent_cache.refresh(profile)
rows = torrent_cache.snapshot(profile_id) rows = torrent_cache.snapshot(profile_id)
emit("torrent_snapshot", {"profile_id": profile_id, "torrents": rows, "summary": cached_summary(profile_id, rows, force=True), "error": diff.get("error", "")}) emit("torrent_snapshot", {"profile_id": profile_id, "torrents": rows, "summary": cached_summary(profile_id, rows, force=True), "error": diff.get("error", "")})

View File

@@ -5,7 +5,7 @@ import threading
 import time
 import uuid
 from concurrent.futures import ThreadPoolExecutor
-from . import rtorrent
+from . import rtorrent, auth
 from .preferences import get_profile
 from ..config import WORKERS
 from ..db import connect, utcnow, default_user_id
@@ -23,7 +23,13 @@ def set_socketio(socketio):
 def _emit(name: str, payload: dict):
-    if _socketio:
-        _socketio.emit(name, payload)
+    if not _socketio:
+        return
+    profile_id = payload.get("profile_id")
+    if auth.enabled() and profile_id:
+        # Note: Job/socket events are sent only to clients joined to the affected profile room.
+        _socketio.emit(name, payload, to=f"profile:{int(profile_id)}")
+    else:
+        _socketio.emit(name, payload)
@@ -97,7 +103,7 @@ def _set_job(job_id: str, status: str, error: str = "", result: dict | None = No
 def enqueue(action_name: str, profile_id: int, payload: dict, user_id: int | None = None, max_attempts: int = 2) -> str:
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
     job_id = uuid.uuid4().hex
     now = utcnow()
     with connect() as conn:
@@ -130,7 +136,7 @@ def _run(job_id: str):
     profile = get_profile(int(job["profile_id"]), int(job["user_id"]))
     if not profile:
         _set_job(job_id, "failed", "rTorrent profile does not exist", finished=True)
-        _emit("job_update", {"id": job_id, "status": "failed", "error": "profile not found"})
+        _emit("job_update", {"id": job_id, "profile_id": job.get("profile_id"), "status": "failed", "error": "profile not found"})
         return
     profile_id = int(profile["id"])
     ordered_lock = None
@@ -150,26 +156,26 @@ def _run(job_id: str):
        with connect() as conn:
            conn.execute("UPDATE jobs SET status='running', attempts=?, started_at=COALESCE(started_at, ?), updated_at=? WHERE id=?", (attempts, utcnow(), utcnow(), job_id))
        _emit("operation_started", {"job_id": job_id, "action": job["action"], "profile_id": profile["id"], "hashes": payload.get("hashes") or [], "hash_count": len(payload.get("hashes") or []), "bulk": len(payload.get("hashes") or []) > 1})
-        _emit("job_update", {"id": job_id, "status": "running", "attempts": attempts})
+        _emit("job_update", {"id": job_id, "profile_id": profile["id"], "status": "running", "attempts": attempts})
        result = _execute(profile, job["action"], payload)
        fresh = _job_row(job_id)
-        # Awaryjne anulowanie: jeżeli użytkownik anuluje zadanie w trakcie pracy, wynik nie nadpisuje statusu cancelled.
+        # Note: Emergency cancel keeps a cancelled job from being overwritten when work finishes later.
        if fresh and fresh["status"] == "cancelled":
            return
        _set_job(job_id, "done", result=result, finished=True)
        _emit("operation_finished", {"job_id": job_id, "action": job["action"], "profile_id": profile["id"], "hashes": payload.get("hashes") or [], "hash_count": len(payload.get("hashes") or []), "bulk": len(payload.get("hashes") or []) > 1, "result": result})
-        _emit("job_update", {"id": job_id, "status": "done", "result": result})
+        _emit("job_update", {"id": job_id, "profile_id": profile["id"], "status": "done", "result": result})
    except Exception as exc:
        fresh = _job_row(job_id) or {}
        attempts = int(fresh.get("attempts") or 1)
        max_attempts = int(fresh.get("max_attempts") or 2)
-        # Awaryjne anulowanie: wyjątek z anulowanego zadania nie przywraca go do retry ani failed.
+        # Note: Emergency cancel keeps an exception from a cancelled job from moving it back to retry or failed.
        if fresh and fresh.get("status") == "cancelled":
            return
        status = "pending" if attempts < max_attempts else "failed"
        _set_job(job_id, status, str(exc), finished=(status == "failed"))
        _emit("operation_failed", {"job_id": job_id, "action": job.get("action"), "profile_id": job.get("profile_id"), "hashes": payload.get("hashes") or [], "error": str(exc)})
-        _emit("job_update", {"id": job_id, "status": status, "error": str(exc), "attempts": attempts})
+        _emit("job_update", {"id": job_id, "profile_id": job.get("profile_id"), "status": status, "error": str(exc), "attempts": attempts})
        if status == "pending":
            _executor.submit(_run, job_id)
    finally:
@@ -225,12 +231,23 @@ def _public_job(row) -> dict:
    return d
+def _job_scope_sql(writable: bool = False) -> tuple[str, tuple]:
+    visible = auth.writable_profile_ids() if writable else auth.visible_profile_ids()
+    if visible is None:
+        return "", ()
+    if not visible:
+        return " WHERE 1=0", ()
+    placeholders = ",".join("?" for _ in visible)
+    return f" WHERE profile_id IN ({placeholders})", tuple(visible)
 def list_jobs(limit: int = 200, offset: int = 0):
     limit = max(1, min(int(limit or 50), 500))
     offset = max(0, int(offset or 0))
+    where, params = _job_scope_sql()
     with connect() as conn:
-        rows = conn.execute("SELECT * FROM jobs ORDER BY created_at DESC LIMIT ? OFFSET ?", (limit, offset)).fetchall()
-        total = conn.execute("SELECT COUNT(*) AS n FROM jobs").fetchone()["n"]
+        rows = conn.execute(f"SELECT * FROM jobs{where} ORDER BY created_at DESC LIMIT ? OFFSET ?", (*params, limit, offset)).fetchall()
+        total = conn.execute(f"SELECT COUNT(*) AS n FROM jobs{where}", params).fetchone()["n"]
     return {"rows": [_public_job(r) for r in rows], "total": total, "limit": limit, "offset": offset}
@@ -238,24 +255,30 @@ def cancel_job(job_id: str) -> bool:
    row = _job_row(job_id)
    if not row or row["status"] not in {"pending", "running"}:
        return False
-    # Note: Emergency cancel ma sens tylko dla niedokonczonych zadan; failed/done zostaja tylko do retry albo czyszczenia logow.
+    # Note: Emergency cancel is useful only for unfinished jobs; failed/done entries stay available for retry or log cleanup.
    _set_job(job_id, "cancelled", finished=True)
-    _emit("job_update", {"id": job_id, "status": "cancelled"})
+    _emit("job_update", {"id": job_id, "profile_id": row.get("profile_id"), "status": "cancelled"})
    return True
 def clear_jobs() -> int:
+    where, params = _job_scope_sql(writable=True)
+    status_clause = "status NOT IN ('pending', 'running')"
+    sql = f"DELETE FROM jobs{where} AND {status_clause}" if where else f"DELETE FROM jobs WHERE {status_clause}"
     with connect() as conn:
-        cur = conn.execute("DELETE FROM jobs WHERE status NOT IN ('pending', 'running')")
+        cur = conn.execute(sql, params)
     return int(cur.rowcount or 0)
 def emergency_clear_jobs() -> int:
-    # Awaryjne czyszczenie: najpierw zamyka aktywne zadania jako cancelled, potem czyści całą listę job logów.
+    # Note: Emergency cleanup first marks active jobs as cancelled, then clears the whole job log list.
    now = utcnow()
+    where, params = _job_scope_sql(writable=True)
+    status_clause = "status IN ('pending', 'running')"
+    update_sql = f"UPDATE jobs SET status='cancelled', error='Emergency cancelled by user', finished_at=COALESCE(finished_at, ?), updated_at=?{where} AND {status_clause}" if where else "UPDATE jobs SET status='cancelled', error='Emergency cancelled by user', finished_at=COALESCE(finished_at, ?), updated_at=? WHERE status IN ('pending', 'running')"
    with connect() as conn:
-        conn.execute("UPDATE jobs SET status='cancelled', error='Emergency cancelled by user', finished_at=COALESCE(finished_at, ?), updated_at=? WHERE status IN ('pending', 'running')", (now, now))
-        cur = conn.execute("DELETE FROM jobs")
+        conn.execute(update_sql, (now, now, *params) if where else (now, now))
+        cur = conn.execute(f"DELETE FROM jobs{where}", params) if where else conn.execute("DELETE FROM jobs")
        deleted = int(cur.rowcount or 0)
    _emit("job_update", {"status": "cleared", "emergency": True})
    return deleted
@@ -267,6 +290,6 @@ def retry_job(job_id: str) -> bool:
        return False
    with connect() as conn:
        conn.execute("UPDATE jobs SET status='pending', error='', finished_at=NULL, updated_at=? WHERE id=?", (utcnow(), job_id))
-    _emit("job_update", {"id": job_id, "status": "pending"})
+    _emit("job_update", {"id": job_id, "profile_id": row.get("profile_id"), "status": "pending"})
    _executor.submit(_run, job_id)
    return True

File diff suppressed because one or more lines are too long

View File

@@ -1,4 +1,3 @@
-/* Note: CSS po zmianach jest formatowany jednolicie; nie dodano nowych zduplikowanych klas ani nadpisan selektorow. */
 :root {
   --app-font-family:
     Inter, system-ui, -apple-system, Segoe UI, Roboto, Arial, sans-serif;
@@ -224,7 +223,6 @@ body {
   overflow: auto;
   background: rgba(var(--bs-secondary-bg-rgb), 0.9);
 }
-/* Note: Sidebar filters are wider and use one structured block per class to avoid duplicate overrides. */
 .filter {
   width: 100%;
   display: grid;
@@ -660,7 +658,6 @@ body {
   }
 }
-/* Feature additions without changing the existing visual shell */
 .date-compact {
   white-space: nowrap;
 }
@@ -749,7 +746,6 @@ body {
   background: #f59e0b !important;
   color: #111 !important;
 }
-/* Note: Manual mobile mode is defined once here; media queries below only adapt breakpoints. */
 body.mobile-mode .table-wrap,
 body.mobile-mode .details {
   display: none !important;
@@ -782,7 +778,6 @@ body.mobile-mode .main-grid {
   }
 }
-/* Fixes: compact one-line progress cell and readable percent inside the bar. */
 .torrent-table td:nth-child(5) {
   min-width: 92px;
   width: 110px;
@@ -910,7 +905,6 @@ body.mobile-mode .mobile-card {
   }
 }
-/* Requested fixes: stable charts, Smart Queue exceptions, label actions, mobile readability */
 .history-grid {
   display: grid;
   grid-template-columns: 1fr;
@@ -945,12 +939,6 @@ body.mobile-mode .mobile-card {
     grid-template-columns: 1fr;
   }
 }
-.smart-actions {
-  display: flex;
-  align-items: center;
-  gap: 0.45rem;
-  flex-wrap: wrap;
-}
 .empty-mini {
   padding: 0.7rem 0.8rem;
   border: 1px dashed var(--bs-border-color);
@@ -993,7 +981,6 @@ body.mobile-mode .mobile-card {
   }
 }
-/* Requested fixes: clean progress, mobile auto list, pagers, rTorrent config, peers refresh */
 .torrent-progress {
   height: 16px;
   min-width: 92px;
@@ -1066,7 +1053,6 @@ body.mobile-mode .mobile-card {
   min-width: 96px;
 }
-/* Mobile list: force visible on narrow screens even without manual toggle. */
 @media (max-width: 900px) {
   body:not(.modal-open) .table-wrap {
     display: none !important;
@@ -1094,7 +1080,6 @@ body.mobile-mode .mobile-card {
   font-style: italic;
 }
-/* Mobile blank-view fix: sidebar disappears at 900px, so the mobile list must also be forced from 900px down. */
 @media (max-width: 900px) {
   .main-grid {
     display: grid !important;
@@ -1227,7 +1212,6 @@ body.mobile-mode .mobile-card {
   padding: 0.75rem;
   background: var(--bs-tertiary-bg);
 }
-/* Note: Bulk actions overlay the list area; base .content/.details rules keep the layout pinned. */
 #bulkBar {
   grid-row: 1;
   grid-column: 1;
@@ -1325,7 +1309,6 @@ body.mobile-mode .mobile-card {
   min-height: 160px;
 }
-/* Torrent warning and mobile controls */
 .torrent-warning td {
   background: rgba(245, 158, 11, 0.075) !important;
 }
@@ -1417,7 +1400,6 @@ body.mobile-mode #mobileList {
   }
 }
-/* rTorrent config */
 .rt-config-grid {
   display: grid;
   gap: 0.6rem;
@@ -1501,7 +1483,6 @@ body.mobile-mode #mobileList {
   font-size: 0.82rem;
 }
-/* Tracker management */
 .tracker-toolbar,
 .tracker-actions {
   display: flex;
@@ -1530,7 +1511,6 @@ body.mobile-mode #mobileList {
   word-break: break-all;
 }
-/* Cleanup and app diagnostics */
 .tool-note {
   color: var(--bs-secondary-color);
   font-size: 0.82rem;
@@ -1646,7 +1626,6 @@ body.mobile-mode #mobileList {
   white-space: nowrap;
 }
-/* Operation status, mobile progress and separated preferences */
 .torrent-operating td {
   background: rgba(13, 202, 240, 0.085) !important;
 }
@@ -1698,7 +1677,6 @@ body.mobile-mode #mobileList {
   border-left: 0.25rem solid var(--bs-primary);
 }
-/* Note: Empty first-run state is grouped separately to keep setup styles isolated and avoid duplicated table overrides. */
 .empty-state {
   display: inline-flex;
   flex-direction: column;
@@ -1719,14 +1697,12 @@ body.mobile-mode #mobileList {
   display: none !important;
 }
-/* Footer preferences */
 .footer-preferences {
   display: grid;
   grid-template-columns: repeat(auto-fit, minmax(180px, 1fr));
   gap: 0.5rem;
 }
-/* Note: Footer switch cards mirror column cards so Bootstrap form-switch margins cannot push toggles outside the field. */
 .footer-pref-card {
   display: flex;
   align-items: center;
@@ -1776,7 +1752,6 @@ body.mobile-mode #mobileList {
 }
-/* Torrent statistics */
 .torrent-stats-toolbar {
   display: flex;
   align-items: center;
@@ -1820,7 +1795,6 @@ body.mobile-mode #mobileList {
   white-space: nowrap;
 }
-/* Peer table links */
 .peer-ip {
   display: inline-flex;
   align-items: center;
@@ -1837,3 +1811,193 @@ body.mobile-mode #mobileList {
 .peer-ip-link:hover {
   color: var(--bs-primary);
 }
+.auth-page {
+  display: grid;
+  min-height: 100vh;
+  place-items: center;
+  padding: 1rem;
+  background: radial-gradient(
+    circle at 50% 35%,
+    rgba(var(--bs-secondary-bg-rgb), 0.98),
+    var(--bs-body-bg) 68%
+  );
+  color: var(--bs-body-color);
+}
+.auth-card {
+  width: min(92vw, 430px);
+}
+.auth-lock {
+  display: inline-grid;
+  width: 3rem;
+  height: 3rem;
+  margin: 1.35rem 0 1rem;
+  place-items: center;
+  border: 1px solid var(--bs-border-color);
+  border-radius: 999px;
+  background: rgba(var(--bs-tertiary-bg-rgb), 0.72);
+  color: var(--bs-primary);
+  font-size: 1.15rem;
+}
+.auth-alert {
+  margin: 1rem 0 0;
+  padding: 0.5rem 0.75rem;
+  text-align: left;
+}
+.auth-form {
+  margin-top: 1.2rem;
+  text-align: left;
+}
+.auth-form .form-label {
+  margin-bottom: 0.35rem;
+  font-size: 0.82rem;
+  font-weight: 700;
+  color: var(--bs-secondary-color);
+}
+.auth-form .form-control {
+  margin-bottom: 0.85rem;
+}
+.auth-form .btn {
+  margin-top: 0.35rem;
+}
+.user-form-grid {
+  display: grid;
+  grid-template-columns: minmax(150px, 1fr) minmax(160px, 1fr) 120px 150px 110px auto auto;
+  gap: 0.55rem;
+  align-items: center;
+}
+.smart-panel {
+  container-type: inline-size;
+}
+.smart-header {
+  display: flex;
+  align-items: flex-start;
+  justify-content: space-between;
+  gap: 1rem;
+  padding-bottom: 0.75rem;
+  border-bottom: 1px solid var(--bs-border-color);
+}
+.smart-header-actions {
+  display: flex;
+  align-items: center;
+  gap: 0.45rem;
+  flex-wrap: wrap;
+  justify-content: flex-end;
+}
+.smart-settings-list {
+  display: grid;
+  gap: 0.65rem;
+  margin-top: 0.85rem;
+}
+.smart-setting-row {
+  display: flex;
+  align-items: center;
+  justify-content: space-between;
+  gap: 1rem;
+  min-height: 52px;
+  padding: 0.6rem 0.7rem;
+  border: 1px solid var(--bs-border-color);
+  border-radius: 0.65rem;
+  background: rgba(var(--bs-secondary-bg-rgb), 0.28);
+}
+.smart-toggle-row .form-check {
+  display: flex;
+  align-items: center;
+  min-height: 0;
+  margin: 0;
+  padding-left: 2.25rem;
+}
+.smart-toggle-row .form-check-input {
+  margin-top: 0;
+}
+.smart-setting-row .form-check-label,
+.smart-input-field span {
+  font-weight: 700;
+}
+.smart-input-grid {
+  display: grid;
+  grid-template-columns: repeat(4, minmax(120px, 1fr));
+  gap: 0.65rem;
+}
+.smart-input-field {
+  display: grid;
+  gap: 0.35rem;
+  min-width: 0;
+  padding: 0.6rem 0.7rem;
+  border: 1px solid var(--bs-border-color);
+  border-radius: 0.65rem;
+  background: rgba(var(--bs-body-bg-rgb), 0.48);
+}
+.smart-input-field small {
+  color: var(--bs-secondary-color);
+  line-height: 1.2;
+}
+.smart-input-field .form-control {
+  width: 100%;
+}
+.smart-actions {
+  display: flex;
+  align-items: center;
+  gap: 0.45rem;
+  flex-wrap: wrap;
+  padding: 0.7rem;
+  border: 1px solid var(--bs-border-color);
+  border-radius: 0.65rem;
+  background: rgba(var(--bs-secondary-bg-rgb), 0.24);
+}
+@media (max-width: 992px) {
+  .user-form-grid {
+    grid-template-columns: repeat(2, minmax(0, 1fr));
+  }
+  .smart-input-grid {
+    grid-template-columns: repeat(2, minmax(0, 1fr));
+  }
+}
+@media (max-width: 576px) {
+  .user-form-grid,
+  .smart-input-grid {
+    grid-template-columns: 1fr;
+  }
+  .smart-header,
+  .smart-setting-row {
+    align-items: stretch;
+    flex-direction: column;
+  }
+  .smart-header-actions {
+    justify-content: stretch;
+  }
+  .smart-header-actions .btn {
+    flex: 1 1 auto;
+  }
+  .smart-toggle-row .form-check {
+    padding-left: 0;
+  }
+}

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,27 @@
+<!doctype html>
+<html lang="en" data-bs-theme="dark">
+<head>
+  <meta charset="utf-8">
+  <meta name="viewport" content="width=device-width, initial-scale=1">
+  <title>pyTorrent login</title>
+  <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/css/bootstrap.min.css" rel="stylesheet">
+  <link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.5.2/css/all.min.css" rel="stylesheet">
+  <link href="{{ static_url('styles.css') }}" rel="stylesheet">
+</head>
+<body class="auth-page">
+  <main class="initial-loader-card auth-card">
+    <div class="initial-loader-brand"><i class="fa-solid fa-robot"></i> pyTorrent</div>
+    <div class="auth-lock" aria-hidden="true"><i class="fa-solid fa-lock"></i></div>
+    <h1 class="initial-loader-title">Sign in</h1>
+    <p class="initial-loader-text">Authentication is enabled for this pyTorrent instance.</p>
+    {% if error %}<div class="alert alert-danger auth-alert">{{ error }}</div>{% endif %}
+    <form class="auth-form" method="post">
+      <label class="form-label" for="username">User</label>
+      <input id="username" class="form-control" name="username" autocomplete="username" autofocus>
+      <label class="form-label" for="password">Password</label>
+      <input id="password" class="form-control" name="password" type="password" autocomplete="current-password">
+      <button class="btn btn-primary w-100" type="submit"><i class="fa-solid fa-right-to-bracket"></i> Log in</button>
+    </form>
+  </main>
+</body>
+</html>