Compare commits
85 commits in range `95fb7649f2...master` (SHAs only):

af6e6f7875, f7fd0ffd05, c34a2f4330, da17facf39, c331dcf41f, c3516470fb, dd09b428fa, 559aa20553, 661ccc627d, 8a33a8af24,
f2eed0f7fe, 6a44870985, 2409d33687, 87a5697767, 3b9eaedb33, 673665d03d, 51090a0eec, 4eda325a64, 2bc24610d4, 02875fdc92,
96e17d4b63, 3eee5be37a, 607d1c59c1, f76609672d, 7bedeec39f, f245f082d5, 80cea1dbc3, d5d41bcec2, 76412284bb, cd6c4fad85,
f445d25c5d, b5f1c26a83, 3fae7550cc, 7fc505cfac, 10d25b88b1, de7f9451b1, 91dcd1c5db, 8334aa97e2, 85e1e6adcd, 7a4bda98a2,
e99d19ece0, f9280daa21, 5d7abe6e59, 440b187c39, 2691442fc1, 98f155b53a, 0730e7316c, 7c31136535, a72b6eb364, e2017b8344,
21cbd5baa6, c19ff17134, 6587e74892, 1baf4a8495, 890e3d1564, dc1cac4e6f, aea3c92830, 45cb6cbb3a, eedfce7207, fc5fedbde2,
0f6f9d740c, 0e0c3359ee, 912d89abba, 48bf3ca209, c3d12bde46, 904f36e06f, df8750bacb, b74b32dd21, 2d19481c4c, dc78f8fd38,
eb3d743500, 5874f8669d, 08772ddda5, 3c14a6f510, 0b580f590e, fa0d2f13fe, 4b236c21f8, 0dcdf0e22b, 879c60d563, d5b7d97528,
1ff1525f0b, d55533d78a, e9940bf16c, dca7389a1a, 31bba1269d
`.env.example` — 16 changed lines

```diff
@@ -6,11 +6,23 @@ PYTORRENT_DEBUG=0
 PYTORRENT_POLL_INTERVAL=1.0
 PYTORRENT_WORKERS=16
 PYTORRENT_GEOIP_DB=data/GeoLite2-City.mmdb
+PYTORRENT_ALLOW_UNSAFE_WERKZEUG=0
+PYTORRENT_SCGI_RETRIES=8
+
+# css/js libs
+PYTORRENT_USE_OFFLINE_LIBS=false
+
+# python -m pytorrent.cli reset-password admin new_Pass
+PYTORRENT_AUTH_ENABLE=false
+
+# Reverse proxy / HTTPS
+PYTORRENT_PROXY_FIX_ENABLE=false
+PYTORRENT_SESSION_COOKIE_SECURE=false
+# PYTORRENT_SOCKETIO_CORS_ALLOWED_ORIGINS=https://your-domain.com
+
 # Retention / Smart Queue
 PYTORRENT_TRAFFIC_HISTORY_RETENTION_DAYS=90
 PYTORRENT_JOBS_RETENTION_DAYS=30
 PYTORRENT_SMART_QUEUE_HISTORY_RETENTION_DAYS=30
 PYTORRENT_LOG_RETENTION_DAYS=30
-PYTORRENT_SMART_QUEUE_LABEL="Smart Queue Paused"
+PYTORRENT_SMART_QUEUE_LABEL="Smart Queue"
```
`.gitignore` (vendored) — 3 changed lines

```diff
@@ -34,6 +34,9 @@ storage/*
 *.sqlite3-shm
 *.sqlite3
 data/*
+!data/tracker_favicons
+data/tracker_favicons/*.ico
 logs/*
 
 todo.txt
+pytorrent/static/libs/*
```
`README.md` — 120 changed lines. This range also translates the README from Polish to English; the hunks below show the English text, with `-`/`+` pairs only where the content itself changed.

````diff
@@ -1,33 +1,33 @@
 # pyTorrent
 
 Single-page web UI for rTorrent inspired by the ruTorrent workflow.
 
 ## Features
 
 - Flask + Flask-SocketIO.
 - SQLite storage for preferences, SCGI profiles, Bootstrap theme and UI font.
 - Multiple rTorrent profiles per user.
-- Profiles can be added and edited from the UI; the remote-location flag hides the app host's CPU/RAM so they are not confused with the remote rTorrent's resources; the public IP for the port check is still fetched remotely when rTorrent supports it.
+- Profiles can be added and edited from the UI; the remote profile flag hides local CPU/RAM usage to avoid confusing it with remote rTorrent host resources.
 - Active rTorrent profile switching from the UI.
 - Live torrent list over WebSocket.
 - Application-side cache with patch updates instead of full table reloads.
 - User operations executed through ThreadPoolExecutor.
 - `move` and `remove` actions are executed per profile in request order, so later deletes wait for earlier moves.
 - Job log shows a short date/time in the table and the full timestamp in the tooltip.
 - Bulk start, pause, stop, resume, recheck, remove and move.
-- Move supports `move_data=true`, which physically moves the data on the rTorrent side in the background and polls a status file, so a long `mv` does not hit the SCGI timeout; an existing target is overwritten (`force`), and timeouts from `mkdir`/start/polling do not abort the move. The torrent directory is then updated, and `recheck` is enabled by default for physical moves.
+- Move supports `move_data=true`; data is physically moved on the rTorrent side in the background and status is polled from a marker file, so long `mv` operations do not hit the SCGI timeout.
 - Multi-magnet add modal.
 - Bottom status bar with CPU, RAM, rTorrent version, speeds, limits, total DL/UP and port-check status when enabled.
 - Torrent context menu.
 - Keyboard shortcuts.
 - Details tabs: General, Files, Peers, Trackers and Log.
 - Smart Queue shows the last 10 operations by default and can expand history to 100 rows.
 - Peer GeoIP with MaxMind GeoLite2-City.mmdb and IP cache.
 - Static cache busting with MD5 and cache headers.
 - Appearance preferences: default Bootstrap or Bootswatch themes Flatly, Litera, Lumen, Minty, Sketchy, Solar, Spacelab, United and Zephyr.
 - Font preferences: default theme font, Adwaita Mono and additional matching fonts.
 
 ## Run locally
 
 ```bash
 ./install.sh
````
````diff
@@ -35,17 +35,54 @@
 python app.py
 ```
 
 Default URL: `http://127.0.0.1:8090`.
 
+## Production run
+
+Preferred mode without development Werkzeug:
+
+```bash
+. venv/bin/activate
+gunicorn --worker-class gthread --workers 1 --threads 32 --bind 0.0.0.0:8090 --access-logfile - --error-logfile - wsgi:app
+```
+
+Note: the app keeps `async_mode="threading"`, so WebSocket, `start_background_task`, operation queues and the poller run in the same model as before.
+
+Alternatives reviewed but not enabled by default:
+
+- Gunicorn with `eventlet`: works with Flask-SocketIO, but requires green threads and monkey patching, which increases regression risk for file and SCGI operations.
+- Gunicorn with `gevent`: a valid production option, but it needs extra dependencies and compatibility testing.
+- Multiple Gunicorn workers: requires Redis, RabbitMQ or Kafka as the Socket.IO message queue, so it is not a drop-in replacement.
+
+## Reverse proxy
+
+When pyTorrent is served behind a reverse proxy, enable proxy header handling only when the proxy is trusted:
+
+```env
+PYTORRENT_PROXY_FIX_ENABLE=true
+PYTORRENT_SESSION_COOKIE_SECURE=true
+```
+
+The proxy should forward at least:
+
+```txt
+X-Forwarded-For
+X-Forwarded-Proto
+X-Forwarded-Host
+X-Forwarded-Port
+```
+
+This keeps login redirects, session cookies and same-origin API checks correct when HTTPS is terminated by the proxy. If pyTorrent is mounted under a sub-path, also forward `X-Forwarded-Prefix`.
+
 ## SCGI profile
 
 Example:
 
 ```txt
 scgi://127.0.0.1:5000/RPC2
 ```
 
 On the rTorrent side:
 
 ```txt
 network.scgi.open_port = 127.0.0.1:5000
````
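For reference, the `scgi://` endpoint above speaks the SCGI protocol: each request is a netstring-framed header block followed by the raw body, with `CONTENT_LENGTH` first and `SCGI: 1` required. A minimal encoder sketch (illustrative, not code from this repository):

```python
def encode_scgi_request(headers: dict[str, str], body: bytes = b"") -> bytes:
    """Frame one SCGI request as netstring("<len>:<headers>,") + body.

    Per the SCGI spec, CONTENT_LENGTH must be the first header and the
    header SCGI must be "1"; header names and values are NUL-terminated.
    """
    pairs = {"CONTENT_LENGTH": str(len(body)), "SCGI": "1", **headers}
    payload = b"".join(k.encode() + b"\x00" + v.encode() + b"\x00" for k, v in pairs.items())
    return str(len(payload)).encode() + b":" + payload + b"," + body

# An XML-RPC call to rTorrent would be POSTed to /RPC2 over this framing.
req = encode_scgi_request({"REQUEST_METHOD": "POST", "REQUEST_URI": "/RPC2"}, b"<xml/>")
```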
````diff
@@ -53,22 +90,39 @@ network.scgi.open_port = 127.0.0.1:5000
 
 ## GeoIP
 
 The installer downloads GeoLite2-City once to:
 
 ```txt
 data/GeoLite2-City.mmdb
 ```
 
 Manual download:
 
 ```bash
 ./scripts/download_geoip.sh
 ```
 
 The script uses `https://git.io/GeoLite2-City.mmdb` as the primary source and `https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-City.mmdb` as a fallback. The `data` directory is set to `755`, and the database file is set to `644`.
 
 ## API docs
 
-OpenAPI documentation is available at `/docs`. `/api/profiles` supports `max_parallel_jobs` with default value `5` and `is_remote`; `PUT /api/profiles/{profile_id}` edits an existing profile. `/api/preferences` supports fields including `theme`, `bootstrap_theme`, `font_family`, `table_columns_json`, `peers_refresh_seconds` and `port_check_enabled`. `/api/port-check` returns port status with `checked_at`; for a remote profile the public IP is fetched through rTorrent with fallbacks `ifconfig.co`, `ifconfig.me` and `ipapi.linuxiarz.pl` when that rTorrent configuration supports remote commands, and `POST` forces a fresh check bypassing the cache. `/api/system/status` returns `usage_available=false` for remote profiles and does not read CPU/RAM.
+OpenAPI documentation is available at `/docs`. `/api/profiles` supports `max_parallel_jobs` with default value `5` and `is_remote`; `PUT /api/profiles/{profile_id}` edits an existing profile. `/api/preferences` supports fields including `theme`, `bootstrap_theme`, `font_family`, `table_columns_json`, `peers_refresh_seconds` and `port_check_enabled`. `/api/port-check` returns port status with `checked_at`; for remote profiles the public IP is read through rTorrent with fallbacks when supported. `/api/system/status` returns `usage_available=false` for remote profiles and does not read local CPU/RAM.
 
 `/api/openapi.json` includes reusable schemas for main API responses, including `TorrentListResponse`, `TorrentSummary`, `TorrentFilterSummary`, `CleanupSummary` and `AppStatus`. `GET /api/torrents` documents the `summary` field used by sidebar filters.
 
+## Admin CLI
+
+Reset an existing user's password:
+
+```bash
+. venv/bin/activate
+python -m pytorrent.cli reset-password admin new_password
+```
+
+Without the password argument, the CLI asks for it interactively:
+
+```bash
+python -m pytorrent.cli reset-password admin
+```
+
+The command uses the same database as the app and respects `PYTORRENT_DB_PATH` from `.env`. The reset changes only the password hash and leaves role and permissions unchanged.
````
`app.py` — 11 changed lines

```diff
@@ -1,7 +1,14 @@
 from pytorrent import create_app, socketio
-from pytorrent.config import HOST, PORT, DEBUG
+from pytorrent.config import ALLOW_UNSAFE_WERKZEUG, DEBUG, HOST, PORT
 
 app = create_app()
 
 if __name__ == "__main__":
-    socketio.run(app, host=HOST, port=PORT, debug=DEBUG, allow_unsafe_werkzeug=True)
+    # Note: This entrypoint is kept for local development; production should use gunicorn via wsgi:app.
+    socketio.run(
+        app,
+        host=HOST,
+        port=PORT,
+        debug=DEBUG,
+        allow_unsafe_werkzeug=ALLOW_UNSAFE_WERKZEUG,
+    )
```
Systemd unit

```diff
@@ -8,20 +8,16 @@ Wants=network-online.target
 
 [Service]
 Type=simple
-#User=root
-#Group=root
 User=pytorrent
 Group=pytorrent
 WorkingDirectory=/opt/pyTorrent
 Environment="PYTHONUNBUFFERED=1"
 EnvironmentFile=/opt/pyTorrent/.env
-ExecStart=/opt/pyTorrent/venv/bin/python /opt/pyTorrent/app.py
+ExecStart=/opt/pyTorrent/venv/bin/gunicorn -c /opt/pyTorrent/gunicorn.conf.py --worker-class gthread --workers 1 --threads 32 --bind ${PYTORRENT_HOST}:${PYTORRENT_PORT} --access-logfile - --error-logfile - wsgi:app
 Restart=always
 RestartSec=3
 KillSignal=SIGINT
 TimeoutStopSec=20
 
-# optional
 NoNewPrivileges=true
 PrivateTmp=true
```
`gunicorn.conf.py` — new file, 3 lines

```python
import gunicorn.http.wsgi

# Rebrand the Server response header emitted by gunicorn.
gunicorn.http.wsgi.SERVER = "pyTorrent"
```
Install script

```diff
@@ -5,6 +5,8 @@ python3 -m venv venv
 pip install --upgrade pip
 pip install -r requirements.txt
 cp -n .env.example .env || true
+grep -q '^PYTORRENT_USE_OFFLINE_LIBS=' .env || echo 'PYTORRENT_USE_OFFLINE_LIBS=true' >> .env
+./scripts/download_frontend_libs.py
 mkdir -p data
 chmod 755 data
 ./scripts/download_geoip.sh data/GeoLite2-City.mmdb
```
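The `grep -q … || echo … >>` line added above is an idempotent append: the key is written only when no line already starts with it, so re-running the installer never duplicates the setting. A standalone sketch of the idiom against a scratch file:

```shell
env_file="$(mktemp)"
printf 'PYTORRENT_DEBUG=0\n' > "$env_file"

# Append the key only if no line already starts with it; run twice on purpose.
grep -q '^PYTORRENT_USE_OFFLINE_LIBS=' "$env_file" || echo 'PYTORRENT_USE_OFFLINE_LIBS=true' >> "$env_file"
grep -q '^PYTORRENT_USE_OFFLINE_LIBS=' "$env_file" || echo 'PYTORRENT_USE_OFFLINE_LIBS=true' >> "$env_file"

# Two runs, still exactly one occurrence.
grep -c '^PYTORRENT_USE_OFFLINE_LIBS=' "$env_file"   # prints 1
rm -f "$env_file"
```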
Zip helper script

```diff
@@ -46,7 +46,7 @@ def make_zip(repo_path: Path, output_zip: Path) -> None:
         zf.write(abs_path, arcname=rel_path)
 
     print(f"Utworzono archiwum: {output_zip}")
-    print(f"Dodano plików: {len(files)}")
+    print(f"Added files: {len(files)}")
@@ -60,7 +60,7 @@ def main():
     try:
         run_git_command(["rev-parse", "--show-toplevel"], repo_path)
     except subprocess.CalledProcessError:
-        print("Błąd: ten katalog nie jest repozytorium Git.", file=sys.stderr)
+        print("Error: this directory is not a Git repository.", file=sys.stderr)
         sys.exit(1)
 
     make_zip(repo_path, output_zip)
```
Package init (`create_app`/`socketio`)

```diff
@@ -1,19 +1,78 @@
 from __future__ import annotations
 
 from pathlib import Path
-from flask import Flask, request, url_for
+from flask import Flask, jsonify, render_template, request, url_for
 from flask_socketio import SocketIO
-from .config import SECRET_KEY
+from werkzeug.middleware.proxy_fix import ProxyFix
+
+from .config import (
+    SECRET_KEY,
+    SESSION_COOKIE_SECURE,
+    PROXY_FIX_ENABLE,
+    PROXY_FIX_X_FOR,
+    PROXY_FIX_X_PROTO,
+    PROXY_FIX_X_HOST,
+    PROXY_FIX_X_PORT,
+    PROXY_FIX_X_PREFIX,
+    SOCKETIO_CORS_ALLOWED_ORIGINS,
+)
 from .db import init_db
+from .services.frontend_assets import asset_path, bootstrap_css_path, validate_offline_assets
 from .utils import file_md5
 
-socketio = SocketIO(cors_allowed_origins="*", ping_timeout=30, async_mode="threading")
+socketio = SocketIO(cors_allowed_origins=SOCKETIO_CORS_ALLOWED_ORIGINS, ping_timeout=30, async_mode="threading")
 _static_md5_cache: dict[tuple, str] = {}
+
+
+def _wants_json_response() -> bool:
+    """Return true for API/error clients that should not receive an HTML page."""
+    best = request.accept_mimetypes.best_match(["application/json", "text/html"])
+    return request.path.startswith("/api/") or best == "application/json"
+
+
+def register_error_pages(app: Flask) -> None:
+    @app.errorhandler(404)
+    def not_found(error):
+        if _wants_json_response():
+            return jsonify({"ok": False, "error": "Not found"}), 404
+        return render_template(
+            "error.html",
+            code=404,
+            title="Page not found",
+            message="The requested pyTorrent view does not exist or is not available.",
+            icon="fa-compass-drafting",
+        ), 404
+
+    @app.errorhandler(500)
+    def server_error(error):
+        if _wants_json_response():
+            return jsonify({"ok": False, "error": "Internal server error"}), 500
+        return render_template(
+            "error.html",
+            code=500,
+            title="Application error",
+            message="pyTorrent hit an internal error while handling this request.",
+            icon="fa-bug",
+        ), 500
+
+
 def create_app() -> Flask:
+    validate_offline_assets()
     app = Flask(__name__)
+    if PROXY_FIX_ENABLE:
+        app.wsgi_app = ProxyFix(
+            app.wsgi_app,
+            x_for=PROXY_FIX_X_FOR,
+            x_proto=PROXY_FIX_X_PROTO,
+            x_host=PROXY_FIX_X_HOST,
+            x_port=PROXY_FIX_X_PORT,
+            x_prefix=PROXY_FIX_X_PREFIX,
+        )
     app.secret_key = SECRET_KEY
+    app.config.update(
+        SESSION_COOKIE_HTTPONLY=True,
+        SESSION_COOKIE_SAMESITE="Lax",
+        SESSION_COOKIE_SECURE=SESSION_COOKIE_SECURE,
+    )
 
     @app.context_processor
     def static_helpers():
```
```diff
@@ -30,25 +89,48 @@ def create_app() -> Flask:
                 return url_for("static", filename=filename, v=version)
             except OSError:
                 return url_for("static", filename=filename)
-        return {"static_url": static_url}
+
+        def frontend_asset_url(key: str) -> str:
+            path = asset_path(key)
+            return path if path.startswith("http") else static_url(path)
+
+        def bootstrap_theme_url(theme: str | None = None) -> str:
+            path = bootstrap_css_path(theme)
+            return path if path.startswith("http") else static_url(path)
+
+        return {
+            "static_url": static_url,
+            "frontend_asset_url": frontend_asset_url,
+            "bootstrap_theme_url": bootstrap_theme_url,
+        }
 
     @app.after_request
     def cache_headers(response):
-        response.headers.pop('Content-Disposition', None)
+        response.headers.pop("Content-Disposition", None)
 
-        if request.endpoint == "static":
+        static_file = request.path.startswith("/static/")
+        tracker_icon = request.path.startswith("/static/tracker_favicons/")
+        favicon_ico = request.path == "/favicon.ico"
+
+        if static_file and not tracker_icon:
             response.headers["Cache-Control"] = "public, max-age=31536000, immutable"
+        elif favicon_ico:
+            response.headers["Cache-Control"] = "public, max-age=86400"
         else:
-            response.headers["Cache-Control"] = "no-cache, no-store, must-revalidate"
-            response.headers["Pragma"] = "no-cache"
-            response.headers["Expires"] = "0"
+            response.headers["Cache-Control"] = "no-store, private"
         return response
 
     from .routes.main import bp as main_bp
     from .routes.api import bp as api_bp
     app.register_blueprint(main_bp)
     app.register_blueprint(api_bp)
+    register_error_pages(app)
     init_db()
+    from .services.speed_peaks import load_cache
+    load_cache()
+    from .services.auth import install_guards
+    install_guards(app)
 
     socketio.init_app(app)
     from .services.workers import set_socketio
```
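The reworked `cache_headers` branching above reads as a pure path-to-policy function. This standalone paraphrase (for illustration, not code from the repo) makes the precedence explicit, including that tracker favicons deliberately fall through to the no-store branch even though they live under `/static/`:

```python
def cache_control_for(path: str) -> str:
    """Map a request path to a Cache-Control policy, mirroring the branch order above."""
    static_file = path.startswith("/static/")
    tracker_icon = path.startswith("/static/tracker_favicons/")
    if static_file and not tracker_icon:
        # Fingerprinted static assets: safe to cache aggressively for a year.
        return "public, max-age=31536000, immutable"
    if path == "/favicon.ico":
        # Root favicon: medium TTL, since it can change without a fingerprint.
        return "public, max-age=86400"
    # API responses and tracker favicons: never cached by clients.
    return "no-store, private"
```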
`pytorrent/cli.py` — new file, 111 lines

```python
from __future__ import annotations

import argparse
import getpass
import sys
import json

from .db import connect, init_db, utcnow
from .services.auth import password_hash
from .services import tracker_cache


def reset_password(username: str, password: str) -> bool:
    """Note: Reset the selected user password hash without changing role or permissions."""
    username = (username or "").strip()
    if not username:
        raise ValueError("Username is required")
    if password is None or password == "":
        raise ValueError("Password cannot be empty")

    init_db()
    now = utcnow()
    hashed = password_hash(password)
    with connect() as conn:
        row = conn.execute("SELECT id FROM users WHERE username=?", (username,)).fetchone()
        if not row:
            return False
        conn.execute(
            "UPDATE users SET password_hash=?, updated_at=? WHERE username=?",
            (hashed, now, username),
        )
    return True


def fetch_tracker_favicon(domain: str, refresh: bool = True, debug: bool = False) -> str:
    """Note: Download or refresh one tracker favicon from CLI without starting the web server."""
    clean = tracker_cache.tracker_domain(domain)
    if not clean:
        raise ValueError("Tracker domain is required")
    init_db()
    path, mime = tracker_cache.favicon_path(clean, enabled=True, force=refresh)
    row = tracker_cache.favicon_cache_row(clean)
    if not path:
        detail = (row or {}).get("error") if row else "favicon not found"
        if debug and row:
            raise RuntimeError(f"{detail or 'favicon not found'}; cache={json.dumps(dict(row), default=str)}")
        raise RuntimeError(str(detail or "favicon not found"))
    if debug and row:
        return f"{path} ({mime or 'unknown'}) cache={json.dumps(dict(row), default=str)}"
    return f"{path} ({mime or 'unknown'})"


def _password_from_args(args: argparse.Namespace) -> str:
    """Note: Allow the password to be passed as an argument or entered securely in interactive mode."""
    if args.password is not None:
        return args.password
    first = getpass.getpass("New password: ")
    second = getpass.getpass("Repeat password: ")
    if first != second:
        raise ValueError("Passwords do not match")
    return first


def build_parser() -> argparse.ArgumentParser:
    """Note: Define simple administrative commands launched with python -m pytorrent.cli."""
    parser = argparse.ArgumentParser(description="pyTorrent CLI")
    sub = parser.add_subparsers(dest="command", required=True)

    reset = sub.add_parser("reset-password", help="Reset password for an existing user")
    reset.add_argument("username", help="User login")
    reset.add_argument("password", nargs="?", help="New password; omit to type it interactively")
    reset.set_defaults(func=_cmd_reset_password)

    icon = sub.add_parser("tracker-favicon", help="Download or refresh a tracker favicon cache file")
    icon.add_argument("domain", help="Tracker domain, e.g. t.pte.nu")
    icon.add_argument("--no-refresh", action="store_true", help="Use fresh cache when available")
    icon.add_argument("--debug", action="store_true", help="Print cache diagnostics on success or failure")
    icon.set_defaults(func=_cmd_tracker_favicon)

    return parser


def _cmd_reset_password(args: argparse.Namespace) -> int:
    """Note: Run the password reset and return a readable terminal status."""
    password = _password_from_args(args)
    if reset_password(args.username, password):
        print(f"Password reset for user: {args.username}")
        return 0
    print(f"User not found: {args.username}", file=sys.stderr)
    return 1


def _cmd_tracker_favicon(args: argparse.Namespace) -> int:
    """Note: Run favicon discovery from CLI and print the saved file path."""
    print(fetch_tracker_favicon(args.domain, refresh=not args.no_refresh, debug=bool(args.debug)))
    return 0


def main(argv: list[str] | None = None) -> int:
    """Note: Main CLI entrypoint with error handling and without starting the web app."""
    parser = build_parser()
    args = parser.parse_args(argv)
    try:
        return int(args.func(args) or 0)
    except Exception as exc:
        print(f"Error: {exc}", file=sys.stderr)
        return 1


if __name__ == "__main__":
    raise SystemExit(main())
```
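The `reset_password` flow above is a select-then-update against the users table: look the user up, bail out if missing, otherwise replace only the hash. A self-contained sketch of that shape, using an in-memory SQLite database and a throwaway SHA-256 stand-in for the app's real `password_hash` (the schema here is illustrative, not the app's actual schema):

```python
import hashlib
import sqlite3


def demo_reset(conn: sqlite3.Connection, username: str, password: str) -> bool:
    # Select first: unknown users are reported, never created.
    row = conn.execute("SELECT id FROM users WHERE username=?", (username,)).fetchone()
    if not row:
        return False
    # Update only the hash column, leaving any other columns untouched.
    hashed = hashlib.sha256(password.encode()).hexdigest()
    conn.execute("UPDATE users SET password_hash=? WHERE username=?", (hashed, username))
    return True


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users (username, password_hash) VALUES ('admin', 'old')")
print(demo_reset(conn, "admin", "new_password"))   # True: user existed, hash replaced
print(demo_reset(conn, "ghost", "whatever"))       # False: unknown users are not created
```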
`pytorrent/config.py`

```diff
@@ -1,20 +1,46 @@
 from __future__ import annotations
 
 import os
+import secrets
 from pathlib import Path
 from dotenv import load_dotenv
 
 BASE_DIR = Path(__file__).resolve().parent.parent
 load_dotenv(BASE_DIR / ".env")
 
-SECRET_KEY = os.getenv("PYTORRENT_SECRET_KEY", "dev-change-me")
+
+def _env_bool(name: str, default: bool = False) -> bool:
+    value = os.getenv(name)
+    if value is None:
+        return default
+    return value.strip().lower() in {"1", "true", "yes", "on"}
+
+
+_SECRET_KEY_ENV = os.getenv("PYTORRENT_SECRET_KEY")
+SECRET_KEY = _SECRET_KEY_ENV or "dev-change-me"
 DB_PATH = Path(os.getenv("PYTORRENT_DB_PATH", str(BASE_DIR / "data" / "pytorrent.sqlite3")))
 if not DB_PATH.is_absolute():
     DB_PATH = BASE_DIR / DB_PATH
 
 HOST = os.getenv("PYTORRENT_HOST", "0.0.0.0")
 PORT = int(os.getenv("PYTORRENT_PORT", "8090"))
-DEBUG = os.getenv("PYTORRENT_DEBUG", "0") == "1"
+DEBUG = _env_bool("PYTORRENT_DEBUG", False)
+# Note: offline mode forces local JS/CSS and removes the CDN dependency.
+USE_OFFLINE_LIBS = _env_bool("PYTORRENT_USE_OFFLINE_LIBS", False)
+# Note: Optional authentication remains disabled unless explicitly enabled in .env.
+AUTH_ENABLE = _env_bool("PYTORRENT_AUTH_ENABLE", False)
+if AUTH_ENABLE and (not _SECRET_KEY_ENV or SECRET_KEY == "dev-change-me"):
+    # Note: Auth mode cannot use Flask's development secret; persist a local random session key instead.
+    _secret_file = BASE_DIR / "data" / ".session_secret"
+    _secret_file.parent.mkdir(parents=True, exist_ok=True)
+    if _secret_file.exists():
+        SECRET_KEY = _secret_file.read_text(encoding="utf-8").strip()
+    if not SECRET_KEY or SECRET_KEY == "dev-change-me":
+        SECRET_KEY = secrets.token_urlsafe(48)
+        _secret_file.write_text(SECRET_KEY, encoding="utf-8")
+SESSION_COOKIE_SECURE = _env_bool("PYTORRENT_SESSION_COOKIE_SECURE", False)
+# Note: Keep Werkzeug opt-in only for explicit local/dev use, never by default in services.
+ALLOW_UNSAFE_WERKZEUG = _env_bool("PYTORRENT_ALLOW_UNSAFE_WERKZEUG", DEBUG)
 POLL_INTERVAL = float(os.getenv("PYTORRENT_POLL_INTERVAL", "1.0"))
 WORKERS = int(os.getenv("PYTORRENT_WORKERS", "16"))
 GEOIP_DB = Path(os.getenv("PYTORRENT_GEOIP_DB", str(BASE_DIR / "data" / "GeoLite2-City.mmdb")))
```
@@ -29,8 +55,20 @@ def _env_int(name: str, default: int, minimum: int = 0) -> int:
         return default
 
 
+PROXY_FIX_ENABLE = _env_bool("PYTORRENT_PROXY_FIX_ENABLE", False)
+PROXY_FIX_X_FOR = _env_int("PYTORRENT_PROXY_FIX_X_FOR", 1, 0)
+PROXY_FIX_X_PROTO = _env_int("PYTORRENT_PROXY_FIX_X_PROTO", 1, 0)
+PROXY_FIX_X_HOST = _env_int("PYTORRENT_PROXY_FIX_X_HOST", 1, 0)
+PROXY_FIX_X_PORT = _env_int("PYTORRENT_PROXY_FIX_X_PORT", 1, 0)
+PROXY_FIX_X_PREFIX = _env_int("PYTORRENT_PROXY_FIX_X_PREFIX", 1, 0)
+
+_SOCKETIO_CORS = os.getenv("PYTORRENT_SOCKETIO_CORS_ALLOWED_ORIGINS", "").strip()
+SOCKETIO_CORS_ALLOWED_ORIGINS = None if not _SOCKETIO_CORS else [item.strip() for item in _SOCKETIO_CORS.split(",") if item.strip()]
+
 TRAFFIC_HISTORY_RETENTION_DAYS = _env_int("PYTORRENT_TRAFFIC_HISTORY_RETENTION_DAYS", 90, 1)
 JOBS_RETENTION_DAYS = _env_int("PYTORRENT_JOBS_RETENTION_DAYS", 30, 1)
 SMART_QUEUE_HISTORY_RETENTION_DAYS = _env_int("PYTORRENT_SMART_QUEUE_HISTORY_RETENTION_DAYS", 30, 1)
 LOG_RETENTION_DAYS = _env_int("PYTORRENT_LOG_RETENTION_DAYS", 30, 1)
-SMART_QUEUE_LABEL = os.getenv("PYTORRENT_SMART_QUEUE_LABEL", "Smart Queue Paused")
+SMART_QUEUE_LABEL = os.getenv("PYTORRENT_SMART_QUEUE_LABEL", "Smart Queue Stopped")
+SMART_QUEUE_STALLED_LABEL = os.getenv("PYTORRENT_SMART_QUEUE_STALLED_LABEL", "Stalled")
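The `PROXY_FIX_*` values added above map one-to-one onto the parameters of Werkzeug's `ProxyFix` middleware. The wiring itself is not shown in this diff; a hedged sketch of how such values are typically applied to a WSGI app:

```python
from werkzeug.middleware.proxy_fix import ProxyFix


def app(environ, start_response):
    # Minimal WSGI app that echoes the scheme the middleware decided on.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [environ["wsgi.url_scheme"].encode()]


# Mirrors how config values such as PROXY_FIX_X_FOR would be passed through;
# each argument is the number of trusted proxy hops for that header.
wrapped = ProxyFix(app, x_for=1, x_proto=1, x_host=1, x_port=1, x_prefix=1)
```

With `x_proto=1`, a request carrying `X-Forwarded-Proto: https` from one trusted hop rewrites `wsgi.url_scheme` to `https` before the app sees it.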
@@ -10,7 +10,20 @@ CREATE TABLE IF NOT EXISTS users (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
     username TEXT UNIQUE NOT NULL,
     password_hash TEXT,
-    created_at TEXT NOT NULL
+    role TEXT DEFAULT 'user',
+    is_active INTEGER DEFAULT 1,
+    created_at TEXT NOT NULL,
+    updated_at TEXT
+);
+
+CREATE TABLE IF NOT EXISTS user_profile_permissions (
+    user_id INTEGER NOT NULL,
+    profile_id INTEGER NOT NULL DEFAULT 0,
+    access_level TEXT NOT NULL DEFAULT 'ro',
+    created_at TEXT NOT NULL,
+    updated_at TEXT NOT NULL,
+    PRIMARY KEY(user_id, profile_id),
+    FOREIGN KEY(user_id) REFERENCES users(id)
 );
 
 CREATE TABLE IF NOT EXISTS user_preferences (
@@ -26,6 +39,9 @@ CREATE TABLE IF NOT EXISTS user_preferences (
     peers_refresh_seconds INTEGER DEFAULT 0,
     port_check_enabled INTEGER DEFAULT 0,
     footer_items_json TEXT,
+    title_speed_enabled INTEGER DEFAULT 0,
+    tracker_favicons_enabled INTEGER DEFAULT 0,
+    interface_scale INTEGER DEFAULT 100,
     created_at TEXT NOT NULL,
     updated_at TEXT NOT NULL,
     FOREIGN KEY(user_id) REFERENCES users(id)
@@ -126,6 +142,10 @@ CREATE TABLE IF NOT EXISTS smart_queue_settings (
     stalled_seconds INTEGER DEFAULT 300,
     min_speed_bytes INTEGER DEFAULT 1024,
     min_seeds INTEGER DEFAULT 1,
+    min_peers INTEGER DEFAULT 0,
+    ignore_seed_peer INTEGER DEFAULT 0,
+    ignore_speed INTEGER DEFAULT 0,
+    manage_stopped INTEGER DEFAULT 0,
     updated_at TEXT NOT NULL,
     PRIMARY KEY(user_id, profile_id)
 );
@@ -135,6 +155,7 @@ CREATE TABLE IF NOT EXISTS smart_queue_stalled (
     torrent_hash TEXT NOT NULL,
     first_stalled_at TEXT NOT NULL,
     updated_at TEXT NOT NULL,
+    timer_key TEXT DEFAULT '',
     PRIMARY KEY(profile_id, torrent_hash)
 );
 
@@ -182,6 +203,22 @@ CREATE TABLE IF NOT EXISTS traffic_history (
 
 CREATE INDEX IF NOT EXISTS idx_traffic_history_profile_created ON traffic_history(profile_id, created_at);
 
+CREATE TABLE IF NOT EXISTS transfer_speed_peaks (
+    profile_id INTEGER PRIMARY KEY,
+    session_started_at TEXT NOT NULL,
+    session_down_peak INTEGER DEFAULT 0,
+    session_up_peak INTEGER DEFAULT 0,
+    session_down_peak_at TEXT,
+    session_up_peak_at TEXT,
+    all_time_down_peak INTEGER DEFAULT 0,
+    all_time_up_peak INTEGER DEFAULT 0,
+    all_time_down_peak_at TEXT,
+    all_time_up_peak_at TEXT,
+    created_at TEXT NOT NULL,
+    updated_at TEXT NOT NULL,
+    FOREIGN KEY(profile_id) REFERENCES rtorrent_profiles(id) ON DELETE CASCADE
+);
+
 CREATE TABLE IF NOT EXISTS automation_rules (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
     user_id INTEGER NOT NULL,
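The `transfer_speed_peaks` table added above is keyed by profile and stores running maxima. One plausible way a recorder such as `speed_peaks.record` could maintain it is a single SQLite upsert; this is an illustrative sketch (trimmed to the rate columns), not the service's actual implementation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE transfer_speed_peaks (
           profile_id INTEGER PRIMARY KEY,
           session_down_peak INTEGER DEFAULT 0,
           session_up_peak INTEGER DEFAULT 0
       )"""
)


def record_peak(conn: sqlite3.Connection, profile_id: int, down_rate: int, up_rate: int) -> None:
    # Keep only the maximum observed rate per profile: insert the first sample,
    # then let the conflict clause raise each column to the larger value.
    conn.execute(
        """INSERT INTO transfer_speed_peaks(profile_id, session_down_peak, session_up_peak)
           VALUES(?, ?, ?)
           ON CONFLICT(profile_id) DO UPDATE SET
               session_down_peak = MAX(session_down_peak, excluded.session_down_peak),
               session_up_peak = MAX(session_up_peak, excluded.session_up_peak)""",
        (profile_id, down_rate, up_rate),
    )


record_peak(conn, 1, 100, 10)
record_peak(conn, 1, 50, 20)
row = conn.execute(
    "SELECT session_down_peak, session_up_peak FROM transfer_speed_peaks WHERE profile_id=1"
).fetchone()
```

After both samples, the stored row holds the per-column maxima rather than the latest reading.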
@@ -234,15 +271,49 @@ CREATE TABLE IF NOT EXISTS app_settings (
     key TEXT PRIMARY KEY,
     value TEXT
 );
+
+CREATE TABLE IF NOT EXISTS torrent_stats_cache (
+    profile_id INTEGER PRIMARY KEY,
+    payload_json TEXT NOT NULL,
+    created_at TEXT NOT NULL,
+    updated_at TEXT NOT NULL,
+    updated_epoch REAL DEFAULT 0
+);
+
+CREATE TABLE IF NOT EXISTS tracker_summary_cache (
+    profile_id INTEGER NOT NULL,
+    torrent_hash TEXT NOT NULL,
+    trackers_json TEXT NOT NULL,
+    updated_at TEXT NOT NULL,
+    updated_epoch REAL DEFAULT 0,
+    PRIMARY KEY(profile_id, torrent_hash)
+);
+CREATE INDEX IF NOT EXISTS idx_tracker_summary_cache_profile ON tracker_summary_cache(profile_id, updated_epoch);
+
+CREATE TABLE IF NOT EXISTS tracker_favicon_cache (
+    domain TEXT PRIMARY KEY,
+    source_url TEXT,
+    file_path TEXT,
+    mime_type TEXT,
+    updated_at TEXT NOT NULL,
+    updated_epoch REAL DEFAULT 0,
+    error TEXT
+);
 """
 
 MIGRATIONS = [
+    "ALTER TABLE users ADD COLUMN role TEXT DEFAULT 'user'",
+    "ALTER TABLE users ADD COLUMN is_active INTEGER DEFAULT 1",
+    "ALTER TABLE users ADD COLUMN updated_at TEXT",
     "ALTER TABLE user_preferences ADD COLUMN mobile_mode INTEGER DEFAULT 0",
     "ALTER TABLE user_preferences ADD COLUMN peers_refresh_seconds INTEGER DEFAULT 0",
     "ALTER TABLE user_preferences ADD COLUMN port_check_enabled INTEGER DEFAULT 0",
     "ALTER TABLE user_preferences ADD COLUMN bootstrap_theme TEXT DEFAULT 'default'",
     "ALTER TABLE user_preferences ADD COLUMN font_family TEXT DEFAULT 'default'",
     "ALTER TABLE user_preferences ADD COLUMN footer_items_json TEXT",
+    "ALTER TABLE user_preferences ADD COLUMN title_speed_enabled INTEGER DEFAULT 0",
+    "ALTER TABLE user_preferences ADD COLUMN tracker_favicons_enabled INTEGER DEFAULT 0",
+    "ALTER TABLE user_preferences ADD COLUMN interface_scale INTEGER DEFAULT 100",
     "ALTER TABLE rtorrent_profiles ADD COLUMN max_parallel_jobs INTEGER DEFAULT 5",
     "ALTER TABLE rtorrent_profiles ADD COLUMN is_remote INTEGER DEFAULT 0",
     "ALTER TABLE jobs ADD COLUMN attempts INTEGER DEFAULT 0",
@@ -253,6 +324,15 @@ MIGRATIONS = [
     "ALTER TABLE automation_rules ADD COLUMN cooldown_minutes INTEGER DEFAULT 60",
     "ALTER TABLE rtorrent_config_overrides ADD COLUMN apply_on_start INTEGER DEFAULT 0",
     "ALTER TABLE rtorrent_config_overrides ADD COLUMN baseline_value TEXT",
+    "ALTER TABLE torrent_stats_cache ADD COLUMN updated_epoch REAL DEFAULT 0",
+    "ALTER TABLE smart_queue_settings ADD COLUMN manage_stopped INTEGER DEFAULT 0",
+    "ALTER TABLE smart_queue_settings ADD COLUMN min_peers INTEGER DEFAULT 0",
+    "ALTER TABLE smart_queue_settings ADD COLUMN ignore_seed_peer INTEGER DEFAULT 0",
+    "ALTER TABLE smart_queue_settings ADD COLUMN ignore_speed INTEGER DEFAULT 0",
+    "ALTER TABLE smart_queue_stalled ADD COLUMN timer_key TEXT DEFAULT ''",
+    "CREATE TABLE IF NOT EXISTS tracker_summary_cache (profile_id INTEGER NOT NULL, torrent_hash TEXT NOT NULL, trackers_json TEXT NOT NULL, updated_at TEXT NOT NULL, updated_epoch REAL DEFAULT 0, PRIMARY KEY(profile_id, torrent_hash))",
+    "CREATE INDEX IF NOT EXISTS idx_tracker_summary_cache_profile ON tracker_summary_cache(profile_id, updated_epoch)",
+    "CREATE TABLE IF NOT EXISTS tracker_favicon_cache (domain TEXT PRIMARY KEY, source_url TEXT, file_path TEXT, mime_type TEXT, updated_at TEXT NOT NULL, updated_epoch REAL DEFAULT 0, error TEXT)",
 ]
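The `MIGRATIONS` list above only works if re-running an `ALTER TABLE` on an already-migrated database fails harmlessly. The apply loop itself is not shown in this diff; a sketch of the usual pattern under that assumption:

```python
import sqlite3

# Two entries taken from the MIGRATIONS list above, standalone for illustration.
MIGRATIONS = [
    "ALTER TABLE users ADD COLUMN role TEXT DEFAULT 'user'",
    "ALTER TABLE users ADD COLUMN is_active INTEGER DEFAULT 1",
]


def apply_migrations(conn: sqlite3.Connection) -> int:
    applied = 0
    for stmt in MIGRATIONS:
        try:
            conn.execute(stmt)
            applied += 1
        except sqlite3.OperationalError:
            # "duplicate column name": this migration already ran on the DB.
            pass
    return applied


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
first = apply_migrations(conn)   # both ALTERs apply on a fresh schema
second = apply_migrations(conn)  # both are no-ops the second time
```

This is why the list can grow append-only: every startup replays all statements, and SQLite rejects the already-applied ones.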
@@ -288,15 +368,21 @@ def init_db():
                 pass
         now = utcnow()
         conn.execute(
-            "INSERT OR IGNORE INTO users(id, username, password_hash, created_at) VALUES(1, 'default', NULL, ?)",
-            (now,),
+            "INSERT OR IGNORE INTO users(id, username, password_hash, role, is_active, created_at, updated_at) VALUES(1, 'default', NULL, 'admin', 1, ?, ?)",
+            (now, now),
         )
+        conn.execute("UPDATE users SET role=COALESCE(role, 'admin'), is_active=COALESCE(is_active, 1), updated_at=COALESCE(updated_at, ?) WHERE id=1", (now,))
        pref = conn.execute("SELECT id FROM user_preferences WHERE user_id=1").fetchone()
         if not pref:
             conn.execute(
                 "INSERT INTO user_preferences(user_id, theme, created_at, updated_at) VALUES(1, 'dark', ?, ?)",
                 (now, now),
             )
+    try:
+        from .services.auth import ensure_admin_user
+        ensure_admin_user()
+    except Exception:
+        pass
 
 
 def default_user_id() -> int:
@@ -13,17 +13,91 @@ import socket
 import json
 import psutil
 import xml.etree.ElementTree as ET
-from flask import Blueprint, jsonify, request
+from flask import Blueprint, jsonify, request, abort, send_file, redirect
 from ..config import DB_PATH, JOBS_RETENTION_DAYS, SMART_QUEUE_HISTORY_RETENTION_DAYS, WORKERS
-from ..db import default_user_id, connect, utcnow
-from ..services import preferences, rtorrent
+from ..db import connect, utcnow
+from ..services.auth import current_user_id as default_user_id, current_user, list_users, save_user, delete_user, login_user, logout_user, enabled as auth_enabled, require_profile_write
+from ..services import preferences, rtorrent, torrent_stats, speed_peaks, tracker_cache
 from ..services.torrent_cache import torrent_cache
 from ..services.torrent_summary import cached_summary
-from ..services.workers import enqueue, list_jobs, cancel_job, retry_job, clear_jobs
+from ..services.workers import enqueue, list_jobs, cancel_job, retry_job, clear_jobs, emergency_clear_jobs
 from ..services.geoip import lookup_ip
 
 bp = Blueprint("api", __name__, url_prefix="/api")
 
+MOVE_BULK_MAX_HASHES = 100
+
+
+@bp.post("/auth/login")
+def auth_login():
+    # Note: Auth API is hidden when optional authentication is disabled.
+    if not auth_enabled():
+        abort(404)
+    data = request.get_json(silent=True) or {}
+    user = login_user(str(data.get("username") or ""), str(data.get("password") or ""))
+    if not user:
+        return jsonify({"ok": False, "error": "Invalid username or password"}), 401
+    return ok({"user": user, "auth_enabled": auth_enabled()})
+
+
+@bp.get("/auth/me")
+def auth_me():
+    if not auth_enabled():
+        abort(404)
+    return ok({"user": current_user(), "auth_enabled": auth_enabled()})
+
+
+@bp.post("/auth/logout")
+def auth_logout():
+    if not auth_enabled():
+        abort(404)
+    logout_user()
+    return ok()
+
+
+@bp.get("/auth/users")
+def auth_users_list():
+    if not auth_enabled():
+        abort(404)
+    return ok({"users": list_users()})
+
+
+@bp.post("/auth/users")
+def auth_users_create():
+    if not auth_enabled():
+        abort(404)
+    try:
+        return ok({"user": save_user(request.get_json(silent=True) or {})})
+    except Exception as exc:
+        return jsonify({"ok": False, "error": str(exc)}), 400
+
+
+@bp.put("/auth/users/<int:user_id>")
+def auth_users_update(user_id: int):
+    if not auth_enabled():
+        abort(404)
+    try:
+        return ok({"user": save_user(request.get_json(silent=True) or {}, user_id)})
+    except Exception as exc:
+        return jsonify({"ok": False, "error": str(exc)}), 400
+
+
+@bp.delete("/auth/users/<int:user_id>")
+def auth_users_delete(user_id: int):
+    if not auth_enabled():
+        abort(404)
+    try:
+        delete_user(user_id)
+        return ok({"users": list_users()})
+    except Exception as exc:
+        return jsonify({"ok": False, "error": str(exc)}), 400
+
+
+def _job_profile_id(job_id: str) -> int | None:
+    with connect() as conn:
+        row = conn.execute("SELECT profile_id FROM jobs WHERE id=?", (job_id,)).fetchone()
+    return int(row.get("profile_id") or 0) if row else None
+
+
 def ok(payload=None):
     data = {"ok": True}
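`_job_profile_id` above calls `row.get(...)`, which implies that `connect()` installs a dict-like row factory, since plain `sqlite3.Row` has no `.get` method. A self-contained sketch of that pattern; the factory itself is an assumption about `connect()`, not code from this diff:

```python
import sqlite3


def _dict_row(cursor: sqlite3.Cursor, row: tuple) -> dict:
    # Row factory returning plain dicts, so callers can safely use row.get(...).
    return {desc[0]: value for desc, value in zip(cursor.description, row)}


conn = sqlite3.connect(":memory:")
conn.row_factory = _dict_row
conn.execute("CREATE TABLE jobs (id TEXT PRIMARY KEY, profile_id INTEGER)")
conn.execute("INSERT INTO jobs VALUES ('j1', 7)")

row = conn.execute("SELECT profile_id FROM jobs WHERE id=?", ("j1",)).fetchone()
profile_id = int(row.get("profile_id") or 0) if row else None
```

The `if row else None` guard mirrors the helper above: a missing job id makes `fetchone()` return `None`, so the function reports "no such job" rather than raising.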
@@ -252,9 +326,11 @@ def cleanup_summary() -> dict:
         "jobs_total": _table_count("jobs"),
         "jobs_clearable": _table_count("jobs", "WHERE status NOT IN ('pending', 'running')"),
         "smart_queue_history_total": _table_count("smart_queue_history"),
+        "automation_history_total": _table_count("automation_history"),
         "retention_days": {
             "jobs": JOBS_RETENTION_DAYS,
             "smart_queue_history": SMART_QUEUE_HISTORY_RETENTION_DAYS,
+            "automation_history": SMART_QUEUE_HISTORY_RETENTION_DAYS,
         },
         "database": _db_size(),
     }
@@ -303,6 +379,52 @@ def enrich_bulk_payload(profile: dict, action_name: str, data: dict) -> dict:
     return payload
 
 
+def _chunk_hashes(hashes: list[str], size: int = MOVE_BULK_MAX_HASHES) -> list[list[str]]:
+    # Note: Splits very large torrent selections into predictable chunks so each queued job stays small and recoverable.
+    safe_size = max(1, int(size or MOVE_BULK_MAX_HASHES))
+    return [hashes[index:index + safe_size] for index in range(0, len(hashes), safe_size)]
+
+
+def enqueue_bulk_parts(profile: dict, action_name: str, data: dict) -> list[dict]:
+    # Note: One shared helper splits large move/remove operations into small ordered parts without changing other actions.
+    base_payload = enrich_bulk_payload(profile, action_name, data)
+    hashes = base_payload.get("hashes") or []
+    chunks = _chunk_hashes(hashes)
+    if len(chunks) <= 1:
+        job_id = enqueue(action_name, profile["id"], base_payload)
+        return [{"job_id": job_id, "label": "bulk-1", "part": 1, "parts": 1, "hashes": hashes, "hash_count": len(hashes)}]
+
+    jobs = []
+    items_by_hash = {str(item.get("hash")): item for item in (base_payload.get("job_context") or {}).get("items") or []}
+    for index, chunk in enumerate(chunks, start=1):
+        payload = dict(base_payload)
+        payload["hashes"] = chunk
+        context = dict(base_payload.get("job_context") or {})
+        context.update({
+            "bulk": True,
+            "bulk_label": f"bulk-{index}",
+            "bulk_part": index,
+            "bulk_parts": len(chunks),
+            "hash_count": len(chunk),
+            "parent_hash_count": len(hashes),
+            "items": [items_by_hash[h] for h in chunk if h in items_by_hash],
+        })
+        payload["job_context"] = context
+        job_id = enqueue(action_name, profile["id"], payload)
+        jobs.append({"job_id": job_id, "label": context["bulk_label"], "part": index, "parts": len(chunks), "hashes": chunk, "hash_count": len(chunk)})
+    return jobs
+
+
+def enqueue_move_bulk_parts(profile: dict, data: dict) -> list[dict]:
+    # Note: Keep the old public move helper while using the same partitioning logic.
+    return enqueue_bulk_parts(profile, "move", data)
+
+
+def enqueue_remove_bulk_parts(profile: dict, data: dict) -> list[dict]:
+    # Note: Remove/rm uses the same partitioning as move, which lowers rTorrent load.
+    return enqueue_bulk_parts(profile, "remove", data)
+
+
 @bp.get("/profiles")
 def profiles_list():
     return ok({"profiles": preferences.list_profiles(), "active": preferences.active_profile()})
@@ -362,6 +484,71 @@ def torrents():
     })
 
 
+@bp.get("/trackers/summary")
+def trackers_summary():
+    profile = preferences.active_profile()
+    if not profile:
+        return ok({"summary": {"hashes": {}, "trackers": [], "errors": [], "scanned": 0, "pending": 0}, "error": "No profile"})
+    try:
+        # Note: Tracker summary returns cached data immediately; optional warmup scans rTorrent in the background for very large libraries.
+        scan_limit = min(250, max(0, int(request.args.get("scan_limit") or 0)))
+        bg_limit = min(250, max(1, int(request.args.get("bg_limit") or 80)))
+        warm = str(request.args.get("warm") or "").lower() in {"1", "true", "yes"}
+        hashes = [t.get("hash") for t in torrent_cache.snapshot(profile["id"]) if t.get("hash")]
+        prefs = preferences.get_preferences()
+        include_favicons = bool(prefs and prefs.get("tracker_favicons_enabled"))
+        loader = lambda h: rtorrent.torrent_trackers(profile, h)
+        summary = tracker_cache.summary(profile, hashes, loader, scan_limit=scan_limit, include_favicons=include_favicons)
+        if warm and int(summary.get("pending") or 0) > 0:
+            summary["warming"] = tracker_cache.warm_summary_cache(profile, hashes, loader, batch_size=bg_limit)
+        return ok({"summary": summary})
+    except Exception as exc:
+        return ok({"summary": {"hashes": {}, "trackers": [], "errors": [{"error": str(exc)}], "scanned": 0, "pending": 0}, "error": str(exc)})
+
+
+@bp.get("/trackers/favicon/<path:domain>")
+@bp.get("/tracker-favicon/<path:domain>")
+def tracker_favicon(domain: str):
+    prefs = preferences.get_preferences()
+    force = str(request.args.get("refresh") or "").lower() in {"1", "true", "yes", "force"}
+    # Note: Manual refresh must work from CLI even when tracker favicons are disabled in Preferences.
+    enabled = force or bool(prefs and prefs.get("tracker_favicons_enabled"))
+    static_url = tracker_cache.favicon_public_url(domain, enabled=enabled, create=True, force=force)
+    if static_url:
+        # Note: The API only discovers/cache-warms the icon; the browser receives the file from /static/tracker_favicons/.
+        return redirect(static_url, code=302)
+    cached = tracker_cache.favicon_cache_row(domain)
+    return jsonify({
+        "ok": False,
+        "error": "favicon not found",
+        "domain": tracker_cache.tracker_domain(domain),
+        "enabled": bool(enabled),
+        "cached_error": (cached or {}).get("error") if cached else None,
+    }), 404
+
+
+@bp.get("/trackers/favicon")
+def tracker_favicon_query():
+    # Note: Query-string alias makes cache warming easier from shell scripts where path routing/proxies may differ.
+    domain = str(request.args.get("domain") or "").strip()
+    if not domain:
+        return jsonify({"ok": False, "error": "domain is required"}), 400
+    return tracker_favicon(domain)
+
+
+@bp.get("/torrent-stats")
+def torrent_stats_get():
+    profile = preferences.active_profile()
+    if not profile:
+        return ok({"stats": {}, "error": "No profile"})
+    force = str(request.args.get("force") or "").lower() in {"1", "true", "yes"}
+    try:
+        # Note: Heavy file metadata is served from a 15-minute DB cache unless the user explicitly refreshes it.
+        return ok({"stats": torrent_stats.get(profile, force=force)})
+    except Exception as exc:
+        return jsonify({"ok": False, "error": str(exc)}), 500
+
+
 @bp.get("/torrents/<torrent_hash>/files")
 def torrent_files(torrent_hash: str):
     profile = preferences.active_profile()
@@ -395,19 +582,6 @@ def torrent_peers(torrent_hash: str):
     return ok({"peers": peers})
 
 
-@bp.post("/torrents/<torrent_hash>/peers/action")
-def torrent_peer_action(torrent_hash: str):
-    profile = preferences.active_profile()
-    if not profile:
-        return jsonify({"ok": False, "error": "No profile"}), 400
-    data = request.get_json(silent=True) or {}
-    try:
-        result = rtorrent.peer_action(profile, torrent_hash, int(data.get("peer_index")), str(data.get("action") or ""))
-        return ok({"result": result, "message": f"Peer {result['action']} via {result['method']}"})
-    except Exception as exc:
-        return jsonify({"ok": False, "error": str(exc)}), 400
-
-
 @bp.get("/torrents/<torrent_hash>/trackers")
 def torrent_trackers(torrent_hash: str):
     profile = preferences.active_profile()
@@ -434,9 +608,23 @@ def torrent_action(action_name: str):
     if not profile:
         return jsonify({"ok": False, "error": "No profile"}), 400
     data = request.get_json(silent=True) or {}
-    allowed = {"start", "pause", "stop", "resume", "recheck", "reannounce", "remove", "move", "set_label", "set_ratio_group"}
+    allowed = {"start", "pause", "unpause", "stop", "resume", "recheck", "reannounce", "remove", "move", "set_label", "set_ratio_group"}
     if action_name not in allowed:
         return jsonify({"ok": False, "error": "Unknown action"}), 400
+    if action_name in {"move", "remove"}:
+        # Note: Large move/remove requests are split into ordered bulk parts; smaller requests keep the old single-job response shape.
+        jobs = enqueue_bulk_parts(profile, action_name, data)
+        first_job_id = jobs[0]["job_id"] if jobs else None
+        total_hashes = sum(int(job.get("hash_count") or 0) for job in jobs)
+        return ok({
+            "job_id": first_job_id,
+            "job_ids": [job["job_id"] for job in jobs],
+            "jobs": jobs,
+            "hash_count": total_hashes,
+            "bulk": total_hashes > 1,
+            "bulk_parts": len(jobs),
+            "chunk_size": MOVE_BULK_MAX_HASHES,
+        })
     payload = enrich_bulk_payload(profile, action_name, data)
     job_id = enqueue(action_name, profile["id"], payload)
     return ok({"job_id": job_id, "hash_count": len(payload.get("hashes") or []), "bulk": len(payload.get("hashes") or []) > 1})
@@ -493,6 +681,8 @@ def system_status():
         status["ram"] = psutil.virtual_memory().percent
         status["usage_source"] = "local"
         status["usage_available"] = True
+        # Note: The REST status returns the latest peak records without waiting for the next Socket.IO message.
+        status["speed_peaks"] = speed_peaks.record(profile["id"], status.get("down_rate", 0), status.get("up_rate", 0))
         return ok({"status": status})
     except Exception as exc:
         return jsonify({"ok": False, "error": str(exc)})
@@ -534,6 +724,11 @@ def app_status():
         status["scgi"] = rtorrent.scgi_diagnostics(profile)
     except Exception as exc:
         status["scgi"] = {"ok": False, "error": str(exc), "url": profile.get("scgi_url")}
+    try:
+        # Note: The diagnostics panel shows the same DL/UL peak records as the footer.
+        status["speed_peaks"] = speed_peaks.current(profile["id"])
+    except Exception as exc:
+        status["speed_peaks"] = {"error": str(exc)}
     try:
         prefs = preferences.get_preferences()
         status["port_check"] = {"status": "disabled", "enabled": False} if not bool((prefs or {}).get("port_check_enabled")) else port_check_status(force=False)
@@ -566,8 +761,12 @@ def jobs_list():
 
 @bp.post("/jobs/clear")
 def jobs_clear():
+    if str(request.args.get("force") or "").lower() in {"1", "true", "yes"}:
+        # Note: Emergency cleanup keeps the endpoint behavior unchanged, while force=1 enables rescue mode.
+        deleted = emergency_clear_jobs()
+        return ok({"deleted": deleted, "emergency": True})
     deleted = clear_jobs()
-    return ok({"deleted": deleted})
+    return ok({"deleted": deleted, "emergency": False})
 
 
 @bp.get("/cleanup/summary")
@@ -593,6 +792,19 @@ def cleanup_smart_queue():
     return ok({"deleted": deleted, "cleanup": cleanup_summary()})
 
 
+@bp.post("/cleanup/automations")
+def cleanup_automations():
+    with connect() as conn:
+        exists = conn.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='automation_history'").fetchone()
+        if not exists:
+            deleted = 0
+        else:
+            # Note: Cleanup panel removes only automation logs, not saved automation rules.
+            cur = conn.execute("DELETE FROM automation_history")
+            deleted = int(cur.rowcount or 0)
+    return ok({"deleted": deleted, "cleanup": cleanup_summary()})
+
+
 @bp.post("/cleanup/all")
 def cleanup_all():
     deleted_jobs = clear_jobs()
@bp.post("/cleanup/all")
|
@bp.post("/cleanup/all")
|
||||||
def cleanup_all():
|
def cleanup_all():
|
||||||
deleted_jobs = clear_jobs()
|
deleted_jobs = clear_jobs()
|
||||||
@@ -603,18 +815,26 @@ def cleanup_all():
|
|||||||
else:
|
else:
|
||||||
cur = conn.execute("DELETE FROM smart_queue_history")
|
cur = conn.execute("DELETE FROM smart_queue_history")
|
||||||
deleted_smart = int(cur.rowcount or 0)
|
deleted_smart = int(cur.rowcount or 0)
|
||||||
return ok({"deleted": {"jobs": deleted_jobs, "smart_queue_history": deleted_smart}, "cleanup": cleanup_summary()})
|
exists_auto = conn.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='automation_history'").fetchone()
|
||||||
|
if not exists_auto:
|
||||||
|
deleted_auto = 0
|
||||||
|
else:
|
||||||
|
cur = conn.execute("DELETE FROM automation_history")
|
||||||
|
deleted_auto = int(cur.rowcount or 0)
|
||||||
|
return ok({"deleted": {"jobs": deleted_jobs, "smart_queue_history": deleted_smart, "automation_history": deleted_auto}, "cleanup": cleanup_summary()})
|
||||||
|
|
||||||
|
|
||||||
@bp.post("/jobs/<job_id>/cancel")
|
@bp.post("/jobs/<job_id>/cancel")
|
||||||
def jobs_cancel(job_id: str):
|
def jobs_cancel(job_id: str):
|
||||||
|
require_profile_write(_job_profile_id(job_id))
|
||||||
if not cancel_job(job_id):
|
if not cancel_job(job_id):
|
||||||
return jsonify({"ok": False, "error": "Only pending or failed jobs can be cancelled"}), 400
|
return jsonify({"ok": False, "error": "Only unfinished jobs can be cancelled"}), 400
|
||||||
return ok()
|
return ok({"emergency": True})
|
||||||
|
|
||||||
|
|
||||||
@bp.post("/jobs/<job_id>/retry")
|
@bp.post("/jobs/<job_id>/retry")
|
||||||
def jobs_retry(job_id: str):
|
def jobs_retry(job_id: str):
|
||||||
|
require_profile_write(_job_profile_id(job_id))
|
||||||
if not retry_job(job_id):
|
if not retry_job(job_id):
|
||||||
return jsonify({"ok": False, "error": "Only failed or cancelled jobs can be retried"}), 400
|
return jsonify({"ok": False, "error": "Only failed or cancelled jobs can be retried"}), 400
|
||||||
return ok()
|
return ok()
|
||||||
@@ -832,7 +1052,11 @@ def smart_queue_check():
     if not profile:
         return ok({'result': {'ok': False, 'error': 'No profile'}})
     try:
-        return ok({'result': smart_queue.check(profile, force=True)})
+        result = smart_queue.check(profile, force=True)
+        # Note: Manual check immediately returns a fresh snapshot so the UI shows the real Downloading count after the action.
+        diff = torrent_cache.refresh(profile)
+        rows = torrent_cache.snapshot(profile['id'])
+        return ok({'result': result, 'torrent_patch': {**diff, 'summary': cached_summary(profile['id'], rows, force=True)}})
     except Exception as exc:
         return jsonify({'ok': False, 'error': str(exc)}), 500
 
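The `torrent_patch` payload above merges the cache diff with a freshly computed summary via dict unpacking. A minimal sketch of that merge (the values are illustrative, not the real cache shapes):

```python
def build_patch(diff: dict, summary: dict) -> dict:
    # {**diff, "summary": summary} shallow-copies the diff and appends the
    # summary under its own key, without mutating the original diff dict.
    return {**diff, "summary": summary}

diff = {"added": [], "removed": ["abc123"]}
patch = build_patch(diff, {"downloading": 3})
print(patch)  # → {'added': [], 'removed': ['abc123'], 'summary': {'downloading': 3}}
```

Because the merge copies rather than mutates, the cached diff object stays reusable after the response is built.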
@@ -882,6 +1106,36 @@ def automations_get():
     return jsonify({'ok': False, 'error': str(exc), 'rules': [], 'history': []}), 500
 
 
+@bp.get('/automations/export')
+def automations_export():
+    from ..services import automation_rules
+    profile = preferences.active_profile()
+    if not profile:
+        return jsonify({'ok': False, 'error': 'No profile'}), 400
+    try:
+        # Note: JSON export is profile-scoped and excludes execution history/cooldown state.
+        data = automation_rules.export_rules(profile['id'])
+        return ok({'export': data, 'count': len(data.get('rules') or [])})
+    except Exception as exc:
+        return jsonify({'ok': False, 'error': str(exc)}), 400
+
+
+@bp.post('/automations/import')
+def automations_import():
+    from ..services import automation_rules
+    profile = preferences.active_profile()
+    if not profile:
+        return jsonify({'ok': False, 'error': 'No profile'}), 400
+    try:
+        payload = request.get_json(silent=True) or {}
+        replace = str(request.args.get('replace') or '').lower() in {'1', 'true', 'yes'} or bool(payload.get('replace')) if isinstance(payload, dict) else False
+        # Note: Import appends rules by default, so existing automations remain untouched.
+        imported = automation_rules.import_rules(profile['id'], payload, replace=replace)
+        return ok({'imported': len(imported), 'rules': automation_rules.list_rules(profile['id'])})
+    except Exception as exc:
+        return jsonify({'ok': False, 'error': str(exc)}), 400
+
+
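Per the note above, import appends by default and only overwrites when `replace` is set. A standalone sketch of those semantics, with a plain list standing in for the persisted rule store (the real `automation_rules.import_rules` writes to the database, so this is purely illustrative):

```python
def import_rules(existing: list[dict], payload: dict, replace: bool = False) -> list[dict]:
    incoming = list(payload.get("rules") or [])
    # replace=True swaps the rule set wholesale; the default keeps existing rules intact.
    return incoming if replace else existing + incoming

rules = [{"name": "old"}]
print(import_rules(rules, {"rules": [{"name": "new"}]}))
# → [{'name': 'old'}, {'name': 'new'}]
print(import_rules(rules, {"rules": [{"name": "new"}]}, replace=True))
# → [{'name': 'new'}]
```

Defaulting to append makes a re-imported export idempotent in effect only when combined with `replace=1`; without it, duplicates are possible but nothing is lost.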
 @bp.post('/automations')
 def automations_save():
     from ..services import automation_rules
@@ -918,3 +1172,17 @@ def automations_check():
         return ok({'result': automation_rules.check(profile, force=True), 'history': automation_rules.list_history(profile['id'])})
     except Exception as exc:
         return jsonify({'ok': False, 'error': str(exc)}), 500
+
+
+@bp.delete('/automations/history')
+def automations_history_clear():
+    from ..services import automation_rules
+    profile = preferences.active_profile()
+    if not profile:
+        return jsonify({'ok': False, 'error': 'No profile'}), 400
+    try:
+        # Note: Clear only automation execution logs; rules and cooldown state stay unchanged.
+        deleted = automation_rules.clear_history(profile['id'])
+        return ok({'deleted': deleted, 'history': automation_rules.list_history(profile['id']), 'cleanup': cleanup_summary()})
+    except Exception as exc:
+        return jsonify({'ok': False, 'error': str(exc)}), 500
@@ -1,11 +1,53 @@
 from __future__ import annotations
 
-from flask import Blueprint, render_template, jsonify, Response
-from ..services.preferences import get_preferences, list_profiles, active_profile, BOOTSTRAP_THEMES, FONT_FAMILIES, bootstrap_css_url
+from flask import Blueprint, render_template, jsonify, Response, request, redirect, url_for, abort
+from ..services.preferences import get_preferences, list_profiles, active_profile, BOOTSTRAP_THEMES, FONT_FAMILIES
+from ..services import auth
+from ..services.frontend_assets import asset_path
+
+# for favicon
+from flask import current_app, send_from_directory
 
 bp = Blueprint("main", __name__)
 
 
+def _asset_url(key: str) -> str:
+    path = asset_path(key)
+    return path if path.startswith("http") else url_for("static", filename=path)
+
+
+@bp.get("/favicon.ico")
+def favicon_ico():
+    response = send_from_directory(
+        current_app.static_folder,
+        "favicon.svg",
+        mimetype="image/svg+xml",
+    )
+    return response
+
+
+@bp.route("/login", methods=["GET", "POST"])
+def login():
+    # Note: When optional authentication is disabled, /login is intentionally unavailable.
+    if not auth.enabled():
+        abort(404)
+    error = ""
+    if request.method == "POST":
+        user = auth.login_user(request.form.get("username", ""), request.form.get("password", ""))
+        if user:
+            return redirect(request.args.get("next") or url_for("main.index"))
+        error = "Invalid username or password"
+    return render_template("login.html", error=error)
+
+
+@bp.get("/logout")
+def logout():
+    auth.logout_user()
+    if not auth.enabled():
+        return redirect(url_for("main.index"))
+    return redirect(url_for("main.login"))
+
+
 @bp.get("/")
 def index():
     prefs = get_preferences()
@@ -16,13 +58,14 @@ def index():
         active_profile=active_profile(),
         bootstrap_themes=BOOTSTRAP_THEMES,
         font_families=FONT_FAMILIES,
-        bootstrap_css_url=bootstrap_css_url((prefs or {}).get("bootstrap_theme")),
+        auth_enabled=auth.enabled(),
+        current_user=auth.current_user(),
     )
 
 
 @bp.get("/docs")
 def docs():
-    html = """<!doctype html><html lang=\"en\"><head><meta charset=\"utf-8\"><meta name=\"viewport\" content=\"width=device-width, initial-scale=1\"><title>pyTorrent API Docs</title><link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui.css\"></head><body><div id=\"swagger-ui\"></div><script src=\"https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui-bundle.js\"></script><script>window.onload=()=>SwaggerUIBundle({url:'/api/openapi.json',dom_id:'#swagger-ui',deepLinking:true,persistAuthorization:true});</script></body></html>"""
+    html = f"""<!doctype html><html lang="en"><head><meta charset="utf-8"><meta name="viewport" content="width=device-width, initial-scale=1"><title>pyTorrent API Docs</title><link rel="stylesheet" href="{_asset_url('swagger_css')}"></head><body><div id="swagger-ui"></div><script src="{_asset_url('swagger_js')}"></script><script>window.onload=()=>SwaggerUIBundle({{url:'/api/openapi.json',dom_id:'#swagger-ui',deepLinking:true,persistAuthorization:true}});</script></body></html>"""
     return Response(html, mimetype="text/html")
 
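The `_asset_url` helper above dispatches between a pinned remote URL and a file served from the app's static folder. A standalone sketch of the same dispatch without Flask's `url_for` (the static prefix is hardcoded here, so this is illustrative only):

```python
def asset_url(path: str) -> str:
    # Remote assets (e.g. pinned CDN URLs) pass through untouched;
    # anything else is treated as a file path under /static/.
    return path if path.startswith("http") else "/static/" + path

print(asset_url("https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui.css"))
# → https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui.css
print(asset_url("vendor/swagger-ui.css"))
# → /static/vendor/swagger-ui.css
```

This lets a deployment swap the Swagger assets from CDN to vendored copies by changing only what `asset_path` returns, with no template edits.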
@@ -55,7 +98,7 @@ def openapi():
             },
         },
         "/api/torrents": {"get": {"summary": "Get cached torrent snapshot", "responses": {"200": {"description": "Torrent list"}}}},
-        "/api/torrents/{action_name}": {"post": {"summary": "Queue torrent action", "description": "For move, path is the target directory; move_data=true physically moves data on the rTorrent host using a detached shell move with status polling, force-overwrites an existing destination, tolerates rTorrent execute timeouts around mkdir/start/polling, handles retries after a partially completed move, avoids SCGI timeout on long mv operations, and recheck defaults to move_data. Move and remove jobs are ordered per profile, so a later remove waits for earlier move/remove jobs to finish.", "parameters": [{"name": "action_name", "in": "path", "required": True, "schema": {"type": "string", "enum": ["start", "pause", "stop", "resume", "recheck", "remove", "move", "set_label", "set_ratio_group"]}}], "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"hashes": {"type": "array", "items": {"type": "string"}}, "path": {"type": "string", "description": "Target directory for move"}, "move_data": {"type": "boolean", "description": "Physically move data before setting torrent directory"}, "recheck": {"type": "boolean", "description": "Run hash check after physical move; defaults to move_data"}, "label": {"type": "string"}, "ratio_group": {"type": "string"}, "remove_data": {"type": "boolean"}}}}}}, "responses": {"200": {"description": "Job queued"}}}},
+        "/api/torrents/{action_name}": {"post": {"summary": "Queue torrent action", "description": "For move, path is the target directory; move_data=true physically moves data on the rTorrent host using a detached shell move with status polling, force-overwrites an existing destination, tolerates rTorrent execute timeouts around mkdir/start/polling, handles retries after a partially completed move, avoids SCGI timeout on long mv operations, and recheck defaults to move_data. Large move selections are split into ordered bulk parts of up to 100 hashes. Move and remove jobs are ordered per profile, so a later remove waits for earlier move/remove jobs to finish.", "parameters": [{"name": "action_name", "in": "path", "required": True, "schema": {"type": "string", "enum": ["start", "pause", "stop", "resume", "recheck", "remove", "move", "set_label", "set_ratio_group"]}}], "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"hashes": {"type": "array", "items": {"type": "string"}}, "path": {"type": "string", "description": "Target directory for move"}, "move_data": {"type": "boolean", "description": "Physically move data before setting torrent directory"}, "recheck": {"type": "boolean", "description": "Run hash check after physical move; defaults to move_data"}, "label": {"type": "string"}, "ratio_group": {"type": "string"}, "remove_data": {"type": "boolean"}}}}}}, "responses": {"200": {"description": "Job queued"}}}},
         "/api/torrents/add": {"post": {"summary": "Add magnet links or torrent files", "requestBody": {"content": {"multipart/form-data": {"schema": {"type": "object", "properties": {"uris": {"type": "string"}, "directory": {"type": "string"}, "label": {"type": "string"}, "start": {"type": "boolean"}, "files": {"type": "array", "items": {"type": "string", "format": "binary"}}}}}, "application/json": {"schema": {"type": "object"}}}}, "responses": {"200": {"description": "Jobs queued"}}}},
         "/api/torrents/{torrent_hash}/files": {"get": {"summary": "Torrent files", "parameters": [{"name": "torrent_hash", "in": "path", "required": True, "schema": {"type": "string"}}], "responses": {"200": {"description": "Files"}}}},
         "/api/torrents/{torrent_hash}/peers": {"get": {"summary": "Torrent peers with GeoIP", "parameters": [{"name": "torrent_hash", "in": "path", "required": True, "schema": {"type": "string"}}], "responses": {"200": {"description": "Peers"}}}},
@@ -74,16 +117,21 @@ def openapi():
         "/api/rss/feeds": {"post": {"summary": "Add RSS feed", "requestBody": {"content": {"application/json": {"schema": {"type": "object"}}}}, "responses": {"200": {"description": "RSS config"}}}},
         "/api/rss/rules": {"post": {"summary": "Add RSS rule", "requestBody": {"content": {"application/json": {"schema": {"type": "object"}}}}, "responses": {"200": {"description": "RSS config"}}}},
         "/api/rss/check": {"post": {"summary": "Manually check RSS feeds", "responses": {"200": {"description": "Queued matches"}}}},
-        "/api/smart-queue": {"get": {"summary": "Get Smart Queue settings, exceptions and history", "parameters": [{"name": "history_limit", "in": "query", "schema": {"type": "integer", "default": 10, "minimum": 1, "maximum": 100}, "description": "Number of Smart Queue history rows to return"}], "responses": {"200": {"description": "Smart Queue config with history and history_total"}}}, "post": {"summary": "Save Smart Queue settings", "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"enabled": {"type": "boolean"}, "max_active_downloads": {"type": "integer"}, "stalled_seconds": {"type": "integer"}, "min_speed_bytes": {"type": "integer"}, "min_seeds": {"type": "integer"}}}}}}, "responses": {"200": {"description": "Saved"}}}},
+        "/api/smart-queue": {"get": {"summary": "Get Smart Queue settings, exceptions and history", "parameters": [{"name": "history_limit", "in": "query", "schema": {"type": "integer", "default": 10, "minimum": 1, "maximum": 100}, "description": "Number of Smart Queue history rows to return"}], "responses": {"200": {"description": "Smart Queue config with history and history_total"}}}, "post": {"summary": "Save Smart Queue settings", "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"enabled": {"type": "boolean"}, "max_active_downloads": {"type": "integer"}, "stalled_seconds": {"type": "integer"}, "min_speed_bytes": {"type": "integer"}, "min_seeds": {"type": "integer"}, "min_peers": {"type": "integer"}, "ignore_seed_peer": {"type": "boolean"}, "ignore_speed": {"type": "boolean"}}}}}}, "responses": {"200": {"description": "Saved"}}}},
         "/api/smart-queue/check": {"post": {"summary": "Run Smart Queue immediately", "responses": {"200": {"description": "Smart Queue action result"}}}},
         "/api/smart-queue/exclusion": {"post": {"summary": "Add or remove a torrent Smart Queue exception", "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"hash": {"type": "string"}, "excluded": {"type": "boolean"}, "reason": {"type": "string"}}}}}}, "responses": {"200": {"description": "Exception list"}}}},
         "/api/traffic/history": {"get": {"summary": "Transfer history for charts", "parameters": [{"name": "range", "in": "query", "schema": {"type": "string", "enum": ["15m", "1h", "3h", "6h", "24h", "7d", "30d", "90d"]}}], "responses": {"200": {"description": "Aggregated traffic history"}}}}
     }
     paths.update({
+        "/api/auth/login": {"post": {"summary": "Log in with username and password when authentication is enabled", "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"username": {"type": "string"}, "password": {"type": "string", "format": "password"}}, "required": ["username", "password"]}}}}, "responses": {"200": {"description": "Logged in"}, "401": {"description": "Invalid credentials"}, "404": {"description": "Authentication disabled"}}}},
+        "/api/auth/me": {"get": {"summary": "Return current authenticated user", "responses": {"200": {"description": "Current user"}, "404": {"description": "Authentication disabled"}}}},
+        "/api/auth/logout": {"post": {"summary": "Log out current user", "responses": {"200": {"description": "Logged out"}, "404": {"description": "Authentication disabled"}}}},
+        "/api/auth/users": {"get": {"summary": "List users, admin only", "responses": {"200": {"description": "Users"}, "403": {"description": "Admin only"}}}, "post": {"summary": "Create user, admin only", "requestBody": {"content": {"application/json": {"schema": {"$ref": "#/components/schemas/AuthUserInput"}}}}, "responses": {"200": {"description": "User created"}, "403": {"description": "Admin only"}}}},
+        "/api/auth/users/{user_id}": {"put": {"summary": "Update user, admin only", "parameters": [{"name": "user_id", "in": "path", "required": True, "schema": {"type": "integer"}}], "requestBody": {"content": {"application/json": {"schema": {"$ref": "#/components/schemas/AuthUserInput"}}}}, "responses": {"200": {"description": "User updated"}, "403": {"description": "Admin only"}}}, "delete": {"summary": "Delete user, admin only", "parameters": [{"name": "user_id", "in": "path", "required": True, "schema": {"type": "integer"}}], "responses": {"200": {"description": "User deleted"}, "403": {"description": "Admin only"}}}},
         "/api/profiles/{profile_id}": {"delete": {"summary": "Delete rTorrent profile", "parameters": [{"name": "profile_id", "in": "path", "required": True, "schema": {"type": "integer"}}], "responses": {"200": {"description": "Deleted"}}}},
+        "/api/torrent-stats": {"get": {"summary": "Torrent statistics and cached file metadata", "parameters": [{"name": "force", "in": "query", "schema": {"type": "boolean", "default": False}}], "responses": {"200": {"description": "Torrent statistics"}}}},
         "/api/path/default": {"get": {"summary": "Read active rTorrent default download path", "responses": {"200": {"description": "Default path"}}}},
         "/api/torrents/{torrent_hash}/files/priority": {"post": {"summary": "Set file priorities", "parameters": [{"name": "torrent_hash", "in": "path", "required": True, "schema": {"type": "string"}}], "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"files": {"type": "array", "items": {"type": "object", "properties": {"index": {"type": "integer"}, "priority": {"type": "integer", "enum": [0, 1, 2]}}}}}}}}}, "responses": {"200": {"description": "Updated priorities"}, "207": {"description": "Partial update"}}}},
-        "/api/torrents/{torrent_hash}/peers/action": {"post": {"summary": "Run peer action", "parameters": [{"name": "torrent_hash", "in": "path", "required": True, "schema": {"type": "string"}}], "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"peer_index": {"type": "integer"}, "action": {"type": "string", "enum": ["disconnect", "kick", "snub", "unsnub", "ban"]}}}}}}, "responses": {"200": {"description": "Peer action result"}}}},
         "/api/labels/{label_id}": {"delete": {"summary": "Delete saved label", "parameters": [{"name": "label_id", "in": "path", "required": True, "schema": {"type": "integer"}}], "responses": {"200": {"description": "Labels"}}}},
         "/api/rtorrent-config": {"get": {"summary": "Read supported rTorrent config fields", "responses": {"200": {"description": "Config fields"}}}, "post": {"summary": "Save supported rTorrent config fields", "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"values": {"type": "object"}}}}}}, "responses": {"200": {"description": "Save result"}}}},
         "/api/rtorrent-config/generate": {"post": {"summary": "Generate rTorrent config text from provided values", "requestBody": {"content": {"application/json": {"schema": {"type": "object", "properties": {"values": {"type": "object"}}}}}}, "responses": {"200": {"description": "Generated config text"}}}},
@@ -98,6 +146,17 @@ def openapi():
             "properties": {"ok": {"type": "boolean"}},
             "required": ["ok"],
         },
+        "AuthUserInput": {
+            "type": "object",
+            "properties": {
+                "username": {"type": "string"},
+                "password": {"type": "string", "format": "password", "description": "Optional on update"},
+                "role": {"type": "string", "enum": ["admin", "user"]},
+                "is_active": {"type": "boolean"},
+                "permissions": {"type": "array", "items": {"type": "object", "properties": {"profile_id": {"type": "integer", "description": "0 means all profiles"}, "access_level": {"type": "string", "enum": ["ro", "full"]}}}},
+            },
+            "required": ["username"],
+        },
         "Profile": {
             "type": "object",
             "additionalProperties": True,
@@ -278,4 +337,9 @@ def openapi():
         },
     })
 
-    return jsonify({"openapi": "3.0.3", "info": {"title": "pyTorrent API", "version": "0.2.0"}, "paths": paths, "components": components})
+    components.setdefault("securitySchemes", {})["sessionCookie"] = {"type": "apiKey", "in": "cookie", "name": "session"}
+    for path, methods in paths.items():
+        if path != "/api/auth/login":
+            for operation in methods.values():
+                operation.setdefault("security", [{"sessionCookie": []}])
+    return jsonify({"openapi": "3.0.3", "info": {"title": "pyTorrent API", "version": "0.0.1"}, "paths": paths, "components": components})
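The new loop stamps a default cookie-session security requirement onto every OpenAPI operation except the login endpoint, using `setdefault` so operations that already declare their own security are left alone. A standalone sketch of that pass over a miniature paths dict:

```python
def apply_default_security(paths: dict, exempt: str = "/api/auth/login") -> dict:
    # setdefault only fills in "security" where an operation has not set it,
    # so explicit per-operation overrides (including an empty list) survive.
    for path, methods in paths.items():
        if path != exempt:
            for operation in methods.values():
                operation.setdefault("security", [{"sessionCookie": []}])
    return paths

paths = {
    "/api/auth/login": {"post": {}},
    "/api/torrents": {"get": {}},
}
apply_default_security(paths)
print(paths["/api/torrents"]["get"]["security"])   # → [{'sessionCookie': []}]
print("security" in paths["/api/auth/login"]["post"])  # → False
```

Leaving the login operation without a `security` entry is what lets Swagger UI call it unauthenticated to obtain the session cookie in the first place.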
344
pytorrent/services/auth.py
Normal file
344
pytorrent/services/auth.py
Normal file
@@ -0,0 +1,344 @@
|
|||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
from functools import wraps
|
||||||
|
from typing import Any
|
||||||
|
from urllib.parse import urlparse
|
||||||
|
|
||||||
|
from flask import abort, jsonify, redirect, request, session, url_for
|
||||||
|
from werkzeug.security import check_password_hash, generate_password_hash
|
||||||
|
|
||||||
|
from ..config import AUTH_ENABLE
|
||||||
|
from ..db import connect, default_user_id, utcnow
|
||||||
|
|
||||||
|
PUBLIC_ENDPOINTS = {"main.login", "main.logout", "api.auth_login", "api.auth_me", "static"}
|
||||||
|
RTORRENT_WRITE_PREFIXES = (
|
||||||
|
"/api/torrents/",
|
||||||
|
"/api/speed/limits",
|
||||||
|
"/api/labels",
|
||||||
|
"/api/ratio-groups",
|
||||||
|
"/api/rss",
|
||||||
|
"/api/smart-queue",
|
||||||
|
"/api/automations",
|
||||||
|
"/api/jobs",
|
||||||
|
)
|
||||||
|
RTORRENT_CONFIG_PREFIXES = ("/api/rtorrent-config",)
|
||||||
|
ADMIN_PREFIXES = ("/api/auth/users", "/api/profiles")
|
||||||
|
# Note: API reads that expose rTorrent/profile data must also respect profile permissions.
|
||||||
|
PROFILE_READ_PREFIXES = (
|
||||||
|
"/api/torrents",
|
||||||
|
"/api/torrent-stats",
|
||||||
|
"/api/system/status",
|
||||||
|
"/api/app/status",
|
||||||
|
"/api/port-check",
|
||||||
|
"/api/path",
|
||||||
|
"/api/labels",
|
||||||
|
"/api/ratio-groups",
|
||||||
|
"/api/rss",
|
||||||
|
"/api/rtorrent-config",
|
||||||
|
"/api/smart-queue",
|
||||||
|
"/api/traffic/history",
|
||||||
|
"/api/automations",
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def enabled() -> bool:
|
||||||
|
return bool(AUTH_ENABLE)
|
||||||
|
|
||||||
|
|
||||||
|
def password_hash(password: str) -> str:
|
||||||
|
return generate_password_hash(password or "")
|
||||||
|
|
||||||
|
|
||||||
|
def current_user_id() -> int:
|
||||||
|
if not enabled():
|
||||||
|
return default_user_id()
|
||||||
|
try:
|
||||||
|
return int(session.get("user_id") or 0)
|
||||||
|
except Exception:
|
||||||
|
return 0
|
||||||
|
|
||||||
|
|
||||||
|
def current_user() -> dict[str, Any] | None:
|
||||||
|
uid = current_user_id()
|
||||||
|
if not uid:
|
||||||
|
return None
|
||||||
|
with connect() as conn:
|
||||||
|
return conn.execute(
|
||||||
|
"SELECT id, username, role, is_active, created_at, updated_at FROM users WHERE id=?",
|
||||||
|
(uid,),
|
||||||
|
).fetchone()
|
||||||
|
|
||||||
|
|
||||||
|
def is_admin(user: dict[str, Any] | None = None) -> bool:
|
||||||
|
if not enabled():
|
||||||
|
return True
|
||||||
|
user = user or current_user()
|
||||||
|
return bool(user and user.get("role") == "admin" and int(user.get("is_active") or 0))
|
||||||
|
|
||||||
|
|
||||||
|
def _permissions(user_id: int | None = None) -> list[dict[str, Any]]:
|
||||||
|
if not enabled():
|
||||||
|
return [{"profile_id": 0, "access_level": "full"}]
|
||||||
|
uid = user_id or current_user_id()
|
||||||
|
if not uid:
|
||||||
|
return []
|
||||||
|
with connect() as conn:
|
||||||
|
return conn.execute(
|
||||||
|
"SELECT profile_id, access_level FROM user_profile_permissions WHERE user_id=?",
|
||||||
|
(uid,),
|
||||||
|
).fetchall()
|
||||||
|
|
||||||
|
|
||||||
|
def can_access_profile(profile_id: int | None, user_id: int | None = None) -> bool:
|
||||||
|
if not enabled():
|
||||||
|
return True
|
||||||
|
uid = user_id or current_user_id()
|
||||||
|
if not uid:
|
||||||
|
return False
|
||||||
|
with connect() as conn:
|
||||||
|
user = conn.execute("SELECT role, is_active FROM users WHERE id=?", (uid,)).fetchone()
|
||||||
|
if not user or not int(user.get("is_active") or 0):
|
||||||
|
return False
|
||||||
|
if user.get("role") == "admin":
|
||||||
|
return True
|
||||||
|
pid = int(profile_id or 0)
|
||||||
|
row = conn.execute(
|
||||||
|
"SELECT 1 FROM user_profile_permissions WHERE user_id=? AND (profile_id=0 OR profile_id=?) LIMIT 1",
|
||||||
|
(uid, pid),
|
||||||
|
).fetchone()
|
||||||
|
return bool(row)
|
||||||
|
|
||||||
|
|
||||||
|
def can_write_profile(profile_id: int | None, user_id: int | None = None) -> bool:
    if not enabled():
        return True
    uid = user_id or current_user_id()
    if not uid:
        return False
    with connect() as conn:
        user = conn.execute("SELECT role, is_active FROM users WHERE id=?", (uid,)).fetchone()
        if not user or not int(user.get("is_active") or 0):
            return False
        if user.get("role") == "admin":
            return True
        pid = int(profile_id or 0)
        row = conn.execute(
            "SELECT access_level FROM user_profile_permissions WHERE user_id=? AND (profile_id=0 OR profile_id=?) ORDER BY profile_id DESC LIMIT 1",
            (uid, pid),
        ).fetchone()
        return bool(row and row.get("access_level") == "full")


def visible_profile_ids(user_id: int | None = None) -> set[int] | None:
    if not enabled():
        return None
    uid = user_id or current_user_id()
    if not uid:
        return set()
    with connect() as conn:
        user = conn.execute("SELECT role, is_active FROM users WHERE id=?", (uid,)).fetchone()
        if not user or not int(user.get("is_active") or 0):
            return set()
        if user.get("role") == "admin":
            return None
        rows = conn.execute("SELECT profile_id FROM user_profile_permissions WHERE user_id=?", (uid,)).fetchall()
        if any(int(row.get("profile_id") or 0) == 0 for row in rows):
            return None
        return {int(row.get("profile_id") or 0) for row in rows}


def same_origin_request() -> bool:
    """Return False only when an unsafe request clearly comes from another origin."""
    origin = request.headers.get("Origin") or request.headers.get("Referer")
    if not origin:
        return True
    try:
        parsed = urlparse(origin)
        return parsed.scheme == request.scheme and parsed.netloc == request.host
    except Exception:
        return False


def writable_profile_ids(user_id: int | None = None) -> set[int] | None:
    if not enabled():
        return None
    uid = user_id or current_user_id()
    if not uid:
        return set()
    with connect() as conn:
        user = conn.execute("SELECT role, is_active FROM users WHERE id=?", (uid,)).fetchone()
        if not user or not int(user.get("is_active") or 0):
            return set()
        if user.get("role") == "admin":
            return None
        rows = conn.execute("SELECT profile_id FROM user_profile_permissions WHERE user_id=? AND access_level='full'", (uid,)).fetchall()
        if any(int(row.get("profile_id") or 0) == 0 for row in rows):
            return None
        return {int(row.get("profile_id") or 0) for row in rows}


def require_admin() -> None:
    if enabled() and not is_admin():
        abort(403)


def require_profile_read(profile_id: int | None) -> None:
    if enabled() and not can_access_profile(profile_id):
        abort(403)


def require_profile_write(profile_id: int | None) -> None:
    if enabled() and not can_write_profile(profile_id):
        abort(403)


def login_user(username: str, password: str) -> dict[str, Any] | None:
    if not enabled():
        return {"id": default_user_id(), "username": "default", "role": "admin", "is_active": 1}
    with connect() as conn:
        user = conn.execute("SELECT * FROM users WHERE username=?", (username.strip(),)).fetchone()
    if not user or not int(user.get("is_active") or 0):
        return None
    if not user.get("password_hash") or not check_password_hash(user.get("password_hash"), password or ""):
        return None
    session.clear()
    session["user_id"] = int(user["id"])
    session["username"] = user["username"]
    session["role"] = user.get("role") or "user"
    return current_user()


def logout_user() -> None:
    session.clear()


def ensure_admin_user() -> None:
    if not enabled():
        return
    now = utcnow()
    with connect() as conn:
        row = conn.execute("SELECT id FROM users WHERE username='admin'").fetchone()
        if not row:
            conn.execute(
                "INSERT INTO users(username,password_hash,role,is_active,created_at,updated_at) VALUES(?,?,?,?,?,?)",
                ("admin", password_hash("admin"), "admin", 1, now, now),
            )
        else:
            conn.execute("UPDATE users SET role='admin', is_active=1, updated_at=? WHERE username='admin'", (now,))


def list_users() -> list[dict[str, Any]]:
    require_admin()
    with connect() as conn:
        users = conn.execute(
            "SELECT id, username, role, is_active, created_at, updated_at FROM users ORDER BY username COLLATE NOCASE"
        ).fetchall()
        perms = conn.execute(
            "SELECT user_id, profile_id, access_level FROM user_profile_permissions ORDER BY user_id, profile_id"
        ).fetchall()
    by_user: dict[int, list[dict[str, Any]]] = {}
    for perm in perms:
        by_user.setdefault(int(perm["user_id"]), []).append({
            "profile_id": int(perm.get("profile_id") or 0),
            "access_level": perm.get("access_level") or "ro",
        })
    for user in users:
        user["permissions"] = by_user.get(int(user["id"]), [])
    return users


def save_user(data: dict[str, Any], user_id: int | None = None) -> dict[str, Any]:
    require_admin()
    now = utcnow()
    username = str(data.get("username") or "").strip()
    role = "admin" if data.get("role") == "admin" else "user"
    is_active = 1 if data.get("is_active", True) else 0
    if not username:
        raise ValueError("Username is required")
    with connect() as conn:
        if user_id:
            row = conn.execute("SELECT id FROM users WHERE id=?", (user_id,)).fetchone()
            if not row:
                raise ValueError("User does not exist")
            conn.execute(
                "UPDATE users SET username=?, role=?, is_active=?, updated_at=? WHERE id=?",
                (username, role, is_active, now, user_id),
            )
        else:
            cur = conn.execute(
                "INSERT INTO users(username,password_hash,role,is_active,created_at,updated_at) VALUES(?,?,?,?,?,?)",
                (username, password_hash(str(data.get("password") or username)), role, is_active, now, now),
            )
            user_id = int(cur.lastrowid)
        if data.get("password"):
            conn.execute("UPDATE users SET password_hash=?, updated_at=? WHERE id=?", (password_hash(str(data.get("password"))), now, user_id))
        if role != "admin":
            conn.execute("DELETE FROM user_profile_permissions WHERE user_id=?", (user_id,))
            for item in data.get("permissions") or []:
                profile_id = int(item.get("profile_id") or 0)
                access = "full" if item.get("access_level") == "full" else "ro"
                conn.execute(
                    "INSERT OR REPLACE INTO user_profile_permissions(user_id,profile_id,access_level,created_at,updated_at) VALUES(?,?,?,?,?)",
                    (user_id, profile_id, access, now, now),
                )
        else:
            conn.execute("DELETE FROM user_profile_permissions WHERE user_id=?", (user_id,))
        return conn.execute("SELECT id, username, role, is_active, created_at, updated_at FROM users WHERE id=?", (user_id,)).fetchone()


def delete_user(user_id: int) -> None:
    require_admin()
    if int(user_id) == current_user_id():
        raise ValueError("Cannot delete current user")
    with connect() as conn:
        conn.execute("DELETE FROM user_profile_permissions WHERE user_id=?", (user_id,))
        conn.execute("DELETE FROM users WHERE id=? AND username <> 'admin'", (user_id,))


def install_guards(app) -> None:
    @app.before_request
    def _auth_guard():
        if not enabled():
            return None
        endpoint = request.endpoint or ""
        if endpoint in PUBLIC_ENDPOINTS or endpoint.startswith("static"):
            return None
        if not current_user_id():
            if request.path.startswith("/api/"):
                return jsonify({"ok": False, "error": "Authentication required"}), 401
            return redirect(url_for("main.login", next=request.full_path if request.query_string else request.path))
        user = current_user()
        if not user or not int(user.get("is_active") or 0):
            logout_user()
            # Split into explicit branches: a conditional expression after the tuple
            # would bind as `(jsonify(...), 401 if ... else redirect(...))` and return
            # a (response, redirect) tuple for browser requests.
            if request.path.startswith("/api/"):
                return jsonify({"ok": False, "error": "Authentication required"}), 401
            return redirect(url_for("main.login"))
        if request.path.startswith("/api/auth/users") and not is_admin(user):
            return jsonify({"ok": False, "error": "Admin only"}), 403
        if request.path.startswith(PROFILE_READ_PREFIXES):
            profile_id = _request_profile_id()
            if profile_id and not can_access_profile(profile_id):
                return jsonify({"ok": False, "error": "Profile access denied"}), 403
        if request.method not in {"GET", "HEAD", "OPTIONS"}:
            if request.path.startswith("/api/") and not same_origin_request():
                return jsonify({"ok": False, "error": "Cross-origin API request blocked"}), 403
            if request.path.startswith("/api/profiles") and not request.path.endswith("/activate") and not is_admin(user):
                return jsonify({"ok": False, "error": "Admin only"}), 403
            profile_id = _request_profile_id()
            if request.path.startswith(RTORRENT_CONFIG_PREFIXES) and not can_write_profile(profile_id):
                return jsonify({"ok": False, "error": "Read-only profile access"}), 403
            if request.path.startswith(RTORRENT_WRITE_PREFIXES) and not can_write_profile(profile_id):
                return jsonify({"ok": False, "error": "Read-only profile access"}), 403
        return None


def _request_profile_id() -> int | None:
    if request.view_args and request.view_args.get("profile_id"):
        return int(request.view_args["profile_id"])
    try:
        payload = request.get_json(silent=True) or {}
        if payload.get("profile_id"):
            return int(payload.get("profile_id"))
    except Exception:
        pass
    from . import preferences
    profile = preferences.active_profile()
    return int(profile["id"]) if profile else None
@@ -5,6 +5,10 @@ import json
 from ..db import connect, default_user_id, utcnow
 from . import rtorrent
 from .preferences import active_profile
+from .workers import enqueue
+
+AUTOMATION_JOB_CHUNK_SIZE = 100
+
 
 def _loads(value: str | None, default: Any) -> Any:
@@ -62,6 +66,44 @@ def get_rule(rule_id: int, profile_id: int, user_id: int | None = None) -> dict[
     return _rule_row(row)
+
+
+def _portable_rule(rule: dict[str, Any]) -> dict[str, Any]:
+    return {
+        'name': str(rule.get('name') or 'Automation rule'),
+        'enabled': bool(rule.get('enabled', True)),
+        'cooldown_minutes': max(0, int(rule.get('cooldown_minutes') or 0)),
+        'conditions': list(rule.get('conditions') or []),
+        'effects': list(rule.get('effects') or []),
+    }
+
+
+def export_rules(profile_id: int, user_id: int | None = None) -> dict[str, Any]:
+    # Note: Export contains only portable rule definitions, never DB ids or execution history.
+    rules = [_portable_rule(rule) for rule in list_rules(profile_id, user_id)]
+    return {'version': 1, 'app': 'pyTorrent', 'exported_at': utcnow(), 'rules': rules}
+
+
+def import_rules(profile_id: int, payload: dict[str, Any] | list[Any], user_id: int | None = None, replace: bool = False) -> list[dict[str, Any]]:
+    user_id = user_id or default_user_id()
+    raw_rules = payload if isinstance(payload, list) else payload.get('rules', []) if isinstance(payload, dict) else []
+    if not isinstance(raw_rules, list) or not raw_rules:
+        raise ValueError('Import file does not contain automation rules')
+    if replace:
+        with connect() as conn:
+            # Note: Optional replace is profile-scoped; it does not touch other profiles or history tables.
+            conn.execute('DELETE FROM automation_rules WHERE user_id=? AND profile_id=?', (user_id, profile_id))
+            conn.execute('DELETE FROM automation_rule_state WHERE profile_id=?', (profile_id,))
+    imported = []
+    for raw in raw_rules:
+        if not isinstance(raw, dict):
+            continue
+        rule = _portable_rule(raw)
+        rule.pop('id', None)
+        imported.append(save_rule(profile_id, rule, user_id))
+    if not imported:
+        raise ValueError('No valid automation rules found')
+    return imported
+
+
 def save_rule(profile_id: int, data: dict[str, Any], user_id: int | None = None) -> dict[str, Any]:
     user_id = user_id or default_user_id()
     name = str(data.get('name') or 'Automation rule').strip() or 'Automation rule'
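The export/import pair above is built so that `_portable_rule` output is its own valid input: importing an exported file normalises to the same dict. A standalone round-trip sketch of that field logic (copied here for illustration, not imported from the app):

```python
from typing import Any


def portable_rule(rule: dict[str, Any]) -> dict[str, Any]:
    # Keep only portable fields; DB ids, timestamps, and history never survive export.
    return {
        'name': str(rule.get('name') or 'Automation rule'),
        'enabled': bool(rule.get('enabled', True)),
        'cooldown_minutes': max(0, int(rule.get('cooldown_minutes') or 0)),
        'conditions': list(rule.get('conditions') or []),
        'effects': list(rule.get('effects') or []),
    }
```

Normalisation is idempotent, so a rule that has been exported once re-imports unchanged.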
@@ -94,11 +136,21 @@ def list_history(profile_id: int, user_id: int | None = None, limit: int = 30) -
     return conn.execute('SELECT * FROM automation_history WHERE user_id=? AND profile_id=? ORDER BY created_at DESC LIMIT ?', (user_id, profile_id, max(1, min(int(limit or 30), 100)))).fetchall()
+
+
+def clear_history(profile_id: int, user_id: int | None = None) -> int:
+    user_id = user_id or default_user_id()
+    with connect() as conn:
+        # Note: Manual automation log cleanup is scoped to the active profile and current user.
+        cur = conn.execute('DELETE FROM automation_history WHERE user_id=? AND profile_id=?', (user_id, profile_id))
+        return int(cur.rowcount or 0)
+
+
 def _condition_true(t: dict[str, Any], cond: dict[str, Any]) -> bool:
     typ = str(cond.get('type') or '')
     if typ == 'completed': return bool(int(t.get('complete') or 0))
     if typ == 'no_seeds': return int(t.get('seeds') or 0) <= int(cond.get('seeds') or 0)
     if typ == 'ratio_gte': return float(t.get('ratio') or 0) >= float(cond.get('ratio') or 0)
+    if typ == 'progress_gte': return float(t.get('progress') or 0) >= float(cond.get('progress') or 0)
+    if typ == 'progress_lte': return float(t.get('progress') or 0) <= float(cond.get('progress') or 0)
     if typ == 'label_missing': return str(cond.get('label') or '').strip() not in _label_names(t.get('label'))
     if typ == 'label_has': return str(cond.get('label') or '').strip() in _label_names(t.get('label'))
     if typ == 'status': return str(t.get('status') or '').lower() == str(cond.get('status') or '').lower()
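The new `progress_gte`/`progress_lte` checks follow the same defensive-coercion pattern as the existing predicates (missing fields coerce to 0). A standalone sketch with a fake torrent dict, under the assumption that `progress` is a percentage:

```python
def progress_condition(t: dict, cond: dict) -> bool:
    # Missing or null fields fall back to 0.0, matching the other predicates.
    progress = float(t.get('progress') or 0)
    threshold = float(cond.get('progress') or 0)
    if cond.get('type') == 'progress_gte':
        return progress >= threshold
    if cond.get('type') == 'progress_lte':
        return progress <= threshold
    return False
```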
@@ -111,8 +163,11 @@ def _conditions_match(conn, rule: dict[str, Any], profile_id: int, t: dict[str,
     if not h: return False
     immediate_ok = True; delayed_ok = True; now = utcnow(); now_ts = _now_ts()
     for cond in rule.get('conditions') or []:
-        ok = _condition_true(t, cond)
-        if cond.get('type') == 'no_seeds' and int(cond.get('minutes') or 0) > 0:
+        raw_ok = _condition_true(t, cond)
+        negated = bool(cond.get('negate'))
+        # Note: Negation is applied in the backend, so UI and API only store the condition flag.
+        ok = (not raw_ok) if negated else raw_ok
+        if cond.get('type') == 'no_seeds' and int(cond.get('minutes') or 0) > 0 and not negated:
             row = conn.execute('SELECT condition_since_at FROM automation_rule_state WHERE rule_id=? AND profile_id=? AND torrent_hash=?', (rule['id'], profile_id, h)).fetchone()
             if ok:
                 since = row['condition_since_at'] if row and row.get('condition_since_at') else now
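The `negate` flag inverts a condition after it is evaluated, and the added `not negated` guard keeps the `no_seeds` delay timer from running on the inverted form. The inversion itself is one expression; a minimal standalone sketch:

```python
def apply_negate(raw_ok: bool, cond: dict) -> bool:
    # Backend-side inversion: the stored condition carries only a 'negate' flag.
    negated = bool(cond.get('negate'))
    return (not raw_ok) if negated else raw_ok
```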
@@ -125,30 +180,143 @@ def _conditions_match(conn, rule: dict[str, Any], profile_id: int, t: dict[str,
     return immediate_ok and delayed_ok


-def _cooldown_ok(conn, rule: dict[str, Any], profile_id: int, torrent_hash: str) -> bool:
+def _cooldown_ok(conn, rule: dict[str, Any], profile_id: int, torrent_hash: str = '__rule__') -> bool:
     cooldown = int(rule.get('cooldown_minutes') or 0)
+    if cooldown <= 0: return True
     row = conn.execute('SELECT last_applied_at FROM automation_rule_state WHERE rule_id=? AND profile_id=? AND torrent_hash=?', (rule['id'], profile_id, torrent_hash)).fetchone()
     if not row or not row.get('last_applied_at'): return True
     return _now_ts() - _ts(row['last_applied_at']) >= cooldown * 60


-def _apply_effects(c: Any, profile: dict[str, Any], torrent: dict[str, Any], effects: list[dict[str, Any]]) -> list[dict[str, Any]]:
-    h = str(torrent.get('hash') or ''); labels = _label_names(torrent.get('label')); applied = []
+def _mark_rule_cooldown(conn, rule: dict[str, Any], profile_id: int, now: str) -> None:
+    # Note: Cooldown is rule-level, so one batch execution blocks the whole automation until the cooldown expires.
+    conn.execute('INSERT INTO automation_rule_state(rule_id,profile_id,torrent_hash,last_applied_at,updated_at) VALUES(?,?,?,?,?) ON CONFLICT(rule_id,profile_id,torrent_hash) DO UPDATE SET last_applied_at=excluded.last_applied_at, updated_at=excluded.updated_at', (rule['id'], profile_id, '__rule__', now, now))
+
+
+def _chunk_hashes(hashes: list[str], size: int = AUTOMATION_JOB_CHUNK_SIZE) -> list[list[str]]:
+    # Note: Automation jobs use the same small-batch idea as manual bulk jobs, so long move/remove/actions remain visible and recoverable.
+    safe_size = max(1, int(size or AUTOMATION_JOB_CHUNK_SIZE))
+    return [hashes[index:index + safe_size] for index in range(0, len(hashes), safe_size)]
+
+
+def _job_context(rule: dict[str, Any], eff_type: str, hashes: list[str], torrents_by_hash: dict[str, dict[str, Any]], extra: dict[str, Any] | None = None) -> dict[str, Any]:
+    # Note: Job context marks jobs created by automations, making the Jobs log explain what rule queued the work.
+    ctx = {
+        'source': 'automation',
+        'rule_id': rule.get('id'),
+        'rule_name': str(rule.get('name') or ''),
+        'effect': eff_type,
+        'bulk': len(hashes) > 1,
+        'hash_count': len(hashes),
+        'requested_at': utcnow(),
+        'items': [
+            {
+                'hash': h,
+                'name': str((torrents_by_hash.get(h) or {}).get('name') or ''),
+                'path': str((torrents_by_hash.get(h) or {}).get('path') or ''),
+            }
+            for h in hashes
+        ],
+    }
+    if extra:
+        ctx.update(extra)
+    return ctx
+
+
+def _enqueue_automation_job(profile: dict[str, Any], rule: dict[str, Any], action_name: str, hashes: list[str], payload: dict[str, Any], torrents_by_hash: dict[str, dict[str, Any]], user_id: int | None = None, context_extra: dict[str, Any] | None = None) -> list[str]:
+    # Note: Every automation side effect is queued as a normal job instead of running inline, so it appears in Jobs and uses worker retries/ordering.
+    job_ids: list[str] = []
+    chunks = _chunk_hashes(hashes)
+    for index, chunk in enumerate(chunks, start=1):
+        part_payload = dict(payload or {})
+        part_payload['hashes'] = chunk
+        part_payload['automation_ordered'] = True
+        extra = dict(context_extra or {})
+        if len(chunks) > 1:
+            extra.update({'bulk_label': f'automation-{index}', 'bulk_part': index, 'bulk_parts': len(chunks), 'parent_hash_count': len(hashes)})
+        if action_name == 'move':
+            extra.update({'target_path': str(part_payload.get('path') or ''), 'move_data': bool(part_payload.get('move_data'))})
+        if action_name == 'remove':
+            extra.update({'remove_data': bool(part_payload.get('remove_data'))})
+        part_payload['job_context'] = _job_context(rule, str(context_extra.get('effect_type') if context_extra else action_name), chunk, torrents_by_hash, extra)
+        job_ids.append(enqueue(action_name, int(profile['id']), part_payload, user_id=user_id))
+    return job_ids
+
+
+def _apply_effects_bulk(c: Any, profile: dict[str, Any], torrents: list[dict[str, Any]], effects: list[dict[str, Any]], rule: dict[str, Any], user_id: int | None = None) -> list[dict[str, Any]]:
+    hashes = [str(t.get('hash') or '') for t in torrents if str(t.get('hash') or '')]
+    torrents_by_hash = {str(t.get('hash') or ''): t for t in torrents if str(t.get('hash') or '')}
+    labels_by_hash = {str(t.get('hash') or ''): _label_names(t.get('label')) for t in torrents}
+    applied: list[dict[str, Any]] = []
+    if not hashes: return applied
     for eff in effects:
         typ = str(eff.get('type') or '')
         if typ == 'move':
             path = str(eff.get('path') or '').strip() or rtorrent.default_download_path(profile)
-            if path: c.call('d.directory.set', h, path); applied.append({'type': 'move', 'path': path})
+            payload = {
+                'path': path,
+                'move_data': bool(eff.get('move_data')),
+                'recheck': bool(eff.get('recheck', eff.get('move_data'))),
+                'keep_seeding': bool(eff.get('keep_seeding')),
+            }
+            job_ids = _enqueue_automation_job(profile, rule, 'move', hashes, payload, torrents_by_hash, user_id, {'effect_type': 'move'})
+            applied.append({'type': 'move', 'path': path, 'count': len(hashes), 'target_hashes': hashes, 'move_data': payload['move_data'], 'recheck': payload['recheck'], 'keep_seeding': payload['keep_seeding'], 'job_ids': job_ids})
         elif typ == 'add_label':
             label = str(eff.get('label') or '').strip()
-            if label and label not in labels: labels.append(label); c.call('d.custom1.set', h, _label_value(labels))
-            if label: applied.append({'type': 'add_label', 'label': label})
+            if label:
+                # Note: Add-label automations are idempotent and queue only torrents that need a changed label value.
+                grouped: dict[str, list[str]] = {}
+                for h in hashes:
+                    labels = labels_by_hash.get(h, [])
+                    if label in labels:
+                        continue
+                    new_labels = list(labels) + [label]
+                    value = _label_value(new_labels)
+                    labels_by_hash[h] = _label_names(value)
+                    grouped.setdefault(value, []).append(h)
+                target_hashes = [h for group in grouped.values() for h in group]
+                job_ids: list[str] = []
+                for value, group_hashes in grouped.items():
+                    job_ids.extend(_enqueue_automation_job(profile, rule, 'set_label', group_hashes, {'label': value}, torrents_by_hash, user_id, {'effect_type': 'add_label', 'label': label}))
+                if target_hashes:
+                    applied.append({'type': 'add_label', 'label': label, 'count': len(target_hashes), 'target_hashes': target_hashes, 'job_ids': job_ids})
         elif typ == 'remove_label':
-            label = str(eff.get('label') or '').strip(); labels = [x for x in labels if x != label]; c.call('d.custom1.set', h, _label_value(labels)); applied.append({'type': 'remove_label', 'label': label})
+            label = str(eff.get('label') or '').strip()
+            if label:
+                # Note: Remove-label automations are queued only for torrents where the requested label exists.
+                grouped: dict[str, list[str]] = {}
+                for h in hashes:
+                    labels = labels_by_hash.get(h, [])
+                    if label not in labels:
+                        continue
+                    value = _label_value([x for x in labels if x != label])
+                    labels_by_hash[h] = _label_names(value)
+                    grouped.setdefault(value, []).append(h)
+                target_hashes = [h for group in grouped.values() for h in group]
+                job_ids: list[str] = []
+                for value, group_hashes in grouped.items():
+                    job_ids.extend(_enqueue_automation_job(profile, rule, 'set_label', group_hashes, {'label': value}, torrents_by_hash, user_id, {'effect_type': 'remove_label', 'label': label}))
+                if target_hashes:
+                    applied.append({'type': 'remove_label', 'label': label, 'count': len(target_hashes), 'target_hashes': target_hashes, 'job_ids': job_ids})
         elif typ == 'set_labels':
-            value = _label_value(_label_names(eff.get('labels'))); c.call('d.custom1.set', h, value); labels = _label_names(value); applied.append({'type': 'set_labels', 'labels': value})
-        elif typ in {'pause', 'stop', 'start', 'resume', 'recheck'}:
-            method = {'pause':'d.pause','stop':'d.stop','start':'d.start','resume':'d.resume','recheck':'d.check_hash'}[typ]; c.call(method, h); applied.append({'type': typ})
+            value = _label_value(_label_names(eff.get('labels')))
+            target_labels = _label_names(value)
+            # Note: Set-labels queues a job only if the current labels differ from the requested exact list.
+            target_hashes = [h for h in hashes if labels_by_hash.get(h, []) != target_labels]
+            for h in target_hashes:
+                labels_by_hash[h] = list(target_labels)
+            if target_hashes:
+                job_ids = _enqueue_automation_job(profile, rule, 'set_label', target_hashes, {'label': value}, torrents_by_hash, user_id, {'effect_type': 'set_labels', 'labels': value})
+                applied.append({'type': 'set_labels', 'labels': value, 'count': len(target_hashes), 'target_hashes': target_hashes, 'job_ids': job_ids})
+        elif typ in {'pause', 'stop', 'start', 'resume', 'recheck', 'reannounce'}:
+            # Note: Runtime actions are queued as jobs too, so automation activity is visible in the Jobs panel.
+            job_ids = _enqueue_automation_job(profile, rule, typ, hashes, {}, torrents_by_hash, user_id, {'effect_type': typ})
+            applied.append({'type': typ, 'count': len(hashes), 'target_hashes': hashes, 'job_ids': job_ids})
+        elif typ == 'remove':
+            # Note: Remove is supported for automation payloads and still goes through ordered worker jobs.
+            payload = {'remove_data': bool(eff.get('remove_data'))}
+            job_ids = _enqueue_automation_job(profile, rule, 'remove', hashes, payload, torrents_by_hash, user_id, {'effect_type': 'remove'})
+            applied.append({'type': 'remove', 'count': len(hashes), 'target_hashes': hashes, 'remove_data': payload['remove_data'], 'job_ids': job_ids})
     return applied
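`_chunk_hashes` splits a hash list into fixed-size batches so each queued job stays small enough to remain visible and recoverable. The slicing is standard; a standalone copy of the same logic:

```python
def chunk_hashes(hashes: list[str], size: int = 100) -> list[list[str]]:
    # Guard against zero/negative sizes, then slice into consecutive batches.
    safe_size = max(1, int(size or 100))
    return [hashes[i:i + safe_size] for i in range(0, len(hashes), safe_size)]
```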
@@ -157,17 +325,44 @@ def check(profile: dict | None = None, user_id: int | None = None, force: bool =
     if not profile: return {'ok': False, 'error': 'No active rTorrent profile'}
     user_id = user_id or default_user_id(); profile_id = int(profile['id'])
     rules = [r for r in list_rules(profile_id, user_id) if force or int(r.get('enabled') or 0)]
-    if not rules: return {'ok': True, 'checked': 0, 'applied': [], 'rules': 0}
-    torrents = rtorrent.list_torrents(profile); c = rtorrent.client_for(profile); applied = []; now = utcnow()
+    if not rules: return {'ok': True, 'checked': 0, 'applied': [], 'batches': [], 'rules': 0}
+    torrents = rtorrent.list_torrents(profile); applied = []; batches = []; now = utcnow()
+    planned: list[dict[str, Any]] = []
     with connect() as conn:
         for rule in rules:
-            for t in torrents:
-                h = str(t.get('hash') or '')
-                if not _conditions_match(conn, rule, profile_id, t): continue
-                if not force and not _cooldown_ok(conn, rule, profile_id, h): continue
-                try: actions = _apply_effects(c, profile, t, rule.get('effects') or [])
-                except Exception as exc: actions = [{'error': str(exc)}]
+            # Note: This pass only matches rules and updates condition timers; job creation is intentionally delayed until after this DB transaction commits.
+            if not force and not _cooldown_ok(conn, rule, profile_id):
+                continue
+            matched = [t for t in torrents if _conditions_match(conn, rule, profile_id, t)]
+            if not matched:
+                continue
+            hashes = [str(t.get('hash') or '') for t in matched if str(t.get('hash') or '')]
+            if hashes:
+                planned.append({'rule': rule, 'matched': matched, 'hashes': hashes})
+    for item in planned:
+        rule = item['rule']
+        matched = item['matched']
+        hashes = item['hashes']
+        # Note: Automation jobs are enqueued outside the rule-state transaction, preventing SQLite self-locks when enqueue() writes to jobs.
+        try:
+            actions = _apply_effects_bulk(None, profile, matched, rule.get('effects') or [], rule, user_id)
+        except Exception as exc:
+            actions = [{'error': str(exc), 'count': len(hashes), 'target_hashes': hashes}]
+        changed_hashes = sorted({h for a in actions for h in (a.get('target_hashes') or [])})
+        if not actions or not changed_hashes:
+            # Note: Matching torrents with no real action are not logged and do not restart the cooldown.
+            continue
+        history_actions = [{k: v for k, v in a.items() if k != 'target_hashes'} for a in actions]
+        matched_by_hash = {str(t.get('hash') or ''): t for t in matched}
+        with connect() as conn:
+            # Note: State/history writes happen after enqueue succeeds, so failed job creation does not create misleading automation history.
+            for h in changed_hashes:
+                t = matched_by_hash.get(h, {})
                 conn.execute('INSERT INTO automation_rule_state(rule_id,profile_id,torrent_hash,last_matched_at,last_applied_at,updated_at) VALUES(?,?,?,?,?,?) ON CONFLICT(rule_id,profile_id,torrent_hash) DO UPDATE SET last_matched_at=excluded.last_matched_at, last_applied_at=excluded.last_applied_at, updated_at=excluded.updated_at', (rule['id'], profile_id, h, now, now, now))
-                conn.execute('INSERT INTO automation_history(user_id,profile_id,rule_id,torrent_hash,torrent_name,rule_name,actions_json,created_at) VALUES(?,?,?,?,?,?,?,?)', (user_id, profile_id, rule['id'], h, str(t.get('name') or ''), str(rule.get('name') or ''), json.dumps(actions), now))
-                applied.append({'rule_id': rule['id'], 'rule_name': rule.get('name'), 'hash': h, 'name': t.get('name'), 'actions': actions})
+                applied.append({'rule_id': rule['id'], 'rule_name': rule.get('name'), 'hash': h, 'name': t.get('name'), 'actions': [{'type': a.get('type', 'error'), 'count': a.get('count', len(changed_hashes))} for a in actions]})
+            _mark_rule_cooldown(conn, rule, profile_id, now)
|
||||||
return {'ok': True, 'checked': len(torrents), 'rules': len(rules), 'applied': applied}
|
torrent_name = str(matched_by_hash.get(changed_hashes[0], {}).get('name') or '') if len(changed_hashes) == 1 else f'{len(changed_hashes)} torrents'
|
||||||
|
torrent_hash = changed_hashes[0] if len(changed_hashes) == 1 else f'batch:{rule["id"]}:{now}'
|
||||||
|
conn.execute('INSERT INTO automation_history(user_id,profile_id,rule_id,torrent_hash,torrent_name,rule_name,actions_json,created_at) VALUES(?,?,?,?,?,?,?,?)', (user_id, profile_id, rule['id'], torrent_hash, torrent_name, str(rule.get('name') or ''), json.dumps(history_actions), now))
|
||||||
|
batches.append({'rule_id': rule['id'], 'rule_name': rule.get('name'), 'count': len(changed_hashes), 'actions': history_actions})
|
||||||
|
return {'ok': True, 'checked': len(torrents), 'rules': len(rules), 'applied': applied, 'batches': batches}
|
||||||
|
|||||||
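The rewritten hunk above splits rule evaluation into a plan phase (matching inside one read transaction) and an apply phase (effects plus short write transactions afterwards). A minimal standalone sketch of that shape, using hypothetical `rules`/`history` structures rather than the project's real helpers:

```python
import sqlite3


def run_rules(conn: sqlite3.Connection, rules, torrents, apply_effects):
    # Phase 1: match rules inside one short transaction (reads only).
    planned = []
    for rule in rules:
        matched = [t for t in torrents if rule["predicate"](t)]
        if matched:
            planned.append((rule, matched))
    # Phase 2: apply effects outside that transaction, then record state in a
    # separate short write transaction so the two never hold the DB at once.
    batches = []
    for rule, matched in planned:
        actions = apply_effects(rule, matched)  # may enqueue jobs elsewhere
        if not actions:
            continue  # no real change: skip history, leave cooldown untouched
        with conn:  # commits (or rolls back) this write on exit
            conn.execute(
                "INSERT INTO history(rule, count) VALUES(?, ?)",
                (rule["name"], len(matched)),
            )
        batches.append({"rule": rule["name"], "count": len(matched)})
    return batches
```

Because history is written only after `apply_effects` returns, a failed apply leaves no misleading history row, mirroring the ordering the diff's comments describe.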
108 pytorrent/services/frontend_assets.py (new file)
@@ -0,0 +1,108 @@
+from __future__ import annotations
+
+from pathlib import Path
+
+from ..config import BASE_DIR, USE_OFFLINE_LIBS
+
+LIBS_STATIC_DIR = "libs"
+LIBS_DIR = BASE_DIR / "pytorrent" / "static" / LIBS_STATIC_DIR
+BOOTSTRAP_VERSION = "5.3.3"
+BOOTSWATCH_VERSION = "5.3.3"
+FONTAWESOME_VERSION = "6.5.2"
+FLAG_ICONS_VERSION = "7.2.3"
+SWAGGER_UI_VERSION = "5"
+SOCKET_IO_VERSION = "4.7.5"
+
+BOOTSTRAP_THEMES = (
+    "default",
+    "flatly",
+    "litera",
+    "lumen",
+    "minty",
+    "sketchy",
+    "solar",
+    "spacelab",
+    "united",
+    "zephyr",
+)
+
+STATIC_ASSETS = {
+    "bootstrap_js": {
+        "local": f"{LIBS_STATIC_DIR}/bootstrap/{BOOTSTRAP_VERSION}/js/bootstrap.bundle.min.js",
+        "cdn": f"https://cdn.jsdelivr.net/npm/bootstrap@{BOOTSTRAP_VERSION}/dist/js/bootstrap.bundle.min.js",
+    },
+    "fontawesome_css": {
+        "local": f"{LIBS_STATIC_DIR}/fontawesome/{FONTAWESOME_VERSION}/css/all.min.css",
+        "cdn": f"https://cdnjs.cloudflare.com/ajax/libs/font-awesome/{FONTAWESOME_VERSION}/css/all.min.css",
+    },
+    "flag_icons_css": {
+        "local": f"{LIBS_STATIC_DIR}/flag-icons/{FLAG_ICONS_VERSION}/css/flag-icons.min.css",
+        "cdn": f"https://cdn.jsdelivr.net/gh/lipis/flag-icons@{FLAG_ICONS_VERSION}/css/flag-icons.min.css",
+    },
+    "socket_io_js": {
+        "local": f"{LIBS_STATIC_DIR}/socket.io/{SOCKET_IO_VERSION}/socket.io.min.js",
+        "cdn": f"https://cdn.socket.io/{SOCKET_IO_VERSION}/socket.io.min.js",
+    },
+    "swagger_css": {
+        "local": f"{LIBS_STATIC_DIR}/swagger-ui/{SWAGGER_UI_VERSION}/swagger-ui.css",
+        "cdn": f"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{SWAGGER_UI_VERSION}/swagger-ui.css",
+    },
+    "swagger_js": {
+        "local": f"{LIBS_STATIC_DIR}/swagger-ui/{SWAGGER_UI_VERSION}/swagger-ui-bundle.js",
+        "cdn": f"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{SWAGGER_UI_VERSION}/swagger-ui-bundle.js",
+    },
+}
+
+
+def bootstrap_css_asset(theme: str | None = None) -> dict[str, str]:
+    theme = theme if theme in BOOTSTRAP_THEMES else "default"
+    if theme == "default":
+        return {
+            "local": f"{LIBS_STATIC_DIR}/bootstrap/{BOOTSTRAP_VERSION}/css/bootstrap.min.css",
+            "cdn": f"https://cdn.jsdelivr.net/npm/bootstrap@{BOOTSTRAP_VERSION}/dist/css/bootstrap.min.css",
+        }
+    return {
+        "local": f"{LIBS_STATIC_DIR}/bootswatch/{BOOTSWATCH_VERSION}/{theme}/bootstrap.min.css",
+        "cdn": f"https://cdn.jsdelivr.net/npm/bootswatch@{BOOTSWATCH_VERSION}/dist/{theme}/bootstrap.min.css",
+    }
+
+
+def asset_path(key: str) -> str:
+    return STATIC_ASSETS[key]["local" if USE_OFFLINE_LIBS else "cdn"]
+
+
+def bootstrap_css_path(theme: str | None = None) -> str:
+    return bootstrap_css_asset(theme)["local" if USE_OFFLINE_LIBS else "cdn"]
+
+
+def required_offline_paths() -> list[Path]:
+    paths = [LIBS_DIR.parent / item["local"] for item in STATIC_ASSETS.values()]
+    paths.extend(LIBS_DIR.parent / bootstrap_css_asset(theme)["local"] for theme in BOOTSTRAP_THEMES)
+    return paths
+
+
+def missing_offline_paths() -> list[Path]:
+    missing = [path for path in required_offline_paths() if not path.is_file() or path.stat().st_size <= 0]
+    required_dirs = [
+        LIBS_DIR / f"fontawesome/{FONTAWESOME_VERSION}/webfonts",
+        LIBS_DIR / f"flag-icons/{FLAG_ICONS_VERSION}/flags/4x3",
+        LIBS_DIR / f"flag-icons/{FLAG_ICONS_VERSION}/flags/1x1",
+    ]
+    for directory in required_dirs:
+        if not directory.is_dir() or not any(directory.iterdir()):
+            missing.append(directory)
+    return missing
+
+
+def validate_offline_assets() -> None:
+    if not USE_OFFLINE_LIBS:
+        return
+    missing = missing_offline_paths()
+    if missing:
+        preview = "\n".join(f"- {path.relative_to(BASE_DIR)}" for path in missing[:20])
+        extra = "" if len(missing) <= 20 else f"\n- ... and {len(missing) - 20} more"
+        raise RuntimeError(
+            "PYTORRENT_USE_OFFLINE_LIBS=true, but frontend libraries are missing. "
+            "Run: ./scripts/download_frontend_libs.py or ./install.sh\n"
+            f"Missing files:\n{preview}{extra}"
+        )
@@ -3,6 +3,7 @@ from __future__ import annotations
 import json
 
 from ..db import connect, utcnow, default_user_id
+from . import auth
 
 BOOTSTRAP_THEMES = {
     "default": "Default Bootstrap",
@@ -27,50 +28,49 @@ FONT_FAMILIES = {
 }
 
 def bootstrap_css_url(theme: str | None) -> str:
-    theme = theme if theme in BOOTSTRAP_THEMES else "default"
-    if theme == "default":
-        return "https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/css/bootstrap.min.css"
-    return f"https://cdn.jsdelivr.net/npm/bootswatch@5.3.3/dist/{theme}/bootstrap.min.css"
+    from .frontend_assets import bootstrap_css_path
+    return bootstrap_css_path(theme)
 
 
 def list_profiles(user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
+    visible = auth.visible_profile_ids(user_id)
     with connect() as conn:
-        return conn.execute(
-            "SELECT * FROM rtorrent_profiles WHERE user_id=? ORDER BY is_default DESC, name COLLATE NOCASE",
-            (user_id,),
-        ).fetchall()
+        if visible is None:
+            return conn.execute(
+                "SELECT * FROM rtorrent_profiles ORDER BY is_default DESC, name COLLATE NOCASE"
+            ).fetchall()
+        if not visible:
+            return []
+        placeholders = ",".join("?" for _ in visible)
+        return conn.execute(
+            f"SELECT * FROM rtorrent_profiles WHERE id IN ({placeholders}) ORDER BY is_default DESC, name COLLATE NOCASE",
+            tuple(visible),
+        ).fetchall()
 
 
 def get_profile(profile_id: int, user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
+    if not auth.can_access_profile(profile_id, user_id):
+        return None
     with connect() as conn:
-        return conn.execute(
-            "SELECT * FROM rtorrent_profiles WHERE id=? AND user_id=?",
-            (profile_id, user_id),
-        ).fetchone()
+        return conn.execute("SELECT * FROM rtorrent_profiles WHERE id=?", (profile_id,)).fetchone()
 
 
 def active_profile(user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
     with connect() as conn:
         pref = conn.execute("SELECT active_rtorrent_id FROM user_preferences WHERE user_id=?", (user_id,)).fetchone()
-        if pref and pref.get("active_rtorrent_id"):
-            row = conn.execute(
-                "SELECT * FROM rtorrent_profiles WHERE id=? AND user_id=?",
-                (pref["active_rtorrent_id"], user_id),
-            ).fetchone()
+        if pref and pref.get("active_rtorrent_id") and auth.can_access_profile(int(pref["active_rtorrent_id"]), user_id):
+            row = conn.execute("SELECT * FROM rtorrent_profiles WHERE id=?", (pref["active_rtorrent_id"],)).fetchone()
             if row:
                 return row
-        row = conn.execute(
-            "SELECT * FROM rtorrent_profiles WHERE user_id=? ORDER BY is_default DESC, id ASC LIMIT 1",
-            (user_id,),
-        ).fetchone()
-        return row
+        profiles = list_profiles(user_id)
+        return profiles[0] if profiles else None
 
 
 def save_profile(data: dict, user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
     now = utcnow()
     name = str(data.get("name") or "rTorrent").strip()
     scgi_url = str(data.get("scgi_url") or "").strip()
@@ -79,7 +79,7 @@ def save_profile(data: dict, user_id: int | None = None):
     is_remote = 1 if data.get("is_remote") else 0
     is_default = 1 if data.get("is_default") else 0
     if not scgi_url.startswith("scgi://"):
-        raise ValueError("SCGI URL musi zaczynać się od scgi://")
+        raise ValueError("SCGI URL must start with scgi://")
     with connect() as conn:
         if is_default:
             conn.execute("UPDATE rtorrent_profiles SET is_default=0 WHERE user_id=?", (user_id,))
@@ -94,11 +94,11 @@ def save_profile(data: dict, user_id: int | None = None):
             "UPDATE user_preferences SET active_rtorrent_id=?, updated_at=? WHERE user_id=?",
             (profile_id, now, user_id),
         )
-        return conn.execute("SELECT * FROM rtorrent_profiles WHERE id=? AND user_id=?", (profile_id, user_id)).fetchone()
+        return conn.execute("SELECT * FROM rtorrent_profiles WHERE id=?", (profile_id,)).fetchone()
 
 
 def update_profile(profile_id: int, data: dict, user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
    now = utcnow()
     name = str(data.get("name") or "rTorrent").strip()
     scgi_url = str(data.get("scgi_url") or "").strip()
@@ -107,24 +107,25 @@ def update_profile(profile_id: int, data: dict, user_id: int | None = None):
     is_remote = 1 if data.get("is_remote") else 0
     is_default = 1 if data.get("is_default") else 0
     if not scgi_url.startswith("scgi://"):
-        raise ValueError("SCGI URL musi zaczynać się od scgi://")
+        raise ValueError("SCGI URL must start with scgi://")
     with connect() as conn:
-        row = conn.execute("SELECT id FROM rtorrent_profiles WHERE id=? AND user_id=?", (profile_id, user_id)).fetchone()
-        if not row:
+        row = conn.execute("SELECT id FROM rtorrent_profiles WHERE id=?", (profile_id,)).fetchone()
+        if not row or not auth.can_write_profile(profile_id, user_id):
            raise ValueError("Profil nie istnieje")
         if is_default:
             conn.execute("UPDATE rtorrent_profiles SET is_default=0 WHERE user_id=?", (user_id,))
         conn.execute(
-            "UPDATE rtorrent_profiles SET name=?, scgi_url=?, is_default=?, timeout_seconds=?, max_parallel_jobs=?, is_remote=?, updated_at=? WHERE id=? AND user_id=?",
-            (name, scgi_url, is_default, timeout, max_parallel, is_remote, now, profile_id, user_id),
+            "UPDATE rtorrent_profiles SET name=?, scgi_url=?, is_default=?, timeout_seconds=?, max_parallel_jobs=?, is_remote=?, updated_at=? WHERE id=?",
+            (name, scgi_url, is_default, timeout, max_parallel, is_remote, now, profile_id),
         )
-        return conn.execute("SELECT * FROM rtorrent_profiles WHERE id=? AND user_id=?", (profile_id, user_id)).fetchone()
+        return conn.execute("SELECT * FROM rtorrent_profiles WHERE id=?", (profile_id,)).fetchone()
 
 
 def delete_profile(profile_id: int, user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
+    auth.require_profile_write(profile_id)
     with connect() as conn:
-        conn.execute("DELETE FROM rtorrent_profiles WHERE id=? AND user_id=?", (profile_id, user_id))
+        conn.execute("DELETE FROM rtorrent_profiles WHERE id=?", (profile_id,))
         active = active_profile(user_id)
         conn.execute(
             "UPDATE user_preferences SET active_rtorrent_id=?, updated_at=? WHERE user_id=?",
@@ -133,10 +134,10 @@ def delete_profile(profile_id: int, user_id: int | None = None):
 
 
 def activate_profile(profile_id: int, user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
     with connect() as conn:
-        row = conn.execute("SELECT id FROM rtorrent_profiles WHERE id=? AND user_id=?", (profile_id, user_id)).fetchone()
-        if not row:
+        row = conn.execute("SELECT id FROM rtorrent_profiles WHERE id=?", (profile_id,)).fetchone()
+        if not row or not auth.can_access_profile(profile_id, user_id):
            raise ValueError("Profil nie istnieje")
         conn.execute(
             "UPDATE user_preferences SET active_rtorrent_id=?, updated_at=? WHERE user_id=?",
@@ -146,13 +147,18 @@ def activate_profile(profile_id: int, user_id: int | None = None):
 
 
 def get_preferences(user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
     with connect() as conn:
-        return conn.execute("SELECT * FROM user_preferences WHERE user_id=?", (user_id,)).fetchone()
+        pref = conn.execute("SELECT * FROM user_preferences WHERE user_id=?", (user_id,)).fetchone()
+        if not pref:
+            now = utcnow()
+            conn.execute("INSERT INTO user_preferences(user_id, theme, created_at, updated_at) VALUES(?, 'dark', ?, ?)", (user_id, now, now))
+            pref = conn.execute("SELECT * FROM user_preferences WHERE user_id=?", (user_id,)).fetchone()
+        return pref
 
 
 def save_preferences(data: dict, user_id: int | None = None):
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
     allowed_theme = data.get("theme") if data.get("theme") in {"light", "dark"} else None
     bootstrap_theme = data.get("bootstrap_theme") if data.get("bootstrap_theme") in BOOTSTRAP_THEMES else None
     font_family = data.get("font_family") if data.get("font_family") in FONT_FAMILIES else None
@@ -160,6 +166,9 @@ def save_preferences(data: dict, user_id: int | None = None):
     peers_refresh_seconds = data.get("peers_refresh_seconds")
     port_check_enabled = data.get("port_check_enabled")
     footer_items_json = data.get("footer_items_json")
+    title_speed_enabled = data.get("title_speed_enabled")
+    tracker_favicons_enabled = data.get("tracker_favicons_enabled")
+    interface_scale = data.get("interface_scale")
     with connect() as conn:
         now = utcnow()
         if allowed_theme:
@@ -176,6 +185,15 @@ def save_preferences(data: dict, user_id: int | None = None):
             conn.execute("UPDATE user_preferences SET peers_refresh_seconds=?, updated_at=? WHERE user_id=?", (sec, now, user_id))
         if port_check_enabled is not None:
             conn.execute("UPDATE user_preferences SET port_check_enabled=?, updated_at=? WHERE user_id=?", (1 if port_check_enabled else 0, now, user_id))
+        if title_speed_enabled is not None:
+            conn.execute("UPDATE user_preferences SET title_speed_enabled=?, updated_at=? WHERE user_id=?", (1 if title_speed_enabled else 0, now, user_id))
+        if tracker_favicons_enabled is not None:
+            conn.execute("UPDATE user_preferences SET tracker_favicons_enabled=?, updated_at=? WHERE user_id=?", (1 if tracker_favicons_enabled else 0, now, user_id))
+        if interface_scale is not None:
+            scale = int(interface_scale or 100)
+            if scale < 80: scale = 80
+            if scale > 140: scale = 140
+            conn.execute("UPDATE user_preferences SET interface_scale=?, updated_at=? WHERE user_id=?", (scale, now, user_id))
         if footer_items_json is not None:
             # Note: Store only JSON objects so footer visibility can be extended without schema churn.
             value = footer_items_json if isinstance(footer_items_json, str) else json.dumps(footer_items_json)
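The new `list_profiles` builds its `IN (...)` clause with one `?` per visible id, because sqlite3 cannot bind a Python list to a single placeholder and `IN ()` with zero items is a syntax error. The pattern in isolation (hypothetical `rows_by_ids` helper and `profiles` table):

```python
import sqlite3


def rows_by_ids(conn: sqlite3.Connection, table: str, ids: list[int]):
    # Short-circuit the empty case: "IN ()" is invalid SQL in SQLite.
    if not ids:
        return []
    # One "?" per id; the values are still bound, only the placeholder
    # count is interpolated, so this stays injection-safe for the ids.
    placeholders = ",".join("?" for _ in ids)
    return conn.execute(
        f"SELECT id FROM {table} WHERE id IN ({placeholders}) ORDER BY id",
        tuple(ids),
    ).fetchall()
```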
@@ -30,6 +30,8 @@ def cleanup(force: bool = False) -> dict[str, int]:
     targets = {
         "traffic_history": ("created_at", TRAFFIC_HISTORY_RETENTION_DAYS),
         "smart_queue_history": ("created_at", SMART_QUEUE_HISTORY_RETENTION_DAYS),
+        # Note: Automation history follows Smart Queue retention; rules and rule state are never deleted here.
+        "automation_history": ("created_at", SMART_QUEUE_HISTORY_RETENTION_DAYS),
         "jobs": ("updated_at", JOBS_RETENTION_DAYS),
         "logs": ("created_at", LOG_RETENTION_DAYS),
     }
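The `targets` map above drives a generic retention sweep: each table is purged by comparing its timestamp column against a cutoff. A minimal sketch of such a sweep (hypothetical `purge_old_rows`; assumes ISO-8601 UTC timestamps, which compare correctly as strings when the format is uniform):

```python
import sqlite3
from datetime import datetime, timedelta, timezone


def purge_old_rows(conn: sqlite3.Connection,
                   targets: dict[str, tuple[str, int]]) -> dict[str, int]:
    # targets maps table -> (timestamp column, retention days).
    deleted = {}
    now = datetime.now(timezone.utc)
    for table, (column, days) in targets.items():
        cutoff = (now - timedelta(days=days)).isoformat()
        # Table/column names come from a trusted constant map, not user input.
        cur = conn.execute(f"DELETE FROM {table} WHERE {column} < ?", (cutoff,))
        deleted[table] = cur.rowcount
    return deleted
```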
@@ -1,5 +1,6 @@
 from __future__ import annotations
 
+import errno
 import os
 import posixpath
 import socket
@@ -53,6 +54,10 @@ class ScgiRtorrentClient:
         }
         header_blob = b"".join(k.encode() + b"\0" + v.encode() + b"\0" for k, v in headers.items())
         payload = str(len(header_blob)).encode("ascii") + b":" + header_blob + b"," + body
+        attempts = _scgi_retry_attempts()
+        last_exc = None
+        for attempt in range(1, attempts + 1):
+            try:
         with socket.create_connection((self.host, self.port), timeout=self.timeout) as sock:
             sock.settimeout(self.timeout)
             sock.sendall(payload)
@@ -71,6 +76,35 @@ class ScgiRtorrentClient:
         response = response.split(b"\n\n", 1)[1]
         result, _ = loads(response)
         return result[0] if len(result) == 1 else result
+            except Exception as exc:
+                last_exc = exc
+                if attempt >= attempts or not _is_transient_scgi_error(exc):
+                    raise
+                time.sleep(_scgi_retry_delay(attempt))
+        raise last_exc or ConnectionError("rTorrent SCGI call failed")
+
+
+def _scgi_retry_attempts() -> int:
+    # Note: Short retry/backoff protects bulk operations from temporary Errno 111 during high rTorrent load.
+    try:
+        return max(1, min(10, int(os.environ.get("PYTORRENT_SCGI_RETRIES", "5"))))
+    except Exception:
+        return 5
+
+
+def _scgi_retry_delay(attempt: int) -> float:
+    return min(5.0, 0.35 * (2 ** max(0, attempt - 1)))
+
+
+def _is_transient_scgi_error(exc: Exception) -> bool:
+    # Note: Retry covers common temporary SCGI/socket errors but does not hide semantic XML-RPC errors.
+    if isinstance(exc, (ConnectionRefusedError, ConnectionResetError, TimeoutError, socket.timeout)):
+        return True
+    err_no = getattr(exc, "errno", None)
+    if err_no in {errno.ECONNREFUSED, errno.ECONNRESET, errno.ETIMEDOUT, errno.EHOSTUNREACH, errno.ENETUNREACH}:
+        return True
+    msg = str(exc).lower()
+    return any(text in msg for text in ("connection refused", "connection reset", "timed out", "timeout", "empty response", "pipe creation failed", "resource temporarily unavailable", "try again", "temporarily unavailable"))
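`_scgi_retry_delay` above is capped exponential backoff: min(5.0, 0.35 * 2^(attempt-1)), giving roughly 0.35s, 0.7s, 1.4s, 2.8s, then 5s. The retry loop can be sketched standalone (hypothetical `retry_call`; the real client additionally classifies errors via `_is_transient_scgi_error` rather than a fixed exception tuple):

```python
import time


def retry_call(fn, *, attempts: int = 5, base: float = 0.35, cap: float = 5.0,
               transient=(ConnectionRefusedError, ConnectionResetError, TimeoutError)):
    # Retry only transient failures; the last attempt re-raises so the
    # caller still sees the real error once retries are exhausted.
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except transient:
            if attempt >= attempts:
                raise
            # Capped exponential backoff: base, 2*base, 4*base, ..., <= cap.
            time.sleep(min(cap, base * (2 ** (attempt - 1))))
```

Non-transient exceptions (the semantic XML-RPC errors the diff's comment mentions) fall outside `transient` and propagate immediately.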
 def client_for(profile: dict) -> ScgiRtorrentClient:
@@ -78,32 +112,78 @@ def client_for(profile: dict) -> ScgiRtorrentClient:
 
 
 _UNSUPPORTED_EXEC_METHODS: set[str] = set()
+_EXEC_TARGET_STYLE: dict[str, int] = {}
 
-def _rt_execute(c: ScgiRtorrentClient, method: str, *args):
-    """Run rTorrent execute.* as the rTorrent user across XML-RPC variants."""
-    method_names = [method]
-    if method.startswith("execute."):
-        execute2 = method.replace("execute.", "execute2.", 1)
-        if execute2 not in _UNSUPPORTED_EXEC_METHODS:
-            method_names.append(execute2)
-    errors = []
-    for method_name in method_names:
-        for call_args in (("", *args), args):
-            try:
-                return c.call(method_name, *call_args)
-            except Exception as exc:
-                message = str(exc)
-                if "not defined" in message.lower():
-                    _UNSUPPORTED_EXEC_METHODS.add(method_name)
-                preview = ", ".join(repr(x) for x in call_args[:3])
-                if len(call_args) > 3:
-                    preview += ", ..."
-                errors.append(f"{method_name}({preview}): {exc}")
+def _rt_execute_preview(method_name: str, call_args: tuple) -> str:
+    # Note: The compact RPC summary removes long scripts from error messages while keeping the method and first arguments for diagnostics.
+    preview = ", ".join(repr(x) for x in call_args[:3])
+    if len(call_args) > 3:
+        preview += ", ..."
+    return f"{method_name}({preview})"
+
+
+def _rt_execute_target_variants(method: str, args: tuple) -> list[tuple]:
+    # Note: Depending on version, rTorrent XML-RPC either requires or rejects an empty target; cache the working variant per method.
+    variants = [("", *args), args]
+    preferred = _EXEC_TARGET_STYLE.get(method)
+    if preferred is not None and 0 <= preferred < len(variants):
+        return [variants[preferred]] + [v for i, v in enumerate(variants) if i != preferred]
+    return variants
+
+
+def _is_rt_method_missing(exc: Exception) -> bool:
+    msg = str(exc).lower()
+    return "not defined" in msg or "no such method" in msg or "unknown method" in msg
+
+
+def _rt_execute_methods(method: str) -> list[str]:
+    # Note: execute2.* is tried only when the base execute.* method does not exist to avoid false retry errors.
+    methods = [method]
+    if method.startswith("execute."):
+        fallback = method.replace("execute.", "execute2.", 1)
+        if fallback not in _UNSUPPORTED_EXEC_METHODS:
+            methods.append(fallback)
+    return methods
+
+
+def _rt_execute(c: ScgiRtorrentClient, method: str, *args):
+    """Run rTorrent execute.* as the rTorrent user across XML-RPC variants."""
+    errors: list[str] = []
+    attempts = _scgi_retry_attempts()
+    for attempt in range(1, attempts + 1):
+        errors.clear()
+        transient_seen = False
+        primary_missing = False
+        for method_index, method_name in enumerate(_rt_execute_methods(method)):
+            if method_name in _UNSUPPORTED_EXEC_METHODS:
+                continue
+            if method_index > 0 and not primary_missing:
+                continue
+            for call_args in _rt_execute_target_variants(method_name, args):
+                try:
+                    result = c.call(method_name, *call_args)
+                    if method_name == method:
+                        _EXEC_TARGET_STYLE[method_name] = 0 if call_args and call_args[0] == "" else 1
+                    return result
+                except Exception as exc:
+                    if _is_rt_method_missing(exc):
+                        _UNSUPPORTED_EXEC_METHODS.add(method_name)
+                        if method_name == method:
+                            primary_missing = True
+                        errors.append(f"{method_name}: method not defined")
+                        break
+                    transient_seen = transient_seen or _is_transient_scgi_error(exc)
+                    errors.append(f"{_rt_execute_preview(method_name, call_args)}: {exc}")
+        if transient_seen and attempt < attempts:
+            time.sleep(_scgi_retry_delay(attempt))
+            continue
+        break
     raise RuntimeError("rTorrent execute failed: " + "; ".join(errors))
 def _is_rt_timeout_error(exc: Exception) -> bool:
-    return isinstance(exc, (TimeoutError, socket.timeout)) or "timed out" in str(exc).lower()
+    msg = str(exc).lower()
+    return isinstance(exc, (TimeoutError, socket.timeout)) or "timed out" in msg or "timeout" in msg
 
 
 def _rt_execute_allow_timeout(c: ScgiRtorrentClient, method: str, *args):
@@ -159,7 +239,8 @@ def _run_remote_move(c: ScgiRtorrentClient, src: str, dst: str, poll_interval: f
         try:
             output = str(_rt_execute(c, "execute.capture", "sh", "-c", poll_script, "pytorrent-move-poll", status_path) or "").strip()
         except Exception as exc:
-            if _is_rt_timeout_error(exc):
+            # Note: During bulk moves, rTorrent may briefly not create the execute.capture pipe; polling waits and retries.
+            if _is_rt_timeout_error(exc) or _is_transient_scgi_error(exc):
                 continue
             raise
         if not output:
@@ -207,6 +288,47 @@ def _safe_rm_rf_path(path: str) -> str:
     return path


+def _run_remote_rm(c: ScgiRtorrentClient, path: str, poll_interval: float = 2.0) -> None:
+    # Note: rm -rf runs in the background on the rTorrent side, so long deletes do not hold a single SCGI connection.
+    token = uuid.uuid4().hex
+    status_path = f"/tmp/pytorrent-rm-{token}.status"
+    script = (
+        'target=$1; status=$2; tmp=${status}.tmp; '
+        'rm -f "$status" "$tmp"; '
+        '( rc=0; '
+        'if [ -z "$target" ] || [ "$target" = "/" ] || [ "$target" = "." ]; then echo "unsafe remove target: $target" >&2; rc=5; '
+        'else rm -rf -- "$target" || rc=$?; fi; '
+        'if [ $rc -eq 0 ]; then printf "OK\n" > "$status"; else printf "ERR %s\n" "$rc" > "$status"; fi; '
+        'if [ -s "$tmp" ]; then cat "$tmp" >> "$status"; fi; '
+        'rm -f "$tmp" ) > "$tmp" 2>&1 &'
+    )
+    poll_script = 'status=$1; [ -f "$status" ] && cat "$status" || true'
+    cleanup_script = 'rm -f "$1"'
+    _rt_execute_allow_timeout(c, "execute.throw", "sh", "-c", script, "pytorrent-rm-start", path, status_path)
+    while True:
+        time.sleep(max(0.25, poll_interval))
+        try:
+            output = str(_rt_execute(c, "execute.capture", "sh", "-c", poll_script, "pytorrent-rm-poll", status_path) or "").strip()
+        except Exception as exc:
+            # Note: Remove uses the same safe polling as move, so a temporary missing pipe does not fail the whole queue.
+            if _is_rt_timeout_error(exc) or _is_transient_scgi_error(exc):
+                continue
+            raise
+        if not output:
+            continue
+        try:
+            _rt_execute(c, "execute.throw", "sh", "-c", cleanup_script, "pytorrent-rm-clean", status_path)
+        except Exception:
+            pass
+        first_line = output.splitlines()[0].strip()
+        if first_line == "OK":
+            return
+        if first_line.startswith("ERR"):
+            details = "\n".join(output.splitlines()[1:]).strip()
+            raise RuntimeError(details or first_line)
+        raise RuntimeError(output)
+
+
 def _remove_torrent_data(c: ScgiRtorrentClient, torrent_hash: str) -> dict:
     data_path = _safe_rm_rf_path(_torrent_data_path(c, torrent_hash))
     try:
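`_run_remote_rm` above uses a detached shell job that reports through an `OK`/`ERR` status file which the client then polls. The same contract can be exercised locally; this sketch (the `run_and_poll` helper and its status-file protocol are illustrative, and it assumes a POSIX `sh`):

```python
import os
import subprocess
import tempfile
import time

def run_and_poll(cmd: list[str], poll_interval: float = 0.05, timeout: float = 10.0) -> None:
    """Run `cmd` detached; it reports OK/ERR via a status file that we poll."""
    fd, status_path = tempfile.mkstemp(suffix=".status")
    os.close(fd)
    os.unlink(status_path)  # the background job creates it when done
    # Write to a temp file first, then mv atomically, so the poller never sees a partial file.
    script = ('if "$@"; then printf "OK\\n" > "$0.tmp"; '
              'else printf "ERR %s\\n" "$?" > "$0.tmp"; fi; mv "$0.tmp" "$0"')
    subprocess.Popen(["sh", "-c", script, status_path, *cmd])
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(status_path):
            with open(status_path) as fh:
                first = fh.readline().strip()
            os.unlink(status_path)
            if first == "OK":
                return
            raise RuntimeError(first)
        time.sleep(poll_interval)
    raise TimeoutError("status file never appeared")
```

The atomic rename mirrors the `${status}.tmp` trick in the diff: the poller only ever reads a fully written status file.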
@@ -217,7 +339,7 @@ def _remove_torrent_data(c: ScgiRtorrentClient, torrent_hash: str) -> dict:
         c.call("d.close", torrent_hash)
     except Exception:
         pass
-    _rt_execute(c, "execute.throw", "rm", "-rf", data_path)
+    _run_remote_rm(c, data_path)
     return {"hash": torrent_hash, "removed_path": data_path}


@@ -249,6 +371,146 @@ def browse_path(profile: dict, path: str | None = None) -> dict:
     return {"path": base, "parent": parent, "dirs": dirs[:300], "source": "rtorrent"}


+POST_CHECK_DOWNLOAD_LABEL = "To download after check"
+_POST_CHECK_WATCH_TTL_SECONDS = 48 * 60 * 60
+_POST_CHECK_WATCH_MIN_SECONDS = 2.0
+_POST_CHECK_WATCH: dict[int, dict[str, float]] = {}
+
+
+def _mark_post_check_watch(profile_id: int, torrent_hash: str) -> None:
+    if not torrent_hash:
+        return
+    _POST_CHECK_WATCH.setdefault(int(profile_id), {})[str(torrent_hash)] = time.time()
+
+
+def _clear_post_check_watch(profile_id: int, torrent_hash: str) -> None:
+    profile_watch = _POST_CHECK_WATCH.get(int(profile_id))
+    if not profile_watch:
+        return
+    profile_watch.pop(str(torrent_hash), None)
+    if not profile_watch:
+        _POST_CHECK_WATCH.pop(int(profile_id), None)
+
+
+def _is_post_check_watched(profile_id: int, torrent_hash: str) -> bool:
+    profile_watch = _POST_CHECK_WATCH.get(int(profile_id)) or {}
+    started_at = profile_watch.get(str(torrent_hash))
+    if not started_at:
+        return False
+    age = time.time() - started_at
+    if age > _POST_CHECK_WATCH_TTL_SECONDS:
+        _clear_post_check_watch(profile_id, torrent_hash)
+        return False
+    # Note: A short grace period prevents labeling a recheck that was queued but has not visibly entered hashing yet.
+    return age >= _POST_CHECK_WATCH_MIN_SECONDS
+
+
+def _label_names(value: str) -> list[str]:
+    names: list[str] = []
+    for part in str(value or "").replace(";", ",").replace("|", ",").split(","):
+        label = part.strip()
+        if label and label not in names:
+            names.append(label)
+    return names
+
+
+def _label_value(labels: list[str]) -> str:
+    return ", ".join([label for label in labels if str(label or "").strip()])
+
+
+def _without_post_check_download_label(value: str | None) -> str:
+    return _label_value([label for label in _label_names(str(value or "")) if label != POST_CHECK_DOWNLOAD_LABEL])
+
+
+def clear_post_check_download_label(c: ScgiRtorrentClient, torrent_hash: str, current_label: str | None = None) -> bool:
+    label_source = current_label
+    if label_source is None:
+        try:
+            label_source = str(c.call("d.custom1", str(torrent_hash or "")) or "")
+        except Exception:
+            label_source = ""
+    labels = _label_names(str(label_source or ""))
+    if POST_CHECK_DOWNLOAD_LABEL not in labels:
+        return False
+    # Note: The temporary post-check label is removed only after the torrent leaves the stopped waiting queue.
+    c.call("d.custom1.set", str(torrent_hash or ""), _label_value([label for label in labels if label != POST_CHECK_DOWNLOAD_LABEL]))
+    return True
+
+
+def _message_indicates_active_check(message: str) -> bool:
+    msg = str(message or "").lower()
+    if not msg:
+        return False
+    finished_markers = ("complete", "completed", "finished", "success", "succeeded", "failed", "done")
+    if any(marker in msg for marker in finished_markers):
+        return False
+    active_markers = ("checking", "hashing", "hash check queued", "hash check scheduled", "check hash queued", "recheck queued", "rechecking")
+    return any(marker in msg for marker in active_markers)
+
+
+def _row_progress_complete(row: dict) -> bool:
+    size = int(row.get("size") or 0)
+    completed = int(row.get("completed_bytes") or 0)
+    return bool(row.get("complete")) or (size > 0 and completed >= size) or float(row.get("progress") or 0) >= 100.0
+
+
+def _cleanup_post_check_label_if_ready(c: ScgiRtorrentClient, row: dict) -> bool:
+    labels = _label_names(str(row.get("label") or ""))
+    if POST_CHECK_DOWNLOAD_LABEL not in labels:
+        return False
+    status = str(row.get("status") or "").lower()
+    started_after_wait = bool(int(row.get("state") or 0)) and status != "checking"
+    if not (_row_progress_complete(row) or status == "seeding" or started_after_wait):
+        return False
+    # Note: Keep the post-check label while the torrent is stopped; remove it once it is started for download/seeding.
+    clear_post_check_download_label(c, str(row.get("hash") or ""), str(row.get("label") or ""))
+    row["label"] = _without_post_check_download_label(str(row.get("label") or ""))
+    return True
+
+
+def apply_post_check_policy(profile: dict, rows: list[dict], previous_rows: dict[str, dict] | None = None) -> list[dict]:
+    """Start complete torrents after check; stop and label incomplete ones for Smart Queue."""
+    previous_rows = previous_rows or {}
+    profile_id = int(profile.get("id") or 0)
+    c = client_for(profile)
+    changes: list[dict] = []
+    for row in rows:
+        h = str(row.get("hash") or "")
+        prev = previous_rows.get(h) or {}
+        try:
+            if h and _cleanup_post_check_label_if_ready(c, row):
+                changes.append({"hash": h, "action": "remove_post_check_label"})
+        except Exception as exc:
+            changes.append({"hash": h, "action": "remove_post_check_label_failed", "error": str(exc)})
+        was_checking = str(prev.get("status") or "") == "Checking" or int(prev.get("hashing") or 0) > 0
+        watched_recheck = _is_post_check_watched(profile_id, h)
+        is_checking = str(row.get("status") or "") == "Checking" or int(row.get("hashing") or 0) > 0
+        if not h or not (was_checking or watched_recheck) or is_checking:
+            continue
+        complete = _row_progress_complete(row)
+        try:
+            if complete:
+                # Note: A fully checked torrent is started with the same helper as the manual Start action so it seeds immediately.
+                start_result = start_or_resume_hash(c, h)
+                clear_post_check_download_label(c, h, str(row.get("label") or ""))
+                row.update({"state": 1, "active": 1, "paused": False, "status": "Seeding", "label": _without_post_check_download_label(str(row.get("label") or ""))})
+                changes.append({"hash": h, "action": "start_seed_after_check", "complete": True, "result": start_result})
+            else:
+                labels = _label_names(str(row.get("label") or ""))
+                if POST_CHECK_DOWNLOAD_LABEL not in labels:
+                    labels.append(POST_CHECK_DOWNLOAD_LABEL)
+                label_value = _label_value(labels)
+                # Note: Incomplete torrents are left stopped after check so Smart Queue can start them later within the global limit.
+                c.call("d.stop", h)
+                c.call("d.custom1.set", h, label_value)
+                row.update({"state": 0, "active": 0, "paused": False, "status": "Stopped", "label": label_value})
+                changes.append({"hash": h, "action": "stop_and_label_after_check", "complete": False, "label": POST_CHECK_DOWNLOAD_LABEL})
+            _clear_post_check_watch(profile_id, h)
+        except Exception as exc:
+            changes.append({"hash": h, "action": "post_check_policy_failed", "error": str(exc)})
+    return changes
+
+
 TORRENT_FIELDS = [
     "d.hash=", "d.name=", "d.state=", "d.complete=", "d.size_bytes=", "d.completed_bytes=",
     "d.ratio=", "d.up.rate=", "d.down.rate=", "d.up.total=", "d.down.total=", "d.peers_connected=",
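The `_label_names` / `_label_value` pair added above normalizes rTorrent's free-form `d.custom1` label string into a de-duplicated list and back. A standalone sketch of that round-trip (public names used here instead of the underscore-prefixed originals):

```python
def label_names(value: str) -> list[str]:
    """Split a label string on ',', ';' or '|' into unique, trimmed names, order-preserving."""
    names: list[str] = []
    for part in str(value or "").replace(";", ",").replace("|", ",").split(","):
        label = part.strip()
        if label and label not in names:
            names.append(label)
    return names

def label_value(labels: list[str]) -> str:
    """Join label names back into the canonical comma-separated form, dropping blanks."""
    return ", ".join(label for label in labels if str(label or "").strip())
```

Because parsing de-duplicates and joining uses the same `", "` separator, a parse/join cycle is stable, which is what lets the post-check policy add and remove its marker label without disturbing user labels.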
@@ -287,7 +549,8 @@ def normalize_row(row: list) -> dict:
     is_active = int(row[21] or 0) if len(row) > 21 else int(row[2] or 0)
     state = int(row[2] or 0)
     complete = int(row[3] or 0)
-    is_checking = bool(hashing) or ("hash" in msg_l and ("check" in msg_l or "checking" in msg_l)) or "recheck" in msg_l
+    # Note: d.hashing is authoritative; stale "hash check complete" messages must not keep the UI in Checking forever.
+    is_checking = bool(hashing) or _message_indicates_active_check(msg_l)
     is_paused = bool(state) and not bool(is_active) and not is_checking
     status = "Checking" if is_checking else "Paused" if is_paused else "Seeding" if complete and state else "Downloading" if state else "Stopped"
     return {
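The status change above makes `d.hashing` the primary signal and only falls back to message text, where "finished" markers override "checking" markers. A behaviour sketch of that classifier (marker tuples trimmed for brevity relative to the full lists in the diff):

```python
def message_indicates_active_check(message: str) -> bool:
    """True only when the message suggests a check is in progress, not finished."""
    msg = str(message or "").lower()
    if not msg:
        return False
    finished = ("complete", "completed", "finished", "success", "succeeded", "failed", "done")
    if any(m in msg for m in finished):
        return False  # e.g. "Hash check complete" must not read as still checking
    active = ("checking", "hashing", "hash check queued", "rechecking")
    return any(m in msg for m in active)
```

The ordering matters: checking the finished markers first is what prevents a stale "hash check complete" message from pinning the row in the Checking state.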
@@ -646,31 +909,6 @@ def torrent_peers(profile: dict, torrent_hash: str) -> list[dict]:
     return peers


-def peer_action(profile: dict, torrent_hash: str, peer_index: int, action_name: str) -> dict:
-    if peer_index < 0:
-        raise ValueError("Invalid peer index")
-    methods = {
-        "disconnect": ["p.disconnect", "p.close"],
-        "kick": ["p.disconnect", "p.close"],
-        "snub": ["p.snub"],
-        "unsnub": ["p.unsnub"],
-        "ban": ["p.ban", "p.disconnect"],
-    }
-    candidates = methods.get(action_name)
-    if not candidates:
-        raise ValueError(f"Unknown peer action: {action_name}")
-    c = client_for(profile)
-    target = f"{torrent_hash}:p{int(peer_index)}"
-    errors = []
-    for method in candidates:
-        try:
-            c.call(method, target)
-            return {"ok": True, "action": action_name, "method": method, "peer_index": peer_index}
-        except Exception as exc:
-            errors.append(f"{method}: {exc}")
-    raise RuntimeError("; ".join(errors))
-
-
 def _call_first(c: ScgiRtorrentClient, candidates: list[tuple[str, tuple]]) -> dict:
@@ -684,6 +922,49 @@ def _call_first(c: ScgiRtorrentClient, candidates: list[tuple[str, tuple]]) -> d
     raise RuntimeError("; ".join(errors))


+def _tracker_domain(url: str) -> str:
+    raw = str(url or '').strip()
+    if not raw:
+        return ''
+    parsed = urlparse(raw if '://' in raw else f'http://{raw}')
+    host = (parsed.hostname or '').lower().strip('.')
+    if host.startswith('www.'):
+        host = host[4:]
+    return host
+
+
+def tracker_summary(profile: dict, torrent_hashes: list[str] | None = None, limit: int = 1000) -> dict:
+    """Return tracker domains grouped by torrent for the sidebar filter."""
+    # Note: Tracker summary is read-only and isolated from the normal torrent snapshot, so slow tracker RPC calls cannot break the main list.
+    hashes = [str(h or '').strip() for h in (torrent_hashes or []) if str(h or '').strip()]
+    if not hashes:
+        hashes = [t.get('hash') for t in list_torrents(profile) if t.get('hash')]
+    hashes = hashes[:max(1, int(limit or 1000))]
+    by_hash: dict[str, list[dict]] = {}
+    counts: dict[str, dict] = {}
+    errors = []
+    for h in hashes:
+        try:
+            items = []
+            seen = set()
+            for tr in torrent_trackers(profile, h):
+                url = str(tr.get('url') or '')
+                domain = _tracker_domain(url)
+                if not domain or domain in seen:
+                    continue
+                seen.add(domain)
+                item = {'domain': domain, 'url': url}
+                items.append(item)
+                row = counts.setdefault(domain, {'domain': domain, 'url': url, 'count': 0})
+                row['count'] += 1
+            by_hash[h] = items
+        except Exception as exc:
+            errors.append({'hash': h, 'error': str(exc)})
+            by_hash[h] = []
+    trackers = sorted(counts.values(), key=lambda x: (-int(x.get('count') or 0), str(x.get('domain') or '')))
+    return {'hashes': by_hash, 'trackers': trackers, 'errors': errors, 'scanned': len(hashes)}
+
+
 def _safe_tracker_call(c: ScgiRtorrentClient, method: str, target: str, default=None):
     try:
         return c.call(method, target)
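`_tracker_domain` above normalizes any announce URL to a bare hostname so trackers group correctly in the sidebar. Assuming `from urllib.parse import urlparse` is already imported at module top, it behaves like this standalone version:

```python
from urllib.parse import urlparse

def tracker_domain(url: str) -> str:
    """Reduce an announce URL (http/udp/bare host) to a lowercase domain without www."""
    raw = str(url or "").strip()
    if not raw:
        return ""
    # Bare hosts lack a scheme, so prepend one to make urlparse treat them as netloc.
    parsed = urlparse(raw if "://" in raw else f"http://{raw}")
    host = (parsed.hostname or "").lower().strip(".")
    return host[4:] if host.startswith("www.") else host
```

`urlparse().hostname` already lowercases and strips the port, which is why port 6969 or 443 never leaks into the grouping key.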
@@ -703,9 +984,39 @@ def _tracker_int(value, default=None):
     return default


+def _tracker_rows(c: ScgiRtorrentClient, torrent_hash: str) -> list[list]:
+    fields = ("t.url=", "t.is_enabled=", "t.scrape_complete=", "t.scrape_incomplete=", "t.scrape_downloaded=")
+    errors: list[str] = []
+    for args in ((torrent_hash, "", *fields), ("", torrent_hash, *fields)):
+        try:
+            rows = c.call("t.multicall", *args)
+            return [list(r) for r in (rows or [])]
+        except Exception as exc:
+            errors.append(f"t.multicall{args[:2]}: {exc}")
+    # Note: Fallback keeps the sidebar tracker filter usable on rTorrent builds without t.multicall scrape fields.
+    total = _tracker_int(_safe_tracker_call(c, "d.tracker_size", torrent_hash, 0), 0) or 0
+    rows: list[list] = []
+    for index in range(max(0, total)):
+        target = _tracker_target(torrent_hash, index)
+        url = _safe_tracker_call(c, "t.url", target, "")
+        if not url:
+            for args in ((torrent_hash, index), ("", torrent_hash, index)):
+                try:
+                    url = c.call("t.url", *args)
+                    break
+                except Exception:
+                    continue
+        if url:
+            enabled = _safe_tracker_call(c, "t.is_enabled", target, 1)
+            rows.append([url, enabled, None, None, None])
+    if rows:
+        return rows
+    raise RuntimeError("Cannot read trackers: " + "; ".join(errors))
+
+
 def torrent_trackers(profile: dict, torrent_hash: str) -> list[dict]:
     c = client_for(profile)
-    rows = c.t.multicall(torrent_hash, "", "t.url=", "t.is_enabled=", "t.scrape_complete=", "t.scrape_incomplete=", "t.scrape_downloaded=")
+    rows = _tracker_rows(c, torrent_hash)
     trackers = []
     for idx, r in enumerate(rows):
         target = _tracker_target(torrent_hash, idx)
@@ -749,17 +1060,23 @@ def tracker_action(profile: dict, torrent_hash: str, action_name: str, payload:
             ("d.tracker.insert", (torrent_hash, 0, url)),
             ("d.tracker.insert", ("", torrent_hash, "", url)),
         ])
-    if action_name == "edit":
-        url = str(payload.get("url") or "").strip()
+    if action_name in {"delete", "remove"}:
+        # Note: Deleting trackers is guarded to keep at least one tracker attached to the torrent.
         index = int(payload.get("index", -1))
         if index < 0:
             raise ValueError("Invalid tracker index")
-        if not url:
-            raise ValueError("Missing tracker URL")
-        target = _tracker_target(torrent_hash, index)
+        total = _tracker_int(_safe_tracker_call(c, "d.tracker_size", torrent_hash, 0), 0) or len(torrent_trackers(profile, torrent_hash))
+        if total <= 1:
+            raise ValueError("Cannot delete the last tracker")
+        if index >= total:
+            raise ValueError("Invalid tracker index")
         return _call_first(c, [
-            ("t.url.set", (target, url)),
-            ("t.url.set", ("", target, url)),
+            ("d.tracker.remove", (torrent_hash, index)),
+            ("d.tracker.remove", (torrent_hash, "", index)),
+            ("d.tracker.erase", (torrent_hash, index)),
+            ("d.tracker.erase", (torrent_hash, "", index)),
+            ("d.tracker.delete", (torrent_hash, index)),
+            ("d.tracker.delete", (torrent_hash, "", index)),
         ])
     raise ValueError(f"Unknown tracker action: {action_name}")

@@ -984,14 +1301,145 @@ def apply_startup_overrides(profile: dict) -> dict:
         return {"ok": True, "updated": [], "errors": [], "skipped": True}
     return set_config(profile, values, apply_now=True, apply_on_start=True)


+def _int_rpc(c: ScgiRtorrentClient, method: str, h: str, default: int = 0) -> int:
+    try:
+        return int(c.call(method, h) or 0)
+    except Exception:
+        return default
+
+
+def _str_rpc(c: ScgiRtorrentClient, method: str, h: str, default: str = '') -> str:
+    try:
+        return str(c.call(method, h) or '')
+    except Exception:
+        return default
+
+
+def _download_runtime_state(c: ScgiRtorrentClient, h: str) -> dict:
+    """Read rTorrent state using the native pause model: stopped, paused or active."""
+    state = _int_rpc(c, 'd.state', h)
+    active = _int_rpc(c, 'd.is_active', h)
+    opened = _int_rpc(c, 'd.is_open', h)
+    # Note: In rTorrent, pause does not change d.state. Paused means state=1, open=1, active=0.
+    return {
+        'state': state,
+        'open': opened,
+        'active': active,
+        'paused': bool(state and opened and not active),
+        'stopped': not bool(state),
+        'message': _str_rpc(c, 'd.message', h),
+    }
+
+
+def pause_hash(c: ScgiRtorrentClient, torrent_hash: str) -> dict:
+    """Pause an active rTorrent item without stopping or closing it."""
+    h = str(torrent_hash or '')
+    if not h:
+        return {'hash': h, 'ok': False, 'error': 'missing hash'}
+    before = _download_runtime_state(c, h)
+    result = {'hash': h, 'before': before, 'commands': []}
+    try:
+        # Note: Smart Queue frees a slot with d.pause, not d.stop, so later d.resume behaves like ruTorrent.
+        c.call('d.pause', h)
+        result['commands'].append('d.pause')
+        result['after'] = _download_runtime_state(c, h)
+        result['ok'] = True
+    except Exception as exc:
+        result.update({'ok': False, 'error': str(exc), 'after': _download_runtime_state(c, h)})
+    return result
+
+
+def stop_hash(c: ScgiRtorrentClient, torrent_hash: str) -> dict:
+    """Stop an active rTorrent item without using pause semantics."""
+    h = str(torrent_hash or '')
+    if not h:
+        return {'hash': h, 'ok': False, 'error': 'missing hash'}
+    before = _download_runtime_state(c, h)
+    result = {'hash': h, 'before': before, 'commands': []}
+    if before.get('stopped'):
+        result.update({'ok': True, 'skipped': 'already_stopped', 'after': before})
+        return result
+    try:
+        # Note: Smart Queue now enforces the queue with d.stop only; user-paused torrents stay untouched.
+        c.call('d.stop', h)
+        result['commands'].append('d.stop')
+        result['after'] = _download_runtime_state(c, h)
+        result['ok'] = True
+    except Exception as exc:
+        result.update({'ok': False, 'error': str(exc), 'after': _download_runtime_state(c, h)})
+    return result
+
+
+def resume_paused_hash(c: ScgiRtorrentClient, torrent_hash: str) -> dict:
+    """Resume only a paused rTorrent item; never convert it through stop/start."""
+    h = str(torrent_hash or '')
+    if not h:
+        return {'hash': h, 'ok': False, 'error': 'missing hash'}
+    before = _download_runtime_state(c, h)
+    result: dict = {'hash': h, 'before': before, 'commands': []}
+    if before.get('stopped'):
+        result.update({'ok': False, 'skipped': 'stopped_not_paused', 'after': before})
+        return result
+    if before.get('active'):
+        result.update({'ok': True, 'skipped': 'already_active', 'after': before})
+        return result
+    try:
+        # Note: ruTorrent unpauses with the equivalent of d.resume. Do not add d.start/d.open,
+        # because those commands belong to Stopped/Open state, not a clean Paused state.
+        c.call('d.resume', h)
+        result['commands'].append('d.resume')
+        result['after'] = _download_runtime_state(c, h)
+        result['ok'] = True
+    except Exception as exc:
+        result.update({'ok': False, 'error': str(exc), 'after': _download_runtime_state(c, h)})
+    return result
+
+
+def start_or_resume_hash(c: ScgiRtorrentClient, torrent_hash: str) -> dict:
+    """Start stopped torrents or resume torrents paused with d.pause, without mixing both paths."""
+    h = str(torrent_hash or '')
+    if not h:
+        return {'hash': h, 'ok': False, 'error': 'missing hash'}
+    before = _download_runtime_state(c, h)
+    result: dict = {'hash': h, 'before': before, 'commands': []}
+
+    if before.get('active'):
+        result.update({'ok': True, 'skipped': 'already_active', 'after': before})
+        return result
+
+    if before.get('paused') or (before.get('state') and not before.get('active')):
+        # Note: Paused rTorrent items are resumed only with d.resume; d.start is intentionally skipped here.
+        resumed = resume_paused_hash(c, h)
+        resumed['mode'] = 'resume_paused'
+        return resumed
+
+    try:
+        # Note: d.start remains only for Stopped/closed items, not for the pause-to-resume path.
+        c.call('d.open', h)
+        result['commands'].append('d.open')
+    except Exception as exc:
+        result.setdefault('ignored_errors', []).append(f'd.open: {exc}')
+    try:
+        c.call('d.start', h)
+        result['commands'].append('d.start')
+    except Exception as exc:
+        result.setdefault('ignored_errors', []).append(f'd.start: {exc}')
+        try:
+            c.call('d.try_start', h)
+            result['commands'].append('d.try_start')
+        except Exception as exc2:
+            result.setdefault('ignored_errors', []).append(f'd.try_start: {exc2}')
+            result['ok'] = False
+    result['after'] = _download_runtime_state(c, h)
+    result['ok'] = result.get('ok', True)
+    return result
+
+
 def action(profile: dict, torrent_hashes: list[str], name: str, payload: dict | None = None) -> dict:
     payload = payload or {}
     c = client_for(profile)
     methods = {
-        "start": "d.start",
-        "pause": "d.pause",
         "stop": "d.stop",
-        "resume": "d.resume",
         "recheck": "d.check_hash",
         "reannounce": "d.tracker_announce",
         "remove": "d.erase",
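The helpers above all lean on rTorrent's three-flag state model (`d.state`, `d.is_open`, `d.is_active`): a paused item keeps state=1 and open=1 while active drops to 0. A pure-function sketch of that classification (the `classify` name and status words are illustrative, not the codebase's):

```python
def classify(state: int, opened: int, active: int) -> str:
    """Map rTorrent's d.state / d.is_open / d.is_active flags to one status word."""
    if not state:
        return "stopped"   # d.stop was issued (or the item was never started)
    if opened and not active:
        return "paused"    # d.pause leaves state=1, open=1, active=0
    return "active"
```

This is why `resume_paused_hash` refuses stopped items: `d.resume` only makes sense when the flags say "paused", and `d.start` only when they say "stopped".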
@@ -1010,13 +1458,15 @@ def action(profile: dict, torrent_hashes: list[str], name: str, payload: dict | None = None) -> dict:
         path = _remote_clean_path(payload.get("path") or "")
         move_data = bool(payload.get("move_data"))
         recheck = bool(payload.get("recheck", move_data))
+        keep_seeding = bool(payload.get("keep_seeding"))
+        # Note: Automations can force seeding after a physical move even if the torrent was not active before.
         if not path:
             raise ValueError("Missing path")
         results = []
         if move_data:
             _rt_execute_allow_timeout(c, "execute.throw", "mkdir", "-p", path)
         for h in torrent_hashes:
-            item = {"hash": h, "path": path, "move_data": move_data}
+            item = {"hash": h, "path": path, "move_data": move_data, "keep_seeding": keep_seeding}
             try:
                 was_state = int(c.call("d.state", h) or 0)
             except Exception:
@@ -1050,15 +1500,29 @@ def action(profile: dict, torrent_hashes: list[str], name: str, payload: dict | None = None) -> dict:
                     c.call("d.check_hash", h)
                 except Exception as exc:
                     item["recheck_error"] = str(exc)
-            if was_state or was_active:
+            if keep_seeding or was_state or was_active:
                 try:
                     c.call("d.start", h)
-                except Exception:
-                    pass
+                    item["started_after_move"] = True
+                except Exception as exc:
+                    item["start_after_move_error"] = str(exc)
             else:
                 c.call("d.directory.set", h, path)
             results.append(item)
-        return {"ok": True, "count": len(torrent_hashes), "move_data": move_data, "results": results}
+        return {"ok": True, "count": len(torrent_hashes), "move_data": move_data, "keep_seeding": keep_seeding, "results": results}
+    if name == "pause":
+        # Note: The app pause action is now a pure d.pause so later resume works without stop/start.
+        results = [pause_hash(c, h) for h in torrent_hashes]
+        return {"ok": True, "count": len(torrent_hashes), "remove_data": False, "results": results}
+    if name in {"resume", "unpause"}:
+        # Note: Resume/Unpause uses only d.resume for Paused state.
+        results = [resume_paused_hash(c, h) for h in torrent_hashes]
+        return {"ok": True, "count": len(torrent_hashes), "remove_data": False, "results": results}
+    if name == "start":
+        # Note: Start separates Stopped from Paused; paused items go through d.resume, stopped items through d.start.
+        results = [start_or_resume_hash(c, h) for h in torrent_hashes]
+        return {"ok": True, "count": len(torrent_hashes), "remove_data": False, "results": results}
+
     method = methods.get(name)
     if not method:
         raise ValueError(f"Unknown action: {name}")
@@ -1068,6 +1532,9 @@ def action(profile: dict, torrent_hashes: list[str], name: str, payload: dict | None = None) -> dict:
         if remove_data:
             results.append(_remove_torrent_data(c, h))
         c.call(method, h)
+        if name == "recheck":
+            # Note: Recheck is tracked so even very fast checks still receive the after-check start/stop policy.
+            _mark_post_check_watch(int(profile.get("id") or 0), h)
     return {"ok": True, "count": len(torrent_hashes), "remove_data": remove_data, "results": results}


 def add_magnet(profile: dict, uri: str, start: bool = True, directory: str = "", label: str = "") -> dict:
@@ -5,7 +5,7 @@ from typing import Any
 import json
 import time

-from ..config import SMART_QUEUE_LABEL
+from ..config import SMART_QUEUE_LABEL, SMART_QUEUE_STALLED_LABEL
 from ..db import connect, default_user_id, utcnow
 from . import rtorrent
 from .preferences import active_profile, get_profile
@@ -20,6 +20,14 @@ def _ts(value: str | None) -> float:
     return 0.0


+def _int_setting(data: dict[str, Any], current: dict[str, Any], key: str, default: int, minimum: int = 0) -> int:
+    raw = data.get(key) if key in data else current.get(key)
+    try:
+        return max(minimum, int(raw if raw is not None and raw != '' else default))
+    except (TypeError, ValueError):
+        return max(minimum, int(default))
+
+
 def _default_settings(user_id: int, profile_id: int) -> dict[str, Any]:
     return {
         'user_id': user_id,
@@ -29,6 +37,10 @@ def _default_settings(user_id: int, profile_id: int) -> dict[str, Any]:
         'stalled_seconds': 300,
         'min_speed_bytes': 1024,
         'min_seeds': 1,
+        'min_peers': 0,
+        'ignore_seed_peer': 0,
+        'ignore_speed': 0,
+        'manage_stopped': 1,
         'updated_at': utcnow(),
     }

@@ -48,24 +60,36 @@ def save_settings(profile_id: int, data: dict[str, Any], user_id: int | None = N
     current = get_settings(profile_id, user_id)
     settings = {
         'enabled': 1 if data.get('enabled', current.get('enabled')) else 0,
-        'max_active_downloads': max(1, int(data.get('max_active_downloads') or current.get('max_active_downloads') or 5)),
-        'stalled_seconds': max(30, int(data.get('stalled_seconds') or current.get('stalled_seconds') or 300)),
-        'min_speed_bytes': max(0, int(data.get('min_speed_bytes') or current.get('min_speed_bytes') or 0)),
-        'min_seeds': max(0, int(data.get('min_seeds') or current.get('min_seeds') or 0)),
+        'max_active_downloads': _int_setting(data, current, 'max_active_downloads', 5, 1),
+        'stalled_seconds': _int_setting(data, current, 'stalled_seconds', 300, 30),
+        'min_speed_bytes': _int_setting(data, current, 'min_speed_bytes', 0, 0),
+        'min_seeds': _int_setting(data, current, 'min_seeds', 0, 0),
+        # Note: Min peers is optional; when set, stalled detection requires low speed, low seeds and low peers.
+        'min_peers': _int_setting(data, current, 'min_peers', 0, 0),
+        # Note: Ignore seed/peer removes source counts from stalled detection, useful when sources appear rarely.
+        'ignore_seed_peer': 1 if data.get('ignore_seed_peer', current.get('ignore_seed_peer')) else 0,
+        # Note: Ignore speed removes low transfer rate from stalled detection; with both ignores enabled only stalled_seconds matters.
+        'ignore_speed': 1 if data.get('ignore_speed', current.get('ignore_speed')) else 0,
+        # Note: Compatibility field retained; enabled Smart Queue always manages stopped torrents and never manages user-paused torrents.
+        'manage_stopped': 1,
     }
     now = utcnow()
     with connect() as conn:
         conn.execute(
-            '''INSERT INTO smart_queue_settings(user_id,profile_id,enabled,max_active_downloads,stalled_seconds,min_speed_bytes,min_seeds,updated_at)
-            VALUES(?,?,?,?,?,?,?,?)
+            '''INSERT INTO smart_queue_settings(user_id,profile_id,enabled,max_active_downloads,stalled_seconds,min_speed_bytes,min_seeds,min_peers,ignore_seed_peer,ignore_speed,manage_stopped,updated_at)
+            VALUES(?,?,?,?,?,?,?,?,?,?,?,?)
             ON CONFLICT(user_id, profile_id) DO UPDATE SET
             enabled=excluded.enabled,
             max_active_downloads=excluded.max_active_downloads,
             stalled_seconds=excluded.stalled_seconds,
             min_speed_bytes=excluded.min_speed_bytes,
             min_seeds=excluded.min_seeds,
+            min_peers=excluded.min_peers,
+            ignore_seed_peer=excluded.ignore_seed_peer,
+            ignore_speed=excluded.ignore_speed,
+            manage_stopped=excluded.manage_stopped,
             updated_at=excluded.updated_at''',
-            (user_id, profile_id, settings['enabled'], settings['max_active_downloads'], settings['stalled_seconds'], settings['min_speed_bytes'], settings['min_seeds'], now),
+            (user_id, profile_id, settings['enabled'], settings['max_active_downloads'], settings['stalled_seconds'], settings['min_speed_bytes'], settings['min_seeds'], settings['min_peers'], settings['ignore_seed_peer'], settings['ignore_speed'], settings['manage_stopped'], now),
         )
     return get_settings(profile_id, user_id)

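The save path above relies on SQLite's `INSERT ... ON CONFLICT ... DO UPDATE` with the `excluded` pseudo-table. A minimal reproduction of that upsert shape (the two-column table here is illustrative, not the app's real schema):

```python
import sqlite3

# One row per (user_id, profile_id); a second insert with the same key
# updates the stored row via excluded.* instead of failing or duplicating.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE smart_queue_settings(
           user_id INTEGER, profile_id INTEGER,
           min_peers INTEGER, updated_at TEXT,
           UNIQUE(user_id, profile_id))"""
)
sql = """INSERT INTO smart_queue_settings(user_id, profile_id, min_peers, updated_at)
         VALUES(?,?,?,?)
         ON CONFLICT(user_id, profile_id) DO UPDATE SET
             min_peers=excluded.min_peers,
             updated_at=excluded.updated_at"""
conn.execute(sql, (1, 1, 0, "t0"))
conn.execute(sql, (1, 1, 5, "t1"))  # same key: updates in place
rows = conn.execute("SELECT min_peers, updated_at FROM smart_queue_settings").fetchall()
```

The conflict target must match a UNIQUE constraint; this form needs SQLite 3.24 or newer.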
@@ -128,6 +152,60 @@ def _excluded_hashes(profile_id: int, user_id: int) -> set[str]:
     return {r['torrent_hash'] for r in list_exclusions(profile_id, user_id)}


+
+def _label_names(value: str | None) -> list[str]:
+    names: list[str] = []
+    for part in str(value or '').replace(';', ',').replace('|', ',').split(','):
+        label = part.strip()
+        if label and label not in names:
+            names.append(label)
+    return names
+
+
+def _label_value(labels: list[str]) -> str:
+    output: list[str] = []
+    for label in labels:
+        item = str(label or '').strip()
+        if item and item not in output:
+            output.append(item)
+    return ', '.join(output)
+
+
+def _has_smart_queue_label(value: str | None) -> bool:
+    return SMART_QUEUE_LABEL in _label_names(value)
+
+
+def _without_smart_queue_label(value: str | None) -> str:
+    return _label_value([label for label in _label_names(value) if label != SMART_QUEUE_LABEL])
+
+
+def _has_stalled_label(value: str | None) -> bool:
+    # Note: Stalled is treated case-insensitively so manually edited labels still block Smart Queue.
+    target = SMART_QUEUE_STALLED_LABEL.casefold()
+    return any(label.casefold() == target for label in _label_names(value))
+
+
+def _without_queue_technical_labels(value: str | None) -> str:
+    return _label_value([label for label in _label_names(value) if label != SMART_QUEUE_LABEL])
+
+
+def _ensure_stalled_label(client: Any, torrent_hash: str, current_label: str = '') -> bool:
+    labels = [label for label in _label_names(current_label) if label != SMART_QUEUE_LABEL]
+    changed = False
+    if not any(label.casefold() == SMART_QUEUE_STALLED_LABEL.casefold() for label in labels):
+        labels.append(SMART_QUEUE_STALLED_LABEL)
+        changed = True
+    if SMART_QUEUE_LABEL in _label_names(current_label):
+        changed = True
+    if not changed:
+        return True
+    try:
+        # Note: Stalled marking is idempotent; it adds Stalled and removes only the Smart Queue technical marker.
+        client.call('d.custom1.set', torrent_hash, _label_value(labels))
+        return True
+    except Exception:
+        return False
+
 def _remember_auto_label(profile_id: int, torrent_hash: str, previous_label: str) -> None:
     now = utcnow()
     with connect() as conn:
@@ -147,28 +225,153 @@ def _remember_auto_label(profile_id: int, torrent_hash: str, previous_label: str
         )


+def _read_label(client: Any, torrent_hash: str, fallback: str = '') -> str:
+    try:
+        return str(client.call('d.custom1', torrent_hash) or '')
+    except Exception:
+        return fallback
+
+
 def _restore_auto_label(client: Any, profile_id: int, torrent_hash: str, current_label: str | None = None) -> bool:
     with connect() as conn:
         row = conn.execute(
             'SELECT previous_label FROM smart_queue_auto_labels WHERE profile_id=? AND torrent_hash=?',
             (profile_id, torrent_hash),
         ).fetchone()
+        live_label = _read_label(client, torrent_hash, current_label or '')
         if not row:
+            if not _has_smart_queue_label(live_label):
                 return False
-        previous = row.get('previous_label') or ''
             try:
-            if current_label is None or current_label == SMART_QUEUE_LABEL:
-                client.call('d.custom1.set', torrent_hash, previous)
+                # Note: Remove only the Smart Queue technical label and keep every user label untouched.
+                client.call('d.custom1.set', torrent_hash, _without_smart_queue_label(live_label))
+                return True
+            except Exception:
+                return False
+        try:
+            # Note: Starting a torrent removes only Smart Queue's technical marker, so labels added while stopped stay untouched.
+            if _has_smart_queue_label(live_label):
+                client.call('d.custom1.set', torrent_hash, _without_smart_queue_label(live_label))
             conn.execute('DELETE FROM smart_queue_auto_labels WHERE profile_id=? AND torrent_hash=?', (profile_id, torrent_hash))
             return True
         except Exception:
             return False


-def _set_smart_queue_label(client: Any, torrent_hash: str, attempts: int = 3) -> bool:


+def _call_rtorrent_setter(client: Any, method: str, value: int) -> bool:
+    """Set a scalar rTorrent setting while tolerating XMLRPC signature differences."""
+    for args in ((int(value),), ('', int(value))):
+        try:
+            client.call(method, *args)
+            return True
+        except Exception:
+            continue
+    return False
+
+
+def _ensure_rtorrent_download_cap(client: Any, max_active: int) -> dict[str, Any]:
+    """Raise rTorrent download caps that can silently limit Smart Queue to one item."""
+    result: dict[str, Any] = {'checked': False, 'updated': False, 'items': []}
+    # Note: rTorrent may have separate global and per-throttle limits. When div=1,
+    # starts can effectively stop at one active torrent even when the target is 100.
+    for key in ('throttle.max_downloads.global', 'throttle.max_downloads.div'):
+        item: dict[str, Any] = {'key': key, 'checked': False, 'updated': False}
+        try:
+            current = int(client.call(key) or 0)
+            item.update({'checked': True, 'current': current, 'target': int(max_active)})
+            result['checked'] = True
+            # Note: 0 means unlimited; raise only positive limits lower than the target.
+            if 0 < current < max_active:
+                ok = _call_rtorrent_setter(client, f'{key}.set', int(max_active))
+                item['updated'] = ok
+                if ok:
+                    result['updated'] = True
+                    item['new'] = int(max_active)
+                    result.setdefault('current', current)
+                    result['new'] = int(max_active)
+        except Exception as exc:
+            item.update({'error': str(exc)})
+        result['items'].append(item)
+    return result
+
+
+def _start_download(client: Any, torrent: dict[str, Any]) -> dict[str, Any]:
+    """Start only stopped Smart Queue candidates; paused torrents are a user decision."""
+    h = str(torrent.get('hash') or '')
+    if not h:
+        return {'hash': h, 'ok': False, 'error': 'missing hash'}
+    if _is_user_paused(torrent):
+        # Note: Smart Queue never unpauses user-paused torrents; it manages only stopped items.
+        return {'hash': h, 'ok': False, 'skipped': 'user_paused'}
+    # Note: Smart Queue uses the same helper as the manual Start action, so start behavior stays identical.
+    return rtorrent.start_or_resume_hash(client, h)
+
+
+def _verify_started_downloads(client: Any, hashes: list[str], attempts: int = 10, delay: float = 0.5) -> tuple[list[str], list[dict[str, Any]]]:
+    """Verify starts after rTorrent has time to process manual-equivalent start commands."""
+    pending = [h for h in hashes if h]
+    started: list[str] = []
+    no_effect: list[dict[str, Any]] = []
+    seen_started: set[str] = set()
+    last_state: dict[str, dict[str, Any]] = {}
+
+    for attempt in range(max(1, attempts)):
+        if attempt:
+            time.sleep(delay)
+        for h in list(pending):
+            live = _read_live_start_state(client, h)
+            last_state[h] = live
+            if live.get('started'):
+                seen_started.add(h)
+                pending.remove(h)
+        if not pending:
+            break
+
+    started = [h for h in hashes if h in seen_started]
+    no_effect = [last_state.get(h, {'hash': h, 'started': False}) for h in hashes if h and h not in seen_started]
+    return started, no_effect
+
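The verification loop above can be exercised without rTorrent by injecting the state reader. A simplified sketch (reader and hashes are illustrative; the real code polls `_read_live_start_state` over XMLRPC):

```python
import time

# SlowStart stands in for _read_live_start_state: "AAA" reports started at
# once, "BBB" only after a few polls, like a torrent rTorrent starts slowly.
class SlowStart:
    def __init__(self):
        self.polls = 0

    def read(self, h):
        self.polls += 1
        return {'hash': h, 'started': h == 'AAA' or self.polls > 3}

def verify_started(reader, hashes, attempts=5, delay=0.0):
    # Same shape as _verify_started_downloads: re-poll pending hashes until
    # every one reports started or the attempt budget runs out.
    pending = [h for h in hashes if h]
    seen = set()
    last = {}
    for attempt in range(max(1, attempts)):
        if attempt:
            time.sleep(delay)
        for h in list(pending):
            live = reader(h)
            last[h] = live
            if live.get('started'):
                seen.add(h)
                pending.remove(h)
        if not pending:
            break
    started = [h for h in hashes if h in seen]
    no_effect = [last.get(h, {'hash': h, 'started': False}) for h in hashes if h and h not in seen]
    return started, no_effect
```

Returning the last observed state for hashes that never started is what feeds the "no effect" diagnostics.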
+def _read_live_start_state(client: Any, torrent_hash: str) -> dict[str, Any]:
+    result: dict[str, Any] = {'hash': torrent_hash}
+    fields = (
+        ('state', 'd.state'),
+        ('active', 'd.is_active'),
+        ('open', 'd.is_open'),
+        ('priority', 'd.priority'),
+        ('message', 'd.message'),
+        ('label', 'd.custom1'),
+    )
+    for key, method in fields:
+        try:
+            value = client.call(method, torrent_hash)
+            result[key] = int(value or 0) if key in {'state', 'active', 'open', 'priority'} else str(value or '')
+        except Exception as exc:
+            result[f'{key}_error'] = str(exc)
+    # Note: Manual Start in rTorrent is successful when d.state becomes 1.
+    # d.is_active can stay 0 for queued/idle downloads, so it must not be used as the only success check.
+    result['started'] = bool(int(result.get('state') or 0) or int(result.get('active') or 0))
+    return result
+
+
+def _is_user_paused(torrent: dict[str, Any]) -> bool:
+    """Return True for torrents paused by the user; Smart Queue must not touch them."""
+    status = str(torrent.get('status') or '').lower()
+    return bool(torrent.get('paused')) or status == 'paused'
+
+def _set_smart_queue_label(client: Any, torrent_hash: str, current_label: str = '', attempts: int = 3) -> bool:
+    labels = _label_names(current_label)
+    if SMART_QUEUE_LABEL in labels:
+        return True
+    labels.append(SMART_QUEUE_LABEL)
+    value = _label_value(labels)
     for attempt in range(max(1, attempts)):
         try:
-            client.call('d.custom1.set', torrent_hash, SMART_QUEUE_LABEL)
+            # Note: Smart Queue appends its technical label instead of overwriting existing torrent labels.
+            client.call('d.custom1.set', torrent_hash, value)
             return True
         except Exception:
             if attempt < attempts - 1:
@@ -176,41 +379,131 @@ def _set_smart_queue_label(client: Any, torrent_hash: str, attempts: int = 3) ->
             return False


-def _mark_auto_paused(client: Any, profile_id: int, torrent: dict[str, Any]) -> bool:
+def _mark_auto_stopped(client: Any, profile_id: int, torrent: dict[str, Any]) -> bool:
     torrent_hash = str(torrent.get('hash') or '')
     if not torrent_hash:
         return False
     previous = str(torrent.get('label') or '')
-    if previous != SMART_QUEUE_LABEL:
+    if not _has_smart_queue_label(previous):
         _remember_auto_label(profile_id, torrent_hash, previous)
-    return _set_smart_queue_label(client, torrent_hash)
+    return _set_smart_queue_label(client, torrent_hash, previous)


-def _cleanup_auto_labels(client: Any, profile_id: int, torrents: list[dict[str, Any]], keep_hashes: set[str]) -> list[str]:
+def _is_started_download_slot(torrent: dict[str, Any] | None) -> bool:
+    """Return True for incomplete torrents already started in rTorrent, including manual starts."""
+    if not torrent or int(torrent.get('complete') or 0):
+        return False
+    status = str(torrent.get('status') or '').lower()
+    if status == 'checking':
+        return False
+    # Note: Manual Start changes d.state first; d.is_active may stay 0 while rTorrent is queued or idle.
+    return bool(int(torrent.get('state') or 0) or int(torrent.get('active') or 0))
+
+
+def _is_smart_queue_hold(torrent: dict[str, Any] | None, manage_stopped: bool = True) -> bool:
+    if not torrent or int(torrent.get('complete') or 0):
+        return False
+    if _is_started_download_slot(torrent):
+        # Note: A manual start can leave the Smart Queue label behind; started items are active slots, not holds.
+        return False
+    if _has_stalled_label(str(torrent.get('label') or '')):
+        return False
+    if _is_user_paused(torrent):
+        # Note: Paused torrents are always treated as user-controlled and are not Smart Queue holds.
+        return False
+    if _has_smart_queue_label(str(torrent.get('label') or '')):
+        return True
+    # Note: Smart Queue manages stopped torrents by default; the old manage_stopped flag is ignored for compatibility.
+    return not int(torrent.get('state') or 0)
+
+
+def _clear_untracked_smart_queue_label(client: Any, torrent_hash: str, current_label: str) -> bool:
+    if not _has_smart_queue_label(current_label):
+        return False
+    try:
+        # Note: Clear only the orphaned Smart Queue marker and keep unrelated labels intact.
+        client.call('d.custom1.set', torrent_hash, _without_smart_queue_label(current_label))
+        return True
+    except Exception:
+        return False
+
+
+def _cleanup_auto_labels(client: Any, profile_id: int, torrents: list[dict[str, Any]], keep_hashes: set[str], manage_stopped: bool = True) -> list[str]:
     by_hash = {str(t.get('hash') or ''): t for t in torrents}
     restored: list[str] = []
     with connect() as conn:
         rows = conn.execute('SELECT torrent_hash FROM smart_queue_auto_labels WHERE profile_id=?', (profile_id,)).fetchall()
+        tracked_hashes = {str(row.get('torrent_hash') or '') for row in rows if row.get('torrent_hash')}
+
         for row in rows:
             h = str(row.get('torrent_hash') or '')
             t = by_hash.get(h)
             if not h or h in keep_hashes:
                 continue
-            if t is None or int(t.get('complete') or 0):
-                if _restore_auto_label(client, profile_id, h, None if t is None else str(t.get('label') or '')):
+            current_label = '' if t is None else str(t.get('label') or '')
+            if not _is_smart_queue_hold(t, manage_stopped):
+                if _restore_auto_label(client, profile_id, h, None if t is None else current_label):
                     restored.append(h)
                 continue
-            is_paused_or_stopped = bool(t.get('paused')) or not int(t.get('active') or 0) or not int(t.get('state') or 0)
-            current_label = str(t.get('label') or '')
-            if is_paused_or_stopped:
-                if current_label != SMART_QUEUE_LABEL:
-                    _set_smart_queue_label(client, h)
-                continue
-            if _restore_auto_label(client, profile_id, h, current_label):
+            if not _has_smart_queue_label(current_label):
+                _set_smart_queue_label(client, h, current_label)
+
+        for h, t in by_hash.items():
+            if not h or h in keep_hashes or h in tracked_hashes or _is_smart_queue_hold(t, manage_stopped):
+                continue
+            if _clear_untracked_smart_queue_label(client, h, str(t.get('label') or '')):
                 restored.append(h)
     return restored


+def _is_running_download_slot(t: dict[str, Any]) -> bool:
+    """Return True for incomplete torrents that already occupy a Smart Queue slot."""
+    # Note: Do not exclude Smart Queue/Stalled labels here. Manual Start can leave old labels,
+    # and those torrents still must count toward the global Smart Queue limit.
+    return _is_started_download_slot(t)
+
+
+def _is_stalled_download(t: dict[str, Any], min_speed: int, min_seeds: int, min_peers: int, ignore_seed_peer: bool, ignore_speed: bool) -> bool:
+    """Return True when a started torrent should begin or continue the stalled timer."""
+    # Note: Each ignore switch only removes its own criterion; the stalled timer is still respected after criteria match.
+    speed_ok = True if ignore_speed else int(t.get('down_rate') or 0) <= max(0, int(min_speed or 0))
+    source_ok = True if ignore_seed_peer else int(t.get('seeds') or 0) <= max(0, int(min_seeds or 0)) and (min_peers <= 0 or int(t.get('peers') or 0) <= min_peers)
+    return speed_ok and source_ok
+
+
+def _stalled_timer_key(min_speed: int, min_seeds: int, min_peers: int, stalled_seconds: int, ignore_seed_peer: bool, ignore_speed: bool) -> str:
+    """Return a stable key for the stalled rules that started the current timer."""
+    # Note: Changing ignore switches or thresholds restarts existing stalled timers instead of reusing old rows.
+    return f"v2|speed={int(min_speed or 0)}|seeds={int(min_seeds or 0)}|peers={int(min_peers or 0)}|seconds={int(stalled_seconds or 0)}|ignore_sources={int(bool(ignore_seed_peer))}|ignore_speed={int(bool(ignore_speed))}"
+
+
+def _is_low_activity_download(t: dict[str, Any], min_speed: int, min_seeds: int, min_peers: int, ignore_seed_peer: bool = False, ignore_speed: bool = False) -> bool:
+    """Return True when a started torrent is weak and should be stopped first."""
+    # Note: Stop priority uses only criteria that are not ignored, so disabled criteria cannot stop torrents earlier.
+    low_speed = False if ignore_speed else int(t.get('down_rate') or 0) <= max(0, int(min_speed or 0))
+    low_seeds = False if ignore_seed_peer else int(t.get('seeds') or 0) <= max(0, int(min_seeds or 0))
+    low_peers = False if ignore_seed_peer or min_peers <= 0 else int(t.get('peers') or 0) <= max(0, int(min_peers or 0))
+    return low_speed or low_seeds or low_peers
+
+
+def _is_waiting_download_candidate(t: dict[str, Any], manage_stopped: bool) -> bool:
+    """Return True for stopped torrents Smart Queue may start later."""
+    if int(t.get('complete') or 0):
+        return False
+    if str(t.get('status') or '').lower() == 'checking':
+        # Note: Torrents still being checked must finish post-check handling before Smart Queue may start them.
+        return False
+    if _has_stalled_label(str(t.get('label') or '')):
+        return False
+    if _is_user_paused(t):
+        # Note: User-paused torrents are never candidates, even when they have no Smart Queue label.
+        return False
+    if _has_smart_queue_label(str(t.get('label') or '')):
+        return True
+    # Note: Enabled Smart Queue manages all stopped torrents; no separate stopped-torrent switch is needed.
+    return not int(t.get('state') or 0)
+
+
 def check(profile: dict | None = None, user_id: int | None = None, force: bool = False) -> dict[str, Any]:
     profile = profile or active_profile()
     if not profile:
@@ -219,34 +512,78 @@ def check(profile: dict | None = None, user_id: int | None = None, force: bool =
     profile_id = int(profile['id'])
     settings = get_settings(profile_id, user_id)
     if not force and not int(settings.get('enabled') or 0):
-        add_history(profile_id, 'skipped_disabled', [], [], 0, {'enabled': False}, user_id)
-        return {'ok': True, 'enabled': False, 'paused': [], 'resumed': [], 'message': 'Smart Queue disabled'}
+        restored: list[str] = []
+        try:
+            # Note: When Smart Queue is disabled, only technical labels are cleaned up, without starting or pausing torrents.
+            torrents = rtorrent.list_torrents(profile)
+            restored = _cleanup_auto_labels(rtorrent.client_for(profile), profile_id, torrents, set(), True)
+        except Exception:
+            restored = []
+        add_history(profile_id, 'skipped_disabled', [], [], 0, {'enabled': False, 'labels_restored': restored}, user_id)
+        return {'ok': True, 'enabled': False, 'paused': [], 'resumed': [], 'stopped': [], 'started': [], 'labels_restored': restored, 'message': 'Smart Queue disabled'}
+
     torrents = rtorrent.list_torrents(profile)
-    excluded = _excluded_hashes(profile_id, user_id)
-    downloading = [t for t in torrents if not int(t.get('complete') or 0) and int(t.get('state') or 0) and not t.get('paused') and t.get('hash') not in excluded]
-    stopped = [t for t in torrents if not int(t.get('complete') or 0) and (not int(t.get('state') or 0) or t.get('paused')) and t.get('hash') not in excluded]
+    # Note: Stalled labels block automatic starting only; a manually started Stalled item still counts as a running slot.
+    stalled_label_hashes = {str(t.get('hash') or '') for t in torrents if _has_stalled_label(str(t.get('label') or '')) and t.get('hash')}
+    user_excluded = _excluded_hashes(profile_id, user_id)
+    manage_stopped = True
+
+    # Note: Count every started incomplete torrent, including items started manually and items with old Smart Queue labels.
+    downloading = [
+        t for t in torrents
+        if _is_running_download_slot(t)
+        and str(t.get('hash') or '') not in user_excluded
+    ]
+    # Note: Waiting candidates are stopped queue holds only; Stalled labels are not auto-started again.
+    stopped = [
+        t for t in torrents
+        if str(t.get('hash') or '') not in user_excluded
+        and str(t.get('hash') or '') not in stalled_label_hashes
+        and _is_waiting_download_candidate(t, manage_stopped)
+        and not _is_running_download_slot(t)
+    ]
+    manual_labeled_running = [
+        str(t.get('hash') or '') for t in downloading
+        if str(t.get('hash') or '') and _has_smart_queue_label(str(t.get('label') or ''))
+    ]
     min_speed = int(settings.get('min_speed_bytes') or 0)
     min_seeds = int(settings.get('min_seeds') or 0)
+    min_peers = int(settings.get('min_peers') or 0)
+    ignore_seed_peer = bool(int(settings.get('ignore_seed_peer') or 0))
+    ignore_speed = bool(int(settings.get('ignore_speed') or 0))
     stalled_seconds = int(settings.get('stalled_seconds') or 300)
+    timer_key = _stalled_timer_key(min_speed, min_seeds, min_peers, stalled_seconds, ignore_seed_peer, ignore_speed)
     now = utcnow()
     now_ts = datetime.now(timezone.utc).timestamp()
     stalled: list[dict[str, Any]] = []
+    stop_eligible: list[dict[str, Any]] = []
+    # Note: Toast diagnostics count active torrents whose ignored criteria would otherwise match during this check.
+    ignored_seed_peer_count = 0
+    ignored_speed_count = 0
+
     with connect() as conn:
         for t in downloading:
-            is_stalled = int(t.get('down_rate') or 0) <= min_speed and int(t.get('seeds') or 0) <= min_seeds
+            # Note: Stalled detection respects seed/peer and speed ignore switches before starting the timer.
+            if ignore_seed_peer and (int(t.get('seeds') or 0) <= max(0, int(min_seeds or 0)) or (min_peers > 0 and int(t.get('peers') or 0) <= max(0, int(min_peers or 0)))):
+                ignored_seed_peer_count += 1
+            if ignore_speed and int(t.get('down_rate') or 0) <= max(0, int(min_speed or 0)):
+                ignored_speed_count += 1
+            is_stalled = _is_stalled_download(t, min_speed, min_seeds, min_peers, ignore_seed_peer, ignore_speed)
+            # Note: Hard-limit enforcement respects the same ignore switches before choosing weak items.
+            if _is_low_activity_download(t, min_speed, min_seeds, min_peers, ignore_seed_peer, ignore_speed):
+                stop_eligible.append(t)
             h = t.get('hash')
             if not h:
                 continue
             if is_stalled:
-                row = conn.execute('SELECT first_stalled_at FROM smart_queue_stalled WHERE profile_id=? AND torrent_hash=?', (profile_id, h)).fetchone()
-                if row:
+                row = conn.execute('SELECT first_stalled_at, timer_key FROM smart_queue_stalled WHERE profile_id=? AND torrent_hash=?', (profile_id, h)).fetchone()
+                if row and str(row.get('timer_key') or '') == timer_key:
                     conn.execute('UPDATE smart_queue_stalled SET updated_at=? WHERE profile_id=? AND torrent_hash=?', (now, profile_id, h))
                     first = row['first_stalled_at']
                 else:
+                    # Note: A changed stalled rule starts a fresh timer, so old rows cannot instantly mark torrents as Stalled.
                     first = now
-                    conn.execute('INSERT OR REPLACE INTO smart_queue_stalled(profile_id,torrent_hash,first_stalled_at,updated_at) VALUES(?,?,?,?)', (profile_id, h, first, now))
+                    conn.execute('INSERT OR REPLACE INTO smart_queue_stalled(profile_id,torrent_hash,first_stalled_at,updated_at,timer_key) VALUES(?,?,?,?,?)', (profile_id, h, first, now, timer_key))
                 if now_ts - _ts(first) >= stalled_seconds:
                     stalled.append(t)
             else:
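The `_stalled_timer_key` helper referenced above is not shown in this hunk. A minimal sketch of how such a settings-derived key could work (the name and hashing scheme here are assumptions, not the repository's actual implementation) illustrates why a changed stalled rule resets the stored timers:

```python
import hashlib


def stalled_timer_key(min_speed: int, min_seeds: int, min_peers: int,
                      stalled_seconds: int, ignore_seed_peer: bool, ignore_speed: bool) -> str:
    # Any change to the stalled rule yields a different key, so rows stored
    # under the old key no longer match and first_stalled_at restarts.
    raw = f"{min_speed}:{min_seeds}:{min_peers}:{stalled_seconds}:{int(ignore_seed_peer)}:{int(ignore_speed)}"
    return hashlib.sha1(raw.encode("utf-8")).hexdigest()
```

The same inputs always produce the same key, so unchanged settings keep existing timers running across checks.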
@@ -261,52 +598,106 @@ def check(profile: dict | None = None, user_id: int | None = None, force: bool =
     max_active = max(1, int(settings.get('max_active_downloads') or 5))
     stalled_hashes = {str(t.get('hash') or '') for t in stalled}
 
-    # Enforce the hard active-download cap first. The previous logic only limited
-    # newly resumed torrents, so already-active downloads could stay above the limit.
-    pause_rank = sorted(
+    # Enforce the hard active-download cap across the whole started queue, including manual starts.
+    # Note: Weak/no-source torrents are stopped first, but the cap is still enforced when the overflow is larger.
+    over_limit = max(0, len(downloading) - max_active)
+    stop_eligible_hashes = {str(t.get('hash') or '') for t in stop_eligible}
+    stop_rank = sorted(
         downloading,
         key=lambda t: (
             0 if str(t.get('hash') or '') in stalled_hashes else 1,
+            0 if str(t.get('hash') or '') in stop_eligible_hashes else 1,
             int(t.get('down_rate') or 0),
             int(t.get('seeds') or 0),
             int(t.get('peers') or 0),
         ),
     )
-    to_pause: list[dict[str, Any]] = pause_rank[:max(0, len(downloading) - max_active)]
-    pause_hashes = {str(t.get('hash') or '') for t in to_pause}
+    to_stop: list[dict[str, Any]] = stop_rank[:over_limit]
+    stop_hashes = {str(t.get('hash') or '') for t in to_stop}
 
-    # When the cap is not exceeded, stalled downloads can still be rotated out
-    # one-for-one with better stopped candidates while staying within max_active.
-    if candidates:
-        replaceable_stalled = [t for t in stalled if str(t.get('hash') or '') not in pause_hashes]
-        for t in replaceable_stalled[:max(0, len(candidates) - len(to_pause))]:
-            to_pause.append(t)
-            pause_hashes.add(str(t.get('hash') or ''))
-
-    active_after_pause = max(0, len(downloading) - len(to_pause))
-    available_slots = max(0, max_active - active_after_pause)
-    to_resume = candidates[:available_slots]
+    # Note: Confirmed stalled downloads are removed from the active queue immediately, then new candidates can fill those slots.
+    for t in stalled:
+        h = str(t.get('hash') or '')
+        if h and h not in stop_hashes:
+            to_stop.append(t)
+            stop_hashes.add(h)
 
     c = rtorrent.client_for(profile)
-    paused: list[str] = []
-    resumed: list[str] = []
+    rtorrent_cap = _ensure_rtorrent_download_cap(c, max_active)
+    stopped_by_queue: list[str] = []
+    started_by_queue: list[str] = []
     label_failed: list[str] = []
-    for t in to_pause:
+    stalled_labeled: list[str] = []
+    stop_failed: list[dict[str, str]] = []
+    start_failed: list[dict[str, str]] = []
+    start_no_effect: list[dict[str, Any]] = []
+    start_requested: list[str] = []
+    start_results: list[dict[str, Any]] = []
+
+    for t in to_stop:
+        h = str(t.get('hash') or '')
         try:
-            c.call('d.pause', t['hash'])
-            if not _mark_auto_paused(c, profile_id, t):
-                label_failed.append(t['hash'])
-            paused.append(t['hash'])
-        except Exception:
-            pass
-    for t in to_resume:
+            # Note: Smart Queue stops with the same low-level d.stop command used by the manual Stop action.
+            # This avoids extra pre-check RPCs and keeps large queues from failing after only a few items.
+            c.call('d.stop', h)
+            if h in stalled_hashes:
+                if _ensure_stalled_label(c, h, _read_label(c, h, str(t.get('label') or ''))):
+                    stalled_labeled.append(h)
+                else:
+                    label_failed.append(h)
+            elif not _mark_auto_stopped(c, profile_id, t):
+                label_failed.append(h)
+            stopped_by_queue.append(h)
+        except Exception as exc:
+            # Note: Stop failures are stored in history instead of being swallowed, so queue drift is visible.
+            stop_failed.append({'hash': h, 'error': str(exc)})
+
+    active_after_stop = max(0, len(downloading) - len(stopped_by_queue))
+    # Note: Starts are planned only after confirmed stops, so failed stops cannot push the queue above the cap.
+    available_slots = max(0, max_active - active_after_stop)
+    to_start = candidates[:available_slots]
+    # Note: Items outside the current start batch are explicitly marked as pending Smart Queue items.
+    to_label_waiting = candidates[available_slots:]
+
+    for t in to_label_waiting:
+        h = str(t.get('hash') or '')
+        if not h or h in stop_hashes:
+            continue
         try:
-            _restore_auto_label(c, profile_id, t['hash'], str(t.get('label') or ''))
-            c.call('d.resume', t['hash'])
-            c.call('d.start', t['hash'])
-            resumed.append(t['hash'])
+            if not _mark_auto_stopped(c, profile_id, t):
+                label_failed.append(h)
         except Exception:
-            pass
-    restored = _cleanup_auto_labels(c, profile_id, torrents, set(paused))
-    add_history(profile_id, 'force_check' if force else 'auto_check', paused, resumed, len(torrents), {'excluded': len(excluded), 'enabled': bool(settings.get('enabled')), 'auto_label': SMART_QUEUE_LABEL, 'labels_restored': restored, 'labels_failed': label_failed, 'max_active_downloads': max_active, 'active_before': len(downloading), 'active_after': active_after_pause + len(resumed)}, user_id)
-    return {'ok': True, 'enabled': bool(settings.get('enabled')), 'paused': paused, 'resumed': resumed, 'labels_restored': restored, 'labels_failed': label_failed, 'checked': len(torrents), 'excluded': len(excluded), 'settings': settings}
+            label_failed.append(h)
+
+    # Note: Start the whole candidate batch in one round. Remove the label after an accepted RPC,
+    # because rTorrent may keep some items in its own queue with active=0 despite a valid d.start/d.resume.
+    for t in to_start:
+        h = str(t.get('hash') or '')
+        if not h:
+            continue
+        try:
+            result = _start_download(c, t)
+            start_results.append(result)
+            start_requested.append(h)
+        except Exception as exc:
+            start_failed.append({'hash': h, 'error': str(exc)})
+
+    active_verified, start_no_effect = _verify_started_downloads(c, start_requested)
+    for h in active_verified:
+        _restore_auto_label(c, profile_id, h, None)
+        try:
+            # Note: Once Smart Queue starts a post-check torrent, its temporary download-after-check label is no longer needed.
+            rtorrent.clear_post_check_download_label(c, h, None)
+        except Exception:
+            label_failed.append(h)
+    # Note: History shows only torrents actually started, not just the number of sent commands.
+    started_by_queue = list(active_verified)
+    keep_labels = (
+        set(stopped_by_queue)
+        | {str(t.get('hash') or '') for t in to_label_waiting}
+        | {str(t.get('hash') or '') for t in stopped if _has_smart_queue_label(str(t.get('label') or '')) and str(t.get('hash') or '') not in set(started_by_queue)}
+    )
+    restored = _cleanup_auto_labels(c, profile_id, torrents, keep_labels, manage_stopped)
+    details = {'excluded': len(user_excluded), 'excluded_stalled': len(stalled_label_hashes), 'manual_labeled_running': len(manual_labeled_running), 'manual_labeled_running_hashes': manual_labeled_running[:100], 'enabled': bool(settings.get('enabled')), 'auto_label': SMART_QUEUE_LABEL, 'stalled_label': SMART_QUEUE_STALLED_LABEL, 'stalled_labeled': stalled_labeled, 'labels_restored': restored, 'labels_failed': label_failed, 'stop_failed': stop_failed, 'start_failed': start_failed, 'start_no_effect': start_no_effect, 'start_results': start_results, 'start_requested': start_requested, 'active_verified': active_verified, 'waiting_labeled': len(to_label_waiting), 'manage_stopped': True, 'max_active_downloads': max_active, 'active_before': len(downloading), 'active_after_stop': active_after_stop, 'active_after_expected': active_after_stop + len(started_by_queue), 'over_limit': over_limit, 'stop_eligible': len(stop_eligible), 'ignore_seed_peer': ignore_seed_peer, 'ignore_speed': ignore_speed, 'ignored_seed_peer_count': ignored_seed_peer_count if ignore_seed_peer else 0, 'ignored_speed_count': ignored_speed_count if ignore_speed else 0, 'stalled_seconds': stalled_seconds, 'stalled_timer_key': timer_key, 'healthy_active_protected': 0, 'stopped_planned': len(to_stop), 'started_planned': len(to_start), 'paused_planned': len(to_stop), 'resumed_planned': len(to_start), 'rtorrent_cap': rtorrent_cap}
+    add_history(profile_id, 'force_check' if force else 'auto_check', stopped_by_queue, started_by_queue, len(torrents), {**details, 'stopped': stopped_by_queue, 'started': started_by_queue}, user_id)
+    return {'ok': True, 'enabled': bool(settings.get('enabled')), 'paused': stopped_by_queue, 'resumed': started_by_queue, 'stopped': stopped_by_queue, 'started': started_by_queue, 'start_requested': start_requested, 'waiting_labeled': len(to_label_waiting), 'stalled_labeled': stalled_labeled, 'excluded_stalled': len(stalled_label_hashes), 'manual_labeled_running': len(manual_labeled_running), 'labels_restored': restored, 'labels_failed': label_failed, 'stop_failed': stop_failed, 'start_failed': start_failed, 'start_no_effect': start_no_effect, 'active_verified': active_verified, 'active_before': len(downloading), 'active_after_stop': active_after_stop, 'over_limit': over_limit, 'stop_eligible': len(stop_eligible), 'ignore_seed_peer': ignore_seed_peer, 'ignore_speed': ignore_speed, 'ignored_seed_peer_count': ignored_seed_peer_count if ignore_seed_peer else 0, 'ignored_speed_count': ignored_speed_count if ignore_speed else 0, 'stalled_seconds': stalled_seconds, 'stalled_timer_key': timer_key, 'healthy_active_protected': 0, 'rtorrent_cap': rtorrent_cap, 'checked': len(torrents), 'excluded': len(user_excluded), 'settings': settings}
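The stop-before-start ordering above reduces to a small planning step; this sketch (hypothetical names, not the repository's API) shows why failed stops can only shrink the start batch, never push the queue above the cap:

```python
from typing import Any


def plan_starts(downloading: list[dict[str, Any]], candidates: list[dict[str, Any]],
                max_active: int, stopped_ok: list[str]) -> tuple[list[dict[str, Any]], list[dict[str, Any]]]:
    # Slots are computed from *confirmed* stops only, mirroring active_after_stop above:
    # if a d.stop call failed, its torrent still counts as active and consumes a slot.
    active_after_stop = max(0, len(downloading) - len(stopped_ok))
    available_slots = max(0, max_active - active_after_stop)
    return candidates[:available_slots], candidates[available_slots:]
```

With five active downloads, a cap of four, and two confirmed stops, exactly one candidate starts and the rest wait; with zero confirmed stops, nothing starts.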
 pytorrent/services/speed_peaks.py (new file, +159 lines)
@@ -0,0 +1,159 @@
from __future__ import annotations

import threading
from typing import Any

from ..db import connect, utcnow
from .rtorrent import human_rate

_SESSION_STARTED_AT = utcnow()
_CACHE: dict[int, dict[str, Any]] = {}
_LOADED = False
_LOCK = threading.Lock()


def _empty_peak(profile_id: int, all_time: dict[str, Any] | None = None) -> dict[str, Any]:
    # Note: a single in-memory structure holds the current session and the all-time record for an rTorrent profile.
    all_time = all_time or {}
    return {
        "profile_id": int(profile_id),
        "session_started_at": _SESSION_STARTED_AT,
        "session_down_peak": 0,
        "session_up_peak": 0,
        "session_down_peak_at": None,
        "session_up_peak_at": None,
        "all_time_down_peak": int(all_time.get("all_time_down_peak") or 0),
        "all_time_up_peak": int(all_time.get("all_time_up_peak") or 0),
        "all_time_down_peak_at": all_time.get("all_time_down_peak_at"),
        "all_time_up_peak_at": all_time.get("all_time_up_peak_at"),
    }


def load_cache() -> None:
    # Note: all-time records are loaded at application start; the session record starts from zero.
    global _LOADED
    with _LOCK:
        if _LOADED:
            return
        with connect() as conn:
            rows = conn.execute("SELECT * FROM transfer_speed_peaks").fetchall()
        for row in rows:
            profile_id = int(row.get("profile_id") or 0)
            if profile_id:
                _CACHE[profile_id] = _empty_peak(profile_id, row)
        _LOADED = True


def _ensure_profile(profile_id: int) -> dict[str, Any]:
    # Note: lazy loading protects new profiles added after startup from getting empty records.
    profile_id = int(profile_id)
    item = _CACHE.get(profile_id)
    if item:
        return item
    with connect() as conn:
        row = conn.execute("SELECT * FROM transfer_speed_peaks WHERE profile_id=?", (profile_id,)).fetchone()
    item = _empty_peak(profile_id, row)
    _CACHE[profile_id] = item
    return item


def _persist(item: dict[str, Any]) -> None:
    # Note: SQLite is written only when a new session or all-time record has been set.
    now = utcnow()
    with connect() as conn:
        conn.execute(
            """
            INSERT INTO transfer_speed_peaks(
                profile_id, session_started_at, session_down_peak, session_up_peak,
                session_down_peak_at, session_up_peak_at, all_time_down_peak,
                all_time_up_peak, all_time_down_peak_at, all_time_up_peak_at,
                created_at, updated_at
            ) VALUES(?,?,?,?,?,?,?,?,?,?,?,?)
            ON CONFLICT(profile_id) DO UPDATE SET
                session_started_at=excluded.session_started_at,
                session_down_peak=excluded.session_down_peak,
                session_up_peak=excluded.session_up_peak,
                session_down_peak_at=excluded.session_down_peak_at,
                session_up_peak_at=excluded.session_up_peak_at,
                all_time_down_peak=excluded.all_time_down_peak,
                all_time_up_peak=excluded.all_time_up_peak,
                all_time_down_peak_at=excluded.all_time_down_peak_at,
                all_time_up_peak_at=excluded.all_time_up_peak_at,
                updated_at=excluded.updated_at
            """,
            (
                int(item["profile_id"]),
                item["session_started_at"],
                int(item["session_down_peak"]),
                int(item["session_up_peak"]),
                item.get("session_down_peak_at"),
                item.get("session_up_peak_at"),
                int(item["all_time_down_peak"]),
                int(item["all_time_up_peak"]),
                item.get("all_time_down_peak_at"),
                item.get("all_time_up_peak_at"),
                now,
                now,
            ),
        )


def _public(item: dict[str, Any]) -> dict[str, Any]:
    # Note: the frontend receives both bytes/s and ready-made labels matching the existing speed fields.
    return {
        "session_started_at": item["session_started_at"],
        "session": {
            "down": int(item["session_down_peak"]),
            "up": int(item["session_up_peak"]),
            "down_h": human_rate(int(item["session_down_peak"])),
            "up_h": human_rate(int(item["session_up_peak"])),
            "down_at": item.get("session_down_peak_at"),
            "up_at": item.get("session_up_peak_at"),
        },
        "all_time": {
            "down": int(item["all_time_down_peak"]),
            "up": int(item["all_time_up_peak"]),
            "down_h": human_rate(int(item["all_time_down_peak"])),
            "up_h": human_rate(int(item["all_time_up_peak"])),
            "down_at": item.get("all_time_down_peak_at"),
            "up_at": item.get("all_time_up_peak_at"),
        },
    }


def record(profile_id: int, down_rate: int = 0, up_rate: int = 0) -> dict[str, Any]:
    # Note: the poller calls this in the background; the database is updated only when a record is broken.
    load_cache()
    down_rate = max(0, int(down_rate or 0))
    up_rate = max(0, int(up_rate or 0))
    measured_at = utcnow()
    changed = False
    with _LOCK:
        item = _ensure_profile(int(profile_id))
        if down_rate > int(item["session_down_peak"]):
            item["session_down_peak"] = down_rate
            item["session_down_peak_at"] = measured_at
            changed = True
        if up_rate > int(item["session_up_peak"]):
            item["session_up_peak"] = up_rate
            item["session_up_peak_at"] = measured_at
            changed = True
        if down_rate > int(item["all_time_down_peak"]):
            item["all_time_down_peak"] = down_rate
            item["all_time_down_peak_at"] = measured_at
            changed = True
        if up_rate > int(item["all_time_up_peak"]):
            item["all_time_up_peak"] = up_rate
            item["all_time_up_peak_at"] = measured_at
            changed = True
        result = _public(item)
        if changed:
            _persist(item)
        return result


def current(profile_id: int) -> dict[str, Any]:
    # Note: the REST API can show the last known record without forcing a new measurement.
    load_cache()
    with _LOCK:
        return _public(_ensure_profile(int(profile_id)))
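The heart of `record()` above is a monotonic max with a timestamp; a stripped-down sketch of that update rule (illustrative only, not an import from the module):

```python
from typing import Any


def update_peak(item: dict[str, Any], key: str, rate: int, measured_at: str) -> bool:
    # Returns True only when the stored peak was actually beaten, which is
    # the condition speed_peaks.py uses to decide whether to touch SQLite.
    if rate > int(item.get(key) or 0):
        item[key] = rate
        item[key + "_at"] = measured_at
        return True
    return False
```

Lower or equal samples leave both the value and its timestamp untouched, so the persisted row only ever moves forward.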
@@ -26,9 +26,11 @@ class TorrentCache:
         profile_id = int(profile["id"])
         try:
             rows = rtorrent.list_torrents(profile)
+            with self._lock:
+                old = dict(self._data.get(profile_id, {}))
+            post_check_changes = rtorrent.apply_post_check_policy(profile, rows, old)
             fresh = {t["hash"]: t for t in rows}
             with self._lock:
-                old = self._data.get(profile_id, {})
                 added = [v for h, v in fresh.items() if h not in old]
                 removed = [h for h in old.keys() if h not in fresh]
                 updated = []
@@ -45,7 +47,7 @@ class TorrentCache:
                 self._data[profile_id] = fresh
                 self._errors[profile_id] = ""
                 self._updated_at[profile_id] = time()
-            return {"ok": True, "profile_id": profile_id, "added": added, "updated": updated, "removed": removed}
+            return {"ok": True, "profile_id": profile_id, "added": added, "updated": updated, "removed": removed, "post_check_changes": post_check_changes}
         except Exception as exc:
             with self._lock:
                 self._errors[profile_id] = str(exc)
 pytorrent/services/torrent_stats.py (new file, +209 lines)
@@ -0,0 +1,209 @@
from __future__ import annotations

import json
import threading
import time
from typing import Any

from ..db import connect, utcnow
from . import rtorrent
from .torrent_cache import torrent_cache

CACHE_SECONDS = 15 * 60
_STARTUP_DELAY_SECONDS = 3 * 60
_STARTED_AT = time.monotonic()
_LOCK = threading.Lock()
_BACKGROUND_LOCK = threading.Lock()
_BACKGROUND_PROFILE_IDS: set[int] = set()


def _human_size(value: int | float) -> str:
    size = float(value or 0)
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB"):
        if abs(size) < 1024 or unit == "PiB":
            return f"{size:.1f} {unit}" if unit != "B" else f"{int(size)} B"
        size /= 1024
    return f"{size:.1f} PiB"


def _empty(profile_id: int, error: str = "") -> dict[str, Any]:
    now = utcnow()
    return {
        "profile_id": profile_id,
        "torrent_count": 0,
        "complete_count": 0,
        "incomplete_count": 0,
        "total_torrent_size": 0,
        "total_torrent_size_h": _human_size(0),
        "total_file_size": 0,
        "total_file_size_h": _human_size(0),
        "file_count": 0,
        "seeds_total": 0,
        "peers_total": 0,
        "down_rate_total": 0,
        "up_rate_total": 0,
        "down_rate_total_h": "0 B/s",
        "up_rate_total_h": "0 B/s",
        "sampled_torrents": 0,
        "errors": [],
        "error": error,
        "created_at": now,
        "updated_at": now,
        "age_seconds": 0,
        "stale": True,
    }


def _load_cached(profile_id: int) -> dict[str, Any] | None:
    with connect() as conn:
        row = conn.execute("SELECT * FROM torrent_stats_cache WHERE profile_id=?", (profile_id,)).fetchone()
    if not row:
        return None
    payload = json.loads(row.get("payload_json") or "{}")
    payload["created_at"] = row.get("created_at")
    payload["updated_at"] = row.get("updated_at")
    try:
        payload["age_seconds"] = max(0, int(time.time() - float(row.get("updated_epoch") or 0)))
    except Exception:
        payload["age_seconds"] = 0
    payload["stale"] = payload["age_seconds"] >= CACHE_SECONDS
    return payload


def _save(profile_id: int, payload: dict[str, Any]) -> dict[str, Any]:
    now = utcnow()
    payload = dict(payload)
    payload["updated_at"] = now
    payload["age_seconds"] = 0
    payload["stale"] = False
    with connect() as conn:
        conn.execute(
            """
            INSERT INTO torrent_stats_cache(profile_id,payload_json,created_at,updated_at,updated_epoch)
            VALUES(?,?,?,?,?)
            ON CONFLICT(profile_id) DO UPDATE SET
                payload_json=excluded.payload_json,
                updated_at=excluded.updated_at,
                updated_epoch=excluded.updated_epoch
            """,
            (profile_id, json.dumps(payload), now, now, time.time()),
        )
    return payload


def collect(profile: dict) -> dict[str, Any]:
    """Collect heavier torrent/file statistics on demand or every cache window."""
    profile_id = int(profile.get("id") or 0)
    torrents = rtorrent.list_torrents(profile)
    total_torrent_size = sum(int(t.get("size") or 0) for t in torrents)
    seeds_total = sum(int(t.get("seeds") or 0) for t in torrents)
    peers_total = sum(int(t.get("peers") or 0) for t in torrents)
    down_rate_total = sum(int(t.get("down_rate") or 0) for t in torrents)
    up_rate_total = sum(int(t.get("up_rate") or 0) for t in torrents)
    total_file_size = 0
    file_count = 0
    errors: list[dict[str, str]] = []

    # Note: File metadata is queried per torrent only during cached statistics refresh, not during every UI poll.
    for torrent in torrents:
        h = str(torrent.get("hash") or "")
        if not h:
            continue
        try:
            files = rtorrent.torrent_files(profile, h)
            file_count += len(files)
            total_file_size += sum(int(f.get("size") or 0) for f in files)
        except Exception as exc:
            errors.append({"hash": h, "name": str(torrent.get("name") or ""), "error": str(exc)})

    torrent_cache.refresh(profile)
    payload = {
        "profile_id": profile_id,
        "torrent_count": len(torrents),
        "complete_count": sum(1 for t in torrents if int(t.get("complete") or 0)),
        "incomplete_count": sum(1 for t in torrents if not int(t.get("complete") or 0)),
        "total_torrent_size": total_torrent_size,
        "total_torrent_size_h": _human_size(total_torrent_size),
        "total_file_size": total_file_size,
        "total_file_size_h": _human_size(total_file_size),
        "file_count": file_count,
        "seeds_total": seeds_total,
        "peers_total": peers_total,
        "down_rate_total": down_rate_total,
        "up_rate_total": up_rate_total,
        "down_rate_total_h": rtorrent.human_rate(down_rate_total),
        "up_rate_total_h": rtorrent.human_rate(up_rate_total),
        "sampled_torrents": len(torrents),
        "errors": errors[:25],
        "error": "" if not errors else f"File metadata failed for {len(errors)} torrent(s)",
        "created_at": utcnow(),
    }
    return _save(profile_id, payload)


def get(profile: dict | None, force: bool = False) -> dict[str, Any]:
    if not profile:
        return _empty(0, "No active rTorrent profile")
    profile_id = int(profile.get("id") or 0)
    cached = _load_cached(profile_id)
    if cached and not force and not cached.get("stale"):
        return cached
    if cached and not force:
        return cached
    with _LOCK:
        cached = _load_cached(profile_id)
        if cached and not force and not cached.get("stale"):
            return cached
        return collect(profile)


def maybe_refresh(profile: dict | None, force: bool = False) -> dict[str, Any] | None:
    if not profile:
        return None
    if not force and time.monotonic() - _STARTED_AT < _STARTUP_DELAY_SECONDS:
        return None
    cached = _load_cached(int(profile.get("id") or 0))
    if cached and not cached.get("stale") and not force:
        return cached
    try:
        return get(profile, force=True)
    except Exception:
        return cached


def queue_refresh(socketio, profile: dict | None, force: bool = False, emit_update: bool = True, room: str | None = None) -> dict[str, Any] | None:
    """Schedule heavier statistics refresh outside the main WebSocket/system poller."""
    if not profile:
        return None
    if not force and time.monotonic() - _STARTED_AT < _STARTUP_DELAY_SECONDS:
        return _load_cached(int(profile.get("id") or 0))

    profile_id = int(profile.get("id") or 0)
    cached = _load_cached(profile_id)
    if cached and not cached.get("stale") and not force:
        return cached

    with _BACKGROUND_LOCK:
        if profile_id in _BACKGROUND_PROFILE_IDS:
            return cached
        _BACKGROUND_PROFILE_IDS.add(profile_id)

    profile_snapshot = dict(profile)

    def runner():
        try:
            # Note: This can query file metadata per torrent, so it never runs inside the fast CPU/RAM/disk poller.
            stats = get(profile_snapshot, force=True)
            if emit_update and stats:
                payload = {"profile_id": profile_id, "stats": stats}
                socketio.emit("torrent_stats_update", payload, to=room) if room else socketio.emit("torrent_stats_update", payload)
        except Exception as exc:
            if emit_update:
                payload = {"profile_id": profile_id, "ok": False, "error": str(exc)}
                socketio.emit("torrent_stats_update", payload, to=room) if room else socketio.emit("torrent_stats_update", payload)
        finally:
            with _BACKGROUND_LOCK:
                _BACKGROUND_PROFILE_IDS.discard(profile_id)

    socketio.start_background_task(runner)
    return cached
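The 15-minute cache window above is a plain age comparison; a minimal sketch of the staleness rule used by `_load_cached` (standalone, not the module's function):

```python
CACHE_SECONDS = 15 * 60


def is_stale(updated_epoch: float, now: float) -> bool:
    # Mirrors _load_cached: the age is clamped to 0, so a row whose stored
    # epoch is ahead of the current clock is never reported as stale.
    age = max(0, int(now - float(updated_epoch or 0)))
    return age >= CACHE_SECONDS
```

A row refreshed 14 minutes ago is still fresh; at exactly 15 minutes it becomes stale and the next `get()` recollects.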
@@ -36,22 +36,28 @@ def _has_error(row: dict) -> bool:
     return bool(message and any(pattern in message for pattern in _ERROR_PATTERNS))
 
 
+def _is_checking(row: dict) -> bool:
+    return str(row.get("status") or "") == "Checking" or _number(row, "hashing") > 0
+
+
 def _matches(row: dict, summary_type: str) -> bool:
     status = str(row.get("status") or "")
+    checking = _is_checking(row)
     if summary_type == "all":
         return True
     if summary_type == "downloading":
-        return not bool(row.get("complete")) and bool(row.get("state")) and not bool(row.get("paused"))
+        return not checking and not bool(row.get("complete")) and bool(row.get("state")) and not bool(row.get("paused"))
     if summary_type == "seeding":
-        return status != "Checking" and bool(row.get("complete")) and bool(row.get("state")) and not bool(row.get("paused"))
+        return not checking and bool(row.get("complete")) and bool(row.get("state")) and not bool(row.get("paused"))
     if summary_type == "paused":
-        return bool(row.get("paused")) or status == "Paused"
+        return not checking and (bool(row.get("paused")) or status == "Paused")
     if summary_type == "checking":
-        return status == "Checking" or _number(row, "hashing") > 0
+        return checking
     if summary_type == "error":
         return _has_error(row)
     if summary_type == "stopped":
-        return not bool(row.get("state"))
+        # Note: Stopped count follows the UI filter exactly, so torrents being hash-checked do not inflate an empty Stopped list.
+        return not checking and not bool(row.get("state"))
     return False
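The net effect of the `_is_checking` refactor above is that a hash-checking torrent matches only the `checking` bucket. A condensed classification sketch (simplified: no `_number`/`_has_error` helpers, `hashing` read directly) shows that precedence:

```python
def summary_bucket(row: dict) -> str:
    # Checking wins over every other state, matching the updated _matches logic.
    if str(row.get("status") or "") == "Checking" or int(row.get("hashing") or 0) > 0:
        return "checking"
    if not row.get("state"):
        return "stopped"
    if row.get("paused"):
        return "paused"
    return "seeding" if row.get("complete") else "downloading"
```

A stopped torrent that is mid hash-check now lands in `checking`, not `stopped`, which is exactly the mismatch the hunk fixes.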
440
pytorrent/services/tracker_cache.py
Normal file
440
pytorrent/services/tracker_cache.py
Normal file
@@ -0,0 +1,440 @@
from __future__ import annotations

import json
import mimetypes
import re
import time
import threading
import ssl
import urllib.error
import urllib.parse
import urllib.request
from html.parser import HTMLParser
from pathlib import Path

from ..config import BASE_DIR
from ..db import connect, utcnow

TRACKER_CACHE_TTL_SECONDS = 7 * 24 * 60 * 60
FAVICON_CACHE_TTL_SECONDS = 7 * 24 * 60 * 60
TRACKER_SCAN_LIMIT = 80
FAVICON_DIR = BASE_DIR / "data" / "tracker_favicons"
PUBLIC_FAVICON_BASE = "/static/tracker_favicons"
_TRACKER_SCAN_LOCKS: dict[int, threading.Lock] = {}
_TRACKER_SCAN_LOCKS_GUARD = threading.Lock()


class _IconParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.icons: list[str] = []

    def handle_starttag(self, tag: str, attrs):
        if tag.lower() != "link":
            return
        data = {str(k).lower(): str(v or "") for k, v in attrs}
        rel = re.sub(r"\s+", " ", data.get("rel", "").lower()).strip()
        href = data.get("href", "").strip()
        if href and "icon" in rel:
            self.icons.append(href)


def _now_epoch() -> float:
    return time.time()


def tracker_domain(url: str) -> str:
    raw = str(url or "").strip()
    if not raw:
        return ""
    parsed = urllib.parse.urlparse(raw if "://" in raw else f"http://{raw}")
    host = (parsed.hostname or "").lower().strip(".")
    if host.startswith("www."):
        host = host[4:]
    return host


def _root_domain(domain: str) -> str:
    parts = [p for p in str(domain or "").lower().strip(".").split(".") if p]
    if len(parts) <= 2:
        return ".".join(parts)
    # Note: Tracker favicon discovery needs the real main site first; for t.pte.nu that is pte.nu, not t.pte.nu.
    known_second_level_suffixes = {"co", "com", "net", "org", "gov", "edu", "ac"}
    if len(parts[-1]) == 2 and parts[-2] in known_second_level_suffixes and len(parts) >= 3:
        return ".".join(parts[-3:])
    return ".".join(parts[-2:])


def _safe_filename(domain: str) -> str:
    return re.sub(r"[^a-z0-9_.-]+", "_", domain.lower()).strip("._") or "tracker"

def _read_cached(profile_id: int, hashes: list[str], ttl: int) -> tuple[dict[str, list[dict]], set[str]]:
    if not hashes:
        return {}, set()
    now = _now_epoch()
    cached: dict[str, list[dict]] = {}
    fresh: set[str] = set()
    with connect() as conn:
        for start in range(0, len(hashes), 900):
            chunk = hashes[start:start + 900]
            placeholders = ",".join("?" for _ in chunk)
            rows = conn.execute(
                f"SELECT torrent_hash, trackers_json, updated_epoch FROM tracker_summary_cache WHERE profile_id=? AND torrent_hash IN ({placeholders})",
                (profile_id, *chunk),
            ).fetchall()
            for row in rows:
                h = str(row.get("torrent_hash") or "")
                try:
                    items = json.loads(row.get("trackers_json") or "[]")
                except Exception:
                    items = []
                cached[h] = items if isinstance(items, list) else []
                if now - float(row.get("updated_epoch") or 0) < ttl:
                    fresh.add(h)
    return cached, fresh


def _store(profile_id: int, torrent_hash: str, trackers: list[dict]) -> None:
    now = utcnow()
    epoch = _now_epoch()
    compact = []
    seen = set()
    for item in trackers:
        domain = tracker_domain(str(item.get("url") or item.get("domain") or "")) or str(item.get("domain") or "")
        if not domain or domain in seen:
            continue
        seen.add(domain)
        compact.append({"domain": domain, "url": str(item.get("url") or "")})
    with connect() as conn:
        conn.execute(
            """
            INSERT INTO tracker_summary_cache(profile_id, torrent_hash, trackers_json, updated_at, updated_epoch)
            VALUES(?, ?, ?, ?, ?)
            ON CONFLICT(profile_id, torrent_hash) DO UPDATE SET
                trackers_json=excluded.trackers_json,
                updated_at=excluded.updated_at,
                updated_epoch=excluded.updated_epoch
            """,
            (profile_id, torrent_hash, json.dumps(compact), now, epoch),
        )

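The 900-hash batching in `_read_cached` exists because SQLite caps the number of bound `?` parameters per statement (999 in older builds), so a large `IN (...)` list must be chunked. A runnable sketch of the same pattern against an illustrative schema (not the real pyTorrent tables):

```python
import sqlite3

# Chunked IN (...) lookups, as in _read_cached: 2500 keys split into batches
# of 900 placeholders each, well under SQLite's bound-parameter limit.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache (h TEXT PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO cache VALUES (?, ?)", [(f"h{i}", str(i)) for i in range(2500)])

hashes = [f"h{i}" for i in range(2500)]
found = {}
for start in range(0, len(hashes), 900):
    chunk = hashes[start:start + 900]
    placeholders = ",".join("?" for _ in chunk)
    for h, v in conn.execute(f"SELECT h, v FROM cache WHERE h IN ({placeholders})", chunk):
        found[h] = v

print(len(found))  # 2500
```

Only the placeholder count is interpolated into the SQL; every value still travels as a bound parameter, so the chunking does not reopen an injection path.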
def summary(profile: dict, hashes: list[str], loader, scan_limit: int = TRACKER_SCAN_LIMIT, include_favicons: bool = False) -> dict:
    """Build tracker sidebar data from disk cache and refresh a small batch per request."""
    # Note: Tracker data is cached per torrent hash, so huge rTorrent libraries are never scanned in one UI request.
    profile_id = int(profile.get("id") or 0)
    clean_hashes = [str(h or "").strip() for h in hashes if str(h or "").strip()]
    cached, fresh = _read_cached(profile_id, clean_hashes, TRACKER_CACHE_TTL_SECONDS)
    missing = [h for h in clean_hashes if h not in fresh]
    errors: list[dict] = []
    scanned_now = 0
    for h in missing[:max(0, int(scan_limit or 0))]:
        try:
            trackers = loader(h)
            _store(profile_id, h, trackers)
            cached[h] = [{"domain": tracker_domain(t.get("url") or t.get("domain") or ""), "url": str(t.get("url") or "")} for t in trackers]
            fresh.add(h)
            scanned_now += 1
        except Exception as exc:
            errors.append({"hash": h, "error": str(exc)})
    by_hash: dict[str, list[dict]] = {}
    counts: dict[str, dict] = {}
    for h in clean_hashes:
        items = []
        seen = set()
        for item in cached.get(h, []):
            domain = tracker_domain(str(item.get("url") or item.get("domain") or "")) or str(item.get("domain") or "")
            if not domain or domain in seen:
                continue
            seen.add(domain)
            row = {"domain": domain, "url": str(item.get("url") or "")}
            items.append(row)
            bucket = counts.setdefault(domain, {"domain": domain, "url": row["url"], "count": 0})
            bucket["count"] += 1
            if not bucket.get("url") and row["url"]:
                bucket["url"] = row["url"]
        by_hash[h] = items
    trackers = sorted(counts.values(), key=lambda x: (-int(x.get("count") or 0), str(x.get("domain") or "")))
    if include_favicons:
        # Note: Summary returns only already cached static favicon URLs; network favicon discovery stays outside the hot tracker count path.
        for item in trackers:
            item["favicon_url"] = favicon_public_url(str(item.get("domain") or ""), enabled=True, create=False)
    pending = max(0, len([h for h in clean_hashes if h not in fresh]))
    return {"hashes": by_hash, "trackers": trackers, "errors": errors[:25], "scanned": len(clean_hashes), "scanned_now": scanned_now, "pending": pending, "cached": len(clean_hashes) - pending}

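The core of `summary()` is the per-domain aggregation: each domain is counted once per torrent, and the sidebar is sorted by count descending, then by name. Isolated with made-up sample data:

```python
# The aggregation step from summary(), in isolation. Input mimics the cached
# per-hash tracker lists; output is the sorted sidebar bucket list.
cached = {
    "hashA": [{"domain": "pte.nu", "url": "https://t.pte.nu/ann"}],
    "hashB": [{"domain": "pte.nu", "url": ""}, {"domain": "example.org", "url": "http://example.org/ann"}],
}
counts: dict[str, dict] = {}
for h, items in cached.items():
    seen = set()  # count each domain at most once per torrent
    for item in items:
        domain = item["domain"]
        if domain in seen:
            continue
        seen.add(domain)
        bucket = counts.setdefault(domain, {"domain": domain, "url": item["url"], "count": 0})
        bucket["count"] += 1
        if not bucket.get("url") and item["url"]:
            bucket["url"] = item["url"]  # backfill a URL from a later torrent

trackers = sorted(counts.values(), key=lambda x: (-x["count"], x["domain"]))
print([(t["domain"], t["count"]) for t in trackers])  # [('pte.nu', 2), ('example.org', 1)]
```

The backfill branch matters for trackers whose first seen entry had an empty announce URL: a later torrent can still supply one without disturbing the count.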
def _scan_lock(profile_id: int) -> threading.Lock:
    with _TRACKER_SCAN_LOCKS_GUARD:
        if profile_id not in _TRACKER_SCAN_LOCKS:
            _TRACKER_SCAN_LOCKS[profile_id] = threading.Lock()
        return _TRACKER_SCAN_LOCKS[profile_id]


def warm_summary_cache(profile: dict, hashes: list[str], loader, batch_size: int = TRACKER_SCAN_LIMIT) -> bool:
    """Start a non-blocking tracker cache warmup for large libraries."""
    # Note: Tracker cache warming runs in one background thread per profile, so F5 returns cached data immediately instead of waiting for rTorrent scans.
    profile_id = int(profile.get("id") or 0)
    clean_hashes = [str(h or "").strip() for h in hashes if str(h or "").strip()]
    if not profile_id or not clean_hashes:
        return False
    lock = _scan_lock(profile_id)
    if lock.locked():
        return False

    def _worker():
        if not lock.acquire(blocking=False):
            return
        try:
            while True:
                result = summary(profile, clean_hashes, loader, scan_limit=max(1, int(batch_size or TRACKER_SCAN_LIMIT)), include_favicons=False)
                if int(result.get("pending") or 0) <= 0 or int(result.get("scanned_now") or 0) <= 0:
                    break
                time.sleep(0.05)
        finally:
            lock.release()

    threading.Thread(target=_worker, name=f"tracker-cache-warm-{profile_id}", daemon=True).start()
    return True

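The locking pattern in `warm_summary_cache` — one named lock per key, a cheap `lock.locked()` pre-check, and a non-blocking `acquire` inside the worker as the real guard — can be sketched generically (names here are illustrative, not the module's API):

```python
import threading
import time

# One lock per key; a second warmup request for the same key is rejected
# while the first worker still holds the lock.
_LOCKS: dict[int, threading.Lock] = {}
_GUARD = threading.Lock()


def _lock_for(key: int) -> threading.Lock:
    with _GUARD:
        return _LOCKS.setdefault(key, threading.Lock())


def warm(key: int, work) -> bool:
    lock = _lock_for(key)
    if lock.locked():
        return False  # a warmup for this key is already in flight

    def _worker():
        # Non-blocking acquire is the real guard; the locked() check above is
        # only a fast path and can race with a just-started worker.
        if not lock.acquire(blocking=False):
            return
        try:
            work()
        finally:
            lock.release()

    threading.Thread(target=_worker, daemon=True).start()
    return True


started = warm(1, lambda: time.sleep(0.3))
time.sleep(0.05)  # let the first worker take the lock
second = warm(1, lambda: None)
print(started, second)  # True False
```

As the comment notes, `locked()` is advisory: two callers racing through it may both start workers, but only one acquires the lock, so the work itself still runs once.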
def favicon_public_url(domain: str, enabled: bool = True, create: bool = False, force: bool = False) -> str:
    """Return the static URL for a cached tracker favicon, optionally creating or refreshing it first."""
    # Note: Favicon files stay in data/tracker_favicons, but the browser loads them via the static/tracker_favicons symlink.
    clean = tracker_domain(domain)
    if not enabled or not clean:
        return ""
    if create:
        favicon_path(clean, enabled=True, force=force)
    cached = _cached_favicon(clean)
    now = _now_epoch()
    if not cached or now - float(cached.get("updated_epoch") or 0) >= FAVICON_CACHE_TTL_SECONDS:
        return ""
    path = Path(str(cached.get("file_path") or ""))
    if not path.exists() or not path.is_file():
        return ""
    try:
        rel = path.resolve().relative_to(FAVICON_DIR.resolve())
    except Exception:
        rel = Path(path.name)
    return f"{PUBLIC_FAVICON_BASE}/{urllib.parse.quote(str(rel).replace(chr(92), '/'))}"

def _fetch(url: str, limit: int = 262144) -> tuple[bytes, str, str]:
    # Note: Favicon discovery uses browser-like headers and a certificate fallback, because tracker login pages/CDNs often reject minimal Python requests.
    req = urllib.request.Request(
        url,
        headers={
            "User-Agent": "Mozilla/5.0 (compatible; pyTorrent favicon fetcher)",
            "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/png,image/svg+xml,image/*,*/*;q=0.8",
            "Connection": "close",
        },
    )

    def _read(context=None):
        with urllib.request.urlopen(req, timeout=8, context=context) as resp:
            data = resp.read(limit + 1)
            if len(data) > limit:
                data = data[:limit]
            content_type = str(resp.headers.get("Content-Type") or "").split(";", 1)[0].strip().lower()
            final_url = str(resp.geturl() or url)
            return data, content_type, final_url

    try:
        return _read()
    except urllib.error.URLError as exc:
        reason = getattr(exc, "reason", None)
        if isinstance(reason, ssl.SSLError) or "CERTIFICATE_VERIFY_FAILED" in str(exc):
            return _read(ssl._create_unverified_context())
        raise

def _is_icon(data: bytes, content_type: str, url: str) -> bool:
    """Validate that downloaded bytes are a browser-readable image, not only an image-like HTTP header."""
    # Note: Some trackers serve a broken /favicon.ico with image/vnd.microsoft.icon; pyTorrent now validates bytes before caching it.
    if not data or len(data) < 16:
        return False
    head = data[:32]
    lower = data[:512].lstrip().lower()
    if head.startswith(b"\x00\x00\x01\x00") or head.startswith(b"\x00\x00\x02\x00"):
        try:
            count = int.from_bytes(data[4:6], "little")
        except Exception:
            count = 0
        return 0 < count <= 256 and len(data) >= 6 + (16 * count)
    if head.startswith(b"\x89PNG\r\n\x1a\n"):
        return True
    if head.startswith(b"\xff\xd8\xff"):
        return True
    if head.startswith((b"GIF87a", b"GIF89a")):
        return True
    if head.startswith(b"RIFF") and data[8:12] == b"WEBP":
        return True
    if lower.startswith(b"<svg") or b"<svg" in lower[:256]:
        return True
    ctype = content_type.lower()
    if ctype in {"image/svg+xml"}:
        return b"<svg" in lower[:512]
    return False

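The byte-sniffing above is what keeps an HTML login page served with an image content type out of the cache. A trimmed version, exercised against a minimal hand-built ICO, a PNG signature, and an HTML error page:

```python
# Trimmed version of the _is_icon checks above: an ICO directory must declare
# 1-256 images and the data must be long enough for its 16-byte directory
# entries, so a mislabeled HTML page is rejected regardless of Content-Type.
def is_icon(data: bytes) -> bool:
    if not data or len(data) < 16:
        return False
    head = data[:32]
    if head.startswith(b"\x00\x00\x01\x00"):
        count = int.from_bytes(data[4:6], "little")
        return 0 < count <= 256 and len(data) >= 6 + 16 * count
    return head.startswith(b"\x89PNG\r\n\x1a\n")


ico = b"\x00\x00\x01\x00\x01\x00" + b"\x00" * 16   # ICONDIR header + one 16-byte entry
png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 24          # PNG signature + padding
html = b"<html><body>login required</body></html>"
print(is_icon(ico), is_icon(png), is_icon(html))  # True True False
```

The full module also accepts JPEG, GIF, WEBP, and inline SVG; this sketch keeps only the two checks needed to show the structural-validation idea.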
def _attr_value(tag: str, name: str) -> str:
    # Note: Accept quoted and unquoted HTML attributes so favicon discovery works with compact/minified tracker pages.
    match = re.search(rf"\b{name}\s*=\s*(['\"])(.*?)\1", tag, re.I | re.S)
    if match:
        return match.group(2).strip()
    match = re.search(rf"\b{name}\s*=\s*([^\s>]+)", tag, re.I | re.S)
    return match.group(1).strip().strip("'\"") if match else ""


def _extract_icon_hrefs(html: str) -> list[str]:
    # Note: Read any <link rel=...icon... href=...> order, including shortcut icon and relative CDN paths.
    hrefs: list[str] = []
    parser = _IconParser()
    try:
        parser.feed(html)
        hrefs.extend(parser.icons)
    except Exception:
        pass
    for match in re.finditer(r"<link\b[^>]*>", html, re.I | re.S):
        tag = match.group(0)
        rel = _attr_value(tag, "rel").lower()
        href = _attr_value(tag, "href")
        if href and "icon" in rel:
            hrefs.append(href)
    clean = []
    seen = set()
    for href in hrefs:
        href = str(href or "").strip()
        if href and href not in seen:
            seen.add(href)
            clean.append(href)
    return clean

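The quoted-then-bare fallback in `_attr_value` is what makes the regex pass work on minified pages where attributes drop their quotes. Re-stated standalone:

```python
import re

# Same two-pattern approach as _attr_value above: try a quoted attribute
# first (backreference \1 matches the same quote character), then fall back
# to a bare token ending at whitespace or '>'.
def attr_value(tag: str, name: str) -> str:
    match = re.search(rf"\b{name}\s*=\s*(['\"])(.*?)\1", tag, re.I | re.S)
    if match:
        return match.group(2).strip()
    match = re.search(rf"\b{name}\s*=\s*([^\s>]+)", tag, re.I | re.S)
    return match.group(1).strip().strip("'\"") if match else ""


print(attr_value('<link rel="shortcut icon" href="/img/fav.png">', "href"))  # /img/fav.png
print(attr_value("<link rel=icon href=/favicon.ico>", "href"))               # /favicon.ico
```

This regex pass is deliberately a backstop behind the `HTMLParser`-based extraction: the parser handles well-formed markup, the regex catches tags the parser chokes on.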
def _tracker_icon_hosts(domain: str) -> list[str]:
    host = tracker_domain(domain)
    root = _root_domain(host)
    # Note: Direct favicon fallback checks the tracker host first, then the main domain.
    return [h for h in dict.fromkeys([host, root]) if h]


def _tracker_html_hosts(domain: str) -> list[str]:
    host = tracker_domain(domain)
    root = _root_domain(host)
    # Note: HTML discovery checks the main site first, because tracker announce hosts often return text/plain.
    return [h for h in dict.fromkeys([root, host]) if h]


def _favicon_candidates(domain: str) -> list[str]:
    candidates = []
    for h in _tracker_icon_hosts(domain):
        candidates.extend([f"https://{h}/favicon.ico", f"http://{h}/favicon.ico"])
    return list(dict.fromkeys(candidates))


def _html_icon_candidates(domain: str, errors: list[str] | None = None) -> list[str]:
    urls = []
    for h in _tracker_html_hosts(domain):
        for scheme in ("https", "http"):
            base = f"{scheme}://{h}/"
            try:
                data, ctype, final_url = _fetch(base, limit=524288)
            except Exception as exc:
                if errors is not None:
                    errors.append(f"{base}: {exc}")
                continue
            lower = data[:4096].lower()
            if "html" not in ctype and b"<html" not in lower and b"<link" not in data.lower():
                if errors is not None:
                    errors.append(f"{base}: response is not html ({ctype or 'unknown content-type'})")
                continue
            html = data.decode("utf-8", errors="ignore")
            for href in _extract_icon_hrefs(html):
                urls.append(urllib.parse.urljoin(final_url, href))
    return list(dict.fromkeys(urls))


def _cached_favicon(domain: str):
    clean = tracker_domain(domain)
    if not clean:
        return None
    with connect() as conn:
        return conn.execute("SELECT * FROM tracker_favicon_cache WHERE domain=?", (clean,)).fetchone()


def favicon_cache_row(domain: str):
    """Expose the favicon cache row for diagnostics without duplicating SQL in routes or the CLI."""
    return _cached_favicon(domain)


def favicon_path(domain: str, enabled: bool = True, force: bool = False) -> tuple[Path | None, str | None]:
    clean = tracker_domain(domain)
    if not enabled or not clean:
        return None, None
    cached = _cached_favicon(clean)
    now = _now_epoch()
    if cached and not force and now - float(cached.get("updated_epoch") or 0) < FAVICON_CACHE_TTL_SECONDS:
        path = Path(str(cached.get("file_path") or ""))
        mime = str(cached.get("mime_type") or mimetypes.guess_type(path.name)[0] or "image/x-icon")
        if path.exists() and path.is_file():
            try:
                if _is_icon(path.read_bytes()[:524288], mime, str(cached.get("source_url") or path.name)):
                    return path, mime
            except Exception:
                pass
        if cached.get("error"):
            return None, None
    # Note: Favicon lookup checks the main-domain HTML first, then tracker HTML, then direct /favicon.ico fallbacks.
    FAVICON_DIR.mkdir(parents=True, exist_ok=True)
    errors = []
    candidates = _html_icon_candidates(clean, errors) + _favicon_candidates(clean)
    candidates = list(dict.fromkeys(candidates))
    idx = 0
    while idx < len(candidates):
        url = candidates[idx]
        idx += 1
        try:
            data, ctype, final_url = _fetch(url, limit=524288)
            if not _is_icon(data, ctype, final_url):
                errors.append(f"{url}: invalid icon ({ctype or 'unknown content-type'}, {len(data)} bytes)")
                continue
            ext = Path(urllib.parse.urlparse(final_url).path).suffix.lower() or mimetypes.guess_extension(ctype) or ".ico"
            if ext not in {".ico", ".png", ".jpg", ".jpeg", ".svg", ".webp"}:
                ext = ".ico"
            path = FAVICON_DIR / f"{_safe_filename(clean)}{ext}"
            path.write_bytes(data)
            mime = ctype if ctype.startswith("image/") else (mimetypes.guess_type(path.name)[0] or "image/x-icon")
            with connect() as conn:
                conn.execute(
                    """
                    INSERT INTO tracker_favicon_cache(domain, source_url, file_path, mime_type, updated_at, updated_epoch, error)
                    VALUES(?, ?, ?, ?, ?, ?, NULL)
                    ON CONFLICT(domain) DO UPDATE SET
                        source_url=excluded.source_url,
                        file_path=excluded.file_path,
                        mime_type=excluded.mime_type,
                        updated_at=excluded.updated_at,
                        updated_epoch=excluded.updated_epoch,
                        error=NULL
                    """,
                    (clean, final_url, str(path), mime, utcnow(), now),
                )
            return path, mime
        except Exception as exc:
            errors.append(f"{url}: {exc}")
    # HTML is checked once before direct /favicon.ico probes; do not guess cdn/static/www hosts unless HTML points there.
    with connect() as conn:
        conn.execute(
            """
            INSERT INTO tracker_favicon_cache(domain, source_url, file_path, mime_type, updated_at, updated_epoch, error)
            VALUES(?, '', '', '', ?, ?, ?)
            ON CONFLICT(domain) DO UPDATE SET
                updated_at=excluded.updated_at,
                updated_epoch=excluded.updated_epoch,
                error=excluded.error
            """,
            (clean, utcnow(), now, "; ".join(errors[-8:]) or "favicon not found"),
        )
    return None, None
@@ -1,30 +1,52 @@
 from __future__ import annotations

+import threading
 import psutil
-from flask_socketio import emit
+from flask_socketio import emit, join_room, leave_room, disconnect
 from ..config import POLL_INTERVAL
 from .preferences import active_profile, get_profile
 from .torrent_cache import torrent_cache
 from .torrent_summary import cached_summary
-from . import rtorrent, smart_queue, traffic_history, automation_rules
+from . import rtorrent, smart_queue, traffic_history, automation_rules, torrent_stats, auth, speed_peaks


+def _profile_room(profile_id: int) -> str:
+    return f"profile:{int(profile_id)}"
+
+
+def _poller_profiles() -> list[dict]:
+    # Note: Background polling has no browser session, so auth-enabled mode refreshes all profiles and emits only to per-profile rooms.
+    if not auth.enabled():
+        profile = active_profile()
+        return [profile] if profile else []
+    from ..db import connect
+    with connect() as conn:
+        return conn.execute("SELECT * FROM rtorrent_profiles ORDER BY id").fetchall()
+
+
+def _emit_profile(socketio, event: str, payload: dict, profile_id: int) -> None:
+    target = _profile_room(profile_id) if auth.enabled() else None
+    socketio.emit(event, payload, to=target) if target else socketio.emit(event, payload)
+
+
 _started = False
+_start_lock = threading.Lock()


 def register_socketio_handlers(socketio):
-    global _started

     def poller():
         tick = 0
         while True:
-            profile = active_profile()
-            if profile:
+            for profile in _poller_profiles():
+                if not profile:
+                    continue
+                pid = int(profile["id"])
                 diff = torrent_cache.refresh(profile)
-                heartbeat = {"ok": bool(diff.get("ok")), "profile_id": profile["id"], "tick": tick, "error": diff.get("error", "")}
+                heartbeat = {"ok": bool(diff.get("ok")), "profile_id": pid, "tick": tick, "error": diff.get("error", "")}
                 if diff.get("ok") and (diff["added"] or diff["updated"] or diff["removed"]):
-                    socketio.emit("torrent_patch", {**diff, "summary": cached_summary(profile["id"], torrent_cache.snapshot(profile["id"]), force=True)})
+                    _emit_profile(socketio, "torrent_patch", {**diff, "summary": cached_summary(pid, torrent_cache.snapshot(pid), force=True)}, pid)
                 elif not diff.get("ok"):
-                    socketio.emit("rtorrent_error", diff)
+                    _emit_profile(socketio, "rtorrent_error", diff, pid)
                 try:
                     status = rtorrent.system_status(profile)
                     if bool(profile.get("is_remote")):
@@ -35,41 +57,64 @@ def register_socketio_handlers(socketio):
                     status["ram"] = psutil.virtual_memory().percent
                     status["usage_source"] = "local"
                     status["usage_available"] = True
-                    status["profile_id"] = profile["id"]
-                    traffic_history.record(profile["id"], status.get("down_rate", 0), status.get("up_rate", 0), status.get("total_down", 0), status.get("total_up", 0))
-                    socketio.emit("system_stats", status)
+                    status["profile_id"] = pid
+                    traffic_history.record(pid, status.get("down_rate", 0), status.get("up_rate", 0), status.get("total_down", 0), status.get("total_up", 0))
+                    # Note: Peak DL/UL rates are computed in the background alongside the existing poller and stored only when the previous record is beaten.
+                    status["speed_peaks"] = speed_peaks.record(pid, status.get("down_rate", 0), status.get("up_rate", 0))
+                    _emit_profile(socketio, "system_stats", status, pid)
                     heartbeat["ok"] = True
                 except Exception as exc:
                     heartbeat["ok"] = False
                     heartbeat["error"] = str(exc)
-                    socketio.emit("rtorrent_error", {"profile_id": profile["id"], "error": str(exc)})
+                    _emit_profile(socketio, "rtorrent_error", {"profile_id": pid, "error": str(exc)}, pid)
+                if tick % max(1, int(15 * 60 / POLL_INTERVAL)) == 0:
+                    # Note: Queue heavier torrent statistics outside the fast system_stats poller.
+                    torrent_stats.queue_refresh(socketio, profile, force=False, room=_profile_room(pid) if auth.enabled() else None)
                 if tick % max(1, int(30 / POLL_INTERVAL)) == 0:
                     try:
                         result = smart_queue.check(profile, force=False)
                         if result.get("enabled"):
-                            socketio.emit("smart_queue_update", result)
+                            _emit_profile(socketio, "smart_queue_update", result, pid)
+                        if result.get("stopped") or result.get("started") or result.get("start_requested") or result.get("paused") or result.get("resumed"):
+                            # Note: After Smart Queue STOP/START changes, refresh the cache immediately so the Downloading list does not wait for the next poller cycle.
+                            queue_diff = torrent_cache.refresh(profile)
+                            if queue_diff.get("ok"):
+                                _emit_profile(socketio, "torrent_patch", {**queue_diff, "summary": cached_summary(pid, torrent_cache.snapshot(pid), force=True)}, pid)
                     except Exception as exc:
-                        socketio.emit("smart_queue_update", {"ok": False, "error": str(exc)})
+                        _emit_profile(socketio, "smart_queue_update", {"ok": False, "error": str(exc)}, pid)
                     try:
                         auto_result = automation_rules.check(profile, force=False)
                         if auto_result.get("applied"):
-                            socketio.emit("automation_update", auto_result)
+                            _emit_profile(socketio, "automation_update", auto_result, pid)
                     except Exception as exc:
-                        socketio.emit("automation_update", {"ok": False, "error": str(exc)})
+                        _emit_profile(socketio, "automation_update", {"ok": False, "error": str(exc)}, pid)
-                socketio.emit("heartbeat", heartbeat)
+                _emit_profile(socketio, "heartbeat", heartbeat, pid)
             tick += 1
             socketio.sleep(POLL_INTERVAL)

-    @socketio.on("connect")
-    def handle_connect():
+    def ensure_poller_started():
         global _started
+        with _start_lock:
             if not _started:
+                # Note: The poller starts with the app, so Smart Queue and automations work without an open UI.
                 socketio.start_background_task(poller)
                 _started = True
+
+    ensure_poller_started()
+
+    @socketio.on("connect")
+    def handle_connect():
+        ensure_poller_started()
+        if auth.enabled() and not auth.current_user_id():
+            # Note: Socket.IO uses the same session auth as the REST API; unauthenticated clients are disconnected.
+            disconnect()
+            return False
         profile = active_profile()
+        if profile:
+            join_room(_profile_room(profile["id"]))
         emit("connected", {"ok": True, "profile": profile})
         if not profile:
-            # Note: Fresh installs have no rTorrent yet; tell the client to show setup instead of waiting for a snapshot.
+            # Note: Fresh installs or users without profile access get setup state, not another user's snapshot.
            emit("profile_required", {"ok": True, "profiles": []})
             return
         rows = torrent_cache.snapshot(profile["id"])
@@ -77,6 +122,12 @@ def register_socketio_handlers(socketio):
     @socketio.on("select_profile")
     def handle_select_profile(data):
+        if auth.enabled() and not auth.current_user_id():
+            disconnect()
+            return
+        old_profile = active_profile()
+        if old_profile:
+            leave_room(_profile_room(old_profile["id"]))
         profile_id = int((data or {}).get("profile_id") or 0)
         if not profile_id:
             # Note: Ignore empty profile selections created before the first rTorrent profile exists.
@@ -84,8 +135,9 @@ def register_socketio_handlers(socketio):
             return
         profile = get_profile(profile_id)
         if not profile:
-            emit("rtorrent_error", {"error": "Profile does not exist"})
+            emit("rtorrent_error", {"error": "Profile access denied or profile does not exist"})
             return
+        join_room(_profile_room(profile_id))
         diff = torrent_cache.refresh(profile)
         rows = torrent_cache.snapshot(profile_id)
         emit("torrent_snapshot", {"profile_id": profile_id, "torrents": rows, "summary": cached_summary(profile_id, rows, force=True), "error": diff.get("error", "")})
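The room-routing idea behind `_emit_profile` can be exercised without a real Flask-SocketIO server by substituting a stub: when auth is enabled, events target a `profile:<id>` room, otherwise they broadcast. The stub class and `auth_enabled` flag below are test scaffolding, not part of the project's API:

```python
# Stub in place of the Flask-SocketIO server: records (event, room) pairs.
class StubSocketIO:
    def __init__(self):
        self.sent = []

    def emit(self, event, payload, to=None):
        self.sent.append((event, to))


def emit_profile(socketio, event, payload, profile_id, auth_enabled):
    # Same routing as _emit_profile above, with auth.enabled() passed in
    # explicitly so the sketch stays self-contained.
    target = f"profile:{int(profile_id)}" if auth_enabled else None
    socketio.emit(event, payload, to=target) if target else socketio.emit(event, payload)


sio = StubSocketIO()
emit_profile(sio, "system_stats", {}, 7, auth_enabled=True)
emit_profile(sio, "system_stats", {}, 7, auth_enabled=False)
print(sio.sent)  # [('system_stats', 'profile:7'), ('system_stats', None)]
```

In Flask-SocketIO, `to=None` means broadcast to all connected clients, so single-user installs keep their old behavior while auth-enabled installs only see their own profile's events.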
@@ -5,7 +5,7 @@ import threading
|
|||||||
import time
|
import time
|
||||||
import uuid
|
import uuid
|
||||||
from concurrent.futures import ThreadPoolExecutor
|
from concurrent.futures import ThreadPoolExecutor
|
||||||
from . import rtorrent
|
from . import rtorrent, auth
|
||||||
from .preferences import get_profile
|
from .preferences import get_profile
|
||||||
from ..config import WORKERS
|
from ..config import WORKERS
|
||||||
from ..db import connect, utcnow, default_user_id
|
from ..db import connect, utcnow, default_user_id
|
||||||
@@ -23,7 +23,13 @@ def set_socketio(socketio):


 def _emit(name: str, payload: dict):
-    if _socketio:
+    if not _socketio:
+        return
+    profile_id = payload.get("profile_id")
+    if auth.enabled() and profile_id:
+        # Note: Job/socket events are sent only to clients joined to the affected profile room.
+        _socketio.emit(name, payload, to=f"profile:{int(profile_id)}")
+    else:
         _socketio.emit(name, payload)


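The `_emit` change above routes events to a per-profile room when auth is on and falls back to a broadcast otherwise. The branching can be exercised in isolation — `FakeSocketIO` below is a test stand-in (an assumption, not part of the project), while the `profile:<id>` room naming and the `to=` keyword mirror the diff:

```python
# Minimal sketch of the room-scoped emit fallback from _emit(), using a
# stand-in emitter instead of a real Socket.IO server.
class FakeSocketIO:
    def __init__(self):
        self.sent = []

    def emit(self, name, payload, to=None):
        # Record what would be sent, and to which room (None = broadcast).
        self.sent.append((name, payload, to))


def emit_scoped(socketio, auth_enabled, name, payload):
    if not socketio:
        return
    profile_id = payload.get("profile_id")
    if auth_enabled and profile_id:
        # Scope the event to the affected profile's room.
        socketio.emit(name, payload, to=f"profile:{int(profile_id)}")
    else:
        # Auth disabled or no profile context: broadcast to everyone.
        socketio.emit(name, payload)


sio = FakeSocketIO()
emit_scoped(sio, True, "job_update", {"profile_id": 7, "status": "done"})
emit_scoped(sio, False, "job_update", {"status": "done"})
```

With Flask-SocketIO the `to=` target works only for clients that previously called `join_room`, which is why the connect handler in the first hunk adds `join_room(_profile_room(profile_id))`.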
@@ -48,25 +54,33 @@ def _job_row(job_id: str):
         return conn.execute("SELECT rowid AS _rowid, * FROM jobs WHERE id=?", (job_id,)).fetchone()


-def _is_ordered_action(action_name: str) -> bool:
-    return action_name in {"move", "remove"}
+def _job_payload(row) -> dict:
+    try:
+        return json.loads((row or {}).get("payload_json") or "{}")
+    except Exception:
+        return {}


+def _is_ordered_job(row) -> bool:
+    payload = _job_payload(row)
+    # Note: Move/remove remain ordered, and automation-created jobs can opt in so effect order is visible and predictable.
+    return str((row or {}).get("action") or "") in {"move", "remove"} or bool(payload.get("automation_ordered"))
+
+
 def _has_prior_ordered_jobs(profile_id: int, rowid: int) -> bool:
     with connect() as conn:
-        row = conn.execute(
+        rows = conn.execute(
             """
-            SELECT 1
+            SELECT rowid AS _rowid, action, payload_json
             FROM jobs
             WHERE profile_id=?
               AND rowid<?
-              AND action IN ('move', 'remove')
               AND status IN ('pending', 'running')
-            LIMIT 1
+            ORDER BY rowid
             """,
             (profile_id, rowid),
-        ).fetchone()
-    return bool(row)
+        ).fetchall()
+    return any(_is_ordered_job(row) for row in rows)


 def _wait_for_prior_ordered_jobs(job_id: str, profile_id: int, rowid: int) -> bool:
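The hunk above changes the ordering rule: move/remove jobs stay strictly ordered, and any other job can opt in by setting `automation_ordered` in its payload. A database-free sketch of that predicate — rows here are plain dicts standing in for the SQLite rows, which is an assumption for illustration:

```python
import json


def job_payload(row: dict) -> dict:
    # Payloads are stored as JSON text; malformed payloads count as empty.
    try:
        return json.loads((row or {}).get("payload_json") or "{}")
    except Exception:
        return {}


def is_ordered_job(row: dict) -> bool:
    # Move/remove are always ordered; other actions opt in via the payload flag.
    action = str((row or {}).get("action") or "")
    return action in {"move", "remove"} or bool(job_payload(row).get("automation_ordered"))


def has_prior_ordered_jobs(rows: list[dict], rowid: int) -> bool:
    # Mirror of the SQL scan: any earlier pending/running row that is ordered.
    earlier = [r for r in rows if r["rowid"] < rowid and r["status"] in {"pending", "running"}]
    return any(is_ordered_job(r) for r in earlier)


queue = [
    {"rowid": 1, "action": "move", "status": "running", "payload_json": "{}"},
    {"rowid": 2, "action": "label", "status": "pending",
     "payload_json": '{"automation_ordered": true}'},
]
```

Note that this is also why the SQL in the diff now fetches `action` and `payload_json` instead of `SELECT 1`: the ordered/unordered decision moved from the `WHERE` clause into Python.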
@@ -97,7 +111,7 @@ def _set_job(job_id: str, status: str, error: str = "", result: dict | None = No


 def enqueue(action_name: str, profile_id: int, payload: dict, user_id: int | None = None, max_attempts: int = 2) -> str:
-    user_id = user_id or default_user_id()
+    user_id = user_id or auth.current_user_id() or default_user_id()
     job_id = uuid.uuid4().hex
     now = utcnow()
     with connect() as conn:
@@ -130,11 +144,11 @@ def _run(job_id: str):
     profile = get_profile(int(job["profile_id"]), int(job["user_id"]))
     if not profile:
         _set_job(job_id, "failed", "rTorrent profile does not exist", finished=True)
-        _emit("job_update", {"id": job_id, "status": "failed", "error": "profile not found"})
+        _emit("job_update", {"id": job_id, "profile_id": job.get("profile_id"), "status": "failed", "error": "profile not found"})
         return
     profile_id = int(profile["id"])
     ordered_lock = None
-    if _is_ordered_action(str(job["action"])):
+    if _is_ordered_job(job):
         if not _wait_for_prior_ordered_jobs(job_id, profile_id, int(job["_rowid"])):
             return
         ordered_lock = _get_exclusive_lock(profile_id)
@@ -150,19 +164,26 @@ def _run(job_id: str):
         with connect() as conn:
             conn.execute("UPDATE jobs SET status='running', attempts=?, started_at=COALESCE(started_at, ?), updated_at=? WHERE id=?", (attempts, utcnow(), utcnow(), job_id))
         _emit("operation_started", {"job_id": job_id, "action": job["action"], "profile_id": profile["id"], "hashes": payload.get("hashes") or [], "hash_count": len(payload.get("hashes") or []), "bulk": len(payload.get("hashes") or []) > 1})
-        _emit("job_update", {"id": job_id, "status": "running", "attempts": attempts})
+        _emit("job_update", {"id": job_id, "profile_id": profile["id"], "status": "running", "attempts": attempts})
         result = _execute(profile, job["action"], payload)
+        fresh = _job_row(job_id)
+        # Note: Emergency cancel keeps a cancelled job from being overwritten when work finishes later.
+        if fresh and fresh["status"] == "cancelled":
+            return
         _set_job(job_id, "done", result=result, finished=True)
         _emit("operation_finished", {"job_id": job_id, "action": job["action"], "profile_id": profile["id"], "hashes": payload.get("hashes") or [], "hash_count": len(payload.get("hashes") or []), "bulk": len(payload.get("hashes") or []) > 1, "result": result})
-        _emit("job_update", {"id": job_id, "status": "done", "result": result})
+        _emit("job_update", {"id": job_id, "profile_id": profile["id"], "status": "done", "result": result})
     except Exception as exc:
         fresh = _job_row(job_id) or {}
         attempts = int(fresh.get("attempts") or 1)
         max_attempts = int(fresh.get("max_attempts") or 2)
+        # Note: Emergency cancel keeps an exception from a cancelled job from moving it back to retry or failed.
+        if fresh and fresh.get("status") == "cancelled":
+            return
         status = "pending" if attempts < max_attempts else "failed"
         _set_job(job_id, status, str(exc), finished=(status == "failed"))
         _emit("operation_failed", {"job_id": job_id, "action": job.get("action"), "profile_id": job.get("profile_id"), "hashes": payload.get("hashes") or [], "error": str(exc)})
-        _emit("job_update", {"id": job_id, "status": status, "error": str(exc), "attempts": attempts})
+        _emit("job_update", {"id": job_id, "profile_id": job.get("profile_id"), "status": status, "error": str(exc), "attempts": attempts})
         if status == "pending":
             _executor.submit(_run, job_id)
     finally:
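Both new guards in `_run` above implement the same race check: re-read the job row after the work (or the exception) and bail out if an emergency cancel landed in the meantime, so a late result never overwrites `cancelled`. The idea, reduced to an in-memory stand-in (the `JOBS` dict and helper names below are illustrative, not the project's SQLite-backed versions):

```python
# In-memory stand-in for the jobs table; the real code re-reads the row from SQLite.
JOBS = {"j1": {"status": "running"}}


def set_job(job_id: str, status: str) -> None:
    JOBS[job_id]["status"] = status


def finish_job(job_id: str, succeeded: bool) -> None:
    # Re-read the row before writing: an emergency cancel that arrived while the
    # worker was busy must not be overwritten by a late "done"/"failed" update.
    fresh = JOBS.get(job_id)
    if fresh and fresh["status"] == "cancelled":
        return
    set_job(job_id, "done" if succeeded else "failed")


finish_job("j1", succeeded=True)         # normal path: running -> done
JOBS["j2"] = {"status": "cancelled"}     # cancelled while the "work" was running
finish_job("j2", succeeded=True)         # late result must not override the cancel
```

This is a check-then-write without a lock, so a cancel arriving between the re-read and `_set_job` can still lose; the diff accepts that small window rather than serializing every status update.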
@@ -182,6 +203,9 @@ def _job_summary(row: dict, payload: dict, result: dict) -> str:
     ctx = payload.get("job_context") or {}
     count = int(ctx.get("hash_count") or len(payload.get("hashes") or []) or result.get("count") or 0)
     parts = []
+    if ctx.get("bulk_label"):
+        # Note: Shows which generated bulk part is being displayed in the job queue.
+        parts.append(f"{ctx.get('bulk_label')} of {ctx.get('bulk_parts')}")
     if count:
         parts.append(("bulk " if count > 1 else "single ") + f"{count} torrent(s)")
     if ctx.get("target_path"):
@@ -215,36 +239,65 @@ def _public_job(row) -> dict:
     return d


+def _job_scope_sql(writable: bool = False) -> tuple[str, tuple]:
+    visible = auth.writable_profile_ids() if writable else auth.visible_profile_ids()
+    if visible is None:
+        return "", ()
+    if not visible:
+        return " WHERE 1=0", ()
+    placeholders = ",".join("?" for _ in visible)
+    return f" WHERE profile_id IN ({placeholders})", tuple(visible)
+
+
 def list_jobs(limit: int = 200, offset: int = 0):
     limit = max(1, min(int(limit or 50), 500))
     offset = max(0, int(offset or 0))
+    where, params = _job_scope_sql()
     with connect() as conn:
-        rows = conn.execute("SELECT * FROM jobs ORDER BY created_at DESC LIMIT ? OFFSET ?", (limit, offset)).fetchall()
-        total = conn.execute("SELECT COUNT(*) AS n FROM jobs").fetchone()["n"]
+        rows = conn.execute(f"SELECT * FROM jobs{where} ORDER BY created_at DESC LIMIT ? OFFSET ?", (*params, limit, offset)).fetchall()
+        total = conn.execute(f"SELECT COUNT(*) AS n FROM jobs{where}", params).fetchone()["n"]
     return {"rows": [_public_job(r) for r in rows], "total": total, "limit": limit, "offset": offset}


 def cancel_job(job_id: str) -> bool:
     row = _job_row(job_id)
-    if not row or row["status"] not in {"pending", "failed"}:
+    if not row or row["status"] not in {"pending", "running"}:
         return False
+    # Note: Emergency cancel is useful only for unfinished jobs; failed/done entries stay available for retry or log cleanup.
     _set_job(job_id, "cancelled", finished=True)
-    _emit("job_update", {"id": job_id, "status": "cancelled"})
+    _emit("job_update", {"id": job_id, "profile_id": row.get("profile_id"), "status": "cancelled"})
     return True


 def clear_jobs() -> int:
+    where, params = _job_scope_sql(writable=True)
+    status_clause = "status NOT IN ('pending', 'running')"
+    sql = f"DELETE FROM jobs{where} AND {status_clause}" if where else f"DELETE FROM jobs WHERE {status_clause}"
     with connect() as conn:
-        cur = conn.execute("DELETE FROM jobs WHERE status NOT IN ('pending', 'running')")
+        cur = conn.execute(sql, params)
         return int(cur.rowcount or 0)


+def emergency_clear_jobs() -> int:
+    # Note: Emergency cleanup first marks active jobs as cancelled, then clears the whole job log list.
+    now = utcnow()
+    where, params = _job_scope_sql(writable=True)
+    status_clause = "status IN ('pending', 'running')"
+    update_sql = f"UPDATE jobs SET status='cancelled', error='Emergency cancelled by user', finished_at=COALESCE(finished_at, ?), updated_at=?{where} AND {status_clause}" if where else "UPDATE jobs SET status='cancelled', error='Emergency cancelled by user', finished_at=COALESCE(finished_at, ?), updated_at=? WHERE status IN ('pending', 'running')"
+    with connect() as conn:
+        conn.execute(update_sql, (now, now, *params) if where else (now, now))
+        cur = conn.execute(f"DELETE FROM jobs{where}", params) if where else conn.execute("DELETE FROM jobs")
+        deleted = int(cur.rowcount or 0)
+    _emit("job_update", {"status": "cleared", "emergency": True})
+    return deleted
+
+
 def retry_job(job_id: str) -> bool:
     row = _job_row(job_id)
     if not row or row["status"] not in {"failed", "cancelled"}:
         return False
     with connect() as conn:
         conn.execute("UPDATE jobs SET status='pending', error='', finished_at=NULL, updated_at=? WHERE id=?", (utcnow(), job_id))
-    _emit("job_update", {"id": job_id, "status": "pending"})
+    _emit("job_update", {"id": job_id, "profile_id": row.get("profile_id"), "status": "pending"})
     _executor.submit(_run, job_id)
     return True
@@ -5,6 +5,15 @@
 const torrents = new Map();
 let visibleRows = [], selected = new Set(), selectedHash = null, lastSelectedHash = null, activeFilter = "all";
 let sortState = {key: "name", dir: 1}, renderPending = false, renderVersion = 0, lastRenderSignature = "";
+const MOBILE_SORT_STEPS = [
+  {key:"down_rate", dir:-1, label:"DL"},
+  {key:"up_rate", dir:-1, label:"UL"},
+  {key:"progress", dir:-1, label:"Progress"},
+  {key:"ratio", dir:-1, label:"Ratio"},
+  {key:"size", dir:-1, label:"Size"},
+  {key:"seeds", dir:-1, label:"Seeds"},
+  {key:"name", dir:1, label:"Name"}
+];
 let lastLimits = {down: 0, up: 0}, pendingBusy = 0, pathTarget = null, lastPathParent = "/";
 const traffic = [], systemUsage = [];
 const socket = io({transports:["polling"], reconnection:true, reconnectionAttempts:Infinity, reconnectionDelay:700, reconnectionDelayMax:5000, timeout:8000});
@@ -12,15 +21,28 @@
 let hiddenColumns = new Set((window.PYTORRENT?.tableColumns?.hidden || []));
 let knownLabels = [];
 let jobsPage = 0, jobsLimit = 25, jobsTotal = 0, smartHistoryExpanded = false;
+let automationSmartQueueStats = null;
 let peersRefreshTimer = null;
 let peersRefreshSeconds = Number(window.PYTORRENT?.peersRefreshSeconds || 0);
 let portCheckEnabled = !!Number(window.PYTORRENT?.portCheckEnabled || 0);
 let bootstrapTheme = window.PYTORRENT?.bootstrapTheme || "default";
 let fontFamily = window.PYTORRENT?.fontFamily || "default";
+let interfaceScale = Number(window.PYTORRENT?.interfaceScale || 100);
+let titleSpeedEnabled = !!Number(window.PYTORRENT?.titleSpeedEnabled || 0);
+let trackerFaviconsEnabled = !!Number(window.PYTORRENT?.trackerFaviconsEnabled || 0);
+let trackerSummary = {hashes:{}, trackers:[], scanned:0, errors:[]};
+let trackerSummaryStatus = 'idle';
+let trackerSummarySignature = "";
+let trackerSummaryTimer = null;
+let lastLabelFiltersSignature = "";
+let lastTrackerFiltersSignature = "";
+let lastMobileFiltersSignature = "";
+const BASE_TITLE = document.title || "pyTorrent";
+const lastBrowserSpeed = {down: "0 B/s", up: "0 B/s"};
 const FOOTER_ITEM_DEFS = [
   ["cpu", "CPU"], ["ram", "RAM"], ["usage_chart", "CPU/RAM chart"], ["disk", "Disk"],
   ["version", "rTorrent version"], ["speed_down", "Download speed"], ["speed_up", "Upload speed"],
-  ["limits", "Speed limits"], ["totals", "Total transfer"], ["port_check", "Port check"],
+  ["speed_peaks", "Peak speeds"], ["limits", "Speed limits"], ["totals", "Total transfer"], ["port_check", "Port check"],
   ["clock", "Clock"], ["sockets", "Open sockets"], ["shown", "Shown torrents"], ["selected", "Selected torrents"], ["docs", "API docs"]
 ];
 let footerItems = {...Object.fromEntries(FOOTER_ITEM_DEFS.map(([key]) => [key, true])), ...(window.PYTORRENT?.footerItems || {})};
@@ -34,7 +56,31 @@
 // Note: Keeps live filter tooltips stable while the pointer is over a filter button.
 const filterTooltipState = new WeakMap();

-function toast(msg, type="secondary") { const h=$('toastHost'); if(!h) return; const el=document.createElement('div'); el.className=`toast-item text-bg-${type}`; el.innerHTML=esc(msg); h.appendChild(el); setTimeout(()=>el.remove(),3500); }
+const toastGroups = new Map();
+function toastKey(msg, type){ return `${type}::${String(msg ?? '')}`; }
+function toast(msg, type="secondary") {
+  // Note: Groups identical toasts fired together, so repeated automation/action events do not flood the UI.
+  const h=$('toastHost');
+  if(!h) return;
+  const text=String(msg ?? '');
+  const key=toastKey(text,type);
+  const existing=toastGroups.get(key);
+  if(existing){
+    existing.count += 1;
+    const badge=existing.el.querySelector('.toast-count');
+    if(badge){ badge.textContent=`×${existing.count}`; badge.classList.remove('d-none'); }
+    clearTimeout(existing.timer);
+    existing.timer=setTimeout(()=>{ existing.el.remove(); toastGroups.delete(key); },3500);
+    return;
+  }
+  const el=document.createElement('div');
+  el.className=`toast-item text-bg-${type}`;
+  el.innerHTML=`<span class="toast-message">${esc(text)}</span><span class="toast-count d-none">×1</span>`;
+  h.appendChild(el);
+  const entry={el,count:1,timer:null};
+  entry.timer=setTimeout(()=>{ el.remove(); toastGroups.delete(key); },3500);
+  toastGroups.set(key,entry);
+}
 function setBusy(on){ pendingBusy += on ? 1 : -1; if(pendingBusy<0) pendingBusy=0; $('globalLoader')?.classList.toggle('d-none', pendingBusy===0); $('busyBadge')?.classList.toggle('d-none', pendingBusy===0); }
 function setInitialLoader(title, text){ if(initialLoaderDone) return; if($('initialLoaderTitle') && title) $('initialLoaderTitle').textContent=title; if($('initialLoaderText') && text) $('initialLoaderText').textContent=text; }
 function hideInitialLoader(){ if(initialLoaderDone) return; initialLoaderDone=true; $('initialLoader')?.classList.add('is-hidden'); }
@@ -65,11 +111,13 @@
   return new Intl.DateTimeFormat('pl-PL', opts).format(parsed.d).replace(',', '');
 }
 function dateCell(value){ const parsed=parseDate(value); if(!parsed) return esc(value||''); return `<span class="date-compact" title="${esc(formatDate(value,'full'))}">${esc(formatDate(value))}</span>`; }
+// Note: Human-readable date cells keep full timestamps visible without squeezing table columns.
+function humanDateCell(value){ const parsed=parseDate(value); if(!parsed) return esc(value||''); const full=formatDate(value,'full'); return `<span class="date-readable" title="${esc(parsed.raw)}">${esc(full)}</span>`; }
 function compactCell(value, max=120){ const text=String(value||""); if(!text) return ""; const short=text.length>max ? `${text.slice(0, Math.floor(max*0.62))}…${text.slice(-Math.floor(max*0.28))}` : text; return `<span class="text-compact" title="${esc(text)}">${esc(short)}</span>`; }
 function progressBar(value, extraClass=''){ const pct=Math.max(0,Math.min(100,Number(value||0))); const hue=Math.round((pct/100)*120); const light=30+Math.round((pct/100)*5); const bg=pct<=0?'transparent':pct>=100?'var(--torrent-progress-complete)':`hsl(${hue} 52% ${light}%)`; const done=pct>=100?' is-complete':''; const cls=extraClass?` ${extraClass}`:''; return `<div class="progress torrent-progress${done}${cls}" title="${esc(pct)}%"><div class="progress-bar" style="width:${pct}%;background:${bg}"></div><span>${esc(pct)}%</span></div>`; }
 function progress(t){ return progressBar(t.progress); }
 // Note: Displays status filter summaries calculated and cached by the backend API.
-const FILTER_COUNT_IDS = {all:'countAll', downloading:'countDownloading', seeding:'countSeeding', paused:'countPaused', checking:'countChecking', error:'countError', stopped:'countStopped'};
+const FILTER_COUNT_IDS = {all:'countAll', downloading:'countDownloading', seeding:'countSeeding', paused:'countPaused', checking:'countChecking', error:'countError', stopped:'countStopped', moving:'countMoving'};
 function formatFilterBytes(value){ return fmtBytes(value).replace(/\.0 (?=GiB|TiB)/, ' '); }
 function filterMetaLine(bucket){
   if(!bucket || !Number(bucket.count||0)) return '';
@@ -137,34 +185,139 @@
|
|||||||
}
|
}
|
||||||
applyFilterTooltip(button, tooltip, ariaLabel);
|
applyFilterTooltip(button, tooltip, ariaLabel);
|
||||||
}
|
}
|
||||||
|
function movingOperationRows(){
|
||||||
|
// Note: The Moving filter is based only on active move operations, not queued jobs.
|
||||||
|
return [...torrents.values()].filter(t=>{
|
||||||
|
const op=activeOperationFor(t);
|
||||||
|
return op?.action==='move' && op?.state==='running';
|
||||||
|
});
|
||||||
|
}
|
||||||
|
function movingFilterCount(){ return movingOperationRows().length; }
|
||||||
function setFilterSummary(type){
|
function setFilterSummary(type){
|
||||||
const el=$(FILTER_COUNT_IDS[type]);
|
const el=$(FILTER_COUNT_IDS[type]);
|
||||||
if(!el) return;
|
if(!el) return;
|
||||||
const bucket=torrentSummary?.filters?.[type] || {count:0};
|
const bucket=type==='moving' ? {count:movingFilterCount()} : (torrentSummary?.filters?.[type] || {count:0});
|
||||||
const meta=filterMetaLine(bucket, type);
|
const meta=type==='moving' ? '' : filterMetaLine(bucket, type);
|
||||||
const tooltip=filterTooltipLine(bucket, type);
|
const tooltip=type==='moving' && bucket.count ? 'Active moving operations' : filterTooltipLine(bucket, type);
|
||||||
el.innerHTML=`<span class="filter-count">${esc(bucket.count||0)}</span>${meta?`<span class="filter-meta">${esc(meta)}</span>`:''}`;
|
el.innerHTML=`<span class="filter-count">${esc(bucket.count||0)}</span>${meta?`<span class="filter-meta">${esc(meta)}</span>`:''}`;
|
||||||
const button=el.closest('.filter');
|
const button=el.closest('.filter');
|
||||||
if(button){
|
if(button){
|
||||||
const ariaLabel = tooltip ? `${button.dataset.filter || type}: ${tooltip.replace(/\n/g, ', ')}` : '';
|
const ariaLabel = tooltip ? `${button.dataset.filter || type}: ${tooltip.replace(/\n/g, ', ')}` : '';
|
||||||
|
button.classList.toggle('d-none', type==='moving' && !Number(bucket.count||0));
|
||||||
setStableFilterTooltip(button, tooltip, ariaLabel);
|
setStableFilterTooltip(button, tooltip, ariaLabel);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
function labelNames(value){ return String(value||'').split(/[,;|]+/).map(x=>x.trim()).filter(Boolean).filter((x,i,a)=>a.indexOf(x)===i); }
|
function labelNames(value){ return String(value||'').split(/[,;|]+/).map(x=>x.trim()).filter(Boolean).filter((x,i,a)=>a.indexOf(x)===i); }
|
||||||
function labelValue(labels){ return [...new Set((labels||[]).map(x=>String(x||'').trim()).filter(Boolean))].join(', '); }
|
function labelValue(labels){ return [...new Set((labels||[]).map(x=>String(x||'').trim()).filter(Boolean))].join(', '); }
|
||||||
function rowHasLabel(t,label){ return labelNames(t.label).includes(label); }
|
function rowHasLabel(t,label){ return labelNames(t.label).includes(label); }
|
||||||
|
function trackerRowsForHash(hash){ return trackerSummary.hashes?.[hash] || []; }
|
||||||
|
function rowHasTracker(t, domain){ return trackerRowsForHash(t.hash).some(x=>x.domain===domain); }
|
||||||
function torrentHasError(t){ return !!torrentWarning(t); }
|
function torrentHasError(t){ return !!torrentWarning(t); }
|
||||||
function isChecking(t){ return t?.status==='Checking' || Number(t?.hashing||0)>0; }
|
function isChecking(t){ return t?.status==='Checking' || Number(t?.hashing||0)>0; }
|
||||||
function rowVisible(t){ const q=($('searchBox')?.value||'').toLowerCase(); if(q && ![t.name,t.path,t.label,t.hash,t.ratio_group].join(' ').toLowerCase().includes(q)) return false; if(activeFilter==='downloading') return !isChecking(t) && !t.complete && t.state && !t.paused; if(activeFilter==='seeding') return !isChecking(t) && t.complete && t.state && !t.paused; if(activeFilter==='paused') return !!t.paused || t.status==='Paused'; if(activeFilter==='checking') return isChecking(t); if(activeFilter==='error') return torrentHasError(t); if(activeFilter==='stopped') return !t.state && !isChecking(t); if(activeFilter.startsWith('label:')) return rowHasLabel(t,activeFilter.slice(6)); return true; }
|
function rowVisible(t){ const q=($('searchBox')?.value||'').toLowerCase(); if(q && ![t.name,t.path,t.label,t.hash,t.ratio_group].join(' ').toLowerCase().includes(q)) return false; if(activeFilter==='downloading') return !isChecking(t) && !t.complete && t.state && !t.paused; if(activeFilter==='seeding') return !isChecking(t) && t.complete && t.state && !t.paused; if(activeFilter==='paused') return !!t.paused || t.status==='Paused'; if(activeFilter==='checking') return isChecking(t); if(activeFilter==='error') return torrentHasError(t); if(activeFilter==='stopped') return !t.state && !isChecking(t); if(activeFilter==='moving') { const op=activeOperationFor(t); return op?.action==='move' && op?.state==='running'; } if(activeFilter.startsWith('label:')) return rowHasLabel(t,activeFilter.slice(6)); if(activeFilter.startsWith('tracker:')) return rowHasTracker(t,activeFilter.slice(8)); return true; }
|
||||||
function compareRows(a,b){ const k=sortState.key; let av=a[k], bv=b[k]; if(typeof av==='string'||typeof bv==='string') return String(av||'').localeCompare(String(bv||''))*sortState.dir; return ((Number(av||0)>Number(bv||0))?1:(Number(av||0)<Number(bv||0)?-1:0))*sortState.dir; }

function sortIcon(key){ if(sortState.key!==key) return ''; return sortState.dir>0?" <i class='fa-solid fa-caret-up'></i>":" <i class='fa-solid fa-caret-down'></i>"; }

function mobileSortDef(){ return MOBILE_SORT_STEPS.find(x=>x.key===sortState.key && x.dir===sortState.dir) || MOBILE_SORT_STEPS.find(x=>x.key===sortState.key) || MOBILE_SORT_STEPS[0]; }

function mobileSortLabel(){ const def=mobileSortDef(); return `${def.label} ${sortState.dir>0?'↑':'↓'}`; }

function cycleMobileSort(){ const current=MOBILE_SORT_STEPS.findIndex(x=>x.key===sortState.key && x.dir===sortState.dir); const next=MOBILE_SORT_STEPS[(current+1) % MOBILE_SORT_STEPS.length]; sortState={key:next.key, dir:next.dir}; if($('tableWrap'))$('tableWrap').scrollTop=0; if($('mobileList'))$('mobileList').scrollTop=0; scheduleRender(true); }

function updateSortHeaders(){ document.querySelectorAll('.torrent-table thead th[data-sort]').forEach(th=>{ const base=th.dataset.baseText||th.textContent.trim(); th.dataset.baseText=base; th.innerHTML=`${esc(base)}${sortIcon(th.dataset.sort)}`; th.classList.toggle('sorted',sortState.key===th.dataset.sort); }); }
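The compareRows comparator above mixes string and numeric ordering depending on the column type. The same logic can be factored into a standalone, direction-aware helper; a minimal sketch (makeComparator is a hypothetical name, not part of this codebase):

```javascript
// Hypothetical helper factoring out the compareRows logic: strings sort via
// localeCompare, everything else numerically; dir is +1 (asc) or -1 (desc).
function makeComparator(key, dir) {
  return (a, b) => {
    const av = a[key], bv = b[key];
    if (typeof av === 'string' || typeof bv === 'string')
      return String(av || '').localeCompare(String(bv || '')) * dir;
    const an = Number(av || 0), bn = Number(bv || 0);
    return (an > bn ? 1 : an < bn ? -1 : 0) * dir;
  };
}

const rows = [{ name: 'b', size: 2 }, { name: 'a', size: 10 }];
rows.sort(makeComparator('size', -1)); // largest first
```

Coercing missing values with `Number(av || 0)` keeps undefined fields from poisoning the sort, at the cost of treating "missing" and "zero" identically.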
// Note: Refreshes sidebar counters from the cached API summary, not from browser-side aggregation.
function syncFilterButtons(){
  // Note: Keeps the active class in sync after automatically falling back from Moving to All.
  document.querySelectorAll('.filter').forEach(x=>x.classList.toggle('active', x.dataset.filter===activeFilter));
}

function renderCounts(){
  // Note: When the last move finishes, fall back to All so the (now hidden) Moving filter does not leave an empty list active.
  if(activeFilter==='moving' && !movingFilterCount()) activeFilter='all';
  syncFilterButtons();
  Object.keys(FILTER_COUNT_IDS).forEach(setFilterSummary);
  $('statSelected').textContent=selected.size;
}
function bindSidebarFilterClicks(root){
  root?.querySelectorAll('.filter').forEach(b=>b.addEventListener('click',()=>{
    document.querySelectorAll('.filter').forEach(x=>x.classList.remove('active'));
    b.classList.add('active');
    activeFilter=b.dataset.filter;
    if($('tableWrap')) $('tableWrap').scrollTop=0;
    scheduleRender(true);
  }));
}

function renderLabelFilters(force=false){
  const box=$('labelFilters');
  if(!box) return;
  const counts=new Map();
  [...torrents.values()].forEach(t=>labelNames(t.label).forEach(l=>counts.set(l,(counts.get(l)||0)+1)));
  const labels=[...counts.keys()].filter(l=>counts.get(l)>0).sort((a,b)=>a.localeCompare(b));
  if(activeFilter.startsWith('label:') && !counts.has(activeFilter.slice(6))) activeFilter='all';
  const sig=labels.map(l=>`${l}:${counts.get(l)}`).join('|');
  if(!force && sig===lastLabelFiltersSignature){ syncFilterButtons(); return; }
  lastLabelFiltersSignature=sig;
  box.innerHTML=labels.length?`<div class="small text-muted px-2 mb-1">Labels</div>${labels.map(l=>`<button class="filter label-filter ${activeFilter==='label:'+l?'active':''}" data-filter="label:${esc(l)}"><span><i class="fa-solid fa-tag"></i> ${esc(l)}</span><span>${counts.get(l)}</span></button>`).join('')}`:'';
  bindSidebarFilterClicks(box);
}
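renderLabelFilters avoids rebuilding the sidebar DOM by fingerprinting its inputs into a string signature and comparing it to the last one. A minimal sketch of that pattern (renderIfChanged and the rebuild counter are illustrative, not part of this codebase):

```javascript
// Rebuild only when a cheap string fingerprint of the inputs changes;
// `force` bypasses the check, mirroring renderLabelFilters(force).
let lastSignature = '';
let rebuilds = 0;
function renderIfChanged(labels, counts, force = false) {
  const sig = labels.map(l => `${l}:${counts.get(l)}`).join('|');
  if (!force && sig === lastSignature) return false; // skip the expensive DOM work
  lastSignature = sig;
  rebuilds++; // stands in for the real box.innerHTML update
  return true;
}
```

The signature must include everything the markup depends on; the real code therefore also folds the active filter state back in via syncFilterButtons on the skip path.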
function trackerFavicon(tracker){
  const domain=typeof tracker==='string'?tracker:(tracker?.domain||'');
  if(!trackerFaviconsEnabled || !domain) return '<i class="fa-solid fa-bullseye"></i>';
  // Note: Normal rendering must use cached/static URLs only. Avoid refresh=1 here, otherwise scroll-triggered paints can re-warm icons repeatedly.
  const fallback=`/api/trackers/favicon/${encodeURIComponent(domain)}`;
  const src=(typeof tracker==='object' && tracker?.favicon_url) ? tracker.favicon_url : fallback;
  return `<img class="tracker-favicon" src="${esc(src)}" alt="" loading="lazy" data-fallback-src="${esc(fallback)}" onerror="if(this.dataset.retry!=='1'){this.dataset.retry='1';this.src=this.dataset.fallbackSrc;}else{this.classList.add('d-none')}"><i class="fa-solid fa-bullseye tracker-fallback-icon"></i>`;
}

function trackerFilterPlaceholder(){
  if(trackerSummaryStatus==='loading') return '<div class="tracker-filter-empty"><span class="spinner-border spinner-border-xs"></span> Loading cached trackers...</div>';
  if(trackerSummaryStatus==='error') return '<div class="tracker-filter-empty text-warning"><i class="fa-solid fa-triangle-exclamation"></i> Tracker list unavailable</div>';
  if(Number(trackerSummary.pending||0)) return `<div class="tracker-filter-empty"><span class="spinner-border spinner-border-xs"></span> Tracker cache: ${esc(trackerSummary.cached||0)}/${esc(trackerSummary.scanned||0)}</div>`;
  if(hasTorrentSnapshot && torrents.size) return '<div class="tracker-filter-empty">No trackers found</div>';
  return '<div class="tracker-filter-empty">Waiting for torrents...</div>';
}
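The inline onerror handler in trackerFavicon retries a failed favicon exactly once against the API fallback URL, then hides the img and lets the icon fallback show. The same state machine, extracted for clarity (nextFaviconSrc is a hypothetical helper, not part of this codebase):

```javascript
// First error: swap to the fallback URL and mark the retry as used.
// Second error: give up (the caller hides the <img>, leaving the icon fallback).
function nextFaviconSrc(state) {
  if (!state.retried) {
    state.retried = true;
    return state.fallback;
  }
  return null;
}
```

Capping at one retry matters here: a favicon that 404s on both URLs would otherwise loop through onerror on every repaint.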
function renderTrackerFilters(force=false){
  const box=$('trackerFilters');
  if(!box) return;
  const trackers=trackerSummary.trackers || [];
  if(activeFilter.startsWith('tracker:') && !trackers.some(t=>t.domain===activeFilter.slice(8))) activeFilter='all';
  const sig=[
    trackerSummaryStatus,
    trackerFaviconsEnabled ? 1 : 0,
    trackerSummary.pending || 0,
    trackerSummary.cached || 0,
    trackerSummary.scanned || 0,
    trackers.map(t=>`${t.domain}:${t.count||0}:${t.favicon_url||''}`).join('|')
  ].join('::');
  if(!force && sig===lastTrackerFiltersSignature){ syncFilterButtons(); return; }
  lastTrackerFiltersSignature=sig;
  // Note: The tracker filter section is always visible, so an empty or failed tracker scan does not look like a missing feature.
  const rows=trackers.length
    ? trackers.map(t=>`<button class="filter tracker-filter ${activeFilter==='tracker:'+t.domain?'active':''}" data-filter="tracker:${esc(t.domain)}"><span>${trackerFavicon(t)} ${esc(t.domain)}</span><span>${esc(t.count||0)}</span></button>`).join('')
    : trackerFilterPlaceholder();
  box.innerHTML=`<div class="small text-muted px-2 mb-1">Trackers</div>${rows}`;
  bindSidebarFilterClicks(box);
}
async function refreshTrackerSummary(force=false){
  const hashes=[...torrents.keys()].sort();
  const sig=`${hashes.length}:${hashes[0]||''}:${hashes[hashes.length-1]||''}:${trackerFaviconsEnabled?1:0}`;
  if(!force && sig===trackerSummarySignature && !Number(trackerSummary.pending||0)) return;
  trackerSummarySignature=sig;
  if(!hashes.length){ trackerSummary={hashes:{},trackers:[],scanned:0,errors:[],pending:0,cached:0}; trackerSummaryStatus='empty'; renderTrackerFilters(); return; }
  trackerSummaryStatus=(trackerSummary.trackers||[]).length?'ready':'loading';
  renderTrackerFilters();
  try{
    // Note: We do not send 13k hashes in the URL; the backend takes its own local snapshot and fills the cache in small batches.
    const j=await (await fetch('/api/trackers/summary?scan_limit=0&warm=1&bg_limit=80')).json();
    if(!j.ok && !j.summary) throw new Error(j.error||'Tracker summary failed');
    trackerSummary=j.summary||{hashes:{},trackers:[],scanned:0,errors:[],pending:0,cached:0};
    trackerSummaryStatus=(trackerSummary.trackers||[]).length?'ready':'empty';
    renderTrackerFilters();
    scheduleRender(true);
    if(Number(trackerSummary.pending||0)>0){
      clearTimeout(trackerSummaryTimer);
      trackerSummaryTimer=setTimeout(()=>refreshTrackerSummary(true).catch(()=>{}), 5000);
    }
  }catch(e){ trackerSummaryStatus='error'; renderTrackerFilters(); console.warn('Tracker summary failed', e); }
}

function scheduleTrackerSummary(force=false){
  clearTimeout(trackerSummaryTimer);
  trackerSummaryTimer=setTimeout(()=>refreshTrackerSummary(force).catch(()=>{}), force?50:600);
}
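scheduleTrackerSummary is a hand-rolled trailing-edge debounce: every call cancels the pending timer and re-arms it, so a burst of calls produces a single refresh. The generic shape, as a sketch (assumes one shared timer per debounced function):

```javascript
// Trailing-edge debounce: only the last call within `delay` ms actually fires.
function debounce(fn, delay) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}

let calls = 0;
const refresh = debounce(() => { calls++; }, 100);
refresh(); refresh(); refresh(); // collapses into one trailing invocation
```

The real function also picks its delay per call (50 ms when forced, 600 ms otherwise), which a generic debounce would need an extra parameter to express.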
function buildVisibleRows(){ visibleRows=[...torrents.values()].filter(rowVisible).sort(compareRows); $('statShown').textContent=visibleRows.length; }

function applyColumnVisibility(){ document.querySelectorAll('[data-col]').forEach(el=>el.classList.toggle('hidden-col', hiddenColumns.has(el.dataset.col))); }

function actionLabel(action){
@@ -179,6 +332,12 @@
  [...new Set(hashes||[])].filter(Boolean).forEach(hash=>activeOperations.set(hash,{action,jobId,state,label,updatedAt:Date.now()}));
  scheduleRender(true);
}

function markQueuedJobs(response, fallbackHashes, action){
  // Note: Supports API responses that split one large user action into multiple queued bulk parts.
  const jobs=Array.isArray(response?.jobs)?response.jobs:[];
  if(jobs.length){ jobs.forEach(job=>markTorrentOperation(job.hashes||[],action,job.job_id,'queued')); return; }
  markTorrentOperation(fallbackHashes,action,response?.job_id,'queued');
}
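markQueuedJobs accepts two response shapes: a single job_id, or a jobs[] array when the backend splits one user action into several bulk parts. That normalization can be isolated into a pure function (jobsToMark is a hypothetical name used for illustration):

```javascript
// Normalize a queue response into a flat list of {jobId, hashes} entries,
// falling back to the originally selected hashes when no jobs[] is present.
function jobsToMark(response, fallbackHashes) {
  const jobs = Array.isArray(response?.jobs) ? response.jobs : [];
  if (jobs.length) return jobs.map(j => ({ jobId: j.job_id, hashes: j.hashes || [] }));
  return [{ jobId: response?.job_id, hashes: fallbackHashes }];
}
```

Keeping the fallback path means older single-job responses keep working without a version check.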
function clearJobOperation(jobId, hashes=[]){
  if(jobId){ [...activeOperations].forEach(([hash,op])=>{ if(op.jobId===jobId) activeOperations.delete(hash); }); }
  (hashes||[]).forEach(hash=>activeOperations.delete(hash));
@@ -200,9 +359,22 @@
function torrentWarning(t){ const msg=String(t.message||'').trim(); if(!msg) return null; const l=msg.toLowerCase(); const patterns=['error','failed','failure','timeout','timed out','tracker','could not','cannot','refused','unreachable','denied']; return patterns.some(p=>l.includes(p)) ? msg : null; }

function torrentNameIcon(t){ const m=statusMeta(t); return `<i class="fa-solid ${m.icon} ${m.color}"></i>`; }

function renderRow(t){ const labels=labelNames(t.label).map(l=>`<span class="chip label-mini"><i class="fa-solid fa-tag"></i> ${esc(l)}</span>`).join(' '); const warn=torrentWarning(t); const op=activeOperationFor(t); const classes=[selected.has(t.hash)?'selected':'', t.paused?'torrent-paused':'', op?'torrent-operating':'', warn?'torrent-warning':''].filter(Boolean).join(' '); const title=[t.name,warn,op?op.label:''].filter(Boolean).join('\n'); return `<tr data-hash="${esc(t.hash)}" class="${classes}"><td data-col="select" class="sel"><input class="row-check" type="checkbox" ${selected.has(t.hash)?'checked':''}></td><td data-col="name" class="name" title="${esc(title)}">${warn?'<i class="fa-solid fa-triangle-exclamation torrent-warning-icon"></i> ':''}${torrentNameIcon(t)} ${esc(t.name)}</td><td data-col="status">${statusBadge(t)}</td><td data-col="size">${esc(t.size_h)}</td><td data-col="progress">${progress(t)}</td><td data-col="down_rate">${esc(t.down_rate_h)}</td><td data-col="up_rate">${esc(t.up_rate_h)}</td><td data-col="seeds">${esc(t.seeds)}</td><td data-col="peers">${esc(t.peers)}</td><td data-col="ratio">${esc(t.ratio)}</td><td data-col="path" class="path" title="${esc(t.path)}">${esc(t.path)}</td><td data-col="label">${labels||'<span class="text-muted">-</span>'}</td><td data-col="ratio_group">${esc(t.ratio_group||'')}</td></tr>`; }
function mobileFilterDefs(){ const arr=[...torrents.values()]; const f=torrentSummary?.filters||{}; const defs=[['all','All',f.all?.count??0],['downloading','Downloading',f.downloading?.count??0],['seeding','Seeding',f.seeding?.count??0],['paused','Paused',f.paused?.count??0],['checking','Checking',f.checking?.count??0],['error','With error',f.error?.count??0],['stopped','Stopped',f.stopped?.count??0]]; const movingCount=movingFilterCount(); if(movingCount) defs.push(['moving','Moving',movingCount]); const counts=new Map(); arr.forEach(t=>labelNames(t.label).forEach(l=>counts.set(l,(counts.get(l)||0)+1))); [...counts.keys()].sort((a,b)=>a.localeCompare(b)).forEach(l=>defs.push([`label:${l}`,l,counts.get(l),'label'])); (trackerSummary.trackers||[]).forEach(t=>defs.push([`tracker:${t.domain}`,t.domain,t.count,'tracker'])); return defs; }
function renderMobileFilters(){
  const bar=$('mobileFilterBar');
  if(!bar) return;
  const allVisible=visibleRows.length>0 && visibleRows.every(t=>selected.has(t.hash));
  const someVisible=visibleRows.some(t=>selected.has(t.hash));
  const defs=mobileFilterDefs();
  const sig=[activeFilter, sortState.key, sortState.dir, selected.size, allVisible ? 1 : 0, someVisible ? 1 : 0, defs.map(d=>`${d[0]}:${d[2]}`).join('|')].join('::');
  if(sig===lastMobileFiltersSignature) return;
  lastMobileFiltersSignature=sig;
  const opts=defs.map(([key,label,count,type])=>`<option value="${esc(key)}" ${activeFilter===key?'selected':''}>${type==='label'?'Label: ':type==='tracker'?'Tracker: ':''}${esc(label)} (${count})</option>`).join('');
  const bulk=selected.size?`<button id="mobileBulkLabel" class="btn btn-xs btn-outline-primary" type="button" data-bs-toggle="modal" data-bs-target="#labelModal"><i class="fa-solid fa-tag"></i> Label</button><button id="mobileBulkMove" class="btn btn-xs btn-outline-primary" type="button" data-action="move"><i class="fa-solid fa-folder-open"></i> Move</button>`:'';
  // Note: Mobile bulk actions reuse the existing label modal and move picker, so desktop behavior stays unchanged.
  bar.innerHTML=`<div class="mobile-filter-actions"><button id="mobileSelectAll" class="btn btn-xs ${allVisible?'btn-primary':'btn-outline-primary'}" type="button"><i class="fa-solid fa-check-double"></i> ${allVisible?'Unselect all':'Select all'}</button><button id="mobileClearSelection" class="btn btn-xs btn-outline-secondary" type="button" ${someVisible?'':'disabled'}><i class="fa-solid fa-xmark"></i> Clear</button>${bulk}<span>${selected.size} selected</span></div><div class="mobile-filter-select-row"><label for="mobileFilterSelect"><i class="fa-solid fa-filter"></i> Filter</label><select id="mobileFilterSelect" class="form-select form-select-sm">${opts}</select></div><div class="mobile-sort-row"><button id="mobileSortCycle" class="btn btn-xs btn-outline-secondary" type="button"><i class="fa-solid fa-arrow-down-wide-short"></i> Sort: ${esc(mobileSortLabel())}</button></div>`;
}
function renderMobile(){ const list=$('mobileList'); if(!list) return; const src=visibleRows.length?visibleRows:[...torrents.values()].filter(rowVisible).sort(compareRows); const rows=src.slice(0,250); renderMobileFilters(); list.innerHTML=rows.map(t=>{ const warn=torrentWarning(t); const op=activeOperationFor(t); const classes=[selected.has(t.hash)?'selected':'', op?'torrent-operating':'', warn?'torrent-warning':''].filter(Boolean).join(' '); return `<div class="mobile-card ${classes}" data-hash="${esc(t.hash)}" title="${esc(warn||op?.label||'')}"><div class="name">${warn?'<i class="fa-solid fa-triangle-exclamation torrent-warning-icon"></i> ':''}${torrentNameIcon(t)} ${esc(t.name)}</div><div class="small text-muted">${statusBadge(t)} · ${esc(t.progress)}% · Ratio ${esc(t.ratio)}</div><div class="small">DL ${esc(t.down_rate_h)} / UL ${esc(t.up_rate_h)}</div><div class="small text-truncate">${esc(t.path)}</div><div class="mobile-actions"><button class="btn btn-xs btn-outline-success" data-action="start" title="Start"><i class="fa-solid fa-play"></i></button><button class="btn btn-xs btn-outline-warning" data-action="pause" title="Pause"><i class="fa-solid fa-pause"></i></button><button class="btn btn-xs btn-outline-secondary" data-action="stop" title="Stop"><i class="fa-solid fa-stop"></i></button><button class="btn btn-xs btn-outline-primary" data-action="move" title="Move"><i class="fa-solid fa-folder-open"></i></button><button class="btn btn-xs btn-outline-primary" data-mobile-modal="label" title="Set label"><i class="fa-solid fa-tag"></i></button><button class="btn btn-xs btn-outline-info" data-action="recheck" title="Force recheck"><i class="fa-solid fa-rotate"></i></button><button class="btn btn-xs btn-outline-primary" data-action="reannounce" title="Reannounce"><i class="fa-solid fa-bullhorn"></i></button></div><div class="mobile-progress">${progress(t)}</div></div>`; }).join('') || (hasTorrentSnapshot ? `<div class="empty">No torrents.</div>` : loadingMarkup('Loading torrents...')); }
function renderTable(){ updateBulkBar(); renderCounts(); renderLabelFilters(); updateSortHeaders(); buildVisibleRows(); renderMobile(); const body=$('torrentBody'); if(!visibleRows.length){ body.innerHTML=hasTorrentSnapshot?'<tr><td colspan="13" class="empty">No torrents for this filter.</td></tr>':loadingTableRow('Loading torrents...'); return; } const wrap=$('tableWrap'); const start=Math.max(0,Math.floor((wrap?.scrollTop||0)/ROW_HEIGHT)-OVERSCAN); const count=Math.ceil((wrap?.clientHeight||500)/ROW_HEIGHT)+OVERSCAN*2; const end=Math.min(visibleRows.length,start+count); const sig=`${renderVersion}:${start}:${end}:${visibleRows.length}:${sortState.key}:${sortState.dir}:${selected.size}:${activeFilter}:${$('searchBox')?.value||''}:${[...selected].slice(0,30).join(',')}`; if(sig===lastRenderSignature) return; lastRenderSignature=sig; const top=start*ROW_HEIGHT,bottom=Math.max(0,(visibleRows.length-end)*ROW_HEIGHT); body.innerHTML=(top?`<tr class="virtual-spacer"><td colspan="13" style="height:${top}px"></td></tr>`:'')+visibleRows.slice(start,end).map(renderRow).join('')+(bottom?`<tr class="virtual-spacer"><td colspan="13" style="height:${bottom}px"></td></tr>`:''); applyColumnVisibility(); }

function scheduleRender(force=false){ if(force){lastRenderSignature='';renderVersion++;} if(renderPending)return; renderPending=true; requestAnimationFrame(()=>{renderPending=false;renderTable();}); }

function patchRows(msg){ if(msg.summary) torrentSummary=msg.summary; (msg.removed||[]).forEach(h=>{torrents.delete(h);selected.delete(h);activeOperations.delete(h);if(selectedHash===h)selectedHash=null;}); (msg.added||[]).forEach(t=>torrents.set(t.hash,t)); (msg.updated||[]).forEach(p=>torrents.set(p.hash,{...(torrents.get(p.hash)||{}),...p})); scheduleRender(true); if(selectedHash&&torrents.has(selectedHash)&&activeTab()==='general') renderGeneral(); }
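renderTable virtualizes the torrent list: it materializes only the rows in view plus an overscan margin, and pads the rest with fixed-height spacer rows so the scrollbar stays honest. The window arithmetic in isolation (visibleWindow is a hypothetical helper; the row height and overscan values below are illustrative):

```javascript
// Given scroll position, viewport height, and a fixed row height, compute the
// slice of rows to materialize and the spacer heights above/below the slice.
function visibleWindow(scrollTop, viewportHeight, rowHeight, totalRows, overscan) {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const count = Math.ceil(viewportHeight / rowHeight) + overscan * 2;
  const end = Math.min(totalRows, start + count);
  return {
    start,
    end,
    topPad: start * rowHeight,                            // spacer above the slice
    bottomPad: Math.max(0, (totalRows - end) * rowHeight) // spacer below the slice
  };
}
```

With a fixed row height the math is O(1) per scroll event, which is why renderTable can afford to recompute it inside a requestAnimationFrame-throttled render.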
@@ -211,9 +383,11 @@
function setSelectionRange(hash, keepExisting=false){ const current=visibleRows.findIndex(t=>t.hash===hash); const last=visibleRows.findIndex(t=>t.hash===lastSelectedHash); if(current<0 || last<0){ selected.add(hash); lastSelectedHash=hash; return; } if(!keepExisting) selected.clear(); const a=Math.min(current,last), b=Math.max(current,last); visibleRows.slice(a,b+1).forEach(t=>selected.add(t.hash)); selectedHash=hash; }

async function post(url,data,method='POST'){ const res=await fetch(url,{method,headers:{'Content-Type':'application/json'},body:JSON.stringify(data||{})}); const json=await res.json(); if(!json.ok) throw new Error(json.error||'Operation failed'); return json; }
async function runAction(action, extra={}){ const hashes=selectedHashes(); if(!hashes.length) return toast('No torrents selected','warning'); let payload={hashes,...extra}; if(action==='move'){ openPathPicker('move'); return; } setBusy(true); try{ const j=await post(`/api/torrents/${action}`,payload); markQueuedJobs(j, hashes, action); if(action==='recheck'){ hashes.forEach(h=>{ const t=torrents.get(h); if(t) torrents.set(h,{...t,status:'Checking',hashing:1,message:'Force recheck queued'}); }); scheduleRender(true); } const parts=Number(j.bulk_parts||1); toast(parts>1?`${action} queued in ${parts} bulk parts`:`${action} queued`,'success'); if(action==='set_label') await loadLabels(); }catch(e){toast(e.message,'danger');} finally{setBusy(false);} }
function flag(iso){ const code=String(iso||'').toLowerCase(); return code?`<span class="fi fi-${esc(code)}"></span> <span>${esc(code.toUpperCase())}</span>`:'-'; }
function table(headers,rows,extraClass=''){ const cls=extraClass?` ${extraClass}`:''; return `<table class="table table-sm detail-table${cls}"><thead><tr>${headers.map(h=>`<th>${esc(h)}</th>`).join('')}</tr></thead><tbody>${rows.map(r=>`<tr>${r.map(c=>`<td>${c}</td>`).join('')}</tr>`).join('')}</tbody></table>`; }

function responsiveTable(headers,rows,extraClass=''){ return `<div class="responsive-table-wrap">${table(headers,rows,extraClass)}</div>`; }

function downloadJson(filename, data){ const blob=new Blob([JSON.stringify(data,null,2)],{type:'application/json'}); const url=URL.createObjectURL(blob); const a=document.createElement('a'); a.href=url; a.download=filename; document.body.appendChild(a); a.click(); a.remove(); setTimeout(()=>URL.revokeObjectURL(url),500); }
function renderGeneral(){ const t=torrents.get(selectedHash); const labels=t?labelNames(t.label).map(l=>`<span class="chip label-mini"><i class="fa-solid fa-tag"></i> ${esc(l)}</span>`).join(' '):''; $('detailPane').innerHTML=t?`<div class="general-grid"><div><b>Name</b><span>${esc(t.name)}</span></div><div><b>Hash</b><span>${esc(t.hash)}</span></div><div><b>Path</b><span>${esc(t.path)}</span></div><div><b>Size</b><span>${esc(t.size_h)}</span></div><div><b>Progress</b><span>${esc(t.progress)}%</span></div><div><b>Ratio</b><span>${esc(t.ratio)}</span></div><div><b>Downloaded</b><span>${esc(t.down_total_h)}</span></div><div><b>Uploaded</b><span>${esc(t.up_total_h)}</span></div><div><b>Labels</b><span>${labels||'<span class="text-muted">-</span>'}</span></div><div><b>Ratio group</b><span>${esc(t.ratio_group||'')}</span></div></div>`:'Select a torrent.'; }

const FILE_PRIORITY_LABELS = {0: "Skip", 1: "Normal", 2: "High"};

function priorityClass(priority){ priority=Number(priority||0); return priority===2?"text-bg-success":priority===0?"text-bg-secondary":"text-bg-primary"; }
@@ -243,30 +417,23 @@
 return badges.join(' ') || '<span class="text-muted">-</span>';
 }
 function renderPeers(peers){
-const rows=(peers||[]).map(p=>[flag(p.country_iso),esc(p.ip),esc(p.country),esc(p.city),esc(p.client),progressBar(p.completed,'peer-progress'),esc(p.down_rate_h),esc(p.up_rate_h),esc(p.port),peerBadges(p),`<div class="peer-actions"><button class="btn btn-xs btn-outline-warning peer-action" data-peer-index="${esc(p.index)}" data-peer-action="disconnect" title="Kick peer"><i class="fa-solid fa-user-slash"></i><span>Kick</span></button><button class="btn btn-xs btn-outline-secondary peer-action" data-peer-index="${esc(p.index)}" data-peer-action="snub" title="Snub peer"><i class="fa-solid fa-volume-xmark"></i><span>Snub</span></button><button class="btn btn-xs btn-outline-primary peer-action" data-peer-index="${esc(p.index)}" data-peer-action="unsnub" title="Unsnub peer"><i class="fa-solid fa-volume-high"></i><span>Unsnub</span></button><button class="btn btn-xs btn-outline-danger peer-action" data-peer-index="${esc(p.index)}" data-peer-action="ban" title="Ban peer if supported"><i class="fa-solid fa-ban"></i><span>Ban</span></button></div>`]);
+const rows=(peers||[]).map(p=>[flag(p.country_iso),`<span class="peer-ip">${esc(p.ip)}<a class="peer-ip-link" href="https://ipinfo.io/${encodeURIComponent(p.ip||'')}" target="_blank" rel="noopener noreferrer" title="Open IP info"><i class="fa-solid fa-link"></i></a></span>`,esc(p.country),esc(p.city),esc(p.client),progressBar(p.completed,'peer-progress'),esc(p.down_rate_h),esc(p.up_rate_h),esc(p.port),peerBadges(p)]);
-$('detailPane').innerHTML=table(['Flag','IP','Country','City','Client','%','DL','UL','Port','Flags','Actions'],rows);
+$('detailPane').innerHTML=table(['Flag','IP','Country','City','Client','%','DL','UL','Port','Flags'],rows);
-}
-async function peerAction(index, action){
-if(!selectedHash) return;
-setBusy(true);
-try{
-const j=await post(`/api/torrents/${encodeURIComponent(selectedHash)}/peers/action`,{peer_index:Number(index),action});
-toast(j.message || `Peer ${action} done`,'success');
-await loadDetails('peers');
-}catch(e){ toast(e.message,'danger'); }
-finally{ setBusy(false); }
 }
 function fmtTs(value){ const n=Number(value||0); if(!n) return '-'; try{return new Date(n*1000).toLocaleString();}catch(e){return String(n);} }
 function trackerSeedsPeers(t){ const hasScrape = t.seeds !== null || t.peers !== null; return hasScrape ? `${t.seeds ?? "-"} / ${t.peers ?? "-"}` : "-"; }
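The scrape-fallback helper kept as context above is pure, so its null handling is easy to check in isolation. This sketch copies `trackerSeedsPeers` from the diff: a tracker that has never answered a scrape (both counts `null`) renders a single dash, while a partial scrape keeps the `x / y` shape with a dash for the missing side.

```javascript
// Copied from the diff for illustration: seeds/peers may be null when the
// tracker has no scrape data; only when BOTH are null is the cell one dash.
function trackerSeedsPeers(t){
  const hasScrape = t.seeds !== null || t.peers !== null;
  return hasScrape ? `${t.seeds ?? "-"} / ${t.peers ?? "-"}` : "-";
}

console.log(trackerSeedsPeers({seeds: 12, peers: 3}));    // "12 / 3"
console.log(trackerSeedsPeers({seeds: 12, peers: null})); // "12 / -"
console.log(trackerSeedsPeers({seeds: null, peers: null})); // "-"
```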
 function renderTrackers(trackers){
+// Note: Tracker URL editing is intentionally replaced by safe deletion; adding trackers remains unchanged.
 const pane=$('detailPane');
-const rows=(trackers||[]).map(t=>{
+const list=trackers||[];
+const canDelete=list.length>1;
+const rows=list.map(t=>{
 const idx=esc(t.index), url=esc(t.url);
-return [`<span class="text-muted">#${idx}</span>`, `<div class="tracker-url-view" data-tracker-index="${idx}"><span class="tracker-url-text">${url || '<span class="text-muted">-</span>'}</span></div><div class="tracker-url-edit d-none" data-tracker-index="${idx}"><input class="form-control form-control-sm tracker-url" data-tracker-index="${idx}" value="${url}"></div>`, t.enabled?'yes':'no', esc(trackerSeedsPeers(t)), esc(t.downloaded ?? '-'), fmtTs(t.last_announce), `<div class="tracker-actions"><button class="btn btn-xs btn-outline-secondary tracker-edit-start" data-index="${idx}"><i class="fa-solid fa-pen"></i> Edit</button><button class="btn btn-xs btn-outline-primary tracker-edit-save d-none" data-index="${idx}"><i class="fa-solid fa-floppy-disk"></i> Save</button><button class="btn btn-xs btn-outline-secondary tracker-edit-cancel d-none" data-index="${idx}"><i class="fa-solid fa-xmark"></i> Cancel</button></div>`];
+const deleteDisabled=canDelete ? '' : ' disabled title="At least one tracker must remain"';
+return [`<span class="text-muted">#${idx}</span>`, `<span class="tracker-url-text">${url || '<span class="text-muted">-</span>'}</span>`, t.enabled?'yes':'no', esc(trackerSeedsPeers(t)), esc(t.downloaded ?? '-'), fmtTs(t.last_announce), `<div class="tracker-actions"><button class="btn btn-xs btn-outline-danger tracker-delete" data-index="${idx}"${deleteDisabled}><i class="fa-solid fa-trash"></i> Delete</button></div>`];
 });
-pane.innerHTML=`<div class="tracker-toolbar"><div class="input-group input-group-sm"><input id="trackerAddUrl" class="form-control" placeholder="https://tracker.example/announce"><button id="trackerAddBtn" class="btn btn-outline-primary"><i class="fa-solid fa-plus"></i> Add tracker</button></div><button id="trackerReannounceBtn" class="btn btn-sm btn-outline-primary"><i class="fa-solid fa-bullhorn"></i> Reannounce</button></div>${table(['#','URL','On','Seeds / Peers','Done','Last announce','Actions'], rows.length?rows:[[ '<span class="text-muted">-</span>','<span class="text-muted">No trackers.</span>','','','','','' ]])}`;
+pane.innerHTML=`<div class="tracker-toolbar"><div class="input-group input-group-sm"><input id="trackerAddUrl" class="form-control tracker-add-input" placeholder="https://tracker.example/announce"><button id="trackerAddBtn" class="btn btn-outline-primary"><i class="fa-solid fa-plus"></i> Add tracker</button></div><button id="trackerReannounceBtn" class="btn btn-sm btn-outline-primary"><i class="fa-solid fa-bullhorn"></i> Reannounce</button></div>${table(['#','URL','On','Seeds / Peers','Done','Last announce','Actions'], rows.length?rows:[[ '<span class="text-muted">-</span>','<span class="text-muted">No trackers.</span>','','','','','' ]])}`;
 }
-function setTrackerEdit(index,on){ const sel=String(index); document.querySelector(`.tracker-url-view[data-tracker-index="${CSS.escape(sel)}"]`)?.classList.toggle('d-none', on); document.querySelector(`.tracker-url-edit[data-tracker-index="${CSS.escape(sel)}"]`)?.classList.toggle('d-none', !on); document.querySelector(`.tracker-edit-start[data-index="${CSS.escape(sel)}"]`)?.classList.toggle('d-none', on); document.querySelector(`.tracker-edit-save[data-index="${CSS.escape(sel)}"]`)?.classList.toggle('d-none', !on); document.querySelector(`.tracker-edit-cancel[data-index="${CSS.escape(sel)}"]`)?.classList.toggle('d-none', !on); }
 async function trackerAction(action,payload={}){
 if(!selectedHash) return toast('No torrent selected','warning');
 setBusy(true);
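The last-tracker guard introduced above (`canDelete=list.length>1`) is a one-liner, but its intent is worth pinning down standalone. A minimal sketch, assuming only the list length matters, as in the diff:

```javascript
// Mirrors the diff's guard: deletion is allowed only while more than one
// tracker remains, so a torrent can never end up tracker-less from the UI.
function trackerDeleteAttrs(trackerCount){
  const canDelete = trackerCount > 1;
  return canDelete ? '' : ' disabled title="At least one tracker must remain"';
}

console.log(JSON.stringify(trackerDeleteAttrs(3))); // "" -> button enabled
console.log(trackerDeleteAttrs(1)); // disabled, with an explanatory tooltip
```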
@@ -305,16 +472,59 @@
 async function applyDefaultDownloadPath(force=false){ const p=await getDefaultDownloadPath(); ['addPath','rssPath','autoEffectPath'].forEach(id=>{ const el=$(id); if(el && (force || !el.value)) el.value=p; }); return p; }
 async function openPathPicker(target){ pathTarget=target; const def=await getDefaultDownloadPath(); const initial=def || ($(target)?.value||'/'); $('moveOptions')?.classList.toggle('d-none', target!=='move'); if($('moveDataPhysical')) $('moveDataPhysical').checked=true; if($('moveRecheck')) $('moveRecheck').checked=true; new bootstrap.Modal($('pathModal')).show(); browsePath(initial); }
 async function browsePath(path){ $('pathList').innerHTML='<span class="spinner-border spinner-border-sm"></span> Loading...'; try{ const res=await fetch(`/api/path/browse?path=${encodeURIComponent(path||'/')}`); const j=await res.json(); if(!j.ok) throw new Error(j.error); $('pathCurrent').value=j.path; lastPathParent=j.parent; $('pathList').innerHTML=j.dirs.map(d=>`<div class="path-row" data-path="${esc(d.path)}"><i class="fa-solid fa-folder"></i><span>${esc(d.name)}</span></div>`).join('')||'<div class="p-3 text-muted">No directories.</div>'; }catch(e){$('pathList').innerHTML=`<div class="text-danger p-2">${esc(e.message)}</div>`;} }
-$('pathList')?.addEventListener('click',e=>{const r=e.target.closest('.path-row'); if(r) browsePath(r.dataset.path);}); $('pathGoBtn')?.addEventListener('click',()=>browsePath($('pathCurrent').value)); $('pathUpBtn')?.addEventListener('click',()=>browsePath(lastPathParent)); $('pathReloadBtn')?.addEventListener('click',()=>browsePath($('pathCurrent').value)); $('pathSelectBtn')?.addEventListener('click',async()=>{const p=$('pathCurrent').value; if(pathTarget==='move'){ const hashes=selectedHashes(); const j=await post('/api/torrents/move',{hashes,path:p,move_data:!!($('moveDataPhysical')?.checked),recheck:!!($('moveRecheck')?.checked)}); markTorrentOperation(hashes,'move',j.job_id,'queued'); toast($('moveDataPhysical')?.checked?'physical move queued':'move queued','success'); } else if($(pathTarget)) $(pathTarget).value=p; bootstrap.Modal.getInstance($('pathModal'))?.hide();}); document.querySelectorAll('.browse-path').forEach(b=>b.addEventListener('click',()=>openPathPicker(b.dataset.target)));
+$('pathList')?.addEventListener('click',e=>{const r=e.target.closest('.path-row'); if(r) browsePath(r.dataset.path);}); $('pathGoBtn')?.addEventListener('click',()=>browsePath($('pathCurrent').value)); $('pathUpBtn')?.addEventListener('click',()=>browsePath(lastPathParent)); $('pathReloadBtn')?.addEventListener('click',()=>browsePath($('pathCurrent').value)); $('pathSelectBtn')?.addEventListener('click',async()=>{const p=$('pathCurrent').value; if(pathTarget==='move'){ const hashes=selectedHashes(); const j=await post('/api/torrents/move',{hashes,path:p,move_data:!!($('moveDataPhysical')?.checked),recheck:!!($('moveRecheck')?.checked)}); markQueuedJobs(j,hashes,'move'); const parts=Number(j.bulk_parts||1); toast(parts>1?`move queued in ${parts} bulk parts`:$('moveDataPhysical')?.checked?'physical move queued':'move queued','success'); } else if($(pathTarget)) $(pathTarget).value=p; bootstrap.Modal.getInstance($('pathModal'))?.hide();}); document.querySelectorAll('.browse-path').forEach(b=>b.addEventListener('click',()=>openPathPicker(b.dataset.target)));

 function renderColumnManager(){ const box=$('columnManager'); if(!box) return; box.innerHTML=COLUMN_DEFS.map(([key,label])=>`<label class="column-card form-check form-switch ${hiddenColumns.has(key)?'':'active'}"><input class="form-check-input column-toggle" type="checkbox" data-col-key="${esc(key)}" ${hiddenColumns.has(key)?'':'checked'}><span class="form-check-label"><i class="fa-solid fa-table-columns"></i> ${esc(label)}</span></label>`).join(''); }
 $('saveColumnsBtn')?.addEventListener('click',async()=>{ document.querySelectorAll('.column-toggle').forEach(cb=>cb.checked?hiddenColumns.delete(cb.dataset.colKey):hiddenColumns.add(cb.dataset.colKey)); applyColumnVisibility(); scheduleRender(true); await post('/api/preferences',{table_columns_json:JSON.stringify({hidden:[...hiddenColumns]})}).catch(e=>toast(e.message,'danger')); toast('Columns saved','success'); });
 $('resetColumnsBtn')?.addEventListener('click',async()=>{ hiddenColumns.clear(); renderColumnManager(); applyColumnVisibility(); scheduleRender(true); await post('/api/preferences',{table_columns_json:JSON.stringify({hidden:[]})}).catch(()=>{}); });
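The reworked move handler above now branches its toast text on `j.bulk_parts`. As a hypothetical extraction of just that selection logic (`moveToastText` is not a function in the diff; the real handler reads the `moveDataPhysical` checkbox, modelled here as a plain boolean):

```javascript
// Hypothetical helper mirroring the toast branch in the move handler:
// bulk jobs report how many parts were queued; otherwise the message
// distinguishes a physical data move from a plain path change.
function moveToastText(bulkParts, physical){
  const parts = Number(bulkParts || 1);
  if (parts > 1) return `move queued in ${parts} bulk parts`;
  return physical ? 'physical move queued' : 'move queued';
}

console.log(moveToastText(3, false)); // "move queued in 3 bulk parts"
console.log(moveToastText(1, true));  // "physical move queued"
console.log(moveToastText(undefined, false)); // "move queued"
```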
-async function loadJobs(page=jobsPage){ const box=$('jobsTable'); if(!box)return; jobsPage=Math.max(0,page|0); box.innerHTML='<span class="spinner-border spinner-border-sm"></span> Loading jobs...'; const offset=jobsPage*jobsLimit; const j=await (await fetch(`/api/jobs?limit=${jobsLimit}&offset=${offset}`)).json(); const rows=j.jobs||[]; jobsTotal=Number(j.total||rows.length); const details=r=>{ const count=Number(r.hash_count||0); if(r.is_bulk || count>1) return `<span class="badge text-bg-info">bulk</span><br><span class="text-muted">${esc(count)} torrent(s), details hidden</span>`; const bits=[]; if(count) bits.push(`${esc(count)} torrent`); if(r.summary) bits.push(esc(r.summary)); return bits.join('<br>') || '-'; }; box.innerHTML=table(['Status','Action','Profile','Count','Details','Attempts','Started','Finished','Error','Actions'],rows.map(r=>[`<span class="badge text-bg-${r.status==='done'?'success':r.status==='failed'?'danger':r.status==='running'?'primary':r.status==='cancelled'?'secondary':'warning'}">${esc(r.status)}</span>`,esc(r.action),esc(r.profile_id),esc(r.hash_count||0),details(r),esc(r.attempts||0),dateCell(r.started_at||r.created_at),dateCell(r.finished_at||r.updated_at),compactCell(r.error||'',140),`<button class="btn btn-xs btn-outline-primary job-retry" data-id="${esc(r.id)}"><i class="fa-solid fa-rotate-left"></i> retry</button> <button class="btn btn-xs btn-outline-danger job-cancel" data-id="${esc(r.id)}"><i class="fa-solid fa-ban"></i> cancel</button>`])); renderJobsPager(); }
+function jobActions(r){ const id=esc(r.id); const status=String(r.status||''); const actions=[]; if(status==='failed'||status==='cancelled') actions.push(`<button class="btn btn-xs btn-outline-primary job-retry" data-id="${id}"><i class="fa-solid fa-rotate-left"></i> retry</button>`); if(status==='pending'||status==='running') actions.push(`<button class="btn btn-xs btn-outline-danger job-cancel" data-id="${id}" data-status="${esc(status)}"><i class="fa-solid fa-triangle-exclamation"></i> emergency cancel</button>`); return actions.join(' ') || '<span class="text-muted">-</span>'; }
+function jobStatusBadgeClass(status){
+// Note: Running means active work, so it uses primary instead of danger; danger stays reserved for failed.
+const classes={done:'success',failed:'danger',running:'primary',cancelled:'secondary',pending:'warning'};
+return classes[String(status||'')] || 'warning';
+}
+async function loadJobs(page=jobsPage){
+const box=$('jobsTable');
+// Note: Finished shows only a real finished_at value; running/pending do not receive a date from updated_at.
+if(!box) return;
+jobsPage=Math.max(0,page|0);
+box.innerHTML='<span class="spinner-border spinner-border-sm"></span> Loading jobs...';
+const offset=jobsPage*jobsLimit;
+const j=await (await fetch(`/api/jobs?limit=${jobsLimit}&offset=${offset}`)).json();
+const rows=j.jobs||[];
+jobsTotal=Number(j.total||rows.length);
+const details=r=>{
+const count=Number(r.hash_count||0);
+if(r.is_bulk || count>1) return `<span class="badge text-bg-info">bulk</span><br><span class="text-muted">${esc(count)} torrent(s), details hidden</span>`;
+const bits=[];
+if(count) bits.push(`${esc(count)} torrent`);
+if(r.summary) bits.push(esc(r.summary));
+return bits.join('<br>') || '-';
+};
+box.innerHTML=responsiveTable(
+['Status','Action','Profile','Count','Details','Attempts','Started','Finished','Error','Actions'],
+rows.map(r=>[
+`<span class="badge text-bg-${jobStatusBadgeClass(r.status)}">${esc(r.status)}</span>`,
+esc(r.action),
+esc(r.profile_id),
+esc(r.hash_count||0),
+details(r),
+esc(r.attempts||0),
+humanDateCell(r.started_at||r.created_at),
+humanDateCell(r.finished_at),
+compactCell(r.error||'',140),
+jobActions(r),
+]),
+'jobs-table'
+);
+renderJobsPager();
+}
 function renderJobsPager(){ const p=$('jobsPager'); if(!p)return; const pages=Math.max(1,Math.ceil(jobsTotal/jobsLimit)); p.innerHTML=`<div class="d-flex align-items-center gap-2 flex-wrap"><button class="btn btn-sm btn-outline-secondary" id="jobsPrev" ${jobsPage<=0?'disabled':''}><i class="fa-solid fa-chevron-left"></i> Prev</button><span class="small text-muted">Page ${jobsPage+1} / ${pages} · ${jobsTotal} jobs</span><button class="btn btn-sm btn-outline-secondary" id="jobsNext" ${jobsPage>=pages-1?'disabled':''}>Next <i class="fa-solid fa-chevron-right"></i></button></div>`; $('jobsPrev')?.addEventListener('click',()=>loadJobs(jobsPage-1)); $('jobsNext')?.addEventListener('click',()=>loadJobs(jobsPage+1)); }
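The badge mapping that replaces the inline ternary chain above is a pure function, so it is cheap to pin down with a quick check. This sketch copies `jobStatusBadgeClass` from the diff:

```javascript
// Copied from the diff: one Bootstrap contextual class per job status,
// with 'warning' as the fallback for unknown or missing statuses.
function jobStatusBadgeClass(status){
  const classes={done:'success',failed:'danger',running:'primary',cancelled:'secondary',pending:'warning'};
  return classes[String(status||'')] || 'warning';
}

console.log(jobStatusBadgeClass('done'));    // "success"
console.log(jobStatusBadgeClass('running')); // "primary"
console.log(jobStatusBadgeClass('weird'));   // "warning" (fallback)
console.log(jobStatusBadgeClass(undefined)); // "warning" (fallback)
```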
-$('jobsModal')?.addEventListener('show.bs.modal',loadJobs); $('refreshJobsBtn')?.addEventListener('click',loadJobs); $('jobsTable')?.addEventListener('click',async e=>{ const btn=e.target.closest('.job-retry,.job-cancel'); if(!btn)return; const id=btn.dataset.id; if(!id)return; if(btn.classList.contains('job-retry')) await post(`/api/jobs/${id}/retry`,{}).catch(x=>toast(x.message,'danger')); if(btn.classList.contains('job-cancel')) await post(`/api/jobs/${id}/cancel`,{}).catch(x=>toast(x.message,'danger')); loadJobs(); });
-$('clearJobsBtn')?.addEventListener('click',async()=>{ if(!confirm('Clear finished job logs? Pending and running jobs will stay.')) return; try{ const j=await post('/api/jobs/clear',{}); toast(`Cleared ${j.deleted||0} job log(s)`,'success'); jobsPage=0; loadJobs(0); }catch(e){ toast(e.message,'danger'); } });
+// Note: Job log buttons are separated so normal cleanup cannot accidentally trigger emergency cleanup.
+$('jobsModal')?.addEventListener('show.bs.modal',loadJobs); $('refreshJobsBtn')?.addEventListener('click',loadJobs); $('jobsTable')?.addEventListener('click',async e=>{ const btn=e.target.closest('.job-retry,.job-cancel'); if(!btn)return; const id=btn.dataset.id; if(!id)return; if(btn.classList.contains('job-retry')) await post(`/api/jobs/${id}/retry`,{}).catch(x=>toast(x.message,'danger')); if(btn.classList.contains('job-cancel')){ const st=btn.dataset.status||''; if((st==='pending'||st==='running') && !confirm('Emergency cancel this unfinished job?')) return; await post(`/api/jobs/${id}/cancel`,{}).catch(x=>toast(x.message,'danger')); } loadJobs(); });
+$('clearJobsBtn')?.addEventListener('click',async()=>{ if(!confirm('Clear finished job logs? Pending and running jobs will stay.')) return; try{ const j=await post('/api/jobs/clear',{}); toast(`Cleared ${j.deleted||0} finished job log(s)`,'success'); jobsPage=0; loadJobs(0); }catch(e){ toast(e.message,'danger'); } });
+$('emergencyClearJobsBtn')?.addEventListener('click',async()=>{ if(!confirm('Emergency clean ALL job logs, including unfinished jobs?')) return; try{ const j=await post('/api/jobs/clear?force=1',{}); toast(`Emergency cleared ${j.deleted||0} job log(s)`,'success'); jobsPage=0; loadJobs(0); }catch(e){ toast(e.message,'danger'); } });
 async function loadLabels(){ const j=await (await fetch('/api/labels')).json(); const labels=j.labels||[]; knownLabels=labels; renderLabelFilters(); renderLabelChooser(); if($('labelsManager')) $('labelsManager').innerHTML=labels.length?labels.map(l=>`<div class="label-manager-row"><span class="chip"><i class="fa-solid fa-tag"></i> ${esc(l.name)}</span><button class="btn btn-xs btn-outline-danger delete-label" data-id="${esc(l.id)}" title="Delete label"><i class="fa-solid fa-trash"></i></button></div>`).join(''):'<span class="text-muted">No labels.</span>'; }
 function renderLabelChooser(){ if($('selectedLabelList')) $('selectedLabelList').innerHTML=[...modalLabels].map(l=>`<button class="chip label-selected" data-label="${esc(l)}" title="Remove"><i class="fa-solid fa-tag"></i> ${esc(l)} <i class="fa-solid fa-xmark ms-1"></i></button>`).join('') || '<span class="text-muted small">No labels selected.</span>'; if($('labelList')) $('labelList').innerHTML=knownLabels.map(l=>`<button class="chip label-chip ${modalLabels.has(l.name)?'active':''}" data-label="${esc(l.name)}"><i class="fa-solid fa-tag"></i> ${esc(l.name)}</button>`).join('') || '<span class="text-muted small">No saved labels.</span>'; }
@@ -330,9 +540,97 @@
 $('ratioAssignModal')?.addEventListener('show.bs.modal',loadRatios); $('applyRatioBtn')?.addEventListener('click',async()=>{ await runAction('set_ratio_group',{ratio_group:$('ratioAssignSelect').value}); bootstrap.Modal.getInstance($('ratioAssignModal'))?.hide(); }); $('ratioSaveBtn')?.addEventListener('click',async()=>{ await post('/api/ratio-groups',{name:$('ratioName').value,min_ratio:$('ratioMin').value,max_ratio:$('ratioMax').value,seed_time_minutes:$('ratioSeed').value,action:$('ratioAction').value}); loadRatios(); });
 async function loadRss(){ const j=await (await fetch('/api/rss')).json(); const feeds=j.feeds||[], rules=j.rules||[]; if($('rssManager')) $('rssManager').innerHTML=`<h6>Feeds</h6>${table(['Name','URL','Last error'],feeds.map(f=>[esc(f.name),esc(f.url),esc(f.last_error||'')]))}<h6 class="mt-3">Rules</h6>${table(['Name','Pattern','Path','Label'],rules.map(r=>[esc(r.name),esc(r.pattern),esc(r.save_path),esc(r.label)]))}`; }
async function loadSmartQueue(){ if($('smartManager')) $('smartManager').innerHTML=loadingMarkup('Loading Smart Queue...'); if($('smartHistory')) $('smartHistory').innerHTML=loadingMarkup('Loading Smart Queue history...'); const historyLimit=smartHistoryExpanded?100:10; const j=await (await fetch(`/api/smart-queue?history_limit=${historyLimit}`)).json(); if(!j.ok) return; const st=j.settings||{}, ex=j.exclusions||[], hist=j.history||[]; const totalHistory=Number(j.history_total ?? hist.length); if($('smartEnabled')) $('smartEnabled').checked=!!st.enabled; if($('smartMaxActive')) $('smartMaxActive').value=st.max_active_downloads||5; if($('smartStalled')) $('smartStalled').value=st.stalled_seconds||300; if($('smartMinSpeed')) $('smartMinSpeed').value=Math.round((st.min_speed_bytes||0)/1024); if($('smartMinSeeds')) $('smartMinSeeds').value=st.min_seeds||1; if($('smartManager')) $('smartManager').innerHTML=ex.length?table(['Hash','Reason','Created','Action'],ex.map(x=>[esc(x.torrent_hash),esc(x.reason||''),dateCell(x.created_at),`<button class="btn btn-xs btn-outline-danger smart-unexclude" data-hash="${esc(x.torrent_hash)}"><i class="fa-solid fa-xmark"></i> remove exception</button>`])):'<div class="empty-mini"><i class="fa-solid fa-circle-info"></i> No Smart Queue exceptions. Select torrents and use <b>Exclude selected</b> to keep them outside the queue.</div>'; if($('smartHistory')) { const body=hist.length?table(['Time','Event','Checked','Paused','Resumed'],hist.map(h=>[dateCell(h.created_at),esc(h.event),esc(h.checked_count||0),esc(h.paused_count||0),esc(h.resumed_count||0)])):'<div class="empty-mini">No Smart Queue operations yet.</div>'; const canToggle=totalHistory>10; const toggle=canToggle?`<button id="smartHistoryToggle" class="btn btn-xs btn-outline-secondary mt-2">${smartHistoryExpanded?'Show last 10':'Show more'} (${esc(totalHistory)})</button>`:''; $('smartHistory').innerHTML=`${body}${toggle}`; } }
|
function smartHistoryDetails(row){ try{ return typeof row.details_json==='string'?JSON.parse(row.details_json||'{}'):(row.details_json||{}); }catch(e){ return {}; } }
|
||||||
|
function smartQueueToastMessage(r){ const noEffect=r.start_no_effect?.length||0; const requested=r.start_requested?.length||0; const stopFailed=r.stop_failed?.length||0; const limit=r.max_active_downloads||r.settings?.max_active_downloads||''; const activeBefore=r.active_before; const activeAfter=r.active_after_stop ?? r.active_after_expected; const activeTail=activeBefore!==undefined?`, active ${esc(activeBefore)}->${esc(activeAfter ?? '?')}${limit?`/${esc(limit)}`:''}`:''; const cap=r.rtorrent_cap?.updated?`, cap ${r.rtorrent_cap.current}->${r.rtorrent_cap.new}`:''; const waiting=r.waiting_labeled||0; const stalled=r.stalled_labeled?.length||0; const ignoredSeedPeer=(r.ignore_seed_peer||r.settings?.ignore_seed_peer)?Number(r.ignored_seed_peer_count||0):0; const ignoredSpeed=(r.ignore_speed||r.settings?.ignore_speed)?Number(r.ignored_speed_count||0):0; const tail=noEffect?`, no effect ${noEffect}`:requested?`, requested ${requested}`:''; const waitTail=waiting?`, waiting labeled ${waiting}`:''; const stalledTail=stalled?`, stalled ${stalled}`:''; const ignoredSeedTail=(r.ignore_seed_peer||r.settings?.ignore_seed_peer)?`, ignored missing seeds/peers ${ignoredSeedPeer}`:''; const ignoredSpeedTail=(r.ignore_speed||r.settings?.ignore_speed)?`, ignored speed ${ignoredSpeed}`:''; const failTail=stopFailed?`, stop failed ${stopFailed}`:''; return `Smart Queue: stopped ${r.stopped?.length||r.paused?.length||0}, started ${r.started?.length||r.resumed?.length||0}${activeTail}${tail}${waitTail}${stalledTail}${ignoredSeedTail}${ignoredSpeedTail}${failTail}${cap}`; }

function buildSmartQueueNerdStats(hist=[], totalHistory=0){
  // Note: Small Smart Queue telemetry for automation nerds; it reads history only and does not affect queue behavior.
  const stats=hist.reduce((acc,h)=>{
    const details=smartHistoryDetails(h);
    const stopped=Number(h.paused_count||0);
    const started=Number(h.resumed_count||0);
    const checked=Number(h.checked_count||0);
    const over=Number(details.over_limit||0);
    const stopFailed=Array.isArray(details.stop_failed)?details.stop_failed.length:0;
    acc.checked += checked;
    acc.stopped += stopped;
    acc.started += started;
    acc.overLimit += over;
    acc.stopFailed += stopFailed;
    if(over>0) acc.overEvents += 1;
    return acc;
  },{checked:0,stopped:0,started:0,overLimit:0,overEvents:0,stopFailed:0});
  const latest=hist[0]||null;
  return {...stats,total:Number(totalHistory||hist.length||0),sample:hist.length,latestEvent:latest?.event||'-',latestAt:latest?.created_at||''};
}
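The reduction above can be exercised standalone. A minimal sketch, assuming history rows carry their extra details as a `details_json` string (the real `smartHistoryDetails` helper is not shown in this diff, so it is stubbed here):

```javascript
// Assumption: details ride along as a JSON string; the real helper may differ.
const smartHistoryDetailsStub = (h) => JSON.parse(h.details_json || '{}');

// Same accumulation shape as buildSmartQueueNerdStats, trimmed to the core counters.
function nerdStats(hist = [], totalHistory = 0) {
  const stats = hist.reduce((acc, h) => {
    const details = smartHistoryDetailsStub(h);
    acc.checked += Number(h.checked_count || 0);
    acc.stopped += Number(h.paused_count || 0);
    acc.started += Number(h.resumed_count || 0);
    const over = Number(details.over_limit || 0);
    acc.overLimit += over;
    if (over > 0) acc.overEvents += 1;
    return acc;
  }, { checked: 0, stopped: 0, started: 0, overLimit: 0, overEvents: 0 });
  return { ...stats, total: Number(totalHistory || hist.length || 0), sample: hist.length };
}

const sample = [
  { checked_count: 12, paused_count: 2, resumed_count: 1, details_json: '{"over_limit":2}' },
  { checked_count: 10, paused_count: 0, resumed_count: 3, details_json: '{}' },
];
console.log(nerdStats(sample, 40));
// → { checked: 22, stopped: 2, started: 4, overLimit: 2, overEvents: 1, total: 40, sample: 2 }
```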

function renderSmartQueueNerdStats(stats){
  // Note: Compact cards keep the extra diagnostics readable above Automation history without changing the history table.
  if(!stats) return '<div class="automation-smart-stats empty-mini">No Smart Queue stats yet.</div>';
  const cards=[
    ['Runs',stats.total,`${stats.sample} loaded`],
    ['Checked',stats.checked,'torrent scans'],
    ['Stopped',stats.stopped,'queue trims'],
    ['Started',stats.started,'queue fills'],
    ['Over limit',stats.overEvents,`${stats.overLimit} total over`],
    ['Stop failed',stats.stopFailed,'rTorrent rejects'],
    ['Latest',stats.latestEvent,stats.latestAt?dateCell(stats.latestAt):'no timestamp'],
  ];
  return `<div class="automation-smart-stats" aria-label="Smart Queue nerd stats">${cards.map(([label,value,hint])=>`<div class="automation-smart-stat"><span>${esc(label)}</span><b>${esc(value)}</b><small>${hint}</small></div>`).join('')}</div>`;
}

async function loadSmartQueue(){
  if($('smartManager')) $('smartManager').innerHTML=loadingMarkup('Loading Smart Queue...');
  if($('smartHistory')) $('smartHistory').innerHTML=loadingMarkup('Loading Smart Queue history...');
  const historyLimit=smartHistoryExpanded?100:10;
  const j=await (await fetch(`/api/smart-queue?history_limit=${historyLimit}`)).json();
  if(!j.ok) return;
  const st=j.settings||{}, ex=j.exclusions||[], hist=j.history||[];
  const totalHistory=Number(j.history_total ?? hist.length);
  if($('smartEnabled')) $('smartEnabled').checked=!!st.enabled;
  if($('smartMaxActive')) $('smartMaxActive').value=st.max_active_downloads||5;
  if($('smartStalled')) $('smartStalled').value=st.stalled_seconds||300;
  if($('smartMinSpeed')) $('smartMinSpeed').value=Math.round((st.min_speed_bytes||0)/1024);
  if($('smartMinSeeds')) $('smartMinSeeds').value=st.min_seeds||1;
  if($('smartMinPeers')) $('smartMinPeers').value=st.min_peers||0;
  if($('smartIgnoreSeedPeer')) $('smartIgnoreSeedPeer').checked=!!st.ignore_seed_peer;
  if($('smartIgnoreSpeed')) $('smartIgnoreSpeed').checked=!!st.ignore_speed;
  if($('smartManager')){
    $('smartManager').innerHTML=ex.length
      ? responsiveTable(['Hash','Reason','Created','Action'],ex.map(x=>[esc(x.torrent_hash),esc(x.reason||''),dateCell(x.created_at),`<button class="btn btn-xs btn-outline-danger smart-unexclude" data-hash="${esc(x.torrent_hash)}"><i class="fa-solid fa-xmark"></i> remove exception</button>`]),'smart-exclusions-table')
      : '<div class="empty-mini"><i class="fa-solid fa-circle-info"></i> No Smart Queue exceptions. Select torrents and use <b>Exclude selected</b> to keep them outside the queue.</div>';
  }
  if($('smartHistory')){
    const body=hist.length
      ? responsiveTable(['Time','Event','Checked','Active','Limit','Over','Stopped','Started','Stop failed'],hist.map(h=>{ const d=smartHistoryDetails(h); return [dateCell(h.created_at),esc(h.event),esc(h.checked_count||0),esc(d.active_before??'-'),esc(d.max_active_downloads??'-'),esc(d.over_limit??0),esc(h.paused_count||0),esc(h.resumed_count||0),esc((d.stop_failed||[]).length||0)]; }),'smart-history-table')
      : '<div class="empty-mini">No Smart Queue operations yet.</div>';
    const canToggle=totalHistory>10;
    const toggle=canToggle?`<button id="smartHistoryToggle" class="btn btn-xs btn-outline-secondary mt-2">${smartHistoryExpanded?'Show last 10':'Show more'} (${esc(totalHistory)})</button>`:'';
    $('smartHistory').innerHTML=`${body}${toggle}`;
  }
}

async function setSmartException(hashes, excluded, reason='manual'){ const list=[...new Set(hashes||[])].filter(Boolean); if(!list.length) return toast('No torrents selected','warning'); setBusy(true); try{ for(const h of list) await post('/api/smart-queue/exclusion',{hash:h,excluded,reason}); toast(excluded?'Smart Queue exception added':'Smart Queue exception removed','success'); await loadSmartQueue(); }catch(e){toast(e.message,'danger');} finally{setBusy(false);} }
async function saveSmartQueue(){ await post('/api/smart-queue',{enabled:$('smartEnabled')?.checked,max_active_downloads:$('smartMaxActive')?.value,stalled_seconds:$('smartStalled')?.value,min_speed_bytes:Math.round(Number($('smartMinSpeed')?.value||0)*1024),min_seeds:$('smartMinSeeds')?.value,min_peers:$('smartMinPeers')?.value,ignore_seed_peer:$('smartIgnoreSeedPeer')?.checked,ignore_speed:$('smartIgnoreSpeed')?.checked}); toast('Smart Queue saved','success'); await loadSmartQueue(); }

async function loadAuthUsers(){
  if(!window.PYTORRENT.authEnabled || !$('authUsersManager')) return;
  const [usersRes, profilesRes]=await Promise.all([fetch('/api/auth/users'), fetch('/api/profiles')]);
  const usersJson=await usersRes.json();
  const profilesJson=await profilesRes.json();
  const profiles=profilesJson.profiles||[];
  if($('authProfile')) $('authProfile').innerHTML=`<option value="0">All profiles</option>`+profiles.map(p=>`<option value="${esc(p.id)}">${esc(p.name)}</option>`).join('');
  const rows=(usersJson.users||[]).map(u=>{
    const perms=(u.permissions||[]).map(p=>`${p.profile_id?('profile '+p.profile_id):'all'}: ${p.access_level==='full'?'Full':'R/O'}`).join(', ') || (u.role==='admin'?'all: Full':'none');
    return [esc(u.username),esc(u.role),u.is_active?'yes':'no',esc(perms),`<button class="btn btn-xs btn-outline-secondary auth-edit" data-user='${esc(JSON.stringify(u))}'><i class="fa-solid fa-pen"></i></button> <button class="btn btn-xs btn-outline-danger auth-delete" data-id="${esc(u.id)}"><i class="fa-solid fa-trash"></i></button>`];
  });
  $('authUsersManager').innerHTML=rows.length?table(['User','Role','Active','Profile rights','Actions'],rows):'<div class="empty-mini">No users.</div>';
}
function resetAuthUserForm(){ ['authUserId','authUsername','authPassword'].forEach(id=>{ if($(id)) $(id).value=''; }); if($('authRole')) $('authRole').value='user'; if($('authProfile')) $('authProfile').value='0'; if($('authAccess')) $('authAccess').value='ro'; if($('authActive')) $('authActive').checked=true; $('authUserCancelBtn')?.classList.add('d-none'); }
function editAuthUser(user){ if(!user) return; if($('authUserId')) $('authUserId').value=user.id||''; if($('authUsername')) $('authUsername').value=user.username||''; if($('authPassword')) $('authPassword').value=''; if($('authRole')) $('authRole').value=user.role||'user'; if($('authActive')) $('authActive').checked=!!user.is_active; const perm=(user.permissions||[])[0]||{profile_id:0,access_level:'ro'}; if($('authProfile')) $('authProfile').value=String(perm.profile_id||0); if($('authAccess')) $('authAccess').value=perm.access_level||'ro'; $('authUserCancelBtn')?.classList.remove('d-none'); }
async function saveAuthUser(){
  const id=$('authUserId')?.value||'';
  const role=$('authRole')?.value||'user';
  const payload={username:$('authUsername')?.value||'',password:$('authPassword')?.value||'',role,is_active:!!$('authActive')?.checked,permissions:role==='admin'?[]:[{profile_id:Number($('authProfile')?.value||0),access_level:$('authAccess')?.value||'ro'}]};
  try{ await post(id?`/api/auth/users/${id}`:'/api/auth/users',payload,id?'PUT':'POST'); toast('User saved','success'); resetAuthUserForm(); await loadAuthUsers(); }catch(e){ toast(e.message,'danger'); }
}

function normalizeRtConfigValue(value, type='text'){
  const raw=String(value ?? '').trim();
  if(type==='bool') return ['1','true','yes','on'].includes(raw.toLowerCase()) ? '1' : '0';
@@ -441,21 +739,194 @@
}
async function generateRtConfig(){ const values=collectRtConfigChanges(); try{ const res=await fetch('/api/rtorrent-config/generate',{method:'POST',headers:{'Content-Type':'application/json'},body:JSON.stringify({values})}); const j=await res.json(); if(!j.ok) throw new Error(j.error||'Generate failed'); if($('rtConfigOutput')) $('rtConfigOutput').value=j.config_text||''; toast('Config generated','success'); }catch(e){ toast(e.message,'danger'); } }

function bootstrapThemeUrl(theme){ /* Note: themes use the URL map generated by the backend, so they also work offline. */ const key=theme||"default"; return window.PYTORRENT?.bootstrapThemeUrls?.[key] || window.PYTORRENT?.bootstrapThemeUrls?.default || ""; }
function applyBootstrapTheme(theme){ bootstrapTheme = theme || "default"; const link=$("bootstrapThemeStylesheet"); if(link) link.href = bootstrapThemeUrl(bootstrapTheme); if($("bootstrapThemeSelect")) $("bootstrapThemeSelect").value = bootstrapTheme; }
function applyFontFamily(font){ fontFamily = font || "default"; document.documentElement.dataset.appFont = fontFamily; if($("fontFamilySelect")) $("fontFamilySelect").value = fontFamily; }
function clampInterfaceScale(value){ value = Number(value || 100); if(!Number.isFinite(value)) value = 100; return Math.max(80, Math.min(140, Math.round(value / 5) * 5)); }
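`clampInterfaceScale` snaps the UI scale to 5% steps and bounds it to 80–140%. A standalone check of the edge cases, using a verbatim copy of the same logic:

```javascript
// Copy of clampInterfaceScale's logic: snap to 5% steps, bound to 80–140.
function clampScale(value) {
  value = Number(value || 100);
  if (!Number.isFinite(value)) value = 100;
  return Math.max(80, Math.min(140, Math.round(value / 5) * 5));
}

console.log(clampScale(103));   // → 105 (snapped to nearest 5)
console.log(clampScale(12));    // → 80  (below range clamps up)
console.log(clampScale('abc')); // → 100 (non-numeric falls back to default)
console.log(clampScale(999));   // → 140 (above range clamps down)
```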
function applyInterfaceScale(value){ interfaceScale = clampInterfaceScale(value); document.documentElement.style.setProperty("--ui-scale", String(interfaceScale / 100)); if($("interfaceScaleRange")) $("interfaceScaleRange").value = interfaceScale; if($("interfaceScaleValue")) $("interfaceScaleValue").textContent = `${interfaceScale}%`; scheduleRender(false); }
async function saveAppearancePreferences(){ applyBootstrapTheme($("bootstrapThemeSelect")?.value || "default"); applyFontFamily($("fontFamilySelect")?.value || "default"); applyInterfaceScale($("interfaceScaleRange")?.value || interfaceScale); try{ await post("/api/preferences",{bootstrap_theme:bootstrapTheme,font_family:fontFamily,interface_scale:interfaceScale}); toast("Appearance preferences saved","success"); }catch(e){ toast(e.message,"danger"); } }
if($("titleSpeedEnabled")) $("titleSpeedEnabled").checked=titleSpeedEnabled;

function setupPeersRefresh(tab=activeTab()){ clearInterval(peersRefreshTimer); peersRefreshTimer=null; if($('peersRefreshSelect')) $('peersRefreshSelect').value=String(peersRefreshSeconds||0); if(tab==='peers' && peersRefreshSeconds>0){ peersRefreshTimer=setInterval(()=>{ if(activeTab()==='peers' && selectedHash) loadDetails('peers'); }, peersRefreshSeconds*1000); } }
function syncMobileMode(){ const auto=window.matchMedia&&window.matchMedia("(max-width: 900px)").matches; document.body.classList.toggle("mobile-mode", auto || document.body.classList.contains("mobile-mode-manual")); scheduleRender(true); }

let automationRulesCache=[];
let automationConditions=[];
let automationEffects=[];

function automationCondition(){
  const type=$('autoConditionType')?.value||'completed';
  const cond={type, negate:!!$('autoCondNegate')?.checked};
  if(type==='no_seeds'){ cond.seeds=Number($('autoCondSeeds')?.value||0); cond.minutes=Number($('autoCondMinutes')?.value||0); }
  if(type==='ratio_gte') cond.ratio=Number($('autoCondRatio')?.value||1);
  // Note: Progress conditions compare the torrent completion percentage stored in the live torrent row.
  if(type==='progress_gte'||type==='progress_lte') cond.progress=Number($('autoCondProgress')?.value||0);
  if(type==='label_missing'||type==='label_has') cond.label=$('autoCondLabel')?.value||'';
  if(type==='status') cond.status=$('autoCondStatus')?.value||'Seeding';
  if(type==='path_contains') cond.text=$('autoCondText')?.value||'';
  return cond;
}

function automationEffect(){
  const type=$('autoEffectType')?.value||'add_label';
  const eff={type};
  if(type==='move'){
    eff.path=$('autoEffectPath')?.value||'';
    eff.move_data=!!$('autoMoveData')?.checked;
    eff.recheck=!!$('autoMoveRecheck')?.checked;
    eff.keep_seeding=!!$('autoMoveKeepSeeding')?.checked;
  }
  if(type==='add_label'||type==='remove_label') eff.label=$('autoEffectLabel')?.value||'';
  if(type==='set_labels') eff.labels=$('autoEffectLabels')?.value||'';
  return eff;
}

function updateAutomationForm(){
  const ct=$('autoConditionType')?.value||'';
  document.querySelectorAll('[data-auto-cond]').forEach(el=>el.classList.toggle('d-none', !el.dataset.autoCond.split(',').includes(ct)));
  const et=$('autoEffectType')?.value||'';
  document.querySelectorAll('[data-auto-effect]').forEach(el=>el.classList.toggle('d-none', !el.dataset.autoEffect.split(',').includes(et)));
}

function conditionText(c={}){
  const base=c.type==='no_seeds'?`seeds <= ${c.seeds||0} for ${c.minutes||0} min`:c.type==='ratio_gte'?`ratio >= ${c.ratio}`:c.type==='progress_gte'?`progress >= ${c.progress||0}%`:c.type==='progress_lte'?`progress <= ${c.progress||0}%`:c.type==='label_missing'?`missing label ${c.label||''}`:c.type==='label_has'?`has label ${c.label||''}`:c.type==='status'?`status = ${c.status||''}`:c.type==='path_contains'?`path contains ${c.text||''}`:'completed';
  return c.negate?`NOT (${base})`:base;
}
function effectText(e={}){
  if(e.type==='move'){
    const flags=[];
    if(e.move_data) flags.push('move data');
    if(e.recheck) flags.push('recheck');
    if(e.keep_seeding) flags.push('keep seeding');
    return `move to ${e.path||'default path'}${flags.length?` (${flags.join(', ')})`:''}`;
  }
  return e.type==='add_label'?`add label ${e.label||''}`:e.type==='remove_label'?`remove label ${e.label||''}`:e.type==='set_labels'?`set labels ${e.labels||''}`:e.type;
}
function ruleSummary(r){
  const cs=(r.conditions||[]).map(conditionText).join(' + ')||'no conditions';
  const es=(r.effects||[]).map(effectText).join(' → ')||'no actions';
  return `${cs} → ${es}`;
}
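The summary helpers above join condition chips with ` + ` and effect chips with ` → `. A self-contained sketch over a hypothetical rule, using simplified copies that cover only two condition types and two effect types:

```javascript
// Simplified standalone copies of conditionText/effectText (subset of the real branches).
function conditionText(c = {}) {
  const base = c.type === 'ratio_gte' ? `ratio >= ${c.ratio}`
    : c.type === 'label_has' ? `has label ${c.label || ''}`
    : 'completed';
  return c.negate ? `NOT (${base})` : base;
}
function effectText(e = {}) {
  if (e.type === 'move') return `move to ${e.path || 'default path'}`;
  return e.type === 'add_label' ? `add label ${e.label || ''}` : e.type;
}
function ruleSummary(r) {
  const cs = (r.conditions || []).map(conditionText).join(' + ') || 'no conditions';
  const es = (r.effects || []).map(effectText).join(' → ') || 'no actions';
  return `${cs} → ${es}`;
}

// Hypothetical rule: seed to ratio 2 unless labeled "tv", then relabel and move.
const rule = {
  conditions: [{ type: 'ratio_gte', ratio: 2 }, { type: 'label_has', label: 'tv', negate: true }],
  effects: [{ type: 'add_label', label: 'done' }, { type: 'move', path: '/data/archive' }],
};
console.log(ruleSummary(rule));
// → "ratio >= 2 + NOT (has label tv) → add label done → move to /data/archive"
```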

function renderAutomationBuilder(){
  const cBox=$('automationConditionList');
  if(cBox) cBox.innerHTML=automationConditions.length?automationConditions.map((c,i)=>`<span class="automation-chip"><b>IF</b> ${esc(conditionText(c))}<button class="btn btn-xs btn-link automation-remove-condition" data-index="${i}" type="button"><i class="fa-solid fa-xmark"></i></button></span>`).join(''):'<span class="text-muted small">No conditions added yet.</span>';
  const eBox=$('automationEffectList');
  if(eBox) eBox.innerHTML=automationEffects.length?automationEffects.map((e,i)=>`<span class="automation-chip"><b>${i+1}</b> ${esc(effectText(e))}<button class="btn btn-xs btn-link automation-remove-effect" data-index="${i}" type="button"><i class="fa-solid fa-xmark"></i></button></span>`).join(''):'<span class="text-muted small">No actions added yet.</span>';
}
function resetAutomationForm(){
  if($('autoEditId')) $('autoEditId').value='';
  if($('autoName')) $('autoName').value='';
  if($('autoEnabled')) $('autoEnabled').checked=true;
  if($('autoCooldown')) $('autoCooldown').value='60';
  automationConditions=[]; automationEffects=[];
  $('automationCancelEditBtn')?.classList.add('d-none');
  if($('automationSaveBtn')) $('automationSaveBtn').innerHTML='<i class="fa-solid fa-floppy-disk"></i> Save rule';
  renderAutomationBuilder(); updateAutomationForm();
}
function editAutomationRule(rule){
  if(!rule) return;
  if($('autoEditId')) $('autoEditId').value=rule.id||'';
  if($('autoName')) $('autoName').value=rule.name||'';
  if($('autoEnabled')) $('autoEnabled').checked=!!rule.enabled;
  if($('autoCooldown')) $('autoCooldown').value=rule.cooldown_minutes ?? 60;
  automationConditions=Array.isArray(rule.conditions)?JSON.parse(JSON.stringify(rule.conditions)):[];
  automationEffects=Array.isArray(rule.effects)?JSON.parse(JSON.stringify(rule.effects)):[];
  $('automationCancelEditBtn')?.classList.remove('d-none');
  if($('automationSaveBtn')) $('automationSaveBtn').innerHTML='<i class="fa-solid fa-floppy-disk"></i> Update rule';
  renderAutomationBuilder();
}

function summarizeActionObject(a={}){
  if(a.error) return `<span class="badge text-bg-danger">${esc(a.error)}</span>`;
  const count=a.count || a.result?.count || a.result?.results?.length || '';
  const parts=[];
  if(a.type) parts.push(a.type);
  if(count) parts.push(`${count} torrent(s)`);
  if(a.path) parts.push(a.path);
  if(a.label) parts.push(`label ${a.label}`);
  if(a.labels) parts.push(`labels ${a.labels}`);
  if(a.move_data) parts.push('move data');
  if(a.recheck) parts.push('recheck');
  if(a.keep_seeding) parts.push('keep seeding');
  return `<span class="automation-action-pill">${esc(parts.join(' · ')||'action')}</span>`;
}
function automationHistoryActions(raw){
  let actions=[];
  try{ actions=JSON.parse(raw||'[]'); }catch(e){ return `<div class="automation-history-raw">${esc(raw||'')}</div>`; }
  if(!Array.isArray(actions)) actions=[actions];
  const summary=actions.map(summarizeActionObject).join(' ');
  const details=esc(JSON.stringify(actions,null,2));
  // Note: Large automation payloads are collapsed so JSON never stretches the modal width.
  return `<details class="automation-history-details"><summary>${summary||'No actions'}</summary><pre>${details}</pre></details>`;
}
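The parser above degrades gracefully: valid JSON becomes summary pills inside a collapsible block, while anything unparseable is rendered raw. A minimal sketch of just that parse-or-fallback branch, with the app's `esc` helper stubbed (the real helper may escape differently):

```javascript
// Stubbed esc(): assumption standing in for the app's own HTML-escaping helper.
const esc = (s) => String(s).replace(/[&<>"']/g, (ch) => `&#${ch.charCodeAt(0)};`);

// Same parse-or-fallback shape as automationHistoryActions, minus the pill markup.
function historyActions(raw) {
  let actions;
  try { actions = JSON.parse(raw || '[]'); }
  catch (e) { return `<div class="automation-history-raw">${esc(raw || '')}</div>`; }
  if (!Array.isArray(actions)) actions = [actions]; // single object becomes a one-item list
  return actions.map((a) => a.type || 'action').join(' · ');
}

console.log(historyActions('[{"type":"move"},{"type":"add_label"}]')); // → "move · add_label"
console.log(historyActions('not json')); // falls back to the raw <div> wrapper
```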

function renderAutomationHistory(hist=[]){
  if(!$('automationHistory')) return;
  const toolbar='<div class="automation-history-toolbar"><button id="automationClearHistoryBtn" class="btn btn-xs btn-outline-danger" type="button"><i class="fa-solid fa-trash"></i> Clear history</button></div>';
  const rows=hist.map(h=>[humanDateCell(h.created_at),esc(h.rule_name||''),esc(h.torrent_name||h.torrent_hash||''),automationHistoryActions(h.actions_json||'')]);
  // Note: Automation history uses the shared responsive table wrapper so it stays inside narrow mobile modals.
  const body=hist.length?responsiveTable(['Time','Rule','Torrent / batch','Actions'],rows,'automation-history-table'):'<div class="empty-mini">No automation history yet.</div>';
  $('automationHistory').innerHTML=toolbar+body;
}
async function clearAutomationHistory(){
  if(!confirm('Clear automation history?')) return;
  setBusy(true);
  try{ const j=await fetch('/api/automations/history',{method:'DELETE'}).then(r=>r.json()); if(!j.ok) throw new Error(j.error||'Clear automation history failed'); toast(`Automation logs deleted: ${j.deleted||0}`,'success'); renderAutomationHistory(j.history||[]); }
  catch(e){ toast(e.message,'danger'); }
  finally{ setBusy(false); }
}

async function exportAutomations(){
  try{ const j=await (await fetch('/api/automations/export')).json(); if(!j.ok) throw new Error(j.error||'Automation export failed'); downloadJson(`pytorrent-automation-rules-${new Date().toISOString().slice(0,10)}.json`, j.export||j); toast(`Exported ${j.count||0} automation rule(s)`,'success'); }
  catch(e){ toast(e.message,'danger'); }
}
async function importAutomations(file){
  if(!file) return;
  try{ const payload=JSON.parse(await file.text()); const j=await post('/api/automations/import',payload); toast(`Imported ${j.imported||0} automation rule(s)`,'success'); await loadAutomations(); }
  catch(e){ toast(e.message||'Automation import failed','danger'); }
  finally{ if($('automationImportFile')) $('automationImportFile').value=''; }
}
async function loadAutomations(){
  const j=await fetch('/api/automations').then(r=>r.json());
  const rules=j.rules||[], hist=j.history||[];
  automationRulesCache=rules;
  if($('automationManager')) $('automationManager').innerHTML=rules.length?rules.map(r=>{
    const enabled=!!r.enabled;
    const toggleTitle=enabled?'Disable automation':'Enable automation';
    const toggleIcon=enabled?'fa-toggle-on':'fa-toggle-off';
    const toggleClass=enabled?'btn-outline-warning':'btn-outline-success';
    return `<div class="automation-row"><div class="automation-row-main"><div><b>${esc(r.name)}</b> ${enabled?'<span class="badge text-bg-success">on</span>':'<span class="badge text-bg-secondary">off</span>'}</div><div class="small text-muted automation-rule-summary">${esc(ruleSummary(r))} · cooldown ${esc(r.cooldown_minutes||0)} min</div></div><div class="automation-row-actions"><button class="btn btn-xs ${toggleClass} automation-toggle" data-id="${esc(r.id)}" type="button" title="${toggleTitle}"><i class="fa-solid ${toggleIcon}"></i></button><button class="btn btn-xs btn-outline-secondary automation-edit" data-id="${esc(r.id)}" type="button" title="Edit automation"><i class="fa-solid fa-pen"></i></button><button class="btn btn-xs btn-outline-danger automation-delete" data-id="${esc(r.id)}" type="button" title="Delete automation"><i class="fa-solid fa-trash"></i></button></div></div>`;
  }).join(''):'<div class="empty-mini">No automation rules.</div>';
  renderAutomationHistory(hist);
}

async function toggleAutomationRule(rule){
  if(!rule) return;
  const payload={...rule, enabled:!rule.enabled};
  // Note: Toggle keeps the rule definition unchanged and only switches automatic execution on or off.
  setBusy(true);
  try{ await post('/api/automations',payload); toast(payload.enabled?'Automation enabled':'Automation disabled','success'); await loadAutomations(); }
  catch(e){ toast(e.message,'danger'); }
  finally{ setBusy(false); }
}
async function saveAutomation(){
  const currentCond=automationCondition();
  const currentEff=automationEffect();
  const conditions=automationConditions.length?automationConditions:[currentCond];
  const effects=automationEffects.length?automationEffects:[currentEff];
  const payload={id:Number($('autoEditId')?.value||0)||undefined,name:$('autoName')?.value||'Automation rule',enabled:!!$('autoEnabled')?.checked,cooldown_minutes:Number($('autoCooldown')?.value||60),conditions,effects};
  setBusy(true);
  try{ await post('/api/automations',payload); toast(payload.id?'Automation rule updated':'Automation rule saved','success'); resetAutomationForm(); await loadAutomations(); }
  catch(e){toast(e.message,'danger');}
  finally{setBusy(false);}
}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
function cleanupCountCard(label, value, note=''){
|
function cleanupCountCard(label, value, note=''){
|
||||||
@@ -469,9 +940,10 @@
cleanupCountCard('Job logs total', data.jobs_total, `retention ${retention.jobs||'-'} days`),
cleanupCountCard('Job logs clearable', data.jobs_clearable, 'done / failed / cancelled'),
cleanupCountCard('Smart Queue logs', data.smart_queue_history_total, `retention ${retention.smart_queue_history||'-'} days`),
cleanupCountCard('Automation logs', data.automation_history_total, `retention ${retention.automation_history||'-'} days`),
cleanupCountCard('Database size', db.size_h||db.size||'-', db.path||'')
];
box.innerHTML=`<div class="cleanup-grid">${cards.join('')}</div><div class="cleanup-actions mt-3"><button id="cleanupJobsBtn" class="btn btn-sm btn-outline-danger"><i class="fa-solid fa-trash"></i> Clear job logs</button><button id="cleanupSmartQueueBtn" class="btn btn-sm btn-outline-danger"><i class="fa-solid fa-trash"></i> Clear Smart Queue logs</button><button id="cleanupAutomationsBtn" class="btn btn-sm btn-outline-danger"><i class="fa-solid fa-trash"></i> Clear automation logs</button><button id="cleanupAllBtn" class="btn btn-sm btn-danger"><i class="fa-solid fa-broom"></i> Clear logs</button><button id="cleanupRefreshBtn" class="btn btn-sm btn-outline-secondary"><i class="fa-solid fa-rotate"></i> Refresh</button></div><div class="tool-note mt-2">Job cleanup preserves pending and running jobs. Automation cleanup removes only history, not rules.</div>`;
}
async function loadCleanup(){
const box=$('cleanupManager'); if(!box) return;
@@ -492,6 +964,7 @@
renderCleanup(j.cleanup||{});
if(endpoint.includes('/jobs')){ jobsPage=0; loadJobs(0).catch(()=>{}); }
if(endpoint.includes('/smart-queue')) loadSmartQueue().catch(()=>{});
if(endpoint.includes('/automations')) loadAutomations().catch(()=>{});
}catch(e){ toast(e.message,'danger'); }
finally{ setBusy(false); }
}
@@ -517,6 +990,53 @@
try{ await post('/api/preferences',{footer_items_json:footerItems}); toast('Footer preferences saved','success'); }
catch(e){ toast(e.message,'danger'); }
}
function compactSpeedText(value){
// Note: the footer has limited space, so this strips the space only from speed labels.
return String(value || '0 B/s').replace(/\s+(?=[KMGT]?i?B\/s$|B\/s$)/, '');
}
function speedPairText(down, up){
// Note: the consistent DL/UL pair format is shared by the footer and diagnostics.
return `${compactSpeedText(down)} / ${compactSpeedText(up)}`;
}
function peakDateText(value){
// Note: shortens the ISO timestamp from the database into a readable tooltip label.
return value ? String(value).replace('T',' ').replace(/\+00:00$/, ' UTC') : '-';
}
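A quick sanity sketch of the two formatting helpers above, restated in isolation so their string handling can be checked outside the page (illustrative only, not part of the diff):

```javascript
// Sketch: re-states compactSpeedText and peakDateText from the diff above.
function compactSpeedText(value){
  // Strips the space before the unit, so "12.3 MiB/s" becomes "12.3MiB/s".
  return String(value || '0 B/s').replace(/\s+(?=[KMGT]?i?B\/s$|B\/s$)/, '');
}
function peakDateText(value){
  // Turns a DB ISO timestamp into a readable tooltip label.
  return value ? String(value).replace('T',' ').replace(/\+00:00$/, ' UTC') : '-';
}
console.log(compactSpeedText('12.3 MiB/s')); // 12.3MiB/s
console.log(peakDateText('2024-05-01T08:30:00+00:00')); // 2024-05-01 08:30:00 UTC
```

The lookahead regex only touches a trailing speed unit, so torrent names containing "B/s" elsewhere in a label are untouched.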
function updateSpeedPeaks(peaks={}){
// Note: shows the session record and the all-time record next to the current footer speeds.
const session=peaks.session||{};
const allTime=peaks.all_time||{};
const sessionText=speedPairText(session.down_h, session.up_h);
const allTimeText=speedPairText(allTime.down_h, allTime.up_h);
if($('statPeakSession')) $('statPeakSession').textContent=sessionText;
if($('statPeakAllTime')) $('statPeakAllTime').textContent=allTimeText;
const box=$('statusSpeedPeaks');
if(box){
box.title=`Peak speed DL/UL\nSession: ${sessionText}\nSession DL at: ${peakDateText(session.down_at)}\nSession UL at: ${peakDateText(session.up_at)}\nAll-time: ${allTimeText}\nAll-time DL at: ${peakDateText(allTime.down_at)}\nAll-time UL at: ${peakDateText(allTime.up_at)}`;
}
}
function updateBrowserSpeedTitle(downH, upH){
// Note: shows DL/UL in the tab title, ruTorrent-style; window.status is a best-effort fallback for older browsers.
if(downH != null) lastBrowserSpeed.down=downH || '0 B/s';
if(upH != null) lastBrowserSpeed.up=upH || '0 B/s';
const speedTitle=`DL ${lastBrowserSpeed.down} / UL ${lastBrowserSpeed.up}`;
document.title=titleSpeedEnabled ? `${speedTitle} - ${BASE_TITLE}` : BASE_TITLE;
try{ window.status=titleSpeedEnabled ? speedTitle : ''; }catch(e){}
}
async function saveTitleSpeedPreference(){
// Note: the change takes effect immediately and is stored as a user preference.
titleSpeedEnabled=!!$('titleSpeedEnabled')?.checked;
updateBrowserSpeedTitle();
try{ await post('/api/preferences',{title_speed_enabled:titleSpeedEnabled}); toast('Browser title speed saved','success'); }
catch(e){ toast(e.message,'danger'); }
}
async function saveTrackerFaviconsPreference(){
// Note: Tracker favicon toggle changes only icon rendering; tracker filter counts and actions stay untouched.
trackerFaviconsEnabled=!!$('trackerFaviconsEnabled')?.checked;
renderTrackerFilters();
try{ await post('/api/preferences',{tracker_favicons_enabled:trackerFaviconsEnabled}); toast('Tracker favicon preference saved','success'); }
catch(e){ toast(e.message,'danger'); }
}
function updateFooterClock(){
const el=$('statClock');
if(el) el.textContent=new Date().toLocaleTimeString([], {hour:'2-digit', minute:'2-digit', second:'2-digit'});
@@ -549,33 +1069,72 @@
}
if($('statusPortCheckBadge')) $('statusPortCheckBadge').outerHTML=portStatusBadge(data,'id="statusPortCheckBadge" ',true);
}
async function loadPreferences(){ if($('portCheckEnabled')) $('portCheckEnabled').checked=portCheckEnabled; applyBootstrapTheme(bootstrapTheme); applyFontFamily(fontFamily); applyInterfaceScale(interfaceScale); renderFooterPreferences(); applyFooterPreferences(); await loadPortCheck(false); }
async function savePortCheckPref(){ portCheckEnabled=!!$('portCheckEnabled')?.checked; try{ await post('/api/preferences',{port_check_enabled:portCheckEnabled}); toast('Preferences saved','success'); await loadPortCheck(false); }catch(e){ toast(e.message,'danger'); } }
async function loadPortCheck(force=false){ try{ const res=force?await post('/api/port-check',{}):await (await fetch('/api/port-check')).json(); if(!res.ok) throw new Error(res.error||'Port check failed'); renderPortCheck(res.port_check||{}); }catch(e){ renderPortCheck({status:'error',enabled:portCheckEnabled,error:e.message}); } }
async function loadAppStatus(){
const box=$('appStatusManager'); if(!box) return;
box.innerHTML='<span class="spinner-border spinner-border-sm"></span> Loading diagnostics...';
try{
const [j,smart]=await Promise.all([
fetch('/api/app/status').then(r=>r.json()),
fetch('/api/smart-queue?history_limit=100').then(r=>r.json()).catch(()=>({ok:false}))
]);
if(!j.ok) throw new Error(j.error||'Failed to load diagnostics');
const st=j.status||{}, py=st.pytorrent||{}, scgi=st.scgi||{}, profile=st.profile||{}, pc=st.port_check||{}, cleanup=st.cleanup||{}, db=cleanup.database||{};
const peaks=st.speed_peaks||{}, peakSession=peaks.session||{}, peakAllTime=peaks.all_time||{};
const smartStats=smart?.ok?buildSmartQueueNerdStats(smart.history||[], Number(smart.history_total||0)):null;
const cards=[
diagCard('pyTorrent PID', py.pid), diagCard('pyTorrent uptime', `${py.uptime_seconds||0}s`), diagCard('Memory RSS', py.memory_rss_h||py.memory_rss),
diagCard('Threads', py.threads), diagCard('CPU', `${py.cpu_percent ?? '-'}%`), diagCard('Jobs total', py.jobs_total),
diagCard('Worker threads', py.worker_threads), diagCard('Python', py.python||'-'), diagCard('DB size', db.size_h||'-'),
diagCard('Active profile', profile.name||profile.id||'-'), diagCard('API response time', `${st.api_ms ?? '-'} ms`),
diagCard('Peak session DL/UL', speedPairText(peakSession.down_h, peakSession.up_h)), diagCard('Peak all-time DL/UL', speedPairText(peakAllTime.down_h, peakAllTime.up_h)),
diagCard('Job logs clearable', cleanup.jobs_clearable ?? '-'), diagCard('Smart Queue logs', cleanup.smart_queue_history_total ?? '-'), diagCard('Automation logs', cleanup.automation_history_total ?? '-'),
diagCard('Port check', portStatusLabel(pc.status), pc.status==='closed'?'diag-error':''), diagCard('Incoming port', pc.port||'-'), diagCard('Port check source', pc.source||(pc.enabled?'unknown':'disabled')),
diagCard('SCGI status', scgi.ok?'OK':'ERROR', scgi.ok?'':'diag-error'), diagCard('SCGI URL', scgi.url||'-'), diagCard('SCGI connect', scgi.connect_ms!=null?`${scgi.connect_ms} ms`:'-'),
diagCard('SCGI first byte', scgi.first_byte_ms!=null?`${scgi.first_byte_ms} ms`:'-'), diagCard('SCGI total', scgi.total_ms!=null?`${scgi.total_ms} ms`:'-'),
diagCard('Request bytes', scgi.request_bytes), diagCard('Response bytes', scgi.response_bytes), diagCard('XML bytes', scgi.xml_bytes), diagCard('rTorrent version', scgi.client_version||'-')
];
const smartBlock=`<div class="section-title mt-3"><i class="fa-solid fa-list-check"></i> Smart Queue statistics</div>${renderSmartQueueNerdStats(smartStats)}`;
box.innerHTML=`<div class="diag-grid">${cards.join('')}</div>${smartBlock}${scgi.error?`<div class="alert alert-danger mt-3 mb-0">${esc(scgi.error)}</div>`:''}`;
}catch(e){ box.innerHTML=`<div class="alert alert-danger mb-0">${esc(e.message)}</div>`; }
}

function torrentStatsCard(label, value, note=''){
return `<div class="torrent-stats-card"><b>${esc(label)}</b><span>${esc(value ?? '-')}</span>${note?`<small>${esc(note)}</small>`:''}</div>`;
}
function renderTorrentStats(stats={}){
const box=$('torrentStatsManager');
if(!box) return;
const age=Number(stats.age_seconds||0);
const updated=stats.updated_at ? String(stats.updated_at).replace('T',' ').replace(/\+00:00$/,' UTC') : '-';
const cards=[
torrentStatsCard('Torrents', stats.torrent_count, `${stats.complete_count||0} complete / ${stats.incomplete_count||0} incomplete`),
torrentStatsCard('Torrent size', stats.total_torrent_size_h || fmtBytes(stats.total_torrent_size)),
torrentStatsCard('Files size', stats.total_file_size_h || fmtBytes(stats.total_file_size), `${stats.file_count||0} files`),
torrentStatsCard('Seeds / peers', `${stats.seeds_total||0} / ${stats.peers_total||0}`, 'current sum from last sample'),
torrentStatsCard('Speed DL / UL', `${stats.down_rate_total_h||'0 B/s'} / ${stats.up_rate_total_h||'0 B/s'}`),
torrentStatsCard('Sampled', stats.sampled_torrents ?? 0, stats.stale?'cache is stale':'cache is fresh')
];
if($('torrentStatsMeta')) $('torrentStatsMeta').textContent=`Updated: ${updated}, age: ${age}s`;
const errors=Array.isArray(stats.errors)&&stats.errors.length ? `<div class="alert alert-warning py-2 mt-3 mb-0">File metadata warnings: ${esc(stats.errors.length)} torrent(s). ${esc(stats.error||'')}</div>` : '';
box.innerHTML=`<div class="torrent-stats-grid">${cards.join('')}</div>${errors}`;
}
async function loadTorrentStats(force=false){
const box=$('torrentStatsManager');
if(!box) return;
box.innerHTML='<span class="spinner-border spinner-border-sm"></span> Loading torrent statistics...';
try{
const j=await (await fetch(`/api/torrent-stats${force?'?force=1':''}`)).json();
if(!j.ok) throw new Error(j.error||'Torrent statistics failed');
renderTorrentStats(j.stats||{});
if(force) toast('Torrent statistics refreshed','success');
}catch(e){ box.innerHTML=`<div class="text-danger">${esc(e.message)}</div>`; }
}

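The `torrentStatsCard` helper above renders one stats tile as an HTML string. A minimal sketch of its output, with a stand-in `esc()` escaper (an assumption here; the page defines its own `esc`):

```javascript
// Sketch: torrentStatsCard from the diff above, with an assumed minimal esc().
function esc(s){
  // Stand-in HTML escaper; the real page-level esc() may differ.
  return String(s).replace(/&/g,'&amp;').replace(/</g,'&lt;').replace(/>/g,'&gt;');
}
function torrentStatsCard(label, value, note=''){
  return `<div class="torrent-stats-card"><b>${esc(label)}</b><span>${esc(value ?? '-')}</span>${note?`<small>${esc(note)}</small>`:''}</div>`;
}
console.log(torrentStatsCard('Torrents', 5, '3 complete / 2 incomplete'));
```

The `value ?? '-'` fallback means a missing metric renders as a dash, and the `<small>` note is emitted only when a non-empty note is passed.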
$('toolsModal')?.addEventListener('show.bs.modal',()=>{refreshProfiles();loadLabels();loadRatios();loadRss();loadSmartQueue();loadRtConfig();loadAutomations();loadCleanup();loadAppStatus();loadPreferences();loadAuthUsers();renderColumnManager();applyColumnVisibility();updateAutomationForm();}); const toolPanelIds={rtorrents:'toolRtorrents',settings:'toolRtorrents',torrentstats:'toolTorrentStats',preferences:'toolPreferences',users:'toolUsers',labels:'toolLabels',ratio:'toolRatio',rss:'toolRss',columns:'toolColumns',smart:'toolSmart',automations:'toolAutomations',rtconfig:'toolRtconfig',cleanup:'toolCleanup',appstatus:'toolAppstatus'}; const hideToolPanels=()=>Object.values(toolPanelIds).filter((v,i,a)=>a.indexOf(v)===i).forEach(id=>$(id)?.classList.add('d-none')); const showToolPanel=tool=>{hideToolPanels(); $(toolPanelIds[tool]||'toolRtorrents')?.classList.remove('d-none');}; const activateToolTab=tool=>{document.querySelectorAll('.tool-tab').forEach(x=>x.classList.toggle('active',(x.dataset.tool||'rtorrents')===tool)); showToolPanel(tool); if(tool==='torrentstats') loadTorrentStats(false); if(tool==='appstatus') loadAppStatus(); if(tool==='cleanup') loadCleanup(); if(tool==='preferences') loadPreferences(); if(tool==='users') loadAuthUsers();}; document.querySelectorAll('.tool-tab').forEach(b=>b.addEventListener('click',()=>activateToolTab(b.dataset.tool||'rtorrents'))); $('torrentStatsRefreshBtn')?.addEventListener('click',()=>loadTorrentStats(true)); $('authUserSaveBtn')?.addEventListener('click',saveAuthUser); $('authUserCancelBtn')?.addEventListener('click',resetAuthUserForm); $('authUsersManager')?.addEventListener('click',async e=>{ const edit=e.target.closest('.auth-edit'); const del=e.target.closest('.auth-delete'); if(edit){ editAuthUser(JSON.parse(edit.dataset.user||'{}')); return; } if(del && confirm('Delete user?')){ await fetch(`/api/auth/users/${del.dataset.id}`,{method:'DELETE'}); loadAuthUsers(); } });
$('rssFeedBtn')?.addEventListener('click',async()=>{await post('/api/rss/feeds',{name:$('rssName').value,url:$('rssUrl').value}); loadRss();}); $('rssRuleBtn')?.addEventListener('click',async()=>{await post('/api/rss/rules',{name:$('rssRuleName').value,pattern:$('rssPattern').value,save_path:$('rssPath').value,label:$('rssLabel').value}); loadRss();}); $('rssCheckBtn')?.addEventListener('click',async()=>{setBusy(true); try{const j=await post('/api/rss/check',{}); toast(`RSS queued ${j.queued} item(s)`,'success');}catch(e){toast(e.message,'danger');} finally{setBusy(false);}}); $('smartSaveBtn')?.addEventListener('click',saveSmartQueue); $('smartCheckBtn')?.addEventListener('click',async()=>{setBusy(true); try{const j=await post('/api/smart-queue/check',{}); const r=j.result||{}; if(j.torrent_patch) patchRows(j.torrent_patch); toast(smartQueueToastMessage(r),'success'); await loadSmartQueue();}catch(e){toast(e.message,'danger');}finally{setBusy(false);}}); $('smartManager')?.addEventListener('click',async e=>{const h=e.target.closest('.smart-unexclude')?.dataset.hash; if(!h)return; await post('/api/smart-queue/exclusion',{hash:h,excluded:false}); await loadSmartQueue();}); $('cleanupManager')?.addEventListener('click',async e=>{ if(e.target.closest('#cleanupRefreshBtn')) return loadCleanup(); if(e.target.closest('#cleanupJobsBtn')) return runCleanupAction('/api/cleanup/jobs','Clear finished job logs'); if(e.target.closest('#cleanupSmartQueueBtn')) return runCleanupAction('/api/cleanup/smart-queue','Clear Smart Queue logs'); if(e.target.closest('#cleanupAutomationsBtn')) return runCleanupAction('/api/cleanup/automations','Clear automation logs'); if(e.target.closest('#cleanupAllBtn')) return runCleanupAction('/api/cleanup/all','Clear job, Smart Queue and automation logs'); }); $('rtConfigReloadBtn')?.addEventListener('click',loadRtConfig); $('rtConfigSaveBtn')?.addEventListener('click',saveRtConfig);
$('rtConfigGenerateBtn')?.addEventListener('click',generateRtConfig); $('rtConfigManager')?.addEventListener('input',e=>{ if(e.target.classList.contains('rt-config-input')) updateRtConfigDirty(); }); $('rtConfigManager')?.addEventListener('change',e=>{ if(e.target.classList.contains('rt-config-input')){ const label=e.target.closest('.rt-config-switch')?.querySelector('.form-check-label'); if(label) label.textContent=e.target.checked?'On':'Off'; updateRtConfigDirty(); } }); $('rtConfigApplyOnStart')?.addEventListener('change',updateRtConfigDirty); $('peersRefreshSelect')?.addEventListener('change',async e=>{peersRefreshSeconds=Number(e.target.value||0); await post('/api/preferences',{peers_refresh_seconds:peersRefreshSeconds}).catch(()=>{}); setupPeersRefresh(activeTab()); toast('Peers refresh preference saved','success');});
$('autoConditionType')?.addEventListener('change',updateAutomationForm); $('autoEffectType')?.addEventListener('change',updateAutomationForm); $('automationAddConditionBtn')?.addEventListener('click',()=>{automationConditions.push(automationCondition()); renderAutomationBuilder();}); $('automationAddEffectBtn')?.addEventListener('click',()=>{automationEffects.push(automationEffect()); renderAutomationBuilder();}); $('automationConditionList')?.addEventListener('click',e=>{const b=e.target.closest('.automation-remove-condition'); if(!b)return; automationConditions.splice(Number(b.dataset.index||0),1); renderAutomationBuilder();}); $('automationEffectList')?.addEventListener('click',e=>{const b=e.target.closest('.automation-remove-effect'); if(!b)return; automationEffects.splice(Number(b.dataset.index||0),1); renderAutomationBuilder();}); $('automationCancelEditBtn')?.addEventListener('click',resetAutomationForm); $('automationSaveBtn')?.addEventListener('click',saveAutomation); $('automationExportBtn')?.addEventListener('click',exportAutomations); $('automationImportBtn')?.addEventListener('click',()=>$('automationImportFile')?.click()); $('automationImportFile')?.addEventListener('change',e=>importAutomations(e.target.files?.[0])); $('automationCheckBtn')?.addEventListener('click',async()=>{setBusy(true);try{const j=await post('/api/automations/check',{}); const torrents=j.result?.applied?.length||0; const batches=j.result?.batches?.length||0; toast(`Automations applied ${torrents} torrent(s) in ${batches} batch(es)`,'success'); await loadAutomations();}catch(e){toast(e.message,'danger');}finally{setBusy(false);}}); $('automationManager')?.addEventListener('click',async e=>{const toggle=e.target.closest('.automation-toggle'); if(toggle){ await toggleAutomationRule(automationRulesCache.find(r=>String(r.id)===String(toggle.dataset.id))); return; } const edit=e.target.closest('.automation-edit'); if(edit){ editAutomationRule(automationRulesCache.find(r=>String(r.id)===String(edit.dataset.id))); return; } const id=e.target.closest('.automation-delete')?.dataset.id;if(!id)return;if(!confirm('Delete this automation rule?'))return;const r=await fetch('/api/automations/'+id,{method:'DELETE'});const j=await r.json();if(!j.ok)toast(j.error||'Delete failed','danger');await loadAutomations();}); $('automationHistory')?.addEventListener('click',e=>{ if(e.target.closest('#automationClearHistoryBtn')) clearAutomationHistory(); });
document.addEventListener('click',async e=>{ const btn=e.target.closest('.delete-label'); if(!btn)return; if(!confirm('Delete this label?')) return; setBusy(true); try{ const r=await fetch('/api/labels/'+btn.dataset.id,{method:'DELETE'}); const j=await r.json(); if(!j.ok) throw new Error(j.error||'Delete failed'); await loadLabels(); toast('Label deleted','success'); }catch(err){toast(err.message,'danger');} finally{setBusy(false);} });
$('bulkClearBtn')?.addEventListener('click',()=>{selected.clear(); selectedHash=null; lastSelectedHash=null; updateBulkBar(); if($('selectAll')) $('selectAll').checked=false; if($('detailPane')) $('detailPane').innerHTML='Select a torrent.'; setupPeersRefresh('general'); scheduleRender(true);});
$('smartExcludeSelectedBtn')?.addEventListener('click',()=>setSmartException(selectedHashes(),true,'manual'));
@@ -584,9 +1143,9 @@
document.addEventListener('change',e=>{ const sel=e.target.closest('#mobileFilterSelect'); if(!sel)return; activeFilter=sel.value; document.querySelectorAll('.filter').forEach(x=>x.classList.toggle('active', x.dataset.filter===activeFilter)); if($('tableWrap'))$('tableWrap').scrollTop=0; if($('mobileList'))$('mobileList').scrollTop=0; scheduleRender(true); });
function awaitMaybeRun(action){ runAction(action).catch?.(()=>{}); }
document.addEventListener('click',e=>{ const ctx=$('ctxMenu'); if(!e.target.closest('#ctxMenu')) ctx.style.display='none'; const mobileFilter=e.target.closest('#mobileFilterBar .mobile-filter'); if(mobileFilter){ document.querySelectorAll('.filter').forEach(x=>x.classList.remove('active')); document.querySelectorAll('.filter').forEach(x=>{ if(x.dataset.filter===mobileFilter.dataset.filter) x.classList.add('active'); }); activeFilter=mobileFilter.dataset.filter; if($('tableWrap'))$('tableWrap').scrollTop=0; if($('mobileList'))$('mobileList').scrollTop=0; scheduleRender(true); return; } const mobileSelectAll=e.target.closest('#mobileSelectAll'); if(mobileSelectAll){ const all=visibleRows.length>0 && visibleRows.every(t=>selected.has(t.hash)); if(all) visibleRows.forEach(t=>selected.delete(t.hash)); else visibleRows.forEach(t=>selected.add(t.hash)); if(selected.size===0){selectedHash=null;lastSelectedHash=null;} else {selectedHash=[...selected][selected.size-1];lastSelectedHash=selectedHash;} scheduleRender(true); return; } const mobileClear=e.target.closest('#mobileClearSelection'); if(mobileClear){ selected.clear(); selectedHash=null; lastSelectedHash=null; scheduleRender(true); return; } const mobileAct=e.target.closest('.mobile-card [data-action]'); if(mobileAct){ const card0=mobileAct.closest('.mobile-card'); selected.clear(); selected.add(card0.dataset.hash); selectedHash=card0.dataset.hash; awaitMaybeRun(mobileAct.dataset.action); scheduleRender(true); return; } const card=e.target.closest('.mobile-card'); const tr=e.target.closest('tr[data-hash]'); const row=tr||card; if(row){ const h=row.dataset.hash; const additive=e.ctrlKey||e.metaKey; if(e.shiftKey){ setSelectionRange(h, additive); } else if(e.target.classList.contains('row-check')){ e.target.checked?selected.add(h):selected.delete(h); lastSelectedHash=h; selectedHash=h; } else { selectedHash=h; if(!additive)selected.clear(); selected.add(h); lastSelectedHash=h; loadDetails(activeTab()); } 
scheduleRender(true); } const copy=e.target.closest('[data-copy]'); if(copy) copySelected(copy.dataset.copy); const smartEx=e.target.closest('#smartExcludeCtx'); if(smartEx){ selectedHashes().forEach(h=>post('/api/smart-queue/exclusion',{hash:h,excluded:true,reason:'manual'}).catch(()=>{})); toast('Smart Queue exception saved','success'); loadSmartQueue().catch(()=>{}); } const act=e.target.closest('.torrent-action,[data-action]'); if(act&&act.dataset.action&&!act.closest('#detailTabs')&&!act.closest('.mobile-card')) runAction(act.dataset.action); });
document.addEventListener('click',e=>{ const ctx=$('ctxMenu'); if(!e.target.closest('#ctxMenu')) ctx.style.display='none'; const mobileFilter=e.target.closest('#mobileFilterBar .mobile-filter'); if(mobileFilter){ document.querySelectorAll('.filter').forEach(x=>x.classList.remove('active')); document.querySelectorAll('.filter').forEach(x=>{ if(x.dataset.filter===mobileFilter.dataset.filter) x.classList.add('active'); }); activeFilter=mobileFilter.dataset.filter; if($('tableWrap'))$('tableWrap').scrollTop=0; if($('mobileList'))$('mobileList').scrollTop=0; scheduleRender(true); return; } const mobileSort=e.target.closest('#mobileSortCycle'); if(mobileSort){ cycleMobileSort(); return; } const mobileSelectAll=e.target.closest('#mobileSelectAll'); if(mobileSelectAll){ const all=visibleRows.length>0 && visibleRows.every(t=>selected.has(t.hash)); if(all) visibleRows.forEach(t=>selected.delete(t.hash)); else visibleRows.forEach(t=>selected.add(t.hash)); if(selected.size===0){selectedHash=null;lastSelectedHash=null;} else {selectedHash=[...selected][selected.size-1];lastSelectedHash=selectedHash;} scheduleRender(true); return; } const mobileClear=e.target.closest('#mobileClearSelection'); if(mobileClear){ selected.clear(); selectedHash=null; lastSelectedHash=null; scheduleRender(true); return; } const mobileAct=e.target.closest('.mobile-card [data-action]'); if(mobileAct){ const card0=mobileAct.closest('.mobile-card'); selected.clear(); selected.add(card0.dataset.hash); selectedHash=card0.dataset.hash; lastSelectedHash=selectedHash; awaitMaybeRun(mobileAct.dataset.action); scheduleRender(true); return; } const mobileModal=e.target.closest('.mobile-card [data-mobile-modal]'); if(mobileModal){ const card0=mobileModal.closest('.mobile-card'); selected.clear(); selected.add(card0.dataset.hash); selectedHash=card0.dataset.hash; lastSelectedHash=selectedHash; scheduleRender(true); if(mobileModal.dataset.mobileModal==='label') new bootstrap.Modal($('labelModal')).show(); return; } 
const card=e.target.closest('.mobile-card'); const tr=e.target.closest('tr[data-hash]'); const row=tr||card; if(row){ const h=row.dataset.hash; const additive=e.ctrlKey||e.metaKey; if(e.shiftKey){ setSelectionRange(h, additive); } else if(e.target.classList.contains('row-check')){ e.target.checked?selected.add(h):selected.delete(h); lastSelectedHash=h; selectedHash=h; } else { selectedHash=h; if(!additive)selected.clear(); selected.add(h); lastSelectedHash=h; loadDetails(activeTab()); } scheduleRender(true); } const copy=e.target.closest('[data-copy]'); if(copy) copySelected(copy.dataset.copy); const smartEx=e.target.closest('#smartExcludeCtx'); if(smartEx){ selectedHashes().forEach(h=>post('/api/smart-queue/exclusion',{hash:h,excluded:true,reason:'manual'}).catch(()=>{})); toast('Smart Queue exception saved','success'); loadSmartQueue().catch(()=>{}); } const act=e.target.closest('.torrent-action,[data-action]'); if(act&&act.dataset.action&&!act.closest('#detailTabs')&&!act.closest('.mobile-card')) runAction(act.dataset.action); });
document.addEventListener('contextmenu',e=>{ const tr=e.target.closest('tr[data-hash],.mobile-card'); if(!tr)return; e.preventDefault(); selectedHash=tr.dataset.hash; if(!selected.has(selectedHash)){selected.clear();selected.add(selectedHash);scheduleRender(true);} const m=$('ctxMenu'); m.style.left=`${e.pageX}px`; m.style.top=`${e.pageY}px`; m.style.display='block'; });
document.querySelectorAll('.torrent-table thead th[data-sort]').forEach(th=>th.addEventListener('click',()=>{ const key=th.dataset.sort; if(sortState.key===key) sortState.dir*=-1; else sortState={key,dir:1}; scheduleRender(true); })); $('tableWrap')?.addEventListener('scroll',()=>scheduleRender(false),{passive:true}); $('selectAll')?.addEventListener('change',e=>{selected.clear(); if(e.target.checked)visibleRows.forEach(t=>selected.add(t.hash)); scheduleRender(true);}); $('searchBox')?.addEventListener('input',()=>{if($('tableWrap'))$('tableWrap').scrollTop=0;scheduleRender(true);}); document.querySelectorAll('.filter').forEach(b=>b.addEventListener('click',()=>{document.querySelectorAll('.filter').forEach(x=>x.classList.remove('active')); b.classList.add('active'); activeFilter=b.dataset.filter; if($('tableWrap'))$('tableWrap').scrollTop=0; scheduleRender(true);})); document.querySelectorAll('#detailTabs .nav-link').forEach(b=>b.addEventListener('click',()=>{document.querySelectorAll('#detailTabs .nav-link').forEach(x=>x.classList.remove('active')); b.classList.add('active'); loadDetails(b.dataset.tab);})); document.addEventListener('change',e=>{ const sel=e.target.closest('.file-priority'); if(sel){ setFilePriorities([{index:Number(sel.dataset.index),priority:Number(sel.value)}]); return; } if(e.target && e.target.id==='fileSelectAll'){ document.querySelectorAll('#detailPane .file-check').forEach(cb=>cb.checked=e.target.checked); } }); document.addEventListener('click',e=>{ const bulk=e.target.closest('.file-priority-bulk'); if(!bulk) return; const priority=Number(bulk.dataset.priority); const checked=[...document.querySelectorAll('#detailPane .file-check:checked')].map(cb=>({index:Number(cb.dataset.index),priority})); if(!checked.length) return toast('No files selected','warning'); setFilePriorities(checked); }); document.addEventListener('click',e=>{ const b=e.target.closest('.peer-action'); if(!b) return; peerAction(b.dataset.peerIndex,b.dataset.peerAction); 
}); document.addEventListener('click',e=>{ const add=e.target.closest('#trackerAddBtn'); if(add){ const url=$('trackerAddUrl')?.value||''; trackerAction('add',{url}); return; } const editStart=e.target.closest('.tracker-edit-start'); if(editStart){ setTrackerEdit(editStart.dataset.index,true); return; } const cancel=e.target.closest('.tracker-edit-cancel'); if(cancel){ setTrackerEdit(cancel.dataset.index,false); return; } const save=e.target.closest('.tracker-edit-save'); if(save){ const input=document.querySelector(`.tracker-url[data-tracker-index="${CSS.escape(String(save.dataset.index))}"]`); trackerAction('edit',{index:Number(save.dataset.index),url:input?.value||''}); return; } const rea=e.target.closest('#trackerReannounceBtn'); if(rea) trackerAction('reannounce',{}); }); $('appStatusRefreshBtn')?.addEventListener('click',loadAppStatus); $('portCheckEnabled')?.addEventListener('change',savePortCheckPref); $('portCheckNowBtn')?.addEventListener('click',()=>loadPortCheck(true)); $('bootstrapThemeSelect')?.addEventListener('change',saveAppearancePreferences); $('fontFamilySelect')?.addEventListener('change',saveAppearancePreferences); $('saveFooterPrefsBtn')?.addEventListener('click',saveFooterPreferences);
document.querySelectorAll('.torrent-table thead th[data-sort]').forEach(th=>th.addEventListener('click',()=>{ const key=th.dataset.sort; if(sortState.key===key) sortState.dir*=-1; else sortState={key,dir:1}; scheduleRender(true); })); $('tableWrap')?.addEventListener('scroll',()=>scheduleRender(false),{passive:true}); $('selectAll')?.addEventListener('change',e=>{selected.clear(); if(e.target.checked)visibleRows.forEach(t=>selected.add(t.hash)); scheduleRender(true);}); $('searchBox')?.addEventListener('input',()=>{if($('tableWrap'))$('tableWrap').scrollTop=0;scheduleRender(true);}); document.querySelectorAll('.filter').forEach(b=>b.addEventListener('click',()=>{document.querySelectorAll('.filter').forEach(x=>x.classList.remove('active')); b.classList.add('active'); activeFilter=b.dataset.filter; if($('tableWrap'))$('tableWrap').scrollTop=0; scheduleRender(true);})); document.querySelectorAll('#detailTabs .nav-link').forEach(b=>b.addEventListener('click',()=>{document.querySelectorAll('#detailTabs .nav-link').forEach(x=>x.classList.remove('active')); b.classList.add('active'); loadDetails(b.dataset.tab);})); document.addEventListener('change',e=>{ const sel=e.target.closest('.file-priority'); if(sel){ setFilePriorities([{index:Number(sel.dataset.index),priority:Number(sel.value)}]); return; } if(e.target && e.target.id==='fileSelectAll'){ document.querySelectorAll('#detailPane .file-check').forEach(cb=>cb.checked=e.target.checked); } }); document.addEventListener('click',e=>{ const bulk=e.target.closest('.file-priority-bulk'); if(!bulk) return; const priority=Number(bulk.dataset.priority); const checked=[...document.querySelectorAll('#detailPane .file-check:checked')].map(cb=>({index:Number(cb.dataset.index),priority})); if(!checked.length) return toast('No files selected','warning'); setFilePriorities(checked); }); document.addEventListener('click',e=>{ const add=e.target.closest('#trackerAddBtn'); if(add){ const url=$('trackerAddUrl')?.value||''; 
trackerAction('add',{url}); return; } const del=e.target.closest('.tracker-delete'); if(del && !del.disabled){ trackerAction('delete',{index:Number(del.dataset.index)}); return; } const rea=e.target.closest('#trackerReannounceBtn'); if(rea) trackerAction('reannounce',{}); }); $('appStatusRefreshBtn')?.addEventListener('click',loadAppStatus); $('portCheckEnabled')?.addEventListener('change',savePortCheckPref); $('portCheckNowBtn')?.addEventListener('click',()=>loadPortCheck(true)); $('bootstrapThemeSelect')?.addEventListener('change',saveAppearancePreferences); $('fontFamilySelect')?.addEventListener('change',saveAppearancePreferences); $('interfaceScaleRange')?.addEventListener('input',e=>applyInterfaceScale(e.target.value)); $('interfaceScaleRange')?.addEventListener('change',saveAppearancePreferences); $('titleSpeedEnabled')?.addEventListener('change',saveTitleSpeedPreference); $('trackerFaviconsEnabled')?.addEventListener('change',saveTrackerFaviconsPreference); $('saveFooterPrefsBtn')?.addEventListener('click',saveFooterPreferences);
document.addEventListener('keydown',e=>{ const tag=(e.target?.tagName||'').toLowerCase(); const editable=tag==='input'||tag==='textarea'||tag==='select'||e.target?.isContentEditable; if(editable){ if(e.key==='Enter' && e.target?.id==='labelInput'){ e.preventDefault(); $('addLabelToSelectionBtn')?.click(); } return; } if((e.ctrlKey||e.metaKey)&&e.key.toLowerCase()==='a'){e.preventDefault();selected.clear();visibleRows.forEach(t=>selected.add(t.hash));scheduleRender(true);} if((e.ctrlKey||e.metaKey)&&e.key.toLowerCase()==='i'){e.preventDefault();visibleRows.forEach(t=>selected.has(t.hash)?selected.delete(t.hash):selected.add(t.hash));scheduleRender(true);} if((e.ctrlKey||e.metaKey)&&e.key.toLowerCase()==='o'){e.preventDefault();new bootstrap.Modal($('addModal')).show();} if(e.key==='Escape'){selected.clear();scheduleRender(true);} if(e.key==='Delete') new bootstrap.Modal($('removeModal')).show(); if(e.key===' ') {e.preventDefault();runAction('start');} if(e.key.toLowerCase()==='p')runAction('pause'); if(e.key.toLowerCase()==='s')runAction('stop'); if(e.key.toLowerCase()==='r')runAction('resume'); if(e.key.toLowerCase()==='m')runAction('move'); });
$('removeModal')?.addEventListener('show.bs.modal',()=>{$('removeCount').textContent=selected.size;$('removeData').checked=true;}); $('confirmRemoveBtn')?.addEventListener('click',async()=>{await runAction('remove',{remove_data:$('removeData').checked});bootstrap.Modal.getInstance($('removeModal'))?.hide();});
$('addModal')?.addEventListener('show.bs.modal',()=>applyDefaultDownloadPath(true));
@@ -746,6 +1305,32 @@ ${disk.error}`:''}`;
b.classList.add("btn-primary"); b.classList.remove("btn-outline-secondary");
loadTrafficHistory(b.dataset.range||"7d");
}));
socket.on('connect',()=>{ if(!hasActiveProfile){ showFirstRunSetup(); return; } $('connBadge').className='badge text-bg-success'; $('connBadge').textContent='online'; setInitialLoader('Loading torrents...','Connection is ready. Waiting for the first torrent snapshot.'); socket.emit('select_profile',{profile_id:window.PYTORRENT.activeProfile}); }); socket.on('disconnect',()=>{ $('connBadge').className='badge text-bg-danger'; $('connBadge').textContent='offline'; setInitialLoader('Waiting for connection...','pyTorrent is not connected yet. The application will open after data is received.'); }); socket.io.on('reconnect_attempt',()=>{ $('connBadge').className='badge text-bg-warning'; $('connBadge').textContent='reconnecting'; setInitialLoader('Reconnecting...','Trying to restore the live connection and load torrent data.'); }); socket.io.on('reconnect',()=>{ if(!hasActiveProfile){ showFirstRunSetup(); return; } $('connBadge').className='badge text-bg-success'; $('connBadge').textContent='online'; setInitialLoader('Loading torrents...','Connection restored. Waiting for the first torrent snapshot.'); socket.emit('select_profile',{profile_id:window.PYTORRENT.activeProfile}); }); socket.on('profile_required',()=>showFirstRunSetup()); socket.on('torrent_snapshot',msg=>{hasTorrentSnapshot=true;torrentSummary=msg.summary||null;torrents.clear();(msg.torrents||[]).forEach(t=>torrents.set(t.hash,t));scheduleRender(true);hideInitialLoader();}); socket.on('torrent_patch',patchRows); socket.on('job_update',()=>{ if(document.body.classList.contains('modal-open')) loadJobs().catch(()=>{}); }); socket.on('operation_started',msg=>{setBusy(true);markTorrentOperation(msg.hashes||[],msg.action,msg.job_id,'running');toast(`${msg.action} started`,'secondary');}); socket.on('operation_finished',msg=>{setBusy(false);clearJobOperation(msg.job_id,msg.hashes||[]);toast(`${msg.action} done`,'success');}); socket.on('operation_failed',msg=>{setBusy(false);clearJobOperation(msg.job_id,msg.hashes||[]);toast(`${msg.action}: ${msg.error}`,'danger');}); socket.on('rtorrent_error',msg=>{ if(msg.error){$('connBadge').className='badge badge-degraded';$('connBadge').textContent='degraded'; setInitialLoader('Waiting for rTorrent...','rTorrent is not ready yet. Data will appear automatically after it responds.');} }); socket.on('heartbeat',msg=>{ if(msg.error){$('connBadge').className='badge badge-degraded';$('connBadge').textContent='degraded'; setInitialLoader('Waiting for rTorrent...','rTorrent is not ready yet. Data will appear automatically after it responds.');} else if(socket.connected){$('connBadge').className='badge text-bg-success';$('connBadge').textContent='online';} }); socket.on('smart_queue_update',msg=>{ if(msg && msg.enabled) toast(`Smart Queue: paused ${msg.paused?.length||0}, resumed ${msg.resumed?.length||0}`,'secondary'); }); socket.on('automation_update',msg=>{ if(msg?.applied?.length) toast(`Automations applied ${msg.applied.length} item(s)`,'secondary'); }); socket.on('rtorrent_config_applied',msg=>{ if(msg?.result?.updated?.length) toast(`Startup rTorrent config applied (${msg.result.updated.length})`,'success'); if(msg?.error) toast(`Startup rTorrent config: ${msg.error}`,'danger'); }); socket.on('system_stats',s=>{ const usageAvailable=s.usage_available!==false && s.cpu!==undefined && s.ram!==undefined; $('statCpuBox')?.classList.toggle('d-none',!usageAvailable);$('statRamBox')?.classList.toggle('d-none',!usageAvailable);$('systemChart')?.classList.toggle('d-none',!usageAvailable); if(usageAvailable){$('statCpu').textContent=s.cpu??'-';$('statRam').textContent=s.ram??'-';drawSystemUsage(s.cpu,s.ram);} $('statVersion').textContent=s.version||'-';$('statDl').textContent=s.down_rate_h||'0 B/s';$('statUl').textContent=s.up_rate_h||'0 B/s';if($('mobileSpeedDl')) $('mobileSpeedDl').textContent=s.down_rate_h||'0 B/s';if($('mobileSpeedUl')) $('mobileSpeedUl').textContent=s.up_rate_h||'0 B/s';lastLimits={down:Number(s.down_limit||0),up:Number(s.up_limit||0)};$('statDlLimit').textContent=s.down_limit_h||'∞';$('statUlLimit').textContent=s.up_limit_h||'∞';$('statTotalDl').textContent=compactTransferText(s.total_down_h);$('statTotalUl').textContent=compactTransferText(s.total_up_h);drawTraffic(s.down_rate,s.up_rate);drawDiskUsage(s.disk);updateSocketStatus(s);applyFooterPreferences();});
updateSortHeaders(); applyColumnVisibility(); renderColumnManager(); renderFooterPreferences(); applyFooterPreferences(); updateFooterClock(); setInterval(updateFooterClock,1000); scheduleRender(true); if(!hasActiveProfile) renderNoProfileState(); loadLabels().catch(()=>{}); loadRatios().catch(()=>{}); loadSmartQueue().catch(()=>{}); loadAutomations().catch(()=>{}); if(portCheckEnabled) loadPortCheck(false); else renderPortCheck({status:'disabled',enabled:false}); if(hasActiveProfile) applyDefaultDownloadPath(false).catch(()=>{});
})();
socket.on('connect',()=>{ if(!hasActiveProfile){ showFirstRunSetup(); return; } $('connBadge').className='badge text-bg-success'; $('connBadge').textContent='online'; setInitialLoader('Loading torrents...','Connection is ready. Waiting for the first torrent snapshot.'); socket.emit('select_profile',{profile_id:window.PYTORRENT.activeProfile}); }); socket.on('disconnect',()=>{ $('connBadge').className='badge text-bg-danger'; $('connBadge').textContent='offline'; setInitialLoader('Waiting for connection...','pyTorrent is not connected yet. The application will open after data is received.'); }); socket.io.on('reconnect_attempt',()=>{ $('connBadge').className='badge text-bg-warning'; $('connBadge').textContent='reconnecting'; setInitialLoader('Reconnecting...','Trying to restore the live connection and load torrent data.'); }); socket.io.on('reconnect',()=>{ if(!hasActiveProfile){ showFirstRunSetup(); return; } $('connBadge').className='badge text-bg-success'; $('connBadge').textContent='online'; setInitialLoader('Loading torrents...','Connection restored. Waiting for the first torrent snapshot.'); socket.emit('select_profile',{profile_id:window.PYTORRENT.activeProfile}); }); socket.on('profile_required',()=>showFirstRunSetup()); socket.on('torrent_snapshot',msg=>{hasTorrentSnapshot=true;torrentSummary=msg.summary||null;torrents.clear();(msg.torrents||[]).forEach(t=>torrents.set(t.hash,t));scheduleRender(true);scheduleTrackerSummary(true);hideInitialLoader();}); socket.on('torrent_patch',msg=>{patchRows(msg);scheduleTrackerSummary(false);}); socket.on('job_update',()=>{ if(document.body.classList.contains('modal-open')) loadJobs().catch(()=>{}); }); socket.on('operation_started',msg=>{setBusy(true);markTorrentOperation(msg.hashes||[],msg.action,msg.job_id,'running');toast(`${msg.action} started`,'secondary');}); socket.on('operation_finished',msg=>{setBusy(false);clearJobOperation(msg.job_id,msg.hashes||[]);toast(`${msg.action} done`,'success');}); socket.on('operation_failed',msg=>{setBusy(false);clearJobOperation(msg.job_id,msg.hashes||[]);toast(`${msg.action}: ${msg.error}`,'danger');}); socket.on('rtorrent_error',msg=>{ if(msg.error){$('connBadge').className='badge badge-degraded';$('connBadge').textContent='degraded'; setInitialLoader('Waiting for rTorrent...','rTorrent is not ready yet. Data will appear automatically after it responds.');} }); socket.on('heartbeat',msg=>{ if(msg.error){$('connBadge').className='badge badge-degraded';$('connBadge').textContent='degraded'; setInitialLoader('Waiting for rTorrent...','rTorrent is not ready yet. Data will appear automatically after it responds.');} else if(socket.connected){$('connBadge').className='badge text-bg-success';$('connBadge').textContent='online';} }); socket.on('smart_queue_update',msg=>{ if(msg && msg.enabled){ toast(smartQueueToastMessage(msg),'secondary'); } }); socket.on('automation_update',msg=>{ if(msg?.applied?.length) toast(`Automations applied ${msg.applied.length} item(s)`,'secondary'); }); socket.on('torrent_stats_update',msg=>{ if(msg?.stats){ renderTorrentStats(msg.stats); } else if(msg?.error && $('toolTorrentStats') && !$('toolTorrentStats').classList.contains('d-none')){ toast(`Torrent stats: ${msg.error}`,'danger'); } }); socket.on('rtorrent_config_applied',msg=>{ if(msg?.result?.updated?.length) toast(`Startup rTorrent config applied (${msg.result.updated.length})`,'success'); if(msg?.error) toast(`Startup rTorrent config: ${msg.error}`,'danger'); }); socket.on('system_stats',s=>{
const usageAvailable=s.usage_available!==false && s.cpu!==undefined && s.ram!==undefined;
$('statCpuBox')?.classList.toggle('d-none',!usageAvailable);
$('statRamBox')?.classList.toggle('d-none',!usageAvailable);
$('systemChart')?.classList.toggle('d-none',!usageAvailable);
if(usageAvailable){
$('statCpu').textContent=s.cpu??'-';
$('statRam').textContent=s.ram??'-';
drawSystemUsage(s.cpu,s.ram);
}
$('statVersion').textContent=s.version||'-';
$('statDl').textContent=s.down_rate_h||'0 B/s';
$('statUl').textContent=s.up_rate_h||'0 B/s';
if($('mobileSpeedDl')) $('mobileSpeedDl').textContent=s.down_rate_h||'0 B/s';
if($('mobileSpeedUl')) $('mobileSpeedUl').textContent=s.up_rate_h||'0 B/s';
lastLimits={down:Number(s.down_limit||0),up:Number(s.up_limit||0)};
$('statDlLimit').textContent=s.down_limit_h||'∞';
$('statUlLimit').textContent=s.up_limit_h||'∞';
$('statTotalDl').textContent=compactTransferText(s.total_down_h);
$('statTotalUl').textContent=compactTransferText(s.total_up_h);
updateSpeedPeaks(s.speed_peaks||{});
updateBrowserSpeedTitle(s.down_rate_h||'0 B/s', s.up_rate_h||'0 B/s');
drawTraffic(s.down_rate,s.up_rate);
drawDiskUsage(s.disk);
updateSocketStatus(s);
applyFooterPreferences();
});
updateSortHeaders(); applyColumnVisibility(); renderColumnManager(); renderFooterPreferences(); applyFooterPreferences(); updateFooterClock(); updateBrowserSpeedTitle(); setInterval(updateFooterClock,1000); scheduleRender(true); if(!hasActiveProfile) renderNoProfileState(); loadLabels().catch(()=>{}); loadRatios().catch(()=>{}); loadSmartQueue().catch(()=>{}); loadAutomations().catch(()=>{}); if(portCheckEnabled) loadPortCheck(false); else renderPortCheck({status:'disabled',enabled:false}); if(hasActiveProfile) applyDefaultDownloadPath(false).catch(()=>{}); scheduleTrackerSummary(true);
})();
9
pytorrent/static/favicon.svg
Normal file
@@ -0,0 +1,9 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 64 64">
<rect x="14" y="20" width="36" height="30" rx="8" fill="#f8fafc" stroke="#0f172a" stroke-width="4"></rect>
<rect x="22" y="30" width="6" height="6" rx="3" fill="#0f172a"></rect>
<rect x="36" y="30" width="6" height="6" rx="3" fill="#0f172a"></rect>
<path d="M25 42h14" stroke="#0f172a" stroke-width="4" stroke-linecap="round"></path>
<path d="M32 20V10" stroke="#0f172a" stroke-width="4" stroke-linecap="round"></path>
<circle cx="32" cy="8" r="4" fill="#0f172a"></circle>
<path d="M14 34H8M56 34h-6" stroke="#0f172a" stroke-width="4" stroke-linecap="round"></path>
</svg>
After Width: | Height: | Size: 647 B |
File diff suppressed because it is too large
1
pytorrent/static/tracker_favicons
Symbolic link
@@ -0,0 +1 @@
../../data/tracker_favicons
26
pytorrent/templates/error.html
Normal file
@@ -0,0 +1,26 @@
<!doctype html>
<html lang="en" data-bs-theme="dark">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>pyTorrent {{ code }}</title>
<link rel="icon" href="{{ static_url('favicon.svg') }}" type="image/svg+xml">
<link rel="shortcut icon" href="{{ static_url('favicon.svg') }}" type="image/svg+xml">
<link href="{{ bootstrap_theme_url('default') }}" rel="stylesheet">
<link href="{{ frontend_asset_url('fontawesome_css') }}" rel="stylesheet">
<link href="{{ static_url('styles.css') }}" rel="stylesheet">
</head>
<body class="error-page">
<main class="error-card" role="alert">
<div class="error-brand"><i class="fa-solid fa-robot"></i> pyTorrent</div>
<div class="error-icon" aria-hidden="true"><i class="fa-solid {{ icon }}"></i></div>
<p class="error-code">{{ code }}</p>
<h1>{{ title }}</h1>
<p>{{ message }}</p>
<div class="error-actions">
<a class="btn btn-primary" href="{{ url_for('main.index') }}"><i class="fa-solid fa-house"></i> Back to dashboard</a>
<a class="btn btn-outline-secondary" href="{{ url_for('main.docs') }}"><i class="fa-solid fa-book"></i> API docs</a>
</div>
</main>
</body>
</html>
File diff suppressed because one or more lines are too long
29
pytorrent/templates/login.html
Normal file
@@ -0,0 +1,29 @@
<!doctype html>
<html lang="en" data-bs-theme="dark">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>pyTorrent login</title>
<link rel="icon" href="{{ static_url('favicon.svg') }}" type="image/svg+xml">
<link rel="shortcut icon" href="{{ static_url('favicon.svg') }}" type="image/svg+xml">
<link href="{{ bootstrap_theme_url('default') }}" rel="stylesheet">
<link href="{{ frontend_asset_url('fontawesome_css') }}" rel="stylesheet">
<link href="{{ static_url('styles.css') }}" rel="stylesheet">
</head>
<body class="auth-page">
<main class="initial-loader-card auth-card">
<div class="initial-loader-brand"><i class="fa-solid fa-robot"></i> pyTorrent</div>
<div class="auth-lock" aria-hidden="true"><i class="fa-solid fa-lock"></i></div>
<h1 class="initial-loader-title">Sign in</h1>
<p class="initial-loader-text">Authentication is enabled for this pyTorrent instance.</p>
{% if error %}<div class="alert alert-danger auth-alert">{{ error }}</div>{% endif %}
<form class="auth-form" method="post">
<label class="form-label" for="username">User</label>
<input id="username" class="form-control" name="username" autocomplete="username" autofocus>
<label class="form-label" for="password">Password</label>
<input id="password" class="form-control" name="password" type="password" autocomplete="current-password">
<button class="btn btn-primary w-100" type="submit"><i class="fa-solid fa-right-to-bracket"></i> Log in</button>
</form>
</main>
</body>
</html>
@@ -4,3 +4,4 @@ python-dotenv>=1.0
geoip2>=4.8
psutil>=5.9
simple-websocket>=1.0
gunicorn>=22.0
113
scripts/download_frontend_libs.py
Executable file
@@ -0,0 +1,113 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import re
|
||||||
|
from pathlib import Path
|
||||||
|
from urllib.parse import urljoin
|
||||||
|
from urllib.request import Request, urlopen
|
||||||
|
|
||||||
|
ROOT = Path(__file__).resolve().parents[1]
|
||||||
|
LIBS_STATIC_DIR = "libs"
|
||||||
|
BOOTSTRAP_VERSION = "5.3.3"
|
||||||
|
BOOTSWATCH_VERSION = "5.3.3"
|
||||||
|
FONTAWESOME_VERSION = "6.5.2"
|
||||||
|
FLAG_ICONS_VERSION = "7.2.3"
|
||||||
|
SWAGGER_UI_VERSION = "5"
|
||||||
|
SOCKET_IO_VERSION = "4.7.5"
|
||||||
|
BOOTSTRAP_THEMES = (
|
||||||
|
"default",
|
||||||
|
"flatly",
|
||||||
|
"litera",
|
||||||
|
"lumen",
|
||||||
|
"minty",
|
||||||
|
"sketchy",
|
||||||
|
"solar",
|
||||||
|
"spacelab",
|
||||||
|
"united",
|
||||||
|
"zephyr",
|
||||||
|
)
|
||||||
|
STATIC_ASSETS = {
|
||||||
|
"bootstrap_js": {
|
||||||
|
"local": f"{LIBS_STATIC_DIR}/bootstrap/{BOOTSTRAP_VERSION}/js/bootstrap.bundle.min.js",
|
||||||
|
"cdn": f"https://cdn.jsdelivr.net/npm/bootstrap@{BOOTSTRAP_VERSION}/dist/js/bootstrap.bundle.min.js",
|
||||||
|
},
|
||||||
|
"socket_io_js": {
|
||||||
|
"local": f"{LIBS_STATIC_DIR}/socket.io/{SOCKET_IO_VERSION}/socket.io.min.js",
|
||||||
|
"cdn": f"https://cdn.socket.io/{SOCKET_IO_VERSION}/socket.io.min.js",
|
||||||
|
},
|
||||||
|
"fontawesome_css": {
|
||||||
|
"local": f"{LIBS_STATIC_DIR}/fontawesome/{FONTAWESOME_VERSION}/css/all.min.css",
|
||||||
|
"cdn": f"https://cdnjs.cloudflare.com/ajax/libs/font-awesome/{FONTAWESOME_VERSION}/css/all.min.css",
|
||||||
|
},
|
||||||
|
"flag_icons_css": {
|
||||||
|
"local": f"{LIBS_STATIC_DIR}/flag-icons/{FLAG_ICONS_VERSION}/css/flag-icons.min.css",
|
||||||
|
"cdn": f"https://cdn.jsdelivr.net/gh/lipis/flag-icons@{FLAG_ICONS_VERSION}/css/flag-icons.min.css",
|
||||||
|
},
|
||||||
|
"swagger_css": {
|
||||||
|
"local": f"{LIBS_STATIC_DIR}/swagger-ui/{SWAGGER_UI_VERSION}/swagger-ui.css",
|
||||||
|
"cdn": f"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{SWAGGER_UI_VERSION}/swagger-ui.css",
|
||||||
|
},
|
||||||
|
"swagger_js": {
|
||||||
|
"local": f"{LIBS_STATIC_DIR}/swagger-ui/{SWAGGER_UI_VERSION}/swagger-ui-bundle.js",
|
||||||
|
"cdn": f"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{SWAGGER_UI_VERSION}/swagger-ui-bundle.js",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
URL_RE = re.compile(r"url\((['\"]?)(?!data:)(?!https?:)([^)'\"]+)\1\)")
|
||||||
|
|
||||||
|
|
||||||
|
def bootstrap_css_asset(theme: str) -> dict[str, str]:
|
||||||
|
if theme == "default":
|
||||||
|
return {
|
||||||
|
"local": f"{LIBS_STATIC_DIR}/bootstrap/{BOOTSTRAP_VERSION}/css/bootstrap.min.css",
|
||||||
|
"cdn": f"https://cdn.jsdelivr.net/npm/bootstrap@{BOOTSTRAP_VERSION}/dist/css/bootstrap.min.css",
|
||||||
|
}
|
||||||
|
return {
|
||||||
|
"local": f"{LIBS_STATIC_DIR}/bootswatch/{BOOTSWATCH_VERSION}/{theme}/bootstrap.min.css",
|
||||||
|
"cdn": f"https://cdn.jsdelivr.net/npm/bootswatch@{BOOTSWATCH_VERSION}/dist/{theme}/bootstrap.min.css",
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
def download(url: str, dest: Path) -> None:
|
||||||
|
dest.parent.mkdir(parents=True, exist_ok=True)
|
||||||
|
req = Request(url, headers={"User-Agent": "pyTorrent installer"})
|
||||||
|
with urlopen(req, timeout=60) as response:
|
||||||
|
data = response.read()
|
||||||
|
if not data:
|
||||||
|
raise RuntimeError(f"Empty response for {url}")
|
||||||
|
tmp = dest.with_suffix(dest.suffix + ".tmp")
|
||||||
|
tmp.write_bytes(data)
|
||||||
|
tmp.replace(dest)
|
||||||
|
print(f"OK {dest.relative_to(ROOT)}")
|
||||||
|
|
||||||
|
|
||||||
|
def download_css_with_assets(url: str, dest: Path) -> None:
|
||||||
|
download(url, dest)
|
||||||
|
text = dest.read_text(encoding="utf-8", errors="ignore")
|
||||||
|
for match in URL_RE.finditer(text):
|
||||||
|
rel = match.group(2).split("#", 1)[0].split("?", 1)[0]
|
||||||
|
if not rel:
|
||||||
|
continue
|
||||||
|
asset_url = urljoin(url, rel)
|
||||||
|
asset_dest = (dest.parent / rel).resolve()
|
||||||
|
try:
|
||||||
|
asset_dest.relative_to(ROOT)
|
||||||
|
except ValueError:
|
||||||
|
continue
|
||||||
|
if not asset_dest.exists():
|
||||||
|
download(asset_url, asset_dest)
|
||||||
|
|
||||||
|
|
||||||
|
def main() -> None:
|
||||||
|
items = list(STATIC_ASSETS.values())
|
||||||
|
items.extend(bootstrap_css_asset(theme) for theme in BOOTSTRAP_THEMES)
|
||||||
|
for item in items:
|
||||||
|
url = item["cdn"]
|
||||||
|
dest = ROOT / "pytorrent" / "static" / item["local"]
|
||||||
|
if dest.suffix == ".css":
|
||||||
|
download_css_with_assets(url, dest)
|
||||||
|
else:
|
||||||
|
download(url, dest)
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
main()
|
||||||
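The `URL_RE` pattern above drives which sub-assets (webfonts, sprites) get mirrored alongside a downloaded CSS file. A standalone check of its behavior, using hypothetical CSS snippets for illustration:

```python
import re

# Same pattern as in the script: match relative url(...) references,
# skip data: URIs and absolute http(s) URLs.
URL_RE = re.compile(r"url\((['\"]?)(?!data:)(?!https?:)([^)'\"]+)\1\)")

css = (
    '@font-face { src: url("../webfonts/fa-solid-900.woff2"); }'
    ".a { background: url(data:image/png;base64,AAAA); }"
    ".b { background: url(https://example.com/logo.png); }"
)

rels = [m.group(2) for m in URL_RE.finditer(css)]
print(rels)  # -> ['../webfonts/fa-solid-900.woff2']
```

Only the relative reference survives, which is exactly what `download_css_with_assets` then resolves against the CSS file's CDN URL and mirrors locally.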
@@ -1,16 +1,25 @@
 [Unit]
-Description=pyTorrent web UI for rTorrent
-After=network.target
+Description=pyTorrent Web UI
+After=network-online.target
+Wants=network-online.target
 
 [Service]
 Type=simple
-WorkingDirectory=/opt/pytorrent
-EnvironmentFile=/opt/pytorrent/.env
-ExecStart=/opt/pytorrent/venv/bin/python /opt/pytorrent/app.py
+#User=root
+#Group=root
+User=pytorrent
+Group=pytorrent
+WorkingDirectory=/opt/pyTorrent
+Environment="PYTHONUNBUFFERED=1"
+EnvironmentFile=/opt/pyTorrent/.env
+# Note: threaded Gunicorn preserves Flask-SocketIO background tasks without running Werkzeug in production.
+ExecStart=/opt/pyTorrent/venv/bin/gunicorn -c /opt/pyTorrent/gunicorn.conf.py --worker-class gthread --workers 1 --threads 32 --bind ${PYTORRENT_HOST}:${PYTORRENT_PORT} --access-logfile - --error-logfile - wsgi:app
 Restart=always
 RestartSec=3
-User=www-data
-Group=www-data
+KillSignal=SIGINT
+TimeoutStopSec=20
+NoNewPrivileges=true
+PrivateTmp=true
 
 [Install]
 WantedBy=multi-user.target
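The new ExecStart points at a `gunicorn.conf.py` that this diff does not show. A minimal sketch of what such a file could contain, assuming the same threaded single-worker setup as the CLI flags (which take precedence over file settings when both are given); the option values here are illustrative, not the repository's actual config:

```python
# gunicorn.conf.py -- illustrative sketch, not the file from the repo.
worker_class = "gthread"  # threaded workers keep SocketIO background tasks in-process
workers = 1               # single process so in-memory torrent state stays shared
threads = 32
graceful_timeout = 15     # let in-flight requests drain after SIGINT
accesslog = "-"           # "-" sends logs to stdout/stderr, i.e. the journal
errorlog = "-"
```

All of these are standard Gunicorn settings; KillSignal=SIGINT in the unit matters because Gunicorn treats SIGINT as a graceful shutdown.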