[Scummvm-git-logs] scummvm-sites integrity -> 203f60b547f6e2592998e7bc01945583a3fa1137

sev- noreply at scummvm.org
Thu Aug 14 20:21:29 UTC 2025


This automated email contains information about 51 new commits which have been
pushed to the 'scummvm-sites' repo located at https://api.github.com/repos/scummvm/scummvm-sites .

Summary:
2fb7203a5a INTEGRITY: Only skip processing for entries with the same status when a matching key is present.
c12c5c465b INTEGRITY: Log dropped candidates that were missed earlier.
2b1c7e7d08 INTEGRITY: Add fileset creation details in log when new fileset is not deleted.
a760e2b693 INTEGRITY: Fix placeholder for widetable query.
6f12ea1dee INTEGRITY: Redirect fileset url if id exceeds the bounds.
0f919dee09 INTEGRITY: Fix the issue of filesets with null fields not being filtered.
589770009b INTEGRITY: Join game table before engine table.
6c692292e8 INTEGRITY: Add max and min pages in dashboard.
eaaa899e56 INTEGRITY: Improve merge workflow.
7bd2979b48 INTEGRITY: Add detection type for full checksums in detection entries
c160766734 INTEGRITY: Update timestamp for detection files in partial fileset conversion for set.dat
9a384d4367 INTEGRITY: Add extra error handling for parsing.
12cfb77b85 INTEGRITY: Add commit/rollback transaction support to match_fileset and db_insert.
b6ae1265a1 INTEGRITY: Remove early string escaping for database logs as queries have been parametrised
1ba6b8bf69 INTEGRITY: Remove deprecated/redundant code.
696611f39a INTEGRITY: Improve homepage navbar.
1612a80ce2 INTEGRITY: Fix incorrect fileset redirection issue in logs.
2de285adda INTEGRITY: Add fileset redirection message for merged filesets in the new fileset.
0c57b6ffc8 INTEGRITY: Display only matched files in confirm merge by default, introduce checkboxes for showing more details.
9d9e8e9344 INTEGRITY: Add check for matching files by missing size and hide merge button once clicked.
53543f1eb4 INTEGRITY: Add configuration page with a feature to select items per page.
ba40176a6f INTEGRITY: Add user details in manual merge log.
fb13d68964 INTEGRITY: Add config page url in the homepage.
c742c0f4a4 INTEGRITY: Add user.dat processing logic.
850ee1ff1b INTEGRITY: Add comprehensive search criteria in log with OR, AND conditions.
ee694afc42 INTEGRITY: Add separate logs per page and filesets per page in config.
2c4458dc5b INTEGRITY: Add/update deployment related files.
88177064e4 INTEGRITY: Add underline on hover for navbar links.
9e93e50919 INTEGRITY: Add favicon.
5c69679558 INTEGRITY: Add column width variable in config.
948adac460 INTEGRITY: Add metadata information in seeding logs.
4bd0f4c622 INTEGRITY: Move html text file to static folder.
489966cb0d INTEGRITY: Add visual symbols for sorting.
906bc7c214 INTEGRITY: Add checksum filtering in fileset search page.
146795da94 INTEGRITY: Encode url variables before changing page.
e520c15eea INTEGRITY: Fix the fileset details being displayed in merge dashboard.
469b9b8df9 INTEGRITY: Add default state for sorting along with ascending and descending.
c53d37b710 INTEGRITY: Refactor confirm merge code.
0414ac9a9a INTEGRITY: Remove icon for default sorting state.
2dc641b323 INTEGRITY: Decode macbinary's filename as mac roman instead of utf-8.
3be01c8148 INTEGRITY: Wrap filename in scanned dat in double quotes instead of single.
f5d5636f3a INTEGRITY: Update size filtering logic for scan.dat for macfiles.
c04c669aaa INTEGRITY: Check for rt and dt checktype suffix while adding equal checksums.
b424821fcc INTEGRITY: Add validation checks on user data from the payload along with rate limiting.
b4fb9213a1 INTEGRITY: Add python virtual environment path in apache config file.
a130a0c811 INTEGRITY: Remove apache basic auth from validate endpoint.
9934d9fb97 INTEGRITY: Delete unused files.
34686f1cd9 INTEGRITY: Restructure project to a python module.
b825448fe8 INTEGRITY: Set up uv for package management.
3d6bf4675f INTEGRITY: Improve error handling in compute_hash.py
203f60b547 INTEGRITY: Print entire traceback when unknown exception is caught in dat_parser.py


Commit: 2fb7203a5a5dcfdd99e855d091e40aedda50df77
    https://github.com/scummvm/scummvm-sites/commit/2fb7203a5a5dcfdd99e855d091e40aedda50df77
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Only skip processing for entries with the same status when a matching key is present.

Changed paths:
    db_functions.py


diff --git a/db_functions.py b/db_functions.py
index a41c409..4dea844 100644
--- a/db_functions.py
+++ b/db_functions.py
@@ -131,16 +131,18 @@ def insert_fileset(
     # Check if key/megakey already exists, if so, skip insertion (no quotes on purpose)
     if detection:
         with conn.cursor() as cursor:
-            cursor.execute("SELECT id FROM fileset WHERE megakey = %s", (megakey,))
+            cursor.execute(
+                "SELECT id, status FROM fileset WHERE megakey = %s", (megakey,)
+            )
 
             existing_entry = cursor.fetchone()
     else:
         with conn.cursor() as cursor:
-            cursor.execute("SELECT id FROM fileset WHERE `key` = %s", (key,))
+            cursor.execute("SELECT id, status FROM fileset WHERE `key` = %s", (key,))
 
             existing_entry = cursor.fetchone()
 
-    if existing_entry is not None:
+    if (existing_entry is not None) and (status == existing_entry["status"]):
         existing_entry = existing_entry["id"]
         with conn.cursor() as cursor:
             cursor.execute("SET @fileset_last = %s", (existing_entry,))
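
For context, a minimal sketch of the rule this diff introduces, assuming a
pymysql DictCursor over the fileset table queried above (names here are
illustrative, not the repo's actual helper): a key/megakey hit alone no
longer short-circuits insertion; the statuses must also agree.

    # Sketch of the new skip rule; cursor is assumed to be a pymysql DictCursor.
    def should_skip_insert(cursor, key, status):
        cursor.execute("SELECT id, status FROM fileset WHERE `key` = %s", (key,))
        existing = cursor.fetchone()
        # Previously any key hit skipped the insert; now a 'scan' entry no
        # longer shadows a 'dat' entry sharing the same key, and vice versa.
        return existing is not None and existing["status"] == status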


Commit: c12c5c465b306efbeb270d67e9c8d757df5dee98
    https://github.com/scummvm/scummvm-sites/commit/c12c5c465b306efbeb270d67e9c8d757df5dee98
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Log dropped candidates that were missed earlier.

Changed paths:
    db_functions.py


diff --git a/db_functions.py b/db_functions.py
index 4dea844..f943082 100644
--- a/db_functions.py
+++ b/db_functions.py
@@ -1707,6 +1707,8 @@ def set_process(
     set_to_candidate_dict = defaultdict(list)
     id_to_fileset_dict = defaultdict(dict)
 
+    no_candidate_logs = []
+
     # Deep copy to avoid changes in game_data in the loop affecting the lookup map.
     game_data_lookup = {fs["name"]: copy.deepcopy(fs) for fs in game_data}
 
@@ -1755,7 +1757,6 @@ def set_process(
 
         # Separating out the matching logic for glk engine
         engine_name = fileset["sourcefile"].split("-")[0]
-
         (candidate_filesets, fileset_count) = set_filter_candidate_filesets(
             fileset_id, fileset, fileset_count, transaction_id, engine_name, conn
         )
@@ -1768,20 +1769,27 @@ def set_process(
                 fileset["description"] if "description" in fileset else ""
             )
             log_text = f"Drop fileset as no matching candidates. Name: {fileset_name}, Description: {fileset_description}."
+            console_log_text = f"Early fileset drop as no matching candidates. Name: {fileset_name}, Description: {fileset_description}."
+            no_candidate_logs.append(console_log_text)
             create_log(
                 escape_string(category_text), user, escape_string(log_text), conn
             )
             dropped_early_no_candidate += 1
             delete_original_fileset(fileset_id, conn)
+            continue
         id_to_fileset_dict[fileset_id] = fileset
         set_to_candidate_dict[fileset_id].extend(candidate_filesets)
 
-    console_message = "Candidate filtering finished."
-    console_log(console_message)
+    for console_log_text in no_candidate_logs:
+        console_log(console_log_text)
+    no_candidate_logs = []
+
     console_message = (
-        f"{dropped_early_no_candidate} Filesets Dropped for No candidates."
+        f"{dropped_early_no_candidate} Filesets Dropped Early for having no candidates."
     )
     console_log(console_message)
+    console_message = "Candidate filtering finished."
+    console_log(console_message)
     console_message = "Looking for duplicates..."
     console_log(console_message)
 
@@ -1848,6 +1856,7 @@ def set_process(
             auto_merged_filesets,
             manual_merged_filesets,
             mismatch_filesets,
+            dropped_early_no_candidate,
         ) = set_perform_match(
             fileset,
             src,
@@ -1861,13 +1870,18 @@ def set_process(
             mismatch_filesets,
             manual_merge_map,
             set_to_candidate_dict,
+            dropped_early_no_candidate,
+            no_candidate_logs,
             conn,
             skiplog,
         )
-
         match_count += 1
+
     console_log("Matching performed.")
 
+    for console_log_text in no_candidate_logs:
+        console_log(console_log_text)
+
     with conn.cursor() as cursor:
         for fileset_id, candidates in manual_merge_map.items():
             if len(candidates) == 0:
@@ -1878,15 +1892,17 @@ def set_process(
                     fileset["description"] if "description" in fileset else ""
                 )
                 log_text = f"Drop fileset as no matching candidates. Name: {fileset_name}, Description: {fileset_description}."
+                console_log_text = f"Fileset dropped as no candidates anymore. Name: {fileset_name}, Description: {fileset_description}."
+                console_log(console_log_text)
                 create_log(
                     escape_string(category_text), user, escape_string(log_text), conn
                 )
                 dropped_early_no_candidate += 1
+                manual_merged_filesets -= 1
                 delete_original_fileset(fileset_id, conn)
             else:
                 category_text = "Manual Merge Required"
                 log_text = f"Merge Fileset:{fileset_id} manually. Possible matches are: {', '.join(f'Fileset:{id}' for id in candidates)}."
-                manual_merged_filesets += 1
                 add_manual_merge(
                     candidates,
                     fileset_id,
@@ -1962,14 +1978,30 @@ def set_perform_match(
     mismatch_filesets,
     manual_merge_map,
     set_to_candidate_dict,
+    dropped_early_no_candidate,
+    no_candidate_logs,
     conn,
     skiplog,
 ):
     """
-    "Performs matching for set.dat"
+    Performs matching for set.dat
     """
     with conn.cursor() as cursor:
-        if len(candidate_filesets) == 1:
+        if len(candidate_filesets) == 0:
+            category_text = "Drop fileset - No Candidates"
+            fileset_name = fileset["name"] if "name" in fileset else ""
+            fileset_description = (
+                fileset["description"] if "description" in fileset else ""
+            )
+            log_text = f"Drop fileset as no matching candidates. Name: {fileset_name}, Description: {fileset_description}."
+            console_log_text = f"Fileset dropped as no candidates anymore. Name: {fileset_name}, Description: {fileset_description}."
+            no_candidate_logs.append(console_log_text)
+            create_log(
+                escape_string(category_text), user, escape_string(log_text), conn
+            )
+            dropped_early_no_candidate += 1
+            delete_original_fileset(fileset_id, conn)
+        elif len(candidate_filesets) == 1:
             matched_fileset_id = candidate_filesets[0]
             cursor.execute(
                 "SELECT status FROM fileset WHERE id = %s", (matched_fileset_id,)
@@ -2032,12 +2064,14 @@ def set_perform_match(
 
         elif len(candidate_filesets) > 1:
             manual_merge_map[fileset_id] = candidate_filesets
+            manual_merged_filesets += 1
 
     return (
         fully_matched_filesets,
         auto_merged_filesets,
         manual_merged_filesets,
         mismatch_filesets,
+        dropped_early_no_candidate,
     )
 
 
@@ -2247,6 +2281,7 @@ def set_filter_candidate_filesets(
                 filesize = f["size"]
                 if is_glk and (filesize in set_glk_file_size or filesize == 0):
                     count += 1
+                    continue
                 if (filename, filesize) in set_file_name_size:
                     if filesize == -1:
                         count += 1
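
The deferred-logging pattern above (append to no_candidate_logs inside the
loop, flush after the summary counter) keeps per-fileset drop messages from
interleaving with the progress output. A self-contained sketch of the
pattern, with stand-ins for the repo's console_log helper and candidate
lookup:

    # Illustrative only: console_log and the candidate lookup are stand-ins.
    def console_log(message):
        print(message)

    def process(filesets):
        no_candidate_logs = []
        dropped_early_no_candidate = 0
        for fileset in filesets:
            if not fileset.get("candidates"):
                no_candidate_logs.append(
                    f"Early fileset drop as no matching candidates. Name: {fileset['name']}"
                )
                dropped_early_no_candidate += 1
                continue
            # ... matching work for filesets that do have candidates ...
        console_log(
            f"{dropped_early_no_candidate} Filesets Dropped Early for having no candidates."
        )
        for line in no_candidate_logs:
            console_log(line)

    process([{"name": "game-a"}, {"name": "game-b", "candidates": [42]}])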


Commit: 2b1c7e7d08a5f41838b9e6136bfb6d32c6bc45bb
    https://github.com/scummvm/scummvm-sites/commit/2b1c7e7d08a5f41838b9e6136bfb6d32c6bc45bb
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add fileset creation details in log when new fileset is not deleted.

Changed paths:
    db_functions.py


diff --git a/db_functions.py b/db_functions.py
index f943082..1836788 100644
--- a/db_functions.py
+++ b/db_functions.py
@@ -183,10 +183,10 @@ def insert_fileset(
 
     log_text = f"Created Fileset:{fileset_last}, {log_text}"
     if src == "user":
-        log_text = f"Created Fileset:{fileset_last}, from user: IP {ip}, {log_text}"
+        log_text = f"Created Fileset:{fileset_last}, from user: IP {ip}."
 
     user = f"cli:{getpass.getuser()}" if username is None else username
-    if not skiplog:
+    if not skiplog and detection:
         log_last = create_log(
             escape_string(category_text), user, escape_string(log_text), conn
         )
@@ -1033,7 +1033,7 @@ def scan_process(
             fileset_description = (
                 fileset["description"] if "description" in fileset else ""
             )
-            log_text = f"Drop fileset as no matching candidates. Name: {fileset_name}, Description: {fileset_description}."
+            log_text = f"Drop fileset as no matching candidates. Name: {fileset_name} Description: {fileset_description}."
             create_log(
                 escape_string(category_text), user, escape_string(log_text), conn
             )
@@ -1169,6 +1169,8 @@ def scan_perform_match(
         Put them for manual merge.
     """
     with conn.cursor() as cursor:
+        fileset_name = fileset["name"] if "name" in fileset else ""
+        fileset_description = fileset["description"] if "description" in fileset else ""
         if len(candidate_filesets) == 1:
             matched_fileset_id = candidate_filesets[0]
             cursor.execute(
@@ -1180,6 +1182,15 @@ def scan_perform_match(
             if status == "partial":
                 # Partial filesets contain all the files, so does the scanned filesets, so this case should not ideally happen.
                 if total_files(matched_fileset_id, conn) > total_fileset_files(fileset):
+                    log_text = f"Created Fileset:{fileset_id}. Name: {fileset_name} Description: {fileset_description}"
+                    category_text = "Uploaded from scan."
+                    create_log(
+                        escape_string(category_text),
+                        user,
+                        escape_string(log_text),
+                        conn,
+                    )
+                    console_log(log_text)
                     category_text = "Missing files"
                     log_text = f"Missing files in Fileset:{fileset_id}. Try manual merge with Fileset:{matched_fileset_id}."
                     add_manual_merge(
@@ -1229,6 +1240,15 @@ def scan_perform_match(
                         automatic_merged_filesets += 1
 
                 else:
+                    log_text = f"Created Fileset:{fileset_id}. Name: {fileset_name} Description: {fileset_description}"
+                    category_text = "Uploaded from scan."
+                    create_log(
+                        escape_string(category_text),
+                        user,
+                        escape_string(log_text),
+                        conn,
+                    )
+                    console_log(log_text)
                     category_text = "Manual Merge - Detection found"
                     log_text = f"Matched with detection. Merge Fileset:{fileset_id} manually with Fileset:{matched_fileset_id}."
                     add_manual_merge(
@@ -1269,6 +1289,12 @@ def scan_perform_match(
                 delete_original_fileset(fileset_id, conn)
 
         elif len(candidate_filesets) > 1:
+            log_text = f"Created Fileset:{fileset_id}. Name: {fileset_name} Description: {fileset_description}"
+            category_text = "Uploaded from scan."
+            create_log(
+                escape_string(category_text), user, escape_string(log_text), conn
+            )
+            console_log(log_text)
             category_text = "Manual Merge - Multiple Candidates"
             log_text = f"Merge Fileset:{fileset_id} manually. Possible matches are: {', '.join(f'Fileset:{id}' for id in candidate_filesets)}."
             manual_merged_filesets += 1
@@ -1768,8 +1794,8 @@ def set_process(
             fileset_description = (
                 fileset["description"] if "description" in fileset else ""
             )
-            log_text = f"Drop fileset as no matching candidates. Name: {fileset_name}, Description: {fileset_description}."
-            console_log_text = f"Early fileset drop as no matching candidates. Name: {fileset_name}, Description: {fileset_description}."
+            log_text = f"Drop fileset as no matching candidates. Name: {fileset_name} Description: {fileset_description}."
+            console_log_text = f"Early fileset drop as no matching candidates. Name: {fileset_name} Description: {fileset_description}."
             no_candidate_logs.append(console_log_text)
             create_log(
                 escape_string(category_text), user, escape_string(log_text), conn
@@ -1829,7 +1855,7 @@ def set_process(
                 fileset_description = (
                     fileset["description"] if "description" in fileset else ""
                 )
-                log_text = f"Drop fileset, multiple filesets mapping to single detection. Name: {fileset_name}, Description: {fileset_description}. Clashed with Fileset:{candidate} ({engine}:{gameid}-{platform}-{language})"
+                log_text = f"Drop fileset, multiple filesets mapping to single detection. Name: {fileset_name} Description: {fileset_description}. Clashed with Fileset:{candidate} ({engine}:{gameid}-{platform}-{language})"
                 console_log(log_text)
                 create_log(
                     escape_string(category_text), user, escape_string(log_text), conn
@@ -1884,15 +1910,15 @@ def set_process(
 
     with conn.cursor() as cursor:
         for fileset_id, candidates in manual_merge_map.items():
+            fileset = id_to_fileset_dict[fileset_id]
+            fileset_name = fileset["name"] if "name" in fileset else ""
+            fileset_description = (
+                fileset["description"] if "description" in fileset else ""
+            )
             if len(candidates) == 0:
                 category_text = "Drop fileset - No Candidates"
-                fileset = id_to_fileset_dict[fileset_id]
-                fileset_name = fileset["name"] if "name" in fileset else ""
-                fileset_description = (
-                    fileset["description"] if "description" in fileset else ""
-                )
-                log_text = f"Drop fileset as no matching candidates. Name: {fileset_name}, Description: {fileset_description}."
-                console_log_text = f"Fileset dropped as no candidates anymore. Name: {fileset_name}, Description: {fileset_description}."
+                log_text = f"Drop fileset as no matching candidates. Name: {fileset_name} Description: {fileset_description}."
+                console_log_text = f"Fileset dropped as no candidates anymore. Name: {fileset_name} Description: {fileset_description}."
                 console_log(console_log_text)
                 create_log(
                     escape_string(category_text), user, escape_string(log_text), conn
@@ -1901,6 +1927,12 @@ def set_process(
                 manual_merged_filesets -= 1
                 delete_original_fileset(fileset_id, conn)
             else:
+                log_text = f"Created Fileset:{fileset_id}. Name: {fileset_name} Description: {fileset_description}"
+                category_text = "Uploaded from dat."
+                create_log(
+                    escape_string(category_text), user, escape_string(log_text), conn
+                )
+                console_log(log_text)
                 category_text = "Manual Merge Required"
                 log_text = f"Merge Fileset:{fileset_id} manually. Possible matches are: {', '.join(f'Fileset:{id}' for id in candidates)}."
                 add_manual_merge(
@@ -1987,14 +2019,12 @@ def set_perform_match(
     Performs matching for set.dat
     """
     with conn.cursor() as cursor:
+        fileset_name = fileset["name"] if "name" in fileset else ""
+        fileset_description = fileset["description"] if "description" in fileset else ""
         if len(candidate_filesets) == 0:
             category_text = "Drop fileset - No Candidates"
-            fileset_name = fileset["name"] if "name" in fileset else ""
-            fileset_description = (
-                fileset["description"] if "description" in fileset else ""
-            )
-            log_text = f"Drop fileset as no matching candidates. Name: {fileset_name}, Description: {fileset_description}."
-            console_log_text = f"Fileset dropped as no candidates anymore. Name: {fileset_name}, Description: {fileset_description}."
+            log_text = f"Drop fileset as no matching candidates. Name: {fileset_name} Description: {fileset_description}."
+            console_log_text = f"Fileset dropped as no candidates anymore. Name: {fileset_name} Description: {fileset_description}."
             no_candidate_logs.append(console_log_text)
             create_log(
                 escape_string(category_text), user, escape_string(log_text), conn
@@ -2048,6 +2078,15 @@ def set_perform_match(
                     delete_original_fileset(fileset_id, conn)
 
                 else:
+                    log_text = f"Created Fileset:{fileset_id}. Name: {fileset_name} Description: {fileset_description}"
+                    category_text = "Uploaded from dat."
+                    create_log(
+                        escape_string(category_text),
+                        user,
+                        escape_string(log_text),
+                        conn,
+                    )
+                    console_log(log_text)
                     category_text = "Mismatch"
                     log_text = f"Fileset:{fileset_id} mismatched with Fileset:{matched_fileset_id} with status:{status}. Try manual merge. Unmatched Files in set.dat fileset = {len(unmatched_dat_files)} Unmatched Files in candidate fileset = {len(unmatched_candidate_files)}. List of unmatched files scan.dat : {', '.join(scan_file for scan_file in unmatched_dat_files)}, List of unmatched files full fileset : {', '.join(scan_file for scan_file in unmatched_candidate_files)}"
                     console_log(log_text)
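
Incidentally, the name/description guards this diff hoists to the top of the
function could also be spelled with dict.get; the two forms are equivalent
when the key is missing (a small illustrative check, not repo code):

    fileset = {"description": "CD release"}
    name = fileset["name"] if "name" in fileset else ""
    assert name == fileset.get("name", "")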


Commit: a760e2b6934ec0e6e5982d953039548efc9a801c
    https://github.com/scummvm/scummvm-sites/commit/a760e2b6934ec0e6e5982d953039548efc9a801c
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Fix placeholder for widetable query.

Changed paths:
    clear.py
    fileset.py


diff --git a/clear.py b/clear.py
index acdae14..ccc5588 100644
--- a/clear.py
+++ b/clear.py
@@ -19,7 +19,7 @@ def truncate_all_tables(conn):
 
     for table in tables:
         try:
-            cursor.execute("TRUNCATE TABLE %s", (table,))
+            cursor.execute(f"TRUNCATE TABLE `{table}`")
             print(f"Table '{table}' truncated successfully")
         except pymysql.Error as err:
             print(f"Error truncating table '{table}': {err}")
diff --git a/fileset.py b/fileset.py
index a45556e..d43df8f 100644
--- a/fileset.py
+++ b/fileset.py
@@ -266,8 +266,7 @@ def fileset():
             if widetable == "full":
                 file_ids = [file["id"] for file in result]
                 cursor.execute(
-                    "SELECT file, checksum, checksize, checktype FROM filechecksum WHERE file IN (%s)",
-                    (",".join(map(str, file_ids)),),
+                    f"SELECT file, checksum, checksize, checktype FROM filechecksum WHERE file IN ({','.join(map(str, file_ids))})"
                 )
                 checksums = cursor.fetchall()
 


Commit: 6f12ea1dee6c0ba7f6f63a57ebee1244f3f6967d
    https://github.com/scummvm/scummvm-sites/commit/6f12ea1dee6c0ba7f6f63a57ebee1244f3f6967d
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Redirect fileset url if id exceeds the bounds.

Changed paths:
    fileset.py


diff --git a/fileset.py b/fileset.py
index d43df8f..fdc7e2a 100644
--- a/fileset.py
+++ b/fileset.py
@@ -118,6 +118,11 @@ def fileset():
             cursor.execute("SELECT MAX(id) FROM fileset")
             max_id = cursor.fetchone()["MAX(id)"]
 
+            if id > max_id:
+                return redirect(f"/fileset?id={max_id}")
+            if id < min_id:
+                return redirect(f"/fileset?id={min_id}")
+
             # Ensure the id is between the minimum and maximum id
             id = max(min_id, min(id, max_id))
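
Redirecting (rather than relying only on the silent clamp below) makes the
URL reflect the fileset actually rendered. A minimal Flask sketch of the
pattern; the min_id/max_id literals stand in for the SELECT MIN(id)/MAX(id)
queries above:

    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.route("/fileset")
    def fileset():
        id = int(request.args.get("id", 1))
        min_id, max_id = 1, 9999  # stand-ins for the real MIN/MAX queries
        if id > max_id:
            return redirect(f"/fileset?id={max_id}")
        if id < min_id:
            return redirect(f"/fileset?id={min_id}")
        return f"Fileset {id}"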
 


Commit: 0f919dee09a806eba73874a83f404308fdd76aa2
    https://github.com/scummvm/scummvm-sites/commit/0f919dee09a806eba73874a83f404308fdd76aa2
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Fix the issue of filesets with null fields not being filtered.

Changed paths:
    fileset.py
    pagination.py


diff --git a/fileset.py b/fileset.py
index fdc7e2a..9906e25 100644
--- a/fileset.py
+++ b/fileset.py
@@ -1229,10 +1229,10 @@ def fileset_search():
     order = "ORDER BY fileset.id"
     filters = {
         "fileset": "fileset",
-        "gameid": "game",
         "extra": "game",
         "platform": "game",
         "language": "game",
+        "gameid": "game",
         "megakey": "fileset",
         "status": "fileset",
         "transaction": "transactions",
diff --git a/pagination.py b/pagination.py
index 091384c..5b12482 100644
--- a/pagination.py
+++ b/pagination.py
@@ -101,8 +101,6 @@ def create_page(
 
         num_of_pages = (num_of_results + results_per_page - 1) // results_per_page
         print(f"Num of results: {num_of_results}, Num of pages: {num_of_pages}")
-        if num_of_results == 0:
-            return "No results for given filters"
 
         page = int(request.args.get("page", 1))
         page = max(1, min(page, num_of_pages))
@@ -118,11 +116,12 @@ def create_page(
                 value = pymysql.converters.escape_string(value)
                 if value == "":
                     value = ".*"
-                condition += (
-                    f" AND {filters[key]}.{'id' if key == 'fileset' else key} REGEXP '{value}'"
-                    if condition != "WHERE "
-                    else f"{filters[key]}.{'id' if key == 'fileset' else key} REGEXP '{value}'"
-                )
+                field = f"{filters[key]}.{'id' if key == 'fileset' else key}"
+                if value == ".*":
+                    clause = f"({field} IS NULL OR {field} REGEXP '{value}')"
+                else:
+                    clause = f"{field} REGEXP '{value}'"
+                condition += f" AND {clause}" if condition != "WHERE " else clause
 
             if condition == "WHERE ":
                 condition = ""
@@ -149,39 +148,32 @@ def create_page(
 <form id='filters-form' method='GET' onsubmit='remove_empty_inputs()'>
 <table style="margin-top: 80px;">
 """
-    if not results:
-        return "No results for given filters"
-    if results:
-        if filters:
-            if records_table != "log":
-                html += "<tr class='filter'><td></td><td></td>"
-            else:
-                html += "<tr class='filter'><td></td>"
+    if filters:
+        if records_table != "log":
+            html += "<tr class='filter'><td></td><td></td>"
+        else:
+            html += "<tr class='filter'><td></td>"
 
-            for key in results[0].keys():
-                if key not in filters:
-                    html += "<td class='filter'></td>"
-                    continue
-                filter_value = request.args.get(key, "")
-                html += f"<td class='filter'><input type='text' class='filter' placeholder='{key}' name='{key}' value='{filter_value}'/></td>"
-            html += "</tr><tr class='filter'><td></td><td></td><td class='filter'><input type='submit' value='Submit'></td></tr>"
+        for key in filters.keys():
+            filter_value = request.args.get(key, "")
+            html += f"<td class='filter'><input type='text' class='filter' placeholder='{key}' name='{key}' value='{filter_value}'/></td>"
+        html += "</tr><tr class='filter'><td></td><td></td><td class='filter'><input type='submit' value='Submit'></td></tr>"
 
-        html += "<th>#</th>"
-        if records_table != "log":
-            html += "<th>Fileset ID</th>"
-        for key in results[0].keys():
-            if key in ["fileset", "fileset_id"]:
-                continue
-            vars = "&".join(
-                [f"{k}={v}" for k, v in request.args.items() if k != "sort"]
-            )
-            sort = request.args.get("sort", "")
-            if sort == key:
-                vars += f"&sort={key}-desc"
-            else:
-                vars += f"&sort={key}"
-            html += f"<th><a href='{filename}?{vars}'>{key}</a></th>"
+    html += "<th>#</th>"
+    if records_table != "log":
+        html += "<th>Fileset ID</th>"
+    for key in filters.keys():
+        if key in ["fileset", "fileset_id"]:
+            continue
+        vars = "&".join([f"{k}={v}" for k, v in request.args.items() if k != "sort"])
+        sort = request.args.get("sort", "")
+        if sort == key:
+            vars += f"&sort={key}-desc"
+        else:
+            vars += f"&sort={key}"
+        html += f"<th><a href='{filename}?{vars}'>{key}</a></th>"
 
+    if results:
         counter = offset + 1
         for row in results:
             if counter == offset + 1:  # If it is the first run of the loop
@@ -232,6 +224,8 @@ def create_page(
             counter += 1
 
     html += "</table></form>"
+    if not results:
+        html += "<h1>No results for given filters</h1>"
 
     # Pagination
     vars = "&".join([f"{k}={v}" for k, v in request.args.items() if k != "page"])
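
The root cause is SQL three-valued logic: in MySQL, NULL REGEXP '.*'
evaluates to NULL rather than TRUE, so rows whose column is NULL silently
fall out of the WHERE clause even under the match-everything filter. The fix
widens only the empty-filter case, as this sketch of the clause builder
shows (illustrative, mirroring the diff):

    def build_clause(field, value):
        if value == ".*":  # empty filter: must also accept NULL columns
            return f"({field} IS NULL OR {field} REGEXP '{value}')"
        return f"{field} REGEXP '{value}'"  # real filter: NULLs never match

    assert build_clause("game.language", ".*") == (
        "(game.language IS NULL OR game.language REGEXP '.*')"
    )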


Commit: 589770009b6cefbe3c41954214d250b7bb3a4f31
    https://github.com/scummvm/scummvm-sites/commit/589770009b6cefbe3c41954214d250b7bb3a4f31
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Join game table before engine table.

Changed paths:
    pagination.py


diff --git a/pagination.py b/pagination.py
index 5b12482..899203a 100644
--- a/pagination.py
+++ b/pagination.py
@@ -74,7 +74,11 @@ def create_page(
 
             # Handle multiple tables
             from_query = records_table
-            tables_list = list(tables)
+            join_order = ["game", "engine"]
+            tables_list = sorted(
+                list(tables),
+                key=lambda t: join_order.index(t) if t in join_order else 99,
+            )
             if records_table not in tables_list or len(tables_list) > 1:
                 for table in tables_list:
                     if table == records_table:
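
Presumably the engine table is reached through game (game.engine pointing at
engine.id), so game has to be joined first for the ON clause to resolve. The
sort key pins the known tables in that order and pushes everything else
after them; a quick demonstration of the ordering (99 is an arbitrary
"sort last" sentinel):

    join_order = ["game", "engine"]
    tables = {"engine", "transactions", "game"}
    ordered = sorted(
        tables, key=lambda t: join_order.index(t) if t in join_order else 99
    )
    print(ordered)  # ['game', 'engine', 'transactions']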


Commit: 6c692292e8a280e1452432de10515d299bd56431
    https://github.com/scummvm/scummvm-sites/commit/6c692292e8a280e1452432de10515d299bd56431
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add max and min pages in dashboard.

Changed paths:
    pagination.py


diff --git a/pagination.py b/pagination.py
index 899203a..8497ec4 100644
--- a/pagination.py
+++ b/pagination.py
@@ -241,8 +241,8 @@ def create_page(
                 html += f"<input type='hidden' name='{key}' value='{value}'>"
         html += "<div class='pagination'>"
         if page > 1:
-            html += f"<a href='{filename}?{vars}'>❮❮</a>"
-            html += f"<a href='{filename}?page={page - 1}&{vars}'>❮</a>"
+            html += f"<a href='{filename}?{vars}'>1</a>"
+            html += f"<a href='{filename}?page={page - 1}&{vars}'>Prev</a>"
         if page - 2 > 1:
             html += "<div class='more'>...</div>"
         for i in range(page - 2, page + 3):
@@ -256,8 +256,10 @@ def create_page(
         if page + 2 < num_of_pages:
             html += "<div class='more'>...</div>"
         if page < num_of_pages:
-            html += f"<a href='{filename}?page={page + 1}&{vars}'>❯</a>"
-            html += f"<a href='{filename}?page={num_of_pages}&{vars}'>❯❯</a>"
+            html += f"<a href='{filename}?page={page + 1}&{vars}'>Next</a>"
+            html += (
+                f"<a href='{filename}?page={num_of_pages}&{vars}'>{num_of_pages}</a>"
+            )
         html += "<input type='text' name='page' placeholder='Page No'>"
         html += "<input type='submit' value='Submit'>"
         html += "</div></form>"
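
For reference, the surrounding code renders a five-page window around the
current page; this commit swaps the arrow glyphs for the literal first and
last page numbers. A compact, self-contained sketch of the resulting link
sequence (plain strings here, not the real HTML):

    def page_links(page, num_of_pages):
        parts = []
        if page > 1:
            parts += ["1", "Prev"]
        if page - 2 > 1:
            parts.append("...")
        for i in range(page - 2, page + 3):
            if 1 <= i <= num_of_pages:
                parts.append(f"[{i}]" if i == page else str(i))
        if page + 2 < num_of_pages:
            parts.append("...")
        if page < num_of_pages:
            parts += ["Next", str(num_of_pages)]
        return " ".join(parts)

    print(page_links(5, 12))  # 1 Prev ... 3 4 [5] 6 7 ... Next 12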


Commit: eaaa899e5686b688a7bc5bf2ccde602e176efcb6
    https://github.com/scummvm/scummvm-sites/commit/eaaa899e5686b688a7bc5bf2ccde602e176efcb6
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Improve merge workflow.

Changed paths:
  A static/js/confirm_merge_form_handler.js
    fileset.py


diff --git a/fileset.py b/fileset.py
index 9906e25..7ee5dd8 100644
--- a/fileset.py
+++ b/fileset.py
@@ -8,6 +8,7 @@ from flask import (
 )
 import pymysql.cursors
 import json
+import html as html_lib
 import os
 from user_fileset_functions import (
     user_insert_fileset,
@@ -16,13 +17,14 @@ from user_fileset_functions import (
 from pagination import create_page
 import difflib
 from db_functions import (
-    find_matching_filesets,
     get_all_related_filesets,
     convert_log_text_to_links,
     user_integrity_check,
     db_connect,
     create_log,
     db_connect_root,
+    get_checksum_props,
+    delete_original_fileset,
 )
 from collections import defaultdict
 from schema import init_database
@@ -159,8 +161,7 @@ def fileset():
             <table>
             """
             html += f"<button type='button' onclick=\"location.href='/fileset/{id}/merge'\">Manual Merge</button>"
-            html += f"<button type='button' onclick=\"location.href='/fileset/{id}/match'\">Match and Merge</button>"
-            html += f"<button type='button' onclick=\"location.href='/fileset/{id}/possible_merge'\">Possible Merges</button>"
+            # html += f"<button type='button' onclick=\"location.href='/fileset/{id}/possible_merge'\">Possible Merges</button>"
             html += f"""
                     <form action="/fileset/{id}/mark_full" method="post" style="display:inline;">
                         <button type='submit'>Mark as full</button>
@@ -334,7 +335,6 @@ def fileset():
             # Generate the HTML for the developer actions
             html += "<h3>Developer Actions</h3>"
             html += f"<button id='delete-button' type='button' onclick='delete_id({id})'>Mark Fileset for Deletion</button>"
-            html += f"<button id='match-button' type='button' onclick='match_id({id})'>Match and Merge Fileset</button>"
 
             if "delete" in request.form:
                 cursor.execute(
@@ -419,121 +419,46 @@ def fileset():
                 html += "</tr>\n"
 
             html += "</table>\n"
-            return render_template_string(html)
-    finally:
-        connection.close()
-
-
- at app.route("/fileset/<int:id>/match", methods=["GET"])
-def match_fileset_route(id):
-    base_dir = os.path.dirname(os.path.abspath(__file__))
-    config_path = os.path.join(base_dir, "mysql_config.json")
-    with open(config_path) as f:
-        mysql_cred = json.load(f)
-
-    connection = pymysql.connect(
-        host=mysql_cred["servername"],
-        user=mysql_cred["username"],
-        password=mysql_cred["password"],
-        db=mysql_cred["dbname"],
-        charset="utf8mb4",
-        cursorclass=pymysql.cursors.DictCursor,
-    )
 
-    try:
-        with connection.cursor() as cursor:
-            cursor.execute("SELECT * FROM fileset WHERE id = %s", (id,))
-            fileset = cursor.fetchone()
-            fileset["rom"] = []
-            if not fileset:
-                return f"No fileset found with id {id}", 404
-
-            cursor.execute(
-                "SELECT file.id, name, size, checksum, detection, detection_type FROM file WHERE fileset = %s",
-                (id,),
-            )
-            result = cursor.fetchall()
-            file_ids = {}
-            for file in result:
-                file_ids[file["id"]] = (file["name"], file["size"])
-            cursor.execute(
-                "SELECT file, checksum, checksize, checktype FROM filechecksum WHERE file IN (%s)",
-                (",".join(map(str, file_ids.keys())),),
-            )
-
-            files = cursor.fetchall()
-            checksum_dict = defaultdict(
-                lambda: {"name": "", "size": 0, "checksums": {}}
-            )
-
-            for i in files:
-                file_id = i["file"]
-                file_name, file_size = file_ids[file_id]
-                checksum_dict[file_name]["name"] = file_name
-                checksum_dict[file_name]["size"] = file_size
-                checksum_key = (
-                    f"{i['checktype']}-{i['checksize']}"
-                    if i["checksize"] != 0
-                    else i["checktype"]
-                )
-                checksum_dict[file_name]["checksums"][checksum_key] = i["checksum"]
-
-            fileset["rom"] = [
-                {"name": value["name"], "size": value["size"], **value["checksums"]}
-                for value in checksum_dict.values()
-            ]
-
-            matched_map = find_matching_filesets(fileset, connection, fileset["status"])
-
-            html = f"""
-            <!DOCTYPE html>
-            <html>
-            <head>
-                <link rel="stylesheet" type="text/css" href="{{{{ url_for('static', filename='style.css') }}}}">
-            </head>
-            <body>
-            <nav style="position: fixed; top: 0; left: 0; right: 0; background: white; padding: 3px; border-bottom: 1px solid #ccc;">
-                <a href="{{{{ url_for('index') }}}}">
-                    <img src="{{{{ url_for('static', filename='integrity_service_logo_256.png') }}}}" alt="Logo" style="height:60px; vertical-align:middle;">
-                </a>
-            </nav>
-            <h2 style="margin-top: 80px;">Matched Filesets for Fileset: {id}</h2>
-            <table>
-            <tr>
-                <th>Fileset ID</th>
-                <th>Match Count</th>
-                <th>Actions</th>
-            </tr>
+            # Manual merge final candidates
+            query = """
+                SELECT
+                    fs.*,
+                    g.name AS game_name,
+                    g.engine AS game_engine,
+                    g.platform AS game_platform,
+                    g.language AS game_language,
+                    g.extra AS extra
+                FROM
+                    fileset fs
+                LEFT JOIN
+                    game g ON fs.game = g.id
+                JOIN
+                    possible_merges pm ON pm.child_fileset = fs.id
+                WHERE pm.parent_fileset = %s
             """
-
-            for fileset_id, match_count in matched_map.items():
-                if fileset_id == id:
-                    continue
-                cursor.execute(
-                    "SELECT COUNT(file.id) FROM file WHERE fileset = %s", (fileset_id,)
-                )
-                count = cursor.fetchone()["COUNT(file.id)"]
-                html += f"""
-                <tr>
-                    <td>{fileset_id}</td>
-                    <td>{len(match_count)} / {count}</td>
-                    <td><a href="/fileset?id={fileset_id}">View Details</a></td>
-                    <td>
-                        <form method="POST" action="/fileset/{id}/merge/confirm">
-                            <input type="hidden" name="source_id" value="{id}">
-                            <input type="hidden" name="target_id" value="{fileset_id}">
-                            <input type="submit" value="Merge">
-                        </form>
-                    </td>
-                    <td>
-                        <form method="GET" action="/fileset?id={id}">
-                            <input type="submit" value="Cancel">
-                        </form>
-                    </td>
-                </tr>
+            cursor.execute(query, (id,))
+            results = cursor.fetchall()
+            if results:
+                html += """
+                    <h3 style="margin-top: 30px;">Possible Merges</h3>
+                    <table>
+                    <tr><th>ID</th><th>Game Name</th><th>Platform</th><th>Language</th><th>Extra</th><th>Details</th><th>Action</th></tr>
                 """
+                for result in results:
+                    html += f"""
+                    <tr>
+                        <td>{result["id"]}</td>
+                        <td>{result["game_name"]}</td>
+                        <td>{result["game_platform"]}</td>
+                        <td>{result["game_language"]}</td>
+                        <td>{result["extra"]}</td>
+                        <td><a href="/fileset?id={result["id"]}">View Details</a></td>
+                        <td><a href="/fileset/{id}/merge/confirm?target_id={result["id"]}">Merge</a></td>
+                    </tr>
+                    """
+                html += "</table>\n"
 
-            html += "</table></body></html>"
             return render_template_string(html)
     finally:
         connection.close()
@@ -755,7 +680,18 @@ def confirm_merge(id):
                 (id,),
             )
             source_fileset = cursor.fetchone()
-            print(source_fileset)
+
+            # Select all files
+            file_query = """
+                SELECT f.name, f.size, f.`size-r`, f.`size-rd`, 
+                fc.checksum, fc.checksize, fc.checktype, f.detection
+                FROM file f
+                JOIN filechecksum fc ON fc.file = f.id
+                WHERE f.fileset = %s
+            """
+            cursor.execute(file_query, (id,))
+            source_files = cursor.fetchall()
+
             cursor.execute(
                 """
                 SELECT 
@@ -774,6 +710,9 @@ def confirm_merge(id):
             """,
                 (target_id,),
             )
+            target_fileset = cursor.fetchone()
+            cursor.execute(file_query, (target_id,))
+            target_files = cursor.fetchall()
 
             def highlight_differences(source, target):
                 diff = difflib.ndiff(source, target)
@@ -806,12 +745,11 @@ def confirm_merge(id):
                 </a>
             </nav>
             <h2 style="margin-top: 80px;">Confirm Merge</h2>
+            <form id="confirm_merge_form">
             <table border="1">
-            <tr><th>Field</th><th>Source Fileset</th><th>Target Fileset</th></tr>
+            <tr><th style="width: 50px;">Field</th><th style="width: 1000px;">Source Fileset</th><th style="width: 1000px;">Target Fileset</th></tr>
             """
 
-            target_fileset = cursor.fetchone()
-
             for column in source_fileset.keys():
                 source_value = str(source_fileset[column])
                 target_value = str(target_fileset[column])
@@ -826,16 +764,141 @@ def confirm_merge(id):
                 else:
                     html += f"<tr><td>{column}</td><td>{source_value}</td><td>{target_value}</td></tr>"
 
+            # Files
+            source_files_map = defaultdict(dict)
+            target_files_map = defaultdict(dict)
+            detection_files_set = set()
+
+            if source_files:
+                for file in source_files:
+                    checksize = file["checksize"]
+                    if checksize != "1048576" and file["checksize"] == "1M":
+                        checksize = "1048576"
+                    if checksize != "1048576" and int(file["checksize"]) == 0:
+                        checksize = "full"
+                    check = file["checktype"] + "-" + checksize
+                    source_files_map[file["name"].lower()][check] = file["checksum"]
+                    source_files_map[file["name"].lower()]["size"] = file["size"]
+                    source_files_map[file["name"].lower()]["size-r"] = file["size-r"]
+                    source_files_map[file["name"].lower()]["size-rd"] = file["size-rd"]
+
+            if target_files:
+                for file in target_files:
+                    checksize = file["checksize"]
+                    if checksize != "1048576" and file["checksize"] == "1M":
+                        checksize = "1048576"
+                    if checksize != "1048576" and int(file["checksize"]) == 0:
+                        checksize = "full"
+                    check = file["checktype"] + "-" + checksize
+                    target_files_map[file["name"].lower()][check] = file["checksum"]
+                    target_files_map[file["name"].lower()]["size"] = file["size"]
+                    target_files_map[file["name"].lower()]["size-r"] = file["size-r"]
+                    target_files_map[file["name"].lower()]["size-rd"] = file["size-rd"]
+                    print(file)
+                    if file["detection"] == 1:
+                        detection_files_set.add(file["name"].lower())
+
+            print(detection_files_set)
+
+            all_filenames = sorted(
+                set(source_files_map.keys()) | set(target_files_map.keys())
+            )
+            html += "<tr><th>Files</th></tr>"
+            for filename in all_filenames:
+                source_dict = source_files_map.get(filename, {})
+                target_dict = target_files_map.get(filename, {})
+
+                html += f"<tr><th>{filename}</th><th>Source File</th><th>Target File</th></tr>"
+
+                keys = sorted(set(source_dict.keys()) | set(target_dict.keys()))
+
+                for key in keys:
+                    source_value = str(source_dict.get(key, ""))
+                    target_value = str(target_dict.get(key, ""))
+
+                    source_checked = "checked" if key in source_dict else ""
+                    source_checksum = source_files_map[filename.lower()].get(key, "")
+                    target_checksum = target_files_map[filename.lower()].get(key, "")
+
+                    source_val = html_lib.escape(
+                        json.dumps(
+                            {
+                                "side": "source",
+                                "filename": filename,
+                                "prop": key,
+                                "value": source_checksum,
+                                "detection": "0",
+                            }
+                        )
+                    )
+                    if filename in detection_files_set:
+                        target_val = html_lib.escape(
+                            json.dumps(
+                                {
+                                    "side": "target",
+                                    "filename": filename,
+                                    "prop": key,
+                                    "value": target_checksum,
+                                    "detection": "1",
+                                }
+                            )
+                        )
+                    else:
+                        target_val = html_lib.escape(
+                            json.dumps(
+                                {
+                                    "side": "target",
+                                    "filename": filename,
+                                    "prop": key,
+                                    "value": target_checksum,
+                                    "detection": "0",
+                                }
+                            )
+                        )
+
+                    if source_value != target_value:
+                        source_highlighted, target_highlighted = highlight_differences(
+                            source_value, target_value
+                        )
+
+                        html += f"""
+                        <tr>
+                            <td>{key}</td>
+                            <td>
+                                <input type="checkbox" name="options[]" value="{source_val}" {source_checked}>
+                                {source_highlighted}
+                            </td>
+                            <td>
+                                <input type="checkbox" name="options[]" value="{target_val}">
+                                {target_highlighted}
+                            </td>
+                        </tr>
+                        """
+                    else:
+                        html += f"""
+                        <tr>
+                            <td>{key}</td>
+                            <td>
+                                <input type="checkbox" name="options[]" value="{source_val}" {source_checked}>
+                                {source_value}
+                            </td>
+                            <td>
+                                <input type="checkbox" name="options[]" value="{target_val}">
+                                {target_value}
+                            </td>
+                        </tr>
+                        """
+
             html += """
             </table>
-            <form method="POST" action="{{ url_for('execute_merge', id=id) }}">
                 <input type="hidden" name="source_id" value="{{ source_fileset['id'] }}">
                 <input type="hidden" name="target_id" value="{{ target_fileset['id'] }}">
-                <input type="submit" value="Confirm Merge">
+                <button type="submit">Confirm Merge</button>
             </form>
             <form action="{{ url_for('fileset', id=id) }}">
                 <input type="submit" value="Cancel">
             </form>
+            <script src="{{ url_for('static', filename='js/confirm_merge_form_handler.js') }}"></script>
             </body>
             </html>
             """
@@ -851,9 +914,11 @@ def confirm_merge(id):
 
 
 @app.route("/fileset/<int:id>/merge/execute", methods=["POST"])
-def execute_merge(id, source=None, target=None):
-    source_id = request.form["source_id"] if not source else source
-    target_id = request.form["target_id"] if not target else target
+def execute_merge(id):
+    data = request.get_json()
+    source_id = data.get("source_id")
+    target_id = data.get("target_id")
+    options = data.get("options")
 
     base_dir = os.path.dirname(os.path.abspath(__file__))
     config_path = os.path.join(base_dir, "mysql_config.json")
@@ -875,145 +940,136 @@ def execute_merge(id, source=None, target=None):
             source_fileset = cursor.fetchone()
             cursor.execute("SELECT * FROM fileset WHERE id = %s", (target_id,))
 
-            if source_fileset["status"] == "detection":
+            if source_fileset["status"] == "dat":
                 cursor.execute(
                     """
-                UPDATE fileset SET
-                    game = %s
+                    UPDATE fileset SET
                     status = %s,
                     `key` = %s,
-                    megakey = %s,
                     `timestamp` = %s
-                WHERE id = %s
+                    WHERE id = %s
                 """,
                     (
-                        source_fileset["game"],
-                        source_fileset["status"],
+                        "partial",
                         source_fileset["key"],
-                        source_fileset["megakey"],
                         source_fileset["timestamp"],
                         target_id,
                     ),
                 )
 
-                cursor.execute("DELETE FROM file WHERE fileset = %s", (target_id,))
-
-                cursor.execute("SELECT * FROM file WHERE fileset = %s", (source_id,))
-                source_files = cursor.fetchall()
+                source_filenames = set()
+                change_fileset_id = set()
+                file_details_map = defaultdict(dict)
+
+                for file in options:
+                    filename = file["filename"].lower()
+                    if "detection" not in file_details_map[filename]:
+                        file_details_map[filename]["detection"] = file["detection"]
+                        file_details_map[filename]["detection_type"] = file["prop"]
+                    elif (
+                        "detection" in file_details_map[filename]
+                        and file_details_map[filename]["detection"] != "1"
+                    ):
+                        file_details_map[filename]["detection"] = file["detection"]
+                        file_details_map[filename]["detection_type"] = file["prop"]
+                    if file["prop"].startswith("md5"):
+                        if "checksums" not in file_details_map[filename]:
+                            file_details_map[filename]["checksums"] = []
+                        file_details_map[filename]["checksums"].append(
+                            {"check": file["prop"], "value": file["value"]}
+                        )
+                    if file["side"] == "source":
+                        source_filenames.add(filename)
 
-                for file in source_files:
-                    cursor.execute(
+                # Delete older checksums
+                for file in options:
+                    filename = file["filename"].lower()
+                    if file["side"] == "source":
+                        cursor.execute(
+                            """SELECT f.id as file_id FROM file f
+                                       JOIN fileset fs ON fs.id = f.fileset 
+                                       WHERE f.name = %s
+                                       AND fs.id = %s""",
+                            (filename, source_id),
+                        )
+                        file_id = cursor.fetchone()["file_id"]
+                        query = """
+                            DELETE FROM filechecksum
+                            WHERE file = %s
                         """
-                    INSERT INTO file (name, size, checksum, fileset, detection, `timestamp`)
-                    VALUES (%s, %s, %s, %s, %s, NOW())
-                    """,
-                        (
-                            file["name"].lower(),
-                            file["size"],
-                            file["checksum"],
-                            target_id,
-                            file["detection"],
-                        ),
-                    )
-
-                    cursor.execute("SELECT LAST_INSERT_ID() as file_id")
-                    new_file_id = cursor.fetchone()["file_id"]
+                        cursor.execute(query, (file_id,))
+                    else:
+                        if filename not in source_filenames:
+                            cursor.execute(
+                                """SELECT f.id as file_id FROM file f
+                            JOIN fileset fs ON fs.id = f.fileset 
+                            WHERE f.name = %s
+                            AND fs.id = %s""",
+                                (filename, target_id),
+                            )
+                            target_file_id = cursor.fetchone()["file_id"]
+                            change_fileset_id.add(target_file_id)
 
+                for filename, details in file_details_map.items():
                     cursor.execute(
-                        "SELECT * FROM filechecksum WHERE file = %s", (file["id"],)
+                        """SELECT f.id as file_id FROM file f
+                                    JOIN fileset fs ON fs.id = f.fileset 
+                                    WHERE f.name = %s
+                                    AND fs.id = %s""",
+                        (filename, source_id),
                     )
-                    file_checksums = cursor.fetchall()
-
-                    for checksum in file_checksums:
+                    source_file_id = cursor.fetchone()["file_id"]
+                    detection = (
+                        details["detection"] == "1" if "detection" in details else False
+                    )
+                    if detection:
+                        query = """
+                            UPDATE file 
+                            SET detection = 1,
+                            detection_type = %s
+                            WHERE id = %s
+                        """
                         cursor.execute(
-                            """
-                        INSERT INTO filechecksum (file, checksize, checktype, checksum)
-                        VALUES (%s, %s, %s, %s)
-                        """,
+                            query,
                             (
-                                new_file_id,
-                                checksum["checksize"],
-                                checksum["checktype"],
-                                checksum["checksum"],
+                                details["detection_type"],
+                                source_file_id,
                             ),
                         )
-            elif source_fileset["status"] in ["scan", "dat"]:
-                cursor.execute(
-                    """
-                UPDATE fileset SET
-                    status = %s,
-                    `key` = %s,
-                    `timestamp` = %s
-                WHERE id = %s
-                """,
-                    (
-                        source_fileset["status"]
-                        if source_fileset["status"] != "dat"
-                        else "partial",
-                        source_fileset["key"],
-                        source_fileset["timestamp"],
-                        target_id,
-                    ),
-                )
-                cursor.execute("SELECT * FROM file WHERE fileset = %s", (source_id,))
-                source_files = cursor.fetchall()
-
-                cursor.execute("SELECT * FROM file WHERE fileset = %s", (target_id,))
-                target_files = cursor.fetchall()
-
-                target_files_dict = {}
-                for target_file in target_files:
-                    cursor.execute(
-                        "SELECT * FROM filechecksum WHERE file = %s",
-                        (target_file["id"],),
-                    )
-                    target_checksums = cursor.fetchall()
-                    for checksum in target_checksums:
-                        target_files_dict[checksum["checksum"]] = target_file
-
-                for source_file in source_files:
-                    cursor.execute(
-                        "SELECT * FROM filechecksum WHERE file = %s",
-                        (source_file["id"],),
-                    )
-                    source_checksums = cursor.fetchall()
-                    file_exists = False
-                    for checksum in source_checksums:
-                        print(checksum["checksum"])
-                        if checksum["checksum"] in target_files_dict.keys():
-                            target_file = target_files_dict[checksum["checksum"]]
-                            source_file["detection"] = target_file["detection"]
+                        cursor.execute(
+                            """SELECT f.id as file_id FROM file f
+                                    JOIN fileset fs ON fs.id = f.fileset 
+                                    WHERE f.name = %s
+                                    AND fs.id = %s""",
+                            (filename, target_id),
+                        )
+                        target_file_id = cursor.fetchone()["file_id"]
+                        cursor.execute(
+                            "DELETE FROM file WHERE id = %s", (target_file_id,)
+                        )
+                    for c in details["checksums"]:
+                        checksum = c["value"]
+                        check = c["check"]
+                        checksize, checktype, checksum = get_checksum_props(
+                            check, checksum
+                        )
+                        query = "INSERT INTO filechecksum (file, checksize, checktype, checksum) VALUES (%s, %s, %s, %s)"
+                        cursor.execute(
+                            query, (source_file_id, checksize, checktype, checksum)
+                        )
 
-                            cursor.execute(
-                                "DELETE FROM file WHERE id = %s", (target_file["id"],)
-                            )
-                            file_exists = True
-                            break
-                    print(file_exists)
                     cursor.execute(
-                        """INSERT INTO file (name, size, checksum, fileset, detection, `timestamp`) VALUES (
-                        %s, %s, %s, %s, %s, NOW())""",
-                        (
-                            source_file["name"],
-                            source_file["size"],
-                            source_file["checksum"],
-                            target_id,
-                            source_file["detection"],
-                        ),
+                        "UPDATE file SET fileset = %s WHERE id = %s",
+                        (target_id, source_file_id),
                     )
-                    new_file_id = cursor.lastrowid
-                    for checksum in source_checksums:
-                        # TODO: Handle the string
 
-                        cursor.execute(
-                            "INSERT INTO filechecksum (file, checksize, checktype, checksum) VALUES (%s, %s, %s, %s)",
-                            (
-                                new_file_id,
-                                checksum["checksize"],
-                                f"{checksum['checktype']}-{checksum['checksize']}",
-                                checksum["checksum"],
-                            ),
-                        )
+                # for target_file_id in change_fileset_id:
+                #     query = """
+                #         UPDATE file
+                #         SET fileset = %s
+                #         WHERE id = %s
+                #     """
+                #     cursor.execute(query, (source_id, target_file_id))
 
             cursor.execute(
                 """
@@ -1023,6 +1079,17 @@ def execute_merge(id, source=None, target=None):
                 (target_id, source_id),
             )
 
+            delete_original_fileset(source_id, connection)
+            category_text = "Manually Merged"
+            log_text = f"Manually merged Fileset:{source_id} with Fileset:{target_id}."
+            create_log(category_text, "Moderator", log_text, connection)
+
+            query = """
+                DELETE FROM possible_merges
+                WHERE parent_fileset = %s
+            """
+            cursor.execute(query, (source_id,))
+
             connection.commit()
 
             return redirect(url_for("fileset", id=target_id))
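
Note: each element of the "options" list consumed above is a small dict
built by the form handler added below. A representative entry (keys taken
from the code, values purely illustrative) looks like:

    option = {
        "filename": "game.exe",  # lowercased before the lookups above
        "prop": "md5-5000",      # which property row the checkbox came from
        "value": "0f6c3e2a",     # the property's value (a checksum here)
        "side": "source",        # "source" or "target" fileset column
        "tick": "on",            # set by the JS handler for checked boxes
    }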
diff --git a/static/js/confirm_merge_form_handler.js b/static/js/confirm_merge_form_handler.js
new file mode 100644
index 0000000..d514091
--- /dev/null
+++ b/static/js/confirm_merge_form_handler.js
@@ -0,0 +1,85 @@
+document.getElementById("confirm_merge_form").addEventListener("submit", async function (e) {
+  e.preventDefault();
+
+  const form = e.target;
+
+  const source_id = form.querySelector('input[name="source_id"]').value;
+  
+  const jsonData = {
+    source_id: source_id,
+    target_id: form.querySelector('input[name="target_id"]').value,
+    options: []
+  };
+  
+  const checkedBoxes = form.querySelectorAll('input[name="options[]"]:checked');
+  jsonData.options = Array.from(checkedBoxes).map(cb => {
+    const optionData = JSON.parse(cb.value);
+    optionData.tick = "on";
+    return optionData;
+  });
+  
+  console.log("Data being sent:", jsonData);
+
+  const response = await fetch(`/fileset/${source_id}/merge/execute`, {
+    method: "POST",
+    headers: {
+      "Content-Type": "application/json",
+    },
+    body: JSON.stringify(jsonData),
+  });
+
+  if (response.redirected) {
+    window.location.href = response.url;
+  }
+});
+
+
+function checkForConflicts() {
+  const checkedBoxes = document.querySelectorAll('input[name="options[]"]:checked');
+  const conflicts = new Map();
+  
+  Array.from(checkedBoxes).forEach(cb => {
+    const option = JSON.parse(cb.value);
+    const key = `${option.filename}|${option.prop}`;
+    if (!conflicts.has(key)) {
+      conflicts.set(key, []);
+    }
+    conflicts.get(key).push({side: option.side, checkbox: cb});
+  });
+  
+  document.querySelectorAll('input[name="options[]"]').forEach(cb => {
+    cb.style.backgroundColor = '';
+    cb.parentElement.style.backgroundColor = '';
+  });
+  
+  let hasConflicts = false;
+  
+  conflicts.forEach((items, key) => {
+    if (items.length > 1) {
+      
+      hasConflicts = true;
+      
+      items.forEach(item => {
+        item.checkbox.style.backgroundColor = '#ffcccc';
+        item.checkbox.parentElement.style.backgroundColor = '#ffe6e6';
+      });
+    }
+  });
+  
+  const submitButton = document.querySelector('button[type="submit"]');
+  if (hasConflicts) {
+    submitButton.disabled = true;
+    submitButton.textContent = 'Resolve Conflicts First';
+    submitButton.style.backgroundColor = '#ccc';
+  } else {
+    submitButton.disabled = false;
+    submitButton.textContent = 'Confirm Merge';
+    submitButton.style.backgroundColor = '';
+  }
+}
+
+
+document.querySelectorAll('input[name="options[]"]').forEach(checkbox => {
+  checkbox.addEventListener('change', checkForConflicts);
+});
+
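
The handler submits JSON rather than a form-encoded body, so the Flask
side reads it with request.get_json(). A minimal sketch of the receiving
end (the real logic is execute_merge above; only the payload keys are
taken from the handler):

    from flask import request

    payload = request.get_json()
    source_id = payload["source_id"]
    target_id = payload["target_id"]
    options = payload.get("options", [])  # list of option dicts as above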


Commit: 7bd2979b48a4b5e7892d5e30002ebe2edf17dec0
    https://github.com/scummvm/scummvm-sites/commit/7bd2979b48a4b5e7892d5e30002ebe2edf17dec0
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add detection type for full checksums in detection entries

Changed paths:
    db_functions.py


diff --git a/db_functions.py b/db_functions.py
index 1836788..a0b68bf 100644
--- a/db_functions.py
+++ b/db_functions.py
@@ -218,6 +218,11 @@ def insert_file(file, detection, src, conn):
     if "md5" in file:
         checksum = file["md5"]
         checksum = checksum.split(":")[1] if ":" in checksum else checksum
+        tag = file["md5"].split(":")[0] if ":" in file["md5"] else ""
+        checktype = "md5"
+        if tag != "":
+            checktype += "-" + tag
+        checksize = 0
     else:
         for key, value in file.items():
             if "md5" in key:


Commit: c160766734ce8b2a4d12a9d450e61b05715d82ed
    https://github.com/scummvm/scummvm-sites/commit/c160766734ce8b2a4d12a9d450e61b05715d82ed
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Update timestamp for detection files in partial fileset conversion for set.dat

Changed paths:
    db_functions.py


diff --git a/db_functions.py b/db_functions.py
index a0b68bf..b332023 100644
--- a/db_functions.py
+++ b/db_functions.py
@@ -2846,7 +2846,8 @@ def set_populate_file(fileset, fileset_id, conn, detection):
                 query = """
                     UPDATE file
                     SET size = %s,
-                    name = %s
+                    name = %s,
+                    `timestamp` = NOW()
                     WHERE id = %s
                 """
 


Commit: 9a384d436765be7bddfb47c41723e4dc0d349881
    https://github.com/scummvm/scummvm-sites/commit/9a384d436765be7bddfb47c41723e4dc0d349881
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add extra error handling for parsing.

Changed paths:
    dat_parser.py
    db_functions.py


diff --git a/dat_parser.py b/dat_parser.py
index a76480b..9655d5c 100644
--- a/dat_parser.py
+++ b/dat_parser.py
@@ -1,5 +1,6 @@
 import re
 import os
+import sys
 from db_functions import db_insert, match_fileset
 import argparse
 
@@ -79,21 +80,40 @@ def match_outermost_brackets(input):
     depth = 0
     inside_quotes = False
     cur_index = 0
+    line_number = 1
+    index_line = 1
 
-    for i in range(len(input)):
-        char = input[i]
+    for i, char in enumerate(input):
+        if char == "\n":
+            line_number += 1
+            inside_quotes = False
 
-        if char == "(" and not inside_quotes:
+        if char == '"' and input[i - 1] != "\\":
+            inside_quotes = not inside_quotes
+
+        elif char == "(" and not inside_quotes:
             if depth == 0:
+                if "rom" in input[i - 4 : i]:
+                    raise ValueError(
+                        f"Missing an opening '(' for the game. Look near line {line_number}."
+                    )
+                index_line = line_number
                 cur_index = i
             depth += 1
+
         elif char == ")" and not inside_quotes:
+            if depth == 0:
+                print(f"Warning: unmatched ')' at line {line_number}")
+                continue
             depth -= 1
             if depth == 0:
                 match = input[cur_index : i + 1]
                 matches.append((match, cur_index))
-        elif char == '"' and input[i - 1] != "\\":
-            inside_quotes = not inside_quotes
+
+    if depth != 0:
+        raise ValueError(
+            f"Unmatched '(' starting at line {index_line}: possibly an unclosed block."
+        )
 
     return matches
 
@@ -104,61 +124,102 @@ def parse_dat(dat_filepath):
     associated arrays
     """
     if not os.path.isfile(dat_filepath):
-        print("File not readable")
-        return
+        print(f"Error: File does not exist or is unreadable: {dat_filepath}.")
+        return None
 
-    with open(dat_filepath, "r", encoding="utf-8") as dat_file:
-        content = dat_file.read()
+    try:
+        with open(dat_filepath, "r", encoding="utf-8") as dat_file:
+            content = dat_file.read()
+    except (IOError, UnicodeDecodeError) as e:
+        print(f"Error: Failed to read file {dat_filepath}: {e}")
+        return None
 
     header = {}
     game_data = []
     resources = {}
 
-    matches = match_outermost_brackets(content)
-    # print(matches)
+    try:
+        matches = match_outermost_brackets(content)
+    except Exception as e:
+        print(f"Error: Failed to parse outer brackets in {dat_filepath}: {e}")
+        return None
     if matches:
         for data_segment in matches:
-            if (
-                "clrmamepro" in content[data_segment[1] - 11 : data_segment[1]]
-                or "scummvm" in content[data_segment[1] - 8 : data_segment[1]]
-            ):
-                header = map_key_values(data_segment[0], header)
-            elif "game" in content[data_segment[1] - 5 : data_segment[1]]:
-                temp = {}
-                temp = map_key_values(data_segment[0], temp)
-                game_data.append(temp)
-            elif "resource" in content[data_segment[1] - 9 : data_segment[1]]:
-                temp = {}
-                temp = map_key_values(data_segment[0], temp)
-                resources[temp["name"]] = temp
-    # print(header, game_data, resources, dat_filepath)
+            try:
+                if (
+                    "clrmamepro" in content[data_segment[1] - 11 : data_segment[1]]
+                    or "scummvm" in content[data_segment[1] - 8 : data_segment[1]]
+                ):
+                    header = map_key_values(data_segment[0], header)
+                elif "game" in content[data_segment[1] - 5 : data_segment[1]]:
+                    temp = {}
+                    temp = map_key_values(data_segment[0], temp)
+                    game_data.append(temp)
+                elif "resource" in content[data_segment[1] - 9 : data_segment[1]]:
+                    temp = {}
+                    temp = map_key_values(data_segment[0], temp)
+                    resources[temp["name"]] = temp
+            except Exception as e:
+                print(f"Error: Failed to parse a data_segment: {e}")
+                return None
+
     return header, game_data, resources, dat_filepath
 
 
 def main():
-    parser = argparse.ArgumentParser(
-        description="Process DAT files and interact with the database."
-    )
-    parser.add_argument(
-        "--upload", nargs="+", help="Upload DAT file(s) to the database"
-    )
-    parser.add_argument(
-        "--match", nargs="+", help="Populate matching games in the database"
-    )
-    parser.add_argument("--user", help="Username for database")
-    parser.add_argument("-r", help="Recurse through directories", action="store_true")
-    parser.add_argument("--skiplog", help="Skip logging dups", action="store_true")
-
-    args = parser.parse_args()
-
-    if args.upload:
-        for filepath in args.upload:
-            db_insert(parse_dat(filepath), args.user, args.skiplog)
-
-    if args.match:
-        for filepath in args.match:
-            # print(parse_dat(filepath)[2])
-            match_fileset(parse_dat(filepath), args.user, args.skiplog)
+    try:
+        parser = argparse.ArgumentParser(
+            description="Process DAT files and interact with the database."
+        )
+        parser.add_argument(
+            "--upload", nargs="+", help="Upload DAT file(s) to the database"
+        )
+        parser.add_argument(
+            "--match", nargs="+", help="Populate matching games in the database"
+        )
+        parser.add_argument("--user", help="Username for database")
+        parser.add_argument(
+            "-r", help="Recurse through directories", action="store_true"
+        )
+        parser.add_argument("--skiplog", help="Skip logging dups", action="store_true")
+
+        args = parser.parse_args()
+
+        if not args.upload and not args.match:
+            print("Error: No action specified. Use --upload or --match")
+            parser.print_help()
+            sys.exit(1)
+
+        if args.upload:
+            for filepath in args.upload:
+                try:
+                    parsed_data = parse_dat(filepath)
+                    if parsed_data is not None:
+                        db_insert(parsed_data, args.user, args.skiplog)
+                    else:
+                        print(f"Error: Failed to parse file for upload: {filepath}")
+                except Exception as e:
+                    print(f"Error uploading {filepath}: {e}")
+                    continue
+
+        if args.match:
+            for filepath in args.match:
+                try:
+                    parsed_data = parse_dat(filepath)
+                    if parsed_data is not None:
+                        match_fileset(parsed_data, args.user, args.skiplog)
+                    else:
+                        print(f"Error: Failed to parse file for matching: {filepath}")
+                except Exception as e:
+                    print(f"Error matching {filepath}: {e}")
+                    continue
+
+    except KeyboardInterrupt:
+        print("Operation cancelled by user")
+        sys.exit(0)
+    except Exception as e:
+        print(f"Error: Unexpected error in main: {e}")
+        sys.exit(1)
 
 
 if __name__ == "__main__":
diff --git a/db_functions.py b/db_functions.py
index b332023..ff5d7f8 100644
--- a/db_functions.py
+++ b/db_functions.py
@@ -519,14 +519,19 @@ def db_insert(data_arr, username=None, skiplog=False):
     try:
         author = header["author"]
         version = header["version"]
+        if author != "scummvm":
+            raise ValueError(
+                f"Author needs to be scummvm for seeding. Incorrect author: {author}"
+            )
+    except ValueError as ve:
+        raise ve
     except KeyError as e:
         print(f"Missing key in header: {e}")
         return
 
-    src = "dat" if author not in ["scan", "scummvm"] else author
-
-    detection = src == "scummvm"
-    status = "detection" if detection else src
+    src = author
+    detection = True
+    status = "detection"
 
     conn.cursor().execute("SET @fileset_time_last = %s", (int(time.time()),))
 
@@ -552,38 +557,35 @@ def db_insert(data_arr, username=None, skiplog=False):
         key = calc_key(fileset)
         megakey = calc_megakey(fileset)
 
-        if detection:
-            try:
-                engine_name = fileset.get("engine", "")
-                engineid = fileset["sourcefile"]
-                gameid = fileset["name"]
-                title = fileset.get("title", "")
-                extra = fileset.get("extra", "")
-                platform = fileset.get("platform", "")
-                lang = fileset.get("language", "")
-            except KeyError as e:
-                print(
-                    f"Missing key in header: {e} for {fileset.get('name', '')}-{fileset.get('language', '')}-{fileset.get('platform', '')}"
-                )
-                return
+        try:
+            engine_name = fileset.get("engine", "")
+            engineid = fileset["sourcefile"]
+            gameid = fileset["name"]
+            title = fileset.get("title", "")
+            extra = fileset.get("extra", "")
+            platform = fileset.get("platform", "")
+            lang = fileset.get("language", "")
+        except KeyError as e:
+            print(
+                f"Missing key in header: {e} for {fileset.get('name', '')}-{fileset.get('language', '')}-{fileset.get('platform', '')}"
+            )
+            return
 
-            with conn.cursor() as cursor:
-                query = """
-                    SELECT id
-                    FROM fileset
-                    WHERE `key` = %s
-                """
-                cursor.execute(query, (key,))
-                existing_entry = cursor.fetchone()
-                if existing_entry is not None:
-                    log_text = f"Skipping Entry as similar entry already exsits - Fileset:{existing_entry['id']}. Skpped entry details - engineid = {engineid}, gameid = {gameid}, platform = {platform}, language = {lang}"
-                    create_log("Warning", user, escape_string(log_text), conn)
-                    console_log(log_text)
-                    continue
+        with conn.cursor() as cursor:
+            query = """
+                SELECT id
+                FROM fileset
+                WHERE `key` = %s
+            """
+            cursor.execute(query, (key,))
+            existing_entry = cursor.fetchone()
+            if existing_entry is not None:
+                log_text = f"Skipping Entry as similar entry already exsits - Fileset:{existing_entry['id']}. Skpped entry details - engineid = {engineid}, gameid = {gameid}, platform = {platform}, language = {lang}"
+                create_log("Warning", user, escape_string(log_text), conn)
+                console_log(log_text)
+                continue
 
-            insert_game(
-                engine_name, engineid, title, gameid, extra, platform, lang, conn
-            )
+        insert_game(engine_name, engineid, title, gameid, extra, platform, lang, conn)
 
         log_text = f"size {os.path.getsize(filepath)}, author {author}, version {version}. State {status}."
 
@@ -894,8 +896,8 @@ def match_fileset(data_arr, username=None, skiplog=False):
         return
 
     src = "dat" if author not in ["scan", "scummvm"] else author
-    detection = src == "scummvm"
-    source_status = "detection" if detection else src
+    detection = False
+    source_status = src
 
     conn.cursor().execute("SET @fileset_time_last = %s", (int(time.time()),))
 


Commit: 12cfb77b85a9e6805b2b304df329d6a885849247
    https://github.com/scummvm/scummvm-sites/commit/12cfb77b85a9e6805b2b304df329d6a885849247
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add commit/rollback transaction support to match_fileset and db_insert.

Changed paths:
    db_functions.py


diff --git a/db_functions.py b/db_functions.py
index ff5d7f8..b2cbc55 100644
--- a/db_functions.py
+++ b/db_functions.py
@@ -382,19 +382,14 @@ def punycode_need_encode(orig):
 
 
 def create_log(category, user, text, conn):
-    query = f"INSERT INTO log (`timestamp`, category, user, `text`) VALUES (FROM_UNIXTIME({int(time.time())}), '{escape_string(category)}', '{escape_string(user)}', '{escape_string(text)}')"
-    query = "INSERT INTO log (`timestamp`, category, user, `text`) VALUES (FROM_UNIXTIME(%s), %s, %s, %s)"
     with conn.cursor() as cursor:
         try:
+            query = "INSERT INTO log (`timestamp`, category, user, `text`) VALUES (FROM_UNIXTIME(%s), %s, %s, %s)"
             cursor.execute(query, (int(time.time()), category, user, text))
-            conn.commit()
-        except Exception as e:
-            conn.rollback()
-            print(f"Creating log failed: {e}")
-            log_last = None
-        else:
             cursor.execute("SELECT LAST_INSERT_ID()")
             log_last = cursor.fetchone()["LAST_INSERT_ID()"]
+        except Exception as e:
+            raise RuntimeError("Log creation failed") from e
     return log_last
 
 
@@ -405,9 +400,7 @@ def update_history(source_id, target_id, conn, log_last=None):
             cursor.execute(
                 query, (target_id, source_id, log_last if log_last is not None else 0)
             )
-            conn.commit()
         except Exception as e:
-            conn.rollback()
             print(f"Creating log failed: {e}")
             log_last = None
         else:
@@ -523,120 +516,137 @@ def db_insert(data_arr, username=None, skiplog=False):
             raise ValueError(
                 f"Author needs to be scummvm for seeding. Incorrect author: {author}"
             )
-    except ValueError as ve:
-        raise ve
     except KeyError as e:
         print(f"Missing key in header: {e}")
         return
 
-    src = author
-    detection = True
-    status = "detection"
-
-    conn.cursor().execute("SET @fileset_time_last = %s", (int(time.time()),))
-
-    with conn.cursor() as cursor:
-        cursor.execute("SELECT MAX(`transaction`) FROM transactions")
-        temp = cursor.fetchone()["MAX(`transaction`)"]
-        if temp is None:
-            temp = 0
-        transaction_id = temp + 1
+    try:
+        src = author
+        detection = True
+        status = "detection"
 
-    category_text = f"Uploaded from {src}"
-    log_text = f"Started loading DAT file {filepath}, size {os.path.getsize(filepath)}, author {author}, version {version}. State {status}. Transaction: {transaction_id}"
+        with conn.cursor() as cursor:
+            cursor.execute("SET @fileset_time_last = %s", (int(time.time()),))
 
-    user = f"cli:{getpass.getuser()}" if username is None else username
-    create_log(escape_string(category_text), user, escape_string(log_text), conn)
+        with conn.cursor() as cursor:
+            cursor.execute("SELECT MAX(`transaction`) FROM transactions")
+            temp = cursor.fetchone()["MAX(`transaction`)"]
+            if temp is None:
+                temp = 0
+            transaction_id = temp + 1
 
-    console_log(log_text)
-    console_log_total_filesets(filepath)
+        category_text = f"Uploaded from {src}"
+        log_text = f"Started loading DAT file {filepath}, size {os.path.getsize(filepath)}, author {author}, version {version}. State {status}. Transaction: {transaction_id}"
 
-    fileset_count = 1
-    for fileset in game_data:
-        console_log_detection(fileset_count)
-        key = calc_key(fileset)
-        megakey = calc_megakey(fileset)
+        user = f"cli:{getpass.getuser()}" if username is None else username
+        create_log(escape_string(category_text), user, escape_string(log_text), conn)
 
-        try:
-            engine_name = fileset.get("engine", "")
-            engineid = fileset["sourcefile"]
-            gameid = fileset["name"]
-            title = fileset.get("title", "")
-            extra = fileset.get("extra", "")
-            platform = fileset.get("platform", "")
-            lang = fileset.get("language", "")
-        except KeyError as e:
-            print(
-                f"Missing key in header: {e} for {fileset.get('name', '')}-{fileset.get('language', '')}-{fileset.get('platform', '')}"
-            )
-            return
+        console_log(log_text)
+        console_log_total_filesets(filepath)
 
-        with conn.cursor() as cursor:
-            query = """
-                SELECT id
-                FROM fileset
-                WHERE `key` = %s
-            """
-            cursor.execute(query, (key,))
-            existing_entry = cursor.fetchone()
-            if existing_entry is not None:
-                log_text = f"Skipping Entry as similar entry already exsits - Fileset:{existing_entry['id']}. Skpped entry details - engineid = {engineid}, gameid = {gameid}, platform = {platform}, language = {lang}"
-                create_log("Warning", user, escape_string(log_text), conn)
-                console_log(log_text)
-                continue
+        fileset_count = 1
+        for fileset in game_data:
+            console_log_detection(fileset_count)
+            key = calc_key(fileset)
+            megakey = calc_megakey(fileset)
+
+            try:
+                engine_name = fileset.get("engine", "")
+                engineid = fileset["sourcefile"]
+                gameid = fileset["name"]
+                title = fileset.get("title", "")
+                extra = fileset.get("extra", "")
+                platform = fileset.get("platform", "")
+                lang = fileset.get("language", "")
+            except KeyError as e:
+                raise RuntimeError(
+                    f"Missing key in header: {e} for {fileset.get('name', '')}-{fileset.get('language', '')}-{fileset.get('platform', '')}"
+                )
 
-        insert_game(engine_name, engineid, title, gameid, extra, platform, lang, conn)
+            with conn.cursor() as cursor:
+                query = """
+                    SELECT id
+                    FROM fileset
+                    WHERE `key` = %s
+                """
+                cursor.execute(query, (key,))
+                existing_entry = cursor.fetchone()
+                if existing_entry is not None:
+                    log_text = f"Skipping Entry as similar entry already exsits - Fileset:{existing_entry['id']}. Skpped entry details - engineid = {engineid}, gameid = {gameid}, platform = {platform}, language = {lang}"
+                    create_log("Warning", user, escape_string(log_text), conn)
+                    console_log(log_text)
+                    continue
 
-        log_text = f"size {os.path.getsize(filepath)}, author {author}, version {version}. State {status}."
+            insert_game(
+                engine_name, engineid, title, gameid, extra, platform, lang, conn
+            )
 
-        if insert_fileset(
-            src,
-            detection,
-            key,
-            megakey,
-            transaction_id,
-            log_text,
-            conn,
-            username=username,
-            skiplog=skiplog,
-        ):
-            # Some detection entries contain duplicate files.
-            unique_files = []
-            seen = set()
-            for file_dict in fileset["rom"]:
-                dict_tuple = tuple(sorted(file_dict.items()))
-                if dict_tuple not in seen:
-                    seen.add(dict_tuple)
-                    unique_files.append(file_dict)
-
-            for file in unique_files:
-                insert_file(file, detection, src, conn)
-                file_id = None
-                with conn.cursor() as cursor:
-                    cursor.execute("SELECT @file_last AS file_id")
-                    file_id = cursor.fetchone()["file_id"]
-                for key, value in file.items():
-                    if key not in ["name", "size", "size-r", "size-rd", "sha1", "crc"]:
-                        insert_filechecksum(file, key, file_id, conn)
+            log_text = f"size {os.path.getsize(filepath)}, author {author}, version {version}. State {status}."
 
-        fileset_count += 1
+            if insert_fileset(
+                src,
+                detection,
+                key,
+                megakey,
+                transaction_id,
+                log_text,
+                conn,
+                username=username,
+                skiplog=skiplog,
+            ):
+                # Some detection entries contain duplicate files.
+                unique_files = []
+                seen = set()
+                for file_dict in fileset["rom"]:
+                    dict_tuple = tuple(sorted(file_dict.items()))
+                    if dict_tuple not in seen:
+                        seen.add(dict_tuple)
+                        unique_files.append(file_dict)
+
+                for file in unique_files:
+                    insert_file(file, detection, src, conn)
+                    file_id = None
+                    with conn.cursor() as cursor:
+                        cursor.execute("SELECT @file_last AS file_id")
+                        file_id = cursor.fetchone()["file_id"]
+                    for key, value in file.items():
+                        if key not in [
+                            "name",
+                            "size",
+                            "size-r",
+                            "size-rd",
+                            "sha1",
+                            "crc",
+                        ]:
+                            insert_filechecksum(file, key, file_id, conn)
+
+            fileset_count += 1
+
+        cur = conn.cursor()
 
-    cur = conn.cursor()
+        try:
+            cur.execute(
+                "SELECT COUNT(fileset) from transactions WHERE `transaction` = %s",
+                (transaction_id,),
+            )
+            fileset_insertion_count = cur.fetchone()["COUNT(fileset)"]
+            category_text = f"Uploaded from {src}"
+            log_text = f"Completed loading DAT file, filename {filepath}, size {os.path.getsize(filepath)}, author {author}, version {version}. State {status}. Number of filesets: {fileset_insertion_count}. Transaction: {transaction_id}"
+            console_log(log_text)
+        except Exception as e:
+            print("Inserting failed:", e)
+        else:
+            user = f"cli:{getpass.getuser()}" if username is None else username
+            create_log(
+                escape_string(category_text), user, escape_string(log_text), conn
+            )
 
-    try:
-        cur.execute(
-            "SELECT COUNT(fileset) from transactions WHERE `transaction` = %s",
-            (transaction_id,),
-        )
-        fileset_insertion_count = cur.fetchone()["COUNT(fileset)"]
-        category_text = f"Uploaded from {src}"
-        log_text = f"Completed loading DAT file, filename {filepath}, size {os.path.getsize(filepath)}, author {author}, version {version}. State {status}. Number of filesets: {fileset_insertion_count}. Transaction: {transaction_id}"
-        console_log(log_text)
+        conn.commit()
     except Exception as e:
-        print("Inserting failed:", e)
-    else:
-        user = f"cli:{getpass.getuser()}" if username is None else username
-        create_log(escape_string(category_text), user, escape_string(log_text), conn)
+        conn.rollback()
+        print(f"Transaction failed: {e}")
+    finally:
+        conn.close()
 
 
 def compare_filesets(id1, id2, conn):
@@ -895,59 +905,27 @@ def match_fileset(data_arr, username=None, skiplog=False):
         print(f"Missing key in header: {e}")
         return
 
-    src = "dat" if author not in ["scan", "scummvm"] else author
-    detection = False
-    source_status = src
-
-    conn.cursor().execute("SET @fileset_time_last = %s", (int(time.time()),))
+    try:
+        src = "dat" if author not in ["scan", "scummvm"] else author
+        detection = False
+        source_status = src
 
-    with conn.cursor() as cursor:
-        cursor.execute("SELECT MAX(`transaction`) FROM transactions")
-        transaction_id = cursor.fetchone()["MAX(`transaction`)"]
-        transaction_id = transaction_id + 1 if transaction_id else 1
+        with conn.cursor() as cursor:
+            cursor.execute("SET @fileset_time_last = %s", (int(time.time()),))
+            cursor.execute("SELECT MAX(`transaction`) FROM transactions")
+            transaction_id = cursor.fetchone()["MAX(`transaction`)"]
+            transaction_id = transaction_id + 1 if transaction_id else 1
 
-    category_text = f"Uploaded from {src}"
-    log_text = f"Started loading DAT file {filepath}, size {os.path.getsize(filepath)}, author {author}, version {version}. State {source_status}. Transaction: {transaction_id}"
-    console_log(log_text)
-    console_log_total_filesets(filepath)
-    user = f"cli:{getpass.getuser()}" if username is None else username
-    create_log(escape_string(category_text), user, escape_string(log_text), conn)
+        category_text = f"Uploaded from {src}"
+        log_text = f"Started loading DAT file {filepath}, size {os.path.getsize(filepath)}, author {author}, version {version}. State {source_status}. Transaction: {transaction_id}"
+        console_log(log_text)
+        console_log_total_filesets(filepath)
+        user = f"cli:{getpass.getuser()}" if username is None else username
+        create_log(escape_string(category_text), user, escape_string(log_text), conn)
 
-    if src == "dat":
-        set_process(
-            game_data,
-            resources,
-            detection,
-            src,
-            conn,
-            transaction_id,
-            filepath,
-            author,
-            version,
-            source_status,
-            user,
-            skiplog,
-        )
-    elif src == "scan":
-        scan_process(
-            game_data,
-            resources,
-            detection,
-            src,
-            conn,
-            transaction_id,
-            filepath,
-            author,
-            version,
-            source_status,
-            user,
-            skiplog,
-        )
-    else:
-        game_data_lookup = {fs["name"]: fs for fs in game_data}
-        for fileset in game_data:
-            process_fileset(
-                fileset,
+        if src == "dat":
+            set_process(
+                game_data,
                 resources,
                 detection,
                 src,
@@ -958,11 +936,56 @@ def match_fileset(data_arr, username=None, skiplog=False):
                 version,
                 source_status,
                 user,
-                game_data_lookup,
+                skiplog,
             )
-        finalize_fileset_insertion(
-            conn, transaction_id, src, filepath, author, version, source_status, user
-        )
+        elif src == "scan":
+            scan_process(
+                game_data,
+                resources,
+                detection,
+                src,
+                conn,
+                transaction_id,
+                filepath,
+                author,
+                version,
+                source_status,
+                user,
+                skiplog,
+            )
+        else:
+            game_data_lookup = {fs["name"]: fs for fs in game_data}
+            for fileset in game_data:
+                process_fileset(
+                    fileset,
+                    resources,
+                    detection,
+                    src,
+                    conn,
+                    transaction_id,
+                    filepath,
+                    author,
+                    version,
+                    source_status,
+                    user,
+                    game_data_lookup,
+                )
+            finalize_fileset_insertion(
+                conn,
+                transaction_id,
+                src,
+                filepath,
+                author,
+                version,
+                source_status,
+                user,
+            )
+        conn.commit()
+    except Exception as e:
+        conn.rollback()
+        print(f"Transaction failed: {e}")
+    finally:
+        conn.close()
 
 
 def scan_process(
@@ -2639,7 +2662,6 @@ def delete_original_fileset(fileset_id, conn):
     with conn.cursor() as cursor:
         cursor.execute("DELETE FROM file WHERE fileset = %s", (fileset_id,))
         cursor.execute("DELETE FROM fileset WHERE id = %s", (fileset_id,))
-    conn.commit()
 
 
 def update_fileset_status(cursor, fileset_id, status):
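
Reduced to its skeleton, the transaction shape this commit gives both
entry points is the usual single-commit pattern (setup elided; db_connect
is the module's own helper):

    conn = db_connect()
    try:
        # ... all inserts/updates for one DAT file ...
        conn.commit()    # one commit covering the whole run
    except Exception as e:
        conn.rollback()  # discard partial work on any failure
        print(f"Transaction failed: {e}")
    finally:
        conn.close()

Inner helpers such as create_log and delete_original_fileset no longer
commit or roll back on their own, which is why those calls are removed
above.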


Commit: b6ae1265a1688c9cd27b49570915dba268c9cc8f
    https://github.com/scummvm/scummvm-sites/commit/b6ae1265a1688c9cd27b49570915dba268c9cc8f
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Remove early string escaping for database logs as queries have been parametrised

Changed paths:
    db_functions.py


diff --git a/db_functions.py b/db_functions.py
index b2cbc55..272935b 100644
--- a/db_functions.py
+++ b/db_functions.py
@@ -161,9 +161,7 @@ def insert_fileset(
         log_text = f"Updated Fileset:{existing_entry}, {log_text}"
         user = f"cli:{getpass.getuser()}" if username is None else username
         if not skiplog:
-            log_last = create_log(
-                escape_string(category_text), user, escape_string(log_text), conn
-            )
+            log_last = create_log(category_text, user, log_text, conn)
             update_history(existing_entry, existing_entry, conn, log_last)
 
         return (existing_entry, True)
@@ -187,9 +185,7 @@ def insert_fileset(
 
     user = f"cli:{getpass.getuser()}" if username is None else username
     if not skiplog and detection:
-        log_last = create_log(
-            escape_string(category_text), user, escape_string(log_text), conn
-        )
+        log_last = create_log(category_text, user, log_text, conn)
         update_history(fileset_last, fileset_last, conn, log_last)
     else:
         update_history(0, fileset_last, conn)
@@ -539,7 +535,7 @@ def db_insert(data_arr, username=None, skiplog=False):
         log_text = f"Started loading DAT file {filepath}, size {os.path.getsize(filepath)}, author {author}, version {version}. State {status}. Transaction: {transaction_id}"
 
         user = f"cli:{getpass.getuser()}" if username is None else username
-        create_log(escape_string(category_text), user, escape_string(log_text), conn)
+        create_log(category_text, user, log_text, conn)
 
         console_log(log_text)
         console_log_total_filesets(filepath)
@@ -573,7 +569,7 @@ def db_insert(data_arr, username=None, skiplog=False):
                 existing_entry = cursor.fetchone()
                 if existing_entry is not None:
                     log_text = f"Skipping Entry as similar entry already exsits - Fileset:{existing_entry['id']}. Skpped entry details - engineid = {engineid}, gameid = {gameid}, platform = {platform}, language = {lang}"
-                    create_log("Warning", user, escape_string(log_text), conn)
+                    create_log("Warning", user, log_text, conn)
                     console_log(log_text)
                     continue
 
@@ -637,9 +633,7 @@ def db_insert(data_arr, username=None, skiplog=False):
             print("Inserting failed:", e)
         else:
             user = f"cli:{getpass.getuser()}" if username is None else username
-            create_log(
-                escape_string(category_text), user, escape_string(log_text), conn
-            )
+            create_log(category_text, user, log_text, conn)
 
         conn.commit()
     except Exception as e:
@@ -858,16 +852,12 @@ def populate_matching_games():
             create_log(
                 "Fileset merge",
                 user,
-                escape_string(
-                    f"Merged Fileset:{matched_game['fileset']} and Fileset:{fileset[0][0]}"
-                ),
+                f"Merged Fileset:{matched_game['fileset']} and Fileset:{fileset[0][0]}",
                 conn,
             )
 
             # Matching log
-            log_last = create_log(
-                escape_string(conn, category_text), user, escape_string(conn, log_text)
-            )
+            log_last = create_log(category_text, user, log_text, conn)
 
             # Add log id to the history table
             cursor.execute(
@@ -921,7 +911,7 @@ def match_fileset(data_arr, username=None, skiplog=False):
         console_log(log_text)
         console_log_total_filesets(filepath)
         user = f"cli:{getpass.getuser()}" if username is None else username
-        create_log(escape_string(category_text), user, escape_string(log_text), conn)
+        create_log(category_text, user, log_text, conn)
 
         if src == "dat":
             set_process(
@@ -1064,9 +1054,7 @@ def scan_process(
                 fileset["description"] if "description" in fileset else ""
             )
             log_text = f"Drop fileset as no matching candidates. Name: {fileset_name} Description: {fileset_description}."
-            create_log(
-                escape_string(category_text), user, escape_string(log_text), conn
-            )
+            create_log(category_text, user, log_text, conn)
             dropped_early_no_candidate += 1
             delete_original_fileset(fileset_id, conn)
             continue
@@ -1105,11 +1093,11 @@ def scan_process(
         fileset_insertion_count = cursor.fetchone()["COUNT(fileset)"]
         category_text = f"Uploaded from {src}"
         log_text = f"Completed loading DAT file, filename {filepath}, size {os.path.getsize(filepath)}. State {source_status}. Number of filesets: {fileset_insertion_count}. Transaction: {transaction_id}"
-        create_log(escape_string(category_text), user, escape_string(log_text), conn)
+        create_log(category_text, user, log_text, conn)
         category_text = "Upload information"
         log_text = f"Number of filesets: {fileset_insertion_count}. Filesets automatically merged: {automatic_merged_filesets}. Filesets requiring manual merge (multiple candidates): {manual_merged_filesets}. Filesets requiring manual merge (matched with detection): {manual_merged_with_detection}. Filesets dropped, no candidate: {dropped_early_no_candidate}. Filesets matched with existing Full fileset: {match_with_full_fileset}. Filesets with mismatched files with Full fileset: {mismatch_with_full_fileset}. Filesets missing files compared to partial fileset candidate: {filesets_with_missing_files}."
         console_log(log_text)
-        create_log(escape_string(category_text), user, escape_string(log_text), conn)
+        create_log(category_text, user, log_text, conn)
 
 
 def pre_update_files(rom, filesets_check_for_full, transaction_id, conn):
@@ -1215,9 +1203,9 @@ def scan_perform_match(
                     log_text = f"Created Fileset:{fileset_id}. Name: {fileset_name} Description: {fileset_description}"
                     category_text = "Uploaded from scan."
                     create_log(
-                        escape_string(category_text),
+                        category_text,
                         user,
-                        escape_string(log_text),
+                        log_text,
                         conn,
                     )
                     console_log(log_text)
@@ -1273,9 +1261,9 @@ def scan_perform_match(
                     log_text = f"Created Fileset:{fileset_id}. Name: {fileset_name} Description: {fileset_description}"
                     category_text = "Uploaded from scan."
                     create_log(
-                        escape_string(category_text),
+                        category_text,
                         user,
-                        escape_string(log_text),
+                        log_text,
                         conn,
                     )
                     console_log(log_text)
@@ -1321,9 +1309,7 @@ def scan_perform_match(
         elif len(candidate_filesets) > 1:
             log_text = f"Created Fileset:{fileset_id}. Name: {fileset_name} Description: {fileset_description}"
             category_text = "Uploaded from scan."
-            create_log(
-                escape_string(category_text), user, escape_string(log_text), conn
-            )
+            create_log(category_text, user, log_text, conn)
             console_log(log_text)
             category_text = "Manual Merge - Multiple Candidates"
             log_text = f"Merge Fileset:{fileset_id} manually. Possible matches are: {', '.join(f'Fileset:{id}' for id in candidate_filesets)}."
@@ -1827,9 +1813,7 @@ def set_process(
             log_text = f"Drop fileset as no matching candidates. Name: {fileset_name} Description: {fileset_description}."
             console_log_text = f"Early fileset drop as no matching candidates. Name: {fileset_name} Description: {fileset_description}."
             no_candidate_logs.append(console_log_text)
-            create_log(
-                escape_string(category_text), user, escape_string(log_text), conn
-            )
+            create_log(category_text, user, log_text, conn)
             dropped_early_no_candidate += 1
             delete_original_fileset(fileset_id, conn)
             continue
@@ -1887,9 +1871,7 @@ def set_process(
                 )
                 log_text = f"Drop fileset, multiple filesets mapping to single detection. Name: {fileset_name} Description: {fileset_description}. Clashed with Fileset:{candidate} ({engine}:{gameid}-{platform}-{language})"
                 console_log(log_text)
-                create_log(
-                    escape_string(category_text), user, escape_string(log_text), conn
-                )
+                create_log(category_text, user, log_text, conn)
                 dropped_early_single_candidate_multiple_sets += 1
                 delete_original_fileset(set_fileset, conn)
                 del set_to_candidate_dict[set_fileset]
@@ -1950,18 +1932,14 @@ def set_process(
                 log_text = f"Drop fileset as no matching candidates. Name: {fileset_name} Description: {fileset_description}."
                 console_log_text = f"Fileset dropped as no candidates anymore. Name: {fileset_name} Description: {fileset_description}."
                 console_log(console_log_text)
-                create_log(
-                    escape_string(category_text), user, escape_string(log_text), conn
-                )
+                create_log(category_text, user, log_text, conn)
                 dropped_early_no_candidate += 1
                 manual_merged_filesets -= 1
                 delete_original_fileset(fileset_id, conn)
             else:
                 log_text = f"Created Fileset:{fileset_id}. Name: {fileset_name} Description: {fileset_description}"
                 category_text = "Uploaded from dat."
-                create_log(
-                    escape_string(category_text), user, escape_string(log_text), conn
-                )
+                create_log(category_text, user, log_text, conn)
                 console_log(log_text)
                 category_text = "Manual Merge Required"
                 log_text = f"Merge Fileset:{fileset_id} manually. Possible matches are: {', '.join(f'Fileset:{id}' for id in candidates)}."
@@ -1982,11 +1960,11 @@ def set_process(
         fileset_insertion_count = cursor.fetchone()["COUNT(fileset)"]
         category_text = f"Uploaded from {src}"
         log_text = f"Completed loading DAT file, filename {filepath}, size {os.path.getsize(filepath)}. State {source_status}. Number of filesets: {fileset_insertion_count}. Transaction: {transaction_id}"
-        create_log(escape_string(category_text), user, escape_string(log_text), conn)
+        create_log(category_text, user, log_text, conn)
         category_text = "Upload information"
         log_text = f"Number of filesets: {fileset_insertion_count}. Filesets automatically merged: {auto_merged_filesets}. Filesets dropped early (no candidate) - {dropped_early_no_candidate}. Filesets dropped early (mapping to single detection) - {dropped_early_single_candidate_multiple_sets}. Filesets requiring manual merge: {manual_merged_filesets}. Partial/Full filesets already present: {fully_matched_filesets}. Partial/Full filesets with mismatch {mismatch_filesets}."
         console_log(log_text)
-        create_log(escape_string(category_text), user, escape_string(log_text), conn)
+        create_log(category_text, user, log_text, conn)
 
 
 def set_filter_by_platform(gameid, candidate_filesets, conn):
@@ -2056,9 +2034,7 @@ def set_perform_match(
             log_text = f"Drop fileset as no matching candidates. Name: {fileset_name} Description: {fileset_description}."
             console_log_text = f"Fileset dropped as no candidates anymore. Name: {fileset_name} Description: {fileset_description}."
             no_candidate_logs.append(console_log_text)
-            create_log(
-                escape_string(category_text), user, escape_string(log_text), conn
-            )
+            create_log(category_text, user, log_text, conn)
             dropped_early_no_candidate += 1
             delete_original_fileset(fileset_id, conn)
         elif len(candidate_filesets) == 1:
@@ -2098,9 +2074,9 @@ def set_perform_match(
                     category_text = "Already present"
                     log_text = f"Already present as - Fileset:{matched_fileset_id}. Deleting Fileset:{fileset_id}"
                     log_last = create_log(
-                        escape_string(category_text),
+                        category_text,
                         user,
-                        escape_string(log_text),
+                        log_text,
                         conn,
                     )
                     update_history(fileset_id, matched_fileset_id, conn, log_last)
@@ -2111,9 +2087,9 @@ def set_perform_match(
                     log_text = f"Created Fileset:{fileset_id}. Name: {fileset_name} Description: {fileset_description}"
                     category_text = "Uploaded from dat."
                     create_log(
-                        escape_string(category_text),
+                        category_text,
                         user,
-                        escape_string(log_text),
+                        log_text,
                         conn,
                     )
                     console_log(log_text)
@@ -2226,7 +2202,7 @@ def add_manual_merge(
                 """
             cursor.execute(query, (child_fileset, parent_fileset))
 
-    create_log(escape_string(category_text), user, escape_string(log_text), conn)
+    create_log(category_text, user, log_text, conn)
     if print_text:
         print(print_text)
 
@@ -2968,9 +2944,7 @@ def log_matched_fileset(src, fileset_last, fileset_id, state, user, conn):
     log_text = (
         f"Matched Fileset:{fileset_last} with Fileset:{fileset_id}. State {state}."
     )
-    log_last = create_log(
-        escape_string(category_text), user, escape_string(log_text), conn
-    )
+    log_last = create_log(category_text, user, log_text, conn)
     update_history(fileset_last, fileset_id, conn, log_last)
 
 
@@ -2992,7 +2966,7 @@ def log_scan_match_with_full(
             f"Fileset matched completely with Full Fileset:{candidate_id}. Dropping."
         )
     print(log_text)
-    create_log(escape_string(category_text), user, escape_string(log_text), conn)
+    create_log(category_text, user, log_text, conn)
 
 
 def finalize_fileset_insertion(
@@ -3007,9 +2981,7 @@ def finalize_fileset_insertion(
         category_text = f"Uploaded from {src}"
         if src != "user":
             log_text = f"Completed loading DAT file, filename {filepath}, size {os.path.getsize(filepath)}, author {author}, version {version}. State {source_status}. Number of filesets: {fileset_insertion_count}. Transaction: {transaction_id}"
-            create_log(
-                escape_string(category_text), user, escape_string(log_text), conn
-            )
+            create_log(category_text, user, log_text, conn)
 
 
 def user_integrity_check(data, ip, game_metadata=None):
@@ -3051,9 +3023,7 @@ def user_integrity_check(data, ip, game_metadata=None):
 
             user = f"cli:{getpass.getuser()}"
 
-            create_log(
-                escape_string(category_text), user, escape_string(log_text), conn
-            )
+            create_log(category_text, user, log_text, conn)
 
             matched_map = find_matching_filesets(data, conn, src)
 
@@ -3186,7 +3156,7 @@ def user_integrity_check(data, ip, game_metadata=None):
     finally:
         category_text = f"Uploaded from {src}"
         log_text = f"Completed loading file, State {source_status}. Transaction: {transaction_id}"
-        create_log(escape_string(category_text), user, escape_string(log_text), conn)
+        create_log(category_text, user, log_text, conn)
         # conn.close()
     return matched_map, missing_map, extra_map
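
The escaping removed here would now be harmful rather than merely
redundant: create_log binds its arguments through the driver, so
pre-escaping a value stores the escape characters literally. With
pymysql's parametrised execute, quoting is the driver's job:

    query = "INSERT INTO log (`timestamp`, category, user, `text`) VALUES (FROM_UNIXTIME(%s), %s, %s, %s)"
    cursor.execute(query, (int(time.time()), category, user, text))
    # Passing escape_string(text) first would double-escape the stored value.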
 


Commit: 1ba6b8bf692db8747b0f497d8c4bb9123edafdab
    https://github.com/scummvm/scummvm-sites/commit/1ba6b8bf692db8747b0f497d8c4bb9123edafdab
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Remove deprecated/redundant code.

Changed paths:
  R user_fileset_functions.py
    db_functions.py
    fileset.py


diff --git a/db_functions.py b/db_functions.py
index 272935b..9b385eb 100644
--- a/db_functions.py
+++ b/db_functions.py
@@ -1,18 +1,14 @@
 import pymysql
 import json
-from collections import Counter
 import getpass
 import time
 import hashlib
 import os
-from pymysql.converters import escape_string
 from collections import defaultdict
 import re
 import copy
 import sys
 
-SPECIAL_SYMBOLS = '/":*|\\?%<>\x7f'
-
 
 def db_connect():
     console_log("Connecting to the Database.")
@@ -324,59 +320,6 @@ def delete_filesets(conn):
         cursor.execute(query)
 
 
-def my_escape_string(s: str) -> str:
-    """
-    Escape strings
-
-    Escape the following:
-    - escape char: \x81
-    - unallowed filename chars: https://en.wikipedia.org/wiki/Filename#Reserved_characters_and_words
-    - control chars < 0x20
-    """
-    new_name = ""
-    for char in s:
-        if char == "\x81":
-            new_name += "\x81\x79"
-        elif char in SPECIAL_SYMBOLS or ord(char) < 0x20:
-            new_name += "\x81" + chr(0x80 + ord(char))
-        else:
-            new_name += char
-    return new_name
-
-
-def encode_punycode(orig):
-    """
-    Punyencode strings
-
-    - escape special characters and
-    - ensure filenames can't end in a space or dot
-    """
-    s = my_escape_string(orig)
-    encoded = s.encode("punycode").decode("ascii")
-    # punyencoding adds an '-' at the end when there are no special chars
-    # don't use it for comparing
-    compare = encoded
-    if encoded.endswith("-"):
-        compare = encoded[:-1]
-    if orig != compare or compare[-1] in " .":
-        return "xn--" + encoded
-    return orig
-
-
-def punycode_need_encode(orig):
-    """
-    A filename needs to be punyencoded when it:
-
-    - contains a char that should be escaped or
-    - ends with a dot or a space.
-    """
-    if not all((0x20 <= ord(c) < 0x80) and c not in SPECIAL_SYMBOLS for c in orig):
-        return True
-    if orig[-1] in " .":
-        return True
-    return False
-
-
 def create_log(category, user, text, conn):
     with conn.cursor() as cursor:
         try:
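
For reference, the removed punycode helpers composed as follows (a sketch;
the sample filenames are illustrative):

    # Only names that need escaping get the punycode form; plain ASCII names
    # pass through unchanged.
    for name in ("plain.txt", "ends with dot.", "bad?name"):
        encoded = encode_punycode(name) if punycode_need_encode(name) else name
        print(name, "->", encoded)
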
@@ -643,233 +586,6 @@ def db_insert(data_arr, username=None, skiplog=False):
         conn.close()
 
 
-def compare_filesets(id1, id2, conn):
-    with conn.cursor() as cursor:
-        cursor.execute(
-            "SELECT name, size, `size-r`, `size-rd`, checksum FROM file WHERE fileset = %s",
-            (id1,),
-        )
-        fileset1 = cursor.fetchall()
-        cursor.execute(
-            "SELECT name, size, `size-r`, `size-rd`, checksum FROM file WHERE fileset = %s",
-            (id2,),
-        )
-        fileset2 = cursor.fetchall()
-
-    # Sort filesets on checksum
-    fileset1.sort(key=lambda x: x[2])
-    fileset2.sort(key=lambda x: x[2])
-
-    if len(fileset1) != len(fileset2):
-        return False
-
-    for i in range(len(fileset1)):
-        # If checksums do not match
-        if fileset1[i][2] != fileset2[i][2]:
-            return False
-
-    return True
-
-
-def status_to_match(status):
-    order = ["detection", "dat", "scan", "partialmatch", "fullmatch", "user"]
-    return order[: order.index(status)]
-
-
-def find_matching_game(game_files):
-    matching_games = []  # All matching games
-    matching_filesets = []  # All filesets containing one file from game_files
-    matches_count = 0  # Number of files with a matching detection entry
-
-    conn = db_connect()
-
-    for file in game_files:
-        checksum = file[1]
-
-        query = "SELECT file.fileset as file_fileset FROM filechecksum JOIN file ON filechecksum.file = file.id WHERE filechecksum.checksum = %s AND file.detection = TRUE"
-        with conn.cursor() as cursor:
-            cursor.execute(query, (checksum,))
-            records = cursor.fetchall()
-
-        # If file is not part of detection entries, skip it
-        if len(records) == 0:
-            continue
-
-        matches_count += 1
-        for record in records:
-            matching_filesets.append(record[0])
-
-    # Check if there is a fileset_id that is present in all results
-    for key, value in Counter(matching_filesets).items():
-        with conn.cursor() as cursor:
-            cursor.execute(
-                "SELECT COUNT(file.id) FROM file JOIN fileset ON file.fileset = fileset.id WHERE fileset.id = %s",
-                (key,),
-            )
-            count_files_in_fileset = cursor.fetchone()["COUNT(file.id)"]
-
-        # We use < instead of != since one file may have more than one entry in the fileset
-        # We see this in Drascula English version, where one entry is duplicated
-        if value < matches_count or value < count_files_in_fileset:
-            continue
-
-        with conn.cursor() as cursor:
-            cursor.execute(
-                "SELECT engineid, game.id, gameid, platform, language, `key`, src, fileset.id as fileset FROM game JOIN fileset ON fileset.game = game.id JOIN engine ON engine.id = game.engine WHERE fileset.id = %s",
-                (key,),
-            )
-            records = cursor.fetchall()
-
-        matching_games.append(records[0])
-
-    if len(matching_games) != 1:
-        return matching_games
-
-    # Check the current fileset priority with that of the match
-    with conn.cursor() as cursor:
-        cursor.execute(
-            f"SELECT id FROM fileset, ({query}) AS res WHERE id = file_fileset AND status IN ({', '.join(['%s'] * len(game_files[3]))})",
-            status_to_match(game_files[3]),
-        )
-        records = cursor.fetchall()
-
-    # If priority order is correct
-    if len(records) != 0:
-        return matching_games
-
-    if compare_filesets(matching_games[0]["fileset"], game_files[0][0], conn):
-        with conn.cursor() as cursor:
-            cursor.execute(
-                "UPDATE fileset SET `delete` = TRUE WHERE id = %s", (game_files[0][0],)
-            )
-        return []
-
-    return matching_games
-
-
-def merge_filesets(detection_id, dat_id):
-    conn = db_connect()
-
-    try:
-        with conn.cursor() as cursor:
-            cursor.execute(
-                "SELECT DISTINCT(filechecksum.checksum), checksize, checktype FROM filechecksum JOIN file on file.id = filechecksum.file WHERE fileset = %s'",
-                (detection_id,),
-            )
-            detection_files = cursor.fetchall()
-
-            for file in detection_files:
-                checksum = file[0]
-                checksize = file[1]
-                checktype = file[2]
-
-                cursor.execute(
-                    "DELETE FROM file WHERE checksum = %s AND fileset = %s LIMIT 1",
-                    (checksum, detection_id),
-                )
-                cursor.execute(
-                    "UPDATE file JOIN filechecksum ON filechecksum.file = file.id SET detection = TRUE, checksize = %s, checktype = %s WHERE fileset = %s AND filechecksum.checksum = %s",
-                    (checksize, checktype, dat_id, checksum),
-                )
-
-            cursor.execute(
-                "INSERT INTO history (`timestamp`, fileset, oldfileset) VALUES (FROM_UNIXTIME(%s), %s, %s)",
-                (int(time.time()), dat_id, detection_id),
-            )
-            cursor.execute("SELECT LAST_INSERT_ID()")
-            history_last = cursor.fetchone()["LAST_INSERT_ID()"]
-
-            cursor.execute(
-                "UPDATE history SET fileset = %s WHERE fileset = %s",
-                (dat_id, detection_id),
-            )
-            cursor.execute("DELETE FROM fileset WHERE id = %s", (detection_id,))
-
-        conn.commit()
-    except Exception as e:
-        conn.rollback()
-        print(f"Error merging filesets: {e}")
-    finally:
-        # conn.close()
-        pass
-
-    return history_last
-
-
-def populate_matching_games():
-    conn = db_connect()
-
-    # Getting unmatched filesets
-    unmatched_filesets = []
-
-    with conn.cursor() as cursor:
-        cursor.execute(
-            "SELECT fileset.id, filechecksum.checksum, src, status FROM fileset JOIN file ON file.fileset = fileset.id JOIN filechecksum ON file.id = filechecksum.file WHERE fileset.game IS NULL AND status != 'user'"
-        )
-        unmatched_files = cursor.fetchall()
-
-    # Splitting them into different filesets
-    i = 0
-    while i < len(unmatched_files):
-        cur_fileset = unmatched_files[i][0]
-        temp = []
-        while i < len(unmatched_files) and cur_fileset == unmatched_files[i][0]:
-            temp.append(unmatched_files[i])
-            i += 1
-        unmatched_filesets.append(temp)
-
-    for fileset in unmatched_filesets:
-        matching_games = find_matching_game(fileset)
-
-        if len(matching_games) != 1:  # If there is no match/non-unique match
-            continue
-
-        matched_game = matching_games[0]
-
-        # Update status depending on $matched_game["src"] (dat -> partialmatch, scan -> fullmatch)
-        status = fileset[0][2]
-        if fileset[0][2] == "dat":
-            status = "partialmatch"
-        elif fileset[0][2] == "scan":
-            status = "fullmatch"
-
-        # Convert NULL values to string with value NULL for printing
-        matched_game = {k: "NULL" if v is None else v for k, v in matched_game.items()}
-
-        category_text = f"Matched from {fileset[0][2]}"
-        log_text = f"Matched game {matched_game['engineid']}:\n{matched_game['gameid']}-{matched_game['platform']}-{matched_game['language']}\nvariant {matched_game['key']}. State {status}. Fileset:{fileset[0][0]}."
-
-        # Updating the fileset.game value to be $matched_game["id"]
-        query = "UPDATE fileset SET game = %s, status = %s, `key` = %s WHERE id = %s"
-
-        history_last = merge_filesets(matched_game["fileset"], fileset[0][0])
-
-        if cursor.execute(
-            query, (matched_game["id"], status, matched_game["key"], fileset[0][0])
-        ):
-            user = f"cli:{getpass.getuser()}"
-
-            create_log(
-                "Fileset merge",
-                user,
-                f"Merged Fileset:{matched_game['fileset']} and Fileset:{fileset[0][0]}",
-                conn,
-            )
-
-            # Matching log
-            log_last = create_log(conn, category_text, user, conn, log_text)
-
-            # Add log id to the history table
-            cursor.execute(
-                "UPDATE history SET log = %s WHERE id = %s", (log_last, history_last)
-            )
-
-        try:
-            conn.commit()
-        except Exception:
-            print("Updating matched games failed")
-
-
 def match_fileset(data_arr, username=None, skiplog=False):
     """
     data_arr -> tuple : (header, game_data, resources, filepath).
@@ -943,33 +659,6 @@ def match_fileset(data_arr, username=None, skiplog=False):
                 user,
                 skiplog,
             )
-        else:
-            game_data_lookup = {fs["name"]: fs for fs in game_data}
-            for fileset in game_data:
-                process_fileset(
-                    fileset,
-                    resources,
-                    detection,
-                    src,
-                    conn,
-                    transaction_id,
-                    filepath,
-                    author,
-                    version,
-                    source_status,
-                    user,
-                    game_data_lookup,
-                )
-            finalize_fileset_insertion(
-                conn,
-                transaction_id,
-                src,
-                filepath,
-                author,
-                version,
-                source_status,
-                user,
-            )
         conn.commit()
     except Exception as e:
         conn.rollback()
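
Per the docstring above, data_arr packs the parsed DAT into
(header, game_data, resources, filepath). A hypothetical invocation (the dict
keys are assumptions sketched from the surrounding code, not a real DAT):

    header = {"author": "scummvm", "version": "1.0"}
    game_data = [
        {"name": "toongame",
         "rom": [{"name": "data.001", "size": 1024, "md5": "d41d8..."}]},
    ]
    resources = {}
    match_fileset((header, game_data, resources, "Dats/example.dat"),
                  username="cli:editor", skiplog=True)
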
@@ -2429,78 +2118,6 @@ def is_candidate_by_checksize(candidate, fileset, conn):
         return False
 
 
-def process_fileset(
-    fileset,
-    resources,
-    detection,
-    src,
-    conn,
-    transaction_id,
-    filepath,
-    author,
-    version,
-    source_status,
-    user,
-    game_data_lookup,
-):
-    if detection:
-        insert_game_data(fileset, conn)
-
-    # Ideally romof should be enough, but adding in case of an edge case
-    current_name = fileset.get("romof") or fileset.get("cloneof")
-
-    # Iteratively check for extra files if linked to multiple filesets
-    while current_name:
-        if current_name in resources:
-            fileset["rom"] += resources[current_name]["rom"]
-            break
-
-        elif current_name in game_data_lookup:
-            linked = game_data_lookup[current_name]
-            fileset["rom"] += linked.get("rom", [])
-            current_name = linked.get("romof") or linked.get("cloneof")
-        else:
-            break
-
-    key = calc_key(fileset) if not detection else ""
-    megakey = calc_megakey(fileset) if detection else ""
-    log_text = f"size {os.path.getsize(filepath)}, author {author}, version {version}. State {source_status}."
-    if src != "dat":
-        matched_map = find_matching_filesets(fileset, conn, src)
-    else:
-        matched_map = matching_set(fileset, conn)
-
-    (fileset_id, _) = insert_new_fileset(
-        fileset, conn, detection, src, key, megakey, transaction_id, log_text, user
-    )
-
-    if matched_map:
-        handle_matched_filesets(
-            fileset_id,
-            matched_map,
-            fileset,
-            conn,
-            detection,
-            src,
-            key,
-            megakey,
-            transaction_id,
-            log_text,
-            user,
-        )
-
-
-def insert_game_data(fileset, conn):
-    engine_name = fileset["engine"]
-    engineid = fileset["sourcefile"]
-    gameid = fileset["name"]
-    title = fileset["title"]
-    extra = fileset["extra"]
-    platform = fileset["platform"]
-    lang = fileset["language"]
-    insert_game(engine_name, engineid, title, gameid, extra, platform, lang, conn)
-
-
 def find_matching_filesets(fileset, conn, status):
     matched_map = defaultdict(list)
     if status != "user":
@@ -2535,105 +2152,6 @@ def find_matching_filesets(fileset, conn, status):
     return matched_map
 
 
-def matching_set(fileset, conn):
-    matched_map = defaultdict(list)
-    with conn.cursor() as cursor:
-        for file in fileset["rom"]:
-            matched_set = set()
-            if "md5" in file:
-                checksum = file["md5"]
-                if ":" in checksum:
-                    checksum = checksum.split(":")[1]
-                size = file["size"]
-
-                query = """
-                    SELECT DISTINCT fs.id AS fileset_id
-                    FROM fileset fs
-                    JOIN file f ON fs.id = f.fileset
-                    JOIN filechecksum fc ON f.id = fc.file
-                    WHERE fc.checksum = %s AND fc.checktype LIKE 'md5%'
-                    AND fc.checksize > %s
-                    AND fs.status = 'detection'
-                """
-                cursor.execute(query, (checksum, size))
-                records = cursor.fetchall()
-                if records:
-                    for record in records:
-                        matched_set.add(record["fileset_id"])
-            for id in matched_set:
-                matched_map[id].append(file)
-    return matched_map
-
-
-def handle_matched_filesets(
-    fileset_last,
-    matched_map,
-    fileset,
-    conn,
-    detection,
-    src,
-    key,
-    megakey,
-    transaction_id,
-    log_text,
-    user,
-):
-    matched_list = sorted(matched_map.items(), key=lambda x: len(x[1]), reverse=True)
-    is_full_matched = False
-    with conn.cursor() as cursor:
-        for matched_fileset_id, matched_count in matched_list:
-            if is_full_matched:
-                break
-            cursor.execute(
-                "SELECT status FROM fileset WHERE id = %s", (matched_fileset_id,)
-            )
-            status = cursor.fetchone()["status"]
-            cursor.execute(
-                "SELECT COUNT(file.id) FROM file WHERE fileset = %s",
-                (matched_fileset_id,),
-            )
-            count = cursor.fetchone()["COUNT(file.id)"]
-
-            if status in ["detection", "obsolete"] and count == len(matched_count):
-                is_full_matched = True
-                update_fileset_status(
-                    cursor, matched_fileset_id, "full" if src != "dat" else "partial"
-                )
-                populate_file(fileset, matched_fileset_id, conn, detection)
-                log_matched_fileset(
-                    src,
-                    fileset_last,
-                    matched_fileset_id,
-                    "full" if src != "dat" else "partial",
-                    user,
-                    conn,
-                )
-                delete_original_fileset(fileset_last, conn)
-            elif status == "full" and len(fileset["rom"]) == count:
-                is_full_matched = True
-                log_matched_fileset(
-                    src, fileset_last, matched_fileset_id, "full", user, conn
-                )
-                delete_original_fileset(fileset_last, conn)
-                return
-            elif (status == "partial") and count == len(matched_count):
-                is_full_matched = True
-                update_fileset_status(cursor, matched_fileset_id, "full")
-                populate_file(fileset, matched_fileset_id, conn, detection)
-                log_matched_fileset(
-                    src, fileset_last, matched_fileset_id, "full", user, conn
-                )
-                delete_original_fileset(fileset_last, conn)
-            elif status == "scan" and count == len(matched_count):
-                log_matched_fileset(
-                    src, fileset_last, matched_fileset_id, "full", user, conn
-                )
-            elif src == "dat":
-                log_matched_fileset(
-                    src, fileset_last, matched_fileset_id, "partial matched", user, conn
-                )
-
-
 def delete_original_fileset(fileset_id, conn):
     with conn.cursor() as cursor:
         cursor.execute("DELETE FROM file WHERE fileset = %s", (fileset_id,))
@@ -2652,131 +2170,6 @@ def update_fileset_status(cursor, fileset_id, status):
     )
 
 
-def populate_file(fileset, fileset_id, conn, detection):
-    with conn.cursor() as cursor:
-        cursor.execute("SELECT * FROM file WHERE fileset = %s", (fileset_id,))
-        target_files = cursor.fetchall()
-        target_files_dict = {}
-        for target_file in target_files:
-            cursor.execute(
-                "SELECT * FROM filechecksum WHERE file = %s", (target_file["id"],)
-            )
-            target_checksums = cursor.fetchall()
-            for checksum in target_checksums:
-                target_files_dict[checksum["checksum"]] = target_file
-                target_files_dict[target_file["id"]] = (
-                    f"{checksum['checktype']}-{checksum['checksize']}"
-                )
-        for file in fileset["rom"]:
-            file_exists = False
-            checksum = ""
-            checksize = 5000
-            checktype = "None"
-            if "md5" in file:
-                checksum = file["md5"]
-            else:
-                for key, value in file.items():
-                    if "md5" in key:
-                        checksize, checktype, checksum = get_checksum_props(key, value)
-                        break
-
-            if not detection:
-                checktype = "None"
-                detection = 0
-            detection_type = (
-                f"{checktype}-{checksize}" if checktype != "None" else f"{checktype}"
-            )
-
-            extended_file_size = True if "size-r" in file else False
-
-            name = normalised_path(file["name"])
-            escaped_name = escape_string(name)
-
-            columns = ["name", "size"]
-            values = [f"'{escaped_name}'", f"'{file['size']}'"]
-
-            if extended_file_size:
-                columns.extend(["`size-r`", "`size-rd`"])
-                values.extend([f"'{file['size-r']}'", f"'{file['size-rd']}'"])
-
-            columns.extend(
-                ["checksum", "fileset", "detection", "detection_type", "`timestamp`"]
-            )
-            values.extend(
-                [
-                    f"'{checksum}'",
-                    str(fileset_id),
-                    str(detection),
-                    f"'{detection_type}'",
-                    "NOW()",
-                ]
-            )
-
-            query = (
-                f"INSERT INTO file ({', '.join(columns)}) VALUES ({', '.join(values)})"
-            )
-            cursor.execute(query)
-            cursor.execute("SET @file_last = LAST_INSERT_ID()")
-            cursor.execute("SELECT @file_last AS file_id")
-
-            file_id = cursor.fetchone()["file_id"]
-            d_type = 0
-            previous_checksums = {}
-
-            for key, value in file.items():
-                if key not in ["name", "size", "size-r", "size-rd", "sha1", "crc"]:
-                    insert_filechecksum(file, key, file_id, conn)
-                    if value in target_files_dict and not file_exists:
-                        cursor.execute(
-                            f"SELECT detection_type FROM file WHERE id = {target_files_dict[value]['id']}"
-                        )
-                        d_type = cursor.fetchone()["detection_type"]
-                        file_exists = True
-                        cursor.execute(
-                            f"SELECT * FROM file WHERE fileset = {fileset_id}"
-                        )
-                        target_files = cursor.fetchall()
-                        for target_file in target_files:
-                            cursor.execute(
-                                f"SELECT * FROM filechecksum WHERE file = {target_file['id']}"
-                            )
-                            target_checksums = cursor.fetchall()
-                            for checksum in target_checksums:
-                                previous_checksums[
-                                    f"{checksum['checktype']}-{checksum['checksize']}"
-                                ] = checksum["checksum"]
-                        cursor.execute(
-                            f"DELETE FROM file WHERE id = {target_files_dict[value]['id']}"
-                        )
-
-            if file_exists:
-                cursor.execute(
-                    f"SELECT checktype, checksize FROM filechecksum WHERE file = {file_id}"
-                )
-                existing_checks = cursor.fetchall()
-                existing_checksum = []
-                for existing_check in existing_checks:
-                    existing_checksum.append(
-                        existing_check["checktype"] + "-" + existing_check["checksize"]
-                    )
-                for key, value in previous_checksums.items():
-                    if key not in existing_checksum:
-                        checksize, checktype, checksum = get_checksum_props(key, value)
-                        cursor.execute(
-                            "INSERT INTO filechecksum (file, checksize, checktype, checksum) VALUES (%s, %s, %s, %s)",
-                            (file_id, checksize, checktype, checksum),
-                        )
-
-                cursor.execute(f"UPDATE file SET detection = 1 WHERE id = {file_id}")
-                cursor.execute(
-                    f"UPDATE file SET detection_type = '{d_type}' WHERE id = {file_id}"
-                )
-            else:
-                cursor.execute(
-                    f"UPDATE file SET detection_type = 'None' WHERE id = {file_id}"
-                )
-
-
 def set_populate_file(fileset, fileset_id, conn, detection):
     """
     Updates the old fileset in case of a match. Further deletes the newly created fileset which is not needed anymore.
@@ -3133,11 +2526,11 @@ def user_integrity_check(data, ip, game_metadata=None):
                 log_matched_fileset(
                     src, matched_fileset_id, matched_fileset_id, "full", user, conn
                 )
-            elif status == "partial" and count == matched_count:
-                populate_file(data, matched_fileset_id, conn, None, src)
-                log_matched_fileset(
-                    src, matched_fileset_id, matched_fileset_id, "partial", user, conn
-                )
+            # elif status == "partial" and count == matched_count:
+            #     populate_file(data, matched_fileset_id, conn, None, src)
+            #     log_matched_fileset(
+            #         src, matched_fileset_id, matched_fileset_id, "partial", user, conn
+            #     )
             elif status == "user" and count == matched_count:
                 add_usercount(matched_fileset_id, conn)
                 log_matched_fileset(
diff --git a/fileset.py b/fileset.py
index 7ee5dd8..4377d10 100644
--- a/fileset.py
+++ b/fileset.py
@@ -10,10 +10,6 @@ import pymysql.cursors
 import json
 import html as html_lib
 import os
-from user_fileset_functions import (
-    user_insert_fileset,
-    match_and_merge_user_filesets,
-)
 from pagination import create_page
 import difflib
 from db_functions import (
@@ -344,10 +340,6 @@ def fileset():
                 connection.commit()
                 html += "<p id='delete-confirm'>Fileset marked for deletion</p>"
 
-            if "match" in request.form:
-                match_and_merge_user_filesets(request.form["match"])
-                return redirect(url_for("fileset", id=request.form["match"]))
-
             # Generate the HTML for the fileset history
             cursor.execute(
                 "SELECT `timestamp`, category, `text`, id FROM log WHERE `text` REGEXP 'Fileset:%s' ORDER BY `timestamp` DESC, id DESC",
@@ -1134,28 +1126,24 @@ def validate():
 
     json_response = {"error": error_codes["success"], "files": []}
 
-    if not game_metadata:
-        if not json_object.get("files"):
-            json_response["error"] = error_codes["empty"]
-            del json_response["files"]
-            json_response["status"] = "empty_fileset"
-            return jsonify(json_response)
-
-        json_response["error"] = error_codes["no_metadata"]
-        del json_response["files"]
-        json_response["status"] = "no_metadata"
-
-        conn = db_connect()
-        try:
-            fileset_id = user_insert_fileset(json_object, ip, conn)
-        finally:
-            conn.close()
-        json_response["fileset"] = fileset_id
-        return jsonify(json_response)
-
-    matched_map = {}
-    missing_map = {}
-    extra_map = {}
+    # if not game_metadata:
+    #     if not json_object.get("files"):
+    #         json_response["error"] = error_codes["empty"]
+    #         del json_response["files"]
+    #         json_response["status"] = "empty_fileset"
+    #         return jsonify(json_response)
+
+    #     json_response["error"] = error_codes["no_metadata"]
+    #     del json_response["files"]
+    #     json_response["status"] = "no_metadata"
+
+    #     conn = db_connect()
+    #     try:
+    #         fileset_id = user_insert_fileset(json_object, ip, conn)
+    #     finally:
+    #         conn.close()
+    #     json_response["fileset"] = fileset_id
+    #     return jsonify(json_response)
 
     file_object = json_object["files"]
     if not file_object:
diff --git a/user_fileset_functions.py b/user_fileset_functions.py
deleted file mode 100644
index 6ca1c1f..0000000
--- a/user_fileset_functions.py
+++ /dev/null
@@ -1,205 +0,0 @@
-import hashlib
-import time
-from db_functions import (
-    db_connect,
-    insert_fileset,
-    insert_file,
-    insert_filechecksum,
-    find_matching_game,
-    merge_filesets,
-    create_log,
-    calc_megakey,
-)
-import getpass
-import pymysql
-
-
-def user_calc_key(user_fileset):
-    key_string = ""
-    for file in user_fileset:
-        for key, value in file.items():
-            if key != "checksums":
-                key_string += ":" + str(value)
-                continue
-            for checksum_pair in value:
-                key_string += ":" + checksum_pair["checksum"]
-    key_string = key_string.strip(":")
-    return hashlib.md5(key_string.encode()).hexdigest()
-
-
-def file_json_to_array(file_json_object):
-    res = {}
-    for key, value in file_json_object.items():
-        if key != "checksums":
-            res[key] = value
-            continue
-        for checksum_pair in value:
-            res[checksum_pair["type"]] = checksum_pair["checksum"]
-    return res
-
-
-def user_insert_queue(user_fileset, conn):
-    query = "INSERT INTO queue (time, notes, fileset, ticketid, userid, commit) VALUES (%s, NULL, @fileset_last, NULL, NULL, NULL)"
-
-    with conn.cursor() as cursor:
-        cursor.execute(query, (int(time.time()),))
-        conn.commit()
-
-
-def user_insert_fileset(user_fileset, ip, conn):
-    src = "user"
-    detection = False
-    key = ""
-    megakey = calc_megakey(user_fileset)
-    with conn.cursor() as cursor:
-        cursor.execute("SELECT MAX(`transaction`) FROM transactions")
-        transaction_id = cursor.fetchone()["MAX(`transaction`)"] + 1
-        log_text = "from user submitted files"
-        cursor.execute("SET @fileset_time_last = %s", (int(time.time()),))
-        if insert_fileset(
-            src, detection, key, megakey, transaction_id, log_text, conn, ip
-        ):
-            for file in user_fileset["files"]:
-                file = file_json_to_array(file)
-                insert_file(file, detection, src, conn)
-                for key, value in file.items():
-                    if key not in ["name", "size"]:
-                        insert_filechecksum(file, key, conn)
-        cursor.execute("SELECT @fileset_last")
-        fileset_id = cursor.fetchone()["@fileset_last"]
-    conn.commit()
-    return fileset_id
-
-
-def match_and_merge_user_filesets(id):
-    conn = db_connect()
-
-    # Getting unmatched filesets
-    unmatched_filesets = []
-
-    with conn.cursor() as cursor:
-        cursor.execute(
-            "SELECT fileset.id, filechecksum.checksum, src, status FROM fileset JOIN file ON file.fileset = fileset.id JOIN filechecksum ON file.id = filechecksum.file WHERE status = 'user' AND fileset.id = %s",
-            (id,),
-        )
-        unmatched_files = cursor.fetchall()
-
-    # Splitting them into different filesets
-    i = 0
-    while i < len(unmatched_files):
-        cur_fileset = unmatched_files[i][0]
-        temp = []
-        while i < len(unmatched_files) and cur_fileset == unmatched_files[i][0]:
-            temp.append(unmatched_files[i])
-            i += 1
-        unmatched_filesets.append(temp)
-
-    for fileset in unmatched_filesets:
-        matching_games = find_matching_game(fileset)
-
-        if len(matching_games) != 1:  # If there is no match/non-unique match
-            continue
-
-        matched_game = matching_games[0]
-
-        status = "full"
-
-        # Convert NULL values to string with value NULL for printing
-        matched_game = {k: "NULL" if v is None else v for k, v in matched_game.items()}
-
-        category_text = f"Matched from {fileset[0][2]}"
-        log_text = f"Matched game {matched_game['engineid']}:\n{matched_game['gameid']}-{matched_game['platform']}-{matched_game['language']}\nvariant {matched_game['key']}. State {status}. Fileset:{fileset[0][0]}."
-
-        # Updating the fileset.game value to be $matched_game["id"]
-        query = "UPDATE fileset SET game = %s, status = %s, `key` = %s WHERE id = %s"
-
-        history_last = merge_filesets(matched_game["fileset"], fileset[0][0])
-
-        if cursor.execute(
-            query, (matched_game["id"], status, matched_game["key"], fileset[0][0])
-        ):
-            user = f"cli:{getpass.getuser()}"
-
-            # Merge log
-            create_log(
-                "Fileset merge",
-                user,
-                pymysql.escape_string(
-                    conn,
-                    f"Merged Fileset:{matched_game['fileset']} and Fileset:{fileset[0][0]}",
-                ),
-            )
-
-            # Matching log
-            log_last = create_log(
-                pymysql.escape_string(conn, category_text),
-                user,
-                pymysql.escape_string(conn, log_text),
-            )
-
-            # Add log id to the history table
-            cursor.execute(
-                "UPDATE history SET log = %s WHERE id = %s", (log_last, history_last)
-            )
-
-        if not conn.commit():
-            print("Updating matched games failed")
-    with conn.cursor() as cursor:
-        cursor.execute(
-            """
-            SELECT fileset.id, filechecksum.checksum, src, status
-            FROM fileset
-            JOIN file ON file.fileset = fileset.id
-            JOIN filechecksum ON file.id = filechecksum.file
-            WHERE status = 'user' AND fileset.id = %s
-        """,
-            (id,),
-        )
-        unmatched_files = cursor.fetchall()
-
-    unmatched_filesets = []
-    cur_fileset = None
-    temp = []
-    for file in unmatched_files:
-        if cur_fileset is None or cur_fileset != file["id"]:
-            if temp:
-                unmatched_filesets.append(temp)
-            cur_fileset = file["id"]
-            temp = []
-        temp.append(file)
-    if temp:
-        unmatched_filesets.append(temp)
-
-    for fileset in unmatched_filesets:
-        matching_games = find_matching_game(fileset)
-        if len(matching_games) != 1:
-            continue
-        matched_game = matching_games[0]
-        status = "full"
-        matched_game = {
-            k: ("NULL" if v is None else v) for k, v in matched_game.items()
-        }
-        category_text = f"Matched from {fileset[0]['src']}"
-        log_text = f"Matched game {matched_game['engineid']}: {matched_game['gameid']}-{matched_game['platform']}-{matched_game['language']} variant {matched_game['key']}. State {status}. Fileset:{fileset[0]['id']}."
-        query = """
-            UPDATE fileset
-            SET game = %s, status = %s, `key` = %s
-            WHERE id = %s
-        """
-        history_last = merge_filesets(matched_game["fileset"], fileset[0]["id"])
-        with conn.cursor() as cursor:
-            cursor.execute(
-                query,
-                (matched_game["id"], status, matched_game["key"], fileset[0]["id"]),
-            )
-            user = "cli:" + getpass.getuser()
-            create_log(
-                "Fileset merge",
-                user,
-                f"Merged Fileset:{matched_game['fileset']} and Fileset:{fileset[0]['id']}",
-            )
-            log_last = create_log(category_text, user, log_text)
-            cursor.execute(
-                "UPDATE history SET log = %s WHERE id = %s", (log_last, history_last)
-            )
-        conn.commit()
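
The removed module grouped consecutive query rows into per-fileset lists by
hand in two places; an equivalent idiomatic sketch (assumes dict rows
clustered by fileset id, as in the second query above):

    from itertools import groupby
    from operator import itemgetter

    def split_into_filesets(rows):
        # Groups consecutive rows that share the same fileset id.
        return [list(group) for _, group in groupby(rows, key=itemgetter("id"))]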


Commit: 696611f39a9bf93ba8dec9f7788d85d32f88f950
    https://github.com/scummvm/scummvm-sites/commit/696611f39a9bf93ba8dec9f7788d85d32f88f950
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Improve homepage navbar.

Changed paths:
  A templates/home.html
  R index.html
    fileset.py
    pagination.py
    static/style.css


diff --git a/fileset.py b/fileset.py
index 4377d10..cdcbe0d 100644
--- a/fileset.py
+++ b/fileset.py
@@ -5,6 +5,7 @@ from flask import (
     url_for,
     render_template_string,
     jsonify,
+    render_template,
 )
 import pymysql.cursors
 import json
@@ -32,37 +33,12 @@ secret_key = os.urandom(24)
 
 @app.route("/")
 def index():
-    html = """
-    <!DOCTYPE html>
-    <html>
-    <head>
-        <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
-    </head>
-    <body>
-    <nav style="position: fixed; top: 0; left: 0; right: 0; background: white; padding: 3px; border-bottom: 1px solid #ccc;">
-    <a href="{{ url_for('index') }}">
-        <img src="{{ url_for('static', filename='integrity_service_logo_256.png') }}" alt="Logo" style="height:60px; vertical-align:middle;">
-    </a>
-    </nav>
-    <h1 style="margin-top: 80px;">Fileset Database</h1>
-    <h2>Fileset Actions</h2>
-    <ul>
-        <li><a href="{{ url_for('fileset') }}">Fileset</a></li>
-        <li><a href="{{ url_for('user_games_list') }}">User Games List</a></li>
-        <li><a href="{{ url_for('ready_for_review') }}">Ready for review</a></li>
-        <li><a href="{{ url_for('fileset_search') }}">Fileset Search</a></li>
-    </ul>
-    <h2>Logs</h2>
-    <ul>
-        <li><a href="{{ url_for('logs') }}">Logs</a></li>
-    </ul>
-    <form action="{{ url_for('clear_database') }}" method="POST"> 
-        <button style="margin:100px 0 0 0; background-color:red"  type="submit"> Clear Database </button>
-    </form>
-    </body>
-    </html>
-    """
-    return render_template_string(html)
+    return redirect(url_for("logs"))
+
+
+ at app.route("/home")
+def home():
+    return render_template("home.html")
 
 
 @app.route("/clear_database", methods=["POST"])
@@ -148,10 +124,18 @@ def fileset():
                 <link rel="stylesheet" type="text/css" href="{{{{ url_for('static', filename='style.css') }}}}">
             </head>
             <body>
-            <nav style="position: fixed; top: 0; left: 0; right: 0; background: white; padding: 3px; border-bottom: 1px solid #ccc;">
-                <a href="{{{{ url_for('index') }}}}">
-                    <img src="{{{{ url_for('static', filename='integrity_service_logo_256.png') }}}}" alt="Logo" style="height:60px; vertical-align:middle;">
-                </a>
+            <nav>
+                <div class="logo">
+                    <a href="{{{{ url_for('home') }}}}">
+                        <img src="{{{{ url_for('static', filename='integrity_service_logo_256.png') }}}}" alt="Logo">
+                    </a>
+                </div>
+                <div class="nav-buttons">
+                    <a href="{{{{ url_for('user_games_list') }}}}">User Games List</a>
+                    <a href="{{{{ url_for('ready_for_review') }}}}">Ready for review</a>
+                    <a href="{{{{ url_for('fileset_search') }}}}">Fileset Search</a>
+                    <a href="{{{{ url_for('logs') }}}}">Logs</a>
+                </div>
             </nav>
             <h2 style="margin-top: 80px;"><u>Fileset: {id}</u></h2>
             <table>
@@ -501,10 +485,18 @@ def merge_fileset(id):
                     <link rel="stylesheet" type="text/css" href="{{{{ url_for('static', filename='style.css') }}}}">
                 </head>
                 <body>
-                <nav style="position: fixed; top: 0; left: 0; right: 0; background: white; padding: 3px; border-bottom: 1px solid #ccc;">
-                    <a href="{{{{ url_for('index') }}}}">
-                        <img src="{{{{ url_for('static', filename='integrity_service_logo_256.png') }}}}" alt="Logo" style="height:60px; vertical-align:middle;">
-                    </a>
+                <nav>
+                    <div class="logo">
+                        <a href="{{{{ url_for('home') }}}}">
+                            <img src="{{{{ url_for('static', filename='integrity_service_logo_256.png') }}}}" alt="Logo">
+                        </a>
+                    </div>
+                    <div class="nav-buttons">
+                        <a href="{{{{ url_for('user_games_list') }}}}">User Games List</a>
+                        <a href="{{{{ url_for('ready_for_review') }}}}">Ready for review</a>
+                        <a href="{{{{ url_for('fileset_search') }}}}">Fileset Search</a>
+                        <a href="{{{{ url_for('logs') }}}}">Logs</a>
+                    </div>
                 </nav>
                 <h2 style="margin-top: 80px;">Search Results for '{search_query}'</h2>
                 <form method="POST">
@@ -540,10 +532,18 @@ def merge_fileset(id):
         <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
     </head>
     <body>
-    <nav style="position: fixed; top: 0; left: 0; right: 0; background: white; padding: 3px; border-bottom: 1px solid #ccc;">
-        <a href="{{ url_for('index') }}">
-            <img src="{{ url_for('static', filename='integrity_service_logo_256.png') }}" alt="Logo" style="height:60px; vertical-align:middle;">
-        </a>
+    <nav>
+        <div class="logo">
+            <a href="{{ url_for('home') }}">
+                <img src="{{ url_for('static', filename='integrity_service_logo_256.png') }}" alt="Logo">
+            </a>
+        </div>
+        <div class="nav-buttons">
+            <a href="{{ url_for('user_games_list') }}">User Games List</a>
+            <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
+            <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+            <a href="{{ url_for('logs') }}">Logs</a>
+        </div>
     </nav>
     <h2 style="margin-top: 80px;">Search Fileset to Merge</h2>
     <form method="POST">
@@ -599,10 +599,18 @@ def possible_merge_filesets(id):
                 <link rel="stylesheet" type="text/css" href="{{{{ url_for('static', filename='style.css') }}}}">
             </head>
             <body>
-            <nav style="position: fixed; top: 0; left: 0; right: 0; background: white; padding: 3px; border-bottom: 1px solid #ccc;">
-                <a href="{{{{ url_for('index') }}}}">
-                    <img src="{{{{ url_for('static', filename='integrity_service_logo_256.png') }}}}" alt="Logo" style="height:60px; vertical-align:middle;">
-                </a>
+            <nav>
+                <div class="logo">
+                    <a href="{{{{ url_for('home') }}}}">
+                        <img src="{{{{ url_for('static', filename='integrity_service_logo_256.png') }}}}" alt="Logo">
+                    </a>
+                </div>
+                <div class="nav-buttons">
+                    <a href="{{{{ url_for('user_games_list') }}}}">User Games List</a>
+                    <a href="{{{{ url_for('ready_for_review') }}}}">Ready for review</a>
+                    <a href="{{{{ url_for('fileset_search') }}}}">Fileset Search</a>
+                    <a href="{{{{ url_for('logs') }}}}">Logs</a>
+                </div>
             </nav>
             <h2 style="margin-top: 80px;">Possible Merges for fileset-'{id}'</h2>
             <table>
@@ -731,10 +739,18 @@ def confirm_merge(id):
                 <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
             </head>
             <body>
-            <nav style="position: fixed; top: 0; left: 0; right: 0; background: white; padding: 3px; border-bottom: 1px solid #ccc;">
-                <a href="{{ url_for('index') }}">
-                    <img src="{{ url_for('static', filename='integrity_service_logo_256.png') }}" alt="Logo" style="height:60px; vertical-align:middle;">
-                </a>
+            <nav>
+                <div class="logo">
+                    <a href="{{ url_for('home') }}">
+                        <img src="{{ url_for('static', filename='integrity_service_logo_256.png') }}" alt="Logo">
+                    </a>
+                </div>
+                <div class="nav-buttons">
+                    <a href="{{ url_for('user_games_list') }}">User Games List</a>
+                    <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
+                    <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+                    <a href="{{ url_for('logs') }}">Logs</a>
+                </div>
             </nav>
             <h2 style="margin-top: 80px;">Confirm Merge</h2>
             <form id="confirm_merge_form">
diff --git a/index.html b/index.html
deleted file mode 100644
index a6c5e6d..0000000
--- a/index.html
+++ /dev/null
@@ -1,2 +0,0 @@
-<a href="games_list.php">List of Detection entries</a><br/>
-<a href="logs.php">Logs of developer actions</a><br/>
diff --git a/pagination.py b/pagination.py
index 8497ec4..79d61e9 100644
--- a/pagination.py
+++ b/pagination.py
@@ -144,10 +144,18 @@ def create_page(
         <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
     </head>
     <body>
-    <nav style="position: fixed; top: 0; left: 0; right: 0; background: white; padding: 3px; border-bottom: 1px solid #ccc;">
-        <a href="{{ url_for('index') }}">
-            <img src="{{ url_for('static', filename='integrity_service_logo_256.png') }}" alt="Logo" style="height:60px; vertical-align:middle;">
-        </a>
+    <nav>
+        <div class="logo">
+            <a href="{{ url_for('home') }}">
+                <img src="{{ url_for('static', filename='integrity_service_logo_256.png') }}" alt="Logo">
+            </a>
+        </div>
+        <div class="nav-buttons">
+            <a href="{{ url_for('user_games_list') }}">User Games List</a>
+            <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
+            <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+            <a href="{{ url_for('logs') }}">Logs</a>
+        </div>
     </nav>
 <form id='filters-form' method='GET' onsubmit='remove_empty_inputs()'>
 <table style="margin-top: 80px;">
diff --git a/static/style.css b/static/style.css
index 1c9e599..527824b 100644
--- a/static/style.css
+++ b/static/style.css
@@ -3,18 +3,34 @@
   font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
 }
 
-td, th {
+td,
+th {
   padding-inline: 5px;
 }
 
-tr:nth-child(even) {background-color: #f2f2f2;}
-tr {background-color: white;}
+tr:nth-child(even) {
+  background-color: #f2f2f2;
+}
+
+tr {
+  background-color: white;
+}
+
+tr:hover {
+  background-color: #ddd;
+}
+
+tr.games_list:hover {
+  cursor: pointer;
+}
 
-tr:hover {background-color: #ddd;}
-tr.games_list:hover {cursor: pointer;}
+tr.filter:hover {
+  background-color: inherit;
+}
 
-tr.filter:hover {background-color:inherit;}
-td.filter {text-align: center;}
+td.filter {
+  text-align: center;
+}
 
 th {
   padding-top: 5px;
@@ -26,7 +42,38 @@ th {
 
 th a {
   color: white;
-  text-decoration: none; /* no underline */
+  text-decoration: none;
+  /* no underline */
+}
+
+nav {
+  position: fixed;
+  top: 0;
+  left: 0;
+  right: 0;
+  border-bottom: 1px solid #ccc;
+  background-color: white;
+  display: flex;
+  padding: 0px 20px 0px 20px;
+  align-items: center;
+  flex-wrap: wrap;
+  z-index: 1000;
+}
+
+.nav-buttons a {
+  text-decoration: none;
+  padding: 10px 16px;
+  border-radius: 6px;
+  margin-left: 10px;
+}
+
+/* .nav-buttons a:hover {
+  box-shadow: 0 4px 12px rgba(39, 145, 232, 0.4);
+} */
+
+.logo img {
+  height: 75px;
+  vertical-align: middle;
 }
 
 button {
@@ -39,8 +86,11 @@ button {
 }
 
 button:hover {
-  background-color: #29afe0;
+  background-color: #1f7fc4;
+  color: #f2f2f2;
+  box-shadow: 0 4px 12px rgba(39, 145, 232, 0.4);
 }
+
 button:active {
   background-color: #1a95c2;
 }
@@ -55,13 +105,17 @@ input[type=submit] {
 }
 
 input[type=submit]:hover {
-  background-color: #29afe0;
+  background-color: #1f7fc4;
+  color: #f2f2f2;
+  box-shadow: 0 4px 12px rgba(39, 145, 232, 0.4);
 }
+
 input[type=submit]:active {
   background-color: #1a95c2;
 }
 
-input[type=text], select {
+input[type=text],
+select {
   width: 25%;
   height: 38px;
   padding: 6px 12px;
diff --git a/templates/home.html b/templates/home.html
new file mode 100644
index 0000000..458f5fb
--- /dev/null
+++ b/templates/home.html
@@ -0,0 +1,105 @@
+<!DOCTYPE html>
+<html>
+
+<head>
+    <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
+    <style>
+        body {
+            margin: 0;
+            font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
+        }
+
+        .title {
+            align-items: center;
+            display: flex;
+            flex-direction: column;
+            height: 100vh;
+            gap: 50px;
+        }
+
+        .fileset_database {
+            margin-top: 10vh;
+            text-align: center;
+            background-color: #ffffff;
+            color: #000000;
+            border-radius: 5px;
+            padding: 10px;
+            font-size: 100px
+        }
+
+        .dev {
+            background-color: #fafeff;
+            padding: 10px;
+            border-radius: 5px;
+            margin-left: auto;
+        }
+
+        button {
+            background-color: #d9534f;
+            color: white;
+            padding: 10px 20px;
+            font-size: 16px;
+            border: none;
+            border-radius: 4px;
+            cursor: pointer;
+            transition: background-color 0.3s, box-shadow 0.3s;
+        }
+
+        button:hover {
+            background-color: #c9302c;
+            box-shadow: 0 4px 12px rgba(0, 0, 0, 0.2);
+        }
+
+        h3 {
+            color: #000000;
+        }
+
+        @media (max-width: 768px) {
+            .fileset_database {
+                font-size: 48px;
+            }
+        }
+
+        @media (max-width: 480px) {
+            .fileset_database {
+                font-size: 32px;
+            }
+
+            nav {
+                padding: 10px;
+            }
+
+            .nav-buttons a {
+                margin-bottom: 5px;
+                display: block;
+                text-align: center;
+            }
+        }
+    </style>
+</head>
+
+<body>
+    <nav>
+        <div class="logo">
+            <a href="{{ url_for('home') }}">
+                <img src="{{ url_for('static', filename='integrity_service_logo_256.png') }}" alt="Logo">
+            </a>
+        </div>
+        <div class="nav-buttons">
+            <a href="{{ url_for('user_games_list') }}">User Games List</a>
+            <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
+            <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+            <a href="{{ url_for('logs') }}">Logs</a>
+        </div>
+        <div class="dev">
+            <form action="{{ url_for('clear_database') }}" method="POST">
+                <button type="submit">Clear Database</button>
+            </form>
+        </div>
+    </nav>
+    <div class="title">
+        <div class="fileset_database">Fileset Database</div>
+    </div>
+</body>
+
+</html>


Commit: 1612a80ce26af5ffec66c92c9892d0c2473bc0a3
    https://github.com/scummvm/scummvm-sites/commit/1612a80ce26af5ffec66c92c9892d0c2473bc0a3
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Fix incorrect fileset redirection issue in logs.

Changed paths:
    fileset.py


diff --git a/fileset.py b/fileset.py
index cdcbe0d..628e539 100644
--- a/fileset.py
+++ b/fileset.py
@@ -88,6 +88,16 @@ def fileset():
             # Get the id from the GET parameters, or use the minimum id if it's not provided
             id = request.args.get("id", default=min_id, type=int)
 
+            # Check if the id exists in the fileset table
+            cursor.execute("SELECT id FROM fileset WHERE id = %s", (id,))
+            if cursor.rowcount == 0:
+                # If the id doesn't exist, get a new id from the history table
+                cursor.execute(
+                    "SELECT fileset FROM history WHERE oldfileset = %s", (id,)
+                )
+                id = cursor.fetchone()["fileset"]
+                return redirect(f"/fileset?id={id}")
+
             # Get the maximum id from the fileset table
             cursor.execute("SELECT MAX(id) FROM fileset")
             max_id = cursor.fetchone()["MAX(id)"]
@@ -100,15 +110,6 @@ def fileset():
             # Ensure the id is between the minimum and maximum id
             id = max(min_id, min(id, max_id))
 
-            # Check if the id exists in the fileset table
-            cursor.execute("SELECT id FROM fileset WHERE id = %s", (id,))
-            if cursor.rowcount == 0:
-                # If the id doesn't exist, get a new id from the history table
-                cursor.execute(
-                    "SELECT fileset FROM history WHERE oldfileset = %s", (id,)
-                )
-                id = cursor.fetchone()["fileset"]
-
             # Get the history for the current id
             cursor.execute(
                 "SELECT `timestamp`, oldfileset, log FROM history WHERE fileset = %s ORDER BY `timestamp`",


Commit: 2de285adda4c316b4b395da49cc26f9a58efd6e3
    https://github.com/scummvm/scummvm-sites/commit/2de285adda4c316b4b395da49cc26f9a58efd6e3
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add fileset redirection message for merged filesets in the new fileset.

Changed paths:
    fileset.py
    pagination.py


diff --git a/fileset.py b/fileset.py
index 628e539..0f0cab0 100644
--- a/fileset.py
+++ b/fileset.py
@@ -62,6 +62,7 @@ def clear_database():
 @app.route("/fileset", methods=["GET", "POST"])
 def fileset():
     id = request.args.get("id", default=1, type=int)
+    old_id = request.args.get("redirected_from", default=None, type=int)
     widetable = request.args.get("widetable", default="partial", type=str)
     # Load MySQL credentials from a JSON file
     base_dir = os.path.dirname(os.path.abspath(__file__))
@@ -95,8 +96,9 @@ def fileset():
                 cursor.execute(
                     "SELECT fileset FROM history WHERE oldfileset = %s", (id,)
                 )
+                old_id = id
                 id = cursor.fetchone()["fileset"]
-                return redirect(f"/fileset?id={id}")
+                return redirect(f"/fileset?id={id}&redirected_from={old_id}")
 
             # Get the maximum id from the fileset table
             cursor.execute("SELECT MAX(id) FROM fileset")
@@ -141,6 +143,8 @@ def fileset():
             <h2 style="margin-top: 80px;"><u>Fileset: {id}</u></h2>
             <table>
             """
+            if old_id is not None:
+                html += f"""<h3><u>Redirected from Fileset: {old_id}</u></h3>"""
             html += f"<button type='button' onclick=\"location.href='/fileset/{id}/merge'\">Manual Merge</button>"
             # html += f"<button type='button' onclick=\"location.href='/fileset/{id}/possible_merge'\">Possible Merges</button>"
             html += f"""
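
The redirect threads the original id through as a query parameter so the
target page can render the notice. A self-contained Flask sketch of the same
pattern (the HISTORY map and route body are toy stand-ins):

    from flask import Flask, redirect, request

    app = Flask(__name__)
    HISTORY = {7: 12}  # oldfileset -> fileset it was merged into (toy data)

    @app.route("/fileset")
    def fileset():
        id = request.args.get("id", type=int)
        old_id = request.args.get("redirected_from", default=None, type=int)
        if id in HISTORY:
            return redirect(f"/fileset?id={HISTORY[id]}&redirected_from={id}")
        note = f"Redirected from Fileset: {old_id}. " if old_id is not None else ""
        return f"{note}Fileset: {id}"
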
diff --git a/pagination.py b/pagination.py
index 79d61e9..0890580 100644
--- a/pagination.py
+++ b/pagination.py
@@ -218,14 +218,6 @@ def create_page(
                     matches = re.findall(r"Fileset:(\d+)", value)
                     for fileset_id in matches:
                         fileset_text = f"Fileset:{fileset_id}"
-                        with conn.cursor() as cursor:
-                            cursor.execute(
-                                "SELECT fileset FROM history WHERE oldfileset = %s AND oldfileset != fileset",
-                                (fileset_id,),
-                            )
-                            row = cursor.fetchone()
-                            if row:
-                                fileset_id = row["fileset"]
                         value = value.replace(
                             fileset_text,
                             f"<a href='fileset?id={fileset_id}'>{fileset_text}</a>",


Commit: 0c57b6ffc8b734fa13d34afb63523a00385c7974
    https://github.com/scummvm/scummvm-sites/commit/0c57b6ffc8b734fa13d34afb63523a00385c7974
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Display only matched files in confirm merge by default, introduce checkboxes for showing more details.

Changed paths:
    fileset.py


diff --git a/fileset.py b/fileset.py
index 0f0cab0..8aa957f 100644
--- a/fileset.py
+++ b/fileset.py
@@ -22,6 +22,7 @@ from db_functions import (
     db_connect_root,
     get_checksum_props,
     delete_original_fileset,
+    normalised_path,
 )
 from collections import defaultdict
 from schema import init_database
@@ -642,6 +643,48 @@ def possible_merge_filesets(id):
         connection.close()
 
 
+def get_file_status(candidate_fileset, fileset, conn):
+    """
+    Returns a list of matched file tuples:
+    (candidate_file_name, dat_file_name)
+    """
+    with conn.cursor() as cursor:
+        cursor.execute(
+            "SELECT id, name, size, `size-r`, `size-rd` FROM file WHERE fileset = %s",
+            (candidate_fileset,),
+        )
+        candidate_file_rows = cursor.fetchall()
+
+        candidate_files = {
+            row["id"]: [row["name"], row["size"], row["size-r"], row["size-rd"]]
+            for row in candidate_file_rows
+        }
+
+        dat_sizes = set()
+        dat_names_by_sizes = {}
+
+        for file in fileset["rom"]:
+            (name, size, size_r, size_rd) = file
+            base_name = os.path.basename(normalised_path(name)).lower()
+            key = (size, size_r, size_rd, base_name)
+            dat_sizes.add(key)
+            dat_names_by_sizes[key] = name
+
+        matched_files = []
+
+        for file_id, [file_name, size, size_r, size_rd] in candidate_files.items():
+            base_name = os.path.basename(file_name).lower()
+            key_exact = (size, size_r, size_rd, base_name)
+            key_fallback = (-1, size_r, size_rd, base_name)
+
+            if key_exact in dat_sizes:
+                matched_files.append((file_name, dat_names_by_sizes[key_exact]))
+            elif key_fallback in dat_sizes:
+                matched_files.append((file_name, dat_names_by_sizes[key_fallback]))
+
+        return matched_files
+
+
 @app.route("/fileset/<int:id>/merge/confirm", methods=["GET", "POST"])
 def confirm_merge(id):
     target_id = (
@@ -686,12 +729,12 @@ def confirm_merge(id):
             )
             source_fileset = cursor.fetchone()
 
-            # Select all files
+            # Select all files (LEFT JOIN keeps files without checksums)
             file_query = """
                 SELECT f.name, f.size, f.`size-r`, f.`size-rd`, 
                 fc.checksum, fc.checksize, fc.checktype, f.detection
                 FROM file f
-                JOIN filechecksum fc ON fc.file = f.id
+                LEFT JOIN filechecksum fc ON fc.file = f.id
                 WHERE f.fileset = %s
             """
             cursor.execute(file_query, (id,))
@@ -719,6 +762,23 @@ def confirm_merge(id):
             cursor.execute(file_query, (target_id,))
             target_files = cursor.fetchall()
 
+            source_files_set = set()
+            source_fileset_with_files = {}
+
+            for source_file in source_files:
+                file_tuple = (
+                    source_file["name"],
+                    source_file["size"],
+                    source_file["size-r"],
+                    source_file["size-rd"],
+                )
+                source_files_set.add(file_tuple)
+            source_fileset_with_files["rom"] = source_files_set
+
+            matched_files = get_file_status(
+                target_id, source_fileset_with_files, connection
+            )
+
             def highlight_differences(source, target):
                 diff = difflib.ndiff(source, target)
                 source_highlighted = ""
@@ -784,16 +844,30 @@ def confirm_merge(id):
 
             if source_files:
                 for file in source_files:
+                    checksum = file["checksum"]
                     checksize = file["checksize"]
-                    if checksize != "1048576" and file["checksize"] == "1M":
+                    checktype = file["checktype"]
+                    size = file["size"]
+                    size_r = file["size-r"]
+                    size_rd = file["size-rd"]
+                    if file["checksum"] is None:
+                        checksum = ""
+                        checksize = ""
+                        checktype = ""
+
+                    if checksize != "1048576" and checksize == "1M":
                         checksize = "1048576"
-                    if checksize != "1048576" and int(file["checksize"]) == 0:
+                    if (
+                        checksize != ""
+                        and checksize != "1048576"
+                        and int(checksize) == 0
+                    ):
                         checksize = "full"
-                    check = file["checktype"] + "-" + checksize
-                    source_files_map[file["name"].lower()][check] = file["checksum"]
-                    source_files_map[file["name"].lower()]["size"] = file["size"]
-                    source_files_map[file["name"].lower()]["size-r"] = file["size-r"]
-                    source_files_map[file["name"].lower()]["size-rd"] = file["size-rd"]
+                    check = checktype + "-" + checksize
+                    source_files_map[file["name"].lower()][check] = checksum
+                    source_files_map[file["name"].lower()]["size"] = size
+                    source_files_map[file["name"].lower()]["size-r"] = size_r
+                    source_files_map[file["name"].lower()]["size-rd"] = size_rd
 
             if target_files:
                 for file in target_files:
@@ -807,49 +881,72 @@ def confirm_merge(id):
                     target_files_map[file["name"].lower()]["size"] = file["size"]
                     target_files_map[file["name"].lower()]["size-r"] = file["size-r"]
                     target_files_map[file["name"].lower()]["size-rd"] = file["size-rd"]
-                    print(file)
                     if file["detection"] == 1:
                         detection_files_set.add(file["name"].lower())
 
-            print(detection_files_set)
+            html += """<tr><th>Files</th><td colspan='2'><label><input type="checkbox" id="toggle-unmatched"> Show Unmatched Files</label></td></tr>"""
 
-            all_filenames = sorted(
-                set(source_files_map.keys()) | set(target_files_map.keys())
-            )
-            html += "<tr><th>Files</th></tr>"
-            for filename in all_filenames:
-                source_dict = source_files_map.get(filename, {})
-                target_dict = target_files_map.get(filename, {})
+            all_source_unmatched_filenames = sorted(set(source_files_map.keys()))
+            all_target_unmatched_filenames = sorted(set(target_files_map.keys()))
+
+            for matched_target_filename, matched_source_filename in matched_files:
+                if matched_source_filename.lower() in all_source_unmatched_filenames:
+                    all_source_unmatched_filenames.remove(
+                        matched_source_filename.lower()
+                    )
+                if matched_target_filename.lower() in all_target_unmatched_filenames:
+                    all_target_unmatched_filenames.remove(
+                        matched_target_filename.lower()
+                    )
+                source_dict = source_files_map.get(matched_source_filename.lower(), {})
+                target_dict = target_files_map.get(matched_target_filename.lower(), {})
 
-                html += f"<tr><th>{filename}</th><th>Source File</th><th>Target File</th></tr>"
+                # html += f"""<tr><th>{matched_source_filename}</th><th>Source File</th><th>Target File</th></tr>"""
 
                 keys = sorted(set(source_dict.keys()) | set(target_dict.keys()))
 
+                group_id = f"group_{matched_source_filename.lower().replace('.', '_').replace('/', '_')}_{matched_target_filename.lower().replace('.', '_').replace('/', '_')}"
+                html += f"""<tr>
+                    <td colspan='3'>
+                        <label>
+                            <input type="checkbox" onclick="toggleGroup('{group_id}')">
+                            Show all fields for <strong>{matched_source_filename}</strong>
+                        </label>
+                    </td>
+                </tr>"""
+
                 for key in keys:
                     source_value = str(source_dict.get(key, ""))
                     target_value = str(target_dict.get(key, ""))
 
                     source_checked = "checked" if key in source_dict else ""
-                    source_checksum = source_files_map[filename.lower()].get(key, "")
-                    target_checksum = target_files_map[filename.lower()].get(key, "")
+                    source_checksum = source_files_map[
+                        matched_source_filename.lower()
+                    ].get(key, "")
+                    target_checksum = target_files_map[
+                        matched_target_filename.lower()
+                    ].get(key, "")
 
                     source_val = html_lib.escape(
                         json.dumps(
                             {
                                 "side": "source",
-                                "filename": filename,
+                                "filename": matched_source_filename,
                                 "prop": key,
                                 "value": source_checksum,
                                 "detection": "0",
                             }
                         )
                     )
-                    if filename in detection_files_set:
+                    if (
+                        os.path.basename(matched_source_filename).lower()
+                        in detection_files_set
+                    ):
                         target_val = html_lib.escape(
                             json.dumps(
                                 {
                                     "side": "target",
-                                    "filename": filename,
+                                    "filename": matched_source_filename,
                                     "prop": key,
                                     "value": target_checksum,
                                     "detection": "1",
@@ -861,46 +958,151 @@ def confirm_merge(id):
                             json.dumps(
                                 {
                                     "side": "target",
-                                    "filename": filename,
+                                    "filename": matched_target_filename,
                                     "prop": key,
                                     "value": target_checksum,
                                     "detection": "0",
                                 }
                             )
                         )
-
                     if source_value != target_value:
                         source_highlighted, target_highlighted = highlight_differences(
                             source_value, target_value
                         )
-
-                        html += f"""
-                        <tr>
-                            <td>{key}</td>
-                            <td>
-                                <input type="checkbox" name="options[]" value="{source_val}" {source_checked}>
-                                {source_highlighted}
-                            </td>
-                            <td>
-                                <input type="checkbox" name="options[]" value="{target_val}">
-                                {target_highlighted}
-                            </td>
-                        </tr>
-                        """
+                        if key == "md5-full":
+                            html += f"""<tr>
+                                <td>{key}</td>
+                                <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_highlighted}</td>
+                                <td><input type="checkbox" name="options[]" value="{target_val}">{target_highlighted}</td>
+                            </tr>"""
+                        else:
+                            html += f"""<tbody class="toggle-details" id="{group_id}" style="display: none;">
+                                <tr>
+                                    <td>{key}</td>
+                                    <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_highlighted}</td>
+                                    <td><input type="checkbox" name="options[]" value="{target_val}">{target_highlighted}</td>
+                                </tr>
+                            </tbody>"""
                     else:
-                        html += f"""
-                        <tr>
-                            <td>{key}</td>
-                            <td>
-                                <input type="checkbox" name="options[]" value="{source_val}" {source_checked}>
-                                {source_value}
-                            </td>
-                            <td>
-                                <input type="checkbox" name="options[]" value="{target_val}">
-                                {target_value}
-                            </td>
-                        </tr>
-                        """
+                        if key == "md5-full":
+                            html += f"""<tr>
+                                <td>{key}</td>
+                                <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_value}</td>
+                                <td><input type="checkbox" name="options[]" value="{target_val}">{target_value}</td>
+                            </tr>"""
+                        else:
+                            html += f"""<tbody class="toggle-details" id="{group_id}" style="display: none;">
+                                <tr>
+                                    <td>{key}</td>
+                                    <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_value}</td>
+                                    <td><input type="checkbox" name="options[]" value="{target_val}">{target_value}</td>
+                                </tr>
+                            </tbody>"""
+
+            all_unmatched_filenames = [
+                all_target_unmatched_filenames,
+                all_source_unmatched_filenames,
+            ]
+
+            for unmatched_filenames in all_unmatched_filenames:
+                for filename in unmatched_filenames:
+                    source_dict = source_files_map.get(filename.lower(), {})
+                    target_dict = target_files_map.get(filename.lower(), {})
+
+                    keys = sorted(set(source_dict.keys()) | set(target_dict.keys()))
+                    group_id = (
+                        f"group_{filename.lower().replace('.', '_').replace('/', '_')}"
+                    )
+                    html += f"""<tr class="unmatched" style='display: none;'>
+                        <td colspan='3'>
+                            <label>
+                                <input type="checkbox" onclick="toggleGroup('{group_id}')">
+                                Show all fields for <strong>{filename}</strong>
+                            </label>
+                        </td>
+                    </tr>"""
+
+                    for key in keys:
+                        source_value = str(source_dict.get(key, ""))
+                        target_value = str(target_dict.get(key, ""))
+
+                        source_checked = "checked" if key in source_dict else ""
+                        source_checksum = source_files_map[filename.lower()].get(
+                            key, ""
+                        )
+                        target_checksum = target_files_map[filename.lower()].get(
+                            key, ""
+                        )
+
+                        source_val = html_lib.escape(
+                            json.dumps(
+                                {
+                                    "side": "source",
+                                    "filename": filename,
+                                    "prop": key,
+                                    "value": source_checksum,
+                                    "detection": "0",
+                                }
+                            )
+                        )
+                        if filename.lower() in detection_files_set:
+                            target_val = html_lib.escape(
+                                json.dumps(
+                                    {
+                                        "side": "target",
+                                        "filename": filename,
+                                        "prop": key,
+                                        "value": target_checksum,
+                                        "detection": "1",
+                                    }
+                                )
+                            )
+                        else:
+                            target_val = html_lib.escape(
+                                json.dumps(
+                                    {
+                                        "side": "target",
+                                        "filename": filename,
+                                        "prop": key,
+                                        "value": target_checksum,
+                                        "detection": "0",
+                                    }
+                                )
+                            )
+
+                        if source_value != target_value:
+                            source_highlighted, target_highlighted = (
+                                highlight_differences(source_value, target_value)
+                            )
+                            if key == "md5-full":
+                                html += f"""<tr class="unmatched" style='display: none;'">
+                                    <td>{key}</td>
+                                    <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_highlighted}</td>
+                                    <td><input type="checkbox" name="options[]" value="{target_val}">{target_highlighted}</td>
+                                </tr>"""
+                            else:
+                                html += f"""<tbody class="toggle-details" id="{group_id}"  style='display: none;'>
+                                    <tr>
+                                        <td>{key}</td>
+                                        <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_highlighted}</td>
+                                        <td><input type="checkbox" name="options[]" value="{target_val}">{target_highlighted}</td>
+                                    </tr>
+                                </tbody>"""
+                        else:
+                            if key == "md5-full":
+                                html += f"""<tr class="unmatched" style='display: none;'>
+                                    <td>{key}</td>
+                                    <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_value}</td>
+                                    <td><input type="checkbox" name="options[]" value="{target_val}">{target_value}</td>
+                                </tr>"""
+                            else:
+                                html += f"""<tbody class="toggle-details unmatched" id="{group_id}"  style='display: none;'>
+                                    <tr>
+                                        <td>{key}</td>
+                                        <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_value}</td>
+                                        <td><input type="checkbox" name="options[]" value="{target_val}">{target_value}</td>
+                                    </tr>
+                                </tbody>"""
 
             html += """
             </table>
@@ -912,6 +1114,22 @@ def confirm_merge(id):
                 <input type="submit" value="Cancel">
             </form>
             <script src="{{ url_for('static', filename='js/confirm_merge_form_handler.js') }}"></script>
+            <script>
+            document.getElementById("toggle-unmatched").addEventListener("change", function() {
+                const rows = document.querySelectorAll("tr.unmatched");
+                rows.forEach(row => {
+                    row.style.display = this.checked ? "" : "none";
+                });
+            });
+            </script>
+            <script>
+            function toggleGroup(groupId) {
+                const rows = document.querySelectorAll(`#${groupId}`);
+                rows.forEach(row => {
+                    row.style.display = (row.style.display === "none") ? "" : "none";
+                });
+            }
+            </script>
             </body>
             </html>
             """
@@ -1049,10 +1267,11 @@ def execute_merge(id):
                                 source_file_id,
                             ),
                         )
+                        filename = os.path.basename(filename).lower()
                         cursor.execute(
                             """SELECT f.id as file_id FROM file f
                                     JOIN fileset fs ON fs.id = f.fileset 
-                                    WHERE f.name = %s
+                                    WHERE REGEXP_REPLACE(f.name, '^.*[\\\\/]', '') = %s
                                     AND fs.id = %s""",
                             (filename, target_id),
                         )
@@ -1060,16 +1279,23 @@ def execute_merge(id):
                         cursor.execute(
                             "DELETE FROM file WHERE id = %s", (target_file_id,)
                         )
-                    for c in details["checksums"]:
-                        checksum = c["value"]
-                        check = c["check"]
-                        checksize, checktype, checksum = get_checksum_props(
-                            check, checksum
-                        )
-                        query = "INSERT INTO filechecksum (file, checksize, checktype, checksum) VALUES (%s, %s, %s, %s)"
-                        cursor.execute(
-                            query, (source_file_id, checksize, checktype, checksum)
-                        )
+
+                    check = ""
+                    checksize = ""
+                    checktype = ""
+                    checksum = ""
+
+                    if "checksums" in details:
+                        for c in details["checksums"]:
+                            checksum = c["value"]
+                            check = c["check"]
+                            checksize, checktype, checksum = get_checksum_props(
+                                check, checksum
+                            )
+                            query = "INSERT INTO filechecksum (file, checksize, checktype, checksum) VALUES (%s, %s, %s, %s)"
+                            cursor.execute(
+                                query, (source_file_id, checksize, checktype, checksum)
+                            )
 
                     cursor.execute(
                         "UPDATE file SET fileset = %s WHERE id = %s",

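get_file_status() pairs files purely on sizes plus the lower-cased basename: each dat file is indexed under a (size, size-r, size-rd, basename) key, and each candidate file first tries its exact key and then a fallback key with size -1. A self-contained sketch of that lookup, with hypothetical sample data and no database:

    import os

    def match_by_size(candidate_files, dat_files):
        """Pair files whose (size, size-r, size-rd, basename) keys agree,
        falling back to size == -1 when the plain size is unknown."""
        dat_names_by_key = {}
        for name, size, size_r, size_rd in dat_files:
            base = os.path.basename(name).lower()
            dat_names_by_key[(size, size_r, size_rd, base)] = name

        matched = []
        for name, size, size_r, size_rd in candidate_files:
            base = os.path.basename(name).lower()
            for key in ((size, size_r, size_rd, base), (-1, size_r, size_rd, base)):
                if key in dat_names_by_key:
                    matched.append((name, dat_names_by_key[key]))
                    break
        return matched

    # The dat entry lacks a plain size (-1); the candidate still pairs up
    # through the fallback key.
    print(match_by_size([("GAME/DATA.DAT", 100, 0, 0)], [("data.dat", -1, 0, 0)]))
    # -> [('GAME/DATA.DAT', 'data.dat')]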

Commit: 9d9e8e93448eec9415e381e2ce8971d1152e2a93
    https://github.com/scummvm/scummvm-sites/commit/9d9e8e93448eec9415e381e2ce8971d1152e2a93
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add check for matching files by missing size and hide merge button after clicked.

Changed paths:
    fileset.py


diff --git a/fileset.py b/fileset.py
index 8aa957f..cad1041 100644
--- a/fileset.py
+++ b/fileset.py
@@ -667,8 +667,11 @@ def get_file_status(candidate_fileset, fileset, conn):
             (name, size, size_r, size_rd) = file
             base_name = os.path.basename(normalised_path(name)).lower()
             key = (size, size_r, size_rd, base_name)
+            key2 = (-1, size_r, size_rd, base_name)
             dat_sizes.add(key)
+            dat_sizes.add(key2)
             dat_names_by_sizes[key] = name
+            dat_names_by_sizes[key2] = name
 
         matched_files = []
 
@@ -1108,13 +1111,23 @@ def confirm_merge(id):
             </table>
                 <input type="hidden" name="source_id" value="{{ source_fileset['id'] }}">
                 <input type="hidden" name="target_id" value="{{ target_fileset['id'] }}">
-                <button type="submit">Confirm Merge</button>
+                <button id="confirm_merge_submit" type="submit">Confirm Merge</button>
             </form>
+            <div id="merging-status" style="display: none; font-weight: bold; margin-top: 10px;">
+                Merging... Please wait.
+            </div>
             <form action="{{ url_for('fileset', id=id) }}">
-                <input type="submit" value="Cancel">
+                <input id="confirm_merge_cancel" type="submit" value="Cancel">
             </form>
             <script src="{{ url_for('static', filename='js/confirm_merge_form_handler.js') }}"></script>
             <script>
+            document.getElementById("confirm_merge_form").addEventListener("submit", function () {
+                document.getElementById("merging-status").style.display = "block";
+                document.getElementById("confirm_merge_submit").style.display = "none";
+                document.getElementById("confirm_merge_cancel").style.display = "none";
+            });
+            </script>
+            <script>
             document.getElementById("toggle-unmatched").addEventListener("change", function() {
                 const rows = document.querySelectorAll("tr.unmatched");
                 rows.forEach(row => {

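This commit closes the symmetric case: every dat file is now also registered under a second key whose plain size is -1, so the lookup succeeds from either direction when one side never recorded the size. A toy illustration with made-up values:

    dat_sizes = set()
    dat_names_by_sizes = {}

    for name, size, size_r, size_rd in [("data.dat", 100, 0, 0)]:
        base = name.lower()
        for key in ((size, size_r, size_rd, base), (-1, size_r, size_rd, base)):
            dat_sizes.add(key)
            dat_names_by_sizes[key] = name

    # A candidate stored with an unknown plain size (-1) now finds the entry.
    candidate_key = (-1, 0, 0, "data.dat")
    assert candidate_key in dat_sizes
    print(dat_names_by_sizes[candidate_key])  # data.dat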

Commit: 53543f1eb4d5e2a99723336bd908451b2e88d161
    https://github.com/scummvm/scummvm-sites/commit/53543f1eb4d5e2a99723336bd908451b2e88d161
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add configuration page with a feature to select items per page.

Changed paths:
  A templates/config.html
    fileset.py
    pagination.py


diff --git a/fileset.py b/fileset.py
index cad1041..4860506 100644
--- a/fileset.py
+++ b/fileset.py
@@ -6,7 +6,9 @@ from flask import (
     render_template_string,
     jsonify,
     render_template,
+    make_response,
 )
+
 import pymysql.cursors
 import json
 import html as html_lib
@@ -139,6 +141,7 @@ def fileset():
                     <a href="{{{{ url_for('ready_for_review') }}}}">Ready for review</a>
                     <a href="{{{{ url_for('fileset_search') }}}}">Fileset Search</a>
                     <a href="{{{{ url_for('logs') }}}}">Logs</a>
+                    <a href="{{{{ url_for('config') }}}}">Config</a>
                 </div>
             </nav>
             <h2 style="margin-top: 80px;"><u>Fileset: {id}</u></h2>
@@ -502,6 +505,7 @@ def merge_fileset(id):
                         <a href="{{{{ url_for('ready_for_review') }}}}">Ready for review</a>
                         <a href="{{{{ url_for('fileset_search') }}}}">Fileset Search</a>
                         <a href="{{{{ url_for('logs') }}}}">Logs</a>
+                        <a href="{{{{ url_for('config') }}}}">Config</a>
                     </div>
                 </nav>
                 <h2 style="margin-top: 80px;">Search Results for '{search_query}'</h2>
@@ -549,6 +553,7 @@ def merge_fileset(id):
             <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
             <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
             <a href="{{ url_for('logs') }}">Logs</a>
+            <a href="{{ url_for('config') }}">Config</a>
         </div>
     </nav>
     <h2 style="margin-top: 80px;">Search Fileset to Merge</h2>
@@ -616,6 +621,7 @@ def possible_merge_filesets(id):
                     <a href="{{{{ url_for('ready_for_review') }}}}">Ready for review</a>
                     <a href="{{{{ url_for('fileset_search') }}}}">Fileset Search</a>
                     <a href="{{{{ url_for('logs') }}}}">Logs</a>
+                    <a href="{{{{ url_for('config') }}}}">Config</a>
                 </div>
             </nav>
             <h2 style="margin-top: 80px;">Possible Merges for fileset-'{id}'</h2>
@@ -818,6 +824,7 @@ def confirm_merge(id):
                     <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
                     <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
                     <a href="{{ url_for('logs') }}">Logs</a>
+                    <a href="{{ url_for('config') }}">Config</a>
                 </div>
             </nav>
             <h2 style="margin-top: 80px;">Confirm Merge</h2>
@@ -1368,6 +1375,34 @@ def mark_as_full(id):
     return redirect(f"/fileset?id={id}")
 
 
+@app.route("/config", methods=["GET", "POST"])
+def config():
+    """
+    Stores the user configuration (items per page) in a cookie.
+    """
+    if request.method == "POST":
+        items_per_page = request.form.get("items_per_page", "25")
+
+        try:
+            items_per_page_int = int(items_per_page)
+            if items_per_page_int < 1:
+                items_per_page = "1"
+        except ValueError:
+            items_per_page = "25"
+
+        resp = make_response(redirect(url_for("config")))
+        resp.set_cookie("items_per_page", items_per_page, max_age=365 * 24 * 60 * 60)
+        return resp
+
+    items_per_page = int(request.cookies.get("items_per_page", "25"))
+
+    return render_template("config.html", items_per_page=items_per_page)
+
+
+def get_items_per_page():
+    return int(request.cookies.get("items_per_page", "25"))
+
+
 @app.route("/validate", methods=["POST"])
 def validate():
     error_codes = {
@@ -1506,8 +1541,19 @@ def games_list():
         "engine.id": "game.engine",
         "game.id": "fileset.game",
     }
+
+    items_per_page = get_items_per_page()
+
     return render_template_string(
-        create_page(filename, 25, records_table, select_query, order, filters, mapping)
+        create_page(
+            filename,
+            items_per_page,
+            records_table,
+            select_query,
+            order,
+            filters,
+            mapping,
+        )
     )
 
 
@@ -1524,8 +1570,11 @@ def logs():
         "user": "log",
         "text": "log",
     }
+    items_per_page = get_items_per_page()
     return render_template_string(
-        create_page(filename, 25, records_table, select_query, order, filters)
+        create_page(
+            filename, items_per_page, records_table, select_query, order, filters
+        )
     )
 
 
@@ -1558,8 +1607,17 @@ def fileset_search():
         "engine.id": "game.engine",
         "fileset.id": "transactions.fileset",
     }
+    items_per_page = get_items_per_page()
     return render_template_string(
-        create_page(filename, 25, records_table, select_query, order, filters, mapping)
+        create_page(
+            filename,
+            items_per_page,
+            records_table,
+            select_query,
+            order,
+            filters,
+            mapping,
+        )
     )
 
 
diff --git a/pagination.py b/pagination.py
index 0890580..22f7930 100644
--- a/pagination.py
+++ b/pagination.py
@@ -155,6 +155,7 @@ def create_page(
             <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
             <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
             <a href="{{ url_for('logs') }}">Logs</a>
+            <a href="{{ url_for('config') }}">Config</a>
         </div>
     </nav>
 <form id='filters-form' method='GET' onsubmit='remove_empty_inputs()'>
diff --git a/templates/config.html b/templates/config.html
new file mode 100644
index 0000000..a578779
--- /dev/null
+++ b/templates/config.html
@@ -0,0 +1,149 @@
+<!DOCTYPE html>
+<html>
+
+<head>
+    <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
+    <style>
+        body {
+            margin: 0;
+            font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
+        }
+
+        .content {
+            height: 100vh;
+            display: flex;
+            flex-direction: column;
+            justify-content: center;
+            align-items: center;
+        }
+
+        .title {
+            margin-top: 8vh;
+            text-align: center;
+            background-color: #ffffff;
+            color: #000000;
+            padding: 10px;
+            font-size: 50px;
+            align-self: flex-start;
+            margin-left: 2vh;
+        }
+
+        .main {
+            height: 90vh;
+            width: 100vw;
+        }
+
+        .config-section {
+            margin-bottom: 30px;
+            padding: 20px;
+            border: 1px solid #ddd;
+            border-radius: 8px;
+            background-color: #f9f9f9;
+            max-width: 600px;
+            margin-left: auto;
+            margin-right: auto;
+        }
+
+        .config-item {
+            display: flex;
+            align-items: center;
+            margin-bottom: 15px;
+        }
+
+        .config-item label {
+            flex: 1;
+            margin-right: 15px;
+            font-weight: 500;
+        }
+
+        .current-value {
+            font-style: italic;
+            color: #666;
+            font-size: 12px;
+        }
+
+        .config-item input,
+        .config-item select {
+            padding: 8px 12px;
+            border: 1px solid #ccc;
+            border-radius: 4px;
+            font-size: 14px;
+            min-width: 120px;
+            margin-right: 2vw;
+        }
+
+        .success-message {
+            background-color: #d4edda;
+            color: #155724;
+            padding: 10px;
+            border: 1px solid #c3e6cb;
+            border-radius: 4px;
+            margin-bottom: 20px;
+            text-align: center;
+        }
+
+        @media (max-width: 768px) {
+            .config {
+                font-size: 40px;
+            }
+        }
+
+        @media (max-width: 480px) {
+            .config {
+                font-size: 32px;
+            }
+
+            nav {
+                padding: 10px;
+            }
+
+            .nav-buttons a {
+                margin-bottom: 5px;
+                display: block;
+                text-align: center;
+            }
+        }
+    </style>
+</head>
+
+<body>
+    <nav>
+        <div class="logo">
+            <a href="{{ url_for('home') }}">
+                <img src="{{ url_for('static', filename='integrity_service_logo_256.png') }}" alt="Logo">
+            </a>
+        </div>
+        <div class="nav-buttons">
+            <a href="{{ url_for('user_games_list') }}">User Games List</a>
+            <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
+            <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+            <a href="{{ url_for('logs') }}">Logs</a>
+            <a href="{{ url_for('config') }}">Config</a>
+        </div>
+    </nav>
+    <div class="content">
+        <h1 class="title">User Configurations</h1>
+        <div class="main">
+            <form method="POST" action="{{ url_for('config') }}">
+                <div class="config-section">
+                    <div class="config-item">
+                        <label for="items_per_page">Number of items per page:</label>
+                        <input type="number" name="items_per_page" id="items_per_page" value="{{ items_per_page }}"
+                            min="1">
+                        <div class="current-value">Current: {{ items_per_page }}</div>
+                    </div>
+                    <div style="text-align: center; padding: 20px;">
+                        <button type="submit">Save Configuration</button>
+                    </div>
+                </div>
+            </form>
+        </div>
+    </div>
+    <script>
+        document.getElementById('items_per_page').addEventListener('input', function () {
+            const value = parseInt(this.value);
+            if (value < 1) this.value = 1;
+        });
+    </script>
+</body>
+
+</html>

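Persisting the page size in a cookie keeps the preference per browser without touching the database; each list view then reads it back on every request. A stripped-down version of the round trip (a sketch with the same route name and defaults as the commit, not the production code):

    from flask import Flask, make_response, redirect, request

    app = Flask(__name__)

    @app.route("/config", methods=["GET", "POST"])
    def config():
        if request.method == "POST":
            # Clamp bad input to a sane value before storing it.
            try:
                n = max(1, int(request.form.get("items_per_page", "25")))
            except ValueError:
                n = 25
            resp = make_response(redirect("/config"))
            resp.set_cookie("items_per_page", str(n), max_age=365 * 24 * 60 * 60)
            return resp
        return f"items_per_page = {request.cookies.get('items_per_page', '25')}"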

Commit: ba40176a6f4001d9c545d6b99691a4bb61720524
    https://github.com/scummvm/scummvm-sites/commit/ba40176a6f4001d9c545d6b99691a4bb61720524
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add user details in manual merge log.

Changed paths:
    fileset.py


diff --git a/fileset.py b/fileset.py
index 4860506..bc7fa46 100644
--- a/fileset.py
+++ b/fileset.py
@@ -13,6 +13,7 @@ import pymysql.cursors
 import json
 import html as html_lib
 import os
+import getpass
 from pagination import create_page
 import difflib
 from db_functions import (
@@ -1340,7 +1341,8 @@ def execute_merge(id):
 
             delete_original_fileset(source_id, connection)
             category_text = "Manually Merged"
-            log_text = f"Manually merged Fileset:{source_id} with Fileset:{target_id}."
+            user = f"cli:{getpass.getuser()}"
+            log_text = f"Manually merged Fileset:{source_id} with Fileset:{target_id} by user: {user}."
             create_log(category_text, "Moderator", log_text, connection)
 
             query = """

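Note that getpass.getuser() reports the account the process runs under (for a deployed web service, typically something like www-data), not a logged-in website user; the cli: prefix makes that origin explicit in the log line. In isolation:

    import getpass

    user = f"cli:{getpass.getuser()}"
    print(f"Manually merged Fileset:11 with Fileset:22 by user: {user}.")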

Commit: fb13d68964535ea1ea0351399f108201ce88e2d1
    https://github.com/scummvm/scummvm-sites/commit/fb13d68964535ea1ea0351399f108201ce88e2d1
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add config page url in the homepage.

Changed paths:
    templates/home.html


diff --git a/templates/home.html b/templates/home.html
index 458f5fb..c41cda0 100644
--- a/templates/home.html
+++ b/templates/home.html
@@ -90,6 +90,7 @@
             <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
             <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
             <a href="{{ url_for('logs') }}">Logs</a>
+            <a href="{{ url_for('config') }}">Config</a>
         </div>
         <div class="dev">
             <form action="{{ url_for('clear_database') }}" method="POST">


Commit: c742c0f4a46258d0a4b3cf37f63948324f0df9dc
    https://github.com/scummvm/scummvm-sites/commit/c742c0f4a46258d0a4b3cf37f63948324f0df9dc
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add user.dat processing logic.

Changed paths:
    db_functions.py
    fileset.py
    schema.py


diff --git a/db_functions.py b/db_functions.py
index 9b385eb..e478dbb 100644
--- a/db_functions.py
+++ b/db_functions.py
@@ -175,9 +175,19 @@ def insert_fileset(
         cursor.execute("SELECT @fileset_last")
         fileset_last = cursor.fetchone()["@fileset_last"]
 
-    log_text = f"Created Fileset:{fileset_last}, {log_text}"
-    if src == "user":
-        log_text = f"Created Fileset:{fileset_last}, from user: IP {ip}."
+        log_text = f"Created Fileset:{fileset_last}, {log_text}"
+        if src == "user":
+            query = """
+                INSERT INTO queue (time, fileset, ip)
+                VALUES (FROM_UNIXTIME(@fileset_time_last), %s, %s)
+            """
+            cursor.execute(query, (fileset_id, ip))
+            cursor.execute(
+                "UPDATE fileset SET user_count = COALESCE(user_count, 0) + 1 WHERE id = %s",
+                (fileset_id,),
+            )
+            log_text = f"Created Fileset:{fileset_last}, from user: IP {ip}."
 
     user = f"cli:{getpass.getuser()}" if username is None else username
     if not skiplog and detection:
@@ -698,6 +708,9 @@ def scan_process(
 
     id_to_fileset_mapping = defaultdict(dict)
 
+    # set of filesets whose files got updated
+    filesets_check_for_full = set()
+
     fileset_count = 0
     for fileset in game_data:
         console_log_file_update(fileset_count)
@@ -722,18 +735,19 @@ def scan_process(
 
         id_to_fileset_mapping[fileset_id] = fileset
 
-        # set of filesets whose files got updated
-        filesets_check_for_full = set()
+        possible_full_filesets = set()
 
         for rom in fileset["rom"]:
-            pre_update_files(rom, filesets_check_for_full, transaction_id, conn)
+            pre_update_files(rom, transaction_id, conn, possible_full_filesets)
+
+        filesets_check_for_full.update(possible_full_filesets)
         fileset_count += 1
 
     fileset_count = 0
     for fileset_id, fileset in id_to_fileset_mapping.items():
         console_log_matching(fileset_count)
-        candidate_filesets = scan_filter_candidate_filesets(
-            fileset_id, fileset, transaction_id, conn
+        candidate_filesets = filter_candidate_filesets(
+            fileset["rom"], transaction_id, conn
         )
 
         if len(candidate_filesets) == 0:
@@ -773,6 +787,9 @@ def scan_process(
         )
         fileset_count += 1
 
+    # If the pre-update of files completed any partial fileset, promote its status to full
+    update_status_for_partial_filesets(list(filesets_check_for_full), conn)
+
     # Final log
     with conn.cursor() as cursor:
         cursor.execute(
@@ -789,10 +806,12 @@ def scan_process(
         create_log(category_text, user, log_text, conn)
 
 
-def pre_update_files(rom, filesets_check_for_full, transaction_id, conn):
+def pre_update_files(rom, transaction_id, conn, filesets_check_for_full=None):
     """
     Updates all the checksums for the files matching by a checksum and size.
     """
+    if filesets_check_for_full is None:
+        filesets_check_for_full = set()
     with conn.cursor() as cursor:
         checksums = defaultdict(str)
         for key in rom:
@@ -971,8 +990,8 @@ def scan_perform_match(
 
             # Drop the fileset, note down the file differences
             elif status == "full":
-                (unmatched_candidate_files, unmatched_scan_files) = get_unmatched_files(
-                    matched_fileset_id, fileset, conn
+                (_, unmatched_candidate_files, unmatched_scan_files) = (
+                    get_unmatched_files(matched_fileset_id, fileset, conn)
                 )
                 fully_matched = (
                     True
@@ -984,8 +1003,7 @@ def scan_perform_match(
                     match_with_full_fileset += 1
                 else:
                     mismatch_with_full_fileset += 1
-                log_scan_match_with_full(
-                    fileset_id,
+                log_match_with_full(
                     matched_fileset_id,
                     unmatched_candidate_files,
                     unmatched_scan_files,
@@ -1149,9 +1167,10 @@ def total_fileset_files(fileset):
     return len(fileset["rom"])
 
 
-def scan_filter_candidate_filesets(fileset_id, fileset, transaction_id, conn):
+def filter_candidate_filesets(roms, transaction_id, conn):
     """
     Returns a list of candidate filesets that can be merged.
+    Used for both scan.dat and user.dat.
     Performs early filtering in SQL (by name, size) and then
     applies checksum filtering and max-match filtering in Python.
     """
@@ -1179,9 +1198,9 @@ def scan_filter_candidate_filesets(fileset_id, fileset, transaction_id, conn):
             {
                 "file_id": row["file_id"],
                 "name": os.path.basename(normalised_path(row["name"])).lower(),
-                "size": row["size"],
-                "size-r": row["size_r"],
-                "size-rd": row["size_rd"],
+                "size": row["size"] if "size" in row else 0,
+                "size-r": row["size_r"] if "size-r" in row else 0,
+                "size-rd": row["size_rd"] if "size-rd" in row else 0,
             }
         )
     for id, files in candidate_map.items():
@@ -1189,7 +1208,7 @@ def scan_filter_candidate_filesets(fileset_id, fileset, transaction_id, conn):
 
     set_checksums = set()
     set_file_name_size = set()
-    for file in fileset["rom"]:
+    for file in roms:
         name = os.path.basename(normalised_path(file["name"]))
         for key in file:
             if key.startswith("md5"):
@@ -1284,7 +1303,7 @@ def scan_filter_candidate_filesets(fileset_id, fileset, transaction_id, conn):
 
     matched_candidates = []
     for candidate in candidates:
-        if is_full_detection_checksum_match(candidate, fileset, conn):
+        if is_full_detection_checksum_match(candidate, roms, conn):
             matched_candidates.append(candidate)
 
     if len(matched_candidates) != 0:
@@ -1343,12 +1362,17 @@ def get_unmatched_files(candidate_fileset, fileset, conn):
             for key in dat_checksums
             if key not in matched_dat_pairs
         }
+        matched_dat_files = {
+            dat_names_by_checksum[key]
+            for key in dat_checksums
+            if key in matched_dat_pairs
+        }
         unmatched_dat_files = list(unmatched_dat_files)
 
-        return (unmatched_candidate_files, unmatched_dat_files)
+        return (matched_dat_files, unmatched_candidate_files, unmatched_dat_files)
 
 
-def is_full_detection_checksum_match(candidate_fileset, fileset, conn):
+def is_full_detection_checksum_match(candidate_fileset, files, conn):
     """
     Return type - Boolean
     Checks if all the detection files in the candidate fileset have corresponding checksums matching with scan.
@@ -1367,7 +1391,7 @@ def is_full_detection_checksum_match(candidate_fileset, fileset, conn):
 
         # set of (checksum, filename)
         scan_checksums = set()
-        for file in fileset["rom"]:
+        for file in files:
             for key in file:
                 if key.startswith("md5"):
                     name = os.path.basename(normalised_path(file["name"]))
@@ -1467,7 +1491,7 @@ def set_process(
         set_dat_metadata = ""
         for meta in fileset:
             if meta != "rom":
-                set_dat_metadata += meta + " = " + fileset[meta] + "  ,  "
+                set_dat_metadata += meta + ": " + fileset[meta] + "  "
 
         (fileset_id, existing) = insert_new_fileset(
             fileset,
@@ -1750,8 +1774,8 @@ def set_perform_match(
                     matched_fileset_id, manual_merge_map, set_to_candidate_dict, conn
                 )
             elif status == "partial" or status == "full":
-                (unmatched_candidate_files, unmatched_dat_files) = get_unmatched_files(
-                    matched_fileset_id, fileset, conn
+                (_, unmatched_candidate_files, unmatched_dat_files) = (
+                    get_unmatched_files(matched_fileset_id, fileset, conn)
                 )
                 is_match = (
                     True
@@ -1890,8 +1914,8 @@ def add_manual_merge(
                     (%s, %s)
                 """
             cursor.execute(query, (child_fileset, parent_fileset))
-
-    create_log(category_text, user, log_text, conn)
+    if category_text and log_text:
+        create_log(category_text, user, log_text, conn)
     if print_text:
         print(print_text)
 
@@ -2057,7 +2081,7 @@ def set_filter_candidate_filesets(
 
     matched_candidates = []
     for candidate in candidates:
-        if is_full_detection_checksum_match(candidate, fileset, conn):
+        if is_full_detection_checksum_match(candidate, fileset["rom"], conn):
             matched_candidates.append(candidate)
 
     if len(matched_candidates) != 0:
@@ -2341,8 +2365,7 @@ def log_matched_fileset(src, fileset_last, fileset_id, state, user, conn):
     update_history(fileset_last, fileset_id, conn, log_last)
 
 
-def log_scan_match_with_full(
-    fileset_last,
+def log_match_with_full(
     candidate_id,
     unmatched_candidate_files,
     unmatched_scan_files,
@@ -2362,6 +2385,22 @@ def log_scan_match_with_full(
     create_log(category_text, user, log_text, conn)
 
 
+def log_user_match_with_full(
+    candidate_id,
+    unmatched_full_files,
+    unmatched_user_files,
+    matched_user_files,
+    fully_matched,
+    user,
+    conn,
+):
+    category_text = "User fileset mismatch"
+    if fully_matched:
+        category_text = "User fileset matched"
+    log_text = f"""Candidate Full Fileset:{candidate_id}. Total matched user files = {len(matched_user_files)}. Missing/mismatch Files = {len(unmatched_full_files)}. Unknown Files = {len(unmatched_user_files)}. List of Missing/mismatch files : {", ".join(scan_file for scan_file in unmatched_full_files)}, List of unknown files : {", ".join(scan_file for scan_file in unmatched_user_files)}"""
+    create_log(category_text, user, log_text, conn)
+
+
 def finalize_fileset_insertion(
     conn, transaction_id, src, filepath, author, version, source_status, user
 ):
@@ -2377,6 +2416,90 @@ def finalize_fileset_insertion(
             create_log(category_text, user, log_text, conn)
 
 
+def user_perform_match(
+    fileset,
+    src,
+    user,
+    candidate_filesets,
+    game_metadata,
+    transaction_id,
+    conn,
+    ip,
+):
+    with conn.cursor() as cursor:
+        single_candidate_id = candidate_filesets[0]
+        cursor.execute(
+            "SELECT status FROM fileset WHERE id = %s", (single_candidate_id,)
+        )
+        status = cursor.fetchone()["status"]
+        if len(candidate_filesets) == 1 and status == "full":
+            # Checks how many files match
+            (matched_dat_files, unmatched_full_files, unmatched_user_files) = (
+                get_unmatched_files(single_candidate_id, fileset, conn)
+            )
+            return (
+                "full",
+                -1,
+                single_candidate_id,
+                matched_dat_files,
+                unmatched_full_files,
+                unmatched_user_files,
+            )
+        # Includes cases for
+        # - single candidate with detection or partial status
+        # - multiple candidates
+        else:
+            # Create a new fileset and add links to candidates
+            fileset_id = create_user_fileset(
+                fileset, game_metadata, src, transaction_id, user, conn, ip
+            )
+            if fileset_id != -1:
+                add_manual_merge(
+                    candidate_filesets,
+                    fileset_id,
+                    None,
+                    None,
+                    user,
+                    conn,
+                )
+            return ("multiple", fileset_id, -1, [], [], [])
+
+
+def create_user_fileset(fileset, game_metadata, src, transaction_id, user, conn, ip):
+    with conn.cursor() as cursor:
+        key = calc_key(fileset)
+        try:
+            engine_name = ""
+            engineid = game_metadata["engineid"]
+            title = ""
+            gameid = game_metadata["gameid"]
+            extra = game_metadata.get("extra", "")
+            platform = game_metadata.get("platform", "")
+            lang = game_metadata.get("language", "")
+        except KeyError as e:
+            print(f"Missing key in metadata: {e}")
+            return -1  # signal failure; the caller checks for -1
+
+        (fileset_id, _) = insert_fileset(
+            src, False, key, None, transaction_id, None, conn, ip=ip
+        )
+
+        insert_game(engine_name, engineid, title, gameid, extra, platform, lang, conn)
+        if fileset_id:
+            for file in fileset["rom"]:
+                insert_file(file, False, src, conn)
+                file_id = None
+                with conn.cursor() as cursor:
+                    cursor.execute("SELECT @file_last AS file_id")
+                    file_id = cursor.fetchone()["file_id"]
+                for key, value in file.items():
+                    if key not in ["name", "size", "size-r", "size-rd"]:
+                        insert_filechecksum(file, key, file_id, conn)
+
+        return fileset_id
+
+
 def user_integrity_check(data, ip, game_metadata=None):
     src = "user"
     source_status = src
@@ -2386,8 +2509,8 @@ def user_integrity_check(data, ip, game_metadata=None):
         new_file = {
             "name": file["name"],
             "size": file["size"],
-            "size-r": file["size-r"],
-            "size-rd": file["size-rd"],
+            "size-r": file["size-r"] if "size-r" in file else 0,
+            "size-rd": file["size-rd"] if "size-rd" in file else 0,
         }
         for checksum in file["checksums"]:
             checksum_type = checksum["type"]
@@ -2409,7 +2532,10 @@ def user_integrity_check(data, ip, game_metadata=None):
     try:
         with conn.cursor() as cursor:
             cursor.execute("SELECT MAX(`transaction`) FROM transactions")
-            transaction_id = cursor.fetchone()["MAX(`transaction`)"] + 1
+            transaction_id = cursor.fetchone()["MAX(`transaction`)"]
+            if transaction_id is None:
+                transaction_id = 0
+            transaction_id += 1
 
             category_text = f"Uploaded from {src}"
             log_text = f"Started loading file, State {source_status}. Transaction: {transaction_id}"
@@ -2418,128 +2544,117 @@ def user_integrity_check(data, ip, game_metadata=None):
 
             create_log(category_text, user, log_text, conn)
 
-            matched_map = find_matching_filesets(data, conn, src)
-
-            # show matched, missing, extra
-            extra_map = defaultdict(list)
-            missing_map = defaultdict(list)
-            extra_set = set()
-            missing_set = set()
+            # Check if the key already exists in the db
+            query = """
+                SELECT id
+                FROM fileset
+                WHERE `key` = %s
+                AND (status = 'user' OR status = 'ReadyForReview')
+            """
+            cursor.execute(query, (key,))
+            existing_entry = cursor.fetchone()
+            if existing_entry is not None:
+                match_type = "no_candidate"
+                existing_fileset_id = existing_entry["id"]
+                add_usercount(existing_fileset_id, ip, conn)
+                conn.commit()
+                return (match_type, existing_fileset_id, [], [], [])
+
+            candidate_filesets = filter_candidate_filesets(
+                data["rom"], transaction_id, conn
+            )
 
-            for fileset_id in matched_map.keys():
-                cursor.execute("SELECT * FROM file WHERE fileset = %s", (fileset_id,))
-                target_files = cursor.fetchall()
-                target_files_dict = {}
-                for target_file in target_files:
-                    cursor.execute(
-                        "SELECT * FROM filechecksum WHERE file = %s",
-                        (target_file["id"],),
-                    )
-                    target_checksums = cursor.fetchall()
-                    for checksum in target_checksums:
-                        target_files_dict[checksum["checksum"]] = target_file
-                        # target_files_dict[target_file['id']] = f"{checksum['checktype']}-{checksum['checksize']}"
-
-                # Collect all the checksums from data['files']
-                data_files_set = set()
-                for file in data["files"]:
-                    for checksum_info in file["checksums"]:
-                        checksum = checksum_info["checksum"]
-                        checktype = checksum_info["type"]
-                        checksize, checktype, checksum = get_checksum_props(
-                            checktype, checksum
-                        )
-                        data_files_set.add(checksum)
-
-                # Identify missing files
-                matched_names = set()
-                for checksum, target_file in target_files_dict.items():
-                    if checksum not in data_files_set:
-                        if target_file["name"] not in matched_names:
-                            missing_set.add(target_file["name"])
-                        else:
-                            missing_set.discard(target_file["name"])
-                    else:
-                        matched_names.add(target_file["name"])
-
-                for tar in missing_set:
-                    missing_map[fileset_id].append({"name": tar})
-
-                # Identify extra files
-                for file in data["files"]:
-                    file_exists = False
-                    for checksum_info in file["checksums"]:
-                        checksum = checksum_info["checksum"]
-                        checktype = checksum_info["type"]
-                        checksize, checktype, checksum = get_checksum_props(
-                            checktype, checksum
-                        )
-                        if checksum in target_files_dict and not file_exists:
-                            file_exists = True
-                    if not file_exists:
-                        extra_set.add(file["name"])
-
-                for extra in extra_set:
-                    extra_map[fileset_id].append({"name": extra})
-            if game_metadata:
-                platform = game_metadata["platform"]
-                lang = game_metadata["language"]
-                gameid = game_metadata["gameid"]
-                engineid = game_metadata["engineid"]
-                extra_info = game_metadata["extra"]
-                engine_name = " "
-                title = " "
-                insert_game(
-                    engine_name,
-                    engineid,
-                    title,
-                    gameid,
-                    extra_info,
-                    platform,
-                    lang,
+            if len(candidate_filesets) == 0:
+                (user_fileset_id, _) = insert_new_fileset(
+                    data,
                     conn,
+                    None,
+                    src,
+                    key,
+                    None,
+                    transaction_id,
+                    log_text,
+                    user,
+                    ip=ip,
                 )
-
-            # handle different scenarios
-            if len(matched_map) == 0:
-                insert_new_fileset(
-                    data, conn, None, src, key, None, transaction_id, log_text, user, ip
+                match_type = "no_candidate"
+                category_text = "New User Fileset"
+                engineid = (
+                    game_metadata["engineid"] if "engineid" in game_metadata else ""
                 )
-                return matched_map, missing_map, extra_map
-
-            matched_list = sorted(
-                matched_map.items(), key=lambda x: len(x[1]), reverse=True
-            )
-            most_matched = matched_list[0]
-            matched_fileset_id, matched_count = most_matched[0], most_matched[1]
-            cursor.execute(
-                "SELECT status FROM fileset WHERE id = %s", (matched_fileset_id,)
-            )
-            status = cursor.fetchone()["status"]
-
-            cursor.execute(
-                "SELECT COUNT(file.id) FROM file WHERE fileset = %s",
-                (matched_fileset_id,),
-            )
-            count = cursor.fetchone()["COUNT(file.id)"]
-            if status == "full" and count == matched_count:
-                log_matched_fileset(
-                    src, matched_fileset_id, matched_fileset_id, "full", user, conn
+                gameid = game_metadata["gameid"] if "gameid" in game_metadata else ""
+                platform = (
+                    game_metadata["platform"] if "platform" in game_metadata else ""
                 )
-            # elif status == "partial" and count == matched_count:
-            #     populate_file(data, matched_fileset_id, conn, None, src)
-            #     log_matched_fileset(
-            #         src, matched_fileset_id, matched_fileset_id, "partial", user, conn
-            #     )
-            elif status == "user" and count == matched_count:
-                add_usercount(matched_fileset_id, conn)
-                log_matched_fileset(
-                    src, matched_fileset_id, matched_fileset_id, "user", user, conn
+                language = (
+                    game_metadata["language"] if "language" in game_metadata else ""
                 )
+                log_text = f"New User Fileset:{user_fileset_id} with no matching candidates. Engine: {engineid} Name: {gameid}-{platform}-{language}"
+                create_log(category_text, user, log_text, conn)
+                conn.commit()
+                return (match_type, user_fileset_id, [], [], [])
+
             else:
-                insert_new_fileset(
-                    data, conn, None, src, key, None, transaction_id, log_text, user, ip
+                (
+                    match_type,
+                    user_fileset_id,
+                    matched_id,
+                    matched_user_files,
+                    unmatched_full_files,
+                    unmatched_user_files,
+                ) = user_perform_match(
+                    data,
+                    src,
+                    user,
+                    candidate_filesets,
+                    game_metadata,
+                    transaction_id,
+                    conn,
+                    ip,
                 )
+                if match_type == "multiple":
+                    # Multiple candidates matched: queue the fileset for manual review and ask the user for more details.
+                    category_text = "User fileset - Multiple candidates"
+                    log_text = f"Possible new variant Fileset:{user_fileset_id} from user. Multiple candidate filesets: {', '.join(f'Fileset:{id}' for id in candidate_filesets)}"
+                    create_log(
+                        category_text,
+                        user,
+                        log_text,
+                        conn,
+                    )
+                    conn.commit()
+                    return (
+                        match_type,
+                        user_fileset_id,
+                        matched_user_files,
+                        unmatched_full_files,
+                        unmatched_user_files,
+                    )
+                if match_type == "full":
+                    fully_matched = (
+                        len(unmatched_full_files) == 0
+                        and len(unmatched_user_files) == 0
+                    )
+                    log_user_match_with_full(
+                        matched_id,
+                        unmatched_full_files,
+                        unmatched_user_files,
+                        matched_user_files,
+                        fully_matched,
+                        user,
+                        conn,
+                    )
+                    conn.commit()
+                    return (
+                        match_type,
+                        matched_id,
+                        matched_user_files,
+                        unmatched_full_files,
+                        unmatched_user_files,
+                    )
+
             finalize_fileset_insertion(
                 conn, transaction_id, src, None, user, 0, source_status, user
             )
@@ -2550,22 +2665,91 @@ def user_integrity_check(data, ip, game_metadata=None):
         category_text = f"Uploaded from {src}"
         log_text = f"Completed loading file, State {source_status}. Transaction: {transaction_id}"
         create_log(category_text, user, log_text, conn)
-        # conn.close()
-    return matched_map, missing_map, extra_map
+        conn.close()
 
 
-def add_usercount(fileset, conn):
+def update_status_for_partial_filesets(fileset_list, conn):
+    """
+    Updates the status of the given filesets from partial to full, if all of their files have full checksums.
+    """
     with conn.cursor() as cursor:
-        cursor.execute(
-            "UPDATE fileset SET user_count = COALESCE(user_count, 0) + 1 WHERE id = %s",
-            (fileset,),
-        )
-        cursor.execute("SELECT user_count from fileset WHERE id = %s", (fileset,))
-        count = cursor.fetchone()["user_count"]
-        if count >= 3:
+        for fileset_id in fileset_list:
+            cursor.execute("SELECT status FROM fileset WHERE id = %s", (fileset_id,))
+            result = cursor.fetchone()
+            status = result["status"]
+            if status == "partial":
+                query = """
+                    SELECT f.id as file_id
+                    FROM file f
+                    JOIN fileset fs ON fs.id = f.fileset
+                    WHERE fs.id = %s
+                """
+                cursor.execute(query, (fileset_id,))
+                result = cursor.fetchall()
+                all_files_complete = True
+                for file in result:
+                    file_id = file["file_id"]
+                    query = """
+                        SELECT COUNT(*) AS count
+                        FROM filechecksum fc
+                        WHERE fc.file = %s
+                    """
+                    cursor.execute(query, (file_id,))
+                    checksum_count = cursor.fetchone()["count"]
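+                    # A file with full checksums is expected to carry 4 checksum rows; fewer leaves the fileset partial.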
+                    if checksum_count != 4:
+                        all_files_complete = False
+                        break
+                if all_files_complete:
+                    query = """
+                        UPDATE fileset
+                        SET status = 'full'
+                        WHERE id = %s
+                    """
+                    cursor.execute(query, (fileset_id,))
+
+
+def add_usercount(fileset, ip, conn):
+    with conn.cursor() as cursor:
+        query = """
+            SELECT COUNT(*) AS count FROM queue
+            WHERE fileset = %s
+            AND ip = %s
+            LIMIT 1
+        """
+        cursor.execute(query, (fileset, ip))
+        duplicate = cursor.fetchone()["count"] != 0
+        if not duplicate:
             cursor.execute(
-                "UPDATE fileset SET status = 'ReadyForReview' WHERE id = %s", (fileset,)
+                "UPDATE fileset SET user_count = COALESCE(user_count, 0) + 1 WHERE id = %s",
+                (fileset,),
             )
+            query = """
+                INSERT INTO queue (time, fileset, ip)
+                VALUES (FROM_UNIXTIME(@fileset_time_last), %s, %s)
+            """
+            cursor.execute(query, (fileset, ip))
+            cursor.execute("SELECT user_count from fileset WHERE id = %s", (fileset,))
+            count = cursor.fetchone()["user_count"]
+            category_text = "Existing user fileset - different user."
+            log_text = f"User Fileset:{fileset} found. Match count: {count}."
+            create_log(category_text, ip, log_text, conn)
+            if count >= 3:
+                cursor.execute(
+                    "UPDATE fileset SET status = 'ReadyForReview' WHERE id = %s",
+                    (fileset,),
+                )
+                category_text = "Ready for Review"
+                log_text = (
+                    f"User Fileset:{fileset} ready for review. Match count: {count}."
+                )
+                create_log(category_text, ip, log_text, conn)
+        else:
+            cursor.execute("SELECT user_count from fileset WHERE id = %s", (fileset,))
+            count = cursor.fetchone()["user_count"]
+            category_text = "Existing user fileset - same user."
+            log_text = f"User Fileset:{fileset} exists. Match count: {count}."
+            create_log(category_text, ip, log_text, conn)
 
 
 def console_log(message):
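
As an aside, update_status_for_partial_filesets issues one COUNT query per file; a single aggregate query can answer the same question. A sketch under the same schema (not part of the commit, helper name hypothetical):

    def fileset_is_complete(fileset_id, conn):
        # True when every file in the fileset has all 4 checksum rows.
        query = """
            SELECT f.id
            FROM file f
            LEFT JOIN filechecksum fc ON fc.file = f.id
            WHERE f.fileset = %s
            GROUP BY f.id
            HAVING COUNT(fc.id) != 4
            LIMIT 1
        """
        with conn.cursor() as cursor:
            cursor.execute(query, (fileset_id,))
            # No row back means no file is missing checksums.
            return cursor.fetchone() is None
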
diff --git a/fileset.py b/fileset.py
index bc7fa46..5f2b2b0 100644
--- a/fileset.py
+++ b/fileset.py
@@ -1423,25 +1423,6 @@ def validate():
 
     json_response = {"error": error_codes["success"], "files": []}
 
-    # if not game_metadata:
-    #     if not json_object.get("files"):
-    #         json_response["error"] = error_codes["empty"]
-    #         del json_response["files"]
-    #         json_response["status"] = "empty_fileset"
-    #         return jsonify(json_response)
-
-    #     json_response["error"] = error_codes["no_metadata"]
-    #     del json_response["files"]
-    #     json_response["status"] = "no_metadata"
-
-    #     conn = db_connect()
-    #     try:
-    #         fileset_id = user_insert_fileset(json_object, ip, conn)
-    #     finally:
-    #         conn.close()
-    #     json_response["fileset"] = fileset_id
-    #     return jsonify(json_response)
-
     file_object = json_object["files"]
     if not file_object:
         json_response["error"] = error_codes["empty"]
@@ -1449,9 +1430,16 @@ def validate():
         return jsonify(json_response)
 
     try:
-        matched_map, missing_map, extra_map = user_integrity_check(
-            json_object, ip, game_metadata
-        )
+        # match_type - "no_candidate", "multiple" or "full"
+        # fileset_id - new user fileset id if match_type is "no_candidate" or "multiple"
+        #            - matched fileset id if match_type is "full"
+        (
+            match_type,
+            fileset_id,
+            matched_user_files,
+            unmatched_full_files,
+            unmatched_user_files,
+        ) = user_integrity_check(json_object, ip, game_metadata)
     except Exception as e:
         json_response["error"] = -1
         json_response["status"] = "processing_error"
@@ -1459,49 +1447,40 @@ def validate():
         json_response["message"] = str(e)
         print(f"Response: {json_response}")
         return jsonify(json_response)
-    print(f"Matched: {matched_map}")
-    print(len(matched_map))
-    if len(matched_map) == 0:
-        json_response["error"] = error_codes["unknown"]
-        json_response["status"] = "unknown_fileset"
-        json_response["fileset"] = "unknown_fileset"
+
+    # If no candidate was filtered out
+    if match_type == "no_candidate":
+        json_response["error"] = -1
+        json_response["status"] = "new_fileset"
+        json_response["fileset"] = str(fileset_id)
+        json_response["message"] = ""
+        print(f"Response: {json_response}")
         return jsonify(json_response)
-    matched_map = list(
-        sorted(matched_map.items(), key=lambda x: len(x[1]), reverse=True)
-    )[0]
-    matched_id = matched_map[0]
-    # find the same id in the missing_map and extra_map
-    for fileset_id, count in missing_map.items():
-        if fileset_id == matched_id:
-            missing_map = (fileset_id, count)
-            break
-
-    for fileset_id, count in extra_map.items():
-        if fileset_id == matched_id:
-            extra_map = (fileset_id, count)
-            break
-
-    for file in matched_map[1]:
-        for key, value in file.items():
-            if key == "name":
-                json_response["files"].append(
-                    {"status": "ok", "fileset_id": matched_id, "name": value}
-                )
-                break
-    for file in missing_map[1]:
-        for key, value in file.items():
-            if key == "name":
-                json_response["files"].append(
-                    {"status": "missing", "fileset_id": matched_id, "name": value}
-                )
-                break
-    for file in extra_map[1]:
-        for key, value in file.items():
-            if key == "name":
-                json_response["files"].append(
-                    {"status": "unknown_file", "fileset_id": matched_id, "name": value}
-                )
-                break
+
+    # If match was with multiple candidates
+    if match_type == "multiple":
+        json_response["error"] = -1
+        json_response["status"] = "possible_new_variant"
+        json_response["fileset"] = str(fileset_id)
+        json_response["message"] = ""
+        print(f"Response: {json_response}")
+        return jsonify(json_response)
+
+    # If match was with full
+    for file in matched_user_files:
+        json_response["files"].append(
+            {"status": "ok", "fileset_id": fileset_id, "name": file}
+        )
+    for file in unmatched_full_files:
+        json_response["files"].append(
+            {"status": "missing/unmatched", "fileset_id": fileset_id, "name": file}
+        )
+
+    for file in unmatched_user_files:
+        json_response["files"].append(
+            {"status": "unknown_file", "fileset_id": fileset_id, "name": file}
+        )
+
     print(f"Response: {json_response}")
     return jsonify(json_response)
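
For reference, a minimal client-side sketch of how the three match_type outcomes surface in this response (assumptions: a local dev server on port 5001 as in the old run config, and a simplified payload; the real endpoint may expect more fields):

    import requests

    # Hypothetical payload, trimmed to the fields the matching code reads.
    payload = {"files": [{"name": "DATA.DAT", "checksums": [{"type": "md5", "checksum": "..."}]}]}

    resp = requests.post("http://localhost:5001/validate", json=payload).json()

    if resp["status"] == "new_fileset":             # match_type was "no_candidate"
        print("stored as new user fileset", resp["fileset"])
    elif resp["status"] == "possible_new_variant":  # match_type was "multiple"
        print("queued for manual review as", resp["fileset"])
    else:                                           # full match: per-file statuses
        for f in resp.get("files", []):
            print(f["status"], f["name"])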
 
diff --git a/schema.py b/schema.py
index 776eada..5a42f29 100644
--- a/schema.py
+++ b/schema.py
@@ -102,10 +102,9 @@ def init_database():
             CREATE TABLE IF NOT EXISTS queue (
                 id INT AUTO_INCREMENT PRIMARY KEY,
                 time TIMESTAMP NOT NULL,
-                notes varchar(300),
+                notes varchar(300) DEFAULT '',
                 fileset INT,
-                userid INT NOT NULL,
-                commit VARCHAR(64) NOT NULL,
+                ip VARCHAR(100) NOT NULL,
                 FOREIGN KEY (fileset) REFERENCES fileset(id)
             )
         """,


Commit: 850ee1ff1b453ec2e108aeaff1352729ab34f2c1
    https://github.com/scummvm/scummvm-sites/commit/850ee1ff1b453ec2e108aeaff1352729ab34f2c1
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add comprehensive search criterias in log with OR, AND conditions.

Changed paths:
    fileset.py
    pagination.py
    templates/config.html
    templates/home.html


diff --git a/fileset.py b/fileset.py
index 5f2b2b0..4eae1dd 100644
--- a/fileset.py
+++ b/fileset.py
@@ -37,7 +37,7 @@ secret_key = os.urandom(24)
 
 @app.route("/")
 def index():
-    return redirect(url_for("logs"))
+    return redirect(url_for("logs", sort="id-desc"))
 
 
 @app.route("/home")
@@ -141,7 +141,7 @@ def fileset():
                     <a href="{{{{ url_for('user_games_list') }}}}">User Games List</a>
                     <a href="{{{{ url_for('ready_for_review') }}}}">Ready for review</a>
                     <a href="{{{{ url_for('fileset_search') }}}}">Fileset Search</a>
-                    <a href="{{{{ url_for('logs') }}}}">Logs</a>
+                    <a href="{{{{ url_for('logs', sort='id-desc') }}}}">Logs</a>
                     <a href="{{{{ url_for('config') }}}}">Config</a>
                 </div>
             </nav>
@@ -505,7 +505,7 @@ def merge_fileset(id):
                         <a href="{{{{ url_for('user_games_list') }}}}">User Games List</a>
                         <a href="{{{{ url_for('ready_for_review') }}}}">Ready for review</a>
                         <a href="{{{{ url_for('fileset_search') }}}}">Fileset Search</a>
-                        <a href="{{{{ url_for('logs') }}}}">Logs</a>
+                        <a href="{{{{ url_for('logs', sort='id-desc') }}}}">Logs</a>
                         <a href="{{{{ url_for('config') }}}}">Config</a>
                     </div>
                 </nav>
@@ -553,7 +553,7 @@ def merge_fileset(id):
             <a href="{{ url_for('user_games_list') }}">User Games List</a>
             <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
             <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
-            <a href="{{ url_for('logs') }}">Logs</a>
+            <a href="{{ url_for('logs', sort='id-desc') }}">Logs</a>
             <a href="{{ url_for('config') }}">Config</a>
         </div>
     </nav>
@@ -621,7 +621,7 @@ def possible_merge_filesets(id):
                     <a href="{{{{ url_for('user_games_list') }}}}">User Games List</a>
                     <a href="{{{{ url_for('ready_for_review') }}}}">Ready for review</a>
                     <a href="{{{{ url_for('fileset_search') }}}}">Fileset Search</a>
-                    <a href="{{{{ url_for('logs') }}}}">Logs</a>
+                    <a href="{{{{ url_for('logs', sort='id-desc') }}}}">Logs</a>
                     <a href="{{{{ url_for('config') }}}}">Config</a>
                 </div>
             </nav>
@@ -824,7 +824,7 @@ def confirm_merge(id):
                     <a href="{{ url_for('user_games_list') }}">User Games List</a>
                     <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
                     <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
-                    <a href="{{ url_for('logs') }}">Logs</a>
+                    <a href="{{ url_for('logs', sort='id-desc') }}">Logs</a>
                     <a href="{{ url_for('config') }}">Config</a>
                 </div>
             </nav>
diff --git a/pagination.py b/pagination.py
index 22f7930..339fc81 100644
--- a/pagination.py
+++ b/pagination.py
@@ -22,6 +22,30 @@ def get_join_columns(table1, table2, mapping):
     return "No primary-foreign key mapping provided. Filter is invalid"
 
 
+def build_search_condition(value, column):
+    phrases = re.findall(r'"([^"]+)"', value)
+    if phrases:
+        conditions = [f"{column} REGEXP '{re.escape(p)}'" for p in phrases]
+        return " AND ".join(conditions)
+
+    if "+" in value:
+        and_terms = value.split("+")
+        and_conditions = []
+        for term in and_terms:
+            or_terms = term.strip().split()
+            if len(or_terms) > 1:
+                or_cond = " OR ".join(
+                    [f"{column} REGEXP '{re.escape(t)}'" for t in or_terms if t]
+                )
+                and_conditions.append(f"({or_cond})")
+            else:
+                and_conditions.append(f"{column} REGEXP '{re.escape(term.strip())}'")
+        return " AND ".join(and_conditions)
+    else:
+        or_terms = value.split()
+        return " OR ".join([f"{column} REGEXP '{re.escape(t)}'" for t in or_terms if t])
+
+
 def create_page(
     filename,
     results_per_page,
@@ -46,62 +70,47 @@ def create_page(
     )
 
     with conn.cursor() as cursor:
-        # Handle sorting
-        sort = request.args.get("sort")
-        if sort:
-            column = sort.split("-")
-            order = f"ORDER BY {column[0]}"
-            if "desc" in sort:
-                order += " DESC"
+        tables = set()
+        where_clauses = []
 
-        if set(request.args.keys()).difference({"page", "sort"}):
-            condition = "WHERE "
-            tables = set()
-            for key, value in request.args.items():
-                if key in ["page", "sort"] or value == "":
-                    continue
-                tables.add(filters[key])
-                if value == "":
-                    value = ".*"
-                condition += (
-                    f" AND {filters[key]}.{'id' if key == 'fileset' else key} REGEXP '{value}'"
-                    if condition != "WHERE "
-                    else f"{filters[key]}.{'id' if key == 'fileset' else key} REGEXP '{value}'"
-                )
+        for key, value in request.args.items():
+            if key in ("page", "sort") or value == "":
+                continue
+            tables.add(filters[key])
+            col = f"{filters[key]}.{'id' if key == 'fileset' else key}"
+            parsed = build_search_condition(value, col)
+            if parsed:
+                where_clauses.append(parsed)
 
-            if condition == "WHERE ":
-                condition = ""
+        condition = ""
+        if where_clauses:
+            condition = "WHERE " + " AND ".join(where_clauses)
 
-            # Handle multiple tables
-            from_query = records_table
-            join_order = ["game", "engine"]
-            tables_list = sorted(
-                list(tables),
-                key=lambda t: join_order.index(t) if t in join_order else 99,
-            )
-            if records_table not in tables_list or len(tables_list) > 1:
-                for table in tables_list:
-                    if table == records_table:
-                        continue
-                    if table == "engine":
-                        if "game" in tables:
-                            from_query += " JOIN engine ON engine.id = game.engine"
-                        else:
-                            from_query += " JOIN game ON game.id = fileset.game JOIN engine ON engine.id = game.engine"
+        from_query = records_table
+        join_order = ["game", "engine"]
+        tables_list = sorted(
+            list(tables), key=lambda t: join_order.index(t) if t in join_order else 99
+        )
+
+        if records_table not in tables_list or len(tables_list) > 1:
+            for t in tables_list:
+                if t == records_table:
+                    continue
+                if t == "engine":
+                    if "game" in tables:
+                        from_query += " JOIN engine ON engine.id = game.engine"
                     else:
-                        from_query += f" JOIN {table} ON {get_join_columns(records_table, table, mapping)}"
-            cursor.execute(
-                f"SELECT COUNT({records_table}.id) AS count FROM {from_query} {condition}"
-            )
-            num_of_results = cursor.fetchone()["count"]
+                        from_query += " JOIN game ON game.id = fileset.game JOIN engine ON engine.id = game.engine"
+                else:
+                    from_query += (
+                        f" JOIN {t} ON {get_join_columns(records_table, t, mapping)}"
+                    )
 
-        elif "JOIN" in records_table:
-            first_table = records_table.split(" ")[0]
-            cursor.execute(f"SELECT COUNT({first_table}.id) FROM {records_table}")
-            num_of_results = cursor.fetchone()[f"COUNT({first_table}.id)"]
-        else:
-            cursor.execute(f"SELECT COUNT(id) FROM {records_table}")
-            num_of_results = cursor.fetchone()["COUNT(id)"]
+        base_table = records_table.split(" ")[0]
+        cursor.execute(
+            f"SELECT COUNT({base_table}.id) AS count FROM {from_query} {condition}"
+        )
+        num_of_results = cursor.fetchone()["count"]
 
         num_of_pages = (num_of_results + results_per_page - 1) // results_per_page
         print(f"Num of results: {num_of_results}, Num of pages: {num_of_pages}")
@@ -110,29 +119,21 @@ def create_page(
         page = max(1, min(page, num_of_pages))
         offset = (page - 1) * results_per_page
 
-        # Fetch results
-        if set(request.args.keys()).difference({"page"}):
-            condition = "WHERE "
-            for key, value in request.args.items():
-                if key not in filters:
-                    continue
-
-                value = pymysql.converters.escape_string(value)
-                if value == "":
-                    value = ".*"
-                field = f"{filters[key]}.{'id' if key == 'fileset' else key}"
-                if value == ".*":
-                    clause = f"({field} IS NULL OR {field} REGEXP '{value}')"
-                else:
-                    clause = f"{field} REGEXP '{value}'"
-                condition += f" AND {clause}" if condition != "WHERE " else clause
-
-            if condition == "WHERE ":
-                condition = ""
-
-            query = f"{select_query} {condition} {order} LIMIT {results_per_page} OFFSET {offset}"
+        # Sort
+        order = ""
+        sort_param = request.args.get("sort")
+        if sort_param:
+            sort_parts = sort_param.split("-")
+            sort_col = sort_parts[0]
+            order = f"ORDER BY {sort_col}"
+            if "desc" in sort_param:
+                order += " DESC"
         else:
-            query = f"{select_query} {order} LIMIT {results_per_page} OFFSET {offset}"
+            if records_table == "log":
+                order = "ORDER BY `id` DESC"
+
+        # Fetch results
+        query = f"{select_query} {condition} {order} LIMIT {results_per_page} OFFSET {offset}"
         cursor.execute(query)
         results = cursor.fetchall()
 
@@ -154,7 +155,7 @@ def create_page(
             <a href="{{ url_for('user_games_list') }}">User Games List</a>
             <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
             <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
-            <a href="{{ url_for('logs') }}">Logs</a>
+            <a href="{{ url_for('logs', sort='id-desc') }}">Logs</a>
             <a href="{{ url_for('config') }}">Config</a>
         </div>
     </nav>
diff --git a/templates/config.html b/templates/config.html
index a578779..d730f15 100644
--- a/templates/config.html
+++ b/templates/config.html
@@ -117,7 +117,7 @@
             <a href="{{ url_for('user_games_list') }}">User Games List</a>
             <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
             <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
-            <a href="{{ url_for('logs') }}">Logs</a>
+            <a href="{{ url_for('logs', sort='id-desc')}}">Logs</a>
             <a href="{{ url_for('config') }}">Config</a>
         </div>
     </nav>
diff --git a/templates/home.html b/templates/home.html
index c41cda0..ec093df 100644
--- a/templates/home.html
+++ b/templates/home.html
@@ -89,7 +89,7 @@
             <a href="{{ url_for('user_games_list') }}">User Games List</a>
             <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
             <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
-            <a href="{{ url_for('logs') }}">Logs</a>
+            <a href="{{ url_for('logs', sort='id-desc')}}">Logs</a>
             <a href="{{ url_for('config') }}">Config</a>
         </div>
         <div class="dev">


Commit: ee694afc42b59a485e5b985ee2578d007ca99415
    https://github.com/scummvm/scummvm-sites/commit/ee694afc42b59a485e5b985ee2578d007ca99415
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add separate logs per page and filesets per page in config.

Changed paths:
    fileset.py
    templates/config.html


diff --git a/fileset.py b/fileset.py
index 4eae1dd..3ba6f99 100644
--- a/fileset.py
+++ b/fileset.py
@@ -1383,26 +1383,41 @@ def config():
     Stores the user configurations in the cookies
     """
     if request.method == "POST":
-        items_per_page = request.form.get("items_per_page", "25")
+        filesets_per_page = request.form.get("filesets_per_page", "25")
+        logs_per_page = request.form.get("logs_per_page", "25")
 
         try:
-            items_per_page_int = int(items_per_page)
-            if items_per_page_int < 1:
-                items_per_page = "1"
+            filesets_per_page_int = int(filesets_per_page)
+            logs_per_page_int = int(logs_per_page)
+            if filesets_per_page_int < 1:
+                filesets_per_page = "1"
+            if logs_per_page_int < 1:
+                logs_per_page_int = "1"
         except ValueError:
-            items_per_page = "25"
+            filesets_per_page = "25"
+            logs_per_page = "25"
 
         resp = make_response(redirect(url_for("config")))
-        resp.set_cookie("items_per_page", items_per_page, max_age=365 * 24 * 60 * 60)
+        resp.set_cookie(
+            "filesets_per_page", filesets_per_page, max_age=365 * 24 * 60 * 60
+        )
+        resp.set_cookie("logs_per_page", logs_per_page, max_age=365 * 24 * 60 * 60)
         return resp
 
-    items_per_page = int(request.cookies.get("items_per_page", "25"))
+    filesets_per_page = int(request.cookies.get("filesets_per_page", "25"))
+    logs_per_page = int(request.cookies.get("logs_per_page", "25"))
+
+    return render_template(
+        "config.html", filesets_per_page=filesets_per_page, logs_per_page=logs_per_page
+    )
+
 
-    return render_template("config.html", items_per_page=items_per_page)
+def get_filesets_per_page():
+    return int(request.cookies.get("filesets_per_page", "25"))
 
 
-def get_items_per_page():
-    return int(request.cookies.get("items_per_page", "25"))
+def get_logs_per_page():
+    return int(request.cookies.get("logs_per_page", "25"))
 
 
 @app.route("/validate", methods=["POST"])
@@ -1497,47 +1512,6 @@ def ready_for_review():
     return redirect(url)
 
 
- at app.route("/games_list")
-def games_list():
-    filename = "games_list"
-    records_table = "game"
-    select_query = """
-    SELECT engineid, gameid, extra, platform, language, game.name,
-    status, fileset.id as fileset
-    FROM game
-    JOIN engine ON engine.id = game.engine
-    JOIN fileset ON game.id = fileset.game
-    """
-    order = "ORDER BY gameid"
-    filters = {
-        "engineid": "engine",
-        "gameid": "game",
-        "extra": "game",
-        "platform": "game",
-        "language": "game",
-        "name": "game",
-        "status": "fileset",
-    }
-    mapping = {
-        "engine.id": "game.engine",
-        "game.id": "fileset.game",
-    }
-
-    items_per_page = get_items_per_page()
-
-    return render_template_string(
-        create_page(
-            filename,
-            items_per_page,
-            records_table,
-            select_query,
-            order,
-            filters,
-            mapping,
-        )
-    )
-
-
 @app.route("/logs")
 def logs():
     filename = "logs"
@@ -1551,10 +1525,10 @@ def logs():
         "user": "log",
         "text": "log",
     }
-    items_per_page = get_items_per_page()
+    logs_per_page = get_logs_per_page()
     return render_template_string(
         create_page(
-            filename, items_per_page, records_table, select_query, order, filters
+            filename, logs_per_page, records_table, select_query, order, filters
         )
     )
 
@@ -1588,11 +1562,11 @@ def fileset_search():
         "engine.id": "game.engine",
         "fileset.id": "transactions.fileset",
     }
-    items_per_page = get_items_per_page()
+    filesets_per_page = get_filesets_per_page()
     return render_template_string(
         create_page(
             filename,
-            items_per_page,
+            filesets_per_page,
             records_table,
             select_query,
             order,
diff --git a/templates/config.html b/templates/config.html
index d730f15..a524ac3 100644
--- a/templates/config.html
+++ b/templates/config.html
@@ -23,7 +23,6 @@
             background-color: #ffffff;
             color: #000000;
             padding: 10px;
-            font-size: 50px;
             align-self: flex-start;
             margin-left: 2vh;
         }
@@ -35,51 +34,58 @@
 
         .config-section {
             margin-bottom: 30px;
-            padding: 20px;
+            padding: 25px;
+            padding-left: 50px;
+            padding-right: 50px;
             border: 1px solid #ddd;
             border-radius: 8px;
             background-color: #f9f9f9;
-            max-width: 600px;
-            margin-left: auto;
-            margin-right: auto;
+            width: 100%;
+            box-sizing: border-box;
         }
 
         .config-item {
             display: flex;
-            align-items: center;
-            margin-bottom: 15px;
+            flex-direction: column;
+            margin-bottom: 20px;
+            gap: 10px;
         }
 
         .config-item label {
-            flex: 1;
-            margin-right: 15px;
             font-weight: 500;
+            font-size: 14px;
+            color: #333;
         }
 
         .current-value {
             font-style: italic;
             color: #666;
             font-size: 12px;
+            margin-top: 4px;
+        }
+
+        .input-container {
+            display: flex;
+            align-items: center;
+            gap: 15px;
+            margin-bottom: 20px;
+        }
+
+        .submit-section {
+            padding: 20px 0;
+            border-top: 1px solid #ddd;
+            margin-top: 20px;
         }
 
         .config-item input,
         .config-item select {
-            padding: 8px 12px;
+            padding: 10px 12px;
             border: 1px solid #ccc;
-            border-radius: 4px;
+            border-radius: 6px;
             font-size: 14px;
-            min-width: 120px;
-            margin-right: 2vw;
-        }
-
-        .success-message {
-            background-color: #d4edda;
-            color: #155724;
-            padding: 10px;
-            border: 1px solid #c3e6cb;
-            border-radius: 4px;
-            margin-bottom: 20px;
-            text-align: center;
+            width: 200px;
+            box-sizing: border-box;
+            transition: border-color 0.3s ease;
         }
 
         @media (max-width: 768px) {
@@ -124,24 +130,39 @@
     <div class="content">
         <h1 class="title">User Configurations</h1>
         <div class="main">
-            <form method="POST" action="{{ url_for('config') }}">
+            <form class="config-form" method="POST" action="{{ url_for('config') }}">
                 <div class="config-section">
                     <div class="config-item">
-                        <label for="items_per_page">Number of items per page:</label>
-                        <input type="number" name="items_per_page" id="items_per_page" value="{{ items_per_page }}"
-                            min="1">
-                        <div class="current-value">Current: {{ items_per_page }}</div>
+                        <label for="filesets_per_page">Number of filesets per page:</label>
+                        <div class="input-container">
+                            <input type="number" name="filesets_per_page" id="filesets_per_page" value="{{ filesets_per_page }}"
+                                min="1">
+                            <div class="current-value">Default: 25</div>
+                        </div>
+                        <label for="logs_per_page">Number of logs per page:</label>
+                        <div class="input-container">
+                            <input type="number" name="logs_per_page" id="logs_per_page" value="{{ logs_per_page }}"
+                                min="1">
+                            <div class="current-value">Default: 25</div>
+                        </div>
                     </div>
-                    <div style="text-align: center; padding: 20px;">
+                    <div class="submit-section">
                         <button type="submit">Save Configuration</button>
                     </div>
+                </div>
             </form>
         </div>
     </div>
     <script>
-        document.getElementById('items_per_page').addEventListener('input', function () {
+        document.getElementById('filesets_per_page').addEventListener('input', function () {
+            const value = parseInt(this.value);
+            if (value < 1) this.value = 1;
+            if (value > 250) this.value = 250;
+        });
+        document.getElementById('logs_per_page').addEventListener('input', function () {
             const value = parseInt(this.value);
             if (value < 1) this.value = 1;
+            if (value > 250) this.value = 250;
         });
     </script>
 </body>


Commit: 2c4458dc5bd1a8dde9e9bf2b9b5b859fc6f388aa
    https://github.com/scummvm/scummvm-sites/commit/2c4458dc5bd1a8dde9e9bf2b9b5b859fc6f388aa
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add/update deployment related files.

Changed paths:
  A app.wsgi
    apache2-config/gamesdb.sev.zone.conf
    fileset.py
    requirements.txt


diff --git a/apache2-config/gamesdb.sev.zone.conf b/apache2-config/gamesdb.sev.zone.conf
index 8b37f5b..4356372 100644
--- a/apache2-config/gamesdb.sev.zone.conf
+++ b/apache2-config/gamesdb.sev.zone.conf
@@ -4,12 +4,15 @@
     ServerAdmin webmaster at localhost
     CustomLog ${APACHE_LOG_DIR}/integrity-access.log combined
     ErrorLog ${APACHE_LOG_DIR}/integrity-error.log
-    DocumentRoot /home/ubuntu/projects/python/scummvm-sites
+    DocumentRoot /home/ubuntu/projects/python/scummvm_sites_2025/scummvm-sites
     WSGIDaemonProcess scummvm-sites user=www-data group=www-data threads=5
-    WSGIScriptAlias / /home/ubuntu/projects/python/scummvm-sites/app.wsgi
+    WSGIScriptAlias / /home/ubuntu/projects/python/scummvm_sites_2025/scummvm-sites/app.wsgi
 
-    <Directory /home/ubuntu/projects/python/scummvm-sites>
-        Require all granted
+    <Directory /home/ubuntu/projects/python/scummvm_sites_2025/scummvm-sites>
+        AuthType Basic
+        AuthName "nope"
+        AuthUserFile /home/ubuntu/projects/python/scummvm_sites_2025/.htpasswd
+        Require valid-user
     </Directory>
 
 </VirtualHost>
diff --git a/app.wsgi b/app.wsgi
new file mode 100644
index 0000000..a52d3ab
--- /dev/null
+++ b/app.wsgi
@@ -0,0 +1,12 @@
+import sys
+import logging
+
+sys.path.insert(0, "/home/ubuntu/projects/python/scummvm_sites_2025/scummvm-sites")
+
+from fileset import app as application
+
+logging.basicConfig(stream=sys.stderr)
+sys.stdout = sys.stderr  # mod_wsgi restricts stdout; send print() output to stderr
+
+if __name__ == "__main__":
+    application.run()
diff --git a/fileset.py b/fileset.py
index 3ba6f99..42d07d3 100644
--- a/fileset.py
+++ b/fileset.py
@@ -1597,4 +1597,4 @@ def delete_files(id):
 
 if __name__ == "__main__":
     app.secret_key = secret_key
-    app.run(port=5001, debug=True, host="0.0.0.0")
+    app.run(debug=True, host="0.0.0.0")
diff --git a/requirements.txt b/requirements.txt
index 8486da7..8340e26 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,18 +1,18 @@
-blinker==1.9.0
-cffi==1.17.1
-click==8.1.8
-cryptography==45.0.3
-Flask==3.1.0
-iniconfig==2.1.0
-itsdangerous==2.2.0
-Jinja2==3.1.5
-MarkupSafe==3.0.2
-packaging==25.0
-pluggy==1.6.0
-pycparser==2.22
-Pygments==2.19.1
-PyMySQL==1.1.1
-pytest==8.4.0
-setuptools==75.8.0
-Werkzeug==3.1.3
-wheel==0.45.1
+blinker
+cffi
+click
+cryptography
+Flask
+iniconfig
+itsdangerous
+Jinja2
+MarkupSafe
+packaging
+pluggy
+pycparser
+Pygments
+PyMySQL
+pytest
+setuptools
+Werkzeug
+wheel


Commit: 88177064e4a718eab54f620afdd38a745708a10b
    https://github.com/scummvm/scummvm-sites/commit/88177064e4a718eab54f620afdd38a745708a10b
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add underline on hover for navbar links.

Changed paths:
    static/style.css


diff --git a/static/style.css b/static/style.css
index 527824b..b93da50 100644
--- a/static/style.css
+++ b/static/style.css
@@ -67,9 +67,9 @@ nav {
   margin-left: 10px;
 }
 
-/* .nav-buttons a:hover {
-  box-shadow: 0 4px 12px rgba(39, 145, 232, 0.4);
-} */
+.nav-buttons a:hover {
+  text-decoration: underline;
+}
 
 .logo img {
   height: 75px;


Commit: 9e93e50919dfb6a5ffbd842fdb81b30d3e01c451
    https://github.com/scummvm/scummvm-sites/commit/9e93e50919dfb6a5ffbd842fdb81b30d3e01c451
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add favicon.

Changed paths:
    fileset.py
    pagination.py
    templates/config.html
    templates/home.html


diff --git a/fileset.py b/fileset.py
index 42d07d3..db0c89c 100644
--- a/fileset.py
+++ b/fileset.py
@@ -129,6 +129,8 @@ def fileset():
             <html>
             <head>
                 <link rel="stylesheet" type="text/css" href="{{{{ url_for('static', filename='style.css') }}}}">
+                <link rel="icon" type="image/png" sizes="32x32" href="/static/favicon-32x32.png">
+                <link rel="icon" type="image/png" sizes="16x16" href="/static/favicon-16x16.png">
             </head>
             <body>
             <nav>
@@ -493,6 +495,8 @@ def merge_fileset(id):
                 <html>
                 <head>
                     <link rel="stylesheet" type="text/css" href="{{{{ url_for('static', filename='style.css') }}}}">
+                    <link rel="icon" type="image/png" sizes="32x32" href="/static/favicon-32x32.png">
+                    <link rel="icon" type="image/png" sizes="16x16" href="/static/favicon-16x16.png">
                 </head>
                 <body>
                 <nav>
@@ -541,6 +545,8 @@ def merge_fileset(id):
     <html>
     <head>
         <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
+        <link rel="icon" type="image/png" sizes="32x32" href="/static/favicon-32x32.png">
+        <link rel="icon" type="image/png" sizes="16x16" href="/static/favicon-16x16.png">
     </head>
     <body>
     <nav>
@@ -609,6 +615,8 @@ def possible_merge_filesets(id):
             <html>
             <head>
                 <link rel="stylesheet" type="text/css" href="{{{{ url_for('static', filename='style.css') }}}}">
+                <link rel="icon" type="image/png" sizes="32x32" href="/static/favicon-32x32.png">
+                <link rel="icon" type="image/png" sizes="16x16" href="/static/favicon-16x16.png">
             </head>
             <body>
             <nav>
@@ -812,6 +820,8 @@ def confirm_merge(id):
             <html>
             <head>
                 <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
+                <link rel="icon" type="image/png" sizes="32x32" href="/static/favicon-32x32.png">
+                <link rel="icon" type="image/png" sizes="16x16" href="/static/favicon-16x16.png">
             </head>
             <body>
             <nav>
diff --git a/pagination.py b/pagination.py
index 339fc81..37c5c28 100644
--- a/pagination.py
+++ b/pagination.py
@@ -143,6 +143,8 @@ def create_page(
     <html>
     <head>
         <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
+        <link rel="icon" type="image/png" sizes="32x32" href="/static/favicon-32x32.png">
+        <link rel="icon" type="image/png" sizes="16x16" href="/static/favicon-16x16.png">
     </head>
     <body>
     <nav>
diff --git a/templates/config.html b/templates/config.html
index a524ac3..73b37a9 100644
--- a/templates/config.html
+++ b/templates/config.html
@@ -3,6 +3,8 @@
 
 <head>
     <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
+    <link rel="icon" type="image/png" sizes="32x32" href="/static/favicon-32x32.png">
+    <link rel="icon" type="image/png" sizes="16x16" href="/static/favicon-16x16.png">
     <style>
         body {
             margin: 0;
diff --git a/templates/home.html b/templates/home.html
index ec093df..1953641 100644
--- a/templates/home.html
+++ b/templates/home.html
@@ -3,6 +3,8 @@
 
 <head>
     <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
+    <link rel="icon" type="image/png" sizes="32x32" href="/static/favicon-32x32.png">
+    <link rel="icon" type="image/png" sizes="16x16" href="/static/favicon-16x16.png">
     <style>
         body {
             margin: 0;


Commit: 5c69679558b57a81bb18bc2f2b9495becfb938f9
    https://github.com/scummvm/scummvm-sites/commit/5c69679558b57a81bb18bc2f2b9495becfb938f9
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add column width variable in config.

Changed paths:
  A templates/pagination/navbar.html
    fileset.py
    pagination.py
    templates/config.html


diff --git a/fileset.py b/fileset.py
index db0c89c..aaffe4a 100644
--- a/fileset.py
+++ b/fileset.py
@@ -1392,10 +1392,60 @@ def config():
     """
     Stores the user configurations in the cookies
     """
+
+    fileset_dashboard_widths_default = {
+        "fileset_serial_no": "5",
+        "fileset_id": "5",
+        "fileset_engineid": "10",
+        "fileset_gameid": "10",
+        "fileset_extra": "10",
+        "fileset_platform": "10",
+        "fileset_language": "10",
+        "fileset_status": "10",
+        "fileset_transaction": "30",
+    }
+    log_dashboard_widths_default = {
+        "log_serial_no": "4",
+        "log_id": "4",
+        "log_timestamp": "10",
+        "log_category": "15",
+        "log_user": "10",
+        "log_text": "57",
+    }
+    fileset_dashboard_widths = defaultdict(str)
+
+    fileset_fields = [
+        ("fileset_serial_no", "S. No."),
+        ("fileset_id", "fileset"),
+        ("fileset_engineid", "engineid"),
+        ("fileset_gameid", "gameid"),
+        ("fileset_extra", "extra"),
+        ("fileset_platform", "platform"),
+        ("fileset_language", "language"),
+        ("fileset_status", "status"),
+        ("fileset_transaction", "transaction"),
+    ]
+    log_fields = [
+        ("log_serial_no", "S. No."),
+        ("log_id", "id"),
+        ("log_timestamp", "timestamp"),
+        ("log_category", "category"),
+        ("log_text", "text"),
+    ]
+
     if request.method == "POST":
         filesets_per_page = request.form.get("filesets_per_page", "25")
         logs_per_page = request.form.get("logs_per_page", "25")
 
+        fileset_dashboard_widths = {
+            field: request.form.get(field, default)
+            for field, default in fileset_dashboard_widths_default.items()
+        }
+        log_dashboard_widths = {
+            field: request.form.get(field, default)
+            for field, default in log_dashboard_widths_default.items()
+        }
+
         try:
             filesets_per_page_int = int(filesets_per_page)
             logs_per_page_int = int(logs_per_page)
@@ -1403,6 +1453,12 @@ def config():
                 filesets_per_page = "1"
             if logs_per_page_int < 1:
                 logs_per_page_int = "1"
+            fileset_dashboard_widths = {
+                k: str(max(1, int(v))) for k, v in fileset_dashboard_widths.items()
+            }
+            log_dashboard_widths = {
+                k: str(max(1, int(v))) for k, v in log_dashboard_widths.items()
+            }
         except ValueError:
             filesets_per_page = "25"
             logs_per_page = "25"
@@ -1412,13 +1468,33 @@ def config():
             "filesets_per_page", filesets_per_page, max_age=365 * 24 * 60 * 60
         )
         resp.set_cookie("logs_per_page", logs_per_page, max_age=365 * 24 * 60 * 60)
+        for field, value in fileset_dashboard_widths.items():
+            resp.set_cookie(field, value, max_age=365 * 24 * 60 * 60)
+        for field, value in log_dashboard_widths.items():
+            resp.set_cookie(field, value, max_age=365 * 24 * 60 * 60)
+
         return resp
 
     filesets_per_page = int(request.cookies.get("filesets_per_page", "25"))
     logs_per_page = int(request.cookies.get("logs_per_page", "25"))
 
+    fileset_dashboard_widths = {
+        field: [int(request.cookies.get(field, default)), default]
+        for field, default in fileset_dashboard_widths_default.items()
+    }
+    log_dashboard_widths = {
+        field: [int(request.cookies.get(field, default)), default]
+        for field, default in log_dashboard_widths_default.items()
+    }
+
     return render_template(
-        "config.html", filesets_per_page=filesets_per_page, logs_per_page=logs_per_page
+        "config.html",
+        filesets_per_page=filesets_per_page,
+        logs_per_page=logs_per_page,
+        fileset_dashboard_widths=fileset_dashboard_widths,
+        fileset_fields=fileset_fields,
+        log_dashboard_widths=log_dashboard_widths,
+        log_fields=log_fields,
     )
 
 
@@ -1430,6 +1506,10 @@ def get_logs_per_page():
     return int(request.cookies.get("logs_per_page", "25"))
 
 
+def get_width(name, default):
+    return int(request.cookies.get(name, default))
+
+
 @app.route("/validate", methods=["POST"])
 def validate():
     error_codes = {
@@ -1548,8 +1628,7 @@ def fileset_search():
     filename = "fileset_search"
     records_table = "fileset"
     select_query = """
-    SELECT fileset.id as fileset, extra, platform, language, game.gameid, megakey,
-    status, transaction, engineid
+    SELECT fileset.id as fileset, engineid, game.gameid, extra, platform, language, status, transaction
     FROM fileset
     LEFT JOIN game ON game.id = fileset.game
     LEFT JOIN engine ON engine.id = game.engine
@@ -1558,14 +1637,13 @@ def fileset_search():
     order = "ORDER BY fileset.id"
     filters = {
         "fileset": "fileset",
+        "engineid": "engine",
+        "gameid": "game",
         "extra": "game",
         "platform": "game",
         "language": "game",
-        "gameid": "game",
-        "megakey": "fileset",
         "status": "fileset",
         "transaction": "transactions",
-        "engineid": "engine",
     }
     mapping = {
         "game.id": "fileset.game",
diff --git a/pagination.py b/pagination.py
index 37c5c28..530464b 100644
--- a/pagination.py
+++ b/pagination.py
@@ -4,11 +4,8 @@ import json
 import re
 import os
 
-app = Flask(__name__)
 
-stylesheet = "style.css"
-jquery_file = "https://code.jquery.com/jquery-3.7.0.min.js"
-js_file = "js_functions.js"
+app = Flask(__name__)
 
 
 def get_join_columns(table1, table2, mapping):
@@ -137,50 +134,64 @@ def create_page(
         cursor.execute(query)
         results = cursor.fetchall()
 
+    # The initial HTML, including the navbar, is stored in a separate template file.
+    html = ""
+    with open("templates/pagination/navbar.html", "r") as f:
+        html = f.read()
+
     # Generate HTML
-    html = """
-<!DOCTYPE html>
-    <html>
-    <head>
-        <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
-        <link rel="icon" type="image/png" sizes="32x32" href="/static/favicon-32x32.png">
-        <link rel="icon" type="image/png" sizes="16x16" href="/static/favicon-16x16.png">
-    </head>
-    <body>
-    <nav>
-        <div class="logo">
-            <a href="{{ url_for('home') }}">
-                <img src="{{ url_for('static', filename='integrity_service_logo_256.png') }}" alt="Logo">
-            </a>
-        </div>
-        <div class="nav-buttons">
-            <a href="{{ url_for('user_games_list') }}">User Games List</a>
-            <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
-            <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
-            <a href="{{ url_for('logs', sort='id-desc') }}">Logs</a>
-            <a href="{{ url_for('config') }}">Config</a>
-        </div>
-    </nav>
-<form id='filters-form' method='GET' onsubmit='remove_empty_inputs()'>
-<table style="margin-top: 80px;">
-"""
+    html += """
+        <form id='filters-form' method='GET' onsubmit='remove_empty_inputs()'>
+        <table class="fixed-table" style="margin-top: 80px;">
+    """
+
+    from fileset import get_width
+
+    if records_table == "fileset":
+        fileset_dashboard_widths_default = {
+            "fileset_serial_no": "5",
+            "fileset_id": "5",
+            "fileset_engineid": "10",
+            "fileset_gameid": "10",
+            "fileset_extra": "10",
+            "fileset_platform": "10",
+            "fileset_language": "10",
+            "fileset_status": "10",
+            "fileset_transaction": "30",
+        }
+        html += "<colgroup>"
+        for name, default in fileset_dashboard_widths_default.items():
+            width = get_width(name, default)
+            html += f"<col style='width: {width}%;'>"
+        html += "</colgroup>"
+    if records_table == "log":
+        log_dashboard_widths_default = {
+            "log_serial_no": "4",
+            "log_id": "4",
+            "log_timestamp": "10",
+            "log_category": "15",
+            "log_user": "10",
+            "log_text": "57",
+        }
+        html += "<colgroup>"
+        for name, default in log_dashboard_widths_default.items():
+            width = get_width(name, default)
+            html += f"<col style='width: {width}%;'>"
+        html += "</colgroup>"
+
     if filters:
         if records_table != "log":
-            html += "<tr class='filter'><td></td><td></td>"
+            html += """<tr class='filter'><td class='filter'><input type='submit' value='Submit'></td>"""
         else:
-            html += "<tr class='filter'><td></td>"
+            html += """<tr class='filter'><td class='filter'><input type='submit' value='Submit'></td>"""
 
         for key in filters.keys():
             filter_value = request.args.get(key, "")
             html += f"<td class='filter'><input type='text' class='filter' placeholder='{key}' name='{key}' value='{filter_value}'/></td>"
-        html += "</tr><tr class='filter'><td></td><td></td><td class='filter'><input type='submit' value='Submit'></td></tr>"
+        html += "</tr>"
 
-    html += "<th>#</th>"
-    if records_table != "log":
-        html += "<th>Fileset ID</th>"
+    html += "<th>S. No.</th>"
     for key in filters.keys():
-        if key in ["fileset", "fileset_id"]:
-            continue
         vars = "&".join([f"{k}={v}" for k, v in request.args.items() if k != "sort"])
         sort = request.args.get("sort", "")
         if sort == key:
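
For illustration, with no cookies set the loop above renders the log dashboard's <colgroup> from the defaults; a standalone sketch of the same string-building:

    log_widths = {"log_serial_no": "4", "log_id": "4", "log_timestamp": "10",
                  "log_category": "15", "log_user": "10", "log_text": "57"}
    html = "<colgroup>" + "".join(
        f"<col style='width: {w}%;'>" for w in log_widths.values()
    ) + "</colgroup>"
    # -> <colgroup><col style='width: 4%;'> ... <col style='width: 57%;'></colgroup>
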
diff --git a/templates/config.html b/templates/config.html
index 73b37a9..0a7b6b9 100644
--- a/templates/config.html
+++ b/templates/config.html
@@ -20,7 +20,7 @@
         }
 
         .title {
-            margin-top: 8vh;
+            margin-top: 150px;
             text-align: center;
             background-color: #ffffff;
             color: #000000;
@@ -90,6 +90,12 @@
             transition: border-color 0.3s ease;
         }
 
+        .width-container {
+            display: flex;
+            gap: 30px;
+            flex-wrap: wrap;
+        }
+
         @media (max-width: 768px) {
             .config {
                 font-size: 40px;
@@ -135,10 +141,11 @@
             <form class="config-form" method="POST" action="{{ url_for('config') }}">
                 <div class="config-section">
                     <div class="config-item">
+                        <h4>Number of items per page</h4>
                         <label for="filesets_per_page">Number of filesets per page:</label>
                         <div class="input-container">
-                            <input type="number" name="filesets_per_page" id="filesets_per_page" value="{{ filesets_per_page }}"
-                                min="1">
+                            <input type="number" name="filesets_per_page" id="filesets_per_page"
+                                value="{{ filesets_per_page }}" min="1">
                             <div class="current-value">Default: 25</div>
                         </div>
                         <label for="logs_per_page">Number of logs per page:</label>
@@ -147,11 +154,38 @@
                                 min="1">
                             <div class="current-value">Default: 25</div>
                         </div>
+
+                        <h4>Field Widths for Fileset Search Dashboard</h4>
+                        <div class="width-container">
+                            {% for field_name, label in fileset_fields %}
+                            <div style="display: flex; flex-direction: column; gap: 10px">
+                                <label for="{{ field_name }}">{{ label }}:</label>
+                                <div class="input-container" style="margin-bottom: 2px;">
+                                    <input type="number" name="{{ field_name }}" id="{{ field_name }}"
+                                        value="{{ fileset_dashboard_widths[field_name][0] }}" min="1">
+                                </div>
+                                <div class="current-value" style="margin-top: 0px;">Default: {{ fileset_dashboard_widths[field_name][1] }}%</div>
+                            </div>
+                            {% endfor %}
+                        </div>
+                        
+                        <h4>Field Widths for Log Dashboard</h4>
+                        <div class="width-container">
+                            {% for field_name, label in log_fields %}
+                            <div style="display: flex; flex-direction: column; gap: 10px">
+                                <label for="{{ field_name }}">{{ label }}:</label>
+                                <div class="input-container" style="margin-bottom: 2px;">
+                                    <input type="number" name="{{ field_name }}" id="{{ field_name }}"
+                                        value="{{ log_dashboard_widths[field_name][0] }}" min="1">
+                                </div>
+                                <div class="current-value" style="margin-top: 0px;">Default: {{ log_dashboard_widths[field_name][1] }}%</div>
+                            </div>
+                            {% endfor %}
+                        </div>
+                        <div class="submit-section">
+                            <button type="submit">Save Configuration</button>
+                        </div>
                     </div>
-                    <div class="submit-section">
-                        <button type="submit">Save Configuration</button>
-                    </div>
-                </div>
             </form>
         </div>
     </div>
@@ -166,6 +200,21 @@
             if (value < 1) this.value = 1;
             if (value > 250) this.value = 250;
         });
+        {% for field_name, label in fileset_fields %}
+        document.getElementById('{{ field_name }}').addEventListener('input', function () {
+            const value = parseInt(this.value);
+            if (value < 1) this.value = 1;
+            if (value > 50) this.value = 50;
+        });
+        {% endfor %}
+        {% for field_name, label in log_fields %}
+        document.getElementById('{{ field_name }}').addEventListener('input', function () {
+            const value = parseInt(this.value);
+            if (value < 1) this.value = 1;
+            if (value > 50) this.value = 50;
+        });
+        {% endfor %}
+
     </script>
 </body>
 
diff --git a/templates/pagination/navbar.html b/templates/pagination/navbar.html
new file mode 100644
index 0000000..227428d
--- /dev/null
+++ b/templates/pagination/navbar.html
@@ -0,0 +1,22 @@
+<!DOCTYPE html>
+    <html>
+    <head>
+        <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
+        <link rel="icon" type="image/png" sizes="32x32" href="/static/favicon-32x32.png">
+        <link rel="icon" type="image/png" sizes="16x16" href="/static/favicon-16x16.png">
+    </head>
+    <body>
+    <nav>
+        <div class="logo">
+            <a href="{{ url_for('home') }}">
+                <img src="{{ url_for('static', filename='integrity_service_logo_256.png') }}" alt="Logo">
+            </a>
+        </div>
+        <div class="nav-buttons">
+            <a href="{{ url_for('user_games_list') }}">User Games List</a>
+            <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
+            <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+            <a href="{{ url_for('logs', sort='id-desc') }}">Logs</a>
+            <a href="{{ url_for('config') }}">Config</a>
+        </div>
+    </nav>

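For context, the column widths rendered in the pagination diff above come from get_width(name, default), which resolves a user-configured value or falls back to the hard-coded default. A minimal sketch of such a helper, assuming the configured widths are persisted in cookies keyed by column name (the cookie storage is an assumption, not shown in this diff):

    # Hypothetical sketch of get_width: read a per-column width from a cookie,
    # falling back to the default when the value is absent or non-numeric.
    from flask import request

    def get_width(name, default):
        value = request.cookies.get(name, "")
        return value if value.isdigit() else default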

Commit: 948adac460f801bd6846a5635885ccf935dc9bca
    https://github.com/scummvm/scummvm-sites/commit/948adac460f801bd6846a5635885ccf935dc9bca
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add metadata information in seeding logs.

Changed paths:
    db_functions.py


diff --git a/db_functions.py b/db_functions.py
index e478dbb..ed19005 100644
--- a/db_functions.py
+++ b/db_functions.py
@@ -529,9 +529,7 @@ def db_insert(data_arr, username=None, skiplog=False):
             insert_game(
                 engine_name, engineid, title, gameid, extra, platform, lang, conn
             )
-
-            log_text = f"size {os.path.getsize(filepath)}, author {author}, version {version}. State {status}."
-
+            log_text = f"Engine Name - {engine_name}, Engine ID - {engineid}, Game ID - {gameid}, Title - {title}, Extra - {extra}, Platform - {platform}, Language - {lang}."
             if insert_fileset(
                 src,
                 detection,

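With illustrative values, the new seeding log line renders as follows (a standalone snippet; the sample values are invented):

    # Illustrative rendering of the metadata log line added above.
    engine_name, engineid, gameid = "SCUMM", "scumm", "monkey1"
    title, extra, platform, lang = "The Secret of Monkey Island", "", "DOS", "en"
    log_text = f"Engine Name - {engine_name}, Engine ID - {engineid}, Game ID - {gameid}, Title - {title}, Extra - {extra}, Platform - {platform}, Language - {lang}."
    print(log_text)
    # Engine Name - SCUMM, Engine ID - scumm, Game ID - monkey1, Title - The Secret of Monkey Island, Extra - , Platform - DOS, Language - en.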

Commit: 4bd0f4c6222428d050486769e10741057e26e236
    https://github.com/scummvm/scummvm-sites/commit/4bd0f4c6222428d050486769e10741057e26e236
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Move html text file to static folder.

Changed paths:
  A static/navbar.html.txt
  R templates/pagination/navbar.html
    pagination.py
    templates/config.html


diff --git a/pagination.py b/pagination.py
index 530464b..c6b6e76 100644
--- a/pagination.py
+++ b/pagination.py
@@ -136,7 +136,8 @@ def create_page(
 
     # Initial html code including the navbar is stored in a separate html file.
     html = ""
-    with open("templates/pagination/navbar.html", "r") as f:
+    navbar_path = os.path.join(app.root_path, "static", "navbar.html.txt")
+    with open(navbar_path, "r") as f:
         html = f.read()
 
     # Generate HTML
diff --git a/templates/pagination/navbar.html b/static/navbar.html.txt
similarity index 100%
rename from templates/pagination/navbar.html
rename to static/navbar.html.txt
diff --git a/templates/config.html b/templates/config.html
index 0a7b6b9..8dc90b2 100644
--- a/templates/config.html
+++ b/templates/config.html
@@ -204,14 +204,14 @@
         document.getElementById('{{ field_name }}').addEventListener('input', function () {
             const value = parseInt(this.value);
             if (value < 1) this.value = 1;
-            if (value > 50) this.value = 50;
+            if (value > 70) this.value = 70;
         });
         {% endfor %}
         {% for field_name, label in log_fields %}
         document.getElementById('{{ field_name }}').addEventListener('input', function () {
             const value = parseInt(this.value);
             if (value < 1) this.value = 1;
-            if (value > 50) this.value = 50;
+            if (value > 70) this.value = 70;
         });
         {% endfor %}
 

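The path change above is the portable way to locate bundled files in Flask: app.root_path points at the application package, so the lookup no longer depends on the working directory the server was started from. A minimal standalone sketch of the same pattern:

    import os
    from flask import Flask

    app = Flask(__name__)

    def load_navbar_html():
        # Resolve against the application package rather than the CWD.
        navbar_path = os.path.join(app.root_path, "static", "navbar.html.txt")
        with open(navbar_path, "r") as f:
            return f.read()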

Commit: 489966cb0d0548c02963057b0758cf639d75fae4
    https://github.com/scummvm/scummvm-sites/commit/489966cb0d0548c02963057b0758cf639d75fae4
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add visual symbols for sorting.

Changed paths:
    pagination.py


diff --git a/pagination.py b/pagination.py
index c6b6e76..236455a 100644
--- a/pagination.py
+++ b/pagination.py
@@ -192,14 +192,23 @@ def create_page(
         html += "</tr>"
 
     html += "<th>S. No.</th>"
+    current_sort = request.args.get("sort", "")
+    sort_key, sort_dir = (current_sort.split("-") + ["asc"])[:2]
+
     for key in filters.keys():
-        vars = "&".join([f"{k}={v}" for k, v in request.args.items() if k != "sort"])
-        sort = request.args.get("sort", "")
-        if sort == key:
-            vars += f"&sort={key}-desc"
+        base_params = {k: v for k, v in request.args.items() if k != "sort"}
+
+        if key == sort_key:
+            next_sort_dir = "asc" if sort_dir == "desc" else "desc"
+            arrow = "▼" if sort_dir == "desc" else "▲"
+            sort_param = f"{key}-{next_sort_dir}"
         else:
-            vars += f"&sort={key}"
-        html += f"<th><a href='{filename}?{vars}'>{key}</a></th>"
+            arrow = ""
+            sort_param = f"{key}-asc"
+
+        base_params["sort"] = sort_param
+        query_string = "&".join(f"{k}={v}" for k, v in base_params.items())
+        html += f"<th><a href='{filename}?{query_string}'>{key} {arrow}</a></th>"
 
     if results:
         counter = offset + 1

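The list-padding idiom introduced here tolerates a sort parameter with no direction suffix. A few worked cases (standalone sketch, not the route itself):

    # Parsing "?sort=<key>" or "?sort=<key>-<dir>" with "asc" as the default direction.
    def parse_sort(current_sort):
        # Appending ["asc"] pads the split so a bare key still yields a direction.
        return tuple((current_sort.split("-") + ["asc"])[:2])

    assert parse_sort("fileset-desc") == ("fileset", "desc")
    assert parse_sort("fileset") == ("fileset", "asc")
    assert parse_sort("") == ("", "asc")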

Commit: 906bc7c21427bea9abad398f25c4c2afba352075
    https://github.com/scummvm/scummvm-sites/commit/906bc7c21427bea9abad398f25c4c2afba352075
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add checksum filtering in fileset search page.

Changed paths:
  R style.css
    fileset.py
    pagination.py
    static/navbar.html.txt
    static/style.css
    templates/config.html
    templates/home.html


diff --git a/fileset.py b/fileset.py
index aaffe4a..4ce8b81 100644
--- a/fileset.py
+++ b/fileset.py
@@ -142,7 +142,7 @@ def fileset():
                 <div class="nav-buttons">
                     <a href="{{{{ url_for('user_games_list') }}}}">User Games List</a>
                     <a href="{{{{ url_for('ready_for_review') }}}}">Ready for review</a>
-                    <a href="{{{{ url_for('fileset_search') }}}}">Fileset Search</a>
+                    <a href="{{{{ url_for('fileset_search', sort='fileset-asc') }}}}">Fileset Search</a>
                     <a href="{{{{ url_for('logs', sort='id-desc') }}}}">Logs</a>
                     <a href="{{{{ url_for('config') }}}}">Config</a>
                 </div>
@@ -508,7 +508,7 @@ def merge_fileset(id):
                     <div class="nav-buttons">
                         <a href="{{{{ url_for('user_games_list') }}}}">User Games List</a>
                         <a href="{{{{ url_for('ready_for_review') }}}}">Ready for review</a>
-                        <a href="{{{{ url_for('fileset_search') }}}}">Fileset Search</a>
+                        <a href="{{{{ url_for('fileset_search', sort='fileset-asc') }}}}">Fileset Search</a>
                         <a href="{{{{ url_for('logs', sort='id-desc') }}}}">Logs</a>
                         <a href="{{{{ url_for('config') }}}}">Config</a>
                     </div>
@@ -558,7 +558,7 @@ def merge_fileset(id):
         <div class="nav-buttons">
             <a href="{{ url_for('user_games_list') }}">User Games List</a>
             <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
-            <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+            <a href="{{ url_for('fileset_search', sort='fileset-asc') }}">Fileset Search</a>
             <a href="{{ url_for('logs', sort='id-desc') }}">Logs</a>
             <a href="{{ url_for('config') }}">Config</a>
         </div>
@@ -628,7 +628,7 @@ def possible_merge_filesets(id):
                 <div class="nav-buttons">
                     <a href="{{{{ url_for('user_games_list') }}}}">User Games List</a>
                     <a href="{{{{ url_for('ready_for_review') }}}}">Ready for review</a>
-                    <a href="{{{{ url_for('fileset_search') }}}}">Fileset Search</a>
+                    <a href="{{{{ url_for('fileset_search', sort='fileset-asc') }}}}">Fileset Search</a>
                     <a href="{{{{ url_for('logs', sort='id-desc') }}}}">Logs</a>
                     <a href="{{{{ url_for('config') }}}}">Config</a>
                 </div>
@@ -833,7 +833,7 @@ def confirm_merge(id):
                 <div class="nav-buttons">
                     <a href="{{ url_for('user_games_list') }}">User Games List</a>
                     <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
-                    <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+                    <a href="{{ url_for('fileset_search', sort='fileset-asc') }}">Fileset Search</a>
                     <a href="{{ url_for('logs', sort='id-desc') }}">Logs</a>
                     <a href="{{ url_for('config') }}">Config</a>
                 </div>
@@ -1628,11 +1628,13 @@ def fileset_search():
     filename = "fileset_search"
     records_table = "fileset"
     select_query = """
-    SELECT fileset.id as fileset, engineid, game.gameid, extra, platform, language, status, transaction
+    SELECT DISTINCT fileset.id as fileset, engineid, game.gameid, extra, platform, language, status, transaction
     FROM fileset
     LEFT JOIN game ON game.id = fileset.game
     LEFT JOIN engine ON engine.id = game.engine
     JOIN transactions ON fileset.id = transactions.fileset
+    JOIN file ON fileset.id = file.fileset
+    JOIN filechecksum ON file.id = filechecksum.file
     """
     order = "ORDER BY fileset.id"
     filters = {
@@ -1644,11 +1646,14 @@ def fileset_search():
         "language": "game",
         "status": "fileset",
         "transaction": "transactions",
+        "checksum": "filechecksum",
     }
     mapping = {
         "game.id": "fileset.game",
         "engine.id": "game.engine",
         "fileset.id": "transactions.fileset",
+        "file.fileset": "fileset.id",
+        "file.id": "filechecksum.file",
     }
     filesets_per_page = get_filesets_per_page()
     return render_template_string(
diff --git a/pagination.py b/pagination.py
index 236455a..d3a84f4 100644
--- a/pagination.py
+++ b/pagination.py
@@ -98,15 +98,18 @@ def create_page(
                         from_query += " JOIN engine ON engine.id = game.engine"
                     else:
                         from_query += " JOIN game ON game.id = fileset.game JOIN engine ON engine.id = game.engine"
+                if t == "filechecksum":
+                    from_query += " JOIN file ON file.fileset = fileset.id JOIN filechecksum ON file.id = filechecksum.file"
                 else:
                     from_query += (
                         f" JOIN {t} ON {get_join_columns(records_table, t, mapping)}"
                     )
 
         base_table = records_table.split(" ")[0]
-        cursor.execute(
-            f"SELECT COUNT({base_table}.id) AS count FROM {from_query} {condition}"
-        )
+        query = f"""
+            SELECT COUNT(DISTINCT {base_table}.id) AS count FROM {from_query} {condition}
+        """
+        cursor.execute(query)
         num_of_results = cursor.fetchone()["count"]
 
         num_of_pages = (num_of_results + results_per_page - 1) // results_per_page
@@ -128,6 +131,8 @@ def create_page(
         else:
             if records_table == "log":
                 order = "ORDER BY `id` DESC"
+            if records_table == "fileset":
+                order = "ORDER BY fileset ASC"
 
         # Fetch results
         query = f"{select_query} {condition} {order} LIMIT {results_per_page} OFFSET {offset}"
@@ -181,14 +186,17 @@ def create_page(
         html += "</colgroup>"
 
     if filters:
-        if records_table != "log":
-            html += """<tr class='filter'><td class='filter'><input type='submit' value='Submit'></td>"""
-        else:
-            html += """<tr class='filter'><td class='filter'><input type='submit' value='Submit'></td>"""
-
+        html += """<tr class='filter'><td class='filter'><input type='submit' value='Submit'></td>"""
         for key in filters.keys():
+            if key == "checksum":
+                continue
             filter_value = request.args.get(key, "")
-            html += f"<td class='filter'><input type='text' class='filter' placeholder='{key}' name='{key}' value='{filter_value}'/></td>"
+            if key == "transaction":
+                html += f"<td style='display: flex;' class='filter'><input type='text' class='filter' placeholder='{key}' name='{key}' value='{filter_value}'/>"
+                filter_value = request.args.get("checksum", "")
+                html += f"<input type='text' class='filter' placeholder='checksum' name='checksum' value='{filter_value}'/></td>"
+            else:
+                html += f"<td class='filter'><input type='text' class='filter' placeholder='{key}' name='{key}' value='{filter_value}'/></td>"
         html += "</tr>"
 
     html += "<th>S. No.</th>"
@@ -203,12 +211,13 @@ def create_page(
             arrow = "▼" if sort_dir == "desc" else "▲"
             sort_param = f"{key}-{next_sort_dir}"
         else:
-            arrow = ""
+            arrow = "⬍"
             sort_param = f"{key}-asc"
 
         base_params["sort"] = sort_param
         query_string = "&".join(f"{k}={v}" for k, v in base_params.items())
-        html += f"<th><a href='{filename}?{query_string}'>{key} {arrow}</a></th>"
+        if key != "checksum":
+            html += f"<th><a href='{filename}?{query_string}'>{key} {arrow}</a></th>"
 
     if results:
         counter = offset + 1
diff --git a/static/navbar.html.txt b/static/navbar.html.txt
index 227428d..1a66f9a 100644
--- a/static/navbar.html.txt
+++ b/static/navbar.html.txt
@@ -15,7 +15,7 @@
         <div class="nav-buttons">
             <a href="{{ url_for('user_games_list') }}">User Games List</a>
             <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
-            <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+            <a href="{{ url_for('fileset_search', sort='fileset-asc') }}">Fileset Search</a>
             <a href="{{ url_for('logs', sort='id-desc') }}">Logs</a>
             <a href="{{ url_for('config') }}">Config</a>
         </div>
diff --git a/static/style.css b/static/style.css
index b93da50..7ec3f73 100644
--- a/static/style.css
+++ b/static/style.css
@@ -38,6 +38,8 @@ th {
   text-align: center;
   background-color: var(--primary-color);
   color: white;
+  height: 30px;
+  vertical-align: middle;
 }
 
 th a {
diff --git a/style.css b/style.css
deleted file mode 100644
index 1c9e599..0000000
--- a/style.css
+++ /dev/null
@@ -1,113 +0,0 @@
-:root {
-  --primary-color: #27b5e8;
-  font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
-}
-
-td, th {
-  padding-inline: 5px;
-}
-
-tr:nth-child(even) {background-color: #f2f2f2;}
-tr {background-color: white;}
-
-tr:hover {background-color: #ddd;}
-tr.games_list:hover {cursor: pointer;}
-
-tr.filter:hover {background-color:inherit;}
-td.filter {text-align: center;}
-
-th {
-  padding-top: 5px;
-  padding-bottom: 5px;
-  text-align: center;
-  background-color: var(--primary-color);
-  color: white;
-}
-
-th a {
-  color: white;
-  text-decoration: none; /* no underline */
-}
-
-button {
-  color: white;
-  padding: 6px 12px;
-  border-radius: 10px;
-  transition: background-color 0.1s;
-  background-color: var(--primary-color);
-  border: 1px solid var(--primary-color);
-}
-
-button:hover {
-  background-color: #29afe0;
-}
-button:active {
-  background-color: #1a95c2;
-}
-
-input[type=submit] {
-  color: white;
-  padding: 6px 12px;
-  border-radius: 10px;
-  transition: background-color 0.1s;
-  background-color: var(--primary-color);
-  border: 1px solid var(--primary-color);
-}
-
-input[type=submit]:hover {
-  background-color: #29afe0;
-}
-input[type=submit]:active {
-  background-color: #1a95c2;
-}
-
-input[type=text], select {
-  width: 25%;
-  height: 38px;
-  padding: 6px 12px;
-  margin: 0px 8px;
-  display: inline-block;
-  border: 1px solid #ccc;
-  border-radius: 4px;
-  box-sizing: border-box;
-}
-
-input[type=text].filter {
-  width: 80%;
-}
-
-.pagination {
-  display: inline-block;
-  align-self: center;
-}
-
-.pagination .more {
-  color: black;
-  float: left;
-  padding: 15px 10px;
-}
-
-.pagination a {
-  color: black;
-  float: left;
-  padding: 8px 16px;
-  text-decoration: none;
-  transition: background-color 0.3s;
-  border: 1px solid #ddd;
-}
-
-.pagination a.active {
-  color: white;
-  background-color: var(--primary-color);
-  border: 1px solid var(--primary-color);
-}
-
-.pagination a:hover:not(.active) {
-  background-color: #ddd;
-}
-
-form {
-  padding: 0px;
-  margin: 0px;
-  display: inline;
-}
diff --git a/templates/config.html b/templates/config.html
index 8dc90b2..75f50d2 100644
--- a/templates/config.html
+++ b/templates/config.html
@@ -130,7 +130,7 @@
         <div class="nav-buttons">
             <a href="{{ url_for('user_games_list') }}">User Games List</a>
             <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
-            <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+            <a href="{{ url_for('fileset_search', sort='fileset-asc') }}">Fileset Search</a>
             <a href="{{ url_for('logs', sort='id-desc')}}">Logs</a>
             <a href="{{ url_for('config') }}">Config</a>
         </div>
diff --git a/templates/home.html b/templates/home.html
index 1953641..dc66252 100644
--- a/templates/home.html
+++ b/templates/home.html
@@ -90,7 +90,7 @@
         <div class="nav-buttons">
             <a href="{{ url_for('user_games_list') }}">User Games List</a>
             <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
-            <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+            <a href="{{ url_for('fileset_search', sort='fileset-asc') }}">Fileset Search</a>
             <a href="{{ url_for('logs', sort='id-desc')}}">Logs</a>
             <a href="{{ url_for('config') }}">Config</a>
         </div>

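Because every file row joins to one filechecksum row per stored checksum, the joins added above multiply the result rows; that is why both the page query and the count query switch to DISTINCT. A reduced sketch of the corrected count (table and column names follow the diff; the connection setup is assumed):

    import pymysql

    def count_matching_filesets(conn, checksum):
        # COUNT(DISTINCT ...) keeps each fileset counted once despite the
        # one-to-many file -> filechecksum join.
        query = """
            SELECT COUNT(DISTINCT fileset.id) AS count
            FROM fileset
            JOIN file ON file.fileset = fileset.id
            JOIN filechecksum ON filechecksum.file = file.id
            WHERE filechecksum.checksum = %s
        """
        with conn.cursor(pymysql.cursors.DictCursor) as cursor:
            cursor.execute(query, (checksum,))
            return cursor.fetchone()["count"]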

Commit: 146795da94edfc9bbe5737d7a9bd954b138d2aca
    https://github.com/scummvm/scummvm-sites/commit/146795da94edfc9bbe5737d7a9bd954b138d2aca
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Encode url variables before changing page.

Changed paths:
    pagination.py


diff --git a/pagination.py b/pagination.py
index d3a84f4..b71e77f 100644
--- a/pagination.py
+++ b/pagination.py
@@ -4,6 +4,8 @@ import json
 import re
 import os
 
+from urllib.parse import urlencode
+
 
 app = Flask(__name__)
 
@@ -77,7 +79,7 @@ def create_page(
             col = f"{filters[key]}.{'id' if key == 'fileset' else key}"
             parsed = build_search_condition(value, col)
             if parsed:
-                where_clauses.append(parsed)
+                where_clauses.append("(" + parsed + ")")
 
         condition = ""
         if where_clauses:
@@ -265,9 +267,10 @@ def create_page(
     if not results:
         html += "<h1>No results for given filters</h1>"
 
-    # Pagination
-    vars = "&".join([f"{k}={v}" for k, v in request.args.items() if k != "page"])
+    # Re-encode URL variables into percent-encoded, URL-safe form
+    vars = urlencode({k: v for k, v in request.args.items() if k != "page"})
 
+    # Pagination
     if num_of_pages > 1:
         html += "<form method='GET'>"
         for key, value in request.args.items():

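The switch to urlencode matters for values containing spaces, '&', or '=', which the manual join left raw. For example (illustrative values):

    from urllib.parse import urlencode

    params = {"platform": "mac", "extra": "CD & Demo"}
    print("&".join(f"{k}={v}" for k, v in params.items()))
    # platform=mac&extra=CD & Demo   <- the embedded '&' breaks parameter parsing
    print(urlencode(params))
    # platform=mac&extra=CD+%26+Demo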

Commit: e520c15eea9c3aed8d47ff5324e732ef2adaf240
    https://github.com/scummvm/scummvm-sites/commit/e520c15eea9c3aed8d47ff5324e732ef2adaf240
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Fix the fileset details being displayed in merge dashboard.

Changed paths:
    fileset.py


diff --git a/fileset.py b/fileset.py
index 4ce8b81..c36e352 100644
--- a/fileset.py
+++ b/fileset.py
@@ -730,17 +730,20 @@ def confirm_merge(id):
             cursor.execute(
                 """
                 SELECT 
-                    fs.*, 
+                    fs.id, fs.status, fs.src, fs.`key`, fs.megakey,
+                    fs.timestamp, fs.detection_size, fs.set_dat_metadata,
                     g.name AS game_name, 
-                    g.engine AS game_engine, 
+                    e.name AS game_engine,
                     g.platform AS game_platform,
                     g.language AS game_language,
                     (SELECT COUNT(*) FROM file WHERE fileset = fs.id) AS file_count
-                FROM 
+                FROM
                     fileset fs
-                LEFT JOIN 
+                LEFT JOIN
                     game g ON fs.game = g.id
-                WHERE 
+                LEFT JOIN
+                    engine e ON g.engine = e.id
+                WHERE
                     fs.id = %s
             """,
                 (id,),
@@ -761,9 +764,10 @@ def confirm_merge(id):
             cursor.execute(
                 """
                 SELECT 
-                    fs.*, 
-                    g.name AS game_name, 
-                    g.engine AS game_engine, 
+                    fs.id, fs.status, fs.src, fs.`key`, fs.megakey,
+                    fs.timestamp, fs.detection_size, fs.set_dat_metadata,
+                    g.name AS game_name,
+                    e.name AS game_engine,
                     g.platform AS game_platform,
                     g.language AS game_language,
                     (SELECT COUNT(*) FROM file WHERE fileset = fs.id) AS file_count
@@ -771,6 +775,8 @@ def confirm_merge(id):
                     fileset fs
                 LEFT JOIN 
                     game g ON fs.game = g.id
+                LEFT JOIN
+                    engine e ON g.engine = e.id
                 WHERE 
                     fs.id = %s
             """,
@@ -844,6 +850,7 @@ def confirm_merge(id):
             <tr><th style="width: 50px;">Field</th><th style="width: 1000px;">Source Fileset</th><th style="width: 1000px;">Target Fileset</th></tr>
             """
 
+            # Fileset metadata
             for column in source_fileset.keys():
                 source_value = str(source_fileset[column])
                 target_value = str(target_fileset[column])
@@ -959,10 +966,8 @@ def confirm_merge(id):
                             }
                         )
                     )
-                    if (
-                        os.path.basename(matched_source_filename).lower()
-                        in detection_files_set
-                    ):
+
+                    if matched_source_filename.lower() in detection_files_set:
                         target_val = html_lib.escape(
                             json.dumps(
                                 {

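Two fixes are at work above: selecting explicit columns instead of fs.* keeps the merge table stable if the fileset schema later grows, and the added engine join makes game_engine the engine's name rather than its numeric foreign key. A reduced sketch of the corrected lookup:

    # Reduced sketch: explicit fileset columns plus the engine join,
    # so game_engine resolves to engine.name rather than game.engine (an id).
    CONFIRM_MERGE_FILESET_QUERY = """
        SELECT
            fs.id, fs.status, fs.src, fs.`key`, fs.megakey,
            fs.timestamp, fs.detection_size, fs.set_dat_metadata,
            g.name AS game_name,
            e.name AS game_engine
        FROM fileset fs
        LEFT JOIN game g ON fs.game = g.id
        LEFT JOIN engine e ON g.engine = e.id
        WHERE fs.id = %s
    """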

Commit: 469b9b8df99f3850a3e6bbad0628cd3c3b7a18fa
    https://github.com/scummvm/scummvm-sites/commit/469b9b8df99f3850a3e6bbad0628cd3c3b7a18fa
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add default state for sorting along with ascending and descending.

Changed paths:
  A static/icons/filter/arrow_drop_down.png
  A static/icons/filter/arrow_drop_up.png
  A static/icons/filter/unfold_more.png
  A static/navbar_string.html
  R static/navbar.html.txt
    fileset.py
    pagination.py
    static/style.css
    templates/config.html
    templates/home.html


diff --git a/fileset.py b/fileset.py
index c36e352..e59aa38 100644
--- a/fileset.py
+++ b/fileset.py
@@ -37,7 +37,7 @@ secret_key = os.urandom(24)
 
 @app.route("/")
 def index():
-    return redirect(url_for("logs", sort="id-desc"))
+    return redirect(url_for("logs"))
 
 
 @app.route("/home")
@@ -142,8 +142,8 @@ def fileset():
                 <div class="nav-buttons">
                     <a href="{{{{ url_for('user_games_list') }}}}">User Games List</a>
                     <a href="{{{{ url_for('ready_for_review') }}}}">Ready for review</a>
-                    <a href="{{{{ url_for('fileset_search', sort='fileset-asc') }}}}">Fileset Search</a>
-                    <a href="{{{{ url_for('logs', sort='id-desc') }}}}">Logs</a>
+                    <a href="{{{{ url_for('fileset_search') }}}}">Fileset Search</a>
+                    <a href="{{{{ url_for('logs') }}}}">Logs</a>
                     <a href="{{{{ url_for('config') }}}}">Config</a>
                 </div>
             </nav>
@@ -508,8 +508,8 @@ def merge_fileset(id):
                     <div class="nav-buttons">
                         <a href="{{{{ url_for('user_games_list') }}}}">User Games List</a>
                         <a href="{{{{ url_for('ready_for_review') }}}}">Ready for review</a>
-                        <a href="{{{{ url_for('fileset_search', sort='fileset-asc') }}}}">Fileset Search</a>
-                        <a href="{{{{ url_for('logs', sort='id-desc') }}}}">Logs</a>
+                        <a href="{{{{ url_for('fileset_search') }}}}">Fileset Search</a>
+                        <a href="{{{{ url_for('logs') }}}}">Logs</a>
                         <a href="{{{{ url_for('config') }}}}">Config</a>
                     </div>
                 </nav>
@@ -558,8 +558,8 @@ def merge_fileset(id):
         <div class="nav-buttons">
             <a href="{{ url_for('user_games_list') }}">User Games List</a>
             <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
-            <a href="{{ url_for('fileset_search', sort='fileset-asc') }}">Fileset Search</a>
-            <a href="{{ url_for('logs', sort='id-desc') }}">Logs</a>
+            <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+            <a href="{{ url_for('logs') }}">Logs</a>
             <a href="{{ url_for('config') }}">Config</a>
         </div>
     </nav>
@@ -628,8 +628,8 @@ def possible_merge_filesets(id):
                 <div class="nav-buttons">
                     <a href="{{{{ url_for('user_games_list') }}}}">User Games List</a>
                     <a href="{{{{ url_for('ready_for_review') }}}}">Ready for review</a>
-                    <a href="{{{{ url_for('fileset_search', sort='fileset-asc') }}}}">Fileset Search</a>
-                    <a href="{{{{ url_for('logs', sort='id-desc') }}}}">Logs</a>
+                    <a href="{{{{ url_for('fileset_search') }}}}">Fileset Search</a>
+                    <a href="{{{{ url_for('logs') }}}}">Logs</a>
                     <a href="{{{{ url_for('config') }}}}">Config</a>
                 </div>
             </nav>
@@ -839,8 +839,8 @@ def confirm_merge(id):
                 <div class="nav-buttons">
                     <a href="{{ url_for('user_games_list') }}">User Games List</a>
                     <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
-                    <a href="{{ url_for('fileset_search', sort='fileset-asc') }}">Fileset Search</a>
-                    <a href="{{ url_for('logs', sort='id-desc') }}">Logs</a>
+                    <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+                    <a href="{{ url_for('logs') }}">Logs</a>
                     <a href="{{ url_for('config') }}">Config</a>
                 </div>
             </nav>
diff --git a/pagination.py b/pagination.py
index b71e77f..f611cb6 100644
--- a/pagination.py
+++ b/pagination.py
@@ -1,4 +1,4 @@
-from flask import Flask, request
+from flask import Flask, request, url_for
 import pymysql
 import json
 import re
@@ -143,7 +143,7 @@ def create_page(
 
     # Initial html code including the navbar is stored in a separate html file.
     html = ""
-    navbar_path = os.path.join(app.root_path, "static", "navbar.html.txt")
+    navbar_path = os.path.join(app.root_path, "static", "navbar_string.html")
     with open(navbar_path, "r") as f:
         html = f.read()
 
@@ -205,21 +205,41 @@ def create_page(
     current_sort = request.args.get("sort", "")
     sort_key, sort_dir = (current_sort.split("-") + ["asc"])[:2]
 
+    # Adding heading links with sorting
     for key in filters.keys():
         base_params = {k: v for k, v in request.args.items() if k != "sort"}
+        icon_path = "icons/filter/"
+        icon_name = ""
 
         if key == sort_key:
-            next_sort_dir = "asc" if sort_dir == "desc" else "desc"
-            arrow = "▼" if sort_dir == "desc" else "▲"
-            sort_param = f"{key}-{next_sort_dir}"
+            if sort_dir == "asc":
+                next_sort_dir = "desc"
+                icon_name = "arrow_drop_up.png"
+            elif sort_dir == "desc":
+                next_sort_dir = "default"
+                icon_name = "arrow_drop_down.png"
+            else:
+                next_sort_dir = "asc"
+                icon_name = "unfold_more.png"
+
+            if next_sort_dir != "default":
+                sort_param = f"{key}-{next_sort_dir}"
+                base_params["sort"] = sort_param
         else:
-            arrow = "⬍"
+            icon_name = "unfold_more.png"
             sort_param = f"{key}-asc"
+            base_params["sort"] = sort_param
 
-        base_params["sort"] = sort_param
         query_string = "&".join(f"{k}={v}" for k, v in base_params.items())
         if key != "checksum":
-            html += f"<th><a href='{filename}?{query_string}'>{key} {arrow}</a></th>"
+            icon_src = url_for("static", filename=icon_path + icon_name)
+            html += f"""<th>
+                <a href='{filename}?{query_string}' class="header-link">
+                    <span></span>
+                    <span class="key-text">{key}</span>
+                    <img class="filter-icon" src="{icon_src}" alt="asc" width="25">
+                </a>
+            </th>"""
 
     if results:
         counter = offset + 1
diff --git a/static/icons/filter/arrow_drop_down.png b/static/icons/filter/arrow_drop_down.png
new file mode 100644
index 0000000..b0ec865
Binary files /dev/null and b/static/icons/filter/arrow_drop_down.png differ
diff --git a/static/icons/filter/arrow_drop_up.png b/static/icons/filter/arrow_drop_up.png
new file mode 100644
index 0000000..0c8f191
Binary files /dev/null and b/static/icons/filter/arrow_drop_up.png differ
diff --git a/static/icons/filter/unfold_more.png b/static/icons/filter/unfold_more.png
new file mode 100644
index 0000000..7d5d598
Binary files /dev/null and b/static/icons/filter/unfold_more.png differ
diff --git a/static/navbar.html.txt b/static/navbar_string.html
similarity index 83%
rename from static/navbar.html.txt
rename to static/navbar_string.html
index 1a66f9a..2a80d9e 100644
--- a/static/navbar.html.txt
+++ b/static/navbar_string.html
@@ -15,8 +15,8 @@
         <div class="nav-buttons">
             <a href="{{ url_for('user_games_list') }}">User Games List</a>
             <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
-            <a href="{{ url_for('fileset_search', sort='fileset-asc') }}">Fileset Search</a>
-            <a href="{{ url_for('logs', sort='id-desc') }}">Logs</a>
+            <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+            <a href="{{ url_for('logs') }}">Logs</a>
             <a href="{{ url_for('config') }}">Config</a>
         </div>
     </nav>
diff --git a/static/style.css b/static/style.css
index 7ec3f73..720e9f7 100644
--- a/static/style.css
+++ b/static/style.css
@@ -78,6 +78,20 @@ nav {
   vertical-align: middle;
 }
 
+.header-link {
+  display: flex;
+  align-items: center;
+  justify-content: space-between;
+  text-decoration: none;
+  color: inherit;
+  padding: 4px 8px;
+}
+
+.filter-icon {
+  height: auto;
+  margin-left: 8px;
+}
+
 button {
   color: white;
   padding: 6px 12px;
diff --git a/templates/config.html b/templates/config.html
index 75f50d2..64b616b 100644
--- a/templates/config.html
+++ b/templates/config.html
@@ -130,8 +130,8 @@
         <div class="nav-buttons">
             <a href="{{ url_for('user_games_list') }}">User Games List</a>
             <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
-            <a href="{{ url_for('fileset_search', sort='fileset-asc') }}">Fileset Search</a>
-            <a href="{{ url_for('logs', sort='id-desc')}}">Logs</a>
+            <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+            <a href="{{ url_for('logs')}}">Logs</a>
             <a href="{{ url_for('config') }}">Config</a>
         </div>
     </nav>
diff --git a/templates/home.html b/templates/home.html
index dc66252..15691d3 100644
--- a/templates/home.html
+++ b/templates/home.html
@@ -90,8 +90,8 @@
         <div class="nav-buttons">
             <a href="{{ url_for('user_games_list') }}">User Games List</a>
             <a href="{{ url_for('ready_for_review') }}">Ready for review</a>
-            <a href="{{ url_for('fileset_search', sort='fileset-asc') }}">Fileset Search</a>
-            <a href="{{ url_for('logs', sort='id-desc')}}">Logs</a>
+            <a href="{{ url_for('fileset_search') }}">Fileset Search</a>
+            <a href="{{ url_for('logs')}}">Logs</a>
             <a href="{{ url_for('config') }}">Config</a>
         </div>
         <div class="dev">

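Each column header now cycles through three sort states instead of two. A compact sketch of the transition implemented above (icon filenames as in the diff):

    # Three-state sort cycle: asc -> desc -> default -> asc.
    # Returns the direction the header link should request next and the icon
    # representing the current state.
    def next_sort_state(sort_dir):
        if sort_dir == "asc":
            return "desc", "arrow_drop_up.png"
        if sort_dir == "desc":
            return None, "arrow_drop_down.png"  # None = default, unsorted
        return "asc", "unfold_more.png"

    assert next_sort_state("asc") == ("desc", "arrow_drop_up.png")
    assert next_sort_state("desc") == (None, "arrow_drop_down.png")
    assert next_sort_state("") == ("asc", "unfold_more.png")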

Commit: c53d37b71053d6222ed36710435acdeea47d448b
    https://github.com/scummvm/scummvm-sites/commit/c53d37b71053d6222ed36710435acdeea47d448b
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Refactor confirm merge code.

Changed paths:
  A static/js/update_merge_table_rows.js
    db_functions.py
    fileset.py
    pagination.py
    static/js/confirm_merge_form_handler.js


diff --git a/db_functions.py b/db_functions.py
index ed19005..daec255 100644
--- a/db_functions.py
+++ b/db_functions.py
@@ -114,8 +114,6 @@ def insert_fileset(
 ):
     status = "detection" if detection else src
     game = "NULL"
-    key = "NULL" if key == "" else key
-    megakey = "NULL" if megakey == "" else megakey
 
     if detection:
         status = "detection"
@@ -212,7 +210,7 @@ def normalised_path(name):
     return "/".join(path_list)
 
 
-def insert_file(file, detection, src, conn):
+def insert_file(file, detection, src, conn, fileset_id=None):
     # Find full md5, or else use first checksum value
     checksum = ""
     checksize = 5000
@@ -249,18 +247,27 @@ def insert_file(file, detection, src, conn):
     values.extend([checksum, detection, detection_type])
 
     # Parameterised Query
-    query = "INSERT INTO file ( name, size, `size-r`, `size-rd`, `modification-time`, checksum, fileset, detection, detection_type, `timestamp` ) VALUES (%s, %s, %s, %s, %s, %s, @fileset_last, %s, %s, NOW())"
-
     with conn.cursor() as cursor:
-        cursor.execute(query, values)
+        query = ""
+        if fileset_id is None:
+            query = "INSERT INTO file ( name, size, `size-r`, `size-rd`, `modification-time`, checksum, detection, detection_type, `timestamp`, fileset ) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, NOW(), @fileset_last)"
+            cursor.execute(query, values)
+        else:
+            query = "INSERT INTO file ( name, size, `size-r`, `size-rd`, `modification-time`, checksum, detection, detection_type, `timestamp`, fileset ) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, NOW(), %s)"
+            values.append(fileset_id)
+            cursor.execute(query, values)
 
-    if detection:
-        with conn.cursor() as cursor:
-            cursor.execute(
-                "UPDATE fileset SET detection_size = %s WHERE id = @fileset_last AND detection_size IS NULL",
-                (checksize,),
-            )
-    with conn.cursor() as cursor:
+        if detection:
+            if fileset_id is None:
+                cursor.execute(
+                    "UPDATE fileset SET detection_size = %s WHERE id = @fileset_last AND detection_size IS NULL",
+                    (checksize,),
+                )
+            else:
+                cursor.execute(
+                    "UPDATE fileset SET detection_size = %s WHERE id = %s AND detection_size IS NULL",
+                    (checksize, fileset_id),
+                )
         cursor.execute("SET @file_last = LAST_INSERT_ID()")
 
 
@@ -270,6 +277,8 @@ def insert_filechecksum(file, checktype, file_id, conn):
 
     checksum = file[checktype]
     checksize, checktype, checksum = get_checksum_props(checktype, checksum)
+    if checksize == "1048576":
+        checksize = "1M"
 
     query = "INSERT INTO filechecksum (file, checksize, checktype, checksum) VALUES (%s, %s, %s, %s)"
     with conn.cursor() as cursor:
@@ -2480,7 +2489,7 @@ def create_user_fileset(fileset, game_metadata, src, transaction_id, user, conn,
             return
 
         (fileset_id, _) = insert_fileset(
-            src, False, key, None, transaction_id, None, conn, ip=ip
+            src, False, key, "", transaction_id, None, conn, ip=ip
         )
 
         insert_game(engine_name, engineid, title, gameid, extra, platform, lang, conn)
diff --git a/fileset.py b/fileset.py
index e59aa38..c0cacd5 100644
--- a/fileset.py
+++ b/fileset.py
@@ -23,9 +23,10 @@ from db_functions import (
     db_connect,
     create_log,
     db_connect_root,
-    get_checksum_props,
     delete_original_fileset,
     normalised_path,
+    insert_file,
+    insert_filechecksum,
 )
 from collections import defaultdict
 from schema import init_database
@@ -852,8 +853,16 @@ def confirm_merge(id):
 
             # Fileset metadata
             for column in source_fileset.keys():
-                source_value = str(source_fileset[column])
-                target_value = str(target_fileset[column])
+                source_value = (
+                    ""
+                    if str(source_fileset[column]) == "None"
+                    else str(source_fileset[column])
+                )
+                target_value = (
+                    ""
+                    if str(target_fileset[column]) == "None"
+                    else str(target_fileset[column])
+                )
                 if column == "id":
                     html += f"<tr><td>{column}</td><td><a href='/fileset?id={source_value}'>{source_value}</a></td><td><a href='/fileset?id={target_value}'>{target_value}</a></td></tr>"
                     continue
@@ -912,138 +921,55 @@ def confirm_merge(id):
                     if file["detection"] == 1:
                         detection_files_set.add(file["name"].lower())
 
-            html += """<tr><th>Files</th><td colspan='2'><label><input type="checkbox" id="toggle-unmatched"> Show Unmatched Files</label></td></tr>"""
+            html += """<tr><th>Files</th><td colspan='2'><label><input type="checkbox" id="toggle-common-files"> Show Only Common Files</label><label style='margin-left: 50px;' ><input type="checkbox" id="toggle-all-fields"> Show All Fields</label></td></tr>"""
 
             all_source_unmatched_filenames = sorted(set(source_files_map.keys()))
             all_target_unmatched_filenames = sorted(set(target_files_map.keys()))
 
-            for matched_target_filename, matched_source_filename in matched_files:
-                if matched_source_filename.lower() in all_source_unmatched_filenames:
-                    all_source_unmatched_filenames.remove(
-                        matched_source_filename.lower()
-                    )
-                if matched_target_filename.lower() in all_target_unmatched_filenames:
-                    all_target_unmatched_filenames.remove(
-                        matched_target_filename.lower()
-                    )
-                source_dict = source_files_map.get(matched_source_filename.lower(), {})
-                target_dict = target_files_map.get(matched_target_filename.lower(), {})
-
-                # html += f"""<tr><th>{matched_source_filename}</th><th>Source File</th><th>Target File</th></tr>"""
-
-                keys = sorted(set(source_dict.keys()) | set(target_dict.keys()))
-
-                group_id = f"group_{matched_source_filename.lower().replace('.', '_').replace('/', '_')}_{matched_target_filename.lower().replace('.', '_').replace('/', '_')}"
-                html += f"""<tr>
-                    <td colspan='3'>
-                        <label>
-                            <input type="checkbox" onclick="toggleGroup('{group_id}')">
-                            Show all fields for <strong>{matched_source_filename}</strong>
-                        </label>
-                    </td>
-                </tr>"""
-
-                for key in keys:
-                    source_value = str(source_dict.get(key, ""))
-                    target_value = str(target_dict.get(key, ""))
-
-                    source_checked = "checked" if key in source_dict else ""
-                    source_checksum = source_files_map[
-                        matched_source_filename.lower()
-                    ].get(key, "")
-                    target_checksum = target_files_map[
-                        matched_target_filename.lower()
-                    ].get(key, "")
-
-                    source_val = html_lib.escape(
-                        json.dumps(
-                            {
-                                "side": "source",
-                                "filename": matched_source_filename,
-                                "prop": key,
-                                "value": source_checksum,
-                                "detection": "0",
-                            }
-                        )
-                    )
+            all_files = [
+                matched_files,
+                all_target_unmatched_filenames,
+                all_source_unmatched_filenames,
+            ]
 
-                    if matched_source_filename.lower() in detection_files_set:
-                        target_val = html_lib.escape(
-                            json.dumps(
-                                {
-                                    "side": "target",
-                                    "filename": matched_source_filename,
-                                    "prop": key,
-                                    "value": target_checksum,
-                                    "detection": "1",
-                                }
+            is_common_file = True
+            for file_category in all_files:
+                # For matched_files, files is a (target, source) filename tuple.
+                # For unmatched files, files is a single filename with no counterpart.
+                for files in file_category:
+                    if is_common_file:
+                        (target_filename, source_filename) = files
+
+                        # Also remove common files from source and target filenames set
+                        if source_filename.lower() in all_source_unmatched_filenames:
+                            all_source_unmatched_filenames.remove(
+                                source_filename.lower()
                             )
-                        )
-                    else:
-                        target_val = html_lib.escape(
-                            json.dumps(
-                                {
-                                    "side": "target",
-                                    "filename": matched_target_filename,
-                                    "prop": key,
-                                    "value": target_checksum,
-                                    "detection": "0",
-                                }
+                        if target_filename.lower() in all_target_unmatched_filenames:
+                            all_target_unmatched_filenames.remove(
+                                target_filename.lower()
                             )
-                        )
-                    if source_value != target_value:
-                        source_highlighted, target_highlighted = highlight_differences(
-                            source_value, target_value
-                        )
-                        if key == "md5-full":
-                            html += f"""<tr>
-                                <td>{key}</td>
-                                <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_highlighted}</td>
-                                <td><input type="checkbox" name="options[]" value="{target_val}">{target_highlighted}</td>
-                            </tr>"""
-                        else:
-                            html += f"""<tbody class="toggle-details" id="{group_id}" style="display: none;">
-                                <tr>
-                                    <td>{key}</td>
-                                    <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_highlighted}</td>
-                                    <td><input type="checkbox" name="options[]" value="{target_val}">{target_highlighted}</td>
-                                </tr>
-                            </tbody>"""
                     else:
-                        if key == "md5-full":
-                            html += f"""<tr>
-                                <td>{key}</td>
-                                <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_value}</td>
-                                <td><input type="checkbox" name="options[]" value="{target_val}">{target_value}</td>
-                            </tr>"""
-                        else:
-                            html += f"""<tbody class="toggle-details" id="{group_id}" style="display: none;">
-                                <tr>
-                                    <td>{key}</td>
-                                    <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_value}</td>
-                                    <td><input type="checkbox" name="options[]" value="{target_val}">{target_value}</td>
-                                </tr>
-                            </tbody>"""
-
-            all_unmatched_filenames = [
-                all_target_unmatched_filenames,
-                all_source_unmatched_filenames,
-            ]
+                        target_filename = files
+                        source_filename = files
 
-            for unmatched_filenames in all_unmatched_filenames:
-                for filename in unmatched_filenames:
-                    source_dict = source_files_map.get(filename.lower(), {})
-                    target_dict = target_files_map.get(filename.lower(), {})
+                    is_mac_file = False
+                    size = source_files_map[source_filename.lower()].get("size", "")
+                    size_rd = source_files_map[source_filename.lower()].get(
+                        "size-rd", ""
+                    )
+                    if size == "0" and size_rd != "0":
+                        is_mac_file = True
+
+                    source_dict = source_files_map.get(source_filename.lower(), {})
+                    target_dict = target_files_map.get(target_filename.lower(), {})
 
                     keys = sorted(set(source_dict.keys()) | set(target_dict.keys()))
-                    group_id = (
-                        f"group_{filename.lower().replace('.', '_').replace('/', '_')}"
-                    )
-                    html += f"""<tr class="unmatched" style='display: none;'>
+
+                    tr_class = "matched" if is_common_file else "unmatched"
+                    html += f"""<tr class="{tr_class}">
                         <td colspan='3'>
-                            <label>
-                                <input type="checkbox" onclick="toggleGroup('{group_id}')">
-                                Show all fields for <strong>{filename}</strong>
+                                <strong>{source_filename}</strong> {" - mac_file" if is_mac_file else ""}
                             </label>
                         </td>
                     </tr>"""
@@ -1053,82 +979,81 @@ def confirm_merge(id):
                         target_value = str(target_dict.get(key, ""))
 
                         source_checked = "checked" if key in source_dict else ""
-                        source_checksum = source_files_map[filename.lower()].get(
+                        source_checksum = source_files_map[source_filename.lower()].get(
                             key, ""
                         )
-                        target_checksum = target_files_map[filename.lower()].get(
+                        target_checksum = target_files_map[target_filename.lower()].get(
                             key, ""
                         )
 
-                        source_val = html_lib.escape(
-                            json.dumps(
-                                {
-                                    "side": "source",
-                                    "filename": filename,
-                                    "prop": key,
-                                    "value": source_checksum,
-                                    "detection": "0",
-                                }
-                            )
-                        )
-                        if filename.lower() in detection_files_set:
-                            target_val = html_lib.escape(
-                                json.dumps(
-                                    {
-                                        "side": "target",
-                                        "filename": filename,
-                                        "prop": key,
-                                        "value": target_checksum,
-                                        "detection": "1",
-                                    }
-                                )
-                            )
-                        else:
-                            target_val = html_lib.escape(
+                        vals = {}
+
+                        # Format the value for the checkbox input as an escaped HTML-safe JSON string
+                        for side, checksum in [
+                            ("source", source_checksum),
+                            ("target", target_checksum),
+                        ]:
+                            is_detection = "0"
+                            if (
+                                side == "target"
+                                and target_filename.lower() in detection_files_set
+                            ):
+                                is_detection = "1"
+
+                            vals[side] = html_lib.escape(
                                 json.dumps(
                                     {
-                                        "side": "target",
-                                        "filename": filename,
+                                        "side": side,
+                                        "filename": target_filename
+                                        if side == "target"
+                                        else source_filename,
                                         "prop": key,
-                                        "value": target_checksum,
-                                        "detection": "0",
+                                        "value": checksum,
+                                        "detection": is_detection,
                                     }
                                 )
                             )
+                        source_val = vals["source"]
+                        target_val = vals["target"]
 
+                        # Update the source and target values with highlighted differences if any
                         if source_value != target_value:
-                            source_highlighted, target_highlighted = (
-                                highlight_differences(source_value, target_value)
+                            source_value, target_value = highlight_differences(
+                                source_value, target_value
                             )
-                            if key == "md5-full":
-                                html += f"""<tr class="unmatched" style='display: none;'">
-                                    <td>{key}</td>
-                                    <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_highlighted}</td>
-                                    <td><input type="checkbox" name="options[]" value="{target_val}">{target_highlighted}</td>
-                                </tr>"""
-                            else:
-                                html += f"""<tbody class="toggle-details" id="{group_id}"  style='display: none;'>
-                                    <tr>
-                                        <td>{key}</td>
-                                        <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_highlighted}</td>
-                                        <td><input type="checkbox" name="options[]" value="{target_val}">{target_highlighted}</td>
-                                    </tr>
-                                </tbody>"""
-                        else:
-                            if key == "md5-full":
-                                html += f"""<tr class="unmatched" style='display: none;'>
-                                    <td>{key}</td>
-                                    <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_value}</td>
-                                    <td><input type="checkbox" name="options[]" value="{target_val}">{target_value}</td>
-                                </tr>"""
-                            else:
-                                html += f"""<tbody class="toggle-details unmatched" id="{group_id}"  style='display: none;'>
-                                    <tr>
-                                        <td>{key}</td>
-                                        <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_value}</td>
-                                        <td><input type="checkbox" name="options[]" value="{target_val}">{target_value}</td>
-                                    </tr>
-                                </tbody>"""
+
+                        is_md5_full = key == "md5-full"
+                        is_size = key == "size"
+                        is_size_rd = key == "size_rd"
+
+                        class_1 = "other_field "
+                        if is_md5_full:
+                            class_1 = "main_field "
+                        # The size-based main field is "size" for plain files and "size_rd" for mac files
+                        if is_size:
+                            class_1 = "main_field "
+                        if is_mac_file and is_size_rd:
+                            class_1 = "main_field "
+                        class_2 = tr_class
+                        tag_class = class_1 + class_2
+                        default_display = ""
+                        if class_1 == "other_field ":
+                            default_display = "none"
+
+                        html += f"""<tr class="{tag_class}" style="display: {default_display};">
+                            <td>{key}</td>
+                            <td><input type="checkbox" name="options[]" value="{source_val}" {source_checked}>{source_value}</td>
+                            <td><input type="checkbox" name="options[]" value="{target_val}">{target_value}</td>
+                        </tr>"""
+
+                # The remaining file categories do not contain common files
+                is_common_file = False
+
+            matched_dict = {
+                target.lower(): source.lower() for (target, source) in matched_files
+            }
+            escaped_json = html_lib.escape(json.dumps(matched_dict))
+            html += f'<input type="hidden" name="matched_files" value="{escaped_json}">'
 
             html += """
             </table>
@@ -1143,6 +1068,7 @@ def confirm_merge(id):
                 <input id="confirm_merge_cancel" type="submit" value="Cancel">
             </form>
             <script src="{{ url_for('static', filename='js/confirm_merge_form_handler.js') }}"></script>
+            <script src="{{ url_for('static', filename='js/update_merge_table_rows.js') }}"></script>
             <script>
             document.getElementById("confirm_merge_form").addEventListener("submit", function () {
                 document.getElementById("merging-status").style.display = "block";
@@ -1150,22 +1076,6 @@ def confirm_merge(id):
                 document.getElementById("confirm_merge_cancel").style.display = "none";
             });
             </script>
-            <script>
-            document.getElementById("toggle-unmatched").addEventListener("change", function() {
-                const rows = document.querySelectorAll("tr.unmatched");
-                rows.forEach(row => {
-                    row.style.display = this.checked ? "" : "none";
-                });
-            });
-            </script>
-            <script>
-            function toggleGroup(groupId) {
-                const rows = document.querySelectorAll(`#${groupId}`);
-                rows.forEach(row => {
-                    row.style.display = (row.style.display === "none") ? "" : "none";
-                });
-            }
-            </script>
             </body>
             </html>
             """
@@ -1186,6 +1096,7 @@ def execute_merge(id):
     source_id = data.get("source_id")
     target_id = data.get("target_id")
     options = data.get("options")
+    matched_dict = json.loads(data.get("matched_files"))
 
     base_dir = os.path.dirname(os.path.abspath(__file__))
     config_path = os.path.join(base_dir, "mysql_config.json")
@@ -1205,146 +1116,67 @@ def execute_merge(id):
         with connection.cursor() as cursor:
             cursor.execute("SELECT * FROM fileset WHERE id = %s", (source_id,))
             source_fileset = cursor.fetchone()
-            cursor.execute("SELECT * FROM fileset WHERE id = %s", (target_id,))
 
+            status = "full"
             if source_fileset["status"] == "dat":
-                cursor.execute(
-                    """
-                    UPDATE fileset SET
-                    status = %s,
-                    `key` = %s,
-                    `timestamp` = %s
-                    WHERE id = %s
-                """,
-                    (
-                        "partial",
-                        source_fileset["key"],
-                        source_fileset["timestamp"],
-                        target_id,
-                    ),
-                )
-
-                source_filenames = set()
-                change_fileset_id = set()
-                file_details_map = defaultdict(dict)
-
-                for file in options:
-                    filename = file["filename"].lower()
-                    if "detection" not in file_details_map[filename]:
-                        file_details_map[filename]["detection"] = file["detection"]
-                        file_details_map[filename]["detection_type"] = file["prop"]
-                    elif (
-                        "detection" in file_details_map[filename]
-                        and file_details_map[filename]["detection"] != "1"
-                    ):
-                        file_details_map[filename]["detection"] = file["detection"]
-                        file_details_map[filename]["detection_type"] = file["prop"]
-                    if file["prop"].startswith("md5"):
-                        if "checksums" not in file_details_map[filename]:
-                            file_details_map[filename]["checksums"] = []
-                        file_details_map[filename]["checksums"].append(
-                            {"check": file["prop"], "value": file["value"]}
-                        )
-                    if file["side"] == "source":
-                        source_filenames.add(filename)
-
-                # Delete older checksums
-                for file in options:
-                    filename = file["filename"].lower()
-                    if file["side"] == "source":
-                        cursor.execute(
-                            """SELECT f.id as file_id FROM file f
-                                       JOIN fileset fs ON fs.id = f.fileset 
-                                       WHERE f.name = %s
-                                       AND fs.id = %s""",
-                            (filename, source_id),
-                        )
-                        file_id = cursor.fetchone()["file_id"]
-                        query = """
-                            DELETE FROM filechecksum
-                            WHERE file = %s
-                        """
-                        cursor.execute(query, (file_id,))
-                    else:
-                        if filename not in source_filenames:
-                            cursor.execute(
-                                """SELECT f.id as file_id FROM file f
-                            JOIN fileset fs ON fs.id = f.fileset 
-                            WHERE f.name = %s
-                            AND fs.id = %s""",
-                                (filename, target_id),
-                            )
-                            target_file_id = cursor.fetchone()["file_id"]
-                            change_fileset_id.add(target_file_id)
-
-                for filename, details in file_details_map.items():
-                    cursor.execute(
-                        """SELECT f.id as file_id FROM file f
-                                    JOIN fileset fs ON fs.id = f.fileset 
-                                    WHERE f.name = %s
-                                    AND fs.id = %s""",
-                        (filename, source_id),
-                    )
-                    source_file_id = cursor.fetchone()["file_id"]
-                    detection = (
-                        details["detection"] == "1" if "detection" in details else False
-                    )
-                    if detection:
-                        query = """
-                            UPDATE file 
-                            SET detection = 1,
-                            detection_type = %s
-                            WHERE id = %s
-                        """
-                        cursor.execute(
-                            query,
-                            (
-                                details["detection_type"],
-                                source_file_id,
-                            ),
-                        )
-                        filename = os.path.basename(filename).lower()
-                        cursor.execute(
-                            """SELECT f.id as file_id FROM file f
-                                    JOIN fileset fs ON fs.id = f.fileset 
-                                    WHERE REGEXP_REPLACE(f.name, '^.*[\\\\/]', '') = %s
-                                    AND fs.id = %s""",
-                            (filename, target_id),
-                        )
-                        target_file_id = cursor.fetchone()["file_id"]
-                        cursor.execute(
-                            "DELETE FROM file WHERE id = %s", (target_file_id,)
-                        )
-
-                    check = ""
-                    checksize = ""
-                    checktype = ""
-                    checksum = ""
-
-                    if "checksums" in details:
-                        for c in details["checksums"]:
-                            checksum = c["value"]
-                            check = c["check"]
-                            checksize, checktype, checksum = get_checksum_props(
-                                check, checksum
-                            )
-                            query = "INSERT INTO filechecksum (file, checksize, checktype, checksum) VALUES (%s, %s, %s, %s)"
-                            cursor.execute(
-                                query, (source_file_id, checksize, checktype, checksum)
-                            )
+                status = "partial"
+            cursor.execute(
+                """
+                UPDATE fileset SET
+                status = %s,
+                `key` = %s,
+                `timestamp` = %s
+                WHERE id = %s
+            """,
+                (
+                    status,
+                    source_fileset["key"],
+                    source_fileset["timestamp"],
+                    target_id,
+                ),
+            )
 
-                    cursor.execute(
-                        "UPDATE file SET fileset = %s WHERE id = %s",
-                        (target_id, source_file_id),
-                    )
+            file_details_map = defaultdict(dict)
+
+            for file in options:
+                filename = file["filename"].lower()
+                if filename in matched_dict:
+                    filename = matched_dict[filename]
+                file_details_map[filename]["name"] = filename
+                # Once a file is confirmed as a detection file, keep that status; otherwise take it from the current option
+                if "detection" not in file_details_map[filename] or (
+                    "detection" in file_details_map[filename]
+                    and file_details_map[filename]["detection"] != "1"
+                ):
+                    file_details_map[filename]["detection"] = file["detection"]
+                    file_details_map[filename]["detection_type"] = file["prop"]
+                if file["prop"].startswith("md5"):
+                    file_details_map[filename][file["prop"]] = file["value"]
+                if file["prop"].startswith("size"):
+                    file_details_map[filename][file["prop"]] = file["value"]
+
+            query = "DELETE FROM file WHERE fileset = %s"
+            cursor.execute(query, (target_id,))
+            query = "DELETE FROM fileset WHERE id = %s"
+            cursor.execute(query, (source_id,))
 
-                # for target_file_id in change_fileset_id:
-                #     query = """
-                #         UPDATE file
-                #         SET fileset = %s
-                #         WHERE id = %s
-                #     """
-                #     cursor.execute(query, (source_id, target_file_id))
+            for filename, details in file_details_map.items():
+                detection = (
+                    details["detection"] == "1" if "detection" in details else False
+                )
+                insert_file(details, detection, "", connection, target_id)
+                cursor.execute("SELECT @file_last AS file_id")
+                file_id = cursor.fetchone()["file_id"]
+                for key in details:
+                    if key not in [
+                        "name",
+                        "size",
+                        "size-r",
+                        "size-rd",
+                        "detection",
+                        "detection_type",
+                    ]:
+                        insert_filechecksum(details, key, file_id, connection)
 
             cursor.execute(
                 """
@@ -1695,4 +1527,4 @@ def delete_files(id):
 
 if __name__ == "__main__":
     app.secret_key = secret_key
-    app.run(debug=True, host="0.0.0.0")
+    app.run(port=5001, debug=True, host="0.0.0.0")
diff --git a/pagination.py b/pagination.py
index f611cb6..4a77337 100644
--- a/pagination.py
+++ b/pagination.py
@@ -279,7 +279,7 @@ def create_page(
                             f"<a href='fileset?id={fileset_id}'>{fileset_text}</a>",
                         )
 
-                html += f"<td>{value}</td>\n"
+                html += f"<td>{'' if value is None else value}</td>\n"
             html += "</tr>\n"
             counter += 1
 
diff --git a/static/js/confirm_merge_form_handler.js b/static/js/confirm_merge_form_handler.js
index d514091..8487ff1 100644
--- a/static/js/confirm_merge_form_handler.js
+++ b/static/js/confirm_merge_form_handler.js
@@ -8,7 +8,8 @@ document.getElementById("confirm_merge_form").addEventListener("submit", async f
   const jsonData = {
     source_id: source_id,
     target_id: form.querySelector('input[name="target_id"]').value,
-    options: []
+    options: [],
+    matched_files: form.querySelector('input[name="matched_files"]').value
   };
   
   const checkedBoxes = form.querySelectorAll('input[name="options[]"]:checked');
diff --git a/static/js/update_merge_table_rows.js b/static/js/update_merge_table_rows.js
new file mode 100644
index 0000000..cb1c4c7
--- /dev/null
+++ b/static/js/update_merge_table_rows.js
@@ -0,0 +1,53 @@
+const toggleCommonFiles = document.getElementById("toggle-common-files");
+const toggleAllFields = document.getElementById("toggle-all-fields");
+
+function updateTableRows() {
+    const rows = document.querySelectorAll("tr");
+
+    const showUnmatched = !toggleCommonFiles.checked;
+    const showAllFields = toggleAllFields.checked;
+
+    rows.forEach(row => {
+        const is_matched = row.classList.contains("matched");
+        const is_unmatched = row.classList.contains("unmatched");
+        const is_main = row.classList.contains("main_field");
+        const is_other = row.classList.contains("other_field");
+
+        if ((is_matched || is_unmatched) && !(is_main || is_other)) {
+            let show;
+            if (showUnmatched) {
+                show = true;
+            } else {
+                show = is_matched;
+            }
+            row.style.display = show ? "" : "none";
+        }
+        else if (!(is_matched || is_unmatched) || !(is_main || is_other)) {
+            return;
+        }
+        else {
+            let show = false;
+
+            // Case 1: unmatched files checkbox - off, all file fields checkbox - off
+            if (!showUnmatched && !showAllFields) {
+                show = is_matched && is_main;
+            }
+            // Case 2: off, on
+            else if (!showUnmatched && showAllFields) {
+                show = is_matched && (is_main || is_other);
+            }
+            // Case 3: on, off
+            else if (showUnmatched && !showAllFields) {
+                show = is_main && (is_matched || is_unmatched);
+            }
+            // Case 4: on, on
+            else if (showUnmatched && showAllFields) {
+                show = true;
+            }
+
+            row.style.display = show ? "" : "none";
+        }
+    });
+}
+
+toggleCommonFiles.addEventListener("change", updateTableRows);
+toggleAllFields.addEventListener("change", updateTableRows);
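
A minimal sketch (illustrative filenames; not part of the commit) of the
matched_files round-trip added here: confirm_merge embeds the lowercased
target-to-source map as escaped JSON in a hidden input, the form handler
forwards the value, and execute_merge parses it back with json.loads:

    import html as html_lib
    import json

    matched_files = [("DISK1.IMG", "disk1.img")]  # (target, source) pairs
    matched_dict = {t.lower(): s.lower() for (t, s) in matched_files}

    escaped_json = html_lib.escape(json.dumps(matched_dict))
    print(f'<input type="hidden" name="matched_files" value="{escaped_json}">')
    # value="{&quot;disk1.img&quot;: &quot;disk1.img&quot;}"

    # The browser exposes the unescaped attribute through input.value, so the
    # endpoint can recover the mapping with json.loads(data.get("matched_files")).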


Commit: 0414ac9a9a696953e3fdf5a3718798e5d6626705
    https://github.com/scummvm/scummvm-sites/commit/0414ac9a9a696953e3fdf5a3718798e5d6626705
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Remove icon for default sorting state.

Changed paths:
    pagination.py
    static/style.css


diff --git a/pagination.py b/pagination.py
index 4a77337..57d634e 100644
--- a/pagination.py
+++ b/pagination.py
@@ -209,7 +209,7 @@ def create_page(
     for key in filters.keys():
         base_params = {k: v for k, v in request.args.items() if k != "sort"}
         icon_path = "icons/filter/"
-        icon_name = ""
+        icon_name = "no_icon"
 
         if key == sort_key:
             if sort_dir == "asc":
@@ -220,26 +220,35 @@ def create_page(
                 icon_name = "arrow_drop_down.png"
             else:
                 next_sort_dir = "asc"
-                icon_name = "unfold_more.png"
 
             if next_sort_dir != "default":
                 sort_param = f"{key}-{next_sort_dir}"
                 base_params["sort"] = sort_param
         else:
-            icon_name = "unfold_more.png"
             sort_param = f"{key}-asc"
             base_params["sort"] = sort_param
 
         query_string = "&".join(f"{k}={v}" for k, v in base_params.items())
         if key != "checksum":
             icon_src = url_for("static", filename=icon_path + icon_name)
-            html += f"""<th>
-                <a href='{filename}?{query_string}' class="header-link">
-                    <span></span>
-                    <span class="key-text">{key}</span>
-                    <img class="filter-icon" src="{icon_src}" alt="asc" width="25">
-                </a>
-            </th>"""
+            if icon_name != "no_icon":
+                html += f"""<th>
+                    <a href='{filename}?{query_string}' class="header-link">
+                        <div style="display:flex; align-items:center; width:100%;">
+                            <span style="flex:1; text-align:center;">{key}</span>
+                            <img src="{icon_src}" class="filter-icon" alt="asc" style="margin-left:auto;">
+                        </div>
+                    </a>
+                </th>"""
+            else:
+                html += f"""<th>
+                    <a href='{filename}?{query_string}' class="header-link">
+                        <div style="display:flex; align-items:center; width:100%;">
+                            <span style="flex:1; text-align:center;">{key}</span>
+                            <span style="width: 18px"></span>
+                        </div>
+                    </a>
+                </th>"""
 
     if results:
         counter = offset + 1
diff --git a/static/style.css b/static/style.css
index 720e9f7..aab06fb 100644
--- a/static/style.css
+++ b/static/style.css
@@ -3,8 +3,7 @@
   font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
 }
 
-td,
-th {
+td {
   padding-inline: 5px;
 }
 
@@ -33,13 +32,11 @@ td.filter {
 }
 
 th {
-  padding-top: 5px;
-  padding-bottom: 5px;
   text-align: center;
   background-color: var(--primary-color);
   color: white;
-  height: 30px;
   vertical-align: middle;
+  padding: 4px;
 }
 
 th a {
@@ -84,12 +81,10 @@ nav {
   justify-content: space-between;
   text-decoration: none;
   color: inherit;
-  padding: 4px 8px;
 }
 
 .filter-icon {
-  height: auto;
-  margin-left: 8px;
+  width: 18px;
 }
 
 button {


Commit: 2dc641b3235cbbe61b8df0cfcffe159d6fc92782
    https://github.com/scummvm/scummvm-sites/commit/2dc641b3235cbbe61b8df0cfcffe159d6fc92782
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Decode macbinary's filename as mac roman instead of utf-8.

Changed paths:
    compute_hash.py


diff --git a/compute_hash.py b/compute_hash.py
index ba8c62f..9d673cd 100644
--- a/compute_hash.py
+++ b/compute_hash.py
@@ -553,7 +553,8 @@ def extract_macbin_filename_from_header(file):
         header = f.read(128)
         name_len = header[1]
         filename_bytes = header[2 : 2 + name_len]
-        return filename_bytes.decode("utf-8")
+        filename = filename_bytes.decode("mac_roman")
+        return filename
 
 
 def file_classification(filepath):
@@ -562,7 +563,8 @@ def file_classification(filepath):
 
     # 1. Macbinary
     if is_macbin(filepath):
-        return [FileType.MAC_BINARY, extract_macbin_filename_from_header(filepath)]
+        base_name = extract_macbin_filename_from_header(filepath)
+        return [FileType.MAC_BINARY, base_name]
 
     # 2. Appledouble .rsrc
     if is_appledouble_rsrc(filepath):
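
A small sketch of why mac_roman is the right codec here, using a hypothetical
128-byte MacBinary header: 0x8E is "é" in Mac Roman but is not a valid UTF-8
start byte, so decoding such a name as UTF-8 raises UnicodeDecodeError:

    # Hypothetical header: name length 4, filename bytes b"Caf\x8e"
    header = bytes([0x00, 0x04]) + b"Caf\x8e" + bytes(122)  # 128 bytes total
    name_len = header[1]
    filename_bytes = header[2:2 + name_len]
    print(filename_bytes.decode("mac_roman"))  # Café
    # filename_bytes.decode("utf-8") would raise UnicodeDecodeError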


Commit: 3be01c8148b9bb705a615d60d49e9fd63cf55342
    https://github.com/scummvm/scummvm-sites/commit/3be01c8148b9bb705a615d60d49e9fd63cf55342
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Wrap filename in scanned dat in double quotes instead of single.

Changed paths:
    compute_hash.py


diff --git a/compute_hash.py b/compute_hash.py
index 9d673cd..3a395eb 100644
--- a/compute_hash.py
+++ b/compute_hash.py
@@ -818,7 +818,7 @@ def create_dat_file(hash_of_dirs, path, checksum_size=0):
                 timestamp,
             ) in hash_of_dir.items():
                 filename = encode_path_components(filename)
-                data = f"name '{filename}' size {size} size-r {size_r} size-rd {size_rd} modification-time {timestamp}"
+                data = f"""name "{filename}" size {size} size-r {size_r} size-rd {size_rd} modification-time {timestamp}"""
                 for key, value in hashes:
                     data += f" {key} {value}"
 
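The presumable motivation is filenames that themselves contain single quotes,
which the old single-quoted field could not hold; an illustration with a
made-up entry:

    filename = "it's a file.dat"
    data = f"""name "{filename}" size 128 size-r 0 size-rd 0 modification-time 2025-08-14"""
    print(data)
    # name "it's a file.dat" size 128 size-r 0 size-rd 0 modification-time 2025-08-14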


Commit: f5d5636f3a37bdd47591cfef2342e9caef87aa1b
    https://github.com/scummvm/scummvm-sites/commit/f5d5636f3a37bdd47591cfef2342e9caef87aa1b
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Update size filtering logic for scan.dat for macfiles.

Changed paths:
    db_functions.py


diff --git a/db_functions.py b/db_functions.py
index daec255..a018f38 100644
--- a/db_functions.py
+++ b/db_functions.py
@@ -59,7 +59,10 @@ def get_checksum_props(checkcode, checksum):
         # For type md5-t-5000
         if last == "1M" or last.isdigit():
             checksize = last
-        checktype = "-".join(exploded_checkcode)
+            checktype = "-".join(exploded_checkcode)
+        # For type md5-r, md5-d
+        else:
+            checktype = checkcode
 
     # Detection entries have checktypes as part of the checksum prefix
     if ":" in checksum:
@@ -431,6 +434,8 @@ def calc_key(fileset):
         for key, value in file.items():
             if key == "name":
                 value = value.lower()
+            if key == "modification-time":
+                continue
             key_string += ":" + str(value)
 
     key_string = key_string.strip(":")
@@ -712,6 +717,7 @@ def scan_process(
     dropped_early_no_candidate = 0
     manual_merged_with_detection = 0
     filesets_with_missing_files = 0
+    duplicate_or_existing_entry = 0
 
     id_to_fileset_mapping = defaultdict(dict)
 
@@ -738,6 +744,7 @@ def scan_process(
             skiplog=skiplog,
         )
         if existing:
+            duplicate_or_existing_entry += 1
             continue
 
         id_to_fileset_mapping[fileset_id] = fileset
@@ -752,11 +759,11 @@ def scan_process(
 
     fileset_count = 0
     for fileset_id, fileset in id_to_fileset_mapping.items():
+        fileset_count += 1
         console_log_matching(fileset_count)
         candidate_filesets = filter_candidate_filesets(
             fileset["rom"], transaction_id, conn
         )
-
         if len(candidate_filesets) == 0:
             category_text = "Drop fileset - No Candidates"
             fileset_name = fileset["name"] if "name" in fileset else ""
@@ -792,8 +799,6 @@ def scan_process(
             conn,
             skiplog,
         )
-        fileset_count += 1
-
     # If any partial fileset turned full with pre file updates, turn it full
     update_status_for_partial_filesets(list(filesets_check_for_full), conn)
 
@@ -803,12 +808,14 @@ def scan_process(
             "SELECT COUNT(fileset) from transactions WHERE `transaction` = %s",
             (transaction_id,),
         )
-        fileset_insertion_count = cursor.fetchone()["COUNT(fileset)"]
+        fileset_insertion_count = (
+            cursor.fetchone()["COUNT(fileset)"] + duplicate_or_existing_entry
+        )
         category_text = f"Uploaded from {src}"
         log_text = f"Completed loading DAT file, filename {filepath}, size {os.path.getsize(filepath)}. State {source_status}. Number of filesets: {fileset_insertion_count}. Transaction: {transaction_id}"
         create_log(category_text, user, log_text, conn)
         category_text = "Upload information"
-        log_text = f"Number of filesets: {fileset_insertion_count}. Filesets automatically merged: {automatic_merged_filesets}. Filesets requiring manual merge (multiple candidates): {manual_merged_filesets}. Filesets requiring manual merge (matched with detection): {manual_merged_with_detection}. Filesets dropped, no candidate: {dropped_early_no_candidate}. Filesets matched with existing Full fileset: {match_with_full_fileset}. Filesets with mismatched files with Full fileset: {mismatch_with_full_fileset}. Filesets missing files compared to partial fileset candidate: {filesets_with_missing_files}."
+        log_text = f"Number of filesets: {fileset_insertion_count}. Duplicate or existing filesets: {duplicate_or_existing_entry}. Filesets automatically merged: {automatic_merged_filesets}. Filesets requiring manual merge (multiple candidates): {manual_merged_filesets}. Filesets requiring manual merge (matched with detection): {manual_merged_with_detection}. Filesets dropped, no candidate: {dropped_early_no_candidate}. Filesets matched with existing Full fileset: {match_with_full_fileset}. Filesets with mismatched files with Full fileset: {mismatch_with_full_fileset}. Filesets missing files compared to partial fileset candidate: {filesets_with_missing_files}."
         console_log(log_text)
         create_log(category_text, user, log_text, conn)
 
@@ -1071,6 +1078,7 @@ def update_all_files(fileset, candidate_fileset_id, is_candidate_detection, conn
         filename_to_filepath_map = defaultdict(str)
         filepath_to_checksum_map = defaultdict(dict)
         filepath_to_sizes_map = defaultdict(dict)
+        filepath_to_mod_time_map = defaultdict(dict)
 
         for file in fileset["rom"]:
             base_name = os.path.basename(normalised_path(file["name"])).lower()
@@ -1085,6 +1093,7 @@ def update_all_files(fileset, candidate_fileset_id, is_candidate_detection, conn
                     sizes[key] = file[key]
 
             filepath_to_sizes_map[file["name"]] = sizes
+            filepath_to_mod_time_map[file["name"]] = file["modification-time"]
             filepath_to_checksum_map[file["name"]] = checksums
             same_filename_count[base_name] += 1
             filename_to_filepath_map[base_name] = file["name"]
@@ -1128,21 +1137,30 @@ def update_all_files(fileset, candidate_fileset_id, is_candidate_detection, conn
                 UPDATE file
                 SET size = %s,
                 `size-r` = %s,
-                `size-rd` = %s
+                `size-rd` = %s,
+                `modification-time` = %s
             """
             sizes = filepath_to_sizes_map[filepath]
+            mod_time = filepath_to_mod_time_map[filepath]
             if is_candidate_detection:
                 query += ",name = %s WHERE id = %s"
                 params = (
                     sizes["size"],
                     sizes["size-r"],
                     sizes["size-rd"],
+                    mod_time,
                     normalised_path(filepath),
                     file_id,
                 )
             else:
                 query += "WHERE id = %s"
-                params = (sizes["size"], sizes["size-r"], sizes["size-rd"], file_id)
+                params = (
+                    sizes["size"],
+                    sizes["size-r"],
+                    sizes["size-rd"],
+                    mod_time,
+                    file_id,
+                )
             cursor.execute(query, params)
 
 
@@ -1224,7 +1242,12 @@ def filter_candidate_filesets(roms, transaction_id, conn):
                         file[key],
                         name.lower(),
                         int(file["size"]),
-                        int(file["size-r"]),
+                    )
+                )
+                set_checksums.add(
+                    (
+                        file[key],
+                        name.lower(),
                         int(file["size-rd"]),
                     )
                 )
@@ -1233,16 +1256,11 @@ def filter_candidate_filesets(roms, transaction_id, conn):
                         file[key],
                         name.lower(),
                         -1,
-                        int(file["size-r"]),
-                        int(file["size-rd"]),
                     )
                 )
-        set_file_name_size.add(
-            (name.lower(), -1, int(file["size-r"]), int(file["size-rd"]))
-        )
-        set_file_name_size.add(
-            (name.lower(), int(file["size"]), int(file["size-r"]), int(file["size-rd"]))
-        )
+        set_file_name_size.add((name.lower(), -1))
+        set_file_name_size.add((name.lower(), int(file["size-rd"])))
+        set_file_name_size.add((name.lower(), int(file["size"])))
 
     # Filter candidates by detection filename and file size (including -1) and increase matched file count
     # if filesize = -1,
@@ -1254,50 +1272,43 @@ def filter_candidate_filesets(roms, transaction_id, conn):
         with conn.cursor() as cursor:
             for f in files:
                 filename = os.path.basename(f["name"]).lower()
-                size = f["size"]
-                size_r = f["size-r"]
-                size_rd = f["size-rd"]
-                if (filename, size, size_r, size_rd) in set_file_name_size:
-                    if size == -1:
-                        count += 1
-                    else:
-                        cursor.execute(
-                            """
-                            SELECT checksum, checksize, checktype
-                            FROM filechecksum
-                            WHERE file = %s
-                        """,
-                            (f["file_id"],),
-                        )
-                        checksums = cursor.fetchall()
-                        not_inc_count = False
-                        for c in checksums:
-                            filesize = size
-                            checksum = c["checksum"]
-                            checksize = c["checksize"]
-                            checktype = c["checktype"]
-                            # Macfiles handling
-                            if checktype in ["md5-r", "md5-rt"]:
-                                filesize = size_rd
-
-                            if checksize == "1M":
-                                checksize = 1048576
-                            elif checksize == "0":
-                                checksize = filesize
-                            if filesize <= int(checksize):
-                                if (
-                                    checksum,
-                                    filename,
-                                    size,
-                                    size_r,
-                                    size_rd,
-                                ) in set_checksums:
-                                    count += 1
-                                not_inc_count = True
-                                # if it was a true match, checksum should be present
-                                break
-                        if not not_inc_count:
+                sizes = [f["size"], f["size-rd"]]
+                for size in sizes:
+                    if (filename, size) in set_file_name_size:
+                        if size == -1:
                             count += 1
+                        else:
+                            cursor.execute(
+                                """
+                                SELECT checksum, checksize, checktype
+                                FROM filechecksum
+                                WHERE file = %s
+                            """,
+                                (f["file_id"],),
+                            )
+                            checksums = cursor.fetchall()
+                            not_inc_count = False
+                            for c in checksums:
+                                filesize = size
+                                checksum = c["checksum"]
+                                checksize = c["checksize"]
+
+                                if checksize == "1M":
+                                    checksize = 1048576
+                                elif checksize == "0":
+                                    checksize = filesize
+                                if filesize <= int(checksize):
+                                    if (
+                                        checksum,
+                                        filename,
+                                        size,
+                                    ) in set_checksums:
+                                        count += 1
+                                    not_inc_count = True
+                                    # if it was a true match, checksum should be present
+                                    break
+                            if not not_inc_count:
+                                count += 1
         if count > 0 and total_detection_files_map[fileset_id] <= count:
             match_counts[fileset_id] = count
 
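In outline, candidate filtering now matches on (filename, size) pairs and
tries both the plain size and the resource-fork data size; a condensed,
illustrative sketch (simplified from the hunks above, values made up):

    # Detection entry as parsed from the dat
    file = {"name": "Game Data", "size": "1024", "size-rd": "2048"}
    name = file["name"].lower()

    set_file_name_size = {
        (name, -1),                    # size unknown in the detection entry
        (name, int(file["size"])),     # plain data-fork size
        (name, int(file["size-rd"])),  # resource-fork data size (mac files)
    }

    # A candidate file from the database is tried against both of its sizes
    candidate = {"name": "GAME DATA", "size": 1024, "size-rd": 0}
    for size in (candidate["size"], candidate["size-rd"]):
        if (candidate["name"].lower(), size) in set_file_name_size:
            print("size match:", size)  # size match: 1024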


Commit: c04c669aaad8e35e0923baeb4e99a3155744992d
    https://github.com/scummvm/scummvm-sites/commit/c04c669aaad8e35e0923baeb4e99a3155744992d
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Check for rt and dt checktype suffix while adding equal checksums.

Changed paths:
    db_functions.py


diff --git a/db_functions.py b/db_functions.py
index a018f38..d00f796 100644
--- a/db_functions.py
+++ b/db_functions.py
@@ -298,7 +298,9 @@ def add_all_equal_checksums(checksize, checktype, checksum, file_id, conn):
         if "md5" not in checktype:
             return
         size_name = "size"
-        if checktype[-1] == "r":
+
+        # e.g. md5-r or md5-rt-5000
+        if checktype.endswith("r") or checktype.endswith("rt"):
             size_name += "-rd"
 
         cursor.execute(f"SELECT `{size_name}` FROM file WHERE id = %s", (file_id,))
@@ -320,7 +322,13 @@ def add_all_equal_checksums(checksize, checktype, checksum, file_id, conn):
                 "default": ["md5-0", "md5-1M", "md5-5000", "md5-t-5000"],
             }
 
-            key = checktype[-1] if checktype[-1] in md5_variants_map else "default"
+            if checktype.endswith("rt") or checktype.endswith("r"):
+                key = "r"
+            elif checktype.endswith("dt") or checktype.endswith("d"):
+                key = "d"
+            else:
+                key = "default"
+
             variants = md5_variants_map[key]
             inserted_checksum_type = f"{checktype}-{checksize}"
 
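Re-implemented here purely for illustration, the suffix handling now resolves
the md5 variant family like this (the old checktype[-1] lookup sent md5-rt
and md5-dt to "default"):

    def variant_key(checktype):
        # md5-r / md5-rt -> resource-fork variants; md5-d / md5-dt -> data fork
        if checktype.endswith("rt") or checktype.endswith("r"):
            return "r"
        if checktype.endswith("dt") or checktype.endswith("d"):
            return "d"
        return "default"

    for ct in ("md5-r", "md5-rt", "md5-dt", "md5-t"):
        print(ct, "->", variant_key(ct))
    # md5-r -> r, md5-rt -> r, md5-dt -> d, md5-t -> default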


Commit: b424821fcc28fb008d6ee9117e08c5387addbffa
    https://github.com/scummvm/scummvm-sites/commit/b424821fcc28fb008d6ee9117e08c5387addbffa
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add validation checks on user data from the payload along with rate limiting.

Changed paths:
  A validate_user_payload.py
    fileset.py
    requirements.txt


diff --git a/fileset.py b/fileset.py
index c0cacd5..0e27628 100644
--- a/fileset.py
+++ b/fileset.py
@@ -31,7 +31,18 @@ from db_functions import (
 from collections import defaultdict
 from schema import init_database
 
+from validate_user_payload import validate_user_payload
+
+from flask_limiter import Limiter
+from flask_limiter.util import get_remote_address
+
 app = Flask(__name__)
+limiter = Limiter(
+    get_remote_address,
+    app=app,
+    default_limits=[],
+    storage_uri="memory://",
+)
 
 secret_key = os.urandom(24)
 
@@ -1348,6 +1359,7 @@ def get_width(name, default):
 
 
 @app.route("/validate", methods=["POST"])
+@limiter.limit("3 per minute")
 def validate():
     error_codes = {
         "unknown": -1,
@@ -1361,10 +1373,20 @@ def validate():
     ip = request.remote_addr
     ip = ".".join(ip.split(".")[:3]) + ".X"
 
-    game_metadata = {k: v for k, v in json_object.items() if k != "files"}
-
+    is_valid_payload, response_message = validate_user_payload(json_object)
     json_response = {"error": error_codes["success"], "files": []}
 
+    if not is_valid_payload:
+        json_response["error"] = error_codes["unknown"]
+        json_response["status"] = response_message
+        category = "Invalid user payload."
+        text = f"User payload is not valid. User IP: {ip}, Status: {response_message}"
+        conn = db_connect()
+        create_log(category, ip, text, conn)
+        return jsonify(json_response)
+
+    game_metadata = {k: v for k, v in json_object.items() if k != "files"}
+
     file_object = json_object["files"]
     if not file_object:
         json_response["error"] = error_codes["empty"]
@@ -1527,4 +1549,4 @@ def delete_files(id):
 
 if __name__ == "__main__":
     app.secret_key = secret_key
-    app.run(port=5001, debug=True, host="0.0.0.0")
+    app.run(debug=False, host="0.0.0.0")
diff --git a/requirements.txt b/requirements.txt
index 8340e26..6547c1e 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -16,3 +16,4 @@ pytest
 setuptools
 Werkzeug
 wheel
+Flask-Limiter
diff --git a/validate_user_payload.py b/validate_user_payload.py
new file mode 100644
index 0000000..41dc445
--- /dev/null
+++ b/validate_user_payload.py
@@ -0,0 +1,156 @@
+import re
+
+MAX_FILES = 10000
+MAX_CHECKSUMS_PER_FILE = 8
+VALID_KEYS = {"gameid", "engineid", "extra", "platform", "language", "files"}
+VALID_FILE_KEYS = {"name", "size", "size-r", "size-rd", "checksums"}
+
+# Field lengths are taken from the defined schema
+FIELD_MAX_SIZES = {
+    # Metadata
+    "gameid": 100,
+    "engineid": 100,
+    "extra": 200,
+    "platform": 30,
+    "language": 10,
+    # File
+    "name": 200,
+    "size": 64,
+    "size-r": 64,
+    "size-rd": 64,
+}
+
+"""
+    Example payload - 
+
+    {
+        "gameid": "this_is_game_id",
+        "engineid": "this_is_engine_id",
+        "extra": "this_is_extra",
+        "platform": "this_is_platform",
+        "language": "lang",
+        "files": [
+            {
+                "name": "file1",
+                "size": "1234",
+                "checksums": [
+                                {"type": "md5", "checksum": "12345abcde12345ABCDE12345ABCDEab"}
+                            ]
+            }
+        ]
+    }
+"""
+
+
+def is_valid_md5(value):
+    """
+    Check if the md5 is 32 characters long and contains only [a-fA-F0-9]
+    """
+    if not isinstance(value, str):
+        return False
+    return re.fullmatch(r"[a-fA-F0-9]{32}", value) is not None
+
+
+def validate_field_len(field_name, value, max_size):
+    """
+    General length validator for values.
+    """
+    if not isinstance(value, str):
+        return False, f"{field_name}_invalid_type"
+
+    if len(value) > max_size:
+        return False, f"{field_name}_length_exceeded"
+
+    return True, "valid"
+
+
+def validate_user_payload(json_object):
+    """
+    All the checks on user data are performed here.
+    - Datatype of all values
+    - General structure of the payload
+    - Any key missing
+    - Max length of values
+    - Max number of files
+    - Max number of checksums per file
+    - Missing filename
+    - Valid numeric size
+    - Valid md5 checksum
+    """
+    # Ensure the payload is a dictionary
+    if not isinstance(json_object, dict):
+        return False, "invalid_json_object"
+
+    # Ensure no required key is missing
+    missing_keys = VALID_KEYS - json_object.keys()
+    if missing_keys:
+        return False, f"missing_required_keys - {list(missing_keys)}"
+
+    # Validate the maximum length of metadata fields
+    for key in json_object:
+        if key in VALID_KEYS and key != "files":
+            valid, res = validate_field_len(key, json_object[key], FIELD_MAX_SIZES[key])
+            if not valid:
+                return False, res
+
+    # Ensure files are present as a list
+    files = json_object.get("files", [])
+    if not isinstance(files, list):
+        return False, "files_should_be_list"
+
+    # Bounds on the number of files
+    if len(files) == 0:
+        return False, "empty_fileset"
+    if len(files) > MAX_FILES:
+        return False, f"too_many_files - {len(files)}"
+
+    # Processing every file entry
+    for file_entry in files:
+        if not isinstance(file_entry, dict):
+            return False, "invalid_file_entry"
+
+        # Ensure filename exists
+        if "name" not in file_entry:
+            return False, "missing_filename"
+        # Validate the maximum length of file keys other than checksums
+        for file_key in ["name", "size", "size-r", "size-rd"]:
+            if file_key in file_entry:
+                valid, res = validate_field_len(
+                    file_key, file_entry[file_key], FIELD_MAX_SIZES[file_key]
+                )
+                if not valid:
+                    return False, res
+                # Size fields must also be numeric
+                if file_key.startswith("size"):
+                    if not file_entry[file_key].isdigit():
+                        return False, f"{file_key}_not_a_number"
+
+        # Validation for checksums
+        checksums_raw = file_entry.get("checksums", [])
+        if not isinstance(checksums_raw, list):
+            return False, "invalid_checksum_format: not a list"
+        # At most 8 checksums per file; the maximum occurs for mac files.
+        if len(checksums_raw) > MAX_CHECKSUMS_PER_FILE:
+            return False, f"checksums_number_exceeded: {len(checksums_raw)}"
+
+        for checksum_entry in checksums_raw:
+            if not isinstance(checksum_entry, dict):
+                return False, f"invalid_checksum_entry - {checksum_entry}"
+            if "type" not in checksum_entry or "checksum" not in checksum_entry:
+                return False, "checksum_missing_fields"
+            ctype = checksum_entry["type"]
+            cvalue = checksum_entry["checksum"]
+            if not ctype.startswith("md5"):
+                return False, f"unsupported_checksum_type: {ctype}"
+            # md5 should be 32 characters long and contain only a-fA-F0-9
+            if not is_valid_md5(cvalue):
+                return False, f"invalid_md5_format: {ctype}"
+
+        # Check that file keys other than md5 variants are valid
+        for key in file_entry:
+            if key.startswith("md5"):
+                continue
+            if key not in VALID_FILE_KEYS:
+                return False, f"invalid_file_key: {file_entry['name']} - {key}"
+
+    return True, "valid"
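
A quick usage sketch of the validator's (bool, status) contract, with a
made-up payload:

    ok, status = validate_user_payload("not a dict")
    print(ok, status)  # False invalid_json_object

    payload = {
        "gameid": "g", "engineid": "e", "extra": "", "platform": "dos",
        "language": "en",
        "files": [{"name": "f.dat", "size": "10",
                   "checksums": [{"type": "md5",
                                  "checksum": "d41d8cd98f00b204e9800998ecf8427e"}]}],
    }
    print(validate_user_payload(payload))  # (True, 'valid')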


Commit: b4fb9213a18a2018b1df1c9b8607a307ebcf2fd6
    https://github.com/scummvm/scummvm-sites/commit/b4fb9213a18a2018b1df1c9b8607a307ebcf2fd6
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Add python virtual environment path in apache config file.

Changed paths:
    apache2-config/gamesdb.sev.zone.conf


diff --git a/apache2-config/gamesdb.sev.zone.conf b/apache2-config/gamesdb.sev.zone.conf
index 4356372..7386ded 100644
--- a/apache2-config/gamesdb.sev.zone.conf
+++ b/apache2-config/gamesdb.sev.zone.conf
@@ -5,14 +5,17 @@
     CustomLog ${APACHE_LOG_DIR}/integrity-access.log combined
     ErrorLog ${APACHE_LOG_DIR}/integrity-error.log
     DocumentRoot /home/ubuntu/projects/python/scummvm_sites_2025/scummvm-sites
-    WSGIDaemonProcess scummvm-sites user=www-data group=www-data threads=5
+    WSGIDaemonProcess scummvm-sites \
+        user=www-data group=www-data threads=5 \
+        python-home=/home/ubuntu/projects/python/scummvm_sites_2025/scummvm-sites/venv
+    WSGIProcessGroup scummvm-sites
     WSGIScriptAlias / /home/ubuntu/projects/python/scummvm_sites_2025/scummvm-sites/app.wsgi
 
     <Directory /home/ubuntu/projects/python/scummvm_sites_2025/scummvm-sites>
         AuthType Basic
-	AuthName "nope"
-	AuthUserFile /home/ubuntu/projects/python/scummvm_sites_2025/.htpasswd
-	Require valid-user
+	    AuthName "nope"
+	    AuthUserFile /home/ubuntu/projects/python/scummvm_sites_2025/.htpasswd
+	    Require valid-user
     </Directory>
 
 </VirtualHost>


Commit: a130a0c811abf8b0e5495dddadfe9c8a086cb5c1
    https://github.com/scummvm/scummvm-sites/commit/a130a0c811abf8b0e5495dddadfe9c8a086cb5c1
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Remove apache basic auth from validate endpoint.

Changed paths:
    apache2-config/gamesdb.sev.zone.conf


diff --git a/apache2-config/gamesdb.sev.zone.conf b/apache2-config/gamesdb.sev.zone.conf
index 7386ded..058e0b2 100644
--- a/apache2-config/gamesdb.sev.zone.conf
+++ b/apache2-config/gamesdb.sev.zone.conf
@@ -18,4 +18,10 @@
 	    Require valid-user
     </Directory>
 
+    <Location "/validate">
+        AuthType None
+        Require all granted
+        Satisfy Any
+    </Location>
+
 </VirtualHost>


Commit: 9934d9fb97ab268a6df2919d31a8ed42773ee858
    https://github.com/scummvm/scummvm-sites/commit/9934d9fb97ab268a6df2919d31a8ed42773ee858
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Delete unused files.

Changed paths:
  R .htaccess
  R clear.py
  R js_functions.js
  R megadata.py


diff --git a/.htaccess b/.htaccess
deleted file mode 100644
index 3af6f96..0000000
--- a/.htaccess
+++ /dev/null
@@ -1,16 +0,0 @@
-RewriteCond %{REQUEST_FILENAME} !-d
-RewriteCond %{REQUEST_FILENAME}\.php -f
-RewriteRule ^(.*)$ $1.php [NC,L]
-
-<Files "mysql_config.json">
-    Order allow,deny
-    Deny from all
-</Files>
-<Files "bin/*">
-    Order allow,deny
-    Deny from all
-</Files>
-<Files "include/*">
-    Order allow,deny
-    Deny from all
-</Files>
diff --git a/clear.py b/clear.py
deleted file mode 100644
index ccc5588..0000000
--- a/clear.py
+++ /dev/null
@@ -1,62 +0,0 @@
-"""
-This script deletes all data from the tables in the database and resets auto-increment counters.
-Using it when testing the data insertion.
-"""
-
-import pymysql
-import json
-import os
-
-
-def truncate_all_tables(conn):
-    # fmt: off
-    tables = ["filechecksum", "queue", "history", "transactions", "file", "fileset", "game", "engine", "log"]
-    cursor = conn.cursor()
-    # fmt: on
-
-    # Disable foreign key checks
-    cursor.execute("SET FOREIGN_KEY_CHECKS = 0")
-
-    for table in tables:
-        try:
-            cursor.execute(f"TRUNCATE TABLE `{table}`")
-            print(f"Table '{table}' truncated successfully")
-        except pymysql.Error as err:
-            print(f"Error truncating table '{table}': {err}")
-
-    # Enable foreign key checks
-    cursor.execute("SET FOREIGN_KEY_CHECKS = 1")
-
-
-if __name__ == "__main__":
-    base_dir = os.path.dirname(os.path.abspath(__file__))
-    config_path = os.path.join(base_dir, "mysql_config.json")
-    with open(config_path) as f:
-        mysql_cred = json.load(f)
-
-    servername = mysql_cred["servername"]
-    username = mysql_cred["username"]
-    password = mysql_cred["password"]
-    dbname = mysql_cred["dbname"]
-
-    # Create connection
-    conn = pymysql.connect(
-        host=servername,
-        user=username,
-        password=password,
-        db=dbname,  # Specify the database to use
-        charset="utf8mb4",
-        cursorclass=pymysql.cursors.DictCursor,
-        autocommit=True,
-    )
-
-    # Check connection
-    if conn is None:
-        print("Error connecting to MySQL")
-        exit(1)
-
-    # Truncate all tables
-    truncate_all_tables(conn)
-
-    # Close connection
-    conn.close()
diff --git a/js_functions.js b/js_functions.js
deleted file mode 100644
index 187556e..0000000
--- a/js_functions.js
+++ /dev/null
@@ -1,46 +0,0 @@
-function delete_id(value) {
-  $("#delete-confirm").slideDown();
-
-  $.ajax({
-    url: "fileset.php",
-    type: "post",
-    dataType: "json",
-    data: {
-      delete: value,
-    },
-  });
-}
-
-function match_id(value) {
-  $.ajax({
-    url: "fileset.php",
-    type: "post",
-    dataType: "json",
-    data: {
-      match: value,
-    },
-  });
-}
-
-function remove_empty_inputs() {
-  var myForm = document.getElementById("filters-form");
-  var allInputs = myForm.getElementsByTagName("input");
-  var input, i;
-
-  for (i = 0; (input = allInputs[i]); i++) {
-    if (input.getAttribute("name") && !input.value) {
-      console.log(input);
-      input.setAttribute("name", "");
-    }
-  }
-}
-
-function hyperlink(link) {
-  window.location = link;
-}
-
-$(document).ready(function () {
-  $(".hidden").hide();
-  $("#delete-button").one("click", delete_id);
-  $("#match-button").one("click", match_id);
-});
diff --git a/megadata.py b/megadata.py
deleted file mode 100644
index 0b4b3af..0000000
--- a/megadata.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import os
-
-
-class Megadata:
-    def __init__(self, file_path):
-        self.file_path = file_path
-        self.hash = self.calculate_hash(file_path)
-        self.size = os.path.getsize(file_path)
-        self.creation_time = os.path.getctime(file_path)
-        self.modification_time = os.path.getmtime(file_path)
-
-    def calculate_hash(self, file_path):
-        pass
-
-    def __eq__(self, other):
-        return (
-            self.hash == other.hash
-            and self.size == other.size
-            and self.creation_time == other.creation_time
-            and self.modification_time == other.modification_time
-        )
-
-
-def record_megadata(directory):
-    file_megadata = {}
-    for root, _, files in os.walk(directory):
-        for file in files:
-            file_path = os.path.join(root, file)
-            file_megadata[file_path] = Megadata(file_path)
-    return file_megadata
-
-
-def check_for_updates(old_megadata, current_directory):
-    current_megadata = record_megadata(current_directory)
-    updates = []
-    for old_path, old_data in old_megadata.items():
-        for current_path, current_data in current_megadata.items():
-            if old_data == current_data and old_path != current_path:
-                updates.append((old_path, current_path))
-    return updates


Commit: 34686f1cd93393f588b8323e28a5c5594eeb474a
    https://github.com/scummvm/scummvm-sites/commit/34686f1cd93393f588b8323e28a5c5594eeb474a
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Restructure project to a python module.

Changed paths:
  A src/__init__.py
  A src/app/__init__.py
  A src/app/fileset.py
  A src/app/pagination.py
  A src/app/validate_user_payload.py
  A src/scripts/__init__.py
  A src/scripts/compute_hash.py
  A src/scripts/dat_parser.py
  A src/scripts/db_functions.py
  A src/scripts/schema.py
  A src/utils/__init__.py
  A src/utils/console_log.py
  A src/utils/cookie.py
  A src/utils/db_config.py
  A tests/__init__.py
  R compute_hash.py
  R dat_parser.py
  R db_functions.py
  R fileset.py
  R pagination.py
  R schema.py
  R static/icons/filter/unfold_more.png
  R validate_user_payload.py
    apache2-config/gamesdb.sev.zone.conf
    app.wsgi
    tests/test_compute_hash.py
    tests/test_punycode.py


diff --git a/apache2-config/gamesdb.sev.zone.conf b/apache2-config/gamesdb.sev.zone.conf
index 058e0b2..342101e 100644
--- a/apache2-config/gamesdb.sev.zone.conf
+++ b/apache2-config/gamesdb.sev.zone.conf
@@ -10,6 +10,7 @@
         python-home=/home/ubuntu/projects/python/scummvm_sites_2025/scummvm-sites/venv
     WSGIProcessGroup scummvm-sites
     WSGIScriptAlias / /home/ubuntu/projects/python/scummvm_sites_2025/scummvm-sites/app.wsgi
+    Alias /static /home/ubuntu/projects/python/scummvm_sites_2025/scummvm-sites/static
 
     <Directory /home/ubuntu/projects/python/scummvm_sites_2025/scummvm-sites>
         AuthType Basic
diff --git a/app.wsgi b/app.wsgi
index a52d3ab..8604033 100644
--- a/app.wsgi
+++ b/app.wsgi
@@ -3,7 +3,7 @@ import logging
 
 sys.path.insert(0, "/home/ubuntu/projects/python/scummvm_sites_2025/scummvm-sites")
 
-from fileset import app as application
+from src.app.fileset import app as application
 
 logging.basicConfig(stream=sys.stderr)
 sys.stderr = sys.stdout
diff --git a/src/__init__.py b/src/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/app/__init__.py b/src/app/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/fileset.py b/src/app/fileset.py
similarity index 95%
rename from fileset.py
rename to src/app/fileset.py
index 0e27628..bdc78c8 100644
--- a/fileset.py
+++ b/src/app/fileset.py
@@ -8,35 +8,32 @@ from flask import (
     render_template,
     make_response,
 )
-
-import pymysql.cursors
 import json
 import html as html_lib
 import os
 import getpass
-from pagination import create_page
+from src.app.pagination import create_page
 import difflib
-from db_functions import (
+from src.scripts.db_functions import (
     get_all_related_filesets,
     convert_log_text_to_links,
     user_integrity_check,
-    db_connect,
     create_log,
-    db_connect_root,
     delete_original_fileset,
     normalised_path,
     insert_file,
     insert_filechecksum,
 )
+from src.utils.db_config import db_connect, db_connect_root
 from collections import defaultdict
-from schema import init_database
-
-from validate_user_payload import validate_user_payload
-
+from src.scripts.schema import init_database
+from src.app.validate_user_payload import validate_user_payload
 from flask_limiter import Limiter
 from flask_limiter.util import get_remote_address
+from src.utils.cookie import get_filesets_per_page, get_logs_per_page
+from src.utils.db_config import STATIC_DIR, TEMPLATES_DIR
 
-app = Flask(__name__)
+app = Flask(__name__, static_folder=STATIC_DIR, template_folder=TEMPLATES_DIR)
 limiter = Limiter(
     get_remote_address,
     app=app,
@@ -81,20 +78,7 @@ def fileset():
     old_id = request.args.get("redirected_from", default=None, type=int)
     widetable = request.args.get("widetable", default="partial", type=str)
     # Load MySQL credentials from a JSON file
-    base_dir = os.path.dirname(os.path.abspath(__file__))
-    config_path = os.path.join(base_dir, "mysql_config.json")
-    with open(config_path) as f:
-        mysql_cred = json.load(f)
-
-    # Create a connection to the MySQL server
-    connection = pymysql.connect(
-        host=mysql_cred["servername"],
-        user=mysql_cred["username"],
-        password=mysql_cred["password"],
-        db=mysql_cred["dbname"],
-        charset="utf8mb4",
-        cursorclass=pymysql.cursors.DictCursor,
-    )
+    connection = db_connect()
 
     try:
         with connection.cursor() as cursor:
@@ -469,19 +453,7 @@ def merge_fileset(id):
     if request.method == "POST":
         search_query = request.form["search"]
 
-        base_dir = os.path.dirname(os.path.abspath(__file__))
-        config_path = os.path.join(base_dir, "mysql_config.json")
-        with open(config_path) as f:
-            mysql_cred = json.load(f)
-
-        connection = pymysql.connect(
-            host=mysql_cred["servername"],
-            user=mysql_cred["username"],
-            password=mysql_cred["password"],
-            db=mysql_cred["dbname"],
-            charset="utf8mb4",
-            cursorclass=pymysql.cursors.DictCursor,
-        )
+        connection = db_connect()
 
         try:
             with connection.cursor() as cursor:
@@ -587,19 +559,7 @@ def merge_fileset(id):
 
 @app.route("/fileset/<int:id>/possible_merge", methods=["GET", "POST"])
 def possible_merge_filesets(id):
-    base_dir = os.path.dirname(os.path.abspath(__file__))
-    config_path = os.path.join(base_dir, "mysql_config.json")
-    with open(config_path) as f:
-        mysql_cred = json.load(f)
-
-    connection = pymysql.connect(
-        host=mysql_cred["servername"],
-        user=mysql_cred["username"],
-        password=mysql_cred["password"],
-        db=mysql_cred["dbname"],
-        charset="utf8mb4",
-        cursorclass=pymysql.cursors.DictCursor,
-    )
+    connection = db_connect()
 
     try:
         with connection.cursor() as cursor:
@@ -723,19 +683,7 @@ def confirm_merge(id):
         else request.form.get("target_id")
     )
 
-    base_dir = os.path.dirname(os.path.abspath(__file__))
-    config_path = os.path.join(base_dir, "mysql_config.json")
-    with open(config_path) as f:
-        mysql_cred = json.load(f)
-
-    connection = pymysql.connect(
-        host=mysql_cred["servername"],
-        user=mysql_cred["username"],
-        password=mysql_cred["password"],
-        db=mysql_cred["dbname"],
-        charset="utf8mb4",
-        cursorclass=pymysql.cursors.DictCursor,
-    )
+    connection = db_connect()
 
     try:
         with connection.cursor() as cursor:
@@ -1109,19 +1057,7 @@ def execute_merge(id):
     options = data.get("options")
     matched_dict = json.loads(data.get("matched_files"))
 
-    base_dir = os.path.dirname(os.path.abspath(__file__))
-    config_path = os.path.join(base_dir, "mysql_config.json")
-    with open(config_path) as f:
-        mysql_cred = json.load(f)
-
-    connection = pymysql.connect(
-        host=mysql_cred["servername"],
-        user=mysql_cred["username"],
-        password=mysql_cred["password"],
-        db=mysql_cred["dbname"],
-        charset="utf8mb4",
-        cursorclass=pymysql.cursors.DictCursor,
-    )
+    connection = db_connect()
 
     try:
         with connection.cursor() as cursor:
@@ -1346,18 +1282,6 @@ def config():
     )
 
 
-def get_filesets_per_page():
-    return int(request.cookies.get("filesets_per_page", "25"))
-
-
-def get_logs_per_page():
-    return int(request.cookies.get("logs_per_page", "25"))
-
-
-def get_width(name, default):
-    return int(request.cookies.get(name, default))
-
-
 @app.route("/validate", methods=["POST"])
 @limiter.limit("3 per minute")
 def validate():
@@ -1549,4 +1473,4 @@ def delete_files(id):
 
 if __name__ == "__main__":
     app.secret_key = secret_key
-    app.run(debug=False, host="0.0.0.0")
+    app.run(port=5001, debug=True, host="0.0.0.0")
diff --git a/pagination.py b/src/app/pagination.py
similarity index 95%
rename from pagination.py
rename to src/app/pagination.py
index 57d634e..75915c8 100644
--- a/pagination.py
+++ b/src/app/pagination.py
@@ -1,11 +1,9 @@
 from flask import Flask, request, url_for
-import pymysql
-import json
 import re
 import os
-
 from urllib.parse import urlencode
-
+from src.utils.db_config import db_connect, STATIC_DIR
+from src.utils.cookie import get_width
 
 app = Flask(__name__)
 
@@ -54,19 +52,7 @@ def create_page(
     filters={},
     mapping={},
 ):
-    base_dir = os.path.dirname(os.path.abspath(__file__))
-    config_path = os.path.join(base_dir, "mysql_config.json")
-    with open(config_path) as f:
-        mysql_cred = json.load(f)
-
-    conn = pymysql.connect(
-        host=mysql_cred["servername"],
-        user=mysql_cred["username"],
-        password=mysql_cred["password"],
-        db=mysql_cred["dbname"],
-        charset="utf8mb4",
-        cursorclass=pymysql.cursors.DictCursor,
-    )
+    conn = db_connect()
 
     with conn.cursor() as cursor:
         tables = set()
@@ -143,7 +129,7 @@ def create_page(
 
     # Initial html code including the navbar is stored in a separate html file.
     html = ""
-    navbar_path = os.path.join(app.root_path, "static", "navbar_string.html")
+    navbar_path = os.path.join(STATIC_DIR, "navbar_string.html")
     with open(navbar_path, "r") as f:
         html = f.read()
 
@@ -153,8 +139,6 @@ def create_page(
         <table class="fixed-table" style="margin-top: 80px;">
     """
 
-    from fileset import get_width
-
     if records_table == "fileset":
         fileset_dashboard_widths_default = {
             "fileset_serial_no": "5",
diff --git a/validate_user_payload.py b/src/app/validate_user_payload.py
similarity index 100%
rename from validate_user_payload.py
rename to src/app/validate_user_payload.py
diff --git a/src/scripts/__init__.py b/src/scripts/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/compute_hash.py b/src/scripts/compute_hash.py
similarity index 100%
rename from compute_hash.py
rename to src/scripts/compute_hash.py
diff --git a/dat_parser.py b/src/scripts/dat_parser.py
similarity index 99%
rename from dat_parser.py
rename to src/scripts/dat_parser.py
index 9655d5c..a1b0679 100644
--- a/dat_parser.py
+++ b/src/scripts/dat_parser.py
@@ -1,8 +1,8 @@
 import re
 import os
 import sys
-from db_functions import db_insert, match_fileset
 import argparse
+from src.scripts.db_functions import db_insert, match_fileset
 
 
 def remove_quotes(string):
diff --git a/db_functions.py b/src/scripts/db_functions.py
similarity index 98%
rename from db_functions.py
rename to src/scripts/db_functions.py
index d00f796..95c8aa4 100644
--- a/db_functions.py
+++ b/src/scripts/db_functions.py
@@ -1,5 +1,4 @@
 import pymysql
-import json
 import getpass
 import time
 import hashlib
@@ -7,45 +6,15 @@ import os
 from collections import defaultdict
 import re
 import copy
-import sys
-
-
-def db_connect():
-    console_log("Connecting to the Database.")
-    base_dir = os.path.dirname(os.path.abspath(__file__))
-    config_path = os.path.join(base_dir, "mysql_config.json")
-    with open(config_path) as f:
-        mysql_cred = json.load(f)
-
-    conn = pymysql.connect(
-        host=mysql_cred["servername"],
-        user=mysql_cred["username"],
-        password=mysql_cred["password"],
-        db=mysql_cred["dbname"],
-        charset="utf8mb4",
-        cursorclass=pymysql.cursors.DictCursor,
-        autocommit=False,
-    )
-    console_log(f"Connected to Database - {mysql_cred['dbname']}")
-    return conn
-
-
-def db_connect_root():
-    base_dir = os.path.dirname(os.path.abspath(__file__))
-    config_path = os.path.join(base_dir, "mysql_config.json")
-    with open(config_path) as f:
-        mysql_cred = json.load(f)
-
-    conn = pymysql.connect(
-        host=mysql_cred["servername"],
-        user=mysql_cred["username"],
-        password=mysql_cred["password"],
-        charset="utf8mb4",
-        cursorclass=pymysql.cursors.DictCursor,
-        autocommit=True,
-    )
-
-    return (conn, mysql_cred["dbname"])
+from src.utils.db_config import db_connect
+from src.utils.console_log import (
+    console_log,
+    console_log_candidate_filtering,
+    console_log_detection,
+    console_log_file_update,
+    console_log_matching,
+    console_log_total_filesets,
+)
 
 
 def get_checksum_props(checkcode, checksum):
@@ -2776,38 +2745,3 @@ def add_usercount(fileset, ip, conn):
             category_text = "Existing user fileset - same user."
             log_text = f"User Fileset:{fileset} exists. Match count: {count}."
             create_log(category_text, ip, log_text, conn)
-
-
-def console_log(message):
-    sys.stdout.write(" " * 50 + "\r")
-    sys.stdout.flush()
-    print(message)
-
-
-def console_log_candidate_filtering(fileset_count):
-    sys.stdout.write(f"Filtering Candidates - Fileset {fileset_count}\r")
-    sys.stdout.flush()
-
-
-def console_log_file_update(fileset_count):
-    sys.stdout.write(f"Updating files - Fileset {fileset_count}\r")
-    sys.stdout.flush()
-
-
-def console_log_matching(fileset_count):
-    sys.stdout.write(f"Performing Match - Fileset {fileset_count}\r")
-    sys.stdout.flush()
-
-
-def console_log_detection(fileset_count):
-    sys.stdout.write(f"Processing - Fileset {fileset_count}\r")
-    sys.stdout.flush()
-
-
-def console_log_total_filesets(file_path):
-    count = 0
-    with open(file_path, "r") as f:
-        for line in f:
-            if line.strip().startswith("game ("):
-                count += 1
-    print(f"Total filesets present - {count}.")
diff --git a/schema.py b/src/scripts/schema.py
similarity index 94%
rename from schema.py
rename to src/scripts/schema.py
index 5a42f29..2388add 100644
--- a/schema.py
+++ b/src/scripts/schema.py
@@ -1,32 +1,12 @@
-import json
 import pymysql
 import random
 import string
 from datetime import datetime
-import os
+from src.utils.db_config import db_connect_root
 
 
 def init_database():
-    # Load MySQL credentials
-    base_dir = os.path.dirname(os.path.abspath(__file__))
-    config_path = os.path.join(base_dir, "mysql_config.json")
-    with open(config_path) as f:
-        mysql_cred = json.load(f)
-
-    servername = mysql_cred["servername"]
-    username = mysql_cred["username"]
-    password = mysql_cred["password"]
-    dbname = mysql_cred["dbname"]
-
-    # Create connection
-    conn = pymysql.connect(
-        host=servername,
-        user=username,
-        password=password,
-        charset="utf8mb4",
-        cursorclass=pymysql.cursors.DictCursor,
-        autocommit=False,
-    )
+    (conn, dbname) = db_connect_root()
 
     # Check connection
     if conn is None:
diff --git a/src/utils/__init__.py b/src/utils/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/utils/console_log.py b/src/utils/console_log.py
new file mode 100644
index 0000000..97275d4
--- /dev/null
+++ b/src/utils/console_log.py
@@ -0,0 +1,40 @@
+"""
+These functions are used for logging details while running scripts manually.
+"""
+
+import sys
+
+
+def console_log(message):
+    sys.stdout.write(" " * 50 + "\r")
+    sys.stdout.flush()
+    print(message)
+
+
+def console_log_candidate_filtering(fileset_count):
+    sys.stdout.write(f"Filtering Candidates - Fileset {fileset_count}\r")
+    sys.stdout.flush()
+
+
+def console_log_file_update(fileset_count):
+    sys.stdout.write(f"Updating files - Fileset {fileset_count}\r")
+    sys.stdout.flush()
+
+
+def console_log_matching(fileset_count):
+    sys.stdout.write(f"Performing Match - Fileset {fileset_count}\r")
+    sys.stdout.flush()
+
+
+def console_log_detection(fileset_count):
+    sys.stdout.write(f"Processing - Fileset {fileset_count}\r")
+    sys.stdout.flush()
+
+
+def console_log_total_filesets(file_path):
+    count = 0
+    with open(file_path, "r") as f:
+        for line in f:
+            if line.strip().startswith("game ("):
+                count += 1
+    print(f"Total filesets present - {count}.")
diff --git a/src/utils/cookie.py b/src/utils/cookie.py
new file mode 100644
index 0000000..51dcfdd
--- /dev/null
+++ b/src/utils/cookie.py
@@ -0,0 +1,13 @@
+from flask import request
+
+
+def get_filesets_per_page():
+    return int(request.cookies.get("filesets_per_page", "25"))
+
+
+def get_logs_per_page():
+    return int(request.cookies.get("logs_per_page", "25"))
+
+
+def get_width(name, default):
+    return int(request.cookies.get(name, default))
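
[Editor's note: these helpers read per-user display preferences from Flask request cookies, so they only work inside a request context; a minimal sketch, with a hypothetical route name:]

    from flask import Flask
    from src.utils.cookie import get_filesets_per_page, get_logs_per_page

    app = Flask(__name__)

    @app.route("/prefs")  # hypothetical route for illustration
    def prefs():
        # Both helpers fall back to 25 when the cookie is missing.
        return f"{get_filesets_per_page()} filesets / {get_logs_per_page()} logs per page"
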
diff --git a/src/utils/db_config.py b/src/utils/db_config.py
new file mode 100644
index 0000000..f5f7e76
--- /dev/null
+++ b/src/utils/db_config.py
@@ -0,0 +1,50 @@
+import pymysql
+import os
+import json
+from src.utils.console_log import console_log
+
+BASE_DIR = os.path.abspath(os.path.dirname(__file__))
+PROJECT_ROOT = os.path.abspath(os.path.join(BASE_DIR, "..", ".."))
+STATIC_DIR = os.path.join(PROJECT_ROOT, "static")
+TEMPLATES_DIR = os.path.join(PROJECT_ROOT, "templates")
+CONFIG_PATH = os.path.join(PROJECT_ROOT, "mysql_config.json")
+
+
+def db_connect():
+    console_log("Connecting to the Database.")
+    with open(CONFIG_PATH) as f:
+        mysql_cred = json.load(f)
+
+        conn = pymysql.connect(
+            host=mysql_cred["servername"],
+            user=mysql_cred["username"],
+            password=mysql_cred["password"],
+            db=mysql_cred["dbname"],
+            charset="utf8mb4",
+            cursorclass=pymysql.cursors.DictCursor,
+            autocommit=False,
+        )
+        console_log(f"Connected to Database - {mysql_cred['dbname']}")
+        return conn
+
+
+def db_connect_root():
+    with open(CONFIG_PATH) as f:
+        mysql_cred = json.load(f)
+
+        conn = pymysql.connect(
+            host=mysql_cred["servername"],
+            user=mysql_cred["username"],
+            password=mysql_cred["password"],
+            charset="utf8mb4",
+            cursorclass=pymysql.cursors.DictCursor,
+            autocommit=True,
+        )
+
+        return (conn, mysql_cred["dbname"])
+
+
+def get_db_name():
+    with open(CONFIG_PATH) as f:
+        mysql_cred = json.load(f)
+        return mysql_cred["dbname"]
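
[Editor's note: with credential loading consolidated here, call sites across fileset.py and pagination.py reduce to the pattern below; the query is illustrative only:]

    from src.utils.db_config import db_connect

    connection = db_connect()  # reads mysql_config.json from the project root
    try:
        with connection.cursor() as cursor:  # DictCursor: rows come back as dicts
            cursor.execute("SELECT COUNT(*) AS c FROM fileset")
            count = cursor.fetchone()["c"]
    finally:
        connection.close()
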
diff --git a/static/icons/filter/unfold_more.png b/static/icons/filter/unfold_more.png
deleted file mode 100644
index 7d5d598..0000000
Binary files a/static/icons/filter/unfold_more.png and /dev/null differ
diff --git a/tests/__init__.py b/tests/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/tests/test_compute_hash.py b/tests/test_compute_hash.py
index b1cab92..9f60372 100644
--- a/tests/test_compute_hash.py
+++ b/tests/test_compute_hash.py
@@ -1,9 +1,5 @@
-import sys
 import os
-
-sys.path.insert(0, ".")
-
-from compute_hash import is_macbin
+from src.scripts.compute_hash import is_macbin
 
 
 def test_is_macbin():
diff --git a/tests/test_punycode.py b/tests/test_punycode.py
index 1affd0c..5c0b1c8 100644
--- a/tests/test_punycode.py
+++ b/tests/test_punycode.py
@@ -1,4 +1,4 @@
-from db_functions import punycode_need_encode, encode_punycode
+from src.scripts.compute_hash import punycode_need_encode, encode_punycode
 
 
 def test_needs_punyencoding():


Commit: b825448fe83749e662fcfd9103c291cb9516d2d5
    https://github.com/scummvm/scummvm-sites/commit/b825448fe83749e662fcfd9103c291cb9516d2d5
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Set up uv for package management.

Changed paths:
  A .python-version
  A pyproject.toml
  A uv.lock


diff --git a/.python-version b/.python-version
new file mode 100644
index 0000000..e4fba21
--- /dev/null
+++ b/.python-version
@@ -0,0 +1 @@
+3.12
diff --git a/pyproject.toml b/pyproject.toml
new file mode 100644
index 0000000..6614fd6
--- /dev/null
+++ b/pyproject.toml
@@ -0,0 +1,26 @@
+[project]
+name = "scummvm-sites"
+version = "0.1.0"
+description = "Add your description here"
+requires-python = ">=3.12"
+dependencies = [
+    "blinker>=1.9.0",
+    "cffi>=1.17.1",
+    "click>=8.2.1",
+    "cryptography>=45.0.6",
+    "flask>=3.1.1",
+    "flask-limiter>=3.12",
+    "iniconfig>=2.1.0",
+    "itsdangerous>=2.2.0",
+    "jinja2>=3.1.6",
+    "markupsafe>=3.0.2",
+    "packaging>=25.0",
+    "pluggy>=1.6.0",
+    "pycparser>=2.22",
+    "pygments>=2.19.2",
+    "pymysql>=1.1.1",
+    "pytest>=8.4.1",
+    "setuptools>=80.9.0",
+    "werkzeug>=3.1.3",
+    "wheel>=0.45.1",
+]
diff --git a/uv.lock b/uv.lock
new file mode 100644
index 0000000..9f90dee
--- /dev/null
+++ b/uv.lock
@@ -0,0 +1,461 @@
+version = 1
+revision = 3
+requires-python = ">=3.12"
+
+[[package]]
+name = "blinker"
+version = "1.9.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/21/28/9b3f50ce0e048515135495f198351908d99540d69bfdc8c1d15b73dc55ce/blinker-1.9.0.tar.gz", hash = "sha256:b4ce2265a7abece45e7cc896e98dbebe6cead56bcf805a3d23136d145f5445bf", size = 22460, upload-time = "2024-11-08T17:25:47.436Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/10/cb/f2ad4230dc2eb1a74edf38f1a38b9b52277f75bef262d8908e60d957e13c/blinker-1.9.0-py3-none-any.whl", hash = "sha256:ba0efaa9080b619ff2f3459d1d500c57bddea4a6b424b60a91141db6fd2f08bc", size = 8458, upload-time = "2024-11-08T17:25:46.184Z" },
+]
+
+[[package]]
+name = "cffi"
+version = "1.17.1"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "pycparser" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/fc/97/c783634659c2920c3fc70419e3af40972dbaf758daa229a7d6ea6135c90d/cffi-1.17.1.tar.gz", hash = "sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824", size = 516621, upload-time = "2024-09-04T20:45:21.852Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/5a/84/e94227139ee5fb4d600a7a4927f322e1d4aea6fdc50bd3fca8493caba23f/cffi-1.17.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4", size = 183178, upload-time = "2024-09-04T20:44:12.232Z" },
+    { url = "https://files.pythonhosted.org/packages/da/ee/fb72c2b48656111c4ef27f0f91da355e130a923473bf5ee75c5643d00cca/cffi-1.17.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c", size = 178840, upload-time = "2024-09-04T20:44:13.739Z" },
+    { url = "https://files.pythonhosted.org/packages/cc/b6/db007700f67d151abadf508cbfd6a1884f57eab90b1bb985c4c8c02b0f28/cffi-1.17.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36", size = 454803, upload-time = "2024-09-04T20:44:15.231Z" },
+    { url = "https://files.pythonhosted.org/packages/1a/df/f8d151540d8c200eb1c6fba8cd0dfd40904f1b0682ea705c36e6c2e97ab3/cffi-1.17.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5", size = 478850, upload-time = "2024-09-04T20:44:17.188Z" },
+    { url = "https://files.pythonhosted.org/packages/28/c0/b31116332a547fd2677ae5b78a2ef662dfc8023d67f41b2a83f7c2aa78b1/cffi-1.17.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff", size = 485729, upload-time = "2024-09-04T20:44:18.688Z" },
+    { url = "https://files.pythonhosted.org/packages/91/2b/9a1ddfa5c7f13cab007a2c9cc295b70fbbda7cb10a286aa6810338e60ea1/cffi-1.17.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99", size = 471256, upload-time = "2024-09-04T20:44:20.248Z" },
+    { url = "https://files.pythonhosted.org/packages/b2/d5/da47df7004cb17e4955df6a43d14b3b4ae77737dff8bf7f8f333196717bf/cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93", size = 479424, upload-time = "2024-09-04T20:44:21.673Z" },
+    { url = "https://files.pythonhosted.org/packages/0b/ac/2a28bcf513e93a219c8a4e8e125534f4f6db03e3179ba1c45e949b76212c/cffi-1.17.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3", size = 484568, upload-time = "2024-09-04T20:44:23.245Z" },
+    { url = "https://files.pythonhosted.org/packages/d4/38/ca8a4f639065f14ae0f1d9751e70447a261f1a30fa7547a828ae08142465/cffi-1.17.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8", size = 488736, upload-time = "2024-09-04T20:44:24.757Z" },
+    { url = "https://files.pythonhosted.org/packages/86/c5/28b2d6f799ec0bdecf44dced2ec5ed43e0eb63097b0f58c293583b406582/cffi-1.17.1-cp312-cp312-win32.whl", hash = "sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65", size = 172448, upload-time = "2024-09-04T20:44:26.208Z" },
+    { url = "https://files.pythonhosted.org/packages/50/b9/db34c4755a7bd1cb2d1603ac3863f22bcecbd1ba29e5ee841a4bc510b294/cffi-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903", size = 181976, upload-time = "2024-09-04T20:44:27.578Z" },
+    { url = "https://files.pythonhosted.org/packages/8d/f8/dd6c246b148639254dad4d6803eb6a54e8c85c6e11ec9df2cffa87571dbe/cffi-1.17.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e", size = 182989, upload-time = "2024-09-04T20:44:28.956Z" },
+    { url = "https://files.pythonhosted.org/packages/8b/f1/672d303ddf17c24fc83afd712316fda78dc6fce1cd53011b839483e1ecc8/cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2", size = 178802, upload-time = "2024-09-04T20:44:30.289Z" },
+    { url = "https://files.pythonhosted.org/packages/0e/2d/eab2e858a91fdff70533cab61dcff4a1f55ec60425832ddfdc9cd36bc8af/cffi-1.17.1-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3", size = 454792, upload-time = "2024-09-04T20:44:32.01Z" },
+    { url = "https://files.pythonhosted.org/packages/75/b2/fbaec7c4455c604e29388d55599b99ebcc250a60050610fadde58932b7ee/cffi-1.17.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683", size = 478893, upload-time = "2024-09-04T20:44:33.606Z" },
+    { url = "https://files.pythonhosted.org/packages/4f/b7/6e4a2162178bf1935c336d4da8a9352cccab4d3a5d7914065490f08c0690/cffi-1.17.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5", size = 485810, upload-time = "2024-09-04T20:44:35.191Z" },
+    { url = "https://files.pythonhosted.org/packages/c7/8a/1d0e4a9c26e54746dc08c2c6c037889124d4f59dffd853a659fa545f1b40/cffi-1.17.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4", size = 471200, upload-time = "2024-09-04T20:44:36.743Z" },
+    { url = "https://files.pythonhosted.org/packages/26/9f/1aab65a6c0db35f43c4d1b4f580e8df53914310afc10ae0397d29d697af4/cffi-1.17.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd", size = 479447, upload-time = "2024-09-04T20:44:38.492Z" },
+    { url = "https://files.pythonhosted.org/packages/5f/e4/fb8b3dd8dc0e98edf1135ff067ae070bb32ef9d509d6cb0f538cd6f7483f/cffi-1.17.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed", size = 484358, upload-time = "2024-09-04T20:44:40.046Z" },
+    { url = "https://files.pythonhosted.org/packages/f1/47/d7145bf2dc04684935d57d67dff9d6d795b2ba2796806bb109864be3a151/cffi-1.17.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9", size = 488469, upload-time = "2024-09-04T20:44:41.616Z" },
+    { url = "https://files.pythonhosted.org/packages/bf/ee/f94057fa6426481d663b88637a9a10e859e492c73d0384514a17d78ee205/cffi-1.17.1-cp313-cp313-win32.whl", hash = "sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d", size = 172475, upload-time = "2024-09-04T20:44:43.733Z" },
+    { url = "https://files.pythonhosted.org/packages/7c/fc/6a8cb64e5f0324877d503c854da15d76c1e50eb722e320b15345c4d0c6de/cffi-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a", size = 182009, upload-time = "2024-09-04T20:44:45.309Z" },
+]
+
+[[package]]
+name = "click"
+version = "8.2.1"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "colorama", marker = "sys_platform == 'win32'" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/60/6c/8ca2efa64cf75a977a0d7fac081354553ebe483345c734fb6b6515d96bbc/click-8.2.1.tar.gz", hash = "sha256:27c491cc05d968d271d5a1db13e3b5a184636d9d930f148c50b038f0d0646202", size = 286342, upload-time = "2025-05-20T23:19:49.832Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/85/32/10bb5764d90a8eee674e9dc6f4db6a0ab47c8c4d0d83c27f7c39ac415a4d/click-8.2.1-py3-none-any.whl", hash = "sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b", size = 102215, upload-time = "2025-05-20T23:19:47.796Z" },
+]
+
+[[package]]
+name = "colorama"
+version = "0.4.6"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload-time = "2022-10-25T02:36:22.414Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" },
+]
+
+[[package]]
+name = "cryptography"
+version = "45.0.6"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "cffi", marker = "platform_python_implementation != 'PyPy'" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/d6/0d/d13399c94234ee8f3df384819dc67e0c5ce215fb751d567a55a1f4b028c7/cryptography-45.0.6.tar.gz", hash = "sha256:5c966c732cf6e4a276ce83b6e4c729edda2df6929083a952cc7da973c539c719", size = 744949, upload-time = "2025-08-05T23:59:27.93Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/8c/29/2793d178d0eda1ca4a09a7c4e09a5185e75738cc6d526433e8663b460ea6/cryptography-45.0.6-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:048e7ad9e08cf4c0ab07ff7f36cc3115924e22e2266e034450a890d9e312dd74", size = 7042702, upload-time = "2025-08-05T23:58:23.464Z" },
+    { url = "https://files.pythonhosted.org/packages/b3/b6/cabd07410f222f32c8d55486c464f432808abaa1f12af9afcbe8f2f19030/cryptography-45.0.6-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:44647c5d796f5fc042bbc6d61307d04bf29bccb74d188f18051b635f20a9c75f", size = 4206483, upload-time = "2025-08-05T23:58:27.132Z" },
+    { url = "https://files.pythonhosted.org/packages/8b/9e/f9c7d36a38b1cfeb1cc74849aabe9bf817990f7603ff6eb485e0d70e0b27/cryptography-45.0.6-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e40b80ecf35ec265c452eea0ba94c9587ca763e739b8e559c128d23bff7ebbbf", size = 4429679, upload-time = "2025-08-05T23:58:29.152Z" },
+    { url = "https://files.pythonhosted.org/packages/9c/2a/4434c17eb32ef30b254b9e8b9830cee4e516f08b47fdd291c5b1255b8101/cryptography-45.0.6-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:00e8724bdad672d75e6f069b27970883179bd472cd24a63f6e620ca7e41cc0c5", size = 4210553, upload-time = "2025-08-05T23:58:30.596Z" },
+    { url = "https://files.pythonhosted.org/packages/ef/1d/09a5df8e0c4b7970f5d1f3aff1b640df6d4be28a64cae970d56c6cf1c772/cryptography-45.0.6-cp311-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:7a3085d1b319d35296176af31c90338eeb2ddac8104661df79f80e1d9787b8b2", size = 3894499, upload-time = "2025-08-05T23:58:32.03Z" },
+    { url = "https://files.pythonhosted.org/packages/79/62/120842ab20d9150a9d3a6bdc07fe2870384e82f5266d41c53b08a3a96b34/cryptography-45.0.6-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:1b7fa6a1c1188c7ee32e47590d16a5a0646270921f8020efc9a511648e1b2e08", size = 4458484, upload-time = "2025-08-05T23:58:33.526Z" },
+    { url = "https://files.pythonhosted.org/packages/fd/80/1bc3634d45ddfed0871bfba52cf8f1ad724761662a0c792b97a951fb1b30/cryptography-45.0.6-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:275ba5cc0d9e320cd70f8e7b96d9e59903c815ca579ab96c1e37278d231fc402", size = 4210281, upload-time = "2025-08-05T23:58:35.445Z" },
+    { url = "https://files.pythonhosted.org/packages/7d/fe/ffb12c2d83d0ee625f124880a1f023b5878f79da92e64c37962bbbe35f3f/cryptography-45.0.6-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:f4028f29a9f38a2025abedb2e409973709c660d44319c61762202206ed577c42", size = 4456890, upload-time = "2025-08-05T23:58:36.923Z" },
+    { url = "https://files.pythonhosted.org/packages/8c/8e/b3f3fe0dc82c77a0deb5f493b23311e09193f2268b77196ec0f7a36e3f3e/cryptography-45.0.6-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:ee411a1b977f40bd075392c80c10b58025ee5c6b47a822a33c1198598a7a5f05", size = 4333247, upload-time = "2025-08-05T23:58:38.781Z" },
+    { url = "https://files.pythonhosted.org/packages/b3/a6/c3ef2ab9e334da27a1d7b56af4a2417d77e7806b2e0f90d6267ce120d2e4/cryptography-45.0.6-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:e2a21a8eda2d86bb604934b6b37691585bd095c1f788530c1fcefc53a82b3453", size = 4565045, upload-time = "2025-08-05T23:58:40.415Z" },
+    { url = "https://files.pythonhosted.org/packages/31/c3/77722446b13fa71dddd820a5faab4ce6db49e7e0bf8312ef4192a3f78e2f/cryptography-45.0.6-cp311-abi3-win32.whl", hash = "sha256:d063341378d7ee9c91f9d23b431a3502fc8bfacd54ef0a27baa72a0843b29159", size = 2928923, upload-time = "2025-08-05T23:58:41.919Z" },
+    { url = "https://files.pythonhosted.org/packages/38/63/a025c3225188a811b82932a4dcc8457a26c3729d81578ccecbcce2cb784e/cryptography-45.0.6-cp311-abi3-win_amd64.whl", hash = "sha256:833dc32dfc1e39b7376a87b9a6a4288a10aae234631268486558920029b086ec", size = 3403805, upload-time = "2025-08-05T23:58:43.792Z" },
+    { url = "https://files.pythonhosted.org/packages/5b/af/bcfbea93a30809f126d51c074ee0fac5bd9d57d068edf56c2a73abedbea4/cryptography-45.0.6-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:3436128a60a5e5490603ab2adbabc8763613f638513ffa7d311c900a8349a2a0", size = 7020111, upload-time = "2025-08-05T23:58:45.316Z" },
+    { url = "https://files.pythonhosted.org/packages/98/c6/ea5173689e014f1a8470899cd5beeb358e22bb3cf5a876060f9d1ca78af4/cryptography-45.0.6-cp37-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0d9ef57b6768d9fa58e92f4947cea96ade1233c0e236db22ba44748ffedca394", size = 4198169, upload-time = "2025-08-05T23:58:47.121Z" },
+    { url = "https://files.pythonhosted.org/packages/ba/73/b12995edc0c7e2311ffb57ebd3b351f6b268fed37d93bfc6f9856e01c473/cryptography-45.0.6-cp37-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ea3c42f2016a5bbf71825537c2ad753f2870191134933196bee408aac397b3d9", size = 4421273, upload-time = "2025-08-05T23:58:48.557Z" },
+    { url = "https://files.pythonhosted.org/packages/f7/6e/286894f6f71926bc0da67408c853dd9ba953f662dcb70993a59fd499f111/cryptography-45.0.6-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:20ae4906a13716139d6d762ceb3e0e7e110f7955f3bc3876e3a07f5daadec5f3", size = 4199211, upload-time = "2025-08-05T23:58:50.139Z" },
+    { url = "https://files.pythonhosted.org/packages/de/34/a7f55e39b9623c5cb571d77a6a90387fe557908ffc44f6872f26ca8ae270/cryptography-45.0.6-cp37-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:2dac5ec199038b8e131365e2324c03d20e97fe214af051d20c49db129844e8b3", size = 3883732, upload-time = "2025-08-05T23:58:52.253Z" },
+    { url = "https://files.pythonhosted.org/packages/f9/b9/c6d32edbcba0cd9f5df90f29ed46a65c4631c4fbe11187feb9169c6ff506/cryptography-45.0.6-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:18f878a34b90d688982e43f4b700408b478102dd58b3e39de21b5ebf6509c301", size = 4450655, upload-time = "2025-08-05T23:58:53.848Z" },
+    { url = "https://files.pythonhosted.org/packages/77/2d/09b097adfdee0227cfd4c699b3375a842080f065bab9014248933497c3f9/cryptography-45.0.6-cp37-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:5bd6020c80c5b2b2242d6c48487d7b85700f5e0038e67b29d706f98440d66eb5", size = 4198956, upload-time = "2025-08-05T23:58:55.209Z" },
+    { url = "https://files.pythonhosted.org/packages/55/66/061ec6689207d54effdff535bbdf85cc380d32dd5377173085812565cf38/cryptography-45.0.6-cp37-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:eccddbd986e43014263eda489abbddfbc287af5cddfd690477993dbb31e31016", size = 4449859, upload-time = "2025-08-05T23:58:56.639Z" },
+    { url = "https://files.pythonhosted.org/packages/41/ff/e7d5a2ad2d035e5a2af116e1a3adb4d8fcd0be92a18032917a089c6e5028/cryptography-45.0.6-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:550ae02148206beb722cfe4ef0933f9352bab26b087af00e48fdfb9ade35c5b3", size = 4320254, upload-time = "2025-08-05T23:58:58.833Z" },
+    { url = "https://files.pythonhosted.org/packages/82/27/092d311af22095d288f4db89fcaebadfb2f28944f3d790a4cf51fe5ddaeb/cryptography-45.0.6-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:5b64e668fc3528e77efa51ca70fadcd6610e8ab231e3e06ae2bab3b31c2b8ed9", size = 4554815, upload-time = "2025-08-05T23:59:00.283Z" },
+    { url = "https://files.pythonhosted.org/packages/7e/01/aa2f4940262d588a8fdf4edabe4cda45854d00ebc6eaac12568b3a491a16/cryptography-45.0.6-cp37-abi3-win32.whl", hash = "sha256:780c40fb751c7d2b0c6786ceee6b6f871e86e8718a8ff4bc35073ac353c7cd02", size = 2912147, upload-time = "2025-08-05T23:59:01.716Z" },
+    { url = "https://files.pythonhosted.org/packages/0a/bc/16e0276078c2de3ceef6b5a34b965f4436215efac45313df90d55f0ba2d2/cryptography-45.0.6-cp37-abi3-win_amd64.whl", hash = "sha256:20d15aed3ee522faac1a39fbfdfee25d17b1284bafd808e1640a74846d7c4d1b", size = 3390459, upload-time = "2025-08-05T23:59:03.358Z" },
+]
+
+[[package]]
+name = "deprecated"
+version = "1.2.18"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "wrapt" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/98/97/06afe62762c9a8a86af0cfb7bfdab22a43ad17138b07af5b1a58442690a2/deprecated-1.2.18.tar.gz", hash = "sha256:422b6f6d859da6f2ef57857761bfb392480502a64c3028ca9bbe86085d72115d", size = 2928744, upload-time = "2025-01-27T10:46:25.7Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/6e/c6/ac0b6c1e2d138f1002bcf799d330bd6d85084fece321e662a14223794041/Deprecated-1.2.18-py2.py3-none-any.whl", hash = "sha256:bd5011788200372a32418f888e326a09ff80d0214bd961147cfed01b5c018eec", size = 9998, upload-time = "2025-01-27T10:46:09.186Z" },
+]
+
+[[package]]
+name = "flask"
+version = "3.1.1"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "blinker" },
+    { name = "click" },
+    { name = "itsdangerous" },
+    { name = "jinja2" },
+    { name = "markupsafe" },
+    { name = "werkzeug" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/c0/de/e47735752347f4128bcf354e0da07ef311a78244eba9e3dc1d4a5ab21a98/flask-3.1.1.tar.gz", hash = "sha256:284c7b8f2f58cb737f0cf1c30fd7eaf0ccfcde196099d24ecede3fc2005aa59e", size = 753440, upload-time = "2025-05-13T15:01:17.447Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/3d/68/9d4508e893976286d2ead7f8f571314af6c2037af34853a30fd769c02e9d/flask-3.1.1-py3-none-any.whl", hash = "sha256:07aae2bb5eaf77993ef57e357491839f5fd9f4dc281593a81a9e4d79a24f295c", size = 103305, upload-time = "2025-05-13T15:01:15.591Z" },
+]
+
+[[package]]
+name = "flask-limiter"
+version = "3.12"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "flask" },
+    { name = "limits" },
+    { name = "ordered-set" },
+    { name = "rich" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/70/75/92b237dd4f6e19196bc73007fff288ab1d4c64242603f3c401ff8fc58a42/flask_limiter-3.12.tar.gz", hash = "sha256:f9e3e3d0c4acd0d1ffbfa729e17198dd1042f4d23c130ae160044fc930e21300", size = 303162, upload-time = "2025-03-15T02:23:10.734Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/66/ba/40dafa278ee6a4300179d2bf59a1aa415165c26f74cfa17462132996186b/flask_limiter-3.12-py3-none-any.whl", hash = "sha256:b94c9e9584df98209542686947cf647f1ede35ed7e4ab564934a2bb9ed46b143", size = 28490, upload-time = "2025-03-15T02:23:08.919Z" },
+]
+
+[[package]]
+name = "iniconfig"
+version = "2.1.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/f2/97/ebf4da567aa6827c909642694d71c9fcf53e5b504f2d96afea02718862f3/iniconfig-2.1.0.tar.gz", hash = "sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7", size = 4793, upload-time = "2025-03-19T20:09:59.721Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/2c/e1/e6716421ea10d38022b952c159d5161ca1193197fb744506875fbb87ea7b/iniconfig-2.1.0-py3-none-any.whl", hash = "sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760", size = 6050, upload-time = "2025-03-19T20:10:01.071Z" },
+]
+
+[[package]]
+name = "itsdangerous"
+version = "2.2.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/9c/cb/8ac0172223afbccb63986cc25049b154ecfb5e85932587206f42317be31d/itsdangerous-2.2.0.tar.gz", hash = "sha256:e0050c0b7da1eea53ffaf149c0cfbb5c6e2e2b69c4bef22c81fa6eb73e5f6173", size = 54410, upload-time = "2024-04-16T21:28:15.614Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/04/96/92447566d16df59b2a776c0fb82dbc4d9e07cd95062562af01e408583fc4/itsdangerous-2.2.0-py3-none-any.whl", hash = "sha256:c6242fc49e35958c8b15141343aa660db5fc54d4f13a1db01a3f5891b98700ef", size = 16234, upload-time = "2024-04-16T21:28:14.499Z" },
+]
+
+[[package]]
+name = "jinja2"
+version = "3.1.6"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "markupsafe" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/df/bf/f7da0350254c0ed7c72f3e33cef02e048281fec7ecec5f032d4aac52226b/jinja2-3.1.6.tar.gz", hash = "sha256:0137fb05990d35f1275a587e9aee6d56da821fc83491a0fb838183be43f66d6d", size = 245115, upload-time = "2025-03-05T20:05:02.478Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/62/a1/3d680cbfd5f4b8f15abc1d571870c5fc3e594bb582bc3b64ea099db13e56/jinja2-3.1.6-py3-none-any.whl", hash = "sha256:85ece4451f492d0c13c5dd7c13a64681a86afae63a5f347908daf103ce6d2f67", size = 134899, upload-time = "2025-03-05T20:05:00.369Z" },
+]
+
+[[package]]
+name = "limits"
+version = "5.5.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "deprecated" },
+    { name = "packaging" },
+    { name = "typing-extensions" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/76/17/7a2e9378c8b8bd4efe3573fd18d2793ad2a37051af5ccce94550a4e5d62d/limits-5.5.0.tar.gz", hash = "sha256:ee269fedb078a904608b264424d9ef4ab10555acc8d090b6fc1db70e913327ea", size = 95514, upload-time = "2025-08-05T18:23:54.771Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/bf/68/ee314018c28da75ece5a639898b4745bd0687c0487fc465811f0c4b9cd44/limits-5.5.0-py3-none-any.whl", hash = "sha256:57217d01ffa5114f7e233d1f5e5bdc6fe60c9b24ade387bf4d5e83c5cf929bae", size = 60948, upload-time = "2025-08-05T18:23:53.335Z" },
+]
+
+[[package]]
+name = "markdown-it-py"
+version = "3.0.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "mdurl" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/38/71/3b932df36c1a044d397a1f92d1cf91ee0a503d91e470cbd670aa66b07ed0/markdown-it-py-3.0.0.tar.gz", hash = "sha256:e3f60a94fa066dc52ec76661e37c851cb232d92f9886b15cb560aaada2df8feb", size = 74596, upload-time = "2023-06-03T06:41:14.443Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/42/d7/1ec15b46af6af88f19b8e5ffea08fa375d433c998b8a7639e76935c14f1f/markdown_it_py-3.0.0-py3-none-any.whl", hash = "sha256:355216845c60bd96232cd8d8c40e8f9765cc86f46880e43a8fd22dc1a1a8cab1", size = 87528, upload-time = "2023-06-03T06:41:11.019Z" },
+]
+
+[[package]]
+name = "markupsafe"
+version = "3.0.2"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/b2/97/5d42485e71dfc078108a86d6de8fa46db44a1a9295e89c5d6d4a06e23a62/markupsafe-3.0.2.tar.gz", hash = "sha256:ee55d3edf80167e48ea11a923c7386f4669df67d7994554387f84e7d8b0a2bf0", size = 20537, upload-time = "2024-10-18T15:21:54.129Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/22/09/d1f21434c97fc42f09d290cbb6350d44eb12f09cc62c9476effdb33a18aa/MarkupSafe-3.0.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:9778bd8ab0a994ebf6f84c2b949e65736d5575320a17ae8984a77fab08db94cf", size = 14274, upload-time = "2024-10-18T15:21:13.777Z" },
+    { url = "https://files.pythonhosted.org/packages/6b/b0/18f76bba336fa5aecf79d45dcd6c806c280ec44538b3c13671d49099fdd0/MarkupSafe-3.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:846ade7b71e3536c4e56b386c2a47adf5741d2d8b94ec9dc3e92e5e1ee1e2225", size = 12348, upload-time = "2024-10-18T15:21:14.822Z" },
+    { url = "https://files.pythonhosted.org/packages/e0/25/dd5c0f6ac1311e9b40f4af06c78efde0f3b5cbf02502f8ef9501294c425b/MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1c99d261bd2d5f6b59325c92c73df481e05e57f19837bdca8413b9eac4bd8028", size = 24149, upload-time = "2024-10-18T15:21:15.642Z" },
+    { url = "https://files.pythonhosted.org/packages/f3/f0/89e7aadfb3749d0f52234a0c8c7867877876e0a20b60e2188e9850794c17/MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e17c96c14e19278594aa4841ec148115f9c7615a47382ecb6b82bd8fea3ab0c8", size = 23118, upload-time = "2024-10-18T15:21:17.133Z" },
+    { url = "https://files.pythonhosted.org/packages/d5/da/f2eeb64c723f5e3777bc081da884b414671982008c47dcc1873d81f625b6/MarkupSafe-3.0.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:88416bd1e65dcea10bc7569faacb2c20ce071dd1f87539ca2ab364bf6231393c", size = 22993, upload-time = "2024-10-18T15:21:18.064Z" },
+    { url = "https://files.pythonhosted.org/packages/da/0e/1f32af846df486dce7c227fe0f2398dc7e2e51d4a370508281f3c1c5cddc/MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2181e67807fc2fa785d0592dc2d6206c019b9502410671cc905d132a92866557", size = 24178, upload-time = "2024-10-18T15:21:18.859Z" },
+    { url = "https://files.pythonhosted.org/packages/c4/f6/bb3ca0532de8086cbff5f06d137064c8410d10779c4c127e0e47d17c0b71/MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:52305740fe773d09cffb16f8ed0427942901f00adedac82ec8b67752f58a1b22", size = 23319, upload-time = "2024-10-18T15:21:19.671Z" },
+    { url = "https://files.pythonhosted.org/packages/a2/82/8be4c96ffee03c5b4a034e60a31294daf481e12c7c43ab8e34a1453ee48b/MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:ad10d3ded218f1039f11a75f8091880239651b52e9bb592ca27de44eed242a48", size = 23352, upload-time = "2024-10-18T15:21:20.971Z" },
+    { url = "https://files.pythonhosted.org/packages/51/ae/97827349d3fcffee7e184bdf7f41cd6b88d9919c80f0263ba7acd1bbcb18/MarkupSafe-3.0.2-cp312-cp312-win32.whl", hash = "sha256:0f4ca02bea9a23221c0182836703cbf8930c5e9454bacce27e767509fa286a30", size = 15097, upload-time = "2024-10-18T15:21:22.646Z" },
+    { url = "https://files.pythonhosted.org/packages/c1/80/a61f99dc3a936413c3ee4e1eecac96c0da5ed07ad56fd975f1a9da5bc630/MarkupSafe-3.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:8e06879fc22a25ca47312fbe7c8264eb0b662f6db27cb2d3bbbc74b1df4b9b87", size = 15601, upload-time = "2024-10-18T15:21:23.499Z" },
+    { url = "https://files.pythonhosted.org/packages/83/0e/67eb10a7ecc77a0c2bbe2b0235765b98d164d81600746914bebada795e97/MarkupSafe-3.0.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ba9527cdd4c926ed0760bc301f6728ef34d841f405abf9d4f959c478421e4efd", size = 14274, upload-time = "2024-10-18T15:21:24.577Z" },
+    { url = "https://files.pythonhosted.org/packages/2b/6d/9409f3684d3335375d04e5f05744dfe7e9f120062c9857df4ab490a1031a/MarkupSafe-3.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f8b3d067f2e40fe93e1ccdd6b2e1d16c43140e76f02fb1319a05cf2b79d99430", size = 12352, upload-time = "2024-10-18T15:21:25.382Z" },
+    { url = "https://files.pythonhosted.org/packages/d2/f5/6eadfcd3885ea85fe2a7c128315cc1bb7241e1987443d78c8fe712d03091/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:569511d3b58c8791ab4c2e1285575265991e6d8f8700c7be0e88f86cb0672094", size = 24122, upload-time = "2024-10-18T15:21:26.199Z" },
+    { url = "https://files.pythonhosted.org/packages/0c/91/96cf928db8236f1bfab6ce15ad070dfdd02ed88261c2afafd4b43575e9e9/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:15ab75ef81add55874e7ab7055e9c397312385bd9ced94920f2802310c930396", size = 23085, upload-time = "2024-10-18T15:21:27.029Z" },
+    { url = "https://files.pythonhosted.org/packages/c2/cf/c9d56af24d56ea04daae7ac0940232d31d5a8354f2b457c6d856b2057d69/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f3818cb119498c0678015754eba762e0d61e5b52d34c8b13d770f0719f7b1d79", size = 22978, upload-time = "2024-10-18T15:21:27.846Z" },
+    { url = "https://files.pythonhosted.org/packages/2a/9f/8619835cd6a711d6272d62abb78c033bda638fdc54c4e7f4272cf1c0962b/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:cdb82a876c47801bb54a690c5ae105a46b392ac6099881cdfb9f6e95e4014c6a", size = 24208, upload-time = "2024-10-18T15:21:28.744Z" },
+    { url = "https://files.pythonhosted.org/packages/f9/bf/176950a1792b2cd2102b8ffeb5133e1ed984547b75db47c25a67d3359f77/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:cabc348d87e913db6ab4aa100f01b08f481097838bdddf7c7a84b7575b7309ca", size = 23357, upload-time = "2024-10-18T15:21:29.545Z" },
+    { url = "https://files.pythonhosted.org/packages/ce/4f/9a02c1d335caabe5c4efb90e1b6e8ee944aa245c1aaaab8e8a618987d816/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:444dcda765c8a838eaae23112db52f1efaf750daddb2d9ca300bcae1039adc5c", size = 23344, upload-time = "2024-10-18T15:21:30.366Z" },
+    { url = "https://files.pythonhosted.org/packages/ee/55/c271b57db36f748f0e04a759ace9f8f759ccf22b4960c270c78a394f58be/MarkupSafe-3.0.2-cp313-cp313-win32.whl", hash = "sha256:bcf3e58998965654fdaff38e58584d8937aa3096ab5354d493c77d1fdd66d7a1", size = 15101, upload-time = "2024-10-18T15:21:31.207Z" },
+    { url = "https://files.pythonhosted.org/packages/29/88/07df22d2dd4df40aba9f3e402e6dc1b8ee86297dddbad4872bd5e7b0094f/MarkupSafe-3.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:e6a2a455bd412959b57a172ce6328d2dd1f01cb2135efda2e4576e8a23fa3b0f", size = 15603, upload-time = "2024-10-18T15:21:32.032Z" },
+    { url = "https://files.pythonhosted.org/packages/62/6a/8b89d24db2d32d433dffcd6a8779159da109842434f1dd2f6e71f32f738c/MarkupSafe-3.0.2-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:b5a6b3ada725cea8a5e634536b1b01c30bcdcd7f9c6fff4151548d5bf6b3a36c", size = 14510, upload-time = "2024-10-18T15:21:33.625Z" },
+    { url = "https://files.pythonhosted.org/packages/7a/06/a10f955f70a2e5a9bf78d11a161029d278eeacbd35ef806c3fd17b13060d/MarkupSafe-3.0.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:a904af0a6162c73e3edcb969eeeb53a63ceeb5d8cf642fade7d39e7963a22ddb", size = 12486, upload-time = "2024-10-18T15:21:34.611Z" },
+    { url = "https://files.pythonhosted.org/packages/34/cf/65d4a571869a1a9078198ca28f39fba5fbb910f952f9dbc5220afff9f5e6/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4aa4e5faecf353ed117801a068ebab7b7e09ffb6e1d5e412dc852e0da018126c", size = 25480, upload-time = "2024-10-18T15:21:35.398Z" },
+    { url = "https://files.pythonhosted.org/packages/0c/e3/90e9651924c430b885468b56b3d597cabf6d72be4b24a0acd1fa0e12af67/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0ef13eaeee5b615fb07c9a7dadb38eac06a0608b41570d8ade51c56539e509d", size = 23914, upload-time = "2024-10-18T15:21:36.231Z" },
+    { url = "https://files.pythonhosted.org/packages/66/8c/6c7cf61f95d63bb866db39085150df1f2a5bd3335298f14a66b48e92659c/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d16a81a06776313e817c951135cf7340a3e91e8c1ff2fac444cfd75fffa04afe", size = 23796, upload-time = "2024-10-18T15:21:37.073Z" },
+    { url = "https://files.pythonhosted.org/packages/bb/35/cbe9238ec3f47ac9a7c8b3df7a808e7cb50fe149dc7039f5f454b3fba218/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:6381026f158fdb7c72a168278597a5e3a5222e83ea18f543112b2662a9b699c5", size = 25473, upload-time = "2024-10-18T15:21:37.932Z" },
+    { url = "https://files.pythonhosted.org/packages/e6/32/7621a4382488aa283cc05e8984a9c219abad3bca087be9ec77e89939ded9/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:3d79d162e7be8f996986c064d1c7c817f6df3a77fe3d6859f6f9e7be4b8c213a", size = 24114, upload-time = "2024-10-18T15:21:39.799Z" },
+    { url = "https://files.pythonhosted.org/packages/0d/80/0985960e4b89922cb5a0bac0ed39c5b96cbc1a536a99f30e8c220a996ed9/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:131a3c7689c85f5ad20f9f6fb1b866f402c445b220c19fe4308c0b147ccd2ad9", size = 24098, upload-time = "2024-10-18T15:21:40.813Z" },
+    { url = "https://files.pythonhosted.org/packages/82/78/fedb03c7d5380df2427038ec8d973587e90561b2d90cd472ce9254cf348b/MarkupSafe-3.0.2-cp313-cp313t-win32.whl", hash = "sha256:ba8062ed2cf21c07a9e295d5b8a2a5ce678b913b45fdf68c32d95d6c1291e0b6", size = 15208, upload-time = "2024-10-18T15:21:41.814Z" },
+    { url = "https://files.pythonhosted.org/packages/4f/65/6079a46068dfceaeabb5dcad6d674f5f5c61a6fa5673746f42a9f4c233b3/MarkupSafe-3.0.2-cp313-cp313t-win_amd64.whl", hash = "sha256:e444a31f8db13eb18ada366ab3cf45fd4b31e4db1236a4448f68778c1d1a5a2f", size = 15739, upload-time = "2024-10-18T15:21:42.784Z" },
+]
+
+[[package]]
+name = "mdurl"
+version = "0.1.2"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/d6/54/cfe61301667036ec958cb99bd3efefba235e65cdeb9c84d24a8293ba1d90/mdurl-0.1.2.tar.gz", hash = "sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba", size = 8729, upload-time = "2022-08-14T12:40:10.846Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/b3/38/89ba8ad64ae25be8de66a6d463314cf1eb366222074cfda9ee839c56a4b4/mdurl-0.1.2-py3-none-any.whl", hash = "sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8", size = 9979, upload-time = "2022-08-14T12:40:09.779Z" },
+]
+
+[[package]]
+name = "ordered-set"
+version = "4.1.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/4c/ca/bfac8bc689799bcca4157e0e0ced07e70ce125193fc2e166d2e685b7e2fe/ordered-set-4.1.0.tar.gz", hash = "sha256:694a8e44c87657c59292ede72891eb91d34131f6531463aab3009191c77364a8", size = 12826, upload-time = "2022-01-26T14:38:56.6Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/33/55/af02708f230eb77084a299d7b08175cff006dea4f2721074b92cdb0296c0/ordered_set-4.1.0-py3-none-any.whl", hash = "sha256:046e1132c71fcf3330438a539928932caf51ddbc582496833e23de611de14562", size = 7634, upload-time = "2022-01-26T14:38:48.677Z" },
+]
+
+[[package]]
+name = "packaging"
+version = "25.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/a1/d4/1fc4078c65507b51b96ca8f8c3ba19e6a61c8253c72794544580a7b6c24d/packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f", size = 165727, upload-time = "2025-04-19T11:48:59.673Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/20/12/38679034af332785aac8774540895e234f4d07f7545804097de4b666afd8/packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484", size = 66469, upload-time = "2025-04-19T11:48:57.875Z" },
+]
+
+[[package]]
+name = "pluggy"
+version = "1.6.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" },
+]
+
+[[package]]
+name = "pycparser"
+version = "2.22"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/1d/b2/31537cf4b1ca988837256c910a668b553fceb8f069bedc4b1c826024b52c/pycparser-2.22.tar.gz", hash = "sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6", size = 172736, upload-time = "2024-03-30T13:22:22.564Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/13/a3/a812df4e2dd5696d1f351d58b8fe16a405b234ad2886a0dab9183fb78109/pycparser-2.22-py3-none-any.whl", hash = "sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc", size = 117552, upload-time = "2024-03-30T13:22:20.476Z" },
+]
+
+[[package]]
+name = "pygments"
+version = "2.19.2"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" },
+]
+
+[[package]]
+name = "pymysql"
+version = "1.1.1"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/b3/8f/ce59b5e5ed4ce8512f879ff1fa5ab699d211ae2495f1adaa5fbba2a1eada/pymysql-1.1.1.tar.gz", hash = "sha256:e127611aaf2b417403c60bf4dc570124aeb4a57f5f37b8e95ae399a42f904cd0", size = 47678, upload-time = "2024-05-21T11:03:43.722Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/0c/94/e4181a1f6286f545507528c78016e00065ea913276888db2262507693ce5/PyMySQL-1.1.1-py3-none-any.whl", hash = "sha256:4de15da4c61dc132f4fb9ab763063e693d521a80fd0e87943b9a453dd4c19d6c", size = 44972, upload-time = "2024-05-21T11:03:41.216Z" },
+]
+
+[[package]]
+name = "pytest"
+version = "8.4.1"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "colorama", marker = "sys_platform == 'win32'" },
+    { name = "iniconfig" },
+    { name = "packaging" },
+    { name = "pluggy" },
+    { name = "pygments" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/08/ba/45911d754e8eba3d5a841a5ce61a65a685ff1798421ac054f85aa8747dfb/pytest-8.4.1.tar.gz", hash = "sha256:7c67fd69174877359ed9371ec3af8a3d2b04741818c51e5e99cc1742251fa93c", size = 1517714, upload-time = "2025-06-18T05:48:06.109Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/29/16/c8a903f4c4dffe7a12843191437d7cd8e32751d5de349d45d3fe69544e87/pytest-8.4.1-py3-none-any.whl", hash = "sha256:539c70ba6fcead8e78eebbf1115e8b589e7565830d7d006a8723f19ac8a0afb7", size = 365474, upload-time = "2025-06-18T05:48:03.955Z" },
+]
+
+[[package]]
+name = "rich"
+version = "13.9.4"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "markdown-it-py" },
+    { name = "pygments" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/ab/3a/0316b28d0761c6734d6bc14e770d85506c986c85ffb239e688eeaab2c2bc/rich-13.9.4.tar.gz", hash = "sha256:439594978a49a09530cff7ebc4b5c7103ef57baf48d5ea3184f21d9a2befa098", size = 223149, upload-time = "2024-11-01T16:43:57.873Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/19/71/39c7c0d87f8d4e6c020a393182060eaefeeae6c01dab6a84ec346f2567df/rich-13.9.4-py3-none-any.whl", hash = "sha256:6049d5e6ec054bf2779ab3358186963bac2ea89175919d699e378b99738c2a90", size = 242424, upload-time = "2024-11-01T16:43:55.817Z" },
+]
+
+[[package]]
+name = "scummvm-sites"
+version = "0.1.0"
+source = { virtual = "." }
+dependencies = [
+    { name = "blinker" },
+    { name = "cffi" },
+    { name = "click" },
+    { name = "cryptography" },
+    { name = "flask" },
+    { name = "flask-limiter" },
+    { name = "iniconfig" },
+    { name = "itsdangerous" },
+    { name = "jinja2" },
+    { name = "markupsafe" },
+    { name = "packaging" },
+    { name = "pluggy" },
+    { name = "pycparser" },
+    { name = "pygments" },
+    { name = "pymysql" },
+    { name = "pytest" },
+    { name = "setuptools" },
+    { name = "werkzeug" },
+    { name = "wheel" },
+]
+
+[package.metadata]
+requires-dist = [
+    { name = "blinker", specifier = ">=1.9.0" },
+    { name = "cffi", specifier = ">=1.17.1" },
+    { name = "click", specifier = ">=8.2.1" },
+    { name = "cryptography", specifier = ">=45.0.6" },
+    { name = "flask", specifier = ">=3.1.1" },
+    { name = "flask-limiter", specifier = ">=3.12" },
+    { name = "iniconfig", specifier = ">=2.1.0" },
+    { name = "itsdangerous", specifier = ">=2.2.0" },
+    { name = "jinja2", specifier = ">=3.1.6" },
+    { name = "markupsafe", specifier = ">=3.0.2" },
+    { name = "packaging", specifier = ">=25.0" },
+    { name = "pluggy", specifier = ">=1.6.0" },
+    { name = "pycparser", specifier = ">=2.22" },
+    { name = "pygments", specifier = ">=2.19.2" },
+    { name = "pymysql", specifier = ">=1.1.1" },
+    { name = "pytest", specifier = ">=8.4.1" },
+    { name = "setuptools", specifier = ">=80.9.0" },
+    { name = "werkzeug", specifier = ">=3.1.3" },
+    { name = "wheel", specifier = ">=0.45.1" },
+]
+
+[[package]]
+name = "setuptools"
+version = "80.9.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/18/5d/3bf57dcd21979b887f014ea83c24ae194cfcd12b9e0fda66b957c69d1fca/setuptools-80.9.0.tar.gz", hash = "sha256:f36b47402ecde768dbfafc46e8e4207b4360c654f1f3bb84475f0a28628fb19c", size = 1319958, upload-time = "2025-05-27T00:56:51.443Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/a3/dc/17031897dae0efacfea57dfd3a82fdd2a2aeb58e0ff71b77b87e44edc772/setuptools-80.9.0-py3-none-any.whl", hash = "sha256:062d34222ad13e0cc312a4c02d73f059e86a4acbfbdea8f8f76b28c99f306922", size = 1201486, upload-time = "2025-05-27T00:56:49.664Z" },
+]
+
+[[package]]
+name = "typing-extensions"
+version = "4.14.1"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/98/5a/da40306b885cc8c09109dc2e1abd358d5684b1425678151cdaed4731c822/typing_extensions-4.14.1.tar.gz", hash = "sha256:38b39f4aeeab64884ce9f74c94263ef78f3c22467c8724005483154c26648d36", size = 107673, upload-time = "2025-07-04T13:28:34.16Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/b5/00/d631e67a838026495268c2f6884f3711a15a9a2a96cd244fdaea53b823fb/typing_extensions-4.14.1-py3-none-any.whl", hash = "sha256:d1e1e3b58374dc93031d6eda2420a48ea44a36c2b4766a4fdeb3710755731d76", size = 43906, upload-time = "2025-07-04T13:28:32.743Z" },
+]
+
+[[package]]
+name = "werkzeug"
+version = "3.1.3"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "markupsafe" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/9f/69/83029f1f6300c5fb2471d621ab06f6ec6b3324685a2ce0f9777fd4a8b71e/werkzeug-3.1.3.tar.gz", hash = "sha256:60723ce945c19328679790e3282cc758aa4a6040e4bb330f53d30fa546d44746", size = 806925, upload-time = "2024-11-08T15:52:18.093Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/52/24/ab44c871b0f07f491e5d2ad12c9bd7358e527510618cb1b803a88e986db1/werkzeug-3.1.3-py3-none-any.whl", hash = "sha256:54b78bf3716d19a65be4fceccc0d1d7b89e608834989dfae50ea87564639213e", size = 224498, upload-time = "2024-11-08T15:52:16.132Z" },
+]
+
+[[package]]
+name = "wheel"
+version = "0.45.1"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/8a/98/2d9906746cdc6a6ef809ae6338005b3f21bb568bea3165cfc6a243fdc25c/wheel-0.45.1.tar.gz", hash = "sha256:661e1abd9198507b1409a20c02106d9670b2576e916d58f520316666abca6729", size = 107545, upload-time = "2024-11-23T00:18:23.513Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/0b/2c/87f3254fd8ffd29e4c02732eee68a83a1d3c346ae39bc6822dcbcb697f2b/wheel-0.45.1-py3-none-any.whl", hash = "sha256:708e7481cc80179af0e556bbf0cc00b8444c7321e2700b8d8580231d13017248", size = 72494, upload-time = "2024-11-23T00:18:21.207Z" },
+]
+
+[[package]]
+name = "wrapt"
+version = "1.17.2"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/c3/fc/e91cc220803d7bc4db93fb02facd8461c37364151b8494762cc88b0fbcef/wrapt-1.17.2.tar.gz", hash = "sha256:41388e9d4d1522446fe79d3213196bd9e3b301a336965b9e27ca2788ebd122f3", size = 55531, upload-time = "2025-01-14T10:35:45.465Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/a1/bd/ab55f849fd1f9a58ed7ea47f5559ff09741b25f00c191231f9f059c83949/wrapt-1.17.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:d5e2439eecc762cd85e7bd37161d4714aa03a33c5ba884e26c81559817ca0925", size = 53799, upload-time = "2025-01-14T10:33:57.4Z" },
+    { url = "https://files.pythonhosted.org/packages/53/18/75ddc64c3f63988f5a1d7e10fb204ffe5762bc663f8023f18ecaf31a332e/wrapt-1.17.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:3fc7cb4c1c744f8c05cd5f9438a3caa6ab94ce8344e952d7c45a8ed59dd88392", size = 38821, upload-time = "2025-01-14T10:33:59.334Z" },
+    { url = "https://files.pythonhosted.org/packages/48/2a/97928387d6ed1c1ebbfd4efc4133a0633546bec8481a2dd5ec961313a1c7/wrapt-1.17.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8fdbdb757d5390f7c675e558fd3186d590973244fab0c5fe63d373ade3e99d40", size = 38919, upload-time = "2025-01-14T10:34:04.093Z" },
+    { url = "https://files.pythonhosted.org/packages/73/54/3bfe5a1febbbccb7a2f77de47b989c0b85ed3a6a41614b104204a788c20e/wrapt-1.17.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5bb1d0dbf99411f3d871deb6faa9aabb9d4e744d67dcaaa05399af89d847a91d", size = 88721, upload-time = "2025-01-14T10:34:07.163Z" },
+    { url = "https://files.pythonhosted.org/packages/25/cb/7262bc1b0300b4b64af50c2720ef958c2c1917525238d661c3e9a2b71b7b/wrapt-1.17.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d18a4865f46b8579d44e4fe1e2bcbc6472ad83d98e22a26c963d46e4c125ef0b", size = 80899, upload-time = "2025-01-14T10:34:09.82Z" },
+    { url = "https://files.pythonhosted.org/packages/2a/5a/04cde32b07a7431d4ed0553a76fdb7a61270e78c5fd5a603e190ac389f14/wrapt-1.17.2-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc570b5f14a79734437cb7b0500376b6b791153314986074486e0b0fa8d71d98", size = 89222, upload-time = "2025-01-14T10:34:11.258Z" },
+    { url = "https://files.pythonhosted.org/packages/09/28/2e45a4f4771fcfb109e244d5dbe54259e970362a311b67a965555ba65026/wrapt-1.17.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:6d9187b01bebc3875bac9b087948a2bccefe464a7d8f627cf6e48b1bbae30f82", size = 86707, upload-time = "2025-01-14T10:34:12.49Z" },
+    { url = "https://files.pythonhosted.org/packages/c6/d2/dcb56bf5f32fcd4bd9aacc77b50a539abdd5b6536872413fd3f428b21bed/wrapt-1.17.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:9e8659775f1adf02eb1e6f109751268e493c73716ca5761f8acb695e52a756ae", size = 79685, upload-time = "2025-01-14T10:34:15.043Z" },
+    { url = "https://files.pythonhosted.org/packages/80/4e/eb8b353e36711347893f502ce91c770b0b0929f8f0bed2670a6856e667a9/wrapt-1.17.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e8b2816ebef96d83657b56306152a93909a83f23994f4b30ad4573b00bd11bb9", size = 87567, upload-time = "2025-01-14T10:34:16.563Z" },
+    { url = "https://files.pythonhosted.org/packages/17/27/4fe749a54e7fae6e7146f1c7d914d28ef599dacd4416566c055564080fe2/wrapt-1.17.2-cp312-cp312-win32.whl", hash = "sha256:468090021f391fe0056ad3e807e3d9034e0fd01adcd3bdfba977b6fdf4213ea9", size = 36672, upload-time = "2025-01-14T10:34:17.727Z" },
+    { url = "https://files.pythonhosted.org/packages/15/06/1dbf478ea45c03e78a6a8c4be4fdc3c3bddea5c8de8a93bc971415e47f0f/wrapt-1.17.2-cp312-cp312-win_amd64.whl", hash = "sha256:ec89ed91f2fa8e3f52ae53cd3cf640d6feff92ba90d62236a81e4e563ac0e991", size = 38865, upload-time = "2025-01-14T10:34:19.577Z" },
+    { url = "https://files.pythonhosted.org/packages/ce/b9/0ffd557a92f3b11d4c5d5e0c5e4ad057bd9eb8586615cdaf901409920b14/wrapt-1.17.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:6ed6ffac43aecfe6d86ec5b74b06a5be33d5bb9243d055141e8cabb12aa08125", size = 53800, upload-time = "2025-01-14T10:34:21.571Z" },
+    { url = "https://files.pythonhosted.org/packages/c0/ef/8be90a0b7e73c32e550c73cfb2fa09db62234227ece47b0e80a05073b375/wrapt-1.17.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:35621ae4c00e056adb0009f8e86e28eb4a41a4bfa8f9bfa9fca7d343fe94f998", size = 38824, upload-time = "2025-01-14T10:34:22.999Z" },
+    { url = "https://files.pythonhosted.org/packages/36/89/0aae34c10fe524cce30fe5fc433210376bce94cf74d05b0d68344c8ba46e/wrapt-1.17.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a604bf7a053f8362d27eb9fefd2097f82600b856d5abe996d623babd067b1ab5", size = 38920, upload-time = "2025-01-14T10:34:25.386Z" },
+    { url = "https://files.pythonhosted.org/packages/3b/24/11c4510de906d77e0cfb5197f1b1445d4fec42c9a39ea853d482698ac681/wrapt-1.17.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cbabee4f083b6b4cd282f5b817a867cf0b1028c54d445b7ec7cfe6505057cf8", size = 88690, upload-time = "2025-01-14T10:34:28.058Z" },
+    { url = "https://files.pythonhosted.org/packages/71/d7/cfcf842291267bf455b3e266c0c29dcb675b5540ee8b50ba1699abf3af45/wrapt-1.17.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:49703ce2ddc220df165bd2962f8e03b84c89fee2d65e1c24a7defff6f988f4d6", size = 80861, upload-time = "2025-01-14T10:34:29.167Z" },
+    { url = "https://files.pythonhosted.org/packages/d5/66/5d973e9f3e7370fd686fb47a9af3319418ed925c27d72ce16b791231576d/wrapt-1.17.2-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8112e52c5822fc4253f3901b676c55ddf288614dc7011634e2719718eaa187dc", size = 89174, upload-time = "2025-01-14T10:34:31.702Z" },
+    { url = "https://files.pythonhosted.org/packages/a7/d3/8e17bb70f6ae25dabc1aaf990f86824e4fd98ee9cadf197054e068500d27/wrapt-1.17.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:9fee687dce376205d9a494e9c121e27183b2a3df18037f89d69bd7b35bcf59e2", size = 86721, upload-time = "2025-01-14T10:34:32.91Z" },
+    { url = "https://files.pythonhosted.org/packages/6f/54/f170dfb278fe1c30d0ff864513cff526d624ab8de3254b20abb9cffedc24/wrapt-1.17.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:18983c537e04d11cf027fbb60a1e8dfd5190e2b60cc27bc0808e653e7b218d1b", size = 79763, upload-time = "2025-01-14T10:34:34.903Z" },
+    { url = "https://files.pythonhosted.org/packages/4a/98/de07243751f1c4a9b15c76019250210dd3486ce098c3d80d5f729cba029c/wrapt-1.17.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:703919b1633412ab54bcf920ab388735832fdcb9f9a00ae49387f0fe67dad504", size = 87585, upload-time = "2025-01-14T10:34:36.13Z" },
+    { url = "https://files.pythonhosted.org/packages/f9/f0/13925f4bd6548013038cdeb11ee2cbd4e37c30f8bfd5db9e5a2a370d6e20/wrapt-1.17.2-cp313-cp313-win32.whl", hash = "sha256:abbb9e76177c35d4e8568e58650aa6926040d6a9f6f03435b7a522bf1c487f9a", size = 36676, upload-time = "2025-01-14T10:34:37.962Z" },
+    { url = "https://files.pythonhosted.org/packages/bf/ae/743f16ef8c2e3628df3ddfd652b7d4c555d12c84b53f3d8218498f4ade9b/wrapt-1.17.2-cp313-cp313-win_amd64.whl", hash = "sha256:69606d7bb691b50a4240ce6b22ebb319c1cfb164e5f6569835058196e0f3a845", size = 38871, upload-time = "2025-01-14T10:34:39.13Z" },
+    { url = "https://files.pythonhosted.org/packages/3d/bc/30f903f891a82d402ffb5fda27ec1d621cc97cb74c16fea0b6141f1d4e87/wrapt-1.17.2-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:4a721d3c943dae44f8e243b380cb645a709ba5bd35d3ad27bc2ed947e9c68192", size = 56312, upload-time = "2025-01-14T10:34:40.604Z" },
+    { url = "https://files.pythonhosted.org/packages/8a/04/c97273eb491b5f1c918857cd26f314b74fc9b29224521f5b83f872253725/wrapt-1.17.2-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:766d8bbefcb9e00c3ac3b000d9acc51f1b399513f44d77dfe0eb026ad7c9a19b", size = 40062, upload-time = "2025-01-14T10:34:45.011Z" },
+    { url = "https://files.pythonhosted.org/packages/4e/ca/3b7afa1eae3a9e7fefe499db9b96813f41828b9fdb016ee836c4c379dadb/wrapt-1.17.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:e496a8ce2c256da1eb98bd15803a79bee00fc351f5dfb9ea82594a3f058309e0", size = 40155, upload-time = "2025-01-14T10:34:47.25Z" },
+    { url = "https://files.pythonhosted.org/packages/89/be/7c1baed43290775cb9030c774bc53c860db140397047cc49aedaf0a15477/wrapt-1.17.2-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40d615e4fe22f4ad3528448c193b218e077656ca9ccb22ce2cb20db730f8d306", size = 113471, upload-time = "2025-01-14T10:34:50.934Z" },
+    { url = "https://files.pythonhosted.org/packages/32/98/4ed894cf012b6d6aae5f5cc974006bdeb92f0241775addad3f8cd6ab71c8/wrapt-1.17.2-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a5aaeff38654462bc4b09023918b7f21790efb807f54c000a39d41d69cf552cb", size = 101208, upload-time = "2025-01-14T10:34:52.297Z" },
+    { url = "https://files.pythonhosted.org/packages/ea/fd/0c30f2301ca94e655e5e057012e83284ce8c545df7661a78d8bfca2fac7a/wrapt-1.17.2-cp313-cp313t-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a7d15bbd2bc99e92e39f49a04653062ee6085c0e18b3b7512a4f2fe91f2d681", size = 109339, upload-time = "2025-01-14T10:34:53.489Z" },
+    { url = "https://files.pythonhosted.org/packages/75/56/05d000de894c4cfcb84bcd6b1df6214297b8089a7bd324c21a4765e49b14/wrapt-1.17.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:e3890b508a23299083e065f435a492b5435eba6e304a7114d2f919d400888cc6", size = 110232, upload-time = "2025-01-14T10:34:55.327Z" },
+    { url = "https://files.pythonhosted.org/packages/53/f8/c3f6b2cf9b9277fb0813418e1503e68414cd036b3b099c823379c9575e6d/wrapt-1.17.2-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:8c8b293cd65ad716d13d8dd3624e42e5a19cc2a2f1acc74b30c2c13f15cb61a6", size = 100476, upload-time = "2025-01-14T10:34:58.055Z" },
+    { url = "https://files.pythonhosted.org/packages/a7/b1/0bb11e29aa5139d90b770ebbfa167267b1fc548d2302c30c8f7572851738/wrapt-1.17.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:4c82b8785d98cdd9fed4cac84d765d234ed3251bd6afe34cb7ac523cb93e8b4f", size = 106377, upload-time = "2025-01-14T10:34:59.3Z" },
+    { url = "https://files.pythonhosted.org/packages/6a/e1/0122853035b40b3f333bbb25f1939fc1045e21dd518f7f0922b60c156f7c/wrapt-1.17.2-cp313-cp313t-win32.whl", hash = "sha256:13e6afb7fe71fe7485a4550a8844cc9ffbe263c0f1a1eea569bc7091d4898555", size = 37986, upload-time = "2025-01-14T10:35:00.498Z" },
+    { url = "https://files.pythonhosted.org/packages/09/5e/1655cf481e079c1f22d0cabdd4e51733679932718dc23bf2db175f329b76/wrapt-1.17.2-cp313-cp313t-win_amd64.whl", hash = "sha256:eaf675418ed6b3b31c7a989fd007fa7c3be66ce14e5c3b27336383604c9da85c", size = 40750, upload-time = "2025-01-14T10:35:03.378Z" },
+    { url = "https://files.pythonhosted.org/packages/2d/82/f56956041adef78f849db6b289b282e72b55ab8045a75abad81898c28d19/wrapt-1.17.2-py3-none-any.whl", hash = "sha256:b18f2d1533a71f069c7f82d524a52599053d4c7166e9dd374ae2136b7f40f7c8", size = 23594, upload-time = "2025-01-14T10:35:44.018Z" },
+]
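
The lockfile records above are machine-generated. Assuming the standard uv
workflow (the project ships a uv.lock, but the exact tooling commands are an
assumption, not part of this commit), they would not be edited by hand but
regenerated and applied with:

    $ uv lock    # re-resolve dependencies and rewrite uv.lock
    $ uv sync    # install the environment pinned by uv.lock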


Commit: 3d6bf4675fd3c544de68c932148fc87e389f12ce
    https://github.com/scummvm/scummvm-sites/commit/3d6bf4675fd3c544de68c932148fc87e389f12ce
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Improve error handling in compute_hash.py

Changed paths:
    src/scripts/compute_hash.py


diff --git a/src/scripts/compute_hash.py b/src/scripts/compute_hash.py
index 3a395eb..a3597fd 100644
--- a/src/scripts/compute_hash.py
+++ b/src/scripts/compute_hash.py
@@ -6,6 +6,7 @@ import sys
 from enum import Enum
 from datetime import datetime, date, timedelta
 from collections import defaultdict
+import traceback
 
 
 class FileType(Enum):
@@ -345,8 +346,8 @@ def appledouble_get_datafork(filepath, fileinfo):
         with open(data_fork_path, "rb") as f:
             data = f.read()
             return (data, len(data))
-    except (FileNotFoundError, IsADirectoryError):
-        return b""
+    except (FileNotFoundError, IsADirectoryError) as e:
+        raise e
 
 
 def raw_rsrc_get_datafork(filepath):
@@ -355,8 +356,8 @@ def raw_rsrc_get_datafork(filepath):
         with open(filepath[:-5] + ".data", "rb") as f:
             data = f.read()
             return (data, len(data))
-    except (FileNotFoundError, IsADirectoryError):
-        return b""
+    except (FileNotFoundError, IsADirectoryError) as e:
+        raise e
 
 
 def raw_rsrc_get_resource_fork_data(filepath):
@@ -380,8 +381,8 @@ def actual_mac_fork_get_data_fork(filepath):
         with open(filepath, "rb") as f:
             data = f.read()
             return (data, len(data))
-    except (FileNotFoundError, IsADirectoryError):
-        return b""
+    except (FileNotFoundError, IsADirectoryError) as e:
+        raise e
 
 
 def actual_mac_fork_get_resource_fork_data(filepath):
@@ -560,42 +561,46 @@ def extract_macbin_filename_from_header(file):
 def file_classification(filepath):
     """Returns [ Filetype, Filename ]. Filetype is an enum value - NON_MAC, MAC_BINARY, APPLE_DOUBLE_RSRC, APPLE_DOUBLE_MACOSX, APPLE_DOUBLE_DOT_, RAW_RSRC
     Filename for a normal file is the same as the original. Extensions are dropped for macfiles."""
-
-    # 1. Macbinary
-    if is_macbin(filepath):
-        base_name = extract_macbin_filename_from_header(filepath)
-        return [FileType.MAC_BINARY, base_name]
-
-    # 2. Appledouble .rsrc
-    if is_appledouble_rsrc(filepath):
-        base_name, _ = os.path.splitext(os.path.basename(filepath))
-        return [FileType.APPLE_DOUBLE_RSRC, base_name]
-
-    # 3. Raw .rsrc
-    if is_raw_rsrc(filepath):
-        base_name, _ = os.path.splitext(os.path.basename(filepath))
-        return [FileType.RAW_RSRC, base_name]
-
-    # 4. Appledouble in ._
-    if is_appledouble_in_dot_(filepath):
-        filename = os.path.basename(filepath)
-        actual_filename = filename[2:]
-        return [FileType.APPLE_DOUBLE_DOT_, actual_filename]
-
-    # 5. Appledouble in __MACOSX folder
-    if is_appledouble_in_macosx(filepath):
-        filename = os.path.basename(filepath)
-        actual_filename = filename[2:]
-        return [FileType.APPLE_DOUBLE_MACOSX, actual_filename]
-
-    # 6. Actual resource fork of mac
-    if is_actual_resource_fork_mac(filepath):
-        filename = os.path.basename(filepath)
-        return [FileType.ACTUAL_FORK_MAC, filename]
-
-    # Normal file
-    else:
-        return [FileType.NON_MAC, os.path.basename(filepath)]
+    try:
+        # 1. Macbinary
+        if is_macbin(filepath):
+            base_name = extract_macbin_filename_from_header(filepath)
+            return [FileType.MAC_BINARY, base_name]
+
+        # 2. Appledouble .rsrc
+        if is_appledouble_rsrc(filepath):
+            base_name, _ = os.path.splitext(os.path.basename(filepath))
+            return [FileType.APPLE_DOUBLE_RSRC, base_name]
+
+        # 3. Raw .rsrc
+        if is_raw_rsrc(filepath):
+            base_name, _ = os.path.splitext(os.path.basename(filepath))
+            return [FileType.RAW_RSRC, base_name]
+
+        # 4. Appledouble in ._
+        if is_appledouble_in_dot_(filepath):
+            filename = os.path.basename(filepath)
+            actual_filename = filename[2:]
+            return [FileType.APPLE_DOUBLE_DOT_, actual_filename]
+
+        # 5. Appledouble in __MACOSX folder
+        if is_appledouble_in_macosx(filepath):
+            filename = os.path.basename(filepath)
+            actual_filename = filename[2:]
+            return [FileType.APPLE_DOUBLE_MACOSX, actual_filename]
+
+        # 6. Actual resource fork of mac
+        if is_actual_resource_fork_mac(filepath):
+            filename = os.path.basename(filepath)
+            return [FileType.ACTUAL_FORK_MAC, filename]
+
+        # Normal file
+        else:
+            return [FileType.NON_MAC, os.path.basename(filepath)]
+    except FileNotFoundError:
+        raise FileNotFoundError(f"File not found: {filepath}")
+    except OSError as e:
+        raise OSError(f"Could not read file: {filepath}") from e
 
 
 def file_filter(files):
@@ -649,42 +654,46 @@ def compute_hash_of_dirs(
     res = []
 
     for directory in get_dirs_at_depth(root_directory, depth):
-        hash_of_dir = dict()
-        files = []
-        # Dictionary with key : path and value : [ Filetype, Filename ]
-        file_collection = dict()
-        # Getting only files of directory and subdirectories recursively
-        for root, _, contents in os.walk(directory):
-            files.extend([os.path.join(root, f) for f in contents])
-
-        # Filter out the files based on user input date - limit_timestamps_date
-        filtered_file_map = filter_files_by_timestamp(files, limit_timestamp_date)
-
-        # Produce filetype and filename(name to be used in game entry) for each file
-        for filepath in filtered_file_map:
-            file_collection[filepath] = file_classification(filepath)
-
-        # Remove extra entries of macfiles to avoid extra checksum calculation in form of non mac files
-        # Checksum for both the forks are calculated using a single file, so other files should be removed from the collection
-        file_filter(file_collection)
-
-        # Calculate checksum of files
-        for file_path, file_info in file_collection.items():
-            # relative_path is used for the name field in game entry
-            relative_path = os.path.relpath(file_path, directory)
-            base_name = file_info[1]
-            relative_dir = os.path.dirname(relative_path)
-            relative_path = os.path.join(relative_dir, base_name)
-
-            if file_info[0] == FileType.APPLE_DOUBLE_MACOSX:
-                relative_dir = os.path.dirname(os.path.dirname(relative_path))
+        try:
+            hash_of_dir = dict()
+            files = []
+            # Dictionary with key : path and value : [ Filetype, Filename ]
+            file_collection = dict()
+            # Collect only the files of the directory and its subdirectories, recursively
+            for root, _, contents in os.walk(directory):
+                files.extend([os.path.join(root, f) for f in contents])
+
+            # Filter out the files based on user input date - limit_timestamps_date
+            filtered_file_map = filter_files_by_timestamp(files, limit_timestamps_date)
+
+            # Produce the filetype and filename (the name used in the game entry) for each file
+            for filepath in filtered_file_map:
+                file_collection[filepath] = file_classification(filepath)
+
+            # Remove extra entries for macfiles so the same data is not hashed again as non-mac files.
+            # Checksums for both forks are calculated from a single file, so the other files should be removed from the collection.
+            file_filter(file_collection)
+
+            # Calculate checksum of files
+            for file_path, file_info in file_collection.items():
+                # relative_path is used for the name field in game entry
+                relative_path = os.path.relpath(file_path, directory)
+                base_name = file_info[1]
+                relative_dir = os.path.dirname(relative_path)
                 relative_path = os.path.join(relative_dir, base_name)
 
-            hash_of_dir[relative_path] = file_checksum(
-                file_path, alg, size, file_info
-            ) + (filtered_file_map[file_path],)
+                if file_info[0] == FileType.APPLE_DOUBLE_MACOSX:
+                    relative_dir = os.path.dirname(os.path.dirname(relative_path))
+                    relative_path = os.path.join(relative_dir, base_name)
+
+                hash_of_dir[relative_path] = file_checksum(
+                    file_path, alg, size, file_info
+                ) + (filtered_file_map[file_path],)
 
-        res.append(hash_of_dir)
+            res.append(hash_of_dir)
+        except Exception:
+            print(f"Error: Could not process the given directory: {directory}.")
+            raise
     return res
 
 
@@ -726,7 +735,7 @@ def extract_mtime_appledouble(file_byte_stream):
         if id == 8:
             date_info_data = file_byte_stream[offset : offset + length]
             if len(date_info_data) < 16:
-                raise ValueError("FileDatesInfo block is too short.")
+                raise ValueError("Error: FileDatesInfo block is too short.")
             appledouble_epoch = datetime(2000, 1, 1)
             modify_seconds = read_be_32(date_info_data[4:8], signed=True)
             return (appledouble_epoch + timedelta(seconds=modify_seconds)).date()
@@ -739,21 +748,28 @@ def macfile_timestamp(filepath):
     Returns the modification times for the mac file from their finderinfo.
     If the file is not a macfile, it returns None
     """
-    with open(filepath, "rb") as f:
-        data = f.read()
-        # Macbinary
-        if is_macbin(filepath):
-            return extract_macbin_mtime(data)
-
-        # Appledouble
-        if (
-            is_appledouble_rsrc(filepath)
-            or is_appledouble_in_dot_(filepath)
-            or is_appledouble_in_macosx(filepath)
-        ):
-            return extract_mtime_appledouble(data)
-
-    return None
+    try:
+        with open(filepath, "rb") as f:
+            data = f.read()
+            # Macbinary
+            if is_macbin(filepath):
+                return extract_macbin_mtime(data)
+
+            # Appledouble
+            if (
+                is_appledouble_rsrc(filepath)
+                or is_appledouble_in_dot_(filepath)
+                or is_appledouble_in_macosx(filepath)
+            ):
+                return extract_mtime_appledouble(data)
+
+        return None
+    except FileNotFoundError:
+        raise FileNotFoundError(f"File not found: {filepath}")
+    except OSError as e:
+        raise OSError(f"Could not read file: {filepath}") from e
+    except ValueError as e:
+        raise e
 
 
 def validate_date(date_str):
@@ -767,7 +783,9 @@ def validate_date(date_str):
             return datetime.strptime(date_str, fmt).date()
         except ValueError:
             continue
-    raise ValueError("Invalid date format. Use YYYY, YYYY-MM, or YYYY-MM-DD")
+    raise ValueError(
+        f"Error: Invalid date format: {date_str}. Use YYYY, YYYY-MM, or YYYY-MM-DD"
+    )
 
 
 def filter_files_by_timestamp(files, limit_timestamps_date):
@@ -779,8 +797,8 @@ def filter_files_by_timestamp(files, limit_timestamps_date):
 
     filtered_file_map = defaultdict(str)
 
-    if limit_timestamp_date is not None:
-        user_date = validate_date(limit_timestamps_date)
+    if limit_timestamps_date is not None:
+        user_date = limit_timestamps_date
     today = date.today()
 
     for filepath in files:
@@ -826,6 +844,23 @@ def create_dat_file(hash_of_dirs, path, checksum_size=0):
             file.write(")\n\n")
 
 
+def parse_positive_int(value, name):
+    """
+    Parses the size and depth values passed as CLI arguments.
+    """
+    try:
+        num = int(value) if value else 0
+    except ValueError:
+        print(f"Error: Invalid {name} argument: {value}")
+        sys.exit(1)
+    if num < 0:
+        print(
+            f"Error: Invalid {name} value: {num}. Use a value greater than or equal to 0."
+        )
+        sys.exit(1)
+    return num
+
+
 class MyParser(argparse.ArgumentParser):
     def error(self, message):
         sys.stderr.write("Error: %s\n" % message)
@@ -833,22 +868,54 @@ class MyParser(argparse.ArgumentParser):
         sys.exit(2)
 
 
-parser = argparse.ArgumentParser()
-parser.add_argument("--directory", help="Path of directory with game files")
-parser.add_argument("--depth", help="Depth from root to game directories")
-parser.add_argument("--size", help="Use first n bytes of file to calculate checksum")
-parser.add_argument(
-    "--limit-timestamps",
-    help="Format - YYYY-MM-DD or YYYY-MM or YYYY. Filters out the files those were modified after the given timestamp. Note that if the modification time is today, it would not be filtered out.",
-)
-args = parser.parse_args()
-path = os.path.abspath(args.directory) if args.directory else os.getcwd()
-depth = int(args.depth) if args.depth else 0
-checksum_size = int(args.size) if args.size else 0
-limit_timestamp_date = str(args.limit_timestamps) if args.limit_timestamps else None
-
-create_dat_file(
-    compute_hash_of_dirs(path, depth, checksum_size, limit_timestamp_date),
-    path,
-    checksum_size,
-)
+def main():
+    try:
+        parser = argparse.ArgumentParser()
+        parser.add_argument(
+            "--directory", required=True, help="Path of directory with game files"
+        )
+        parser.add_argument("--depth", help="Depth from root to game directories")
+        parser.add_argument(
+            "--size", help="Use first n bytes of file to calculate checksum"
+        )
+        parser.add_argument(
+            "--limit-timestamps",
+            help="Format - YYYY-MM-DD or YYYY-MM or YYYY. Filters out the files those were modified after the given timestamp. Note that if the modification time is today, it would not be filtered out.",
+        )
+
+        args = parser.parse_args()
+        path = args.directory
+        if not os.path.isdir(path):
+            print(f"Error: Directory does not exist: {path}.")
+            sys.exit(1)
+        path = os.path.abspath(path)
+
+        depth = parse_positive_int(args.depth, "depth")
+        checksum_size = parse_positive_int(args.size, "size")
+
+        limit_timestamps_date = None
+        try:
+            if args.limit_timestamps:
+                limit_timestamps_date = validate_date(str(args.limit_timestamps))
+        except ValueError as ve:
+            print(ve)
+            sys.exit(1)
+
+        create_dat_file(
+            compute_hash_of_dirs(path, depth, checksum_size, limit_timestamps_date),
+            path,
+            checksum_size,
+        )
+    except KeyboardInterrupt:
+        print("Operation cancelled by user")
+        sys.exit(0)
+    except Exception:
+        traceback.print_exc()
+        print(
+            "Could not handle the exception. Look through the traceback and open an issue at: https://github.com/scummvm/scummvm-sites/issues"
+        )
+        sys.exit(1)
+
+
+if __name__ == "__main__":
+    main()
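
For a sense of the new behaviour from the command line, here is a hypothetical
session (the paths are made up; the error strings are the ones added in the
diff above):

    $ python src/scripts/compute_hash.py --directory ./missing
    Error: Directory does not exist: ./missing.

    $ python src/scripts/compute_hash.py --directory ./games --depth -1
    Error: Invalid depth value: -1. Use a value greater than or equal to 0.

    $ python src/scripts/compute_hash.py --directory ./games --depth 1 --size 5000 --limit-timestamps 2024-06

Because --directory is now required, omitting it makes argparse exit with a
usage error instead of silently hashing the current working directory, as the
old module-level code did.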


Commit: 203f60b547f6e2592998e7bc01945583a3fa1137
    https://github.com/scummvm/scummvm-sites/commit/203f60b547f6e2592998e7bc01945583a3fa1137
Author: ShivangNagta (shivangnag at gmail.com)
Date: 2025-08-14T22:21:10+02:00

Commit Message:
INTEGRITY: Print entire traceback when unknown exception is caught in dat_parser.py

Changed paths:
    src/scripts/dat_parser.py


diff --git a/src/scripts/dat_parser.py b/src/scripts/dat_parser.py
index a1b0679..3ffd683 100644
--- a/src/scripts/dat_parser.py
+++ b/src/scripts/dat_parser.py
@@ -2,6 +2,7 @@ import re
 import os
 import sys
 import argparse
+import traceback
 from src.scripts.db_functions import db_insert, match_fileset
 
 
@@ -217,8 +218,11 @@ def main():
     except KeyboardInterrupt:
         print("Operation cancelled by user")
         sys.exit(0)
-    except Exception as e:
-        print(f"Error: Unexpected error in main: {e}")
+    except Exception:
+        traceback.print_exc()
+        print(
+            "Could not handle the exception. Look through the traceback and open an issue at: https://github.com/scummvm/scummvm-sites/issues"
+        )
         sys.exit(1)
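
Both scripts now share the same top-level guard. Reduced to a self-contained
sketch (run() is a hypothetical stand-in for each script's actual work):

    import sys
    import traceback

    def run():
        ...  # hypothetical: parse arguments and do the real processing

    def main():
        try:
            run()
        except KeyboardInterrupt:
            # A Ctrl-C from the user is expected; exit cleanly.
            print("Operation cancelled by user")
            sys.exit(0)
        except Exception:
            # Anything unexpected: print the full traceback rather than a
            # one-line summary, then point the user at the issue tracker.
            traceback.print_exc()
            print(
                "Could not handle the exception. Look through the traceback "
                "and open an issue at: "
                "https://github.com/scummvm/scummvm-sites/issues"
            )
            sys.exit(1)

    if __name__ == "__main__":
        main()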
 
 



