Permissions, Orchestrators, and Pipeline Editor

2025-08-09
2 minute read

Today's focus was on refining the permission system for Catacloud, exploring solutions for command and read access, and designing orchestrators to handle complex entity operations. I also touched upon ongoing work with the pipeline configuration editor and persistent file uploads.

Permission System Refinements

Currently, Catacloud's permission system operates on command handlers and appends filters to queries for projections, which amounts to a traditional policy enforcement point / policy decision point (PEP/PDP) setup. Ideally, I'd like to implement a more comprehensive Attribute-Based Access Control (ABAC) system by extending Epoch with new traits. This would decouple authorization logic from data access, making testing much simpler. While the existing filtering system for queries is robust and has been developed over years, I'm deferring a full ABAC implementation for now.
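
To make the idea concrete, here is a rough sketch of the kind of traits I have in mind. None of these names exist in Epoch today; they only illustrate the separation between the attributes a decision needs and the data access itself.

```rust
/// Attributes describing who is acting, what they are doing, and on what.
/// Hypothetical types; real Epoch traits would need to be richer than this.
pub struct AccessRequest<'a> {
    pub subject_id: &'a str,
    pub subject_roles: &'a [String],
    pub action: &'a str,
    pub resource_owner: Option<&'a str>,
}

/// Implemented by commands (or queries) so they can describe the access
/// they require without the policy code touching the data layer.
pub trait DescribesAccess {
    fn access_request(&self) -> AccessRequest<'_>;
}

/// A policy decision point: a pure function of attributes, so it can be
/// unit tested without standing up any storage or projections.
pub trait PolicyDecisionPoint {
    fn allows(&self, request: &AccessRequest<'_>) -> bool;
}
```

Because the decision point only ever sees attributes, swapping policies or testing them in isolation becomes straightforward, which is the main appeal over the current filter-appending approach.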

For immediate needs, I'm adding a function to command handlers to perform policy enforcement. This function will extract required attributes from the command, reducing boilerplate code for repetitive checks (e.g., "admin can modify, otherwise only owner").
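
A minimal sketch of that helper, assuming hypothetical Actor and command types (the real Catacloud types differ):

```rust
#[derive(Debug)]
pub enum PolicyError {
    Forbidden,
}

pub struct Actor {
    pub id: String,
    pub is_admin: bool,
}

/// What a command exposes so the check itself can stay generic.
pub trait CommandAttributes {
    /// The owner of the entity the command targets, if any.
    fn owner_id(&self) -> Option<&str>;
}

/// The repetitive "admin can modify, otherwise only the owner" rule,
/// written once instead of inside every command handler.
pub fn enforce_owner_or_admin(
    actor: &Actor,
    command: &impl CommandAttributes,
) -> Result<(), PolicyError> {
    if actor.is_admin {
        return Ok(());
    }
    match command.owner_id() {
        Some(owner) if owner == actor.id.as_str() => Ok(()),
        _ => Err(PolicyError::Forbidden),
    }
}
```

Each handler would call something like this up front and bail out before touching any state, instead of repeating the same ownership check inline.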

Orchestrators for Complex Operations

I've also been considering orchestrators as a solution for commands that affect multiple entities. For instance, if a command targets one entity but needs to delete a bunch of related ones (like unlinking multiple files from a job), an orchestrator can pick up an event emitted by the initial command and then cascade operations to other entities. This approach helps manage complex dependencies and ensures consistency across related data.
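
A rough sketch of the shape this could take; the event, command, and bus names here are invented for the example rather than taken from Catacloud or Epoch:

```rust
/// Event emitted by the initial command.
pub struct FilesUnlinkedFromJob {
    pub job_id: String,
    pub file_ids: Vec<String>,
}

/// Follow-up command the orchestrator fans out per related entity.
pub struct DeleteFile {
    pub file_id: String,
}

/// Anything that can dispatch follow-up commands.
pub trait CommandBus {
    fn dispatch(&self, command: DeleteFile);
}

/// The orchestrator holds no business rules of its own; it only translates
/// one event into the cascade of commands that keeps related data consistent.
pub struct FileCleanupOrchestrator<B: CommandBus> {
    pub bus: B,
}

impl<B: CommandBus> FileCleanupOrchestrator<B> {
    pub fn on_files_unlinked(&self, event: &FilesUnlinkedFromJob) {
        for file_id in &event.file_ids {
            self.bus.dispatch(DeleteFile {
                file_id: file_id.clone(),
            });
        }
    }
}
```

Because the orchestrator only reacts to events, the original command handler stays focused on its own entity.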

Pipeline Configuration Editor

My current main task is adding a configuration editor for pipelines. I've found a promising JSON editor project whose tree view works well for navigating nested configuration structures. The challenge now is integrating it with HTMX (or Alpine.js) so it properly sends requests to the backend.

File Upload Challenges

I'm still encountering issues with file uploads, particularly with larger files. The web worker occasionally stops, especially when the profiler isn't active. My current thought is to persist the upload state so users can resume if the worker dies. This might involve prompting the user to re-select the file, but it would provide a more robust experience.