Project Raygine #15
That GIF above ... that felt like a lot of effort for a small-looking feature. But let me explain: a lot of the basic functionality needed for it simply wasn't there yet. The way it works now is that there's a dedicated call asking the components to please draw their content into a render texture that encodes object IDs as colors. That texture is then used to read the pixel under the mouse cursor and find the object that was clicked, and also to outline the selected (or hovered) object. This isn't the most efficient way to handle picking, but there are reasons why I chose this approach: first, it's relatively simple to implement (HA!). Second, it's fairly easy to make something on screen selectable, it just has to be drawn into the ID texture. Third, I needed some form of highlight for the selected object anyway, which I get with this method (not for free, but at least without additional work).
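To illustrate the basic technique (this is not the actual Raygine code, just a minimal sketch with raylib), here's what such an ID picking pass could look like; the two cubes and the ID-to-color packing are made-up stand-ins for whatever the components would normally draw:

```c
#include "raylib.h"

// Pack a 24-bit object ID into an opaque color (ID 0 = "nothing selectable").
static Color IdToColor(unsigned int id)
{
    return (Color){ id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF, 255 };
}

static unsigned int ColorToId(Color c)
{
    return (unsigned int)c.r | ((unsigned int)c.g << 8) | ((unsigned int)c.b << 16);
}

// Render an ID pass into 'idTarget' and return the ID under the mouse cursor.
// The two cubes stand in for whatever the components would actually draw.
unsigned int PickObjectUnderMouse(RenderTexture2D idTarget, Camera3D camera)
{
    BeginTextureMode(idTarget);
        ClearBackground(BLACK);  // black = ID 0 = nothing
        BeginMode3D(camera);
            DrawCube((Vector3){ -2, 0, 0 }, 1, 1, 1, IdToColor(1));
            DrawCube((Vector3){  2, 0, 0 }, 1, 1, 1, IdToColor(2));
        EndMode3D();
    EndTextureMode();

    // Read the pixel under the cursor. A full texture readback is slow, but
    // acceptable for an editor that only needs to pick on click/hover.
    Image idImage = LoadImageFromTexture(idTarget.texture);
    Vector2 mouse = GetMousePosition();
    int x = (int)mouse.x;
    int y = idImage.height - 1 - (int)mouse.y;  // render textures come back flipped vertically
    unsigned int id = 0;
    if (x >= 0 && x < idImage.width && y >= 0 && y < idImage.height)
        id = ColorToId(GetImageColor(idImage, x, y));
    UnloadImage(idImage);
    return id;
}
```

The same ID texture can also drive the outline: one common way is to mark a pixel as outline when its ID differs from a neighbouring pixel's ID and either of them belongs to the selected object, though the exact shader Raygine uses may differ.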
As the GIF shows, it now works with mesh renderers and also with the camera gizmo, so objects can be selected via clicks and highlighted on hover (I still need to highlight the selected one).
There was a lot missing to get to that point; the shader API, for example, was practically non-existent, so I carved that out as part of this work.
However, I had a few realizations about editor actions that overwhelm me a bit right now:
- I really like the editing flow in Blender and VS Code. I would like to translate this into my editor.
- Using hotkeys like "g" for move, then "x" for "along the x axis", then "1.5" for "move 1.5 units along the x axis". I think that's a really great way to do things and I want to copy it (there's a small sketch of such an input sequence after this list).
- Space for finding actions, or CTRL+P for selecting / opening elements in the project, just like VS Code does. Very empowering, even for new users.
- Gizmos are neat, but not very powerful compared to these other actions. I will of course implement them as well, but they are more of a visual aid for the user. I want a way to do things without the gizmos.
- However, I have no concept yet of how to build a system around all this:
- I need some kind of action system to trigger commands. It has to be context based: what object is selected, what tool is active, where the mouse is.
- Where does an action end and where does a tool start? A menu is a tool, but it also triggers actions. Is a tool a way to configure an action that is then executed? Does an action start a tool that handles the action?
- One other thing to consider in this context: undo. I have no undo yet, but if all editor manipulation is done through actions, any action could define its undo operation and store the data needed to revert it (there's a rough sketch of that idea below as well).
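Here's a small sketch of how such a Blender-style key sequence could be parsed; this is purely illustrative, none of these names exist in Raygine:

```c
// Sketch of parsing a Blender-style key sequence like "g", "x", "1.5":
// grab -> constrain to an axis -> optional numeric amount.
#include <string.h>

typedef enum { MODAL_IDLE, MODAL_MOVE } ModalState;

typedef struct
{
    ModalState state;
    char axis;            // 'x', 'y', 'z' or 0 for "screen space"
    char number[16];      // typed digits, e.g. "1.5"
    int numberLen;
} ModalInput;

// Feed one typed character; returns 1 when the action is confirmed (Enter).
int ModalInputFeed(ModalInput *m, char c)
{
    switch (m->state)
    {
        case MODAL_IDLE:
            if (c == 'g') { memset(m, 0, sizeof(*m)); m->state = MODAL_MOVE; }
            break;
        case MODAL_MOVE:
            if (c == 'x' || c == 'y' || c == 'z') m->axis = c;
            else if ((c >= '0' && c <= '9') || c == '.' || c == '-')
            {
                if (m->numberLen < (int)sizeof(m->number) - 1)
                    m->number[m->numberLen++] = c;
            }
            else if (c == '\n') return 1;             // confirm: apply the move
            else if (c == 27) m->state = MODAL_IDLE;  // Escape cancels
            break;
    }
    return 0;
}

// Typing "g", "x", "1.5", Enter leaves axis = 'x' and number = "1.5",
// which the editor would turn into "move 1.5 units along x".
```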
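And a rough sketch of what an action that carries its own undo data could look like; again, all names here (EditorAction, SetObjectPosition, ...) are made up for illustration and are not actual Raygine or raylib API (apart from Vector3):

```c
#include "raylib.h"   // only for Vector3 in this example

// Stand-in for however the engine actually moves an object; not a real API.
static void SetObjectPosition(int objectId, Vector3 position)
{
    (void)objectId; (void)position;
}

// An action knows how to execute itself and how to undo itself, and carries
// whatever data it needs for both.
typedef struct EditorAction EditorAction;
struct EditorAction
{
    const char *name;                        // shown in the action search
    void (*execute)(EditorAction *action);
    void (*undo)(EditorAction *action);
    struct {                                 // payload for a "move object" action;
        int objectId;                        // a real system would likely use a
        Vector3 oldPosition;                 // tagged union or per-action allocation
        Vector3 newPosition;
    } move;
};

static void ExecuteMove(EditorAction *a) { SetObjectPosition(a->move.objectId, a->move.newPosition); }
static void UndoMove(EditorAction *a)    { SetObjectPosition(a->move.objectId, a->move.oldPosition); }

// Executed actions go onto a stack; undo pops and reverts the last one.
#define UNDO_STACK_SIZE 256
static EditorAction undoStack[UNDO_STACK_SIZE];
static int undoCount = 0;

void RunAction(EditorAction action)
{
    action.execute(&action);
    if (undoCount < UNDO_STACK_SIZE) undoStack[undoCount++] = action;
}

void UndoLastAction(void)
{
    if (undoCount > 0) { undoCount--; undoStack[undoCount].undo(&undoStack[undoCount]); }
}
```

A "move 1.5 units along x" coming out of the hotkey sequence above would then just be one of these actions, with the old position recorded before the move is applied.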
I have a vague idea that maybe the context problem could be solved by handling available actions like an immediate mode GUI system: when code uses imgui to render the UI, it could register the available actions alongside those draw calls. When the user presses "space" to search for actions, the editor could then show the list of actions that are available in the current context. The same applies to hotkeys that trigger actions.
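A minimal sketch of what that immediate-mode registration could look like (all names are made up, this is just the shape of the idea):

```c
#define MAX_FRAME_ACTIONS 128

typedef struct
{
    const char *name;       // shown in the space-bar action search
    int hotkey;             // e.g. a raylib key code; 0 if none
    void (*trigger)(void);
} FrameAction;

static FrameAction frameActions[MAX_FRAME_ACTIONS];
static int frameActionCount = 0;

// Called at the start of every frame, before any UI code runs.
void BeginActionFrame(void) { frameActionCount = 0; }

// UI / editor code calls this while it draws, exactly like it calls imgui:
// only code paths that are active this frame register their actions, so the
// list automatically reflects the current context (selection, active tool, ...).
void RegisterAction(const char *name, int hotkey, void (*trigger)(void))
{
    if (frameActionCount < MAX_FRAME_ACTIONS)
        frameActions[frameActionCount++] = (FrameAction){ name, hotkey, trigger };
}

// The action search iterates frameActions; hotkeys do a lookup like this.
void DispatchHotkey(int pressedKey)
{
    for (int i = 0; i < frameActionCount; i++)
        if (frameActions[i].hotkey == pressedKey && frameActions[i].trigger)
            frameActions[i].trigger();
}
```

The inspector would then call something like RegisterAction("Reset transform", 0, ResetTransform) while it draws the transform fields (both names hypothetical), and that action simply stops existing in frames where no object is selected.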
Above all, everything that modifies state must become an action. I don't have that many actions yet; most manipulation currently goes through the inspector, followed by the hierarchy window.
I'll have to brood over all this, and then there'll be refactorings...
Beyond these problems, I think I need to consider at some point moving some parts into submodules that can be shared with other repositories. Specifically, I want to have Android and Web client builds at some point, and by now I believe those are better done as separate projects. The only question is how to prioritize that and when to do it.