Anyone Explored a Codemirror 6 Canvas/WebGL Renderer?

Thanks for the quick reply. I do recall Bespin; I was working with Ben/Dion on a contract with Mozilla Labs at the time, but had to decline.

So with my approach, I’m trying to get the best of both worlds without recreating any work. Here’s how it will work.

Requirements (for my browser based game-editor-like tool that supports coding):

  1. Be able to edit/view editors in WebVR
  2. Be able to rasterize images/videos of my entire app UX
  3. Have more control over caching the rendering of code editors, since I want to support creating many code editing panes, lazily loading Codemirror as needed.

In general, I believe I can still use the Codemirror instance as-is, but with opacity set to make it invisible (just the gutter and code elements; I leave the scroller alone, or the scrollbars become invisible too), and base the visuals on my rasterized version. This way I get all of the awesome accessibility, input handling, etc. for free (minus all of the rasterizing I’m going to have to do). From a test with a subset of the Codemirror UI rasterized, this seems to work well.
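As a rough sketch of that invisible layer (assuming the stock CodeMirror 6 class names `.cm-content` and `.cm-gutters`, and that a theme extension is an acceptable way to apply it):

```ts
import { EditorView } from "@codemirror/view";

// Sketch: make the gutter and code content fully transparent while
// leaving .cm-scroller alone so the native scrollbars stay visible.
// The editor still handles focus, input, and accessibility as usual.
const invisibleEditorTheme = EditorView.theme({
  ".cm-content": { opacity: "0" },
  ".cm-gutters": { opacity: "0" },
});
```

The theme would just get passed in with the rest of the extensions when creating the view.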

For #1, I need it rasterized because in WebVR’s full VR mode no DOM can be rendered; only WebGL graphics can render. So I don’t think I’ll even be able to overlay the invisible Codemirror instance. I’m going to have to send simulated events (or use the API) to the Codemirror instance (which is still running while in VR mode, just not displayed). As a bonus, because I can render the editor mixed in with 3D content, I can make fun visuals like the editor sitting on a 3D computer monitor, or arbitrary shaders changing how the code looks.
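For the “use the API” route, a minimal sketch of forwarding VR controller text input through the transaction API rather than synthetic DOM events (the `typeFromVR` name is mine):

```ts
import { EditorView } from "@codemirror/view";

// Sketch: the hidden EditorView keeps running in VR mode, so controller
// input can be forwarded through the transaction API instead of
// synthetic DOM events. `view` is the hidden editor instance.
function typeFromVR(view: EditorView, text: string) {
  view.dispatch(view.state.replaceSelection(text));
}
```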

For #2, one goal of the tool I’m making is to make it super easy to generate screenshots and videos showing the tool and the content you made in action. While you can use screen-capture tools, having access to the pixels lets me build a specialized screen-capture UX, and even do things like capture images/videos at a higher resolution than the user’s display.
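A capture is then roughly just a read-back from the canvas (sketch; `appCanvas` stands in for whatever canvas the tool renders into):

```ts
// Sketch: read a frame back from the canvas the tool renders into.
// For a WebGL canvas this must run right after rendering (or the
// context needs preserveDrawingBuffer: true), or the buffer may
// already have been cleared.
function captureFrame(appCanvas: HTMLCanvasElement): Promise<Blob | null> {
  return new Promise(resolve => appCanvas.toBlob(resolve, "image/png"));
}
```

Rendering into a backing canvas larger than the display is what would allow captures above the user’s screen resolution.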

For #3, I should probably prove it, but I suspect I’ll see huge gains from having canvas caches of code editors, plus more control over aliasing and such, compared to just using CSS 3D transforms for zooming/panning.
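A rough sketch of the caching idea (`paneCache` and the pane ids are hypothetical; filling the cache is the rasterizing work described above):

```ts
// Sketch: rasterize each editor pane once into an offscreen canvas,
// then pan/zoom by redrawing the cached bitmap instead of re-rendering
// the DOM or leaning on CSS 3D transforms.
const paneCache = new Map<string, HTMLCanvasElement>();

function drawPane(ctx: CanvasRenderingContext2D, id: string,
                  x: number, y: number, scale: number) {
  const cached = paneCache.get(id);
  if (!cached) return;
  // Drawing through drawImage gives direct control over filtering
  // and aliasing via the context's smoothing settings.
  ctx.imageSmoothingEnabled = true;
  ctx.drawImage(cached, x, y, cached.width * scale, cached.height * scale);
}
```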

I’ll be posting updates on my Twitter account https://twitter.com/seflless, and will try to remember to post here too. I’m on the fence about whether to open source it just yet, as I’m still thinking through the business model and strategy for my tool (https://twitter.com/convey_it).

Does that all make sense?