Anyone Explored a Codemirror 6 Canvas/WebGL Renderer?

For various reasons, I really want a canvas/WebGL-based code editor. I've started on a hacky way of rendering a CodeMirror instance to a canvas. It's very kludgy, but it seems promising. See my Twitter thread for progress videos.

For what I need, I plan to keep the DOM version running, just invisible, on top of my canvas version. That way I get all the native input handling while also getting a rasterized version of the editor.

Before I embark too far on this, I wanted to know if anyone knows of attempts to do this or has some tips?


The Bespin project in 2009 worked this way, but was given up due to, I believe, problems with the approach. Some things to keep in mind:

  • In most situations, for text layout, the DOM will be faster than a canvas
  • Accessibility will be terrible if your text is just pixels on a canvas
  • Doing text positioning, styling, and wrapping is a whole lot more difficult with fillText (see the sketch below)
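
As a rough illustration of that last point, here's the kind of manual measuring and wrapping you end up writing on top of `fillText` (just a sketch; real code would also need to handle tabs, font fallback, bidi, and styled spans):

```typescript
// Sketch of manual line wrapping on a 2D canvas. Everything the DOM does for
// free (font fallback, bidi, tabs, inline styling) has to be rebuilt on top
// of measureText/fillText.
function drawWrappedText(
  ctx: CanvasRenderingContext2D,
  text: string,
  x: number,
  y: number,
  maxWidth: number,
  lineHeight: number
): number {
  ctx.font = "14px monospace";
  ctx.textBaseline = "top";
  for (const line of text.split("\n")) {
    let current = "";
    for (const word of line.split(" ")) {
      const candidate = current ? current + " " + word : word;
      // Break the line when the candidate no longer fits.
      if (current && ctx.measureText(candidate).width > maxWidth) {
        ctx.fillText(current, x, y);
        y += lineHeight;
        current = word;
      } else {
        current = candidate;
      }
    }
    ctx.fillText(current, x, y);
    y += lineHeight;
  }
  return y; // vertical position after the last drawn line
}
```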

Thanks for the quick reply. I do recall Bespin, I was working with Ben/Dion on a contract to work with Mozilla Labs at the time, but then had to decline.

With my approach, I'm trying to have the best of both worlds and not recreate any work. Here's how it will work.

Requirements (for my browser-based game-editor-like tool that supports coding):

  1. Be able to edit/view editors in WebVR
  2. Be able to rasterize images/videos of my entire app UX
  3. Have more control over caching the rendering of code editors, since I want to support creating many code editing panes, lazily loading CodeMirror as needed.

In general I believe I should be able to use the CodeMirror instance as is, but with opacity set to make it invisible (just the gutter and code elements; I leave the scroll element alone, otherwise the scrollbars become invisible too), and base the visuals on my rasterized version. That way I get all of the accessibility, input handling, and other details for free (minus all of the rasterizing I'm going to have to do). From a test with a subset of the CodeMirror UI rasterized, this seems to work well.
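
Concretely, the hiding trick looks roughly like this (a sketch against CodeMirror 6's default `.cm-content`/`.cm-gutters` class names; exact selectors may vary by version and theme):

```typescript
import { EditorState } from "@codemirror/state";
import { EditorView } from "@codemirror/view";

// Keep a live, focusable CodeMirror instance in the page, but hide its
// painted text and gutter so input and accessibility keep working while the
// visible pixels come from a canvas underneath.
const view = new EditorView({
  state: EditorState.create({ doc: "console.log('hello')" }),
  parent: document.body,
});

// Hide only the content and gutter; leave .cm-scroller alone so the native
// scrollbars stay visible and usable.
for (const selector of [".cm-content", ".cm-gutters"]) {
  const el = view.dom.querySelector<HTMLElement>(selector);
  if (el) el.style.opacity = "0";
}

// The canvas holding the rasterized copy sits behind the invisible editor.
const canvas = document.createElement("canvas");
canvas.style.position = "absolute";
canvas.style.zIndex = "-1";
document.body.appendChild(canvas);
```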

For #1, I need it rasterized because in WebVR's full VR mode no DOM can be rendered, only WebGL graphics. So I don't think I'll even be able to overlay the invisible CodeMirror instance; I'm going to have to send simulated events (or use the API) to the CodeMirror instance, which keeps running while in VR mode but isn't displayed. As a bonus, because I can render the editor mixed in with 3D content, I can make fun visuals like putting the editor on a 3D computer monitor, or have arbitrary shaders change how the code looks.
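
A minimal sketch of what I mean by driving the hidden instance through the API or with synthetic events (nothing here is final, and the VR input mapping is omitted):

```typescript
import { EditorView } from "@codemirror/view";

// Apply an edit coming from VR input through the normal CodeMirror API,
// since the hidden instance keeps running even though it isn't displayed.
function insertAtCursor(view: EditorView, text: string): void {
  const { from, to } = view.state.selection.main;
  view.dispatch({
    changes: { from, to, insert: text },
    selection: { anchor: from + text.length },
  });
}

// Alternatively, forward a synthetic key event to the editor's content DOM.
function sendKey(view: EditorView, key: string): void {
  view.contentDOM.dispatchEvent(
    new KeyboardEvent("keydown", { key, bubbles: true })
  );
}
```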

For #2, one goal of the tool I'm making is to make it super easy to generate screenshots and videos showing the tool and the content you made in action. You could use screen-capture tools, but by having access to the pixels I can build a specialized screen-capture UX and even do things like capture images/videos at a higher resolution than the user's display.
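
Roughly what I have in mind for the capture side (a sketch; `draw` stands in for whatever routine paints my UI):

```typescript
// Render the scene into an oversized canvas and grab the pixels, so the
// capture can be at a higher resolution than the user's display.
function captureHighRes(
  draw: (ctx: CanvasRenderingContext2D) => void,
  width: number,
  height: number,
  scale = 2
): Promise<Blob | null> {
  const canvas = document.createElement("canvas");
  canvas.width = width * scale;
  canvas.height = height * scale;
  const ctx = canvas.getContext("2d")!;
  ctx.scale(scale, scale);
  draw(ctx);
  return new Promise((resolve) => canvas.toBlob(resolve, "image/png"));
}

// Video works the same way: record the canvas directly instead of relying on
// an external screen-capture tool.
function recordCanvas(canvas: HTMLCanvasElement): MediaRecorder {
  const recorder = new MediaRecorder(canvas.captureStream(60));
  recorder.start();
  return recorder;
}
```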

For #3, I should probably prove it, but I suspect I'll see huge gains from keeping canvas caches of code editors, plus more control over aliasing and such, compared to just using CSS 3D transforms for zooming/panning.
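
What I mean by canvas caches, roughly (a sketch; the `repaint` routine and the dirty-flag bookkeeping are placeholders for whatever actually rasterizes the editor):

```typescript
// One offscreen raster per editor pane; repaint it only when the document
// changes, and just blit the cached bitmap every frame when zooming/panning.
interface PaneCache {
  canvas: HTMLCanvasElement;
  dirty: boolean;
}

function paintPane(
  target: CanvasRenderingContext2D,
  cache: PaneCache,
  repaint: (ctx: CanvasRenderingContext2D) => void,
  x: number,
  y: number,
  zoom: number
): void {
  if (cache.dirty) {
    repaint(cache.canvas.getContext("2d")!); // rasterize the editor once
    cache.dirty = false;
  }
  // Cheap per-frame path: scale the cached bitmap to the current zoom.
  target.drawImage(
    cache.canvas,
    x,
    y,
    cache.canvas.width * zoom,
    cache.canvas.height * zoom
  );
}
```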

I'll be posting updates on my Twitter account https://twitter.com/seflless, and will try to remember to post here too. I'm on the fence about whether to open source it just yet, as I'm still thinking about the business model and strategy for my tool (https://twitter.com/convey_it).

Does that all make sense?

I haven't worked with WebVR. It seems there's a proposed standard for making it capable of showing regular DOM elements, but nothing is actually implemented at this time. I guess, as a workaround for that limitation, mirroring a CM instance to a canvas is a more or less reasonable thing to do.


“More or less reasonable thing to do”, hahaha.

I'd like to do this, but instead I'd like to make an alternative DOM output using my WebGL-backed custom elements. Being able to output a different type of DOM with a different rendering backend would let the editor stay accessible while still rendering in WebVR.

Any update on this, or possible pointers on how to get CM to render into canvas?

I'm looking into integrating CodeMirror into a canvas grid, but I probably don't want a CM instance per cell, just one for whichever cell is being actively edited.
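
Roughly what I'm imagining, in case it clarifies (just a sketch using the CodeMirror 6 API; the cell geometry and change handling are placeholders):

```typescript
import { EditorState } from "@codemirror/state";
import { EditorView } from "@codemirror/view";

// Grid cells are drawn straight onto the canvas; a single CodeMirror
// instance is created (or recreated) and positioned over whichever cell is
// being edited.
const editorHost = document.createElement("div");
editorHost.style.position = "absolute";
editorHost.style.display = "none";
document.body.appendChild(editorHost);

let active: EditorView | null = null;

interface Cell {
  x: number;
  y: number;
  w: number;
  h: number;
  text: string;
}

function editCell(cell: Cell, onChange: (text: string) => void): void {
  editorHost.style.left = `${cell.x}px`;
  editorHost.style.top = `${cell.y}px`;
  editorHost.style.width = `${cell.w}px`;
  editorHost.style.height = `${cell.h}px`;
  editorHost.style.display = "block";

  active?.destroy(); // only one live instance at a time
  active = new EditorView({
    state: EditorState.create({
      doc: cell.text,
      extensions: [
        EditorView.updateListener.of((update) => {
          if (update.docChanged) onChange(update.state.doc.toString());
        }),
      ],
    }),
    parent: editorHost,
  });
  active.focus();
}
```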

Thanks!

Honestly, I never did get it fully working in the end. For your use case it sounds like you just want an image proxy of each CodeMirror instance, so that fewer live CodeMirror instances are running. You won't need any CSS transforms in your case, right (scale, position, rotation, perspective, etc.)?
