Nreal Air support in Monado

Over the last months (from April to September) I tinkered around with an Nreal Air. It’s an augmented reality device, essentially a pair of glasses with displays projecting virtual content into your view. You can connect it to your phone, console or PC via a USB-C cable, and then in theory you can have 360° virtual monitors. But the whole package had a slight flaw: the manufacturer only targets Android, iOS, macOS and Windows with its own proprietary SDK and app. So even the community on those platforms wasn’t entirely happy with that.

Why did I get such a device then? Well, I had seen a review of it which mentioned that it supports the Steam Deck. So naturally I thought that if any device running GNU/Linux is supported in any way at all, I could likely get it to work with every device running GNU/Linux. Then I received the actual hardware, and I had no officially supported device other than my Steam Deck.

But with that it worked, right? Well, some functionality worked great out of the box. It would be recognized as an external monitor and render content locked to your head into your field of view. You also got audio from it through its stereo speakers, and even the microphone worked properly (with surprisingly good quality, not gonna lie). The only missing part was that its IMU sensors weren’t used at all.

Technically the device contains a gyroscope, an accelerometer and a magnetometer. With these the device is able to offer (rotational) head tracking and render virtual content dependent on your head’s orientation. On supported operating systems you would get the IMU sensor data, or at least the orientation derived from it, via the proprietary SDK. But that wouldn’t work for Linux, I assumed.

So instead I started to debug what I could read from the device via the USB connection. It actually exposes an HID device on Linux with three different interfaces for separate data, and I started reverse engineering its behavior, utilizing existing efforts from different people on other operating systems as well. I shared every bit of progress on GitLab of course, potentially for others to use, test, improve or learn from, just as I had utilized others’ efforts before. I also shared my findings with other people who were invested in reverse engineering it. You could say that together we managed to reach a common goal, which is why I really recommend sharing information. I mean, I had no proper idea how any of this worked other than my basic knowledge of geometric algebra from university. I had never worked at the driver level either, besides my small efforts years ago to get a Huion graphics tablet working (and for that I had known of user-space Python code which worked on Linux).
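For those curious what the very first step of that poking around can look like, here is a minimal sketch of reading raw reports from the glasses with hidapi. The vendor and product IDs and the 64-byte report size are assumptions for illustration, and a real driver would additionally enumerate the three interfaces and parse the reports into sensor samples.

```c
/*
 * Minimal sketch: open the glasses as an HID device and dump raw reports.
 * The IDs and report size below are assumed for illustration only.
 */
#include <hidapi/hidapi.h>
#include <stdio.h>

#define GLASSES_VID 0x3318 /* assumed vendor ID */
#define GLASSES_PID 0x0424 /* assumed product ID */

int main(void)
{
	unsigned char buf[64];

	if (hid_init() != 0)
		return 1;

	/* Opens the first matching HID interface; the glasses expose several,
	 * so a real driver enumerates them and picks the one carrying IMU data. */
	hid_device *dev = hid_open(GLASSES_VID, GLASSES_PID, NULL);
	if (dev == NULL) {
		fprintf(stderr, "failed to open device\n");
		hid_exit();
		return 1;
	}

	for (int i = 0; i < 100; i++) {
		int len = hid_read(dev, buf, sizeof(buf));
		if (len <= 0)
			break;
		/* Each report would then be decoded into gyroscope,
		 * accelerometer and magnetometer samples. */
		printf("got %d bytes, report id 0x%02x\n", len, buf[0]);
	}

	hid_close(dev);
	hid_exit();
	return 0;
}
```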

Anyway, in about a month I had a well-working implementation of a user-space driver for Linux which could talk to the glasses and derive an orientation from their IMU sensor data. It still had drift issues, but I was quite happy with that nonetheless. Next I started cleaning up my efforts to contribute them to the Monado project, because my thought was that a standalone user-space driver was great and all as a proof of concept, but people might want to actually use it, and it’s unlikely applications would be rewritten to use my custom driver just to support one device.
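To give an idea of what “deriving an orientation” means here (and where the drift comes from), below is a toy complementary filter that only tracks pitch and roll. It is not the fusion code the driver ended up using, and the axis conventions are assumptions; it just illustrates why the gyroscope alone drifts and how the accelerometer can correct part of that.

```c
/*
 * Toy complementary filter for pitch and roll only, purely illustrative.
 * Axis conventions are assumed; the real driver uses proper sensor fusion.
 */
#include <math.h>

struct simple_fusion {
	float pitch; /* radians */
	float roll;  /* radians */
};

/*
 * gyro:  angular velocity in rad/s (x, y, z)
 * accel: acceleration in m/s^2 (x, y, z), dominated by gravity when still
 * dt:    time since the previous sample in seconds
 */
static void
simple_fusion_update(struct simple_fusion *f,
                     const float gyro[3],
                     const float accel[3],
                     float dt)
{
	/* Integrate the gyroscope: fast and smooth, but it drifts over time. */
	f->pitch += gyro[0] * dt;
	f->roll += gyro[1] * dt;

	/* Estimate pitch/roll from gravity: noisy, but drift-free. */
	float accel_pitch =
	    atan2f(accel[1], sqrtf(accel[0] * accel[0] + accel[2] * accel[2]));
	float accel_roll = atan2f(-accel[0], accel[2]);

	/* Blend the two; alpha trades responsiveness against drift correction. */
	const float alpha = 0.98f;
	f->pitch = alpha * f->pitch + (1.0f - alpha) * accel_pitch;
	f->roll = alpha * f->roll + (1.0f - alpha) * accel_roll;
}
```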

Monado also had other big advantages. There are efforts to support multiple operating systems, and it offers a good sensor-fusion implementation for head tracking as well. And once you have a working driver in Monado, you can in theory use it not just for AR applications but also for VR. The only requirement is that an application uses the OpenXR API, which Monado implements. So I was quite excited to put all of that to the test.
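As a rough idea of what “using the OpenXR API” means at its very first step, here is a minimal sketch of an application creating an OpenXR instance; against Monado this is where the runtime (and eventually the glasses’ driver) comes into play. Error handling is kept to a bare minimum and the application name is just a placeholder.

```c
/*
 * Minimal OpenXR instance creation, linked against the OpenXR loader.
 * A real application would go on to create a system, session and swapchains.
 */
#include <stdio.h>
#include <string.h>
#include <openxr/openxr.h>

int main(void)
{
	XrInstanceCreateInfo create_info = {
		.type = XR_TYPE_INSTANCE_CREATE_INFO,
		.applicationInfo = {
			.applicationVersion = 1,
			.apiVersion = XR_CURRENT_API_VERSION,
		},
	};
	strncpy(create_info.applicationInfo.applicationName,
	        "hello-xr", XR_MAX_APPLICATION_NAME_SIZE);

	XrInstance instance = XR_NULL_HANDLE;
	XrResult res = xrCreateInstance(&create_info, &instance);
	if (XR_FAILED(res)) {
		fprintf(stderr, "xrCreateInstance failed: %d\n", (int)res);
		return 1;
	}

	/* With Monado installed this should print its runtime name. */
	XrInstanceProperties props = {.type = XR_TYPE_INSTANCE_PROPERTIES};
	xrGetInstanceProperties(instance, &props);
	printf("Runtime: %s\n", props.runtimeName);

	xrDestroyInstance(instance);
	return 0;
}
```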

Getting there was my goal now. Therefore I started implementing a driver in Monado, improved it quite a bit, and at some point, with help from others commenting on my code, I got rid of most of the drift issues. It was looking great, since I could already test the driver with Godot, for example, an open game engine which supports OpenXR. So anyone would easily have a nice option to build on top of my efforts without learning low-level C or anything. I also used the OpenXR example projects to debug and implement stereo rendering once a firmware update from Xreal (yes, Nreal renamed themselves) easily allowed triggering a stereo rendering mode. For those interested: you need to hold the “Brightness +” button for about 6 seconds and release it after it makes a sound. That will switch the rendering mode back and forth. You will need firmware from 18th July 2023 or newer though, and I wasn’t able to upgrade the firmware on Linux yet (even the official web tool doesn’t seem to work in any web browser on Linux - it works on Windows though).

I should add that the stereo rendering mode doesn’t work with all compositors on Linux yet. At least from my experience it works with gamescope on the Deck and with KDE Plasma in general. Unfortunately it doesn’t seem to work with GNOME, which just disables the monitor during the switch and only re-enables it after unplugging and replugging it - but then it will not render in stereo anymore.

The driver now finally got merged after the review process. That made me very happy, because now this little project should be done and I can refocus on other development again. Overall it was a great opportunity for me to learn about multiple FOSS projects I hadn’t used yet. I also learned a lot about sensors (they barely give you the data you want, but mostly noise). All of this just because I thought a terminal via AR glasses would be a neat thing to have.

