<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Virtual Reality | Ben Ahlbrand CV</title><link>https://benjamin.ahlbrand.me/tags/virtual-reality/</link><atom:link href="https://benjamin.ahlbrand.me/tags/virtual-reality/index.xml" rel="self" type="application/rss+xml"/><description>Virtual Reality</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Tue, 29 Oct 2019 00:00:00 +0000</lastBuildDate><image><url>https://benjamin.ahlbrand.me/media/sharing.png</url><title>Virtual Reality</title><link>https://benjamin.ahlbrand.me/tags/virtual-reality/</link></image><item><title>Distributed Systems and Virtual Reality</title><link>https://benjamin.ahlbrand.me/post/2021-11-27-distributed-systems-and-virtual-reality/</link><pubDate>Tue, 29 Oct 2019 00:00:00 +0000</pubDate><guid>https://benjamin.ahlbrand.me/post/2021-11-27-distributed-systems-and-virtual-reality/</guid><description>&lt;p>I&amp;rsquo;ve been thinking a lot lately about networking and how we use it for our co-located immersive experiences. What are the pressing questions as one designs such a system for shared experiences like these?&lt;/p>
&lt;p>In order to deliver these compelling experiences that exceed the capabilities of stand-alone, inside-out tracked devices such as the recent Oculus Quest, we need to leverage the &amp;lsquo;fog&amp;rsquo; (local cloud) or the cloud - once 5G conquers the world and delivers on promises of low latency cloud connectivity. By utilizing a local co-located server to maintain synchronized states across all devices, we can avoid high round-trip latency from remote hosts.&lt;/p>
&lt;p>This requirement of low latency, and &lt;em>consistency&lt;/em> of packet delivery times, also drives a need for an in-memory datastore with common semantics we encounter across different projects. There&amp;rsquo;s also the question of balancing your specific needs against something that might generalize to other projects. Supposing you also wanted to be robust to server host failures, you could consider consensus algorithms such as &lt;a href="https://en.wikipedia.org/wiki/Paxos_(computer_science)">Paxos&lt;/a> (see below for links, if you&amp;rsquo;ve got overhead to spare) and run three replicated instances of the datastore. You would also want to manage each synced object&amp;rsquo;s lifecycle: whether it is dynamic, static, or a steady stream; active or inactive; created or deleted.&lt;/p>
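&lt;p>As a minimal sketch of such a datastore (hypothetical names, plain Python, not taken from any particular engine), each entry could carry a mode flag plus a version number for last-writer-wins conflict resolution:&lt;/p>

```python
import time

class SyncStore:
    """Minimal in-memory datastore for synced objects (illustrative sketch).

    Each entry tracks a mode flag (static, dynamic, or stream) plus a
    version number so replicas can discard stale or reordered updates.
    """

    def __init__(self):
        self.objects = {}

    def create(self, obj_id, state, mode="dynamic"):
        self.objects[obj_id] = {
            "state": state,
            "mode": mode,          # "static", "dynamic", or "stream"
            "active": True,
            "version": 0,
            "updated": time.time(),
        }

    def update(self, obj_id, state, version):
        entry = self.objects[obj_id]
        # Last-writer-wins: ignore updates older than what we already hold.
        if version > entry["version"]:
            entry["state"] = state
            entry["version"] = version
            entry["updated"] = time.time()
            return True
        return False

    def delete(self, obj_id):
        self.objects.pop(obj_id, None)
```

&lt;p>A real store would also need to broadcast deltas to replicas; the version check alone is what lets a replica safely ignore updates that arrive out of order.&lt;/p>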
&lt;p>You also need to consider your needs for object ownership semantics: how to acquire a lock, how to release it, whether the world owns objects, and so on - and then you might add synced physics to these objects. You can run all your physics server-side and broadcast the results to clients, run physics on each client independently, use the server tick to sync a deterministic simulation, or run it on both and periodically correct the client from an authoritative source (in this case the server).&lt;/p>
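&lt;p>Ownership can be sketched as a server-side table mapping objects to owners (hypothetical names again, assuming the world owns anything unclaimed):&lt;/p>

```python
class OwnershipTable:
    """Illustrative server-side ownership table: one owner per object,
    with explicit acquire/release, and unclaimed objects owned by the world."""

    WORLD = "world"

    def __init__(self):
        self.owners = {}   # obj_id -> client_id (or WORLD)

    def try_acquire(self, obj_id, client_id):
        # Grant the lock only if the object is world-owned or already ours.
        current = self.owners.get(obj_id, self.WORLD)
        if current == self.WORLD or current == client_id:
            self.owners[obj_id] = client_id
            return True
        return False

    def release(self, obj_id, client_id):
        # Only the current owner may release; ownership reverts to the world.
        if self.owners.get(obj_id) == client_id:
            self.owners[obj_id] = self.WORLD
            return True
        return False
```

&lt;p>Because the server arbitrates every acquire, two clients grabbing the same object on the same tick resolves deterministically - the loser simply gets a refusal back on the control plane.&lt;/p>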
&lt;p>Once you have a datastore and object sharing, you can divide your communication into a control plane and a data plane. In this case, the control plane is a TCP stream that manages connections, world state changes, and ownership events, while positions and other high-frequency updates stream over UDP - since getting an update for an avatar or object late is rather useless, isn&amp;rsquo;t it?&lt;/p>
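&lt;p>The data-plane half of that split might look like the following sketch: a fixed binary pose format, a sequence number per object, and a receive path that simply drops out-of-order datagrams rather than waiting (the format and field names are assumptions, not a real protocol):&lt;/p>

```python
import struct

# Assumed wire format: object id, sequence number, position xyz, rotation quaternion.
POSE_FMT = "!II3f4f"
POSE_SIZE = struct.calcsize(POSE_FMT)   # 36 bytes

def pack_pose(obj_id, seq, pos, rot):
    return struct.pack(POSE_FMT, obj_id, seq, *pos, *rot)

last_seq = {}   # obj_id -> highest sequence number seen so far

def handle_datagram(data):
    """Unpack a pose datagram and drop it if it arrived out of order -
    a late avatar update is useless, so we never wait or retransmit."""
    obj_id, seq, px, py, pz, qx, qy, qz, qw = struct.unpack(POSE_FMT, data)
    if seq > last_seq.get(obj_id, -1):
        last_seq[obj_id] = seq
        return (obj_id, (px, py, pz), (qx, qy, qz, qw))
    return None   # stale packet: discard silently
```

&lt;p>Sequence numbers do double duty here: they reject duplicates and reorderings without any of TCP&amp;rsquo;s head-of-line blocking, which is exactly the property you want for pose streams.&lt;/p>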
&lt;p>Do you stream each frame&amp;rsquo;s data? Do you perform view interpolation? Do you perform the interpolation client side or server side? Do you stream everything to everyone? Can you use frustum culling server side to ignore streaming to those who can&amp;rsquo;t see an object / avatar?&lt;/p>
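&lt;p>One common answer to the interpolation questions - sketched here client-side, with hypothetical names - is to buffer snapshots and render slightly in the past, blending between the two snapshots that bracket the target time:&lt;/p>

```python
def lerp(a, b, t):
    """Linear blend between two position tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def interpolated_pose(snapshots, render_time, delay=0.1):
    """Render delay seconds in the past and blend between the two buffered
    snapshots that bracket that moment, so motion stays smooth even when
    packets arrive unevenly. snapshots: list of (timestamp, position),
    ascending by timestamp."""
    target = render_time - delay
    for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
        if target >= t0 and t1 >= target:
            alpha = (target - t0) / (t1 - t0)
            return lerp(p0, p1, alpha)
    # Outside the buffered window: clamp to the nearest snapshot.
    if target > snapshots[-1][0]:
        return snapshots[-1][1]
    return snapshots[0][1]
```

&lt;p>The fixed delay is the trade-off: one extra network interval of latency buys you smooth motion from a lossy, bursty stream.&lt;/p>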
&lt;p>You might also consider compression of messages, or quantization of your datastore, to further improve your experience, keeping in mind the trade-offs between bandwidth and compute cycles.&lt;/p>
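&lt;p>Quantization can be as simple as mapping each coordinate to a 16-bit integer over an assumed play-space bound (the 20&amp;nbsp;m cube below is an arbitrary example), halving the wire size of a float position for a worst-case error of roughly 0.15&amp;nbsp;mm:&lt;/p>

```python
import struct

# Assumed play-space bound: positions fit in a 20 m cube centred at the origin.
BOUND = 10.0
SCALE = 65535 / (2 * BOUND)

def quantize(pos):
    """Map each float coordinate in [-BOUND, BOUND] to an unsigned 16-bit
    integer, shrinking a 12-byte float position to 6 bytes on the wire."""
    return struct.pack("!3H", *(round((c + BOUND) * SCALE) for c in pos))

def dequantize(data):
    """Recover approximate float coordinates from the packed 16-bit values."""
    return tuple(q / SCALE - BOUND for q in struct.unpack("!3H", data))
```

&lt;p>The error bound falls straight out of the step size: 20&amp;nbsp;m across 65,535 steps is about 0.3&amp;nbsp;mm per step, and rounding costs at most half a step.&lt;/p>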
&lt;p>The topology of the network is another consideration: should you go purely peer-to-peer? Client / server? A combination of the two? Peer-to-peer might remove a hop in your communication, but then each peer sends quite a few more messages when broadcasting. With current hardware, it&amp;rsquo;s probably advisable to push that responsibility to the server if your load is non-trivial.&lt;/p>
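&lt;p>The broadcast-load difference is easy to put numbers on (a back-of-envelope sketch that ignores aggregation and interest management):&lt;/p>

```python
def messages_per_tick(n_peers, topology):
    """Compare per-tick send counts. In pure peer-to-peer every peer sends
    its update to every other peer; with a client/server star, each client
    sends one update up and the server sends one message back down."""
    if topology == "p2p":
        return n_peers * (n_peers - 1)   # O(n^2) total sends per tick
    return 2 * n_peers                    # O(n): n up, n down
```

&lt;p>At 8 peers and a 60&amp;nbsp;Hz tick that is 3,360 sends per second peer-to-peer versus 960 through a server - and the gap grows quadratically with the group size.&lt;/p>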
&lt;p>Perhaps you might also like to consider asset streaming, either at load-time, or streaming world chunks.&lt;/p>
&lt;p>Can we also integrate a generative blockchain? :) This might provide a way of distributing state of seeded world chunks or entities, that can&amp;rsquo;t be deleted.&lt;/p>
&lt;h2>Links&lt;/h2>
&lt;p>&lt;a href="https://raft.github.io/">&lt;a href="https://raft.github.io/" target="_blank" rel="noopener">https://raft.github.io/&lt;/a>&lt;/a>&lt;br>&lt;a href="https://en.wikipedia.org/wiki/Paxos_(computer_science)">&lt;a href="https://en.wikipedia.org/wiki/Paxos_%28computer_science%29" target="_blank" rel="noopener">https://en.wikipedia.org/wiki/Paxos_(computer_science)&lt;/a>&lt;/a>&lt;/p></description></item><item><title>Living at the Edge</title><link>https://benjamin.ahlbrand.me/post/2021-11-27-living-at-the-edge/</link><pubDate>Tue, 11 Jun 2019 00:00:00 +0000</pubDate><guid>https://benjamin.ahlbrand.me/post/2021-11-27-living-at-the-edge/</guid><description>&lt;p>Now that we have room-scale co-located tracking with no external sensors, thanks to the Quest, and Connor DeFanti&amp;rsquo;s &lt;a href="https://frl.nyu.edu/a-quick-and-easy-calibration-method/">calibration scheme&lt;/a> - we can begin to explore other exciting possibilities for the future. One of which is using small embedded boards, such as the Raspberry Pi, coupled with an camera - both attached to the front of the Quest, in order to communicate compressed edge features and otherwise to a dedicated PC running heavier algorithms such as SLAM / SFM, pose estimation, YOLO for object detection, etcetera.&lt;/p>
&lt;p>By using one or more of these Pi / camera systems, we can achieve stereo vision without the need for video pass-through on the HMD, not to mention offloading computational load. We can also double-dip into our room-mapping research with a 360&amp;deg; camera fixed on top of our friend Bracey Smith&amp;rsquo;s Loomo, the friendly vision robot. Loomo enables us to remotely control and otherwise automate the process of roaming a space to collect 360&amp;deg; views for photogrammetric reconstruction of larger spaces, in a more controlled manner than manual collection allows.&lt;/p>
&lt;p>&lt;a href="https://www.indiegogo.com/projects/loomo-mini-transporter-meets-robot-sidekick#/" target="_blank" rel="noopener">https://www.indiegogo.com/projects/loomo-mini-transporter-meets-robot-sidekick#/&lt;/a>&lt;/p>
&lt;p>Perhaps you&amp;rsquo;d use the same local server you use for SLAM / SFM at the beginning of a shared experience to communicate with the Pi / camera boards and the Quests, in order to construct, in a distributed fashion, a shared vision of the space the group is in. Feature extraction is independent of the rest of the pipeline, so even if the pipeline isn&amp;rsquo;t quite homogeneous end-to-end, it&amp;rsquo;s possible to reduce unnecessary computation and use the external cameras to fill in the blanks, so to speak.&lt;/p>
&lt;p>Once you can begin to understand the space you&amp;rsquo;re in, you can interact with the environment in many different ways, with physical manipulables, interactive procedural characters walking around on the floor and room furniture (more to come on this), dynamic environment sensing, and more, as the possibilities are boundless with Edge Computing.&lt;/p></description></item></channel></rss>