From: Drew Curtis <dcurtis@***.net>
Subject: Virtual Realities 3.0: UMS (and converting VR1 node maps to VR2)
Date: Wed, 26 Jul 2000 13:15:43 -0400 (EDT)
> UNIVERSAL MATRIX SPECIFICATIONS
> UMS is the universal data communications protocol developed in 2039 to
> exploit the amazing advances in both computer technology and infrastructure
> realized after the Crash of 2029. Over a three month period, the UMS quickly
> encompassed every major standard of the time. Every year the specifications
> became more all-inclusive, until by 2049 it was almost impossible to even
> conceive of the dark times of data incompatibility that existed before UMS.
>
There isn't really a problem with this now, as far as the Internet is
concerned. Sure, you can't stick a Mac disk in a PC, but you can easily
xfer the files over.
> STANDARD FOR A NEW DAY
> In addition to codifying the network protocols that are still used on the
> Matrix to this day, the UMS also described standard data protocols for
> everything ranging from home appliances to mainframe interconnects. With a
> broad and extensible structure, UMS kickstarted the amazing growth of
> telecommunications to the modern day.
>
Unnecessary, it's already in place.
> NERPS
> The ICS itself is not part of the low-level protocols that govern data
> transfer on the Matrix, however; that is handled by what is generally called
> just the "stack" - a small suite of programs that interpret and route data
> on the Matrix through the "persona" - a helper application that loads onto
> the host. The protocol's technical name is New Environmental Routing
> Protocol System (NERPS), and it is the most fundamental part of modern
> telecommunications.
>
This makes sense. I've worked on VR in RL; all the multiuser VR
simulations I've ever seen did all the perception-based computations at
the client level. In general the server level manages traffic.
> VIRTUAL REALITY IS ITS OWN REWARD
> ICS is essentially a high-level "interpretive" standard; it is designed to
> run on any architecture as long as the system possesses the necessary
> interpretive software. In essence it is the future version of late 20th
> century interpretive languages such as Java, Jini, and certain scripting
> languages such as Perl. When two devices that do not know how to communicate
> with each other make contact, each machine can send an ICS packet to the
> other that is stored in the other machine's buffer and is executed. The two
> machines have then essentially loaded a device driver that allows them to
> communicate with each other. This is all handled automatically, and is the
> basis of all modern computing devices. You simply plug a device (even if
> it's brand new) into a network and the network will automatically update
> itself to communicate with and use the device.
>
Not really possible. Sure it could work but the problem you have is one
of security. Currently if you set something up on a dedicated network,
let's say a couple of computers on DSL, you have to do one of two
things: either they both need an IP address which must be allocated by
the provider, or they must be added to an unroutable internal network that
takes the one IP given to the customer by the ISP. The one IP is assigned
to a machine which runs NAT (Network Address Translation) so that any
device on the internal network can route out using that single IP
address. The problem with this however is that nothing outside of the
internal network can route to the device. Traffic gets as far as the NAT
box and stops dead.
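To make that concrete, here's a toy sketch of the NAT translation table in
Python (the addresses and port numbers are made up for illustration):

```python
# Sketch of outbound NAT: internal hosts share one public IP.
# All addresses and ports here are invented for illustration.

PUBLIC_IP = "203.0.113.5"   # the single IP assigned by the ISP

nat_table = {}    # (internal_ip, internal_port) -> public port
next_port = 40000

def translate_outbound(src_ip, src_port):
    """Rewrite an outbound packet's source to the public IP."""
    global next_port
    key = (src_ip, src_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return (PUBLIC_IP, nat_table[key])

def translate_inbound(dst_port):
    """Map a reply back to the internal host. Only works if the
    mapping already exists; unsolicited traffic has no entry."""
    for (ip, port), pub_port in nat_table.items():
        if pub_port == dst_port:
            return (ip, port)
    return None   # no mapping: the packet is dropped at the NAT box
```

Replies to an established mapping get rewritten back to the internal host;
anything else has no entry in the table, which is exactly why unsolicited
traffic stops dead at the NAT box.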
You could definitely have a device add itself to an internal-only network
because you control the routing internally. You can't have it add itself
to the global Internet because you must have the permission of the
company controlling your segment of the Internet, and they're not going to
just give it to you without prior arrangement.
I suppose one way you might be able to do this is if there is unlimited
IP space: you could give any given customer a block so large they'd never
fill it, and then you'd be fine. It wouldn't require any recognition by the
upstream routers, however, because the entire block already routes.
So what I'm saying is it's technically possible but not sure if it's
workable. Again this may be because I know how it works currently and
can't see beyond that, I'll admit.
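For what it's worth, the driver-swap idea in the quoted section could be
sketched like this. Everything here (the class, the "driver" format) is
invented; it just shows the mechanism, not a solution to the security or
routing problems I mentioned:

```python
# Toy version of the ICS idea: two devices that don't share a wire
# format each hand the other a small "driver" that can decode theirs.
# All names and formats here are invented for illustration.

class Device:
    def __init__(self, name, encode, decode):
        self.name = name
        self.encode = encode      # native message -> wire format
        self.decode = decode      # wire format -> native message
        self.drivers = {}         # peer name -> peer's decode function

    def send_driver(self, peer):
        # The "ICS packet": hand the peer the code to read our format.
        peer.drivers[self.name] = self.decode

    def receive(self, sender_name, wire_msg):
        # Use the stored driver to interpret the foreign format.
        return self.drivers[sender_name](wire_msg)

# Two incompatible wire formats:
a = Device("A", encode=lambda m: m.upper(), decode=lambda w: w.lower())
b = Device("B", encode=lambda m: m[::-1],   decode=lambda w: w[::-1])

a.send_driver(b)   # A teaches B how to read A's format
b.send_driver(a)   # B teaches A how to read B's format
```

Of course, "stored in the other machine's buffer and is executed" is
precisely the part that makes a security person's hair stand on end.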
> One use of the ICS was found to be virtual reality. Instead of data for
> communicating with a device, the ICS blocks were used to transmit
> behaviors, iconography, and interface specifications for virtual
> environments. At first, computers had to download the entire set of virtual
> reality blocks and then the specific data for the host. This proved to be
> highly impractical, and soon most computers came with the standard ICS
> series for virtual environments built in.
>
Sure, but you'd still have to download other specs if they're not using
the standards. I would think there would be too many possible
combinations. However, if you assume that there's unlimited bandwidth and
unlimited storage capacity, it wouldn't be any problem at all to download
the specs right to the user. It would happen as fast as or faster than a
hard-drive access.
> This high-level interface was based on much earlier work with VRML (Virtual
> Reality Modelling Language), the original networked virtual reality
> modelling language. The Fuchi-developed EMS (Environmental Modelling
> Script) proved to be highly superior to the other protocols competing for
> the standard and it was adopted soon after.
>
This is just my opinion but I doubt VRML will be the standard for any
future VR. Bits of the theory might be borrowed but in general once you
figure out what you want to do, you build a whole new language for
it. VRML doesn't contain much in the way of innovation.
> EMS
> Every object in a virtual reality is coded as an EMS "entity". For example,
> in a virtual reality consisting of a room, a chair, and a lamp there will be
> entities for the room itself, the ambient light source, the lamp object, and
> the light object tied to the lamp. Each object will have its icon, behaviors
> (such as modelling the movement of the chair if it is pushed), and physical
> properties (its surface texture, color, radiosity, sheen, etc). The system's
> graphics rendering engine could then "draw" the scene and have every object
> interact in at least a semi-realistic manner.
>
They're not actually coded per se. Most VR simulations use CAD-based 3d
objects and import them just like web pages link in GIF/JPEG images. Once
in the simulation they are manipulated. This may be completely different
from how game designers do it; I don't have any experience there.
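That said, if you did code entities directly, the quoted room/chair/lamp
example might look something like this. The field names and behaviors are
my own guesses for illustration, not any real EMS spec:

```python
# Hypothetical sketch of the EMS "entity" idea from the quoted text:
# every object is an entity with an icon, behaviors, and physical
# properties. All names and fields are invented.

class Entity:
    def __init__(self, name, icon, properties=None, behaviors=None):
        self.name = name
        self.icon = icon                    # visual representation
        self.properties = properties or {}  # texture, color, sheen...
        self.behaviors = behaviors or {}    # event name -> handler

    def trigger(self, event, *args):
        handler = self.behaviors.get(event)
        return handler(self, *args) if handler else None

def push(entity, dx, dy):
    # Crude behavior: move the entity's position when it is pushed.
    x, y = entity.properties["position"]
    entity.properties["position"] = (x + dx, y + dy)
    return entity.properties["position"]

room  = Entity("room",  icon="room.obj")
chair = Entity("chair", icon="chair.obj",
               properties={"position": (0, 0), "color": "brown"},
               behaviors={"push": push})
lamp  = Entity("lamp",  icon="lamp.obj",
               properties={"radiosity": 0.8})

scene = [room, chair, lamp]   # the renderer would walk this list
```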
> As time went on EMS became more advanced and was extended to allow for even
> greater object detail, more capable entity programming, and even methods of
> embedding simsense into the environment. The standard iconography set for
> the Matrix was introduced by Fuchi in early 2041 as a way to bring
> standardization to the various virtual reality user interfaces that began to
> appear around that time, greatly improving familiarity and ease of use.
>
In general, and this is just my own opinion, processes tend to drift
toward entropy rather than toward standards. What is more likely to
happen is that a bunch of competing formats are released and one of them
gets used more than the others. It happened with Beta/VHS, IBM/Mac/Amiga,
and sound encoding (a la MP3s). No one declared MP3 to be the main
standard to use, it was just declared as one standard of many and became
more popular because more people used it. There were many reasons why
this happened, but it wasn't because it was mandated by a standards
organization.
Standards are developed only when no clear winner rises out of competing
formats AND these formats are vital somehow. Example: a couple years ago
there were two formats for modems, KFlex and X2. The only reason a
standard was ever developed was because it was vital that there be some
agreement as people were buying modems incompatible with their ISP. Even
when the standard (V.90) was developed, some companies didn't implement it
properly in an attempt to leverage people toward using their own
technology rather than the standard.
An example of a market where this didn't happen is digital movie theater
sound. There are still (and have been for years) three different digital
sound formats (Sony's, THX, and I forget what the other one is). No
standards organization has stepped in because it's not a vital issue to
the general populace, and furthermore not everyone can tell the difference
anyhow. I can't.
Sony comes up with proprietary formats regularly. Just about any time a
technology is released, you can bet Sony will come up with their own
format and try to make it de-facto. Microsoft behaves similarly. The
reason this is done is because if your format becomes the global standard,
anyone using it needs to pay you a royalty and/or buy your stuff to use
it. Thus, it is to any corporation's advantage to attempt to create their
own standard independent of any global standard. No one has ever mandated
an operating system standard for example, and no one ever will.
> Reality Filters
> Since EMS is an interpretive system, it was not long before "hacked"
> interpreters appeared. Many of these were hobbyist projects, and typically
> only modified certain aspects of the EMS data they received - such as
> rendering the entities as icons instead of 3d objects. More advanced
> software quickly appeared on the scene, including software toolkits that
> allowed users to modify every aspect of the interpreter to their liking.
>
Reality filters would run faster too, as less information would need to be
transmitted. If bandwidth is unlimited it won't make any difference
however.
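A reality filter is basically a client-side rendering hook: the scene data
comes in unchanged, and the interpreter is swapped out. A toy sketch (all
names invented):

```python
# Sketch of a "reality filter": the hacked interpreter replaces the
# stock renderer without touching the scene data it receives.
# Everything here is invented for illustration.

def default_render(entity):
    # Stock interpreter: draw the full 3d model.
    return f"3d:{entity['model']}"

def icon_filter(entity):
    # Hacked interpreter: ignore the 3d model, draw a flat icon.
    return f"icon:{entity['name']}"

def render_scene(entities, render=default_render):
    # The scene is the same either way; only the renderer changes.
    return [render(e) for e in entities]

scene = [{"name": "chair", "model": "chair.obj"},
         {"name": "lamp",  "model": "lamp.obj"}]
```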
> As simsense (first as simsampling, then later as ASIST interfaces) began to
> hit the market and be incorporated into the EMS standards, the editors
> changed along with the new technology, forming the basis for modern custom
> interfaces.
>
> Since EMS is essentially no different from a 20th century block of HTML
> code, it is typically a trivial matter to ignore the visual representations,
> or assign a new one based on the underlying ICS data. This is analogous to
> looking at the source code for a 20th century web page - it's easy to ignore
> bogus links or spot things that may be hidden after the data is interpreted
> and rendered.
>
Mostly correct. I won't nitpick.
> Nodes
> All systems of this type are built as "nodes" - discrete symbolic
> representations of various system functions. These nodes are connected to
> each other by datalines, which are simply representations of data flow
> between the systems.
>
This doesn't make sense necessarily. Here's a good example of why
not: recall Jurassic Park where at the end the little girl is trying to
shut down something using what she laughably calls 'unix'. It takes her
forever to navigate through the VR-like operating system to get to the
shutdown command, a good several minutes.
This was particularly funny because if you were actually running unix
you'd just type the command and it would happen. Additionally, if you
know where a file is on your hard drive, you just go there. If you know
where a webpage is, you just go there. There's no need for links at
all. The problem is this makes decking less like the dungeon-crawl type
of fun the game designers were looking for. Sometimes reality should be
thrown out the window for the sake of fun.
> System Operations
> Because of the design of these systems it is not possible to perform every
> system operation in a specific node. A user must be in the correct "part" of
> the system in order to execute certain operations.
>
The way this works in RL is that the user must be the correct class of
user. Doesn't matter where they are, if they don't have the correct
permissions they can't do certain actions.
> Connections
> Because of the way systems of this type are laid out, some nodes cannot
> connect to other types of nodes. This was perhaps one of the biggest
> limitations of the standard, and a primary reason it was abandoned.
>
Can't see how this would happen. Again, it's probably because I'm stuck in
the world of how things work today; we're talking 60 years from
now. Currently you can prevent users from going to certain places, but
not restrict the connections themselves.
File permissions are both user-based and file-based (and system-based but
that's not important for this example). A user can run certain commands,
and a file will allow certain users to run certain commands on them. It's
a combination of the two.
Using a webpage analogy, you can both block people from accessing a
certain site using a password and change file permissions so that even
once they're through they might not necessarily be able to see the
files. Don't know if that makes any sense or not.
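Here's roughly what I mean by the combination of the two checks, as a toy
model (simplified well past real Unix permissions; the users, classes, and
paths are made up):

```python
# Toy model of the two-sided check: what the user's class is allowed
# to do at all, AND what the file grants that class. All names here
# are invented for illustration.

users = {"alice": {"class": "admin"},
         "bob":   {"class": "guest"}}

# User-based side: which commands each class may run anywhere.
class_commands = {"admin": {"read", "write", "shutdown"},
                  "guest": {"read"}}

# File-based side: which classes each file grants each action to.
files = {"/etc/config": {"read": {"admin"}, "write": {"admin"}},
         "/pub/readme": {"read": {"admin", "guest"},
                         "write": {"admin"}}}

def allowed(user, action, path):
    user_class = users[user]["class"]
    # 1. User-based check: can this class run the command at all?
    if action not in class_commands[user_class]:
        return False
    # 2. File-based check: does the file grant this class that action?
    return user_class in files[path].get(action, set())
```

Note that where the user "is" in the system never enters into it; only who
they are and what the file says.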
> Perception
> Another odd part of the standard was the intentional "shortening" of
> perceptions in the system. It was not possible to access systems as a whole
> in the old standard. One could only "see" the nodes directly connected to
> one's own. Even systems running non-virtual interfaces are limited to
> accessing one node at a time, like an old-style dungeon-crawl game. Only a
> successful Analyze Host operation from the CPU node will reveal the entire
> "system map".
>
This wouldn't work, as in my Jurassic Park example above. We're talking
about taking functionality that operates as fast as you can type and
slowing it down to taking several minutes for no obvious practical
gain. Again, that makes the game less fun so it's a toss up in my mind.
> Dataline
> Datalines are simply the representation of connections between nodes.
> Essentially they "teleport" users between nodes. Datalines are not nodes and
> it is not possible to be "in" a dataline for game purposes.
>
I would add that you could also jump to any node down the line assuming
you ditch the dungeon-crawl concept. In the interest of fun I'm not sure
that I would.
> I/O Ports (I/OP)
> An I/OP is a limited-access node that connects the host to various slave
> devices. Most I/OP nodes represent connections for dozens, or even hundreds,
> of individual devices. I/OPs can only control fairly simple devices (simple
> terminals, soykaf makers, etc).
>
An I/O port could also be a datajack. It would be hard to restrict the
types of objects able to connect through them; networks generally don't
care what's connected to them as long as whatever it is can communicate.
Dunno if that helps, but that's what I think, for what it's worth.
Drew Curtis President DCR.NET (502)226-3376
Local Internet access: Frankfort Lawrenceburg Shelbyville Owenton
Louisville Lexington Versailles Nicholasville Midway
http://www.fark.com: If it's not news, it's fark.