In this paper, we have discussed the use of VRML as a tool in scientific visualisation, paying particular attention to some of the enhancements in version 2.0 of the language. However, we have not yet utilised many of its other new features. In this final section, then, we shall briefly discuss ways in which these could be employed, before going on to describe areas in which we believe the language could be further improved for the benefit of the scientific visualisation community.
All of the dynamics that have been incorporated into our example worlds have used interpolators. These work well for simple changes to parts of the scene, but cannot be used for more complex behaviour, such as the dynamic loading of a series of scenes which represent time steps from a simulation. Such behaviour could perhaps be incorporated into the scene via scripting, although this would depend on the details of the scripting language supported by the browser--for example, Java is more flexible than JavaScript, but has a longer startup time.
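To make the pattern concrete, a minimal sketch of interpolator-driven animation in VRML 2.0 is given below; the node names (Clock, Path, Probe) and the keyframe values are purely illustrative.

    DEF Clock TimeSensor {
      cycleInterval 5.0                    # one animation cycle lasts 5 seconds
      loop TRUE
    }
    DEF Path PositionInterpolator {
      key      [ 0.0, 0.5, 1.0 ]
      keyValue [ 0 0 0,  0 2 0,  0 0 0 ]   # simple up-and-down motion
    }
    DEF Probe Transform {
      children Shape { geometry Sphere { radius 0.1 } }
    }
    ROUTE Clock.fraction_changed TO Path.set_fraction
    ROUTE Path.value_changed     TO Probe.set_translation

Anything beyond this kind of keyframed change, such as fetching and swapping in the geometry for a new time step, falls outside what the interpolator nodes can express, which is where scripting comes in.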
Another possibility would be to incorporate behaviour into the scene as `visualisation nodes', which would be encoded in the scripting language. Thus, it might be possible to incorporate (say) a node which generates an isosurface through a 3D scalar dataset. The viewer of the scene would be able to select a threshold value and have the isosurface recalculated in the scene. This client-side calculation is to be contrasted with the Visualisation Web Server, which is built around a CGI script through which the client passes instructions to the server about the visualisation to be created. The visualisation is downloaded, as a static 3D scene, onto the client machine. Each change to the visualisation (e.g., a new value for an isosurface threshold) requires a round trip to the server. Downloading the scene together with instructions for modifying it (in the form of scripting nodes which would be invoked locally on the client) might lead to a more efficient use of local and network resources.
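As a sketch of how such a node might look, the following wraps a Script node in a prototype; the IsoSurface name, its fields and the Isosurface.class implementation referred to in the url field are all hypothetical.

    PROTO IsoSurface [
      field    SFString dataUrl   ""       # location of the 3D scalar dataset
      eventIn  SFFloat  set_threshold      # new threshold chosen by the viewer
      eventOut MFNode   surface_changed    # recalculated isosurface geometry
    ] {
      Script {
        url "Isosurface.class"             # hypothetical client-side implementation
        field    SFString dataUrl          IS dataUrl
        eventIn  SFFloat  set_threshold    IS set_threshold
        eventOut MFNode   surface_changed  IS surface_changed
      }
    }

The surface_changed event could then be routed to the set_children event of a Group node in the scene, so that each threshold chosen by the viewer replaces the isosurface geometry without a return trip to the server.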
Other enhancements in VRML 2.0 include the new graphics nodes, such as Extrusion. As mentioned above, this is used to create curved surfaces such as ribbons and tubes, which find extensive use in vector field visualisation and the display of molecules. Depth-cueing can now be added to a VRML scene through the use of the Fog node; as is well known, this can be helpful in providing an enhanced sense of 3D structure to scenes, especially when they are rendered in wireframe.
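For instance, a streamline tube can be swept by extruding a closed cross-section along the line's sample points, and depth-cueing switched on with a single Fog node; the coordinates and colours below are purely illustrative.

    Fog {
      color 0.8 0.8 0.8
      fogType "LINEAR"
      visibilityRange 50.0      # geometry fades to the fog colour over 50 m
    }
    Shape {
      appearance Appearance { material Material { diffuseColor 0.2 0.4 1.0 } }
      geometry Extrusion {
        # octagonal approximation to a circular cross-section
        crossSection [ 0.1 0, 0.07 0.07, 0 0.1, -0.07 0.07,
                       -0.1 0, -0.07 -0.07, 0 -0.1, 0.07 -0.07, 0.1 0 ]
        # spine taken from (illustrative) streamline sample points
        spine [ 0 0 0, 1 0.5 0, 2 1.2 0.3, 3 1.5 0.8 ]
      }
    }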
The improvements in the mechanism for prototyping and sharing new nodes could be exploited immediately. The ability to do this is important for (at least) two reasons. In the first place, it allows for more efficient organisation of scenes (as illustrated by our SphereSet example above). Secondly, it opens up the possibility of re-using the work of others. For example, consider the creation of a user-defined Axis node, whose characteristics are defined in terms of a small set of parameters (e.g., starting and finishing values, number of divisions and labels). Publishing this node on the Web would allow it to be incorporated into other scenes (possibly created by other users) with minimum effort.
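A skeleton for such a prototype is sketched below; the Axis name and its parameters are our own, hypothetical, choices, and a full implementation would generate the tick marks and labels (for instance via a Script node reading these fields).

    PROTO Axis [
      field SFFloat  start      0.0      # value at the first tick
      field SFFloat  end        1.0      # value at the last tick
      field SFInt32  divisions  10       # number of tick marks
      field MFString label      [ "x" ]  # axis label
    ] {
      Group {
        children [
          # Axis line only; ticks, numbering and the label would be added here.
          Shape {
            geometry IndexedLineSet {
              coord Coordinate { point [ 0 0 0, 1 0 0 ] }
              coordIndex [ 0 1 -1 ]
            }
          }
        ]
      }
    }

A scene elsewhere on the Web could then pull the definition in with an EXTERNPROTO declaration pointing at its URL, and instance it like a built-in node, e.g. Axis { start 0 end 100 divisions 5 }.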
Finally, we note, but do not discuss, the possibility of using sound in VRML 2.0 for data sonification.
Although VRML 2.0 represents a significant increase in functionality over the earlier version of the language, presenting a wider range of options for the scientific visualiser (some of which have been discussed in this paper), it is still possible to imagine ways in which the language could be further enhanced. We note that these suggestions (many of which arise naturally from consideration of useful features in existing visualisation applications [5],[6],[7],[8]) are domain-specific, although some may turn out to be requirements for other application areas as well. The main thrust of current development for the next version of VRML is apparently the support of multiple users; this would no doubt have a significant impact on those parts of scientific visualisation which have a requirement for collaborative work.
Some of our suggestions appear to be comparatively simple to implement; for example, it should be possible to set the line thickness and point size in a scene. Other enhancements would probably require a good deal of work on the part of the language developers and the browser builders. Thus, for example, it is well known that annotation is an indispensable part of visualisation. Although labels which always turn to face the viewer can be incorporated into scenes via the new Billboard node in VRML 2.0 (see, for example, the way it is used to display the frequency of vibration in the scene shown in Figure 2), there is still no support for captions or titles. These elements of a visualisation live in a `screen space', as opposed to the `world space' that the rest of the geometry inhabits; that is, they are unaffected by changes in the viewpoint. In other visualisation systems [2], they are implemented by allowing for multiple cameras in the scene, one in each space. We note in passing that such an enhancement would be of use to other areas which make use of VRML (many of which, such as games, are more popular and receive more attention than scientific visualisation). For example, locating elements of the scene in screen space would allow for the incorporation of items such as dashboard controls which indicate speed or location in the world.
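The world-space labelling that is already possible is at least straightforward to write; a Billboard label of the kind used in Figure 2 might look as follows (the text and sizes here are illustrative), although, as discussed above, it still inhabits world space rather than screen space.

    Billboard {
      axisOfRotation 0 0 0        # 0 0 0 means the label always faces the viewer
      children Shape {
        appearance Appearance { material Material { diffuseColor 1 1 1 } }
        geometry Text {
          string [ "Frequency: 440 Hz" ]
          fontStyle FontStyle { size 0.5 justify "MIDDLE" }
        }
      }
    }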
There are still further ways in which the language could be extended. One of the most useful features of Open Inventor [1] is the way in which new nodes can be defined along with methods for displaying and outputting them. Such a mechanism has been used in the past to create new visualisation nodes (such as textured smoke for flow volumes [24]) which can then be incorporated into Inventor-based applications such as IRIS Explorer. If this could be incorporated into VRML, it might lead to still greater use of this important technology for distributing and sharing 3D on the Web.