The use of multiple senses in interactive applications has become increasingly feasible due to the upsurge of commercial, off-the-shelf devices to produce sensory effects. Creating Multiple Sensorial Media (MulSeMedia) immersive systems requires understanding their digital ecosystem.
The results showed that by pre-processing sensory effects metadata before real-time communication, and by selecting the appropriate protocol, the response time of networked event-based mulsemedia systems can decrease markedly. Technological advances in computing have allowed multimedia systems to create more immersive experiences for users. Beyond the traditional senses of sight and hearing, researchers have observed that the use of smell, taste, and touch in such systems is becoming increasingly well-received, leading to a new category of multimedia systems called mulsemedia (multiple sensorial media) systems. In parallel, these systems introduce heterogeneous technologies to deliver different sensory effects such as lighting, wind, vibration, and smell, under varied conditions and restrictions. This paradigm shift poses many challenges, mainly related to mulsemedia integration, delay, responsiveness, sensory effect intensities, wearable and other heterogeneous devices for delivering sensory effects, and the remote delivery of mulsemedia components.
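The pre-processing idea above can be sketched as follows: parse the sensory effect metadata once, ahead of playback, so that real-time dispatch reduces to a lookup instead of an XML parse. This is a minimal Python sketch; the XML shape and attribute names are simplified stand-ins, not the exact MPEG-V schema.

```python
# Sketch: pre-parse sensory-effect metadata (SEM) before playback so
# real-time dispatch is a dictionary lookup, not an XML parse.
# Element/attribute names are illustrative, not the MPEG-V schema.
import xml.etree.ElementTree as ET

SEM_XML = """
<SEM>
  <Effect type="wind"  start="0"    intensity="0.4"/>
  <Effect type="light" start="500"  intensity="0.9"/>
  <Effect type="wind"  start="1200" intensity="0.7"/>
</SEM>
"""

def preprocess(sem_xml):
    """Parse once, index effects by start time in milliseconds."""
    timeline = {}
    for eff in ET.fromstring(sem_xml).iter("Effect"):
        t = int(eff.get("start"))
        timeline.setdefault(t, []).append(
            {"type": eff.get("type"), "intensity": float(eff.get("intensity"))}
        )
    return timeline

timeline = preprocess(SEM_XML)
# At playback time, dispatch per tick is a constant-time lookup:
print(timeline[500])  # [{'type': 'light', 'intensity': 0.9}]
```

The same idea applies regardless of transport protocol: the cost of parsing is paid once before the time-critical path begins.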
The first experiment was conducted using separate regions of the human tongue to record occurrences of basic taste sensations and their respective intensity levels. The results indicate occurrences of sour, salty, bitter, and sweet sensations from different regions of the tongue. One of the major discoveries of this experiment was that the sweet taste emerges via an inverse-current mechanism, which warrants further research. The second study was conducted to compare natural and artificial (virtual) sour taste sensations and examine the possibility of effectively controlling the artificial sour taste at three intensity levels (mild, medium, and strong). The proposed method is attractive since it does not require any chemical solutions and opens further research opportunities in several directions, including human-computer interaction, virtual reality, food and beverage, as well as medicine.
The proposed ontology model consists of effect ontology and device ontology. The effect ontology represents semantic information about multimedia contents and sensory effects synchronized with the contents. The device ontology represents semantic information about devices generating sensory effects. By using the model, we can infer sensory effects and their attributes from the predefined semantic information of multimedia contents.
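The two-part model can be illustrated with a toy sketch in which all names are invented: the effect ontology links a content annotation to sensory effects and their attributes, the device ontology links effect types to the devices declaring that capability, and a rendering plan is inferred by joining the two.

```python
# Toy sketch (all names invented) of inference over the two ontologies:
# effect ontology maps content annotations to sensory effects;
# device ontology maps effect types to capable devices.
EFFECT_ONTOLOGY = {
    "storm_scene": [{"effect": "wind", "intensity": "strong"},
                    {"effect": "light", "pattern": "flash"}],
}

DEVICE_ONTOLOGY = {
    "wind":  ["fan_A"],
    "light": ["led_strip_B"],
}

def infer_rendering(annotation):
    """Infer (device, effect-attributes) pairs for a content annotation."""
    plan = []
    for eff in EFFECT_ONTOLOGY.get(annotation, []):
        for device in DEVICE_ONTOLOGY.get(eff["effect"], []):
            plan.append((device, eff))
    return plan

print(infer_rendering("storm_scene"))
```

A real implementation would express both ontologies in OWL and use a reasoner rather than dictionary joins, but the inference pattern is the same.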
This is followed by a discussion of current technical and design challenges that could support the implementation of this concept. This discussion has informed the VTE framework (VTEf), which integrates different layers of experiences, including the role of each user and the technical challenges involved.
When SE ontologies are built and used in isolation, some problems remain, in particular those related to knowledge integration. The goal of this paper is to provide an integrated solution for better dealing with KM-related problems in SE by means of a Software Engineering Ontology Network (SEON). SEON is designed with mechanisms for easing the development and integration of SE domain ontologies.
In this chapter, we present ontology design patterns (ODPs), which are reusable modeling solutions that encode modeling best practices. ODPs are the main tool for performing pattern-based design of ontologies, which is an approach to ontology development that emphasizes reuse and promotes the development of a common “language” for sharing knowledge about ontology design best practices. We put specific focus on content ODPs (CPs) and show how they can be used within a particular methodology.
Mulsemedia systems encompass a set of applications and devices of different types assembled to communicate or express feelings from the virtual world to the real world. Despite existing standards, tools, and recent research devoted to them, there is still a lack of formal and explicit representation of what mulsemedia is. Misconceptions could eventually lead to the construction of solutions that do not take into account reuse, integration, and standardization, among other design features. In this paper, we propose to establish a common conceptualization of mulsemedia systems through a reference ontology, named MulseOnto, covering their main notions.
An experimental method was used to model the influence of exploration on perception, considering the application case. MulSeMedia refers to the combination of traditional media (e.g., text, image, and video) with other objects that aim to stimulate other human senses, such as mechanoreceptors, chemoreceptors, and thermoreceptors. Existing solutions embed the control of actuators in the applications, thus limiting their reuse in other types of applications or different media players. This work presents PlaySEM, a platform that brings a new approach for simulating and rendering sensory effects that operates independently of any media player and is compatible with the MPEG-V standard, while taking the reutilization requirement into account. Conjectures regarding this architecture are tested, focusing on the decoupled operation of the renderer.
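The decoupled operation can be illustrated with a minimal sketch (names invented): the media player only publishes timestamped effect events to a message bus, and a player-agnostic renderer consumes them. In a deployment like PlaySEM's the bus would be a real network protocol (e.g., UDP or MQTT) rather than the in-process queue used here.

```python
# Minimal sketch of a decoupled sensory-effect renderer: the player
# publishes events; the renderer knows actuators, not media players.
from queue import Queue

class SensoryEffectRenderer:
    """Player-agnostic renderer; a real one would drive fans, lights, etc."""
    def __init__(self):
        self.log = []

    def render(self, event):
        self.log.append(f"{event['type']}@{event['time_ms']}ms")

bus = Queue()  # stands in for a network protocol between player and renderer

# Any media player can publish without linking against the renderer:
bus.put({"type": "vibration", "time_ms": 100})
bus.put({"type": "scent", "time_ms": 250})

renderer = SensoryEffectRenderer()
while not bus.empty():
    renderer.render(bus.get())
print(renderer.log)  # ['vibration@100ms', 'scent@250ms']
```

Because the only coupling is the event format on the bus, the renderer can be reused across players and even across application types.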
A media taxonomy has been developed to help students better understand the possible media forms they can use in presentations. This media taxonomy serves both the research and the development of multimedia applications. By applying the taxonomy in two graduate courses, one on multimedia design and one on the research and evaluation of interactive multimedia, the taxonomy was found to correlate well with previous categorizations of multimedia, in addition to helping the researchers better understand the impact and value added by an individual medium in a multimedia presentation.
Recently, multimedia researchers have added several so-called new media to the traditional multimedia components (e.g., olfaction, haptics, and gustation). Evaluating users' perceived multimedia Quality of Experience (QoE) is already non-trivial, and the addition of multisensorial media components increases this challenge.
Although important, all affective multimedia databases have numerous deficiencies that impair their applicability. These problems, which are brought forward in the paper, result in low recall and precision of multimedia stimuli retrieval, which makes creating emotion elicitation procedures difficult and labor-intensive. To address these issues, a new core ontology, STIMONT, is introduced. STIMONT is written in the OWL-DL formalism and extends the W3C EmotionML format with an expressive and formal representation of affective concepts, high-level semantics, stimuli document metadata, and the elicited physiology.
Lua components are used to translate high-level sensory effect attributes into MPEG-V SEM (Sensory Effect Metadata) files. A sensory effect simulator was developed to receive SEM files and simulate mulsemedia application rendering. This paper describes a long-term research program on developing ontological foundations for conceptual modeling. This program, organized around the theoretical background of the foundational ontology UFO (Unified Foundational Ontology), aims at developing theories, methodologies, and engineering tools with the goal of advancing conceptual modeling as a theoretically sound discipline, but also one that has concrete and measurable practical implications. The paper describes the historical context in which UFO was conceived, briefly discusses its stratified organization, and reports on a number of applications of this foundational ontology over more than a decade.
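The translation step can be sketched as follows (in Python rather than Lua, and with element and attribute names that only approximate the MPEG-V Part 3 schema): high-level effect attributes are serialized into a SEM-like XML fragment that a renderer or simulator can consume.

```python
# Sketch of translating high-level effect attributes into a simplified
# SEM-like XML fragment. Element/attribute names approximate, but are
# not identical to, the MPEG-V Part 3 Sensory Effect Metadata schema.
import xml.etree.ElementTree as ET

def to_sem(effects):
    """Serialize a list of high-level effect dicts to SEM-like XML."""
    root = ET.Element("SEM")
    for eff in effects:
        ET.SubElement(root, "Effect", {
            "type": eff["type"],
            "start": str(eff["start_ms"]),
            "intensity-value": str(eff["intensity"]),  # normalized 0.0-1.0
        })
    return ET.tostring(root, encoding="unicode")

xml = to_sem([{"type": "WindType", "start_ms": 0, "intensity": 0.5}])
print(xml)
```

Keeping the authoring side in terms of high-level attributes and generating the SEM serialization mechanically is what lets the same content target any MPEG-V-compatible renderer.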
The Vocktail system utilizes three common sensory modalities, taste, smell, and visual (color), to create virtual flavors and augment the existing flavors of a beverage. The system is coupled with a mobile application that enables users to create customized virtual flavor sensations by configuring each of the stimuli via Bluetooth. The system consists of a cocktail glass that is seamlessly fused into a 3D printed structure, which holds the electronic control module, three scent cartridges, and three micro air-pumps. When a user drinks from the system, the visual (RGB light projected on the beverage), taste (electrical stimulation at the tip of the tongue), and smell stimuli (emitted by micro air-pumps) are combined to create a virtual flavor sensation, thus altering the flavor of the beverage.
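A hypothetical sketch of the kind of configuration frame the mobile application might send to the glass over Bluetooth follows; all field names, value ranges, and the byte layout are invented for illustration and are not the system's actual protocol.

```python
# Hypothetical sketch (fields and layout invented): one RGB color for
# the light projected on the beverage, one tongue-stimulation level,
# and per-cartridge pump duty cycles for the three scent cartridges.
from dataclasses import dataclass

@dataclass
class FlavorConfig:
    rgb: tuple          # (r, g, b), each 0-255
    taste_level: int    # electrical stimulation level, e.g. 0-3
    scent_duty: tuple   # % duty cycle per micro air-pump, 0-100 each

    def encode(self):
        """Pack into a compact 7-byte frame for a BLE characteristic."""
        return bytes([*self.rgb, self.taste_level, *self.scent_duty])

frame = FlavorConfig((255, 40, 0), 2, (80, 0, 10)).encode()
print(frame.hex())  # ff28000250000a
```

A fixed-width byte frame like this keeps each configuration update within a single BLE write, which matters for low-latency stimulus changes while the user is drinking.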