Please use this identifier to cite or link to this item:
http://hdl.handle.net/20.500.12207/526
Full metadata record
DC Field | Value | Language
dc.contributor.author | Jota, Ricardo | -
dc.contributor.author | Araújo, Bruno | -
dc.contributor.author | Bruno, Luís | -
dc.contributor.author | Pereira, João | -
dc.contributor.author | Jorge, Joaquim | -
dc.date.accessioned | 2013-10-23T14:36:05Z | -
dc.date.available | 2013-10-23 | -
dc.date.available | 2013-10-23T14:36:05Z | -
dc.date.issued | 2010-06 | -
dc.identifier.uri | http://hdl.handle.net/20.500.12207/526 | -
dc.description.abstract | IMMIView is an interactive system that relies on multiple modalities and multi-user interaction to support collaborative design review. It was designed to offer natural interaction in visualization setups such as large-scale displays, head-mounted displays or TabletPC computers. To support architectural design, our system provides content creation and manipulation, 3D scene navigation and annotations. Users can interact with the system using laser pointers, speech commands, body gestures and mobile devices. In this paper, we describe how we designed the system to meet architectural user requirements. In particular, our system takes advantage of multiple modalities to provide natural interaction for design review. We also propose a new graphical user interface adapted to architectural user tasks, such as navigation or annotation. The interface relies on a novel stroke-based interaction supported by simple laser pointers as input devices for large-scale displays. Furthermore, input modalities such as speech and body tracking allow IMMIView to support multiple users. Moreover, they allow each user to select different modalities according to personal preference and each modality's suitability for the task at hand. We present a multi-modal fusion system developed to support multi-modal commands in a collaborative, co-located environment, i.e. with two or more users interacting at the same time on the same system. The fusion system listens to input from all IMMIView modules in order to model user actions and issue commands. The modalities are fused by a simple rule-based sub-module developed for IMMIView and presented in this paper. We also present a user evaluation of IMMIView. The results show that users feel comfortable with the system and suggest that they prefer the multi-modal approach over more conventional interactions, such as mouse and menus, for the architectural tasks presented. | pt
dc.language.iso | eng | pt
dc.rights | closedAccess | pt
dc.subject | Mixed reality | pt
dc.subject | Design review | pt
dc.subject | Human–computer interaction | pt
dc.subject | Real-time collaborative interaction | pt
dc.subject | Virtual reality | pt
dc.subject.classification | Indexação Scopus | pt
dc.subject.classification | Indexação Web of Science | pt
dc.title | IMMIView: a multi-user solution for design review in real-time | pt
dc.type | article | pt
dc.peerreviewed | yes | pt
dc.relation.publisherversion | http://dx.doi.org/10.1007/s11554-009-0141-1 | pt
degois.publication.firstPage | 91 | pt
degois.publication.lastPage | 107 | pt
degois.publication.title | Journal of Real-Time Image Processing | pt
degois.publication.volume | 5(2) | pt
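
The abstract above describes a rule-based sub-module that fuses events from several input modalities (laser pointers, speech, body gestures, mobile devices) into per-user commands. As a rough illustration only, a minimal Python sketch of such rule-based multi-modal fusion might look like the following; the event fields, the rules, the 1.5-second window and all names are hypothetical assumptions, not taken from the paper.

import time
from dataclasses import dataclass, field

@dataclass
class ModalityEvent:
    user_id: int       # which co-located user produced the event (assumed field)
    modality: str      # e.g. "laser", "speech", "gesture", "mobile"
    token: str         # recognized symbol, e.g. "annotate", "point"
    timestamp: float = field(default_factory=time.time)

# Each rule: a set of (modality, token) pairs that, observed together from
# the same user within the time window, fuse into one command. Illustrative
# rules only; the paper's actual rule set is not reproduced here.
FUSION_RULES = [
    ({("speech", "annotate"), ("laser", "point")}, "create_annotation"),
    ({("speech", "navigate"), ("gesture", "forward")}, "move_camera"),
]

WINDOW_SECONDS = 1.5  # assumed fusion window; the abstract does not give one

class RuleBasedFusion:
    def __init__(self):
        self.buffer = []  # recent ModalityEvent objects from all modules

    def feed(self, event):
        """Add an event, drop stale ones, return fused (user, command) pairs."""
        now = event.timestamp
        self.buffer = [e for e in self.buffer
                       if now - e.timestamp <= WINDOW_SECONDS]
        self.buffer.append(event)
        fired = []
        for pairs, command in FUSION_RULES:
            # Check the rule per user so that two co-located users
            # interacting at the same time cannot cross-trigger it.
            for uid in {e.user_id for e in self.buffer}:
                seen = {(e.modality, e.token)
                        for e in self.buffer if e.user_id == uid}
                if pairs <= seen:
                    fired.append((uid, command))
                    # Consume the events that satisfied the rule.
                    self.buffer = [e for e in self.buffer
                                   if e.user_id != uid
                                   or (e.modality, e.token) not in pairs]
        return fired

# Example: user 1 says "annotate" while pointing a laser at the display.
fusion = RuleBasedFusion()
fusion.feed(ModalityEvent(1, "speech", "annotate"))
print(fusion.feed(ModalityEvent(1, "laser", "point")))  # [(1, 'create_annotation')]

Keying each rule on the user as well as the modality keeps two co-located users from triggering each other's commands, matching the multi-user, same-system scenario the abstract describes.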
Appears in Collections: D-ENG - Artigos em revistas com peer review

Files in This Item:
There are no files associated with this item.



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.