Publications
ViRVIG

Journals
Blanco, Rafael; Patow, Gustavo A.; Pelechano, Nuria
Scientific Reports, Vol. 14, Num. 2694, pp 1--17, 2024.
DOI: http://dx.doi.org/10.1038/s41598-024-52903-w
Current statistical models to simulate pandemics miss the most relevant information about the close interactions between individuals, which are the key aspect of virus spread. Thus, they lack a proper visualization of such interactions and their impact on virus spread. In the field of computer graphics, and more specifically in computer animation, there have been many crowd simulation models to populate virtual environments. However, the focus has typically been on simulating reasonable paths between random or semi-random locations on a map, without any possibility of analyzing specific individual behavior. We propose a crowd simulation framework to accurately simulate the interactions in a city environment at the individual level, with the purpose of recording and analyzing the spread of human diseases. By simulating the whereabouts of agents throughout the day, mimicking the actual activities of a population in their daily routines, we can accurately predict the location and duration of interactions between individuals, thus obtaining a model that can reproduce the spread of the virus due to human-to-human contact. Our results show the potential of our framework to closely simulate virus spread based on real agent-to-agent contacts. We believe that this could become a powerful tool for policymakers to make informed decisions in future pandemics and to better communicate the impact of such decisions to the general public.
Cortez, Alexandra; Vázquez, Pere-Pau; Sánchez-Espigares, J.A.
Heliyon, Cell Press, Vol. 10, Num. 18, 2024.
DOI: http://dx.doi.org/10.1016/j.heliyon.2024.e37608
During the last few years, Bike Sharing Systems (BSS) have become a popular means of transportation in several cities across the world, owing to their low costs and associated advantages. Citizens have adopted these systems as they help improve their health and contribute to creating more sustainable cities. However, customer satisfaction and the willingness to use the systems are directly affected by the ease of access to the docking stations and finding available bikes or slots. Therefore, system operators and managers' major responsibilities focus on urban and transport planning by improving the rebalancing operations of their BSS. Many approaches can be considered to overcome the unbalanced station problem, but predicting the number of arrivals and departures at the docking stations has been proven to be one of the most efficient. In this paper, we study the features that influence the prediction of bikes' arrivals and departures in Barcelona BSS, using a Random Forest model and a one-year data period. We considered features related to the weather, the stations' characteristics, and the facilities available within a 200-meter diameter of each station, called spatial features. The results indicate that features related to specific months, as well as temperature, pressure, altitude, and holidays, have a strong influence on the model, while spatial features have a small impact on the prediction results.
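The "spatial features" described above amount to counting the facilities that fall within a fixed distance of each docking station. A minimal sketch of such a computation in pure Python (hypothetical function names and coordinates, not the paper's code):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    R = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def facilities_near(station, facilities, radius_m=100.0):
    """Count facilities inside a 200-meter-diameter circle around a station."""
    return sum(1 for f in facilities
               if haversine_m(station[0], station[1], f[0], f[1]) <= radius_m)

# Hypothetical coordinates near Barcelona: one POI ~55 m away, one ~850 m away
station = (41.3874, 2.1686)
pois = [(41.3878, 2.1690), (41.3950, 2.1700)]
print(facilities_near(station, pois))  # 1
```

Each such count becomes one input column of the Random Forest model alongside the weather and station features.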
Huang, K.; Ding, G.; Yan, D.; Tang, R.; Huang, T.; Pelechano, Nuria
Computers & Graphics, Vol. 124, 2024.
DOI: http://dx.doi.org/10.1016/j.cag.2024.104051
This study introduces a novel framework for choreographing multi-degree of freedom (MDoF) behaviors in large-scale crowd simulations. The framework integrates multi-objective optimization with spatio-temporal ordering to effectively generate and control diverse MDoF crowd behavior states. We propose a set of evaluation criteria for assessing the aesthetic quality of crowd states and employ multi-objective optimization to produce crowd states that meet these criteria. Additionally, we introduce time offset functions and interpolation progress functions to perform complex and diversified behavior state interpolations. Furthermore, we designed a user-centric interaction module that allows for intuitive and flexible adjustments of crowd behavior states through sketching, spline curves, and other interactive means. Qualitative tests and quantitative experiments on the evaluation criteria demonstrate the effectiveness of this method in generating and controlling MDoF behaviors in crowds. Finally, case studies, including real-world applications in the Opening Ceremony of the 2022 Beijing Winter Olympics, validate the practicality and adaptability of this approach.
Javadiha, Mohammadreza; Andújar, Carlos; Calvanese, Michele; Lacasa, Enrique; Moyés, Jordi; Pontón, Jose Luis; Susin, Antonio; Wang, Jiabo
Padel scientific journal, Vol. 2, Num. 1, pp 89--106, 2024.
DOI: http://dx.doi.org/10.17398/2952-2218.2.89
Recent advances in computer vision and deep learning techniques have opened new possibilities regarding the automatic labeling of sport videos. However, an essential requirement for supervised techniques is the availability of accurately labeled training datasets. In this paper we present PadelVic, an annotated dataset of an amateur padel match which consists of multi-view video streams, estimated positional data for all four players within the court (and for one of the players, accurate motion capture data of his body pose), as well as synthetic videos specifically designed to serve as training sets for neural networks estimating positional data from videos. For the recorded data, player positions were estimated by applying a state-of-the-art pose estimation technique to one of the videos, which yields a relatively small positional error (M=16 cm, SD=13 cm). For one of the players, we used a motion capture system providing the orientation of the body parts with an accuracy of 1.5º RMS. The highest accuracy though comes from our synthetic dataset, which provides ground-truth positional and pose data of virtual players animated with the motion capture data. As an example application of the synthetic dataset, we present a system for a more accurate prediction of the center-of-mass of the players projected onto the court plane, from a single-view video of the match. We also discuss how to exploit per-frame positional data of the players for tasks such as synergy analysis, collective tactical analysis, and player profile generation.
Liarokapis, F.; Milata, V.; Pontón, Jose Luis; Pelechano, Nuria; Zacharatos, H.
IEEE Computer Graphics and Applications, Vol. 44, Num. 4, pp 79--88, 2024.
DOI: http://dx.doi.org/10.1109/MCG.2024.3406139
Recent developments in extended reality (XR) are already demonstrating the benefits of this technology in the educational sector. Unfortunately, educators may not be familiar with XR technology and may find it difficult to adopt this technology in their classrooms. This article presents the overall architecture and objectives of an EU-funded project dedicated to XR for education, called Extended Reality for Education (XR4ED). The goal of the project is to provide a platform, where educators will be able to build XR teaching experiences without the need to have programming or 3-D modeling expertise. The platform will provide the users with a marketplace to obtain, for example, 3-D models, avatars, and scenarios; graphical user interfaces to author new teaching environments; and communication channels to allow for collaborative virtual reality (VR). This article describes the platform and focuses on a key aspect of collaborative and social XR, which is the use of avatars. We show initial results on a) a marketplace which is used for populating educational content into XR environments, b) an intelligent augmented reality assistant that communicates between nonplayer characters and learners, and c) self-avatars providing nonverbal communication in collaborative VR.
Molina, Elena; Kouřil, D.; Isenberg, T.; Kozlíková, B.; Vázquez, Pere-Pau
Computers & Graphics (special issue VCBM), Vol. 124, 2024.
DOI: http://dx.doi.org/10.1016/j.cag.2024.104059
Understanding the packing of long DNA strands into chromatin is one of the ultimate challenges in genomic research. An intrinsic part of this complex problem is studying the chromatin’s spatial structure. Biologists reconstruct 3D models of chromatin from experimental data, yet the exploration and analysis of such 3D structures is limited in existing genomic data visualization tools. To improve this situation, we investigated the current options of immersive methods and designed a prototypical VR visualization tool for 3D chromatin models that leverages virtual reality to deal with the spatial data. We showcase the tool in three primary use cases. First, we provide an overall 3D shape overview of the chromatin to facilitate the identification of regions of interest and the selection for further investigation. Second, we include the option to export the selected regions and elements in the BED format, which can be loaded into common analytical tools. Third, we integrate epigenetic modification data along the sequence that influence gene expression, either as in-world 2D charts or overlaid on the 3D structure itself. We developed our application in collaboration with two domain experts and gathered insights from two informal studies with five other experts.
Digital 3D models for medieval heritage: diachronic analysis and documentation of its architecture and paintings
Munoz-Pandiella, Imanol; Bosch, Carles; Guardia, Milagros; Cayuela, Begoña; Pogliani, Paola; Bordi, Giulia; Paschali, Maria; Andújar, Carlos; Charalambous, Panayiotis
Personal and Ubiquitous Computing, 2024.
DOI: http://dx.doi.org/10.1007/s00779-024-01816-6
In this paper, we discuss the requirements and technical challenges within the EHEM project, Enhancement of Heritage Experiences: The Middle Ages, an ongoing research program for the acquisition, analysis, documentation, interpretation, digital restoration, and communication of medieval artistic heritage. The project involves multidisciplinary teams comprising art historians and visual computing experts. Despite the vast literature on digital 3D models in support of Cultural Heritage, the field is so rich and diverse that specific projects often imply distinct, unique requirements which often challenge the computational technologies and suggest new research opportunities. As good representatives of such diversity, we describe the three monuments that serve as test cases for the project, all of them with a rich history of architecture and paintings. We discuss the art historians’ view of how digital models can support their research, the expertise and technological solutions adopted so far, as well as the technical challenges in multiple areas spanning geometry and appearance acquisition, color analysis and digital restitution, as well as the representation of the profound transformations due to the alterations suffered over the centuries.
A 3D feature-based approach for mapping scaling effects on stone monuments
Munoz-Pandiella, Imanol; Pueyo, Xavier; Bosch, Carles
ACM Journal on Computing and Cultural Heritage, 2024.
DOI: http://dx.doi.org/10.1145/3651988
Weathering effects caused by physical, chemical, or biological processes result in visible damages that alter the appearance of stones’ surfaces. Consequently, weathered stone monuments can offer a distorted perception of the artworks to the point of making their interpretation misleading. Being able to detect and monitor decay is crucial for restorers and curators to perform important tasks such as identifying missing parts, assessing the preservation state, or evaluating curating strategies. Decay mapping, the process of identifying weathered zones of artworks, is essential for preservation and research projects. This is usually carried out by marking the affected parts of the monument on a 2D drawing or picture of it. One of the main problems of this methodology is that it is manual work based only on experts’ observations. This makes the process slow and often results in disparities between the mappings of the same monument made by different experts. In this paper, we focus on the weathering effect known as “scaling”, following the ICOMOS ISCS definition. We present a novel technique for detecting, segmenting, and classifying these effects on stone monuments. Our method is user-friendly, requiring minimal user input. By analyzing 3D reconstructed data considering geometry and appearance, the method identifies scaling features and segments weathered regions, classifying them by scaling subtype. It shows improvements over previous approaches and is well-received by experts, representing a significant step towards objective stone decay mapping.
Pagès, Anna; Pueyo, Xavier; Munoz-Pandiella, Imanol
ACM Journal on Computing and Cultural Heritage, 2024.
DOI: http://dx.doi.org/10.1145/3652860
An important challenge of Digital Cultural Heritage is to contribute to the recovery of artworks with their original shape and appearance. Many altarpieces, which are very relevant Christian art elements, have been damaged and/or partly or fully lost. Therefore, the only way to recover them is to carry out their digital reconstruction. Although the procedure that we present here is valid for any altarpiece with similar characteristics, and even for other akin elements, our test bench is the altarpieces damaged, destroyed, or lost during the Spanish Civil War (1936-1939) in Catalonia, where most suffered these effects. The first step of our work has been the classification of these artworks into different categories on the basis of their degree of destruction and of the available visual information related to each one. This paper proposes, for the first time to our knowledge, a workflow for the virtual reconstruction, through photogrammetry, digital modeling, and digital color restoration, of whole altarpieces partially preserved with very little visual information. Our case study is the Rosary’s altarpiece of the Sant Pere Màrtir de Manresa church. Currently, this altarpiece is partially preserved in fragments in the Museu Comarcal de Manresa (Spain). However, it cannot be reassembled physically owing to the lack of space (the original church no longer exists) and the cost of such an operation. Thus, there is no solution other than the digital one to contemplate and study the altarpiece as a whole. The reconstruction that we provide allows art historians and the general public to virtually see the altarpiece complete and assembled as it was until 1936. The results obtained also allow us to see in detail the reliefs and ornaments of the altarpiece with their digitally restored color.
Pujol, Eduard; Chica, Antoni
Computers & Graphics, Vol. 122, pp 103981, 2024.
DOI: http://dx.doi.org/10.1016/j.cag.2024.103981
Signed distance fields (SDFs) have emerged as an alternative shape representation for real-time collision detection and lighting effects. Computing these for complex models can be expensive, so one popular approach is to prepare an approximation via sampling and interpolation. Then, these may be rendered using sphere marching, which gets close to the surface quickly, but needs several iterations to converge to it. In this paper, we propose an alternative that computes the intersection of a given ray and the surface analytically at a narrow band. This may be combined with other enhancements like having variable error for the approximation depending on the distance to the surface and skipping regions that do not contain the surface to accelerate the outer band ray traversal while reducing the required memory. To achieve smoother representations with minimal computational cost, we propose a method for computing surface intersections and normals from separate interpolants. We evaluate all these to find the optimal combination improving the rendering performance and memory consumption of these SDF approximations.
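For context, the iterative sphere-marching baseline that this paper improves upon can be sketched in a few lines. This is purely illustrative (the function names are mine, and the paper's contribution is the analytic narrow-band intersection, not this loop):

```python
import math

def sphere_march(sdf, origin, direction, max_dist=100.0, eps=1e-4, max_steps=256):
    """Step along the ray by the SDF value at the current point.
    That step is always safe: no surface can be closer than the distance value.
    Assumes `direction` is normalized. Returns the hit distance t, or None."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:        # close enough to the surface: report a hit
            return t
        t += d
        if t > max_dist:   # ray left the scene without hitting anything
            return None
    return None

# Example: unit sphere centered at the origin, ray shot from z = -3
sphere = lambda p: math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0
hit = sphere_march(sphere, (0.0, 0.0, -3.0), (0.0, 0.0, 1.0))  # hit == 2.0
```

Near grazing angles this loop takes many tiny steps, which is exactly the cost that an analytic ray/interpolant intersection in a narrow band avoids.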
Deep weathering effects
Adrien Verhulst; Jean-Marie Normand; Guillaume Moreau; Patow, Gustavo A.
Computers & Graphics, Vol. 112, pp 40--49, 2023.
DOI: http://dx.doi.org/10.1016/j.cag.2023.03.006
Alonso, Jesús; Joan Arinyo, Robert; Chica, Antoni
Computers & Graphics, Vol. 114, pp 306--315, 2023.
DOI: http://dx.doi.org/10.1016/j.cag.2023.06.019
We present a novel proposal for modeling complex dynamic terrains that offers real-time rendering, dynamic updates and physical interaction of entities simultaneously. We can capture any feature from landscapes including tunnels, overhangs and caves, and we can conduct a total destruction of the terrain. Our approach is based on a Constructive Solid Geometry tree, where a set of spheres are subtracted from a base Digital Elevation Model. Erosions on terrain are easily and efficiently carried out with a spherical sculpting tool with pixel-perfect accuracy. Real-time rendering performance is achieved by applying a one-direction CPU–GPU communication strategy and using the standard depth and stencil buffer functionalities provided by any graphics processor.
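The core CSG idea above, a base elevation model with sculpting spheres subtracted from it, reduces to a simple inside/outside test. A toy sketch with a hypothetical DEM and one carved sphere (not the paper's implementation):

```python
import math

def height(x, z):
    """Toy digital elevation model (hypothetical)."""
    return 2.0 + 0.5 * math.sin(x)

# (center, radius) of the spheres subtracted by the sculpting tool
carved = [((1.0, 1.5, 0.0), 0.8)]

def inside_terrain(p):
    """CSG difference: base DEM solid minus the carved spheres."""
    x, y, z = p
    if y > height(x, z):
        return False  # above the terrain surface
    for (cx, cy, cz), r in carved:
        if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 < r * r:
            return False  # inside a subtracted sphere: carved away
    return True

print(inside_terrain((1.0, 1.5, 0.0)))  # False: at the center of a carved sphere
print(inside_terrain((0.0, 1.0, 0.0)))  # True: solid ground below the surface
```

The paper's contribution is making this representation renderable and updatable in real time on the GPU; the sketch only shows the underlying set operation.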
Charalambous, Panayiotis; Pettrè, Julien; Vassiliades, Vassilis; Chrysanthou, Yiorgos; Pelechano, Nuria
ACM Transactions on Graphics, Vol. 42, Num. 4, pp 15, 2023.
DOI: http://dx.doi.org/10.1145/3592459
Simulating crowds with realistic behaviors is a difficult but very important task for a variety of applications. Quantifying how a person balances between different conflicting criteria such as goal seeking, collision avoidance and moving within a group is not intuitive, especially if we consider that behaviors differ largely between people. Inspired by recent advances in Deep Reinforcement Learning, we propose Guided REinforcement Learning (GREIL) Crowds, a method that learns a model for pedestrian behaviors which is guided by reference crowd data. The model successfully captures behaviors such as goal seeking, being part of consistent groups without the need to define explicit relationships, and wandering around seemingly without a specific purpose. Two fundamental concepts are important in achieving these results: (a) the per-agent state representation and (b) the reward function. The agent state is a temporal representation of the situation around each agent. The reward function is based on the idea that people try to move into situations/states in which they feel comfortable. Therefore, in order for agents to stay in a comfortable state space, we first obtain a distribution of states extracted from real crowd data; then we evaluate states based on how much of an outlier they are compared to such a distribution. We demonstrate that our system can capture and simulate many complex and subtle crowd interactions in varied scenarios. Additionally, the proposed method generalizes to unseen situations, generates consistent behaviors and does not suffer from the limitations of other data-driven and reinforcement learning approaches.
Cortez, Alexandra; Vázquez, Pere-Pau; Sanchez-Espigares, J.A.
Heliyon, Cell Press, Vol. 9, 2023.
DOI: http://dx.doi.org/10.1016/j.heliyon.2023.e20129
Public Bicycle Sharing Systems (BSS) have spread in many cities over the last decade. The need for analysis tools to predict behavior or estimate balancing needs has fostered a wide set of approaches that consider many variables. Often, these approaches use a single scenario to evaluate their algorithms, and little is known about the applicability of such algorithms in BSS of different sizes. In this paper, we evaluate the performance of widely known prediction algorithms for three scenarios of different sizes: a small system, with around 20 docking stations, a medium-sized one, with 400+ docking stations, and a large one, with more than 1500 stations. The results show that Prophet and Random Forest are the prediction algorithms with the most consistent results, and that small systems often do not provide enough data for the algorithms to perform well.
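As a hedged illustration of the evaluation setup implied above (a chronological train/test split plus an error metric), here is a trivial last-value baseline in pure Python. It is not one of the models compared in the paper, just the kind of reference forecast that any proper predictor (Prophet, Random Forest, ...) should beat:

```python
def mae(pred, actual):
    """Mean absolute error between predictions and observations."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

def naive_forecast(series, horizon):
    """Repeat the last observed value for the whole forecast horizon."""
    return [series[-1]] * horizon

# Hypothetical hourly departures at one docking station
history = [5, 7, 9, 12, 10, 8, 6, 7]
train, test = history[:6], history[6:]   # chronological split: no future leakage
pred = naive_forecast(train, len(test))
print(mae(pred, test))  # 1.5 -> the baseline error a real model must beat
```

With only ~20 stations' worth of data, even strong models struggle to beat such baselines, which is consistent with the paper's finding about small systems.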
Kuták, D.; Vázquez, Pere-Pau; Isenberg, T.; Krone, M.; Baaden, M.; Byska, J.; Kozlíková, B.; Miao, H.
Computer Graphics Forum, Vol. 42, Num. 6, 2023.
DOI: http://dx.doi.org/10.1111/cgf.14738
Visualization plays a crucial role in molecular and structural biology. It has been successfully applied to a variety of tasks, including structural analysis and interactive drug design. While some of the challenges in this area can be overcome with more advanced visualization and interaction techniques, others are challenging primarily due to the limitations of the hardware devices used to interact with the visualized content. Consequently, visualization researchers are increasingly trying to take advantage of new technologies to facilitate the work of domain scientists. Some typical problems associated with classic 2D interfaces, such as regular desktop computers, are a lack of natural spatial understanding and interaction, and a limited field of view. These problems could be solved by immersive virtual environments and corresponding hardware, such as virtual reality head-mounted displays. Thus, researchers are investigating the potential of immersive virtual environments in the field of molecular visualization. There is already a body of work ranging from educational approaches to protein visualization to applications for collaborative drug design. This review focuses on molecular visualization in immersive virtual environments as a whole, aiming to cover this area comprehensively. We divide the existing papers into different groups based on their application areas and types of tasks performed. Furthermore, we also include a list of available software tools. We conclude the report with a discussion of potential future research on molecular visualization in immersive environments.
Molina, Elena; Vázquez, Pere-Pau
Graphical Models, Vol. 128, 2023.
DOI: http://dx.doi.org/10.1016/j.gmod.2023.101183
One of the key interactions in 3D environments is target acquisition, which can be challenging when targets are small or in cluttered scenes. Here, incorrect elements may be selected, leading to frustration and wasted time. The accuracy is further hindered by the physical act of selection itself, typically involving pressing a button. This action reduces stability, increasing the likelihood of erroneous target acquisition. We focused on molecular visualization and on the challenge of selecting atoms, rendered as small spheres. We present two techniques that improve upon previous progressive selection techniques. They facilitate the acquisition of neighbors after an initial selection, providing a more comfortable experience compared to using classical ray-based selection, particularly with occluded elements. We conducted a pilot study followed by two formal user studies. The results indicated that our approaches were highly appreciated by the participants. These techniques could be suitable for other crowded environments as well.
Orellana, Bernat; Monclús, Eva; Navazo, Isabel; Bendezú, Álvaro; Malagelada, Carolina; Azpiroz, Fernando
Diagnostics, Vol. 13, Num. 5, 2023.
DOI: http://dx.doi.org/10.3390/diagnostics13050910
The analysis of colonic contents is a valuable tool for the gastroenterologist and has multiple applications in clinical routine. When considering magnetic resonance imaging (MRI) modalities, T2 weighted images are capable of segmenting the colonic lumen, whereas fecal and gas contents can only be distinguished in T1 weighted images. In this paper, we present an end-to-end quasi-automatic framework that comprises all the steps needed to accurately segment the colon in T2 and T1 images and to extract and quantify colonic content and morphology data. As a consequence, physicians have gained new insights into the effects of diets and the mechanisms of abdominal distension.
Pontón, Jose Luis; Ceballos, Victor; Acosta, Lesly; Rios, Àlex; Monclús, Eva; Pelechano, Nuria
Virtual Reality, pp 20, 2023.
DOI: http://dx.doi.org/10.1007/s10055-023-00821-z
In the era of the metaverse, self-avatars are gaining popularity, as they can enhance presence and provide embodiment when a user is immersed in Virtual Reality. They are also very important in collaborative Virtual Reality to improve communication through gestures. Whether we are using a complex motion capture solution or a few trackers with inverse kinematics (IK), it is essential to have a good match in size between the avatar and the user, as otherwise mismatches in self-avatar posture could be noticeable for the user. To achieve such a correct match in dimensions, a manual process is often required, with the need for a second person to take measurements of body limbs and introduce them into the system. This process can be time-consuming, and prone to errors. In this paper, we propose an automatic measuring method that simply requires the user to do a small set of exercises while wearing a Head-Mounted Display (HMD), two hand controllers, and three trackers. Our work provides an affordable and quick method to automatically extract user measurements and adjust the virtual humanoid skeleton to the exact dimensions. Our results show that our method can reduce the misalignment produced by the IK system when compared to other solutions that simply apply a uniform scaling to an avatar based on the height of the HMD, and make assumptions about the locations of joints with respect to the trackers.
SparsePoser: Real-Time Full-Body Motion Reconstruction from Sparse Data
Pontón, Jose Luis; Yun, Haoran; Aristidou, A.; Andújar, Carlos; Pelechano, Nuria
ACM Transactions on Graphics, Vol. 43, Num. 1, pp 1--14, 2023.
DOI: http://dx.doi.org/10.1145/3625264
Accurate and reliable human motion reconstruction is crucial for creating natural interactions of full-body avatars in Virtual Reality (VR) and entertainment applications. As the Metaverse and social applications gain popularity, users are seeking cost-effective solutions to create full-body animations that are comparable in quality to those produced by commercial motion capture systems. In order to provide affordable solutions though, it is important to minimize the number of sensors attached to the subject’s body. Unfortunately, reconstructing the full-body pose from sparse data is a heavily under-determined problem. Some studies that use IMU sensors face challenges in reconstructing the pose due to positional drift and ambiguity of the poses. In recent years, some mainstream VR systems have released 6-degree-of-freedom (6-DoF) tracking devices providing positional and rotational information. Nevertheless, most solutions for reconstructing full-body poses rely on traditional inverse kinematics (IK) solutions, which often produce non-continuous and unnatural poses. In this paper, we introduce SparsePoser, a novel deep learning-based solution for reconstructing a full-body pose from a reduced set of six tracking devices. Our system incorporates a convolutional-based autoencoder that synthesizes high-quality continuous human poses by learning the human motion manifold from motion capture data. Then, we employ a learned IK component, made of multiple lightweight feed-forward neural networks, to adjust the hands and feet towards the corresponding trackers. We extensively evaluate our method on publicly available motion capture datasets and with real-time live demos. We show that our method outperforms state-of-the-art techniques using IMU sensors or 6-DoF tracking devices, and can be used for users with different body dimensions and proportions.
Pujol, Eduard; Chica, Antoni
Computer Graphics Forum, Vol. 42, Num. 6, pp e14861, 2023.
DOI: http://dx.doi.org/10.1111/cgf.14861
We present an acceleration structure to efficiently query the Signed Distance Field (SDF) of volumes represented by triangle meshes. The method is based on a discretization of space. In each node, we store the triangles defining the SDF behaviour in that region. Consequently, we reduce the cost of the nearest triangle search, prioritizing query performance, while avoiding approximations of the field. We propose a method to conservatively compute the set of triangles influencing each node. Given a node, each triangle defines a region of space such that all points inside it are closer to a point in the node than the triangle is. This property is used to build the SDF acceleration structure. We do not need to explicitly compute these regions, which is crucial to the performance of our approach. We prove the correctness of the proposed method and compare it to similar approaches, confirming that our method produces faster query times than other exact methods.
Pujol, Eduard; Chica, Antoni
Computers & Graphics, Vol. 114, pp 337--346, 2023.
DOI: http://dx.doi.org/10.1016/j.cag.2023.06.020
In this paper, we present an adaptive structure to represent a signed distance field through trilinear or tricubic interpolation of values and derivatives, which allows for fast querying of the field. We also provide a method to decide when to subdivide a node to achieve a given threshold error. Both the numerical error control and the values needed to build the interpolants require the evaluation of the input field. Still, both are designed to minimize the total number of evaluations. C0 continuity is guaranteed for both the trilinear and tricubic versions of the algorithm. Furthermore, we describe how to preserve C0 continuity between nodes of different levels when using a tricubic interpolant, and provide a proof that this property is maintained. Finally, we illustrate the usage of our approach in several applications, including direct rendering using sphere marching.
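The trilinear variant described above rests on standard trilinear interpolation of the eight corner values of a node. A minimal sketch of that building block (the function name is mine; this is not the paper's adaptive structure):

```python
def trilerp(c, x, y, z):
    """Trilinearly interpolate the eight corner values c[i][j][k]
    (i maps to x, j to y, k to z, each index 0 or 1)
    at local node coordinates x, y, z in [0, 1]."""
    # collapse the x axis
    c00 = c[0][0][0] * (1 - x) + c[1][0][0] * x
    c01 = c[0][0][1] * (1 - x) + c[1][0][1] * x
    c10 = c[0][1][0] * (1 - x) + c[1][1][0] * x
    c11 = c[0][1][1] * (1 - x) + c[1][1][1] * x
    # collapse the y axis
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    # collapse the z axis
    return c0 * (1 - z) + c1 * z

# A linear field f(x, y, z) = x + y + z is reproduced exactly
corners = [[[i + j + k for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]
print(trilerp(corners, 0.5, 0.25, 0.75))  # 1.5
```

The adaptive scheme subdivides a node whenever this interpolant's error against the true field exceeds the user-provided threshold; the tricubic version additionally uses derivatives at the corners.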
Rafieian, B.; Hermosilla, Pedro; Vázquez, Pere-Pau
Journal of Applied Sciences, special issue AI Applied to Data Visualization, Vol. 13, Num. 17, 2023.
DOI: http://dx.doi.org/10.3390/app13179967
In data science and visualization, dimensionality reduction techniques have been extensively employed for exploring large datasets. These techniques involve the transformation of high-dimensional data into reduced versions, typically in 2D, with the aim of preserving significant properties from the original data. Many dimensionality reduction algorithms exist, and nonlinear approaches such as the t-SNE (t-Distributed Stochastic Neighbor Embedding) and UMAP (Uniform Manifold Approximation and Projection) have gained popularity in the field of information visualization. In this paper, we introduce a simple yet powerful manipulation for vector datasets that modifies their values based on weight frequencies. This technique significantly improves the results of the dimensionality reduction algorithms across various scenarios. To demonstrate the efficacy of our methodology, we conduct an analysis on a collection of well-known labeled datasets. The results demonstrate improved clustering performance when attempting to classify the data in the reduced space. Our proposal presents a comprehensive and adaptable approach to enhance the outcomes of dimensionality reduction for visual data exploration.
Clustered voxel real-time global illumination
Alejandro Cosin Ayerbe; Patow, Gustavo A.
Computers & Graphics, Vol. 103, pp 75--89, 2022.
DOI: http://dx.doi.org/10.1016/j.cag.2022.01.005
Andújar, Carlos; Brunet, Pere; Chica, Antoni; Navazo, Isabel; Vinacua, Àlvar
CAD Computer Aided Design, Vol. 152, Num. 103370, pp 1--11, 2022.
DOI: http://dx.doi.org/10.1016/j.cad.2022.103370
Herb Voelcker and his research team laid the foundations of Solid Modelling, on which Computer-Aided Design is based. He founded the ambitious Production Automation Project, which included Constructive Solid Geometry (CSG) as the basic 3D geometric representation. CSG trees were compact and robust, saving a memory space that was scarce in those times. But the main computational problem was Boundary Evaluation: the process of converting CSG trees to Boundary Representations (BReps) with explicit faces, edges and vertices for manufacturing and visualization purposes. This paper presents some glimpses of the history and evolution of some ideas that started with Herb Voelcker. We briefly describe the path from "localization and boundary evaluation" to "localization and printing", with many intermediate steps driven by hardware, software and new mathematical tools: voxel and volume representations, triangle meshes, and many others, observing also that in some applications voxel models no longer require Boundary Evaluation. In this last case, we consider the current research challenges and discuss several avenues for further research.
Balius, R.; Pujol, M.; Pérez-Cuenca, D.; Morros, C.; Susin, Antonio; Corominas, H.; Sala-Blanch, X.
Clinical Anatomy, Vol. 35, Num. 4, pp 482--491, 2022.
DOI: http://dx.doi.org/10.1002/ca.23828
We hypothesize that the sciatic nerve in the subgluteal space has a specific behavior during internal and external coxofemoral rotation and during isometric contraction of the internal and external rotator muscles of the hip. In 58 healthy volunteers, sciatic nerve behavior was studied by ultrasound during passive internal and external hip rotation movements and during isometric contraction of internal and external rotators. Using MATLAB software, changes in nerve curvature at the beginning and end of each exercise were evaluated for longitudinal catches, and axial movement was evaluated for transverse catches. In the long axis, it was observed that during passive internal rotation and during isometric contraction of the external rotators the curvature increased significantly, while during passive external rotation and isometric contraction of the internal rotators the curvature flattened out. During passive movements in internal rotation, on the short axis, the nerve tended to move laterally and forward, while during external rotation the tendency of the nerve was to move toward a medial and backward position. During the isometric exercises, this displacement was smaller than during the passive movements. Passive movements of hip rotation and isometric contraction of the muscles affect the sciatic nerve in the subgluteal space. Retrotrochanteric pain may be related to both the shear effect of the subgluteus muscles and the endoneural and mechanosensitive aggression to which the sciatic nerve is subjected.
Beacco, Alejandro; Gallego, J.; Slater, M.
The Visual Computer, pp 1--16, 2022.
DOI: http://dx.doi.org/10.1007/s00371-022-02669-x
This work deals with the automatic 3D reconstruction of objects from frontal RGB images. This aims at a better understanding of the reconstruction of 3D objects from RGB images and their use in immersive virtual environments. We propose a complete workflow that can be easily adapted to almost any other family of rigid objects. To explain and validate our method, we focus on guitars. First, we detect and segment the guitars present in the image using semantic segmentation methods based on convolutional neural networks. In a second step, we perform the final 3D reconstruction of the guitar by warping the rendered depth maps of a fitted 3D template in 2D image space to match the input silhouette. We validated our method by obtaining guitar reconstructions from real input images and renders of all guitar models available in the ShapeNet database. Numerical results for different object families were obtained by computing standard mesh evaluation metrics such as Intersection over Union, Chamfer Distance, and the F-score. The results of this study show that our method can automatically generate high-quality 3D object reconstructions from frontal images using various segmentation and 3D reconstruction techniques.
Comino, Marc; Vinacua, Àlvar; Carruesco, Alex; Chica, Antoni; Brunet, Pere
Computer-Aided Design, Vol. 146, pp 103189, 2022.
DOI: http://dx.doi.org/10.1016/j.cad.2021.103189
Slicing a model (computing thin slices of a geometric or volumetric model with a sweeping plane) is necessary for several applications ranging from 3D printing to medical imaging. This paper introduces a technique designed to compute these slices efficiently, even for huge and complex models. We voxelize the volume of the model at a required resolution and show how to encode this voxelization in an out-of-core octree using a novel Sweep Encoding linearization. This approach allows for efficient slicing with bounded cost per slice. We discuss specific applications, including 3D printing, and compare these octrees’ performance against the standard representations in the literature.
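The basic plane-sweep idea behind slicing a voxelization can be illustrated on a dense grid. The paper's contribution is doing this out-of-core with a Sweep-Encoded octree; the sketch below is only a toy dense-grid version, with `voxelize_sphere` as a hypothetical stand-in for voxelizing a real model:

```python
import numpy as np

def voxelize_sphere(res):
    # Toy stand-in for model voxelization: a solid unit sphere on a res^3 grid.
    ax = np.linspace(-1.0, 1.0, res)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    return x**2 + y**2 + z**2 <= 1.0

def slices(voxels):
    # Sweep a plane along the z axis, yielding one 2D occupancy mask per slice.
    for k in range(voxels.shape[2]):
        yield voxels[:, :, k]

vox = voxelize_sphere(64)
areas = [int(s.sum()) for s in slices(vox)]  # occupied voxels per slice
```

For 3D printing, each yielded mask corresponds to the material deposited at one layer; the out-of-core encoding in the paper serves to produce these slices with bounded cost without holding the full grid in memory.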
Cortez, A.; Sánchez, J.A.; Vázquez, Pere-Pau
Computers and Graphics, Vol. 109, pp 30--41, 2022.
DOI: http://dx.doi.org/10.1016/j.cag.2022.09.009
The last two decades have exhibited a profound transformation of traditional urban mobility patterns, partly due to the exponential growth in both the number and popularity of public bicycle sharing systems (BSS). Analysis and visualization of the data generated by BSSs have become of special interest to municipalities to evaluate the effect of their mobility programs and offer integrated urban mobility solutions. In this paper, we present a visualization system that aims to assist city officials from small and medium cities in their decision-making process with an intuitive representation of BSS data. It has been developed, tested, and evaluated together with officials and domain experts from the city of Logroño (Spain). Our tool presents usage information with different time granularities (yearly, monthly, weekly, or seasonally), shows traffic flows between stations, and provides an in-depth breakdown of user data such as registered address, traveled distance, or gender-based patterns.
Assessing Multi-Site rs-fMRI-Based Connectomic Harmonization Using Information Theory
Facundo Roffet; Claudio Delrieux; Patow, Gustavo A.
Brain Science, Vol. 12, Num. 9, pp 1219, 2022.
DOI: http://dx.doi.org/10.3390/brainsci12091219
Gómez, J.; Vázquez, Pere-Pau
Special issue, 2022.
DOI: http://dx.doi.org/10.3390/app12115664
The comparison of documents—such as article or patent search, bibliography recommendation systems, visualization of document collections, etc.—has a wide range of applications in several fields. One of the key tasks that such problems have in common is the evaluation of a similarity metric. Many such metrics have been proposed in the literature. Lately, deep learning techniques have gained a lot of popularity. However, it is difficult to analyze how those metrics perform against each other. In this paper, we present a systematic empirical evaluation of several of the most popular similarity metrics when applied to research articles. We analyze the results of those metrics in two ways, with a synthetic test that uses scientific papers and Ph.D. theses, and in a real-world scenario where we evaluate their ability to cluster papers from different areas of research.
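As a point of reference for what a document similarity metric looks like, a classical bag-of-words cosine baseline (a common comparison point in such evaluations, not necessarily one of the paper's specific metrics) can be sketched as:

```python
from collections import Counter
import math

def cosine_sim(a: str, b: str) -> float:
    # Cosine similarity between term-frequency vectors of two documents.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Learned embeddings replace the term-frequency vectors with dense ones, but the final score is often still a cosine, which is what makes metrics of both families directly comparable.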
Hartwig, S.; Schelling, M.; van Onzenoot, C.; Vázquez, Pere-Pau; Hermosilla, Pedro; Ropinski, T.
Computer Graphics Forum, pp 1--14, 2022.
DOI: http://dx.doi.org/10.1111/cgf.14613
View quality measures compute scores for given views and are used to determine an optimal view in viewpoint selection tasks. Unfortunately, despite the wide adoption of these measures, they are rather based on computational quantities, such as entropy, than on human preferences. To instead tailor viewpoint measures towards humans, view quality measures need to be able to capture human viewpoint preferences. Therefore, we introduce a large-scale crowdsourced data set, which contains 58k annotated viewpoints for 3220 ModelNet40 models. Based on this data, we derive a neural view quality measure abiding to human preferences. We further demonstrate that this view quality measure not only generalizes to models unseen during training, but also to unseen model categories. We are thus able to predict view qualities for single images, and directly predict human preferred viewpoints for 3D models by exploiting point-based learning technology, without requiring to generate intermediate images or sampling the view sphere. We will detail our data collection procedure, describe the data analysis and model training and will evaluate the predictive quality of our trained viewpoint measure on unseen models and categories. To our knowledge, this is the first deep learning approach to predict a view quality measure solely based on human preferences.
Julio C. S. Jacques; Yağmur Güçlütürk; Marc Perez; Umut Güçlü; Andújar, Carlos; Xavier Baró; Hugo Jair Escalante; Isabelle Guyon; Marcel A. J. Van Gerven; Rob Van Lier; Sergio Escalera
IEEE Transactions on Affective Computing, Vol. 13, Num. 1, pp 75-95, 2022.
DOI: http://dx.doi.org/10.1109/TAFFC.2019.2930058
Personality analysis has been widely studied in psychology, neuropsychology, and signal processing fields, among others. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, by far speech and text have been the most considered cues of information for analyzing personality. However, recently there has been an increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and of the potential impact that this sort of method could have on society, we present in this paper an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, aspects of the subjectivity in data labeling/evaluation, as well as current datasets and challenges organized to push the research on the field, are reviewed.
Lemonari, M.; Blanco, Rafael; Charalambous, P.; Pelechano, Nuria; Avraamides, M.; Pettre, J.; Chrysanthou, Y.
Eurographics STAR, Vol. 41, Num. 2, 2022.
DOI: http://dx.doi.org/10.1111/cgf.14506
Recent advancements in crowd simulation unravel a wide range of functionalities for virtual agents, delivering highly-realistic, natural virtual crowds. Such systems are of particular importance to a variety of applications in fields such as: entertainment (e.g., movies, computer games); architectural and urban planning; and simulations for sports and training. However, providing their capabilities to untrained users necessitates the development of authoring frameworks. Authoring virtual crowds is a complex and multi-level task, varying from assuming control and assisting users to realise their creative intents, to delivering intuitive and easy-to-use interfaces, facilitating such control. In this paper, we present a categorisation of the authorable crowd simulation components, ranging from high-level behaviours and path-planning to local movements, as well as animation and visualisation. We provide a review of the most relevant methods in each area, emphasising the amount and nature of influence that the users have over the final result. Moreover, we discuss the currently available authoring tools (e.g., graphical user interfaces, drag-and-drop), identifying the trends of early and recent work. Finally, we suggest promising directions for future research that mainly stem from the rise of learning-based methods, and the need for a unified authoring framework.
Black hole algorithm with convolutional neural networks for the creation of brain-computer interface based in visual perception and visual imagery
Llorella, Fabio R.; José Azorín; Patow, Gustavo A.
Neural Computing and Applications, 2022.
DOI: http://dx.doi.org/10.1007/s00521-022-07542-5
Molina, Elena; Viale, L.; Vázquez, Pere-Pau
4th IEEE VIS Workshop on Visualization Guidelines, 2022.
DOI: http://dx.doi.org/10.1109/VisGuides57787.2022.00006
One way to illustrate distributions of samples is through the use of violin plots. The original design is a combination of a boxplot and a density plot, mirrored and plotted around the boxplot. However, there are other designs in the literature. Although violin plots seem a powerful way to illustrate distributions, the fact that they encode distributions makes them difficult to read. Users have problems comparing two different distributions, and certain basic statistics such as the mean can be difficult to estimate properly. To gain more insight into how people interpret violin plots, we have carried out an experiment to analyze how different configurations affect judgments of the values encoded in those plots.
Molina, Elena; Vázquez, Pere-Pau
Smart Tools and Applications in Graphics (STAG) 2022, EuroGraphics Digital Library, 2022.
DOI: http://dx.doi.org/10.2312/evp.20221121
Accurate selection in cluttered scenes is complex because a high amount of precision is required. In Virtual Reality environments, it is even worse, because it is more difficult to point at a small object with our arms in the air. Not only do our arms move slightly, but the button/trigger press further reduces our already weak stability. In this paper, we present two alternatives to classical ray pointing intended to facilitate the selection of atoms in molecular environments. We have implemented and analyzed such techniques through an informal user study and found that they were highly appreciated by the users. This selection method could be interesting in other crowded environments beyond molecular visualization.
Munoz-Pandiella, Imanol; Comino, Marc; Andújar, Carlos; Argudo, Oscar; Bosch, Carles; Chica, Antoni; Martinez, Beatriz
Computers & Graphics, Vol. 106, pp 174-186, 2022.
DOI: http://dx.doi.org/10.1016/j.cag.2022.06.003
High-end Terrestrial Lidar Scanners are often equipped with RGB cameras that are used to colorize the point samples. Some of these scanners produce panoramic HDR images by encompassing the information of multiple pictures with different exposures. Unfortunately, exported RGB color values are not in an absolute color space, and thus point samples with similar reflectivity values might exhibit strong color differences depending on the scan the sample comes from. These color differences produce severe visual artifacts if, as usual, multiple point clouds colorized independently are combined into a single point cloud. In this paper we propose an automatic algorithm to minimize color differences among a collection of registered scans. The basic idea is to find correspondences between pairs of scans, i.e. surface patches that have been captured by both scans. If the patches meet certain requirements, their colors should match in both scans. We build a graph from such pair-wise correspondences, and solve for the gain compensation factors that better uniformize color across scans. The resulting panoramas can be used to colorize the point clouds consistently. We discuss the characterization of good candidate matches, and how to find such correspondences directly on the panorama images instead of in 3D space. We have tested this approach to uniformize color across scans acquired with a Leica RTC360 scanner, with very good results.
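The gain-compensation step described above can be sketched as a least-squares problem over per-scan log-gains: each pair-wise correspondence constrains the ratio of two gains, and one scan is anchored to fix the global scale. The correspondence triples below are made-up illustrative data, not measurements from the paper:

```python
import numpy as np

# Hypothetical correspondences: (scan_i, scan_j, mean intensity of the
# shared patch as seen in scan i, and as seen in scan j).
pairs = [(0, 1, 0.80, 0.40), (1, 2, 0.50, 0.25), (0, 2, 0.80, 0.20)]
n = 3  # number of scans

# We want gain_i * c_i ~= gain_j * c_j for every correspondence, i.e.
# log g_i - log g_j = log c_j - log c_i, a linear system in the log-gains.
rows, rhs = [], []
for i, j, ci, cj in pairs:
    r = np.zeros(n)
    r[i], r[j] = 1.0, -1.0
    rows.append(r)
    rhs.append(np.log(cj) - np.log(ci))

# Anchor scan 0 to gain 1.0 so the system has a unique solution.
r0 = np.zeros(n)
r0[0] = 1.0
rows.append(r0)
rhs.append(0.0)

g = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
gains = np.exp(g)  # multiplicative per-scan correction factors
```

With the toy data above the corrected patch intensities agree across all three scans (gain_i * c_i = 0.8 for every scan); on real data the system is overdetermined and least squares distributes the residual color differences across the correspondence graph.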
Pontón, Jose Luis; Yun, Haoran; Andújar, Carlos; Pelechano, Nuria
ACM SIGGRAPH / Eurographics Symposium on Computer Animation (SCA'2022), 2022.
DOI: http://dx.doi.org/10.1111/cgf.14628
The animation of user avatars plays a crucial role in conveying their pose, gestures, and relative distances to virtual objects or other users. Consumer-grade VR devices typically include three trackers: the Head Mounted Display (HMD) and two handheld VR controllers. Since the problem of reconstructing the user pose from such sparse data is ill-defined, especially for the lower body, the approach adopted by most VR games consists of assuming the body orientation matches that of the HMD, and applying animation blending and time-warping from a reduced set of animations. Unfortunately, this approach produces noticeable mismatches between user and avatar movements. In this work we present a new approach to animate user avatars for current mainstream VR devices. First, we use a neural network to estimate the user’s body orientation based on the tracking information from the HMD and the hand controllers. Then we use this orientation together with the velocity and rotation of the HMD to build a feature vector that feeds a Motion Matching algorithm. We built a MoCap database with animations of VR users wearing an HMD and used it to test our approach on both self-avatars and other users’ avatars. Our results show that our system can provide a large variety of lower body animations while correctly matching the user orientation, which in turn allows us to represent not only forward movements but also stepping in any direction.
Rahmani, Vahid; Pelechano, Nuria
Computers & Graphics, Vol. 102, pp 164--174, 2022.
DOI: http://dx.doi.org/10.1016/j.cag.2021.08.020
Path finding for autonomous agents has been traditionally driven by finding optimal paths, typically by using A* search or any of its variants. When it comes to simulating virtual humanoids, traditional approaches rarely consider aspects of human memory or orientation. In this work, we propose a new path finding algorithm, inspired by current research regarding how the brain learns and builds cognitive maps. Our method represents the space as a hexagonal grid with counters, based on brain research that has investigated how memory cells are fired. Our path finder then combines a method for exploring unknown environments while building such a cognitive map, with an A* search using a modified heuristic that takes into account the cognitive map. The resulting paths show how as the agent learns the environment, the paths become shorter and more consistent with the optimal A* search. Moreover, we run a perceptual study to demonstrate that the viewers could successfully identify the intended level of knowledge of the simulated agents. This line of research could enhance the believability of autonomous agents’ path finding in video games and other VR applications.
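A toy version of a familiarity-biased A* can make the idea concrete. The paper uses a hexagonal grid with memory-cell-inspired counters; for brevity this sketch uses a 4-connected square grid, and the `alpha` inflation term is an illustrative assumption, not the authors' exact heuristic:

```python
import heapq

def astar(free, start, goal, visits, alpha=0.5):
    # free: set of walkable (x, y) cells.
    # visits: per-cell counters standing in for the agent's cognitive map
    # (higher count = more familiar); empty dict = unexplored environment.
    def h(p):
        unfamiliar = 1.0 / (1.0 + visits.get(p, 0))
        manhattan = abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        # Unfamiliar cells look more expensive, biasing the search
        # toward regions the agent already knows.
        return manhattan * (1.0 + alpha * unfamiliar)

    frontier = [(h(start), 0.0, start, [start])]
    closed = set()
    while frontier:
        _, g, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path
        if cur in closed:
            continue
        closed.add(cur)
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free and nxt not in closed:
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

free = {(x, y) for x in range(5) for y in range(5)}
path = astar(free, (0, 0), (4, 4), visits={})
```

As the visit counters grow along previously travelled routes, the inflation term shrinks there and the computed paths converge toward the plain A* optimum, mirroring the learning behaviour described above.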
Slater, M.; Banakou, D.; Beacco, Alejandro; Gallego, J.; Macia-Varela, F.; Oliva, R.
Frontiers in Virtual Reality, Vol. 3, Num. 914392, pp 1--16, 2022.
DOI: http://dx.doi.org/10.3389/frvir.2022.914392
We review the concept of presence in virtual reality, normally thought of as the sense of “being there” in the virtual world. We argued in a 2009 paper that presence consists of two orthogonal illusions that we refer to as Place Illusion (PI, the illusion of being in the place depicted by the VR) and Plausibility (Psi, the illusion that the virtual situations and events are really happening). Both are with the proviso that the participant in the virtual reality knows for sure that these are illusions. Presence (PI and Psi) together with the illusion of ownership over the virtual body that self-represents the participant, are the three key illusions of virtual reality. Copresence, togetherness with others in the virtual world, can be a consequence in the context of interaction between remotely located participants in the same shared virtual environments, or between participants and virtual humans. We then review several different methods of measuring presence: questionnaires, physiological and behavioural measures, breaks in presence, and a psychophysics method based on transitions between different system configurations. Presence is not the only way to assess the responses of people to virtual reality experiences, and we present methods that rely solely on participant preferences, including the use of sentiment analysis that allows participants to express their experience in their own words rather than be required to adopt the terminology and concepts of researchers. We discuss several open questions and controversies that exist in this field, providing an update to the 2009 paper, in particular with respect to models of Plausibility. We argue that Plausibility is the most interesting and complex illusion to understand and is worthy of significantly more research. Regarding measurement we conclude that the ideal method would be a combination of a psychophysical method and qualitative methods including sentiment analysis.
Slater, M.; Cabriera, C.; Senel, G.; Banakou, D.; Beacco, Alejandro; Oliva, R.; Gallego, J.
Virtual Reality, 2022.
DOI: http://dx.doi.org/10.1007/s10055-022-00685-9
We created a virtual reality version of a 1983 performance by Dire Straits, this being a highly complex scenario consisting of both the virtual band performance and the appearance and behaviour of the virtual audience surrounding the participants. Our goal was to understand the responses of participants, and to learn how this type of scenario might be improved for later reconstructions of other concerts. To understand the responses of participants we carried out two studies which used sentiment analysis of texts written by the participants. Study 1 (n = 25) (Beacco et al. in IEEE Virtual Reality: 538–545, 2021) had the unexpected finding that negative sentiment was caused by the virtual audience, where e.g. some participants were fearful of being harassed by audience members. In Study 2 (n = 26) notwithstanding some changes, the audience again led to negative sentiment—e.g. a feeling of being stared at. For Study 2 we compared sentiment with questionnaire scores, finding that the illusion of being at the concert was associated with positive sentiment for males but negative for females. Overall, we found sentiment was dominated by responses to the audience rather than the band. Participants had been placed in an unusual situation, being alone at a concert, surrounded by strangers, who seemed to pose a social threat for some of them. We relate our findings to the concept of Plausibility, the illusion that events and situations in the VR are really happening. The results indicate high Plausibility, since the negative sentiment, for example in response to being stared at, only makes sense if the events are experienced as actually happening. We conclude with the need for co-design of VR scenarios, and the use of sentiment analysis in this process, rather than sole reliance on concepts proposed by researchers, typically expressed through questionnaires, which may not reflect the experiences of participants.
van Onzenoodt, C.; Vázquez, Pere-Pau; Ropinski, T.
IEEE TVCG, pp 1--15, 2022.
DOI: http://dx.doi.org/10.1109/TVCG.2022.3216919
Exploring high-dimensional data is a common task in many scientific disciplines. To address this task, two-dimensional embeddings, such as t-SNE and UMAP, are widely used. While these determine the 2D position of data items, effectively encoding the first two dimensions, suitable visual encodings can be employed to communicate higher-dimensional features. To investigate such encodings, we have evaluated two commonly used glyph types, namely flower glyphs and star glyphs. To evaluate their capabilities for communicating higher-dimensional features in two-dimensional embeddings, we ran a large set of crowd-sourced user studies using real-world data obtained from data.gov. During these studies, participants completed a broad set of relevant tasks derived from related research. This paper describes the evaluated glyph designs, details our tasks and the quantitative study setup, and then discusses the results. Finally, we present insights and provide guidance on the choice of glyph encodings when exploring high-dimensional data.
Díaz, Jose; Fort, Marta; Vázquez, Pere-Pau
Special Issue EuroVis, Vol. 40, Num. 3, pp 531--542, 2021.
DOI: http://dx.doi.org/10.1111/cgf.14327
There are many multiple-stage racing competitions in various sports such as swimming, running, or cycling. The wide availability of affordable tracking devices facilitates monitoring the position of all participants along the race, even for non-professional contests. Getting real-time information about contenders is useful, but it also unleashes the possibility of creating more complex visualization systems that ease the understanding of the behavior of all participants during a single stage or throughout the whole competition. In this paper we focus on bicycle races, which are highly popular, especially in Europe, with the Tour de France being their greatest exponent. Current visualizations from TV broadcasting or real-time tracking websites are useful to understand the current stage status, up to a certain extent. Unfortunately, no current system exists that visualizes a whole multi-stage contest in such a way that users can interactively explore the relevant events of a single stage (e.g. breakaways, groups, virtual leadership…), as well as the full competition. In this paper, we present an interactive system that is useful both for aficionados and professionals to visually analyze the development of multi-stage cycling competitions.
Feature-based clustered geometry for interpolated Ray-casting
Gonzalez, Francisco; Martín, Ignacio; Patow, Gustavo A.
Computers & Graphics, Vol. 102, pp 175--186, 2021.
DOI: http://dx.doi.org/10.1016/j.cag.2021.08.019
Javadiha, Mohammadreza; Andújar, Carlos; Enrique Lacasa; Ángel Ric; Susin, Antonio
Sensors, Vol. 21, Num. 10, pp 1--17, 2021.
DOI: http://dx.doi.org/10.3390/s21103368
The estimation of player positions is key for performance analysis in sport. In this paper, we focus on image-based, single-angle, player position estimation in padel. Unlike tennis, the primary camera view in professional padel videos follows a de facto standard, consisting of a high-angle shot at about 7.6 m above the court floor. This camera angle reduces the occlusion impact of the mesh that stands over the glass walls, and offers a convenient view for judging the depth of the ball and the player positions and poses. We evaluate and compare the accuracy of state-of-the-art computer vision methods on a large set of images from both amateur videos and publicly available videos from the major international padel circuit. The methods we analyze include object detection, image segmentation and pose estimation techniques, all of them based on deep convolutional neural networks. We report accuracy and average precision with respect to manually-annotated video frames. The best results are obtained by top-down pose estimation methods, which offer a detection rate of 99.8% and a RMSE below 5 and 12 cm for horizontal/vertical court-space coordinates (deviations between predicted and ground-truth player positions). These results demonstrate the suitability of pose estimation methods based on deep convolutional neural networks for estimating player positions from single-angle padel videos. Immediate applications of this work include the player and team analysis of the large collection of publicly available videos from international circuits, as well as an inexpensive method to get player positional data in amateur padel clubs.
Classify four imagined objects with EEG signals
Llorella, Fabio R.; Eduardo Iañez; José Azorín; Patow, Gustavo A.
Evolutionary Intelligence, 2021.
DOI: http://dx.doi.org/10.1007/s12065-021-00577-y
A 3D digitisation workflow for architecture-specific annotation of built heritage
Marissia Deligiorgi; Maria I. Maslioukova; Melinos Averkiou; Andreas C. Andreou; Pratheba Selvaraju; Evangelos Kalogerakis; Patow, Gustavo A.; Yiorgos Chrysanthou; George Artopoulos
Journal of Archaeological Science: Reports, Vol. 37, pp 102787, 2021.
DOI: http://dx.doi.org/10.1016/j.jasrep.2020.102787
Rogla, Otger; Patow, Gustavo A.; Pelechano, Nuria
Computers & Graphics, Vol. 99, pp 83--99, 2021.
DOI: http://dx.doi.org/10.1016/j.cag.2021.06.014
Authoring meaningful crowds to populate a virtual city can be a cumbersome, time-consuming, and error-prone task. In this work, we present a new framework for authoring populated environments in an easier and faster way, relying on the use of procedural techniques. Our framework consists of the procedural generation of semantically-augmented virtual cities to drive the procedural generation and simulation of crowds. The main novelty lies in the generation of agendas for each individual inhabitant (alone or as part of a family) by using a rule-based grammar that combines city semantics with the autonomous persons’ characteristics. A new population or city can be authored by editing rule files with the flexibility of reusing, combining or extending the rules of previous populations. The results show how logical and consistent sequences of whereabouts can be easily generated for a crowd, providing a good starting point to bring virtual cities to life.
Schatz, K.; Franco, Juan Jose; Schäfer, M.; Rose, A.S.; Ferrario, V.; Pleiss, J.; Vázquez, Pere-Pau; Ertl, T.; Krone, M.
Computer Graphics Forum, Vol. 40, Num. 6, pp 394--408, 2021.
DOI: http://dx.doi.org/10.1111/cgf.14386
When studying protein-ligand interactions, many different factors can influence the behaviour of the protein as well as the ligands. Molecular visualisation tools typically concentrate on the movement of single ligand molecules; however, viewing only one molecule can merely provide a hint of the overall behaviour of the system. To tackle this issue, we do not focus on the visualisation of the local actions of individual ligand molecules but on the influence of a protein and their overall movement. Since the simulations required to study these problems can have millions of time steps, our presented system decouples visualisation and data preprocessing: our preprocessing pipeline aggregates the movement of ligand molecules relative to a receptor protein. For data analysis, we present a web-based visualisation application that combines multiple linked 2D and 3D views that display the previously calculated data. The central view, a novel enhanced sequence diagram that shows the calculated values, is linked to a traditional surface visualisation of the protein. This results in an interactive visualisation that is independent of the size of the underlying data, since the memory footprint of the aggregated data for visualisation is constant and very low, even if the raw input consisted of several terabytes.
Schelling, M.; Hermosilla, Pedro; Vázquez, Pere-Pau; Ropinski, T.
Computer Graphics Forum (Proc. EuroGraphics, 2021), Vol. 40, Num. 2, pp 413--423, 2021.
DOI: http://dx.doi.org/10.1111/cgf.142643
Optimal viewpoint prediction is an essential task in many computer graphics applications. Unfortunately, common viewpoint qualities suffer from two major drawbacks: dependency on clean surface meshes, which are not always available, and the lack of closed-form expressions, which requires a costly search involving rendering. To overcome these limitations we propose to separate viewpoint selection from rendering through an end-to-end learning approach, whereby we reduce the influence of the mesh quality by predicting viewpoints from unstructured point clouds instead of polygonal meshes. While this makes our approach insensitive to the mesh discretization during evaluation, it only becomes possible when resolving label ambiguities that arise in this context. Therefore, we additionally propose to incorporate the label generation into the training procedure, making the label decision adaptive to the current network predictions. We show how our proposed approach allows for learning viewpoint predictions for models from different object categories and for different viewpoint qualities. Additionally, we show that prediction times are reduced from several minutes to a fraction of a second, as compared to state-of-the-art (SOTA) viewpoint quality evaluation. We will further release the code and training data, which will to our knowledge be the biggest viewpoint quality dataset available.
Altarriba-Bartés, A.; Calle, M.; Susin, Antonio; Gonçalves, B.; Vives, M.; Sampaio, J.; Peña, J.
RICYDE, Vol. 16, Num. 59, pp 67--84, 2020.
DOI: http://dx.doi.org/10.5232/ricyde2020.05906
This study aimed to assess the effect of the scoring moment on the conditional probability of winning or losing a professional soccer match, as well as to identify the most influential variables contributing to victory in Major League Soccer (MLS), the men’s professional league in the United States. Data from 680 matches played in the 2015 and 2016 regular seasons were analysed, dividing the matches into fifteen-minute periods. Additionally, the influence of playing home or away on the match outcome and the type of technical-tactical actions that lead to a goal was also analysed. The temporal analysis revealed that scoring first increased the probability of winning the match significantly and showed dependency on the time at which the goal was scored. The two principal components of the principal component analysis (PCA) were counterattacks (PC1) and crosses (PC2). These were the most critical variables during open play to determine how MLS teams scored goals. Nevertheless, scoring first while playing as the home team always gave a better chance of winning the game than scoring first and playing away (0.72 vs. 0.32 probability). As the match approached the end, winning or losing was even more determinant and less reversible (0.85 vs. 0.72 for the home and away team, respectively, when they were ahead on the score in minute 75 or later). These findings can contribute to a better understanding of performance indicators in professional soccer, helping coaches to determine the right strategies and improve the tactical patterns needed to succeed in competition.
Andújar, Carlos; Vijulie, Cristina Raluca; Vinacua, Àlvar
IEEE Computer Graphics and Applications, Vol. 40, Num. 3, pp 105--111, 2020.
DOI: http://dx.doi.org/10.1109/MCG.2020.2981786
Modern computer graphics courses require students to complete assignments involving computer programming. The evaluation of student programs, either by the student (self-assessment) or by the instructors (grading) can take a considerable amount of time and does not scale well with large groups. Interactive judges giving a pass/fail verdict do constitute a scalable solution, but they only provide feedback on output correctness. In this article, we present a tool to provide extensive feedback on student submissions. The feedback is based both on checking the output against test sets, as well as on syntactic and semantic analysis of the code. These analyses are performed through a set of code features and instructor-defined rubrics. The tool is built with Python and supports shader programs written in GLSL. Our experiments demonstrate that the tool provides extensive feedback that can be useful to support self-assessment, facilitate grading, and identify frequent programming mistakes.
Argudo, Oscar; Andújar, Carlos; Chica, Antoni
Computer Graphics Forum, Vol. 39, Num. 1, pp 174--184, 2020.
DOI: http://dx.doi.org/10.1111/cgf.13752
The automatic generation of realistic vegetation closely reproducing the appearance of specific plant species is still a challenging topic in computer graphics. In this paper, we present a new approach to generate new tree models from a small collection of frontal RGBA images of trees. The new models are represented either as single billboards (suitable for still image generation in areas such as architecture rendering) or as billboard clouds (providing parallax effects in interactive applications). Key ingredients of our method include the synthesis of new contours through convex combinations of exemplar contours, the automatic segmentation into crown/trunk classes and the transfer of RGBA colour from the exemplar images to the synthetic target. We also describe a fully automatic approach to convert a single tree image into a billboard cloud by extracting superpixels and distributing them inside a silhouette-defined 3D volume. Our algorithm allows for the automatic generation of an arbitrary number of tree variations from minimal input, and thus provides a fast solution to add vegetation variety in outdoor scenes.
Argudo, Oscar; Galin, Eric; Peytavie, Adrien; Paris, Axel; Guérin, Eric
ACM Transactions on Graphics (SIGGRAPH Asia 2020), Vol. 39, Num. 6, pp 1--14, 2020.
DOI: http://dx.doi.org/10.1145/3414685.3417855
Glaciers are some of the most visually arresting and scenic elements of cold regions and high mountain landscapes. Although snow-covered terrains have previously received attention in computer graphics, simulating the temporal evolution of glaciers as well as modeling their wide range of features has never been addressed. In this paper, we combine a Shallow Ice Approximation simulation with a procedural amplification process to author high-resolution realistic glaciers. Our multiresolution method allows the interactive simulation of the formation and the evolution of glaciers over hundreds of years. The user can easily modify the environment variables, such as the average temperature or precipitation rate, to control the glacier growth, or directly use brushes to sculpt the ice or bedrock with interactive feedback. Mesoscale and small-scale landforms that are not captured by the glacier simulation, such as crevasses, moraines, seracs, ogives, or icefalls, are synthesized using procedural rules inspired by observations in glaciology and according to the physical parameters derived from the simulation. Our method lends itself to seamless integration into production pipelines to decorate reliefs with glaciers and realistic ice features.
Realistic Buoyancy Model for Real-Time Applications
Bajo, Juan; Patow, Gustavo A.; Delrieux, Claudio
Computer Graphics Forum, Vol. 39, Num. 6, pp 217--231, 2020.
DOI: http://dx.doi.org/10.1111/cgf.14013
Following Archimedes' principle, any object immersed in a fluid is subject to an upward buoyancy force equal to the weight of the fluid displaced by the object. This simple description is the origin of a set of effects that are ubiquitous in nature, and are becoming commonplace in games, simulators and interactive animations. Although there are solutions to the fluid-to-solid coupling problem in some particular cases, to the best of our knowledge, comprehensive and accurate computational buoyancy models adequate in general contexts are still lacking. We propose a real-time Graphics Processing Unit (GPU) based algorithm for realistic computation of the fluid-to-solid coupling problem, which is adequate for a wide generality of cases (solid or hollow objects, with permeable or leak-proof surfaces, and with variable masses). The method incorporates the behaviour of the fluid into which the object is immersed, and decouples the computation of the physical parameters involved in the buoyancy force of the empty object from the mass of contained liquid. The dynamics of this mass of liquid are also computed, in a way such that the relation between the centre of mass of the object and the buoyancy force may vary, leading to complex, realistic behaviours such as the ones arising, for instance, with a sinking boat.
VR4Health: Personalized teaching and learning anatomy using VR
Fairén, Marta; Moyés, Jordi; Insa, Esther
Journal of Medical Systems, Vol. 44, Num. 5, pp 1--13, 2020.
DOI: http://dx.doi.org/10.1007/s10916-020-01550-5
Virtual Reality (VR) is being integrated into many different areas of our lives, from industrial engineering to video games, and also including teaching and education. We have several examples where VR has been used to engage students and facilitate their 3D spatial understanding, but can VR also help teachers? What benefits can teachers obtain from using VR applications? In this paper we present an application (VR4Health) designed to allow students to directly inspect 3D models of several human organs by using Virtual Reality systems. The application is designed to be used autonomously in an HMD device as a self-learning tool, and it also reports information to teachers so that they become aware of what the students do and can redirect their work to each student's specific needs. We evaluate both the students' and the teachers' perception by conducting an experiment and asking them to fill in a questionnaire at the end.
Earthquake Simulation on Ancient Masonry Buildings
Fita, Josep Lluis; Besuievsky, Gonzalo; Patow, Gustavo A.
Journal on Computing and Cultural Heritage, Vol. 13, Num. 2, pp 11, 2020.
DOI: http://dx.doi.org/10.1145/3372421
Research on seismic simulations has focused mainly on methodologies specially tailored to civil engineering. However, we have detected a gap in the area of interactive cultural heritage applications, where speed and plausibility are the main requirements to satisfy. We designed a tool that allows setting up and recreating earthquakes in a simple way. We coupled our earthquake simulator with a physics-based structural simulator specifically tailored to masonry buildings, achieving a high degree of accuracy in the simulations. To validate our model, we performed a series of tests over a set of ancient masonry structures such as walls and churches. We show the feasibility of including earthquake simulations and structural vulnerability, a building property that limits the damage a building suffers under seismic movements, into historical studies to help professionals understand those events of the past in which an earthquake took place.
Gonzalez Franco, M.; Ofek, E.; Pan, Y.; Antley, A.; Steed, A.; Spanlang, Bernhard; Maselli, A.; Banakou, D.; Pelechano, Nuria; Orts-Escolano, S.; Orvalho, V.; Trutoiu, L.; Wojcik, M.; Sanchez-Vives, M.V.; Bailenson, J.; Slater, M.; Lanier, J.
Frontiers in Virtual Reality, Vol. 1, pp 20, 2020.
DOI: http://dx.doi.org/10.3389/frvir.2020.561558
As part of the open sourcing of the Microsoft Rocketbox avatar library for research and academic purposes, here we discuss the importance of rigged avatars for the Virtual and Augmented Reality (VR, AR) research community. Avatars, virtual representations of humans, are widely used in VR applications. Furthermore, many research areas ranging from crowd simulation to neuroscience, psychology, or sociology have used avatars to investigate new theories or to demonstrate how they influence human performance and interactions. We divide this paper into two main parts: the first gives an overview of the different methods available to create and animate avatars. We cover the current main alternatives for face and body animation, as well as introduce upcoming capture methods. The second part presents the scientific evidence of the utility of using rigged avatars for embodiment, but also for applications such as crowd simulation and entertainment. All in all, this paper attempts to convey why rigged avatars will be key to the future of VR and its wide adoption.
Hermosilla, Pedro; Schäfer, M.; Lang, M.; Fackelmann, G.; Vázquez, Pere-Pau; Kozlíková, B.; Krone, M.; Ritschel, T.; Ropinski, T.
PrePrints, 2020.
Proteins perform a large variety of functions in living organisms, thus playing a key role in biology. As of now, available learning algorithms to process protein data do not consider several particularities of such data and/or do not scale well for large protein conformations. To fill this gap, we propose two new learning operations enabling deep 3D analysis of large-scale protein data. First, we introduce a novel convolution operator which considers both the intrinsic (invariant under protein folding) and the extrinsic (invariant under bonding) structure, by using n-D convolutions defined on both the Euclidean distance and multiple geodesic distances between atoms in a multi-graph. Second, we enable a multi-scale protein analysis by introducing hierarchical pooling operators, exploiting the fact that proteins are a recombination of a finite set of amino acids, which can be pooled using shared pooling matrices. Lastly, we evaluate the accuracy of our algorithms on several large-scale data sets for common protein analysis tasks, where we outperform state-of-the-art methods.
Bajo, Juan; Delrieux, Claudio; Patow, Gustavo A.
The Visual Computer, Vol. 37, pp 2053--2068, 2020.
DOI: http://dx.doi.org/10.1007/s00371-020-01963-w
The visual appearance of materials depends on their intrinsic light transfer properties, the illumination and camera conditions, and other environmental factors. This is in particular the case of porous, rough, or absorbent materials, where the presence of liquid on the surface alters significantly their BRDF, which in turn results in considerable changes in their visual appearance. For this reason, rendering materials that change their appearance when wet continues to be a relevant topic in computer graphics. This is especially true when real-time photo-realistic rendering is required in scenes involving this kind of materials in interaction with water or other liquids. In this paper, we introduce a physically inspired technique to model and render appearance changes of absorbent materials when their surface is wet. First, we develop a new method to solve the interaction between the liquid and the object surface using its own underlying texture coordinates. Then, we propose an algorithm to model the diffusion phenomenon that occurs in the interface between a solid porous object and a liquid. Finally, we extend a model that explains the change of appearance of materials under wet conditions, and we implement it achieving real-time performance. The complete model is developed using GPU acceleration.
Convolutional Neural Networks and Genetic Algorithm for Visual Imagery Classification
Llorella, Fabio R.; Patow, Gustavo A.; Azorín, José M.
Physical and Engineering Sciences in Medicine, Vol. 43, Num. 3, pp 973--983, 2020.
DOI: http://dx.doi.org/10.1007/s13246-020-00894-z
Brain-Computer Interface (BCI) systems establish a channel for direct communication between the brain and the outside world without having to use the peripheral nervous system. While most BCI systems use evoked potentials and motor imagery, in the present work we present a technique that employs visual imagery. Our technique uses neural networks to classify the signals produced in visual imagery. To this end, we have used densely connected neural and convolutional networks, together with a genetic algorithm to find the best parameters for these networks. The results we obtained are a 60% success rate in the classification of four imagined objects (a tree, a dog, an airplane and a house) plus a state of relaxation, thus outperforming the state of the art in visual imagery classification.
Males, Jan; Monclús, Eva; Díaz, Jose; Navazo, Isabel; Vázquez, Pere-Pau
Computers & Graphics, Vol. 91, pp 39--51, 2020.
DOI: http://dx.doi.org/10.1016/j.cag.2020.06.005
Computerized Tomography (CT) and, more recently, Magnetic Resonance Imaging (MRI) have become the state-of-the-art techniques for morpho-volumetric analysis of abdominal cavities. Due to its constant motility, the colon is a difficult organ to analyze. Unfortunately, CT's radiative nature makes it indicated only for patients with serious disorders. Lately, acquisition techniques that rely on the use of MRI have matured enough to enable the analysis of colon data. This allows gathering data of patients without preparation (i.e. administration of drugs or contrast agents), and incorporating data of patients with non-life-threatening diseases and healthy subjects into databases. In this paper we present an end-to-end framework that comprises all the steps to extract colon content and morphology data, coupled with a web-based visualization tool that facilitates the visual exploration of such data. We also introduce the set of tools for the extraction of morphological data, and a detailed description of a specifically-designed interactive tool that facilitates a visual comparison of numerical variables within a set of patients, as well as a detailed inspection of an individual. Our prototype was evaluated by domain experts, who found that our visual approach may reduce the costly process of colon data analysis. As a result, physicians have been able to get new insights on the effects of diets, and also to obtain a better understanding of the motility of the colon.
Mas, Albert; Martín, Ignacio; Patow, Gustavo A.
Computer Graphics Forum, Vol. 39, Num. 1, pp 650--671, 2020.
DOI: http://dx.doi.org/10.1111/cgf.13897
Ancient cities and castles are ubiquitous cultural heritage structures all over Europe, and countless digital creations (e.g. movies and games) use them for storytelling. However, they have received little or no attention in the computer graphics literature. This paper aims to close the gap between historical and geometrical modelling by presenting a framework that allows the forward and inverse design of ancient city (e.g. castles and walled cities) evolution along history. The main component is an interactive loop that cycles over a number of years simulating the evolution of a city. The user can define events, such as battles, city growth, wall creations or expansions, or any other historical event. Firstly, cities (or castles) and their walls are created, and, later on, expanded to encompass civil or strategic facilities to protect. In our framework, battle simulations are used to detect weaknesses and strengthen them, evolving to accommodate developments in offensive weaponry. We conducted both forward and inverse design tests on three different scenarios: the city of Carcassonne (France), the city of Gerunda (Spain) and the Ciutadella in ancient Barcelona. All the results have been validated by historians who helped fine-tune the different parameters involved in the simulations. Code available at: https://github.com/neich/BattleField
Orellana, Bernat; Monclús, Eva; Brunet, Pere; Navazo, Isabel; Bendezú, Álvaro; Azpiroz, Álvaro
Medical Image Analysis, 2020.
DOI: http://dx.doi.org/10.1016/j.media.2020.101697
The study of the colonic volume is a procedure with strong relevance to gastroenterologists. Depending on the clinical protocols, the volume analysis has to be performed on MRI of the unprepared colon without contrast administration. In such circumstances, existing measurement procedures are cumbersome and time-consuming for the specialists. The algorithm presented in this paper permits a quasi-automatic segmentation of the unprepared colon on T2-weighted MRI scans. The segmentation algorithm is organized as a three-stage pipeline. In the first stage, a custom tubularity filter is run to detect colon candidate areas. The specialists provide a list of points along the colon trajectory, which are combined with tubularity information to calculate an estimation of the colon medial path. In the second stage, we delimit the region of interest by applying custom segmentation algorithms to detect colon neighboring regions and the fat capsule containing abdominal organs. Finally, within the reduced search space, segmentation is performed via 3D graph-cuts in a three-stage multigrid approach. Our algorithm was tested on MRI abdominal scans, including different acquisition resolutions, and its results were compared to the colon ground truth segmentations provided by the specialists. The experiments proved the accuracy, efficiency, and usability of the algorithm, while the variability of the scan resolutions contributed to demonstrate the computational scalability of the multigrid architecture. The system is fully applicable to the colon measurement clinical routine, being a substantial step towards a fully automated segmentation.
Pueyo, Oriol; Sabria, Albert; Pueyo, Xavier; Patow, Gustavo A.; Wimmer, Michael
Computers & Graphics, Vol. 86, pp 15--26, 2020.
DOI: http://dx.doi.org/10.1016/j.cag.2019.11.004
One important use of realistic city environments is in the video game industry. When a company works on a game whose action occurs in a real-world environment, a team of designers usually creates a simplified model of the real city. In particular, the resulting city is desired to be smaller in extent to increase playability and fun, avoiding long walks and boring neighborhoods. This is manual work, usually started from scratch, where the first step is to take the original city map as input, and from it create the street network of the final city, removing insignificant streets and bringing important places closer together in the process. This first draft of the city street network is like a kind of skeleton with the most important places connected, from which the artist can (and should) start working until the desired result is obtained. In this paper, we propose a solution to automatically generate such a first simplified street network draft. This is achieved by using the well-established seam-carving technique applied to a skeleton of the city layout, built with the important landmarks and streets of the city. The output that our process provides is a street network that reduces the city area as much as the designer wants, preserving landmarks and key streets, while keeping the relative positions between them. For this, we run a shrinking process that reduces the area in an irregular way, prioritizing the removal of areas of less importance. This way, we achieve a smaller city but retain the essence of the real-world one. To further help the designer, we also present an automatic filling algorithm that adds unimportant streets to the shrunken skeleton.
Rahmani, Vahid; Pelechano, Nuria
Computers & Graphics, Vol. 86, pp 1--14, 2020.
DOI: http://dx.doi.org/10.1016/j.cag.2019.10.006
One of the main challenges in video games is to compute paths as efficiently as possible for groups of agents. As both the size of the environments and the number of autonomous agents increase, it becomes harder to obtain results in real time under the constraints of memory and computing resources. Hierarchical approaches, such as HNA* (Hierarchical A* for Navigation Meshes) can compute paths more efficiently, although only for certain configurations of the hierarchy. For other configurations, the method suffers from a bottleneck in the step that connects the Start and Goal positions with the hierarchy. This bottleneck can drop performance drastically. In this paper we present two approaches to solve the HNA* bottleneck and thus obtain a performance boost for all hierarchical configurations. The first method relies on further memory storage, and the second one uses parallelism on the GPU. Our comparative evaluation shows that both approaches offer speed-ups as high as 9x faster than A*, and show no limitations based on hierarchical configuration. Finally we show how our CUDA based parallel implementation of HNA* for multi-agent path finding can now compute paths for over 500K agents simultaneously in real-time, with speed-ups above 15x faster than a parallel multi-agent implementation using A*.
Raupp Musse, S.; Cesar, R.; Pelechano, Nuria; Wang, Z.
Computers & Graphics, Vol. 94, pp 5--6, 2020.
DOI: http://dx.doi.org/10.1016/j.cag.2020.11.004
SIBGRAPI-Conference on Graphics, Patterns and Images is an international conference annually promoted by the Brazilian Computer Society (SBC). SIBGRAPI is one of the most traditional and important Brazilian scientific events in Computer Science. It is attended by researchers, artists, designers, and students from Colleges, Universities, Companies and Research Centers, gathering around 200 participants from different regions of Brazil and abroad. SIBGRAPI is the main conference of the Special Committee of Computer Graphics and Image Processing of SBC (Brazilian Computer Society) and held in cooperation with ACM SIGGRAPH. The proceedings of the event have been published by CPS since 1997, and all the editions are available from IEEE Xplore Digital Library. In addition, SIBGRAPI 2020 has Special Sections of the Elsevier Computers and Graphics, IEEE Geoscience and Remote Sensing Letters and Pattern Recognition Letters journals. We are really happy that Alan Bovik (UT Austin, USA), Catherine Pelachaud (CNRS-ISIR, Sorbonne University, France), James Gain (UCT, South Africa), Helio Lopes (PUC-Rio, Brazil) and Olga Bellon (UFPR, Brazil) each gave a keynote at SIBGRAPI 2020. This year we accepted papers previously submitted to the Special Track on SIBGRAPI for the Elsevier CG journal. We received 31 high-standard papers and only the ten best papers have made it to publication in this special issue. They were selected by a committee of well-renowned researchers in the field in two revision phases, where each paper has been reviewed by at least three reviewers at each phase. We thank all reviewers for their amazing and high-quality work. Selected papers were focused on Global illumination and Scientific Visualization, Point-based rendering, Visual Analytics and Explainable AI, Deep learning and Scene understanding, Computer animation and Convolutional adversarial network to generate dance motion, Spherical images and Realistic rendering. 
The authors of the remaining papers were encouraged to submit them to the main track of the SIBGRAPI conference.
Rios, Àlex; Pelechano, Nuria
Virtual Reality, Vol. 24, pp 683--694, 2020.
DOI: http://dx.doi.org/10.1007/s10055-020-00428-8
Understanding human decision making is a key requirement to improve crowd simulation models so that they can better mimic real human behavior. It is often difficult to study human decision making during dangerous situations because of the complexity of the scenarios and situations to be simulated. Immersive virtual reality offers the possibility to carry out such experiments without exposing participants to real danger. In the real world, it has often been observed that people tend to follow others in certain situations (e.g., unfamiliar environments or stressful situations). In this paper, we study human following behavior when it comes to exit choice during an evacuation of a train station. We have carried out immersive VR experiments under different levels of stress (alarm only or alarm plus fire), and we have observed how humans consistently tend to follow the crowd regardless of the levels of stress. Our results show that decision making is strongly influenced by the behavior of the virtual crowd: the more virtual people running, the more likely participants are to simply follow others. The results of this work could improve behavior simulation models during crowd evacuation, and thus build more plausible scenarios for training firefighters.
Serrancoli, G.; Bogatikov, P.; Pales, J.; Forcada, A.; Sanchez Egea, A.; Torner, J.; Izquierdo, K.; Susin, Antonio
IEEE access, Vol. 8, pp 122782--122790, 2020.
DOI: http://dx.doi.org/10.1109/ACCESS.2020.3006423
Marker-less systems are becoming popular to detect a human skeleton in an image automatically. However, these systems have difficulties in tracking points when part of the body is hidden, or there is an artifact that does not belong to the subject (e.g., a bicycle). We present a low-cost tracking system combined with economic force-measurement sensors that allows the calculation of individual joint moments and powers, affordable for anybody. The system integrates OpenPose (a deep-learning-based C++ library to detect human skeletons in an image) in a system of two webcams, to record videos of a cyclist, and seven resistive sensors to measure forces at the pedals and the saddle. OpenPose identifies the skeleton candidate using a convolutional neural network. A corrective algorithm was written to automatically detect the hip, knee, ankle, metatarsal and heel points from webcam-recorded motions, which overcomes the limitations of the marker-less system. Then, with the information of external forces, an inverse dynamics analysis is applied in OpenSim to calculate the joint moments and powers at the hip, knee, and ankle joints. The results show that the obtained moments have similar shapes and trends compared to the literature values. Therefore, this represents a low-cost method that could be used to estimate relevant joint kinematics and dynamics, and consequently follow up or improve cycling training plans.
Susin, Antonio; Wang, Y.; Le Cao, K.; Calle, M.
NAR Genomics and Bioinformatics, Vol. 2, Num. 2, 2020.
DOI: http://dx.doi.org/10.1093/nargab/lqaa029
Though variable selection is one of the most relevant tasks in microbiome analysis, e.g. for the identification of microbial signatures, many studies still rely on methods that ignore the compositional nature of microbiome data. The applicability of compositional data analysis methods has been hampered by the availability of software and the difficulty in interpreting their results. This work is focused on three methods for variable selection that acknowledge the compositional structure of microbiome data: selbal, a forward selection approach for the identification of compositional balances, and clr-lasso and coda-lasso, two penalized regression models for compositional data analysis. This study highlights the link between these methods and brings out some limitations of the centered log-ratio transformation for variable selection. In particular, the fact that it is not subcompositionally consistent makes the microbial signatures obtained from clr-lasso not readily transferable. Coda-lasso is computationally efficient and suitable when the focus is the identification of the most associated microbial taxa. Selbal stands out when the goal is to obtain a parsimonious model with optimal prediction performance, but it is computationally greedy. We provide a reproducible vignette for the application of these methods that will enable researchers to fully leverage their potential in microbiome studies.
Van Toll, Wouter; Triesscheijn, Roy; Kallmann, Marcelo; Oliva, Ramon; Pelechano, Nuria; Pettre, Julien; Geraerts, Roland
Computers & Graphics, Vol. 91, pp 52--82, 2020.
DOI: http://dx.doi.org/10.1016/j.cag.2020.06.006
A navigation mesh is a representation of a 2D or 3D virtual environment that enables path planning and crowd simulation for walking characters. Various state-of-the-art navigation meshes exist, but there is no standardized way of evaluating or comparing them. Each implementation is in a different state of maturity, has been tested on different hardware, uses different example environments, and may have been designed with a different application in mind. In this paper, we develop and use a framework for comparing navigation meshes. First, we give general definitions of 2D and 3D environments and navigation meshes. Second, we propose theoretical properties by which navigation meshes can be classified. Third, we introduce metrics by which the quality of a navigation mesh implementation can be measured objectively. Fourth, we use these properties and metrics to compare various state-of-the-art navigation meshes in a range of 2D and 3D environments. Finally, we analyze our results to identify important topics for future research on navigation meshes. We expect that this work will set a new standard for the evaluation of navigation meshes, that it will help developers choose an appropriate navigation mesh for their application, and that it will steer future research in interesting directions.
Argudo, Oscar; Galin, Eric; Peytavie, Adrien; Paris, Axel; Gain, James; Guérin, Eric
ACM Transactions on Graphics (SIGGRAPH Asia 2019), Vol. 38, Num. 6, pp 1--12, 2019.
DOI: http://dx.doi.org/10.1145/3355089.3356535
Mountainous digital terrains are an important element of many virtual environments and find application in games, film, simulation and training. Unfortunately, while existing synthesis methods produce locally plausible results they often fail to respect global structure. This is exacerbated by a dearth of automated metrics for assessing terrain properties at a macro level. We address these issues by building on techniques from orometry, a field that involves the measurement of mountains and other relief features. First, we construct a sparse metric computed on the peaks and saddles of a mountain range and show that, when used for classification, this is capable of robustly distinguishing between different mountain ranges. Second, we present a synthesis method that takes a coarse elevation map as input and builds a graph of peaks and saddles respecting a given orometric distribution. This is then expanded into a fully continuous elevation function by deriving a consistent river network and shaping the valley slopes. In terms of authoring, users provide various control maps and are also able to edit, reposition, insert and remove terrain features all while retaining the characteristics of a selected mountain range. The result is a terrain analysis and synthesis method that considers and incorporates orometric properties, and is, on the basis of our perceptual study, more visually plausible than existing terrain generation methods.
Barba, Elizabeth; Sánchez, Borja; Burri, Emanuel; Accarino, Anna; Monclús, Eva; Navazo, Isabel; Guarner, Francisco; Margolles, Abelardo; Azpiroz, Fernando
Neurogastroenterology & Motility, Vol. 31, Num. 12, pp 1--7, 2019.
DOI: http://dx.doi.org/10.1111/nmo.13703
Some patients complain that eating lettuce gives them gas and abdominal distention. Our aim was to determine to what extent the patients' assertion is sustained by evidence. An in vitro study measured the amount of gas produced during the process of fermentation by a preparation of human colonic microbiota (n = 3) of predigested lettuce, as compared to beans, a high gas-releasing substrate, to meat, a low gas-releasing substrate, and to a nutrient-free negative control. A clinical study in patients complaining of abdominal distention after eating lettuce (n = 12) measured the amount of intestinal gas and the morphometric configuration of the abdominal cavity in abdominal CT scans during an episode of lettuce-induced distension as compared to basal conditions. Gas production by microbiota fermentation of lettuce in vitro was similar to that of meat (P = .44), lower than that of beans (by 78 ± 15%; P < .001) and higher than with the nutrient-free control (by 25 ± 19%; P = .05). Patients complaining of abdominal distension after eating lettuce exhibited an increase in girth (35 ± 3 mm larger than basal; P < .001) without significant increase in colonic gas content (39 ± 4 mL increase; P = .071); abdominal distension was related to a descent of the diaphragm (by 7 ± 3 mm; P = .027) with redistribution of normal abdominal contents. Lettuce is a low gas-releasing substrate for microbiota fermentation, and lettuce-induced abdominal distension is produced by an uncoordinated activity of the abdominal walls. Correction of the somatic response might be more effective than the current dietary restriction strategy.
Bosch, Carles; Patow, Gustavo A.
Computer Graphics Forum, Vol. 38, Num. 1, pp 274-285, 2019.
DOI: http://dx.doi.org/10.1111/cgf.13530
Modelling flow phenomena and their related weathering effects is often cumbersome due to their dependence on the environment, materials and geometric properties of objects in the scene. Example-based modelling provides many advantages for reproducing real textures, but little effort has been devoted to reproducing and transferring complex phenomena. In order to produce realistic flow effects, it is possible to take advantage of the widespread availability of flow images on the Internet, which can be used to gather key information about the flow. In this paper, we present a technique that allows the transfer of flow phenomena between photographs, adapting the flow to the target image and giving the user flexibility and control through specifically tailored parameters. This is done through two types of control curves: a fitted theoretical curve to control the mass of deposited material, and an extended colour map for properly adapting to the target appearance. In addition, our method filters and warps the input flow in order to account for the geometric details of the target surface. This leads to a fast and intuitive approach for easily transferring phenomena between images, providing a set of simple parameters to control the process.
Visualization of Large Molecular Trajectories
Duran, David; Hermosilla, Pedro; Ropinski, Timo; Kozlíková, Barbora; Vinacua, Àlvar; Vázquez, Pere-Pau
IEEE Transactions on Visualization and Computer Graphics, Vol. 25, Num. 1, pp 987--996, 2019.
DOI: http://dx.doi.org/10.1109/TVCG.2018.2864851
The analysis of protein-ligand interactions is a time-intensive task. Researchers have to analyze multiple physico-chemical properties of the protein at once and combine them to derive conclusions about the protein-ligand interplay. Typically, several charts are inspected, and 3D animations can be played side-by-side to obtain a deeper understanding of the data. With the advances in simulation techniques, larger and larger datasets are available, with up to hundreds of thousands of steps. Unfortunately, such large trajectories are very difficult to investigate with traditional approaches, making special tools that facilitate their inspection essential. In this paper, we present a novel system for visual exploration of very large trajectories in an interactive and user-friendly way. Several visualization motifs are automatically derived from the data to inform the user about interactions between protein and ligand. Our system offers specialized widgets to ease and accelerate data inspection and navigation to interesting parts of the simulation. The system is also suitable for simulations where multiple ligands are involved. We have tested the usefulness of our tool on a set of datasets obtained from protein engineers, and we describe the expert feedback.
Ruleset-rewriting for procedural modeling of buildings
Martín, Ignacio; Patow, Gustavo A.
Computers & Graphics, Vol. 84, Num. 11, pp 93--102, 2019.
DOI: http://dx.doi.org/10.1016/j.cag.2019.08.003
Procedural modeling techniques have emerged as a fundamental tool for the automatic design and reconstruction of buildings and urban landscapes. In recent years, we have witnessed an impressive increase in the expressive capabilities of such techniques, their main strength being the possibility of generating large urban scenes with a small ruleset. In this paper, we propose what we consider the next stage in this process, where generic graph-rewriting techniques are used to transform input rulesets into new ones, thus allowing the automatic reuse, transformation and generation of rulesets. We showcase our system with an application to a high-level procedural language for facades (based on the well-known CGA grammars). We demonstrate the practicality of this new approach by transforming previously created input facades into different styles. User studies confirm this result.
A procedural technique for thermal simulation and visualization in urban environments
Muñoz, David; Besuievsky, Gonzalo; Patow, Gustavo A.
Building Simulation, 2019.
DOI: http://dx.doi.org/10.1007/s12273-019-0549-x
Analysing the thermal behaviour of buildings is an important goal for all tasks involving energy-flow simulation in urban environments. However, the number of variables to be considered, along with the difficulty of implementing some of them, makes the problem hard to address on an urban scale. In this paper we propose a procedural approach that, from a 3D urban model and a set of parameters, simulates the thermal exchanges that take place inside and outside buildings in an urban environment. We also provide a technique to efficiently visualise thermal variations over time on both the interior and exterior of buildings. We believe this technique will be helpful for performing a rapid analysis when building parameters, such as materials, dimensions, shape or number of floors, are being changed.
Paris, Axel; Peytavie, Adrien; Guérin, Eric; Argudo, Oscar; Galin, Eric
Computer Graphics Forum (Pacific Graphics 2019), Vol. 38, Num. 7, pp 47--55, 2019.
DOI: http://dx.doi.org/10.1111/cgf.13815
We present an interactive aeolian simulation to author hot desert scenery. Wind is an important erosion agent in deserts, yet it has been largely neglected in computer graphics. Our framework overcomes this and allows generating a variety of sand dunes, including barchans, longitudinal and anchored dunes, and simulates the abrasion that erodes bedrock and sculpts complex landforms. Given an input time-varying high-altitude wind field, we compute the wind field at the surface of the terrain according to the relief, and simulate the transport of sand blown by the wind. The user can interactively model complex desert landscapes and control their evolution through time, either by using a variety of interactive brushes or by prescribing events along a user-defined timeline.
Pueyo, Oriol; Pueyo, Xavier; Patow, Gustavo A.
Graphical Models, Vol. 106, pp 101049, 2019.
DOI: http://dx.doi.org/10.1016/j.gmod.2019.101049
Generalization of 2D city layouts is a relevant operation common to Computer Graphics and GIS, whose goal is to generate simplified representations of street networks. However, most of the contributions in this area belong to the GIS literature, which we intend to bring closer to the CG community. In this paper we propose a threefold characterization of the algorithms dedicated to generic generalization, and we also analyze the techniques proposed for the generation of personalized route maps in CG. We examine their data structures, simplification criteria and theoretical basis. To enable a comparative comprehension, we propose a unified terminology and relate the graphs used in the GIS literature to their names in graph theory. From our analysis of the generalization techniques, we propose four research lines for further investigation to design new generalization algorithms, either from original ideas or by combining or extending some of the reviewed techniques.
Vázquez, Pere-Pau
Entropy, Vol. 21, Num. 6, pp 612, 2019.
DOI: http://dx.doi.org/10.3390/e21060612
The analysis of research-paper collections is an interesting topic that can give insights into whether a research area is stalled on the same problems or shows a great amount of novelty every year. Previous research has addressed similar tasks through the analysis of keywords or reference lists, with different degrees of human intervention. In this paper, we demonstrate how, with the use of Normalized Relative Compression together with a set of automated data-processing tasks, we can successfully visually compare research articles and document collections. We achieve very similar results with Normalized Conditional Compression, which can be applied with a regular compressor. With our approach, we can group papers of different disciplines, analyze how a conference evolves throughout its editions, or how the profile of a researcher changes over time. We provide a set of tests that validate our technique and show that it behaves better for these tasks than previously proposed techniques.
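The compression-based comparison described in this abstract can be illustrated with a small sketch. To be clear, this is not the paper's Normalized Relative Compression (which relies on specialized compressor models); it is the closely related Normalized Compression Distance computed with a general-purpose zlib compressor, and the toy corpora below are invented for the example.

```python
import zlib

def c(s: str) -> int:
    """Compressed size of s, used as a proxy for its information content."""
    return len(zlib.compress(s.encode("utf-8"), 9))

def ncd(x: str, y: str) -> float:
    """Normalized Compression Distance: near 0 for similar texts,
    near 1 for unrelated ones."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy corpora standing in for abstracts from two different disciplines.
graphics = "We synthesize terrain elevation maps with procedural noise and erosion. " * 20
biology = "The colonic microbiota ferments substrates and produces intestinal gas. " * 20

# A document is far closer to itself than to one from another discipline.
print(ncd(graphics, graphics) < ncd(graphics, biology))
```

The same pairwise distances, computed over a whole collection, yield the kind of similarity matrix that can drive the grouping and evolution analyses the abstract describes.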
Argudo, Oscar; Chica, Antoni; Andújar, Carlos
Computer Graphics Forum, Vol. 37, Num. 2, pp 101--110, 2018.
DOI: http://dx.doi.org/10.1111/cgf.13345
Despite recent advances in surveying techniques, publicly available Digital Elevation Models (DEMs) of terrains are low-resolution except for selected places on Earth. In this paper we present a new method to turn low-resolution DEMs into plausible and faithful high-resolution terrains. Unlike other approaches for terrain synthesis/amplification (fractal noise, hydraulic and thermal erosion, multi-resolution dictionaries), we benefit from high-resolution aerial images to produce highly detailed DEMs mimicking the features of the real terrain. We explore different architectures of Fully Convolutional Neural Networks to learn upsampling patterns for DEMs from detailed training sets (high-resolution DEMs and orthophotos), yielding up to one order of magnitude more resolution. Our comparative results show that our method outperforms competing data amplification approaches in terms of elevation accuracy and terrain plausibility.
Argudo, Oscar; Comino, Marc; Chica, Antoni; Andújar, Carlos; Lumbreras, Felipe
Computers & Graphics, Vol. 71, pp 23 - 34, 2018.
DOI: http://dx.doi.org/10.1016/j.cag.2017.11.004
The visual enrichment of digital terrain models with plausible synthetic detail requires the segmentation of aerial images into a suitable collection of categories. In this paper we present a complete pipeline for segmenting high-resolution aerial images into a user-defined set of categories distinguishing e.g. terrain, sand, snow, water, and different types of vegetation. This segmentation-for-synthesis problem implies that per-pixel categories must be established according to the algorithms chosen for rendering the synthetic detail. This precludes the definition of a universal set of labels and hinders the construction of large training sets. Since artists might choose to add new categories on the fly, the whole pipeline must be robust against unbalanced datasets, and fast on both training and inference. Under these constraints, we analyze the contribution of common per-pixel descriptors, and compare the performance of state-of-the-art supervised learning algorithms. We report the findings of two user studies. The first one was conducted to analyze human accuracy when manually labeling aerial images. The second user study compares detailed terrains built using different segmentation strategies, including official land cover maps. These studies demonstrate that our approach can be used to turn digital elevation models into fully-featured, detailed terrains with minimal authoring efforts.
Gemelli-Obturator Complex in the deep gluteal space. An anatomic and dynamic study.
Balius, R.; Susin, Antonio; Morros, C.; Pujol, M.; Perez, M.; Sala, X.
Skeletal Radiology, Vol. 47, Num. 6, pp 763-770, 2018.
DOI: http://dx.doi.org/10.1007/s00256-017-2831-2
OBJECTIVE: To investigate the behavior of the sciatic nerve during hip rotation in the subgluteal space. MATERIALS AND METHODS: Sonographic examination (high-resolution ultrasound machine at 5.0-14 MHz) of the gemelli-obturator internus complex following two approaches: (1) a study on cadavers and (2) a study on healthy volunteers. The cadavers were examined in a prone, pelvis-fixed position by forcing internal and external rotations of the hip with the knee in 90° flexion. Healthy volunteers were examined during passive internal and external hip rotation (prone position; lumbar and pelvic regions fixed). Subjects with a history of major trauma, surgery or pathologies affecting the examined regions were excluded. RESULTS: The analysis included eight hemipelves from six fresh cadavers and 31 healthy volunteers. The anatomical study revealed the presence of connective tissue attaching the sciatic nerve to the structures of the gemellus-obturator system in the deep subgluteal space. The amplitude of the nerve curvature in the rotated position was significantly greater than in the resting position. During passive internal rotation, the sciatic nerve of both cadavers and healthy volunteers transformed from a straight structure to a curved structure tethered at two points as the tendon of the obturator internus contracted downwards. Conversely, external hip rotation caused the nerve to relax. CONCLUSION: Anatomically, the sciatic nerve is closely related to the gemelli-obturator internus complex. This relationship results in a reproducible dynamic behavior of the sciatic nerve during passive hip rotation, which may help explain the pathological mechanisms of the obturator internus-gemellus syndrome.
Besuievsky, Gonzalo; Beckers, Benoit; Patow, Gustavo A.
Graphical Models, Vol. 95, pp 42-50, 2018.
DOI: http://dx.doi.org/10.1016/j.gmod.2017.06.002
Solar simulation for 3D city models may be a complex task if detailed geometry is taken into account. For this reason, the models are often approximated by simpler geometry to reduce their size and complexity. However, geometric details, such as those found on a roof, can significantly change the simulation results if not properly taken into account. The classic solution to deal with an overly detailed city model is to use a Level-of-Detail (LoD) approach for geometry reduction. In this paper we present a new LoD strategy for 3D city models aimed at accurate solar simulations, able to cope with models with highly detailed geometry. Given a Point of Interest (POI) or a Region of Interest (ROI) to analyze, the method works by automatically detecting and preserving all the geometry (e.g., roofs) that has a significant impact on the simulation and simplifying the rest.
Casafont, Miquel; Bonada, Jordi; Roure, Francesc; Pastor, Magdalena; Susin, Antonio
International Journal of Structural Stability and Dynamics, Vol. 18, Num. 1, pp 1--32, 2018.
DOI: http://dx.doi.org/10.1142/S0219455418500049
The investigation attempts to adapt a beam finite element procedure based on the Generalized Beam Theory (GBT) to the analysis of perforated columns. The presence of perforations is taken into account through the use of two beam elements with different properties for the non-perforated and perforated parts of the member. Each part is meshed with its corresponding finite element and, afterwards, they are linked by means of constraint equations. Linear buckling analyses on steel storage rack columns are carried out to demonstrate how the proposed procedure should be applied. Some practical issues are discussed, such as the GBT deformation modes to be included in the analyses, or the optimum finite element discretization. The resulting buckling loads are validated by comparison with the values obtained in analyses performed using shell finite element models. Finally, it is verified that the buckling loads produced with the proposed method are rather accurate.
Comino, Marc; Andújar, Carlos; Chica, Antoni; Brunet, Pere
Computer Graphics Forum, Vol. 37, Num. 5, pp 233--243, 2018.
DOI: http://dx.doi.org/10.1111/cgf.13505
Normal vectors are essential for many point cloud operations, including segmentation, reconstruction and rendering. The robust estimation of normal vectors from 3D range scans is a challenging task due to undersampling and noise, especially when combining points sampled from multiple sensor locations. Our error model assumes a Gaussian distribution of the range error with spatially varying variances that depend on sensor distance and reflected intensity, mimicking the features of Lidar equipment. In this paper we study the impact of measurement errors on the covariance matrices of point neighborhoods. We show that covariance matrices of the true surface points can be estimated from those of the acquired points plus sensor-dependent directional terms. We derive a lower bound on the neighborhood size to guarantee that estimated matrix coefficients will be within a predefined error with a prescribed probability. This bound is key for achieving an optimal trade-off between smoothness and fine-detail preservation. We also propose and compare different strategies for handling neighborhoods with samples coming from multiple materials and sensors. We show analytically that our method provides better normal estimates than competing approaches in noise conditions similar to those found in Lidar equipment.
Jesus, Diego; Patow, Gustavo A.; Coelho, António; Sousa, António Augusto
Computers & Graphics, Vol. 72, pp 106-121, 2018.
DOI: http://dx.doi.org/10.1016/j.cag.2018.02.003
Procedural modeling techniques reduce the effort of creating large virtual cities. However, current methodologies do not allow direct user control over the generated models. Associated with this, we face the additional problem of the intrinsic ambiguity of user selections. In this paper, we propose to address this problem by using a genetic algorithm to generalize user-provided point-and-click selections of building elements. From a few user-selected elements, the system infers new sets of elements that potentially correspond to the user's intention, including the ones manually selected. These sets are obtained by queries over the shape trees generated by the procedural rules, thus exploiting shape semantics, hierarchy and geometric properties. Our system also provides a complete selection-action paradigm that allows users to edit procedurally generated buildings without explicitly writing queries. The pairs of user selections and procedural operations (the actions) are stored in a tree-like structure, which is easily evaluated. Results show that the selection inference is capable of generating sets of shapes that closely match the user's intention, and queries are able to perform complex selections that would be difficult to achieve in other systems. User studies confirm this result.
Díaz-García, Jesús; Brunet, Pere; Navazo, Isabel; Vázquez, Pere-Pau
Computers & Graphics, Vol. 73, pp 1--16, 2018.
DOI: http://dx.doi.org/10.1016/j.cag.2018.02.007
Mobile devices have experienced an incredible market penetration in the last decade. Currently, medium to premium smartphones are relatively affordable devices. With the increase in screen size and resolution, together with the improvements in performance of mobile CPUs and GPUs, more tasks have become possible. In this paper we explore the rendering of medium to large volumetric models on mobile and low-performance devices in general. To do so, we present a progressive ray-casting method that is able to obtain interactive frame rates and high-quality results for models that not long ago were only supported by desktop computers.
Hermosilla, Pedro; Ritschel, Tobias; Vázquez, Pere-Pau; Vinacua, Àlvar; Ropinski, Timo
ACM Transactions on Graphics (Proc. SIGGRAPH Asia), Vol. 37, Num. 6, pp 235:1--235:12, 2018.
DOI: http://dx.doi.org/10.1145/3272127.3275110
Deep learning systems extensively use convolution operations to process input data. Though convolution is clearly defined for structured data such as 2D images or 3D volumes, this is not true for other data types such as sparse point clouds. Previous techniques have developed approximations to convolutions for restricted conditions, but their applicability is limited and they cannot be used for general point clouds. We propose an efficient and effective method to learn convolutions for non-uniformly sampled point clouds, as they are obtained with modern acquisition techniques. Learning is enabled by four key novelties: first, representing the convolution kernel itself as a multilayer perceptron; second, phrasing convolution as a Monte Carlo integration problem; third, using this notion to combine information from multiple samplings at different levels; and fourth, using Poisson disk sampling as a scalable means of hierarchical point cloud learning. The key idea across all these contributions is to guarantee adequate consideration of the underlying non-uniform sample distribution function from a Monte Carlo perspective. To make the proposed concepts applicable to real-world tasks, we furthermore propose an efficient implementation which significantly reduces the GPU memory required during the training process. By employing our method in hierarchical network architectures we can outperform most of the state-of-the-art networks on established point cloud segmentation, classification and normal estimation benchmarks. Furthermore, in contrast to most existing approaches, we also demonstrate the robustness of our method with respect to sampling variations, even when training with uniformly sampled data only. To support the direct application of these concepts, we provide a ready-to-use TensorFlow implementation of these layers at https://github.com/viscom-ulm/MCCNN.
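The Monte Carlo phrasing of convolution mentioned in the abstract can be illustrated on a 1D toy problem, independent of the paper's MLP kernels and network architecture: with samples x_j drawn from a density p, (f * k)(x) is estimated as the average of f(x_j) k(x_j - x) / p(x_j). All function names and values below are invented for the illustration.

```python
import random

def mc_convolution(samples, f, kernel, density, x):
    """Monte Carlo estimate of (f * k)(x) from non-uniformly drawn sample
    positions: average f(s) * kernel(s - x), weighted by 1/density(s)."""
    total = 0.0
    for s in samples:
        total += f(s) * kernel(s - x) / density(s)
    return total / len(samples)

random.seed(0)
# Non-uniform sampling on [0, 1]: s = u^2 with u uniform, so p(s) = 1/(2*sqrt(s)).
samples = [random.random() ** 2 for _ in range(200_000)]

f = lambda s: 1.0                                    # constant signal
box = lambda d: 1.0 if abs(d) < 0.1 else 0.0         # box kernel, integral 0.2
p = lambda s: 1.0 / (2.0 * max(s, 1e-12) ** 0.5)     # sample density (guarded at 0)

# For constant f, the estimate should approach the kernel's integral (0.2)
# despite the non-uniform sampling, thanks to the 1/p(s) weighting.
print(round(mc_convolution(samples, f, box, p, 0.5), 2))
```

Dropping the 1/p weighting would bias the estimate toward densely sampled regions, which is exactly the failure mode the paper's Monte Carlo view is designed to avoid.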
A General Illumination Model for Molecular Visualization
Hermosilla, Pedro; Vázquez, Pere-Pau; Vinacua, Àlvar; Ropinski, Timo
Computer Graphics Forum, Vol. 37, Num. 3, pp 367--378, 2018.
DOI: http://dx.doi.org/10.1111/cgf.13426
Several visual representations have been developed over the years to visualize molecular structures and to enable a better understanding of their underlying chemical processes. Today, the most frequently used atom-based representations are the Space-filling, the Solvent Excluded Surface, the Balls-and-Sticks, and the Licorice models. While each of these representations has its individual benefits, spatial arrangements can be difficult to interpret when current visualization techniques are applied to large-scale models. It has been shown in the past that global illumination techniques improve the perception of molecular visualizations; unfortunately, existing approaches are tailored towards a single visual representation. We propose a general illumination model for molecular visualization that is valid for different representations. With our illumination model, it becomes possible, for the first time, to achieve consistent illumination among all atom-based molecular representations. The proposed model can further be evaluated in real time, as it employs an analytical solution to simulate diffuse light interactions between objects. To be able to derive such a solution for the rather complicated and diverse visual representations, we propose the use of regression analysis together with adapted parameter sampling strategies, as well as shape-parametrization-guided sampling, which are applied to the geometric building blocks of the targeted visual representations. We discuss the proposed sampling strategies and the derived illumination model, and demonstrate its capabilities when visualizing several dynamic molecules.
Top-down model fitting for hand pose recovery in sequences of depth images
Madadi, Meysam; Escalera, Sergio; Carruesco, Alex; Andújar, Carlos; Baró, Xavier; González, Jordi
Image and Vision Computing, Vol. 79, pp 63--75, 2018.
DOI: http://dx.doi.org/10.1016/j.imavis.2018.09.006
State-of-the-art approaches to hand pose estimation from depth images have reported promising results under quite controlled conditions. In this paper we propose a two-step pipeline for recovering the hand pose from a sequence of depth images. The pipeline has been designed to deal with images taken from any viewpoint and exhibiting a high degree of finger occlusion. In a first step we initialize the hand pose using a part-based model, fitting a set of hand components in the depth images. In a second step we consider temporal data and estimate the parameters of a trained bilinear model consisting of shape and trajectory bases. We evaluate our approach on a newly created synthetic hand dataset along with the NYU and MSRA real datasets. Results demonstrate that the proposed method outperforms the most recent pose recovery approaches, including those based on CNNs.
Mas, Albert; Martín, Ignacio; Patow, Gustavo A.
Computers & Graphics, Vol. 77, pp 1 - 15, 2018.
DOI: http://dx.doi.org/10.1016/j.cag.2018.09.010
This paper presents a global optimization algorithm specifically tailored for inverse reflector design problems. In such problems, the goal is to obtain a reflector shape that produces a light distribution as close as possible to a user-provided one. The optimization is an iterative process where each step evaluates the difference between the current reflector illumination and the desired one. We propose a tree-based stochastic method that drives the optimization process, using heuristic rules, to reach a minimum below a user-provided threshold that satisfies the requirements. When we are close to the solution, we resort to the Hooke and Jeeves method to reach the minimum faster. Extending our previous work (Mas et al., 2010), we show that our method reaches a solution in fewer steps than most other classic optimization methods, and also avoids many local minima. The method has been tested on a real case study based on European road lighting safety regulations.
Munoz-Pandiella, Imanol; Bosch, Carles; Mérillou, Stephane; Mérillou, Nicolas; Patow, Gustavo A.; Pueyo, Xavier
IEEE Transactions on Visualization and Computer Graphics, Vol. 24, Num. 12, pp 3239--3252, 2018.
DOI: http://dx.doi.org/10.1109/TVCG.2018.2794526
Weathering effects are ubiquitous phenomena in cities. Buildings age and deteriorate over time as they interact with the environment. Pollution accumulating on facades is a particularly visible consequence of this. Even though relevant work has been done to produce impressive images of virtual urban environments including weathering effects, so far no technique has addressed them with a global approach. Here, we propose a fast, physically inspired technique that focuses on modeling the changes in appearance due to pollution soiling on an urban scale. We consider pollution effects to depend on three main factors: wind, rain and sun exposure, and we take into account three intervening steps: deposition, reaction and washing. Using a low-cost pre-computation, we evaluate the pollution distribution throughout the city. Based on this and the use of screen-space operators, our method results in an efficient approach able to generate realistic images of urban scenes by combining the intervening factors at interactive rates. In addition, the pre-computation demands a reduced amount of memory to store the resulting pollution map and, as it is independent of scene complexity, it can suit large and complex models by adapting the map resolution.
A technique for massive sky view factor calculations in large cities
Muñoz, David; Beckers, Benoit; Besuievsky, Gonzalo; Patow, Gustavo A.
International Journal of Remote Sensing, Vol. 39, Num. 12, pp 4040--4058, 2018.
DOI: http://dx.doi.org/10.1080/01431161.2018.1452071
In many applications, such as urban physics simulations or the study of the solar impact effects at different scales, complex 3D city models are required to evaluate physical values. In this article, we propose an efficient system for quickly computing the Sky View Factor (SVF) for a massive number of points inside a large city. To do that, we embed the city into a regular grid, and for each cell we select a subset of the geometry consisting of a square area centred in the cell and including it. Then, we remove the selected geometry from the city model and we project the rest onto a panoramic image, called environment map. Later, when several SVF evaluations are required, we only need to determine the cell that each evaluation point belongs to, and compute the SVF with the cell’s geometry plus its corresponding environment map. To test our system, we perform several evaluations inside a cell’s area, and compare the results with an accurate ray-tracing-based SVF evaluation. Our results show the feasibility of the method and its advantages when used for a large set of computations. We show that our tool provides a way to handle the complexity of urban scale models, and specifically allows working with geometry details if they are required.
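For context on the quantity computed in the abstract above: the sky view factor of a point is the cosine-weighted fraction of the hemisphere from which the sky is visible. The sketch below estimates it by cosine-weighted Monte Carlo sampling against a made-up skyline function; it is not the paper's grid-plus-environment-map pipeline, and every name in it is illustrative.

```python
import math
import random

def sky_view_factor(sky_visible, n=100_000, rng=random.Random(42)):
    """Estimate SVF by cosine-weighted hemisphere sampling: draw directions
    with probability proportional to cos(zenith) and count sky hits."""
    hits = 0
    for _ in range(n):
        u, v = rng.random(), rng.random()
        zenith = math.asin(math.sqrt(u))   # cosine-weighted zenith angle
        azimuth = 2.0 * math.pi * v
        if sky_visible(azimuth, zenith):
            hits += 1
    return hits / n

# Hypothetical skyline: buildings of uniform angular height block every
# direction whose zenith angle exceeds 60 degrees.
skyline = lambda azimuth, zenith: zenith < math.radians(60.0)

# Analytically, SVF = sin^2(60 deg) = 0.75 for this skyline.
print(round(sky_view_factor(skyline), 2))
```

In the paper's setting, the `sky_visible` test is what the per-cell geometry plus the panoramic environment map answer cheaply for a massive number of evaluation points.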
Vázquez, Pere-Pau; Hermosilla, Pedro; Guallar, Víctot; Estrada, Jorge; Vinacua, Àlvar
Computer Graphics Forum, Vol. 37, Num. 3, pp 391--402, 2018.
DOI: http://dx.doi.org/10.1111/cgf.13428
The analysis of protein-ligand interactions is complex because of the many factors at play. Most current methods for visual analysis provide this information in the form of simple 2D plots, which, besides being quite space-hungry, often encode only a small number of different properties. In this paper we present a system for compact 2D visualization of molecular simulations. It purposely omits most spatial information and presents physical information associated with single molecular components and their pairwise interactions through a set of 2D InfoVis tools with coordinated views, suitable interaction, and focus+context techniques to analyze large amounts of data. The system provides a wide range of motifs for elements such as protein secondary structures or hydrogen bond networks, and a set of tools for their interactive inspection, both for a single simulation and for comparing two different simulations. As a result, the analysis of protein-ligand interactions in Molecular Simulation trajectories is greatly facilitated.
Aguerre, Jose Pedro; Fernandez, Eduardo; Besuievsky, Gonzalo; Beckers, Benoit
Graphical Models, Vol. 91, pp 1--11, 2017.
DOI: http://dx.doi.org/10.1016/j.gmod.2017.05.002
Numerical simulation of cities, including physical phenomena, generates highly complex computational challenges. In this paper, we focus on the simulation of radiation exchange on an urban scale, considering different types of cities. Observing that the matrix representing the view factors between buildings is sparse, we propose a new numerical model for radiation computation. This solution is based on the radiosity method. We show that the radiosity matrix associated with models composed of up to 140k patches can be stored in main memory, providing a promising avenue for further research. Moreover, a new technique is proposed for estimating the inverse of the radiosity matrix, accelerating the computation of radiation exchange. These techniques could help to consider the characteristics of the environment in building design, as well as to assist in the definition of city regulations related to urban construction.
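The radiosity system the abstract refers to, B = E + ρFB, can be solved by iterating the bounce operator, which is also the intuition behind approximating the inverse (I − ρF)⁻¹ with a truncated Neumann series Σ (ρF)^k. The two-patch toy below is invented for illustration and is unrelated to the paper's 140k-patch models.

```python
def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    """Jacobi iteration on B = E + rho * F * B; each pass adds one more
    bounce of light, i.e. one more term of the Neumann series."""
    n = len(emission)
    b = list(emission)
    for _ in range(iterations):
        b = [emission[i]
             + reflectance[i] * sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b

# Two parallel patches facing each other: each sees the other with form factor 0.5.
E = [1.0, 0.0]            # only patch 0 emits
rho = [0.5, 0.5]          # reflectances
F = [[0.0, 0.5],
     [0.5, 0.0]]

b = solve_radiosity(E, rho, F)
# Closed form: B0 = 1/(1 - 0.0625), B1 = 0.25 * B0.
print(round(b[0], 4), round(b[1], 4))
```

Sparsity of F is what makes this iteration (and any truncated-inverse approximation) affordable at city scale: each pass only touches the nonzero view-factor pairs.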
Coherent multi-layer landscape synthesis
Argudo, Oscar; Andújar, Carlos; Chica, Antoni; Guérin, Eric; Digne, Julie; Peytavie, Adrien; Galin, Eric
The Visual Computer, Vol. 33, Num. 6, pp 1005--1015, 2017.
DOI: http://dx.doi.org/10.1007/s00371-017-1393-6
We present an efficient method for generating coherent multi-layer landscapes. We use a dictionary built from exemplars to synthesize high-resolution, fully featured terrains from input low-resolution elevation data. Our example-based method analyzes real-world terrain examples and learns the procedural rules directly from these inputs. We take into account not only the elevation of the terrain, but also additional layers such as the slope, orientation, drainage area, the density and distribution of vegetation, and the soil type. By increasing the variety of terrain exemplars, our method allows the user to synthesize and control different types of landscapes and biomes, such as temperate or rain forests, arid deserts and mountains.
Bendezú, Alvaro; Mego, Marianela; Monclús, Eva; Merino, Xavier; Accarino, Ana; Malagelada, Juan Ramón; Navazo, Isabel; Azpiroz, Fernando
Neurogastroenterology and Motility, Vol. 29, Num. 2, pp 12930-1--12930-8, 2017.
DOI: http://dx.doi.org/10.1111/nmo.12930
Background: The metabolic activity of colonic microbiota is influenced by diet; however, the relationship between metabolism and colonic content is not known. Our aim was to determine the effect of meals, defecation, and diet on colonic content. Methods: In 10 healthy subjects, two abdominal MRI scans were acquired during fasting, 1 week apart, and after 3 days on low- and high-residue diets, respectively. With each diet, daily fecal output and the number of daytime anal gas evacuations were measured. On the first study day, a second scan was acquired 4 hours after a test meal (n=6) or after 4 hours with nil ingestion (n=4). On the second study day, a scan was also acquired after a spontaneous bowel movement. Results: On the low-residue diet, daily fecal volume averaged 145 ± 15 mL; subjects passed 10.6 ± 1.6 daytime anal gas evacuations and, by the third day, non-gaseous colonic content was 479 ± 36 mL. The high-residue diet increased the three parameters to 16.5 ± 2.9 anal gas evacuations, 223 ± 19 mL fecal output, and 616 ± 55 mL non-gaseous colonic content (P<.05 vs low-residue diet for all). On the low-residue diet, non-gaseous content in the right colon had increased by 41 ± 11 mL, 4 hours after the test meal, whereas no significant change was observed after 4-hour fast (-15 ± 8 mL; P=.006 vs fed). Defecation significantly reduced the non-gaseous content in distal colonic segments. Conclusion & inferences: Colonic content exhibits physiologic variations with an approximate 1/3 daily turnover produced by meals and defecation, superimposed over diet-related day-to-day variations.
Coll, Narcís; Guerrieri, Marité Ethel
International Journal of Geographical Information Science, Vol. 31, Num. 7, pp 1467--1484, 2017.
DOI: http://dx.doi.org/10.1080/13658816.2017.1300804
In this paper, we propose a new graphics processing unit (GPU) method able to compute the 2D constrained Delaunay triangulation (CDT) of a planar straight-line graph consisting of points and segments. All existing methods compute the Delaunay triangulation of the given point set, insert all the segments, and finally transform the resulting triangulation into the CDT. In contrast, our novel approach simultaneously inserts points and segments into the triangulation, taking special care to avoid conflicts during retriangulations due to concurrent insertion of points or concurrent edge flips. Our implementation using the Compute Unified Device Architecture programming model on NVIDIA GPUs improves, in terms of running time, the best known GPU-based approach to the CDT problem.
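The edge flips mentioned in the abstract are driven by the classic incircle predicate: an edge is flipped when the opposite vertex of the adjacent triangle lies inside the circumcircle of the current triangle. A minimal sketch of that predicate (pure Python, illustrative only, not the paper's CUDA code):

```python
# Delaunay flip criterion: for edge (a, b) shared by triangles (a, b, c)
# and (b, a, d), flip when d lies strictly inside the circumcircle of
# the counter-clockwise triangle (a, b, c). Standard 3x3 determinant.

def in_circle(a, b, c, d):
    """True if d is strictly inside the circumcircle of ccw triangle abc."""
    rows = []
    for px, py in (a, b, c):
        x, y = px - d[0], py - d[1]   # translate so d is the origin
        rows.append((x, y, x * x + y * y))
    (m00, m01, m02), (m10, m11, m12), (m20, m21, m22) = rows
    det = (m00 * (m11 * m22 - m12 * m21)
           - m01 * (m10 * m22 - m12 * m20)
           + m02 * (m10 * m21 - m11 * m20))
    return det > 0  # positive (for ccw abc) means d is inside

print(in_circle((0, 0), (1, 0), (0, 1), (0.5, 0.5)))  # True: inside
print(in_circle((0, 0), (1, 0), (0, 1), (2.0, 2.0)))  # False: outside
```

In a GPU setting many such tests run concurrently, which is why the paper must guard against conflicting flips on neighboring triangles.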
Error-aware Construction and Rendering of Multi-scan Panoramas from Massive Point Clouds
Comino, Marc; Andújar, Carlos; Chica, Antoni; Brunet, Pere
Computer Vision and Image Understanding, Vol. 157, pp 43--54, 2017.
DOI: http://dx.doi.org/10.1016/j.cviu.2016.09.011
Obtaining 3D realistic models of urban scenes from accurate range data is nowadays an important research topic, with applications in a variety of fields ranging from Cultural Heritage and digital 3D archiving to monitoring of public works. Processing massive point clouds acquired from laser scanners involves a number of challenges, from data management to noise removal, model compression and interactive visualization and inspection. In this paper, we present a new methodology for the reconstruction of 3D scenes from massive point clouds coming from range lidar sensors. Our proposal includes a panorama-based compact reconstruction where colors and normals are estimated robustly through an error-aware algorithm that takes into account the variance of expected errors in depth measurements. Our representation supports efficient, GPU-based visualization with advanced lighting effects. We discuss the proposed algorithms in a practical application on urban and historical preservation, described by a massive point cloud of 3.5 billion points. We show that we can achieve compression rates higher than 97% with good visual quality during interactive inspections.
Díaz, Jose; Ropinski, Timo; Navazo, Isabel; Gobbetti, Enrico; Vázquez, Pere-Pau
The Visual Computer, Vol. 33, Num. 1, pp 47--61, 2017.
DOI: http://dx.doi.org/10.1007/s00371-015-1151-6
Throughout the years, many shading techniques have been developed to improve the conveyance of information in volume visualization. Some of these methods, usually referred to as realistic, are supposed to provide better cues for the understanding of volume data sets. While shading approaches are heavily exploited in traditional monoscopic setups, no previous study has analyzed the effect of these techniques in virtual reality. To further explore the influence of shading on the understanding of volume data in such environments, we carried out a user study in a desktop-based stereoscopic setup. The goals of the study were to investigate the impact of well-known shading approaches and the influence of real illumination on depth perception. Participants had to perform three different perceptual tasks when exposed to static visual stimuli. 45 participants took part in the study, giving us 1152 trials for each task. Results show that advanced shading techniques improve depth perception in stereoscopic volume visualization. Moreover, external lighting does not affect depth perception when these shading methods are applied. As a result, we derive some guidelines that may help researchers when selecting illumination models for stereoscopic rendering.
Ferre, Josep; Peña, Marta; Susin, Antonio
International Journal of Bifurcation and Chaos, Vol. 27, Num. 1, pp 1--13, 2017.
DOI: http://dx.doi.org/10.1142/S0218127417500055
We complete the study of the bifurcations of saddle/spiral bimodal linear systems, depending on the respective traces T and τ: one 2-codimensional bifurcation and four kinds of 1-codimensional bifurcations. We stratify the bifurcation set in the (T, τ)-plane and describe the qualitative changes of the dynamical behavior at each kind of bifurcation point.
A Perspective on procedural modeling based on structural analysis
Fita, Josep Lluis; Besuievsky, Gonzalo; Patow, Gustavo A.
Virtual Archaeology Review, Vol. 8, Num. 16, pp 44--50, 2017.
DOI: http://dx.doi.org/10.4995/var.2017.5765.
With the rise of available computing capabilities, structural analysis has recently become a key tool for building assessment, usually managed by art historians, curators, and other specialists involved in the study and preservation of ancient buildings. On the other hand, the flourishing field of procedural modeling has provided some exciting breakthroughs for the recreation of lost buildings and urban structures. However, there is a surprising lack of literature on producing procedural-based buildings that take structural analysis into account, which has proven to be a crucial element for the recreation of faithful masonry structures. In order to perform an in-depth study of the advances in this type of analysis for cultural heritage buildings, we carried out a study focused on procedural modeling approaches that make use of structural analysis methods, especially in their application to historic masonry buildings such as churches and cathedrals. Moreover, with the aim of improving the knowledge about structural analysis of procedurally-recreated historical buildings, we have taken a geometric structure, added a set of procedural walls structured in masonry bricks, and studied its behavior in a generic, freely-available simulation tool, thus showing the feasibility of its analysis with non-specialized tools. This has not only allowed us to understand and learn how the different parameter values of a masonry structure can affect the results of the simulation, but has also proven that this kind of simulation can be easily integrated in an off-the-shelf procedural modeling tool, enabling this kind of analysis for a wide variety of historical studies, or restoration and preservation actions.
Intersecting two families of sets on the GPU
Fort, Marta; Sellarès, J. Antoni; Valladares, Ignacio
Journal of Parallel and Distributed Computing, 2017.
DOI: http://dx.doi.org/10.1016/j.jpdc.2017.01.026
The computation of the intersection family of two large families of unsorted sets is an interesting problem from the mathematical point of view which also appears as a subproblem in decision making applications related to market research or temporal evolution analysis problems. The problem of intersecting two families of sets F and F′ is to find the family I of all the sets which are the intersection of some set of F and some other set of F′. In this paper, we present an efficient parallel GPU-based approach, designed under the CUDA architecture, to solve the problem. We also provide an efficient parallel GPU strategy to summarize the output by removing the empty and duplicated sets of the obtained intersection family, maintaining, if necessary, the sets' frequency. The complexity analysis of the presented algorithm, together with experimental results obtained with its implementation, is also presented.
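The problem statement can be made concrete with a small sequential sketch; the paper's contribution is the parallel CUDA version, and the names below are illustrative:

```python
# Intersection family of two families of sets, with the "summarize"
# step from the abstract: empty intersections are dropped and
# duplicates are collapsed while keeping their frequency.
# Sequential sketch only; the paper solves this in parallel on the GPU.

from collections import Counter

def intersection_family(F, G):
    """Map each non-empty intersection S & T (S in F, T in G) to its frequency."""
    counts = Counter()
    for S in F:
        for T in G:
            I = frozenset(S) & frozenset(T)
            if I:                # drop empty intersections
                counts[I] += 1   # count duplicates instead of repeating them
    return counts

F = [{1, 2, 3}, {2, 3, 4}]
G = [{2, 3}, {4, 5}]
print(intersection_family(F, G))  # {2, 3} occurs twice, {4} once
```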
Hermosilla, Pedro; Estrada, Jorge; Guallar, Víctor; Ropinski, Timo; Vinacua, Àlvar; Vázquez, Pere-Pau
IEEE Transactions on Visualization and Computer Graphics, Vol. 23, Num. 1, pp 731--740, 2017.
DOI: http://dx.doi.org/10.1109/TVCG.2016.2598825
Molecular simulations are used in many areas of biotechnology, such as drug design and enzyme engineering. Despite the development of automatic computational protocols, analysis of molecular interactions is still a major aspect where human comprehension and intuition are key to accelerate, analyze, and propose modifications to the molecule of interest. Most visualization algorithms help the users by providing an accurate depiction of the spatial arrangement of the atoms involved in inter-molecular contacts. There are few tools that provide visual information on the forces governing molecular docking. However, these tools, commonly restricted to close interactions between atoms, do not consider whole simulation paths, long-range distances and, importantly, do not provide visual cues for a quick and intuitive comprehension of the energy functions (modeling intermolecular interactions) involved. In this paper, we propose visualizations designed to enable the characterization of interaction forces by taking into account several relevant variables such as molecule-ligand distance and the energy function, which is essential to understand binding affinities. We put emphasis on mapping molecular docking paths obtained from Molecular Dynamics or Monte Carlo simulations, and provide time-dependent visualizations for different energy components and particle resolutions: atoms, groups or residues. The presented visualizations have the potential to support domain experts in a more efficient drug or enzyme design process.
Hermosilla, Pedro; Krone, Michael; Guallar, Víctor; Vázquez, Pere-Pau; Vinacua, Àlvar; Ropinski, Timo
The Visual Computer, Vol. 33, Num. 6, pp 869--881, 2017.
DOI: http://dx.doi.org/10.1007/s00371-017-1397-2
The Solvent Excluded Surface (SES) is a popular molecular representation that gives the boundary of the molecular volume with respect to a specific solvent. SESs depict which areas of a molecule are accessible by a specific solvent, which is represented as a spherical probe. Despite the popularity of SESs, their generation is still a compute-intensive process, which is often performed in a preprocessing stage prior to the actual rendering (except for small models). For dynamic data or varying probe radii, however, such a preprocessing is not feasible as it prevents interactive visual analysis. Thus, we present a novel approach for the on-the-fly generation of SESs, a highly parallelizable, grid-based algorithm where the SES is rendered using ray-marching. By exploiting modern GPUs, we are able to rapidly generate SESs directly within the mapping stage of the visualization pipeline. Our algorithm can be applied to large time-varying molecules and is scalable, as it can progressively refine the SES if GPU capabilities are insufficient. In this paper, we show how our algorithm is realized and how smooth transitions are achieved during progressive refinement. We further show visual results obtained from real world data and discuss the performance obtained, which improves upon previous techniques in both the size of the molecules that can be handled and the resulting frame rate.
Real-Time Solar Exposure Simulation in Complex Cities
Munoz-Pandiella, Imanol; Bosch, Carles; Mérillou, Nicolas; Pueyo, Xavier; Mérillou, Stephane
Computer Graphics Forum, Vol. 36, Num. 8, pp 554--566, 2017.
DOI: http://dx.doi.org/10.1111/cgf.13152
In urban design, estimating solar exposure on complex city models is crucial but existing solutions typically focus on simplified building models and are too demanding in terms of memory and computational time. In this paper, we propose an interactive technique that estimates solar exposure on detailed urban scenes. Given a directional exposure map computed over a given time period, we estimate the sky visibility factor that serves to evaluate the final exposure at each visible point. This is done using a screen-space method based on a two-scale approach, which is geometry independent and has low storage costs. Our method performs at interactive rates and is designer-oriented. The proposed technique is relevant in architecture and sustainable building design as it provides tools to estimate the energy performance of buildings as well as weathering effects in urban environments.
Argudo, Oscar; Besora, Isaac; Brunet, Pere; Creus, Carles; Hermosilla, Pedro; Navazo, Isabel; Vinacua, Àlvar
Computer-Aided Design, Vol. 79, pp 48--59, 2016.
DOI: http://dx.doi.org/10.1016/j.cad.2016.06.005
The use of virtual prototypes and digital models containing thousands of individual objects is commonplace in complex industrial applications like the cooperative design of huge ships. Designers are interested in selecting and editing specific sets of objects during the interactive inspection sessions. This is however not supported by standard visualization systems for huge models. In this paper we discuss in detail the concept of rendering front in multiresolution trees, their properties and the algorithms that construct the hierarchy and efficiently render it, applied to very complex CAD models, so that the model structure and the identities of objects are preserved. We also propose an algorithm for the interactive inspection of huge models which uses a rendering budget and supports selection of individual objects and sets of objects, displacement of the selected objects and real-time collision detection during these displacements. Our solution ---based on the analysis of several existing view-dependent visualization schemes--- uses a Hybrid Multiresolution Tree that mixes layers of exact geometry, simplified models and impostors, together with a time-critical, view-dependent algorithm and a Constrained Front. The algorithm has been successfully tested in real industrial environments; the models involved are presented and discussed in the paper.
Single-picture reconstruction and rendering of trees for plausible vegetation synthesis
Argudo, Oscar; Chica, Antoni; Andújar, Carlos
Computers & Graphics, Vol. 57, pp 55--67, 2016.
DOI: http://dx.doi.org/10.1016/j.cag.2016.03.005
State-of-the-art approaches for tree reconstruction either put limiting constraints on the input side (requiring multiple photographs, a scanned point cloud or intensive user input) or provide a representation only suitable for front views of the tree. In this paper we present a complete pipeline for synthesizing and rendering detailed trees from a single photograph with minimal user effort. Since the overall shape and appearance of each tree is recovered from a single photograph of the tree crown, artists can benefit from georeferenced images to populate landscapes with native tree species. A key element of our approach is a compact representation of dense tree crowns through a radial distance map. Our first contribution is an automatic algorithm for generating such representations from a single exemplar image of a tree. We create a rough estimate of the crown shape by solving a thin-plate energy minimization problem, and then add detail through a simplified shape-from-shading approach. The use of seamless texture synthesis results in an image-based representation that can be rendered from arbitrary view directions at different levels of detail. Distant trees benefit from an output-sensitive algorithm inspired by relief mapping. For close-up trees we use a billboard cloud where leaflets are distributed inside the crown shape through a space colonization algorithm. In both cases our representation ensures efficient preservation of the crown shape. Major benefits of our approach include: it recovers the overall shape from a single tree image, it requires no tree-modeling knowledge and minimal authoring effort, and the associated image-based representation is easy to compress and thus suitable for network streaming.
Beacco, Alejandro; Pelechano, Nuria; Andújar, Carlos
Computer Graphics Forum, Vol. 35, Num. 8, pp 32--50, 2016.
DOI: http://dx.doi.org/10.1111/cgf.12774
In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting performance and realism of crowds such as lighting, shadowing, clothing and variability. Finally we provide an exhaustive comparison of the most relevant approaches in the field.
Colonic content in health and its relation to functional gut symptoms
Bendezú, Alvaro; Barba, Elizabeth; Burri, Emanuel; Cisternas, Daniel; Accarino, Ana; Quiroga, Sergi; Monclús, Eva; Navazo, Isabel; Malagelada, Juan Ramon; Azpiroz, Fernando
Neurogastroenterology and Motility, Vol. 28, Num. 6, pp 849--854, 2016.
DOI: http://dx.doi.org/10.1111/nmo.12782
Background: Gut content may be determinant in the generation of digestive symptoms, particularly in patients with impaired gut function and hypersensitivity. Since the relation of intraluminal gas to symptoms is only partial, we hypothesized that the non-gaseous component may play a decisive role. Methods: Abdominal computed tomography scans were evaluated in healthy subjects during fasting and after a meal (n = 15) and in patients with functional gut disorders during basal conditions (when they were feeling well) and during an episode of abdominal distension (n = 15). Colonic content and distribution were measured by an original analysis program. Key results: In healthy subjects both gaseous (87 ± 24 mL) and non-gaseous colonic content (714 ± 34 mL) were uniformly distributed along the colon. In the early postprandial period gas volume increased (by 46 ± 23 mL), but non-gaseous content did not, although a partial caudad displacement from the descending to the pelvic colon was observed. No differences in colonic content were detected between patients and healthy subjects. Symptoms were associated with discrete increments in gas volume. However, no consistent differences in non-gaseous content were detected in patients between asymptomatic periods and during episodes of abdominal distension. Conclusions & inferences: In patients with functional gut disorders, abdominal distension is not related to changes in non-gaseous colonic content. Hence, other factors, such as intestinal hypersensitivity and poor tolerance of small increases in luminal gas, may be involved.
3D Model deformations with arbitrary control points
Cerveró, M.Àngels; Brunet, Pere; Vinacua, Àlvar
Computers & Graphics, Vol. 57, pp 92--101, 2016.
DOI: http://dx.doi.org/10.1016/j.cag.2016.03.010
Cage-based space deformations are often used to edit and animate images and geometric models. The deformations of the cage are easily transferred to the model by recomputing fixed convex combinations of the vertices of the cage, the control points. In current cage-based schemes the configuration of edges and facets between these control points affects the resulting deformations. In this paper we present a family of similar schemes that includes some of the current techniques, but also new schemes that depend only on the positions of the control points. We prove that these methods afford a solution under fairly general conditions and result in an easy and flexible way to deform objects using freely placed control points, with the necessary conditions of positivity and continuity.
Díaz-García, Jesús; Brunet, Pere; Navazo, Isabel; Perez, Frederic; Vázquez, Pere-Pau
The Visual Computer, Vol. 32, Num. 6, pp 835--845, 2016.
DOI: http://dx.doi.org/10.1007/s00371-016-1253-9
Medical datasets are continuously increasing in size. Although larger models may be available for certain research purposes, in the common clinical practice the models are usually of up to 512×512×2000 voxels. These resolutions exceed the capabilities of conventional GPUs, the ones usually found in the medical doctors’ desktop PCs. Commercial solutions typically reduce the data by downsampling the dataset iteratively until it fits the available target specifications. The data loss reduces the visualization quality and this is not commonly compensated with other actions that might alleviate its effects. In this paper, we propose adaptive transfer functions, an algorithm that improves the transfer function in downsampled multiresolution models so that the quality of renderings is highly improved. The technique is simple and lightweight, and it is suitable, not only to visualize huge models that would not fit in a GPU, but also to render not-so-large models in mobile GPUs, which are less capable than their desktop counterparts. Moreover, it can also be used to accelerate rendering frame rates using lower levels of the multiresolution hierarchy while still maintaining high-quality results in a focus and context approach. We also show an evaluation of these results based on perceptual metrics.
A Fast Daylighting Method to Optimize Opening Configurations in Building Design
Fernandez, Eduardo; Beckers, Benoit; Besuievsky, Gonzalo
Energy and Buildings, Vol. 125, Num. 1, pp 205--218, 2016.
DOI: http://dx.doi.org/10.1016/j.enbuild.2016.05.012
Daylighting plays a very important role in energy saving for sustainable buildings; therefore, setting the optimal shapes and positions of the openings is crucial for daylighting availability. On the other hand, computing daylighting from climate-based data is a time-consuming task involving large data sets, and it is not well suited for optimization approaches. In this paper we propose a new and fast daylighting method that makes it possible to perform opening-shape optimizations. The basis of our method is to model each element of an opening surface as a pinhole and then formulate a compact irradiance-based representation to ease global illumination calculations. We use the UDI metric to evaluate our method on an office-based model, for different orientations and different geographical locations, showing that optimal window shapes can be obtained in short times. Our method also provides an efficient way to analyze the impact of climate-based data on the shape of the openings, as they can be modified interactively.
Solving multiple kth smallest dissimilarity queries for non-metric dissimilarities with the GPU
Fort, Marta; Sellarès, J. Antoni
Information Sciences, Vol. 361, pp 66--83, 2016.
DOI: http://dx.doi.org/10.1016/j.ins.2016.03.054
The kth smallest dissimilarity of a query point with respect to a given set is the dissimilarity that ranks number k when we sort, in increasing order, the dissimilarity values of the points in the set with respect to the query point. A multiple kth smallest dissimilarity query determines the kth smallest dissimilarity for several query points simultaneously. Although the problem of solving multiple kth smallest dissimilarity queries is an important primitive operation used in many areas, such as spatial data analysis, facility location, text classification and content-based image retrieval, it has not been previously addressed explicitly in the literature. In this paper we present three parallel strategies, to be run on a Graphics Processing Unit, for computing multiple kth smallest dissimilarity queries when non-metric dissimilarities, which do not satisfy the triangle inequality, are used. The strategies are theoretically and experimentally analyzed and compared with each other and with an efficient sequential strategy for solving the problem.
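A sequential reference version of the query, of the kind the GPU strategies are compared against, is short; the sketch below uses squared Euclidean distance as an example of a dissimilarity that violates the triangle inequality (illustrative code, not the paper's):

```python
# Multiple kth smallest dissimilarity query, sequential reference
# version. `diss` may be any (possibly non-metric) dissimilarity;
# squared Euclidean distance below is one such example.

def kth_smallest_dissimilarities(queries, points, k, diss):
    """For each query, the dissimilarity ranked k (1-based, increasing order)."""
    return [sorted(diss(q, p) for p in points)[k - 1] for q in queries]

def sq_euclidean(a, b):
    """Squared Euclidean distance: violates the triangle inequality."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

points = [(0, 0), (1, 0), (0, 2), (3, 3)]
queries = [(0, 0), (1, 1)]
print(kth_smallest_dissimilarities(queries, points, k=2, diss=sq_euclidean))
```

Each query is independent of the others, which is what makes the multiple-query formulation a natural fit for a GPU.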
Fort, Marta; Sellarès, J. Antoni
Information Systems, Vol. 62, pp 136--154, 2016.
DOI: http://dx.doi.org/10.1016/j.is.2016.07.003
In this paper we propose, motivate and solve multiple bichromatic mutual nearest neighbor queries in the plane considering multiplicative weighted Euclidean distances. Given two sets of facilities of different types, a multiple bichromatic mutual (k,k′)-nearest neighbor query finds pairs of points, one of each set, such that the point of the first set is a k-nearest neighbor of the point of the second set and, at the same time, the point of the second set is a k′-nearest neighbor of the point of the first set. These queries find applications in collaborative marketing and prospective data analysis, where facilities of one type cooperate with facilities of the other type to obtain reciprocal benefits. We present a sequential and a parallel algorithm, to be run on the CPU and on a Graphics Processing Unit, respectively, for solving multiple bichromatic mutual nearest neighbor queries. We also present the time and space complexity analysis of both algorithms, together with their theoretical comparison. Finally, we provide and discuss experimental results obtained with the implementations of the proposed sequential and parallel algorithms.
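A brute-force sketch of the query definition may help fix ideas. It assumes one possible reading of the multiplicative weighting, where each facility's weight scales its Euclidean distance; the paper's CPU and GPU algorithms are, of course, far more efficient than this:

```python
# Mutual (k, k')-nearest neighbors between two weighted facility sets.
# A facility is (position, weight); its multiplicative weighted distance
# to a location is weight * Euclidean distance (illustrative convention).

import math

def wdist(f, x):
    """Weighted distance from facility f = ((fx, fy), w) to location x."""
    (fx, fy), w = f
    return w * math.hypot(fx - x[0], fy - x[1])

def is_knn(f, x, F, k):
    """True if facility f ranks among the k nearest facilities of F to x."""
    d = wdist(f, x)
    return sum(1 for g in F if wdist(g, x) < d) < k

def mutual_nn(A, B, k, kp):
    """Pairs (a, b): a is a k-NN of b within A and b is a k'-NN of a within B."""
    return [(a, b) for a in A for b in B
            if is_knn(a, b[0], A, k) and is_knn(b, a[0], B, kp)]

A = [((0, 0), 1.0), ((4, 0), 2.0)]
B = [((1, 0), 1.0), ((5, 0), 1.0)]
print(mutual_nn(A, B, k=1, kp=1))
```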
Continuity and Interpolation Techniques for Computer Graphics
Gonzalez Garcia, Francisco; Patow, Gustavo A.
Computer Graphics Forum, 2016.
DOI: http://dx.doi.org/10.1111/cgf.12727
Continuity and interpolation have been crucial topics for computer graphics since its very beginnings. Every time we want to interpolate values across some area, we need to take a set of samples over that interpolating region. However, interpolating samples faithfully, so that the results closely match the underlying functions, can be a tricky task, as the functions to sample may not be smooth and, in the worst case, interpolation may even be impossible when they are not continuous. In those situations, providing the required continuity is not an easy task, and much work has been done to solve this problem. In this paper, we focus on the state of the art in continuity and interpolation in three stages of the real-time rendering pipeline. We study these problems and their current solutions in texture space (2D), object space (3D) and screen space. With this review of the literature in these areas, we hope to bring new light and foster research in these fundamental, yet not completely solved, problems in computer graphics.
Pelechano, Nuria; Fuentes, Carlos
Computers & Graphics, Vol. 59, pp 68--78, 2016.
DOI: http://dx.doi.org/10.1016/j.cag.2016.05.023
Path-finding can become an important bottleneck as both the size of the virtual environments and the number of agents navigating them increase. It is important to develop techniques that can be efficiently applied to any environment, independently of its abstract representation. In this paper we present a hierarchical NavMesh representation to speed up path-finding. Hierarchical path-finding (HPA*) has been successfully applied to regular grids, but there is a need to extend the benefits of this method to polygonal navigation meshes. As opposed to regular grids, navigation meshes offer representations with higher accuracy regarding the underlying geometry, while containing a smaller number of cells. Therefore, we present a bottom-up method to create a hierarchical representation based on a multilevel k-way partitioning algorithm (MLkP), annotated with sub-paths that can be accessed online by our Hierarchical NavMesh Path-finding algorithm (HNA*). The algorithm benefits from searching in graphs with a much smaller number of cells, thus performing up to 7.7 times faster than traditional A* over the initial NavMesh. We present results of HNA* over a variety of scenarios and discuss the benefits of the algorithm together with areas for improvement.
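For context, the search being accelerated is ordinary A*; the hierarchical speed-up comes from running it on a much smaller graph of merged NavMesh cells annotated with precomputed sub-paths. A generic A* sketch over a toy cell graph (illustrative only, not the paper's data structures):

```python
# Plain A* over a cell-adjacency graph: graph maps node -> [(neighbor,
# cost)], h is an admissible heuristic to the goal (here trivially 0,
# i.e. Dijkstra). Hierarchical schemes run this same search on a
# coarser graph with far fewer cells.

import heapq

def a_star(graph, h, start, goal):
    """Return the cheapest path from start to goal, or None."""
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best = {start: 0}                            # best known g per node
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for nxt, cost in graph.get(node, []):
            ng = g + cost
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": [("D", 1)]}
print(a_star(graph, lambda n: 0, "A", "D"))  # ['A', 'B', 'C', 'D']
```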
Sunet, Marc; Comino, Marc; Karatzas, Dimosthenis; Chica, Antoni; Vázquez, Pere-Pau
IADIS International Journal on Computer Science and Information Systems, Vol. 11, Num. 2, pp 1--18, 2016.
Despite the large number of methods and applications of augmented reality, there is little homogenization of the software platforms that support them. An exception may be the low-level control software provided by some high-profile vendors such as Qualcomm and Metaio. However, these provide fine-grain modules for, e.g., element tracking. We are more concerned with the application framework, which includes the control of the devices working together for the development of the AR experience. In this paper we describe the development of a software framework for AR setups. We concentrate on the modular design of the framework, but also on some hard problems such as the calibration stage, crucial for projection-based AR. The developed framework is suitable for, and has been tested in, AR applications using camera-projector pairs, for both fixed and nomadic setups.
Argudo, Oscar; Brunet, Pere; Chica, Antoni; Vinacua, Àlvar
Graphical Models, Vol. 82, pp 137--148, 2015.
DOI: http://dx.doi.org/10.1016/j.gmod.2015.06.010
We discuss bi-harmonic fields which approximate signed distance fields. We conclude that the biharmonic field approximation can be a powerful tool for mesh completion in general and complex cases. We present an adaptive, multigrid algorithm to extrapolate signed distance fields. By defining a volume mask in a closed region bounding the area that must be repaired, the algorithm computes a signed distance field in well-defined regions and uses it as an over-determined boundary condition constraint for the biharmonic field computation in the remaining regions. The algorithm operates locally, within an expanded bounding box of each hole, and therefore scales well with the number of holes in a single, complex model. We discuss this approximation in practical examples in the case of triangular meshes resulting from laser scan acquisitions which require massive hole repair. We conclude that the proposed algorithm is robust and general, and is able to deal with complex topological cases.
Barba, E.; Burri, E.; Accarino, A.; Cisterna, D.; Quiroga, S.; Monclús, Eva; Navazo, Isabel; Malagelada, J; Azpiroz, F.
Gastroenterology, Vol. 148, Num. 4, pp 732--739, 2015.
DOI: http://dx.doi.org/10.1053/j.gastro.2014.12.006
Background and Aims: In patients with functional gut disorders, abdominal distension has been associated with descent of the diaphragm and protrusion of the anterior abdominal wall. We investigated mechanisms of abdominal distension in these patients. Methods: We performed a prospective study of 45 patients (42 women, 24–71 years old) with functional intestinal disorders (27 with irritable bowel syndrome with constipation, 15 with functional bloating, and 3 with irritable bowel syndrome with alternating bowel habits) and discrete episodes of visible abdominal distension. Subjects were assessed by abdominothoracic computed tomography (n = 39) and electromyography (EMG) of the abdominothoracic wall (n = 32) during basal conditions (without abdominal distension) and during episodes of severe abdominal distension. Fifteen patients received a median of 2 sessions (range, 1–3 sessions) of EMG-guided, respiratory-targeted biofeedback treatment; 11 received 1 control session before treatment. Results: Episodes of abdominal distension were associated with diaphragm contraction (19% ± 3% increase in EMG score and 12 ± 2 mm descent; P < .001 vs basal values) and intercostal contraction (14% ± 3% increase in EMG scores and 6 ± 1 mm increase in thoracic antero-posterior diameter; P < .001 vs basal values). They were also associated with increases in lung volume (501 ± 93 mL; P < .001 vs basal value) and anterior abdominal wall protrusion (32 ± 3 mm increase in girth; P < .001 vs basal). Biofeedback treatment, but not control sessions, reduced the activity of the intercostal muscles (by 19% ± 2%) and the diaphragm (by 18% ± 4%), activated the internal oblique muscles (by 52% ± 13%), and reduced girth (by 25 ± 3 mm) (P ≤ .009 vs pretreatment for all). Conclusions: In patients with functional gut disorders, abdominal distension is a behavioral response that involves activity of the abdominothoracic wall. This distension can be reduced with EMG-guided, respiratory-targeted biofeedback therapy.
Beacco, Alejandro; Pelechano, Nuria; Kapadia, M; Badler, N.I.
Computers & Graphics, Vol. 47, pp 105--112, 2015.
DOI: http://dx.doi.org/10.1016/j.cag.2014.12.004
This paper presents a real-time animation system for fully-embodied virtual humans that satisfies accurate foot placement constraints for different human walking and running styles. Our method offers a fine balance between motion fidelity and character control, and can efficiently animate over sixty agents in real time (25 FPS) and over a hundred characters at 13 FPS. Given a point cloud of reachable support foot configurations extracted from the set of available animation clips, we compute the Delaunay triangulation. At runtime, the triangulation is queried to obtain the simplex containing the next footstep, which is used to compute the barycentric blending weights of the animation clips. Our method synthesizes animations to accurately follow footsteps, and a simple IK solver adjusts small offsets, foot orientation, and handles uneven terrain. To incorporate root velocity fidelity, the method is further extended to include the parametric space of root movement and combine it with footstep based interpolation. The presented method is evaluated on a variety of test cases and error measurements are calculated to offer a quantitative analysis of the results achieved.
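The simplex query and barycentric blending step described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a linear scan over a triangle list stands in for the Delaunay triangulation query, and the clip names are hypothetical.

```python
def barycentric(tri, p):
    """Barycentric coordinates of 2-D point p in triangle tri = (a, b, c)."""
    (ax, ay), (bx, by), (cx, cy) = tri
    px, py = p
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return (w0, w1, 1.0 - w0 - w1)

def blend_weights(triangles, clip_sets, footstep):
    """Locate the simplex containing the next footstep and return the
    animation clips to blend, together with their barycentric blending
    weights (which sum to 1)."""
    for tri, clips in zip(triangles, clip_sets):
        w = barycentric(tri, footstep)
        if all(wi >= -1e-9 for wi in w):   # inside (or on) the triangle
            return clips, w
    return None, None  # footstep outside the sampled support space

# One triangle of sampled support-foot configurations, three clips:
tris = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
clips = [("step_short", "step_long", "step_side")]   # hypothetical names
ids, w = blend_weights(tris, clips, (0.25, 0.25))
```

The returned weights are exactly the per-clip blending factors; a footstep outside the convex hull of the sampled configurations has no containing simplex, which is the case an IK adjustment or fallback clip would need to handle.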
Bendezú, A.; Barba, E.; Burri, E.; Cisternas, D.; Malagelada, C.; Segui, S.; Accarino, A.; Quiroga, S.; Monclús, Eva; Navazo, Isabel; Malagelada, J.; Azpiroz, F.
Neurogastroenterology and motility, Vol. 27, Num. 9, pp 1249--1257, 2015.
DOI: http://dx.doi.org/10.1111/nmo.12618
Background: The precise relation of intestinal gas to symptoms, particularly abdominal bloating and distension remains incompletely elucidated. Our aim was to define the normal values of intestinal gas volume and distribution and to identify abnormalities in relation to functional‐type symptoms. Methods: Abdominal computed tomography scans were evaluated in healthy subjects (n = 37) and in patients in three conditions: basal (when they were feeling well; n = 88), during an episode of abdominal distension (n = 82) and after a challenge diet (n = 24). Intestinal gas content and distribution were measured by an original analysis program. Identification of patients outside the normal range was performed by machine learning techniques (one‐class classifier). Results are expressed as median (IQR) or mean ± SE, as appropriate. Key Results: In healthy subjects the gut contained 95 (71, 141) mL gas distributed along the entire lumen. No differences were detected between patients studied under asymptomatic basal conditions and healthy subjects. However, either during a spontaneous bloating episode or once challenged with a flatulogenic diet, luminal gas was found to be increased and/or abnormally distributed in about one‐fourth of the patients. These patients detected outside the normal range by the classifier exhibited a significantly greater number of abnormal features than those within the normal range (3.7 ± 0.4 vs 0.4 ± 0.1; p < 0.001). Conclusions & Inferences: The analysis of a large cohort of subjects using original techniques provides unique and heretofore unavailable information on the volume and distribution of intestinal gas in normal conditions and in relation to functional gastrointestinal symptoms.
Inverse Opening Design with Anisotropic Lighting Incidence
Besuievsky, Gonzalo
Computers and Graphics, Vol. 47, Num. 1, pp 113--122, 2015.
DOI: http://dx.doi.org/10.1016/j.cag.2015.01.003
In architectural design, configuring opening shapes is a crucial element of daylight analysis. In this paper we present a new method which optimizes opening shapes to meet specified lighting design purposes. This novel approach treats the problem as an inverse lighting problem considering global illumination contributions and anisotropic lighting incidence, so any kind of sky distribution can be used as an external source of light. The key to our technique is in exploiting coherence to formulate a compact representation that can be tailored to optimization processes. The resulting reduction in processing time and efficiency in achieving optimal shapes, along with the feasibility of dealing with anisotropic light sources, are our key contributions.
Immersive data comprehension: visualizing uncertainty in measurable models
Brunet, Pere; Andújar, Carlos
Frontiers in Robotics and AI, Vol. 2, Num. 22, 2015.
DOI: http://dx.doi.org/10.3389/frobt.2015.00022
Recent advances in 3D scanning technologies have opened new possibilities in a broad range of applications including cultural heritage, medicine, civil engineering, and urban planning. Virtual Reality systems can provide new tools to professionals that want to understand acquired 3D models. In this review paper, we analyze the concept of data comprehension with an emphasis on visualization and inspection tools on immersive setups. We claim that in most application fields, data comprehension requires model measurements, which in turn should be based on the explicit visualization of uncertainty. As 3D digital representations are not faithful, information on their fidelity at local level should be included in the model itself as uncertainty bounds. We propose the concept of Measurable 3D Models as digital models that explicitly encode such local uncertainty bounds. We claim that professionals and experts can strongly benefit from immersive interaction through new specific, fidelity-aware measurement tools, which can facilitate 3D data comprehension. Since noise and processing errors are ubiquitous in acquired datasets, we discuss the estimation, representation, and visualization of data uncertainty. We show that, based on typical user requirements in Cultural Heritage and other domains, application-oriented measuring tools in 3D models must consider uncertainty and local error bounds. We also discuss the requirements of immersive interaction tools for the comprehension of huge 3D and nD datasets acquired from real objects.
Common influence region problems
Fort, Marta; Sellarès, J. Antoni
Information Sciences, Vol. 231, pp 116--135, 2015.
DOI: http://dx.doi.org/10.1016/j.ins.2015.05.038
We introduce problems related to two competitive sets of collaborative facilities. We solve common influence region queries and location problems. We present algorithms to be run in parallel to solve the introduced problems. We provide the theoretical complexity analysis of the proposed solutions. We experimentally test the algorithms, showing their efficiency and scalability. In this paper we propose and solve common influence region problems. These problems are related to the simultaneous influence, or the capacity to attract customers, of two sets of facilities of different types. For instance, while a facility of the first type competes with the other facilities of the first type, it cooperates with several facilities of the second type. The problems studied can be applied, for example, to decision-making support systems for marketing and/or locating facilities. We present parallel algorithms, to be run on a Graphics Processing Unit, for approximately solving the problems considered here. We also provide experimental results and discuss the efficiency and scalability of our approach. Finally, we present the speedup ratios obtained when the running times of the parallel proposed algorithms using a GPU are compared with those obtained from their respective efficient sequential CPU versions.
Hermosilla, Pedro; Guallar, Víctor; Vinacua, Àlvar; Vázquez, Pere-Pau
Computers & Graphics, Vol. 54, pp 113-120, 2015.
DOI: http://dx.doi.org/10.1016/j.cag.2015.07.017
All-atom simulations are crucial in biotechnology. In Pharmacology, for example, molecular knowledge of protein-drug interactions is essential in the understanding of certain pathologies and in the development of improved drugs. To achieve this detailed information, fast and enhanced molecular visualization is critical. Moreover, hardware and software developments quickly deliver extensive data, providing intermediate results that can be analyzed by scientists in order to interact with the simulation process and direct it to a more promising configuration. In this paper we present a GPU-friendly data structure for real-time illustrative visualization of all-atom simulations. Our system generates both ambient occlusion and halos using an occupancy pyramid that needs no precalculation and that is updated on the fly during simulation, allowing the real time rendering of simulation results at sustained high framerates.
Oliva, Ramón; Pelechano, Nuria
Computers & Graphics, Vol. 47, pp 48--58, 2015.
DOI: http://dx.doi.org/10.1016/j.cag.2014.11.004
There are two frequent artifacts in crowd simulation caused by navigation mesh design. The first appears when all agents attempt to traverse the navigation mesh and share the same way points through portals, thus increasing the probability of collisions with other agents or queues forming around portals. The second is caused by way points being assigned at locations where clearance is not guaranteed, which causes the agents to either walk too close to the static geometry, slide along walls or get stuck. To overcome this we use the full length of the portal and propose a novel method for dynamically calculating way points based on current trajectory, destination, and clearance, therefore guaranteeing that agents in a crowd will have different way points assigned. To achieve collision-free paths we propose two novel techniques: the first provides the computation of paths with clearance for cells of any shape (even with concavities) and the second presents a new method for calculating portals with clearance, so that the dynamically assigned way points will always guarantee collision-free paths relative to the static geometry. In this paper, we extend our previous work by describing a new version of the algorithm that is suitable for a larger number of navigation meshes, while further improving performance. Our results show how the combination of portals with exact clearance and dynamic way points improves local movement by reducing the number of collisions between agents and the static geometry. We evaluate our algorithm with a variety of scenarios and compare our results with traditional way points to show that our technique also offers better use of the space by the agents.
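As an illustration of the dynamic way point idea described above, the sketch below picks a way point along the full portal by intersecting the agent-to-goal line with the portal and clamping the result so the required clearance from the portal endpoints is kept. It is a simplified reconstruction under assumed conventions, not the paper's exact rule.

```python
import math

def dynamic_waypoint(p0, p1, agent, goal, clearance):
    """Way point on portal segment [p0, p1]: intersect the agent->goal
    line with the portal line, then clamp the intersection parameter so
    the point stays at least 'clearance' away from both endpoints."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]            # portal direction
    gx, gy = goal[0] - agent[0], goal[1] - agent[1]  # travel direction
    length = math.hypot(dx, dy)
    denom = dx * gy - dy * gx
    if abs(denom) < 1e-12:           # travel parallel to the portal:
        t = 0.5                      # fall back to the portal midpoint
    else:
        ax, ay = agent[0] - p0[0], agent[1] - p0[1]
        t = (ax * gy - ay * gx) / denom
    lo, hi = clearance / length, 1.0 - clearance / length
    t = max(lo, min(hi, t))          # keep clearance from the corners
    return (p0[0] + t * dx, p0[1] + t * dy)
```

Because t depends on each agent's own position and goal, two agents crossing the same portal generally receive different way points, which is the mechanism that reduces queuing at portals.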
Ramirez-Flores, J.E.; Susin, Antonio
Computer Animation and Virtual Worlds, 2015.
DOI: http://dx.doi.org/10.1002/cav.1687
Skeleton-driven animation is popular for its simplicity and intuitive control of the limbs of a character. Linear blend skinning (LBS) is to date the most efficient and simple deformation method; however, painting skinning influence weights is not intuitive, and LBS suffers from the candy-wrapper artifact. In this paper, we propose an approach based on mesh segmentation for skinning and skeleton-driven computer animation. We propose a novel and fast method, based on watershed segmentation, to deal with characters in T-pose and arbitrary poses; a simple weight assignment algorithm, based on the rigid skinning obtained with the segmentation algorithm, for the LBS deformation method; and finally, a modified version of LBS that avoids the loss of volume in twist rotations using the segmentation stage output values.
Real-Time Molecular Visualization Supporting Diffuse Illumination and Ambient Occlusion
Skanberg, Robin; Vázquez, Pere-Pau; Guallar, Victor; Ropinski, Timo
IEEE Transactions on Visualization and Computer Graphics, Vol. 22, Num. 1, pp 718--727, 2015.
DOI: http://dx.doi.org/10.1109/TVCG.2015.2467293
Today molecular simulations produce complex data sets capturing the interactions of molecules in detail. Due to the complexity of this time-varying data, advanced visualization techniques are required to support its visual analysis. Current molecular visualization techniques utilize ambient occlusion as a global illumination approximation to improve spatial comprehension. Besides these shadow-like effects, interreflections are also known to improve the spatial comprehension of complex geometric structures. Unfortunately, the inherent computational complexity of interreflections would forbid interactive exploration, which is mandatory in many scenarios dealing with static and time-varying data. In this paper, we introduce a novel analytic approach for capturing interreflections of molecular structures in real-time. By exploiting the knowledge of the underlying space filling representations, we are able to reduce the required parameters and can thus apply symbolic regression to obtain an analytic expression for interreflections. We show how to obtain the data required for the symbolic regression analysis, and how to exploit our analytic solution to enhance interactive molecular visualizations.
Andújar, Carlos; Chica, Antoni; Vico, Miguel Angel; Moya, Sergio; Brunet, Pere
Computer Graphics Forum, Vol. 33, Num. 6, pp 101--117, 2014.
DOI: http://dx.doi.org/10.1111/cgf.12281
In this paper, we present an inexpensive approach to create highly detailed reconstructions of the landscape surrounding a road. Our method is based on a space-efficient semi-procedural representation of the terrain and vegetation supporting high-quality real-time rendering not only for aerial views but also at road level. We can integrate photographs along selected road stretches. We merge the point clouds extracted from these photographs with a low-resolution digital terrain model through a novel algorithm which is robust against noise and missing data. We pre-compute plausible locations for trees through an algorithm which takes into account perceptual cues. At runtime we render the reconstructed terrain along with plants generated procedurally according to pre-computed parameters. Our rendering algorithm ensures visual consistency with aerial imagery and thus it can be integrated seamlessly with current virtual globes.
Procedural bread making
Baravalle, Rodrigo; Patow, Gustavo A.; Delrieux, Claudio
Computers & Graphics, Vol. 50, pp 13-24, 2014.
DOI: http://dx.doi.org/10.1016/j.cag.2015.05.003
Accurate modeling and rendering of food, and in particular of bread and other baked goods, have not received as much attention as other materials in the photorealistic rendering literature. In particular, bread turns out to be a structurally complex material, and the eye is very precise in spotting improper models, making adequate bread modeling a difficult task. In this paper we present an accurate computational bread-making model that allows us to faithfully represent the geometrical structure and the appearance of bread throughout its making process. This is achieved by a careful simulation of the conditions during proving and baking to obtain realistic-looking bread. Our results are successfully compared to real bread both by visual inspection and by a multifractal-based error metric.
Civit, Oscar; Susin, Antonio
Computer Graphics Forum, Vol. 33, Num. 6, pp 298--309, 2014.
DOI: http://dx.doi.org/10.1111/cgf.12351
We address the problem of robust and efficient treatment of element collapse and inversion in corotational FEM simulations of deformable objects in two and three dimensions, and show that existing degeneration treatment methods have previously unreported flaws that seriously threaten robustness and physical plausibility in interactive applications. We propose a new method that avoids such flaws, yields faster and smoother degeneration recovery and extends the range of well-behaved degenerate configurations without adding significant complexity or computational cost to standard explicit and quasi-implicit solvers.
A Sample-Based Method for Computing the Radiosity Inverse Matrix
Eduardo Fernández; Besuievsky, Gonzalo
Computers & Graphics, 2014.
DOI: http://dx.doi.org/10.1016/j.cag.2014.02.001
The radiosity problem can be expressed as a linear system, where the light transport interactions of all patches of the scene are considered. Due to the amount of computation required to solve the system, the whole matrix is rarely computed and iterative methods are used instead. In this paper we introduce a new algorithm to obtain an approximation of the radiosity inverse matrix. The method is based on the calculation of a random sample of rows of the form factor matrix. The availability of this matrix allows us to reduce the radiosity calculation costs, speeding up the radiosity process. This is useful in applications where the radiosity equation must be solved thousands of times for different light configurations. We apply it to solve inverse lighting problems, in scenes up to 170K patches. The optimization process used finds optimal solutions in nearly interactive times, which improves on previous work.
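For context, the linear system mentioned in the abstract is the classical radiosity equation B = E + ρFB, whose solution is B = (I − ρF)⁻¹E. The toy sketch below solves a two-patch instance by fixed-point iteration in pure Python; the paper's contribution, by contrast, is approximating the inverse matrix itself from a random sample of rows of F so that many emission vectors E can be resolved cheaply.

```python
def solve_radiosity(F, rho, E, iters=60):
    """Iterate B <- E + rho * (F B) for the radiosity system
    B = E + rho F B. The iteration converges when the spectral radius
    of rho*F is below 1, which holds for physically valid (< 1)
    reflectances and row-stochastic form factors."""
    n = len(E)
    B = list(E)
    for _ in range(iters):
        B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

# Two facing patches that see only each other, 50% reflectance:
F = [[0.0, 1.0], [1.0, 0.0]]
B = solve_radiosity(F, rho=[0.5, 0.5], E=[1.0, 0.0])
# Fixed point: B0 = 1 + 0.5*B1 and B1 = 0.5*B0, i.e. B = (4/3, 2/3)
```

Every new emission vector E requires rerunning the whole iteration; with an (approximate) inverse matrix available, each new E costs only a matrix-vector product, which is why the inverse pays off in inverse lighting, where the system is solved thousands of times.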
Eduardo Fernández; Besuievsky, Gonzalo
Automation in Construction, Vol. 37, Num. 1, pp 48--57, 2014.
DOI: http://dx.doi.org/10.1016/j.autcon.2013.09.004
Given a scene to illuminate satisfying a specific set of lighting intentions, inverse lighting techniques allow one to obtain the unknown light source parameters, such as light position or flux emission. This paper introduces a new inverse lighting approach that uses the radiosity mean and variance to define lighting intentions of a scene. It is shown that these statistical parameters can be obtained without the previous calculation of the radiosity of the scene. Avoiding the explicit computation of the illumination of the scene results in a drastic reduction of the time required for the inverse process. This approach also provides a methodology that transforms a current set of lighting intentions into a single lighting intention with statistical parameters. The tests show that the processing time for solving the inverse problem can be reduced to a few seconds in most cases, improving previous work.
Ferrer, J.; Peña, M.; Susin, Antonio
Mathematical Problems in Engineering, Vol. 2014, pp 8, 2014.
DOI: http://dx.doi.org/10.1155/2014/892948
Structural stability ensures that the qualitative behavior of a system is preserved under small perturbations. We study it for planar bimodal linear dynamical systems, that is, systems consisting of two linear dynamics acting on each side of a given hyperplane, assuming continuity along the separating hyperplane. We characterize which of these systems are structurally stable when no (real) spiral appears; when one does, we give necessary and sufficient conditions concerning finite periodic orbits and saddle connections. In particular, we study the finite periodic orbits and the homoclinic orbits in the saddle/spiral case.
Finding extremal sets on the GPU
Fort, Marta; Sellarès, J. Antoni
Journal of Parallel and Distributed Computing, Vol. 74, Num. 1, pp 1891--1899, 2014.
DOI: http://dx.doi.org/10.1016/j.jpdc.2013.07.004
The extremal sets of a family F of sets consist of all sets of F that are maximal or minimal with respect to the partial order induced by the subset relation in F. In this paper we present efficient parallel GPU-based algorithms, designed under the CUDA architecture, for finding the extremal sets of a family F of sets. The complexity analysis of the presented algorithms, together with experimental results showing the efficiency and scalability of the approach, is provided.
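A sequential reference version of this computation is straightforward (quadratic in the number of sets); the point of the paper is parallelizing the pairwise subset tests on the GPU. The sketch below is illustrative, not the authors' CUDA code.

```python
def extremal_sets(family):
    """Return the (maximal, minimal) sets of a family under the partial
    order induced by the subset relation: a set is maximal if no other
    set of the family strictly contains it, and minimal if it strictly
    contains no other set of the family."""
    fam = [frozenset(s) for s in family]
    # '<' on frozensets is the strict-subset test
    maximal = [set(s) for s in fam if not any(s < t for t in fam)]
    minimal = [set(s) for s in fam if not any(t < s for t in fam)]
    return maximal, minimal

maximal, minimal = extremal_sets([{1, 2}, {1}, {2, 3}, {1, 2, 3}])
```

Each of the O(n²) subset tests is independent of the others, which is what makes the problem a natural fit for the data-parallel GPU formulation the paper develops.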
Solving the k-influence region problem with the GPU
Fort, Marta; Sellarès, J. Antoni
Information Sciences, 2014.
DOI: http://dx.doi.org/10.1016/j.ins.2013.12.002
In this paper we study a problem that arises in the competitive facility location field. Facilities and customers are represented by points of a planar Euclidean domain. We associate a weighted distance with each facility to reflect that customers select facilities depending on distance and importance. We define, by considering weighted distances, the k-influence region of a facility as the set of points of the domain that have the given facility among their k-nearest/farthest neighbors. On the other hand, we partition the domain into subregions so that each subregion has a non-negative weight associated with it which measures a characteristic related to the area of the subregion. Given a weighted partition of the domain, the k-influence region problem finds the points of the domain where a new facility should be opened. This is done considering the known weight associated with the new facility and ensuring a minimum weighted area of its k-influence region. We present a GPU parallel approach, designed under the CUDA architecture, for approximately solving the k-influence region problem. In addition, we describe how to visualize the solutions, which improves the understanding of the problem and reveals complicated structures that would be hard to capture otherwise. Integrating computation and visualization provides decision makers with an iterative what-if analysis process to acquire more information and obtain an approximately optimal location. Finally, we provide and discuss experimental results showing the efficiency and scalability of our approach.
Fort, Marta; Sellarès, J. Antoni; Valladares, Ignacio
Knowledge and Information Systems, 2014.
DOI: http://dx.doi.org/10.1007/s10115-013-0639-5
Data analysis and knowledge discovery in trajectory databases is an emerging field with a growing number of applications such as managing traffic, planning tourism infrastructures, analyzing professional sport matches or better understanding wildlife. A well-known collection of patterns which can occur for a subset of trajectories of moving objects exists. In this paper, we study the popular places pattern, that is, locations that are visited by many moving objects. We consider two criteria, strong and weak, to establish either the exact number of times that an object has visited a place during its complete trajectory or whether it has visited the place, or not. To solve the problem of reporting popular places, we introduce the popularity map. The popularity of a point is a measure of how many times the moving objects of a set have visited that point. The popularity map is the subdivision, into regions, of a plane where all the points have the same popularity. We propose different algorithms to efficiently compute and visualize popular places, the so-called popular regions and their schematization, by taking advantage of the parallel computing capabilities of the graphics processing units. Finally, we provide and discuss the experimental results obtained with the implementation of our algorithms.
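A minimal CPU sketch of the popularity map idea: rasterize each trajectory onto a grid and count, per cell, either the number of visits (the strong criterion) or the number of distinct objects that ever visited it (the weak criterion). The cell size and the visit-collapsing rule are assumptions for illustration, not the paper's GPU construction.

```python
from collections import Counter

def popularity_grid(trajectories, cell=1.0, criterion="strong"):
    """Count, per grid cell, either the total number of visits by all
    objects ("strong") or the number of distinct objects that ever
    visited the cell ("weak")."""
    counts = Counter()
    for traj in trajectories:
        cells = [(int(x // cell), int(y // cell)) for x, y in traj]
        # consecutive samples in the same cell form a single visit
        visits = [c for i, c in enumerate(cells)
                  if i == 0 or c != cells[i - 1]]
        if criterion == "weak":
            visits = set(visits)      # count each object at most once
        counts.update(visits)
    return counts

paths = [[(0.5, 0.5), (0.6, 0.4), (1.5, 0.5), (0.5, 0.5)],  # revisits (0,0)
         [(0.1, 0.1)]]
strong = popularity_grid(paths)                   # (0,0): 3 visits
weak = popularity_grid(paths, criterion="weak")   # (0,0): 2 objects
```

The popular regions are then simply the cells whose count exceeds a threshold; the paper computes the exact planar subdivision (and its schematization) on the GPU rather than a fixed grid.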
A parallel GPU-based approach for reporting flock patterns
Fort, Marta; Sellarès, J. Antoni; Valladares, Ignacio
International Journal of Geographical Information Science, Vol. 28, Num. 9, pp 1877--1903, 2014.
DOI: http://dx.doi.org/10.1080/13658816.2014.902949
Data analysis and knowledge discovery in trajectory databases is an emerging field with a growing number of applications such as managing traffic, planning tourism infrastructures or better understanding wildlife. In this paper, we study the problem of finding flock patterns in trajectory databases. A flock refers to a large enough subset of entities that move close to each other for, at least, a given time interval. We present parallel algorithms, to be run on a Graphics Processing Unit, for reporting three different variants of the flock pattern: (1) all maximal flocks, (2) the largest flock and (3) the longest flock. We also provide their complexity analysis together with experimental results showing the efficiency and scalability of our approach.
Fracture Modeling in Computer Graphics
Muguercia, Lien; Bosch, Carles; Patow, Gustavo A.
Computers & Graphics, Vol. 45, pp 86-100, 2014.
DOI: http://dx.doi.org/10.1016/j.cag.2014.08.006
While object deformation has received a lot of attention in Computer Graphics in recent years, with several good surveys that summarize the state-of-the-art in the field, a comparable comprehensive literature review is still needed for the related problem of crack and fracture modeling. In this paper we present such a review, with a special focus on the latest advances in this area, and a careful analysis of the open issues along with the avenues for further research. With this survey, we hope to provide the community not only a fresh view of the topic, but also an incentive to delve into and explore these unsolved problems further.
Ojeda, Jesús; Susin, Antonio
Communications in computer and information science, Vol. 458, pp 3--18, 2014.
DOI: http://dx.doi.org/10.1007/978-3-662-44911-0_1
We present a new approach for the simulation of surface-based fluids based on a hybrid formulation of the Lattice Boltzmann Method for Shallow Waters and particle systems. The modified LBM can handle arbitrary underlying terrain conditions and arbitrary fluid depth. It also introduces a novel method for tracking dry-wet regions and moving boundaries. Dynamic rigid bodies are also included in our simulations using two-way coupling. Certain features of the simulation that the LBM cannot handle because of its heightfield nature, such as breaking waves, are detected and automatically turned into splash particles. Here we use a ballistic particle system, but our hybrid method can handle more complex systems such as SPH. Both the LBM and the particle systems are implemented in CUDA, although dynamic rigid bodies are simulated on the CPU. We show the effectiveness of our method with various examples which achieve real-time performance on consumer-level hardware.
Pueyo, Oriol; Patow, Gustavo A.
The Visual Computer, Vol. 30, Num. 2, pp 159-172, 2014.
DOI: http://dx.doi.org/10.1007/s00371-013-0791-7
Geometric city modeling is an open problem without standard solutions. Within this problem there appear several sub-problems that must be faced, like the accurate modeling of streets, buildings and other architectonic structures. One important source of geographical information is (measured) cadastral urban data. However, this information is not always well structured, and sometimes it is even simply corrupted GIS data. In this paper we present a robust and generic solution for the generation of block and building layouts, based on a repairing process applied when this data is not correct. Our input data is a top projection map of a city, which usually has been created by a mixture of photogrammetric restitution and, in a second stage, hand-drawing using a GIS application. Moreover, these maps are under continuous modification, as in the case of public administrations. This process sometimes results in the introduction of mistakes and anomalies, which are hard to correct without the appropriate tools. Our solution is based on a novel semiautomatic 2D restructuring algorithm, which uniformly corrects errors and ambiguities that are commonly present in corrupted cadastral data. This problem is complex because it is necessary to identify not just simple elements from the input file, but also their connectivity and structure in the real world. The output of our algorithm is the urban data restructured into a hierarchy of blocks and buildings, from which we can obtain a realistic 3D model by extruding each building according to the floor count stored in the cadastral data.
Pueyo, Xavier; Bosch, Carles; Patow, Gustavo A.
Frontiers in Robotics and AI, Vol. 1, Num. 17, 2014.
DOI: http://dx.doi.org/10.3389/frobt.2014.00017
Computer Graphics has evolved into a mature and powerful field that offers many opportunities to enhance different disciplines, adapting to the specific needs of each. One of these important fields is the design and analysis of Urban Environments. In this article we try to offer a perspective of one of the sectors identified in Urban Environment studies: Urbanization. More precisely we focus on geometric and appearance modeling, rendering and simulation tools to help stakeholders in key decision stages of the process.
Continuous surveillance of points by rotating floodlights
S. Bereg; J.M. Díaz-Bañez; Fort, Marta; M.A. López; P. Pérez-Lantero; J. Urrutia
International Journal of Computational Geometry & Applications, Vol. 24, Num. 3, pp 183--196, 2014.
DOI: http://dx.doi.org/10.1142/S0218195914600024
Let P and F be sets of n ≥ 2 and m ≥ 2 points in the plane, respectively. We study the problem of finding the minimum angle α ∈ [2π/m, 2π] such that one can install at each point of F a stationary rotating floodlight with illumination angle α, initially oriented in a suitable direction, in such a way that, at all times, every target point of P is illuminated by at least one floodlight. All floodlights rotate clockwise at unit speed. We provide bounds for the case in which the elements of P ∪ F are on a given line, and present exact results for the case in the plane in which we have two floodlights and many target points. We further consider the non-rotating version of the problem and look for the minimum angle α such that one can install a non-rotating floodlight with illumination angle α at each point of F, in such a way that every target point of P is illuminated by at least one floodlight. We show that this problem is NP-hard and hard to approximate.
Argelaguet, Ferran; Andújar, Carlos
Computers & Graphics, Vol. 37, Num. 3, pp 121-136, 2013.
DOI: http://dx.doi.org/10.1016/j.cag.2012.12.003
Computer graphics applications controlled through natural gestures are gaining increasing popularity these days due to recent developments in low-cost tracking systems and gesture recognition technologies. Although interaction techniques through natural gestures have already demonstrated their benefits in manipulation, navigation and avatar-control tasks, effective selection with pointing gestures remains an open problem. In this paper we survey the state-of-the-art in 3D object selection techniques. We review important findings in human control models, analyze major factors influencing selection performance, and classify existing techniques according to a number of criteria. Unlike other components of the application's user interface, pointing techniques need a close coupling with the rendering pipeline, introducing new elements to be drawn, and potentially modifying the object layout and the way the scene is rendered. Conversely, selection performance is affected by rendering issues such as visual feedback, depth perception, and occlusion management. We thus review existing literature paying special attention to those aspects in the boundary between computer graphics and human–computer interaction.
Barba, Elisabeth; Quiroga, Sergi; Accarino, Anna; Monclús, Eva; Malagelada, C.; Burri, E; Navazo, Isabel; Malagelada, JR; Azpiroz, Fernando
Neurogastroenterology and motility, Vol. 25, Num. 6, pp e389--e394, 2013.
DOI: http://dx.doi.org/10.1111/nmo.12128
We previously showed that abdominal distension in patients with functional gut disorders is due to a paradoxical diaphragmatic contraction without major increment in intraabdominal volume. Our aim was to characterize the pattern of gas retention and the abdomino-thoracic mechanics associated with abdominal distension in patients with intestinal dysmotility.
Barroso, Santiago; Besuievsky, Gonzalo; Patow, Gustavo A.
Computers & Graphics, Vol. 37, pp 238--246, 2013.
DOI: http://dx.doi.org/10.1016/j.cag.2013.01.003
With the increase in popularity of procedural urban modeling for film, TV, and interactive entertainment, an urgent need for editing tools to support procedural content creation has become apparent. In this paper we present an end-to-end system for procedural copy and paste in a rule-based setting to address this need. As we show, no trivial extension exists to perform this action in a way such that the resulting ruleset is ready for production. For procedural copy and paste we need to handle the rulesets in both the source and target graphs to obtain a final consistent ruleset. As one of the main contributions of our system, we introduce a graph-rewriting procedure for seamlessly gluing both graphs and obtaining a consistent new procedural building ruleset. Hence, we focus on intuitive and minimal user interaction, and our editing operations perform interactively to provide immediate feedback.
Besuievsky, Gonzalo; Patow, Gustavo A.
Computer Graphics Forum, Vol. 32, Num. 8, 2013.
DOI: http://dx.doi.org/10.1111/cgf.12141
This paper presents a new semantic and procedural level-of-detail (LoD) method applicable to any rule-based procedural building definition. This new LoD system allows the customizable and flexible selection of the architectural assets to simplify, in an efficient and artist-transparent way. The method, based on an extension of traditional grammars, uses LoD-oriented commands. A graph-rewriting process introduces these new commands into the artist-provided ruleset, which allows selecting different simplification criteria (distance, screen-size projection, semantic selection, or any arbitrary method) through a scripting interface, according to user needs. In this way we define a flexible, customizable and efficient procedural LoD system, which generates buildings directly with the correct LoD for a given set of viewing and semantic conditions.
Besuievsky, Gonzalo; Patow, Gustavo A.
Virtual Archaeology Review, Vol. 4, Num. 9, pp 160--166, 2013.
DOI: http://dx.doi.org/10.4995/var.2013.4268
In this paper we target the goal of obtaining detailed historical virtual buildings, such as a castle or a city's old town, through a methodology that facilitates their reconstruction. Our approach quickly yields an approximate model that is flexible enough to be explored, analyzed and eventually modified. This is crucial for serious-game development pipelines, whose objective is focused not only on accuracy and realism, but also on transmitting a sense of immersion to the player.
Campoalegre, Lázaro; Brunet, Pere; Navazo, Isabel
Personal and Ubiquitous Computing, Vol. 17, Num. 7, pp 1503--1514, 2013.
DOI: http://dx.doi.org/10.1007/s00779-012-0596-0
Interactive visualization of volume models on standard mobile devices is a challenging problem, with increasing interest from new application fields like telemedicine. The complexity of volume models in medical applications is continuously increasing, widening the gap between the available models and the rendering capabilities of low-end mobile clients. New and efficient rendering algorithms and interaction paradigms are required for these small platforms. In this paper, we propose a transfer function-aware compression and interaction scheme for client-server architectures with visualization on standard mobile devices. The scheme is block-based, supporting adaptive ray-casting in the client. Our two-level ray-casting allows focusing on small details in targeted regions while keeping bounded memory requirements in the GPU of the client. Our approach includes a transfer function-aware compression scheme based on a local wavelet transformation, together with a bricking scheme that supports interactive inspection and levels of detail in the mobile device client. We also use a quantization technique that takes into account a perceptual metric of the visual error. Our results show that we can have full interaction with high compression rates and with transmitted model sizes on the order of a single photographic image.
R4: Realistic Rain Rendering in Realtime
Carles Creus; Patow, Gustavo A.
Computers & Graphics, Vol. 37, Num. 2, pp 33--40, 2013.
DOI: http://dx.doi.org/10.1016/j.cag.2012.12.002
Realistic rain simulation is a challenging problem due to the variety of different phenomena to consider. In this paper we propose a new rain rendering algorithm that extends present state of the art in the field, achieving real-time rendering of rain streaks and splashes with complex illumination effects, along with fog, halos and light glows as hints of the participating media. Our algorithm creates particles in the scene using an artist-defined storm distribution (e.g., provided as a 2D cloud distribution). Unlike previous algorithms, no restrictions are imposed on the rain area dimension or shape. Our technique adaptively samples the storm area to simulate rain particles only in the relevant regions and only around the observer. Particle simulation is executed entirely in the graphics hardware, by placing the particles at their updated coordinates at each time-step, also checking for collisions with the scene. To render the rain streaks, we use precomputed images and combine them to achieve complex illumination effects. Several optimizations are introduced to render realistic rain with virtually millions of falling rain droplets.
Fort, Marta; Sellarès, J. Antoni
Knowledge-Based Systems, 2013.
DOI: http://dx.doi.org/10.1016/j.knosys.2013.03.013
In this paper we introduce and solve several problems that arise in the single-facility location field. A reverse k-influential location problem finds a region such that locating a new facility, desirable or obnoxious, in the region guarantees a minimum k-influential value, associated with the importance, attractiveness or repulsiveness of the facility, as a solution to a reverse k-nearest or farthest neighbor query. Solving reverse k-influential location problems helps decision makers progress towards suitable locations for a new facility. We present a parallel approach, to be run on a graphics processing unit, for approximately solving reverse k-influential location problems, and also provide and discuss experimental results showing the efficiency and scalability of our approach.
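To make the query concrete, here is a small illustrative sketch (Python, brute force; not the paper's GPU-parallel method, and all names are made up for illustration): the influential value of a candidate location is the number, or total weight, of clients that would rank the new facility among their k nearest facilities.

```python
from math import dist

def influence(candidate, clients, facilities, k, weight=None):
    """Influential value of opening a facility at `candidate`: the total
    weight of clients that would have it among their k nearest facilities."""
    total = 0.0
    for c in clients:
        # how many existing facilities are strictly closer to this client
        nearer = sum(1 for f in facilities if dist(c, f) < dist(c, candidate))
        if nearer < k:  # candidate would rank within the client's k nearest
            total += weight(c) if weight else 1.0
    return total

facilities = [(0.0, 0.0), (10.0, 0.0)]
clients = [(1.0, 1.0), (4.0, 0.0), (6.0, 0.0), (9.0, 1.0)]
print(influence((5.0, 0.0), clients, facilities, k=1))  # 2.0
```

Maximizing this value over a region of candidate positions is the (much harder) problem the paper solves approximately in parallel.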
Interactive applications for sketch-based editable polycube-map
Garcia Fernández, Ismael; Jiazhi Xia; Ying He; Shi-Qing Xin; Patow, Gustavo A.
IEEE Transactions on Visualization and Computer Graphics, Vol. 19, Num. 7, pp 1158--1171, 2013.
DOI: http://dx.doi.org/10.1109/TVCG.2012.308
In this paper we propose a sketch-based editable polycube mapping method that, given a general mesh and a simple polycube that coarsely resembles the shape of the object, plus sketched features indicating relevant correspondences between the two, provides a uniform, regular and user-controllable quads-only mesh that can be used as a basis structure for subdivision. Large-scale models with complex geometry and topology can be processed efficiently with simple, intuitive operations. We show that the simple, intuitive nature of the polycube map is a substantial advantage from the point of view of the interface by demonstrating a series of applications, including kit-bashing, shape morphing, painting over the parameterization domain, and GPU-friendly tessellated subdivision displacement, where the user is also able to control the number of patches in the base mesh by the construction of the base polycube.
Gonzalez Garcia, Francisco; Paradinas, Teresa; Coll, Narcís; Patow, Gustavo A.
ACM Transactions on Graphics, Vol. 32, Num. 3, pp 13, 2013.
DOI: http://dx.doi.org/10.1145/2487228.2487232
PDF
Cage-based deformation has been one of the main approaches to mesh deformation in recent years, and remains an area of active research. The main advantages of cage-based deformation techniques are their simplicity, relative flexibility and speed. However, to date there has been no widely accepted solution that provides both user control at different levels of detail and high-quality deformations. We present *Cages (star-cages), a significant step forward with respect to traditional single-cage coordinate systems, which allows the usage of multiple cages enclosing the model for easier manipulation while still preserving the smoothness of the mesh in the transitions between them. The proposed deformation scheme is extremely flexible and versatile, allowing the usage of heterogeneous sets of coordinates and different levels of deformation, ranging from a whole-model deformation to a very localized one. That locality allows faster evaluation and a reduced memory footprint, and as a result outperforms single-cage approaches in flexibility, speed and memory requirements for complex editing operations.
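As background for readers unfamiliar with cage coordinates, the toy sketch below shows the single-cage mechanism that *Cages generalizes to multiple cages: each point stores generalized barycentric coordinates with respect to the cage vertices, and moving the cage moves the point. This uses standard 2D mean value coordinates (Floater's formula), not the paper's scheme.

```python
from math import atan2, sqrt, tan

def signed_angle(a, b):
    """Signed angle from vector a to vector b."""
    return atan2(a[0]*b[1] - a[1]*b[0], a[0]*b[0] + a[1]*b[1])

def mean_value_coords(p, cage):
    """Mean value coordinates of interior point p w.r.t. a closed
    counter-clockwise 2D polygon `cage`."""
    n = len(cage)
    w = []
    for i in range(n):
        e_prev = (cage[i - 1][0] - p[0], cage[i - 1][1] - p[1])
        e_cur = (cage[i][0] - p[0], cage[i][1] - p[1])
        e_next = (cage[(i + 1) % n][0] - p[0], cage[(i + 1) % n][1] - p[1])
        a_prev = signed_angle(e_prev, e_cur)  # angle at p before vertex i
        a_next = signed_angle(e_cur, e_next)  # angle at p after vertex i
        r = sqrt(e_cur[0]**2 + e_cur[1]**2)
        w.append((tan(a_prev / 2) + tan(a_next / 2)) / r)
    s = sum(w)
    return [wi / s for wi in w]

def deform(p, cage, deformed_cage):
    """Move p by re-evaluating its cage coordinates on the deformed cage."""
    w = mean_value_coords(p, cage)
    return (sum(wi * v[0] for wi, v in zip(w, deformed_cage)),
            sum(wi * v[1] for wi, v in zip(w, deformed_cage)))

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
print(deform((1.0, 0.5), square, square))  # ≈ (1.0, 0.5): identity cage
```

Because mean value coordinates reproduce affine maps, deforming the cage by an affine transform moves every interior point by that same transform; the interesting deformations are, of course, the non-affine ones.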
Ramon Oliva; Pelechano, Nuria
Computers & Graphics, Vol. 37, Num. 5, pp 403--412, 2013.
DOI: http://dx.doi.org/10.1016/j.cag.2013.03.004
In this paper we introduce a novel automatic method for generating near-optimal navigation meshes from a 3D multi-layered virtual environment. Firstly, a GPU voxelization of the entire scene is calculated in order to identify and extract the different walkable layers. Secondly, a high-resolution render is performed with a fragment shader to obtain the 2D floor plan of each layer. Finally, a convex decomposition of each layer is calculated and layers are linked in order to create a navigation mesh of the scene. Results show that our method is not only faster than previous work, but also creates more accurate NavMeshes, since it respects the original shape of the static geometry. It also produces a significantly lower number of cells and avoids ill-conditioned cells and T-joints between portals that could lead to unnatural character navigation.
Andújar, Carlos
Computer Graphics Forum, Vol. 31, Num. 6, pp 1973--1983, 2012.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2012.03077.x
High-quality texture minification techniques, including trilinear and anisotropic filtering, require texture data to be arranged into a collection of pre-filtered texture maps called mipmaps. In this paper, we present a compression scheme for mipmapped textures which achieves much higher quality than current native schemes by exploiting image coherence across mipmap levels. The basic idea is to use a high-quality native compressed format for the upper levels of the mipmap pyramid (to retain efficient minification filtering) together with a novel compact representation of the detail provided by the highest-resolution mipmap. Key elements of our approach include delta-encoding of the luminance signal, efficient encoding of coherent regions through texel runs following a Hilbert scan, a scheme for run encoding supporting fast random-access, and a predictive approach for encoding indices of variable-length blocks. We show that our scheme clearly outperforms native 6:1 compressed texture formats in terms of image quality while still providing real-time rendering of trilinearly filtered textures.
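Two of the key ingredients named above, delta-encoding of the luminance signal and run-length coding of coherent texel runs along a Hilbert scan, can be sketched in a few lines. This is an illustrative toy only; the paper's actual format is block-based with fast random access, which this sketch omits.

```python
def hilbert_d2xy(order, d):
    """Map index d along a Hilbert curve filling a 2^order x 2^order grid
    to (x, y) cell coordinates (classic iterative formulation)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant so sub-curves connect
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def encode_runs(img, order):
    """Delta-encode texel values along the Hilbert scan, then run-length
    encode zero deltas, which dominate in coherent regions."""
    n = 1 << order
    scan = [img[y][x] for x, y in (hilbert_d2xy(order, d) for d in range(n * n))]
    runs, prev = [], 0
    for v in scan:
        delta = v - prev
        if runs and delta == 0 and runs[-1][0] == 0:
            runs[-1][1] += 1  # extend the current zero-delta run
        else:
            runs.append([delta, 1])
        prev = v
    return runs

flat = [[7] * 4 for _ in range(4)]  # a perfectly coherent 4x4 region
print(encode_runs(flat, 2))  # [[7, 1], [0, 15]]
```

The Hilbert order matters because neighboring texels stay close together along the scan, so coherent image regions become long zero-delta runs, which compress well.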
Andújar, Carlos; Chica, Antoni; Brunet, Pere
Computers & Graphics, Vol. 36, Num. 1, pp 28--37, 2012.
DOI: http://dx.doi.org/10.1016/j.cag.2011.10.005
Computer Graphics and Virtual Reality technologies provide powerful tools for visualizing, documenting and disseminating cultural heritage. Virtual inspection tools have been used proficiently to show cultural artifacts either through the web or in museum exhibits. The usability of the user interface has been recognized to play a crucial role in overcoming the typical fearful attitude of the cultural heritage community towards 3D graphics. In this paper we discuss the design of the user interface for the virtual inspection of the impressive entrance of the Ripoll Monastery in Spain. The system was exhibited in the National Art Museum of Catalonia (MNAC) during 2008 and has been part of its Romanesque exhibition since June 2011. The MNAC is the third most visited art museum in Spain, and features the world's largest collection of Romanesque Art. We analyze the requirements from museum curators and discuss the main interface design decisions. The user interface combines (a) focus-plus-context visualization, with focus (detail view) and context (overview) being shown on separate displays, (b) touch-based camera control techniques, and (c) continuous feedback about the exact location of the detail area within the entrance. The interface allows users to aim the camera at any point of the entrance with centimeter accuracy using a single tap. We provide the results of a user study comparing our user interface with alternative approaches. We also discuss the benefits the exhibition had for the cultural heritage community.
Beacco, Alejandro; Andújar, Carlos; Pelechano, Nuria; Bernhard Spanlang
Journal of Computer Animation and Virtual Worlds, Vol. 23, Num. 2, pp 33--47, 2012.
DOI: http://dx.doi.org/10.1002/cav.1422
In this paper, we present a new impostor‐based representation for 3D animated characters supporting real‐time rendering of thousands of agents. We maximize rendering performance by using a collection of pre‐computed impostors sampled from a discrete set of view directions. Our approach differs from previous work on view‐dependent impostors in that we use per‐joint rather than per‐character impostors. Our characters are animated by applying the joint rotations directly to the impostors, instead of choosing a single impostor for the whole character from a set of pre‐defined poses. This offers more flexibility in terms of animation clips, as our representation supports any arbitrary pose, and thus, the agent behavior is not constrained to a small collection of pre‐defined clips. Because our impostors are intended to be valid for any pose, a key issue is to define a proper boundary for each impostor to minimize image artifacts while animating the agents. We pose this problem as a variational optimization problem and provide an efficient algorithm for computing a discrete solution as a pre‐process. To the best of our knowledge, this is the first time a crowd rendering algorithm encompassing image‐based performance, small graphics processing unit footprint, and animation independence is proposed.
Chica, Antoni; Monclús, Eva; Brunet, Pere; Navazo, Isabel; Vinacua, Àlvar
Graphical Models, Vol. 74, Num. 6, pp 302--310, 2012.
DOI: http://dx.doi.org/10.1016/j.gmod.2012.03.002
In this paper, we propose a novel strategy to automatically segment volume data using a high-quality mesh segmentation of an "example" model as a guiding example. The example mesh is deformed until it matches the relevant volume features. The algorithm starts from a medical volume model (scalar field of densities) to be segmented, together with an already existing segmentation (polygonal mesh) of the same organ, usually from a different person. The pre-process step computes a suitable attracting scalar field in the volume model. After an approximate 3D registration between the example mesh and the volume (this is the only step requiring user intervention), the algorithm works by minimizing an energy and adapts the shape of the polygonal mesh to the volume features in order to segment the target organ. The resulting mesh adapts to the volume features in the areas which can be unambiguously segmented, while taking the shape of the example mesh in regions which lack relevant volume information. The paper discusses several examples involving human foot bones, with results that clearly outperform present segmentation schemes.
Díaz, Jose; Monclús, Eva; Navazo, Isabel; Vázquez, Pere-Pau
Computer Graphics Forum, Vol. 31, Num. 7, pp 2155--2164, 2012.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2012.03208.x
Medical illustrations have been used for a long time for teaching and communicating information for diagnosis or surgery planning. Illustrative visualization systems create methods and tools that adapt traditional illustration techniques to enhance the result of renderings. Clipping the volume is a popular operation in volume rendering for inspecting the inner parts, though it may remove some context that is worth preserving. In this paper we present a new editing technique based on the use of clipping planes, direct structure extrusion, and illustrative methods, which preserves the context by adapting the extruded region to the structures of interest of the volumetric model. We show that users may interactively modify the clipping plane and edit the structures to highlight, in order to easily create the desired result. Our approach works with both segmented and non-segmented volume models. In the latter case, a local segmentation is performed on the fly. We demonstrate the efficiency and utility of our method.
Díaz-García, Jesús; Vázquez, Pere-Pau
International Symposium on Visual Computing, Vol. 7431, pp 698--707, 2012.
DOI: http://dx.doi.org/10.1007/978-3-642-33179-4_66
The visualization of human brain fibers is becoming a new challenge in the computer graphics field. Nowadays, with the aid of DTI and fiber tracking algorithms, complex geometric models consisting of massive sets of polygonal lines can be extracted. However, rendering such massive models often results in non-detailed, cluttered visualizations. In this paper we propose two methods (one object-space and another image-space) for the fast rendering of fiber tracts by including illustrative effects such as halos and ambient occlusion. We will show how our approaches provide extra visible cues that enhance the final result by removing clutter, thus revealing fibers’ shapes and orientations. Moreover, the use of ambient-occlusion based techniques improves the perception of their absolute and relative positions in space.
Eduardo Fernández; Besuievsky, Gonzalo
Computers & Graphics, Vol. 36, Num. 8, pp 1096--1108, 2012.
DOI: http://dx.doi.org/10.1016/j.cag.2012.09.003
PDF
In this paper we propose a new method for solving inverse lighting design problems that can include diverse sources such as roof skylights or artificial light sources. Given a user specification of illumination requirements, our approach provides optimal light source positions as well as optimal shapes for skylight installations in interior architectural models. The well-known huge computational effort involved in searching for an optimal solution is tackled by combining two concepts: exploiting scene coherence to compute global illumination and using a metaheuristic technique for optimization. Results and analysis show that our methodology produces fast and accurate results and that it can be applied to lighting design in indoor environments with interactive global illumination visualization support.
Fort, Marta; Sellarès, J. Antoni
Journal of Computational and Applied Mathematics, Vol. 236, Num. 14, pp 3461--3477, 2012.
DOI: http://dx.doi.org/10.1016/j.cam.2012.03.028
Given P, a simple connected, possibly non-convex, polyhedral surface composed of positively weighted triangular faces, we consider paths from generalized sources (points, segments, polygonal chains or polygonal regions) to points on P that stay on P and avoid obstacles (segments, polygonal chains or polygonal regions). The distance function defined by a generalized source is a function that assigns to each point of P the cost of the shortest path from the source to the point. In this paper we present an algorithm for computing approximate generalized distance functions. We also provide an algorithm that computes a discrete representation of the approximate distance function and, as applications, algorithms for computing discrete order-k Voronoi diagrams and for approximately solving facility location problems. Finally, we present experimental results obtained with our implementation of the provided algorithms.
Glondu, Loeiz; Muguercia, Lien; Marchal, Maud; Bosch, Carles; Rushmeier, Holly; Dumont, Georges; Drettakis, George
Computer Graphics Forum, Vol. 31, Num. 4, pp 1547--1556, 2012.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2012.03151.x
PDF
A common weathering effect is the appearance of cracks due to material fractures. Previous exemplar-based aging and weathering methods have either reused images or sought to replicate observed patterns exactly. We introduce a new approach to exemplar-based modeling that creates weathered patterns on synthetic objects by matching the statistics of fracture patterns in a photograph. We present a user study to determine which statistics are correlated to visual similarity and how they are perceived by the user. We then describe a revised physically-based fracture model capable of producing a wide range of crack patterns at interactive rates. We demonstrate how a Bayesian optimization method can determine the parameters of this model so it can produce a pattern with the same key statistics as an exemplar. Finally, we present results using our approach and various exemplars to produce a variety of fracture effects in synthetic renderings of complex environments. The speed of the fracture simulation allows interactive previews of the fractured results and its application on large scale environments.
User-Friendly Graph Editing for Procedural Buildings
Patow, Gustavo A.
IEEE Computer Graphics and Applications, Vol. 32, Num. 2, pp 66--75, 2012.
DOI: http://dx.doi.org/10.1109/MCG.2010.104
A proposed rule-based editing metaphor intuitively lets artists create buildings without changing their workflow. It is based on the realization that the rule base represents a directed acyclic graph and on a shift in the development paradigm from product-based to rule-based representations. Users can visually add or edit rules, connect them to control the workflow, and easily create commands that expand the artist's toolbox (for example, Boolean operations or local controlling operators). This approach opens new possibilities, from model verification to model editing through graph rewriting.
Tim Reiner; Sylvain Lefebvre; Lorenz Diener; Garcia Fernández, Ismael; Bruno Jobard; Carsten Dachsbacher
Computers & Graphics, Vol. 36, Num. 5, pp 366--375, 2012.
DOI: http://dx.doi.org/10.1016/j.cag.2012.03.031
We present an efficient runtime cache to accelerate the display of procedurally displaced and textured implicit surfaces, exploiting spatio-temporal coherence between consecutive frames. We cache evaluations of implicit textures covering a conceptually infinite space. Rotating objects, zooming onto surfaces, and locally deforming shapes now require only minor cache updates per frame and benefit from mostly cached values, avoiding expensive re-evaluations. A novel parallel hashing scheme supports arbitrarily large data records and allows for an automated deletion policy: new information may evict information no longer required from the cache, resulting in efficient usage. This sets our solution apart from previous caching techniques, which do not dynamically adapt to view changes and interactive shape modifications. We provide a thorough analysis of cache behavior for different procedural noise functions used to displace implicit base shapes during typical modeling operations.
Multi-Modal Medical Image Registration Using Normalized Compression Distance
Vázquez, Pere-Pau; Marco, Jordi
IADIS International Journal on Computer Science and Information Systems, Vol. 7, Num. 1, pp 47--63, 2012.
Image registration is an important task in medicine, especially when images have been acquired by different scanner/sensor types, since they provide information on different body structures (bones, muscles, vessels...). Several techniques have been proposed in the past, and among those, Normalized Mutual Information has been proven as successful in many cases. Normalized Compression Distance has been proposed as a simple yet effective technique for image registration. It is especially suitable for the case of CT-MRI registration. However, other image modalities such as PET pose some problems and do not achieve accurate registration. In this paper we analyse and propose a valid approach for image registration using compression that works properly for different combinations of CT, MRI and PET images.
The ViRVIG Institute
Andújar, Carlos; Navazo, Isabel; Vázquez, Pere-Pau; Patow, Gustavo A.; Pueyo, Xavier
SBC Journal on 3D Interactive Systems, Vol. 2, Num. 2, 2011.
PDF
In this paper we present the ViRVIG Institute, a recently created institution that joins two well-known research groups: MOVING in Barcelona, and GGG in Girona. Our main research topics are Virtual Reality devices and interaction techniques, complex data models, realistic materials and lighting, geometry processing, and medical image visualization. We briefly introduce the history of both research groups and present some representative projects. Finally, we sketch our lines for future research.
Beacco, Alejandro; Andújar, Carlos; Pelechano, Nuria
Computer Graphics Forum, Vol. 30, Num. 8, pp 2328--2340, 2011.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2011.02065.x
Rendering detailed animated characters is a major limiting factor in crowd simulation. In this paper we present a new representation for 3D animated characters which supports output-sensitive rendering. Our approach is flexible in the sense that it does not require us to pre-define the animation sequences beforehand, nor to pre-compute a dense set of pre-rendered views for each animation frame. Each character is encoded through a small collection of textured boxes storing colour and depth values. At runtime, each box is animated according to the rigid transformation of its associated bone and a fragment shader is used to recover the original geometry using a dual-depth version of relief mapping. Unlike competing output-sensitive approaches, our compact representation is able to recover high-frequency surface details and reproduces view-motion parallax effectively. Our approach drastically reduces both the number of primitives being drawn and the number of bones influencing each primitive, at the expense of a very slight per-fragment overhead. We show that, beyond a certain distance threshold, our compact representation is much faster to render than traditional level-of-detail triangle meshes. Our user study demonstrates that replacing polygonal geometry by our impostors produces negligible visual artefacts.
Bosch, Carles; Laffont, Pierre-Yves; Rushmeier, Holly; Dorsey, Julie; Drettakis, George
ACM Transactions on Graphics, Vol. 30, Num. 3, pp 20:1--20:13, 2011.
DOI: http://dx.doi.org/10.1145/1966394.1966399
PDF
The simulation of weathered appearance is essential in the realistic modeling of urban environments. A representative and particularly difficult effect to produce on a large scale is the effect of fluid flow. Changes in appearance due to flow are the result of both the global effect of large-scale shape, and local effects, such as the detailed roughness of a surface. With digital photography and Internet image collections, visual examples of flow effects are readily available. These images, however, mix the appearance of flows with the specific local context. We present a methodology to extract parameters and detail maps from existing imagery in a form that allows new target-specific flow effects to be produced, with natural variations in the effects as they are applied in different locations in a new scene. In this paper, we focus on producing a library of parameters and detail maps for generating flow patterns – and this methodology can be used to extend the library with additional image exemplars. To illustrate our methodology, we show a rich collection of patterns applied to urban models.
Callieri, Marco; Chica, Antoni; Dellepiane, Matteo; Besora, Isaac; Corsini, Massimiliano; Moyés, Jordi; Ranzuglia, Guido; Scopigno, Roberto; Brunet, Pere
ACM Journal on Computing and Cultural Heritage, Vol. 3, Num. 4, pp 14:1 -- 14:20, 2011.
DOI: http://dx.doi.org/10.1145/1957825.1957827
The dichotomy between full-detail representation and the efficient management of data digitization is still a big issue in the context of the acquisition and visualization of 3D objects, especially in the field of Cultural Heritage. Modern scanning devices enable very detailed geometry to be acquired, but it is usually quite hard to apply these technologies to large artifacts. In this paper we present a project aimed at virtually reconstructing the impressive (7×11 m) portal of the Ripoll Monastery, Spain. The monument was acquired using triangulation laser scanning technology, producing a dataset of 2212 range maps for a total of more than 1 billion triangles. All the steps of the entire project are described, from the acquisition planning to the final setup for dissemination to the public. We show how time-of-flight laser scanning data can be used to speed up the alignment process. In addition we show how, after creating a model and repairing imperfections, an interactive and immersive setup enables the public to navigate and display a fully detailed representation of the portal. This paper shows that, after careful planning and with the aid of state-of-the-art algorithms, it is now possible to preserve and visualize highly detailed information, even for very large surfaces.
Coll, Narcís; Guerrieri, Marité Ethel; Rivara, María Cecilia; Sellarès, J. Antoni
Journal of Computational and Applied Mathematics, Vol. 236, Num. 6, pp 1410--1422, 2011.
DOI: http://dx.doi.org/10.1016/j.cam.2011.09.005
We propose and discuss a new Lepp-surface method able to produce a small triangular approximation of huge sets of terrain grid data by using a two-goal strategy that assures both small approximation error and well-shaped 3D triangles. This is a refinement method which starts with a coarse initial triangulation of the input data, and incrementally selects and adds data points into the mesh as follows: for the edge e having the highest error in the mesh, one or two points close to (one or two) terminal edges associated with e are inserted in the mesh. The edge error is computed by adding the triangle approximation errors of the two triangles that share e, while each L2-norm triangle error is computed by using a curvature tensor (a good approximation of the surface) at a representative point associated with both triangles. The method produces triangular approximations that capture well the relevant features of the terrain surface by naturally producing well-shaped triangles. We compare our method with a pure L2-norm optimization method.
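The edge-selection rule at the heart of the refinement loop can be illustrated with a tiny sketch (Python; the triangle/edge names and error values are made up for illustration): the next refinement target is the edge whose adjacent-triangle errors sum highest.

```python
def worst_edge(triangle_errors, edge_adjacency):
    """Return the edge with the highest error, defined as the sum of the
    approximation errors of its one or two adjacent triangles."""
    def edge_error(edge):
        return sum(triangle_errors[t] for t in edge_adjacency[edge])
    return max(edge_adjacency, key=edge_error)

# Hypothetical errors for three triangles and their shared/boundary edges.
tri_err = {"t0": 0.9, "t1": 0.2, "t2": 0.5}
adj = {"e01": ["t0", "t1"], "e12": ["t1", "t2"], "eb": ["t2"]}
print(worst_edge(tri_err, adj))  # e01 (0.9 + 0.2 = 1.1)
```

In the actual method, points are then inserted near the terminal edges reached by following the Lepp (longest-edge propagation path) from this edge, and the affected triangle errors are recomputed.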
Coll, Narcís; Paradinas, Teresa
Computer Graphics Forum, Vol. 30, Num. 1, pp 187--198, 2011.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2010.01842.x
Development of approximation techniques for highly detailed surfaces is one of the challenges faced today. We introduce a new mesh structure that allows dense triangular meshes of arbitrary topology to be approximated. The structure is constructed from the information gathered during a simplification process. Each vertex of the simplified model collects a neighbourhood of input vertices. Then, each neighbourhood is fitted by a set of local surfaces taking into account the sharp features detected. The simplified model plus the parameters of these local surfaces, conveniently stored in a file, is what we call Compact Model (CM). The input model can be approximated from its CM by refining each triangle of the simplified model. The main feature of our approach is that each triangle is refined by blending the local surfaces at its vertices, which can be done independently of the others. Consequently, adaptive reconstructions are possible, local shape deformations can be incorporated and the whole approximation process can be completely parallelized.
Durupinar, Funda; Pelechano, Nuria; Allbeck, Jan; Gudukbay, Ugur; Badler, Norman
IEEE Computer Graphics and Applications, Vol. 31, Num. 3, pp 22--31, 2011.
DOI: http://dx.doi.org/10.1109/MCG.2009.105
Most crowd simulators animate homogeneous crowds but include underlying parameters that users can tune to create variations in the crowd. However, these parameters are specific to the crowd models and might be difficult for animators or naïve users to use. A proposed approach maps these parameters to personality traits. It extends the HiDAC (High-Density Autonomous Crowds) system by providing each agent with a personality model based on the OCEAN (openness, conscientiousness, extroversion, agreeableness, and neuroticism) personality model. Each trait has an associated nominal behavior. Specifying an agent's personality leads to automation of low-level parameter tuning. User studies validated the mapping by assessing users' perception of the traits in animations that illustrate such behaviors.
Fortuny, G.; López-Cano, M.; Susin, Antonio; Herrera, B.
Computer Methods in Biomechanics and Biomedical Engineering, Vol. 15, Num. 2, pp 195--201, 2011.
DOI: http://dx.doi.org/10.1080/10255842.2010.522182
We are interested in studying the genesis of a very common pathology: the human inguinal hernia. How the human inguinal hernia appears is not definitively clear, but it is accepted that it is caused by a combination of mechanical and biochemical alterations, and that muscular simulation plays an important role in this. This study proposes a model to explain how some physical parameters affect the ability to simulate the region dynamically and how these parameters are involved in generating inguinal hernias. We are particularly interested in understanding the mechanical alterations in the inguinal region because little is known about them or how they behave dynamically. Our model corroborates the most important theories regarding the generation of inguinal hernias and is an initial approach to numerically evaluating this condition.
Garcia Fernández, Ismael; Sylvain Lefebvre; Samuel Hornus; Anass Lasram
ACM Transactions on Graphics, 2011.
DOI: http://dx.doi.org/10.1145/2070781.2024195
Recent spatial hashing schemes hash millions of keys in parallel, compacting sparse spatial data in small hash tables while still allowing for fast access from the GPU. Unfortunately, available schemes suffer from two drawbacks: multiple runs of the construction process are often required before success, and the random nature of the hash functions decreases access performance. We introduce a new parallel hashing scheme which reaches a high load factor with a very low failure rate. In addition, our scheme has the unique advantage of exploiting coherence in the data and the access patterns for faster performance. Compared to existing approaches, it exhibits much greater locality of memory accesses and consistent execution paths within groups of threads. This is especially well suited to computer graphics applications, where spatial coherence is common. In the absence of coherence our scheme performs similarly to previous methods, but does not suffer from construction failures. Our scheme is based on the Robin Hood scheme, modified to quickly abort queries of keys that are not in the table and to preserve coherence. We demonstrate our scheme on a variety of data sets. We analyze construction and access performance, as well as cache and thread behavior.
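The Robin Hood strategy at the core of this scheme, including the early-abort query for absent keys, can be illustrated with a minimal sequential sketch (a single fixed-capacity table with integer keys; the paper's GPU version adds parallel construction and coherence preservation, which this sketch omits):

```python
class RobinHoodHash:
    """Open-addressing table using Robin Hood insertion.

    On insertion, an incoming entry steals the slot of any resident
    entry that sits closer to its home slot ("richer"), equalizing
    probe distances. Lookups can abort early: once we meet an entry
    whose probe distance is smaller than ours, the key cannot be
    further along the probe sequence.
    """

    def __init__(self, capacity=16):
        self.cap = capacity
        self.keys = [None] * capacity
        self.vals = [None] * capacity

    def _dist(self, key, slot):
        # How far `slot` is from this key's home slot.
        return (slot - hash(key)) % self.cap

    def insert(self, key, value):
        slot = hash(key) % self.cap
        dist = 0
        while True:
            k = self.keys[slot]
            if k is None:
                self.keys[slot], self.vals[slot] = key, value
                return
            if k == key:
                self.vals[slot] = value
                return
            if self._dist(k, slot) < dist:
                # Robin Hood swap: evict the richer resident and
                # continue inserting it further along.
                self.keys[slot], key = key, self.keys[slot]
                self.vals[slot], value = value, self.vals[slot]
                dist = self._dist(key, slot)
            slot = (slot + 1) % self.cap
            dist += 1

    def get(self, key):
        slot = hash(key) % self.cap
        dist = 0
        while True:
            k = self.keys[slot]
            # Early abort: an empty slot, or a resident closer to its
            # home than we are to ours, proves the key is absent.
            if k is None or self._dist(k, slot) < dist:
                return None
            if k == key:
                return self.vals[slot]
            slot = (slot + 1) % self.cap
            dist += 1
```

The sketch assumes the table never fills completely; a real implementation bounds the load factor and resizes.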
Monclús, Eva; Vázquez, Pere-Pau; Navazo, Isabel
Visualization in Medicine and Life Sciences II, pp 133-151, 2011.
DOI: http://dx.doi.org/10.1007/978-3-642-21608-4_8
The visualization of volumetric datasets, quite common in medical image processing, has started to receive attention fromother communities such as scientific and engineering. The main reason is that it allows the scientists to gain important insights into the data. While the datasets are becoming larger and larger, the computational power does not always go hand to hand, because the requirements of using low-end PCs or mobile phones increase. As a consequence, the selection of an optimal viewpoint that improves user comprehension of the datasets is challenged with time consuming trial and error tasks. In order to facilitate the exploration process, informative viewpoints together with camera paths showing representative information on the model can be determined. In this paper we present amethod for representative viewselection and path construction, togetherwith some accelerations that make this process extremely fast on a modern GPU.
Pelechano, Nuria; Spanlang, Bernhard; Beacco, Alejandro
The International Journal of Virtual Reality, Vol. 10, Num. 1, pp 13-19, 2011.
DOI: http://dx.doi.org/10.20870/IJVR.2011.10.1.2796
This paper presents an Animation Planning Mediator (APM) designed to synthesize animations efficiently for virtual characters in real-time crowd simulation. From a set of animation clips, the APM selects the most appropriate one and modifies the skeletal configuration of each character to satisfy desired constraints (e.g. eliminating foot-sliding or restricting upper body torsion), while still providing natural-looking animations. We use a hardware-accelerated character animation library to blend animations, increasing the number of possible locomotion types. The APM allows the crowd simulation module to maintain control of path planning, collision avoidance and response. A key advantage of our approach is that the APM can be integrated with any crowd simulator working in continuous space. We show visual results achieved in real time for several hundreds of agents, as well as the quantitative accuracy.
Rossignac, Jarek; Vinacua, Àlvar
ACM TOG, Vol. 30, Num. 5, pp 116:1--116:16, 2011.
DOI: http://dx.doi.org/10.1145/2019627.2019635
We propose to measure the quality of an affine motion by its steadiness, which we formulate as the inverse of its Average Relative Acceleration (ARA). Steady affine motions, for which ARA=0, include translations, rotations, screws, and the golden spiral. To facilitate the design of pleasing in-betweening motions that interpolate between an initial and a final pose (affine transformation), B and C, we propose the Steady Affine Morph (SAM), defined as A^t·B with A = C·B^{-1}. A SAM is affine-invariant and reversible. It preserves isometries (i.e., rigidity), similarities, and volume. Its velocity field is stationary both in the global and the local (moving) frames. Given a copy count, n, the series of uniformly sampled poses, A^{i/n}·B, of a SAM form a regular pattern which may be easily controlled by changing B, C, or n, and where consecutive poses are related by the same affinity A^{1/n}. Although a real matrix A^t does not always exist, we show that it does for a convex and large subset of orientation-preserving affinities A. Our fast and accurate Extraction of Affinity Roots (EAR) algorithm computes A^t, when it exists, using closed-form expressions in two or in three dimensions. We discuss SAM applications to pattern design and animation and to key-frame interpolation.
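The SAM formula A^t·B with A = C·B^{-1} can be illustrated numerically. The sketch below takes the fractional power A^t via a complex eigendecomposition, which works for diagonalizable orientation-preserving affinities; the paper's EAR algorithm instead uses closed-form 2D/3D expressions, and the names `sam` and `rot` are chosen here for illustration:

```python
import numpy as np

def sam(B, C, t):
    """Steady Affine Morph pose at parameter t in [0, 1].

    Interpolates from pose B (t = 0) to pose C (t = 1) as A^t @ B
    with A = C @ inv(B). Assumes A is diagonalizable and that a real
    fractional power exists (the paper shows this holds for a large
    convex subset of orientation-preserving affinities).
    """
    A = C @ np.linalg.inv(B)
    w, V = np.linalg.eig(A)                      # complex eigendecomposition
    At = (V @ np.diag(w ** t) @ np.linalg.inv(V)).real
    return At @ B

def rot(theta):
    """2D rotation matrix, used as a simple test pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Morphing from the identity to a 90-degree rotation: the midpoint
# t = 0.5 is the 45-degree rotation, and uniform samples t = i/n are
# all related by the same fixed affinity A^(1/n).
B, C = np.eye(2), rot(np.pi / 2)
halfway = sam(B, C, 0.5)
```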
Vázquez, Pere-Pau; Marco, Jordi
The Visual Computer, Vol. 28, Num. 11, pp 1063--1084, 2011.
DOI: http://dx.doi.org/10.1007/s00371-011-0651-2
Similarity metrics are widely used in computer graphics. In this paper, we will concentrate on a new, algorithmic complexity-based metric called Normalized Compression Distance. It is a universal distance used to compare strings. This measure has also been used in computer graphics for image registration or viewpoint selection. However, there is no previous study on how the measure should be used: which compressor and image format are the most suitable. This paper presents a practical study of the Normalized Compression Distance (NCD) applied to color images. The questions we try to answer are: Is NCD a suitable metric for image comparison? How robust is it to rotation, translation, and scaling? Which are the most adequate image formats and compression algorithms? The results of our study show that NCD can be used to address some of the selected image comparison problems, but care must be taken in selecting the compressor and image format.
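The NCD itself is straightforward to compute. A minimal sketch follows, using zlib as the compressor C; the paper's point is precisely that this choice, along with the image encoding, affects the quality of the results:

```python
import zlib

def ncd(x: bytes, y: bytes, level: int = 9) -> float:
    """Normalized Compression Distance between two byte strings.

    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(s) is the compressed size of s. Similar inputs share
    structure, so compressing them together costs little extra and
    the distance is small; unrelated inputs give a value near 1.
    """
    cx = len(zlib.compress(x, level))
    cy = len(zlib.compress(y, level))
    cxy = len(zlib.compress(x + y, level))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

For images, x and y would be the encoded pixel data, which is where the choice of image format studied in the paper comes in.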
Andújar, Carlos; Brunet, Pere; Chica, Antoni; Navazo, Isabel
Computer Graphics Forum, Vol. 29, Num. 8, pp 2456--2468, 2010.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2010.01757.x
In this paper, we present an efficient approach for the interactive rendering of large-scale urban models, which can be integrated seamlessly with virtual globe applications. Our scheme fills the gap between standard approaches for distant views of digital terrains and the polygonal models required for close-up views. Our work is oriented towards city models with real photographic textures of the building facades. At the heart of our approach is a multi-resolution tree of the scene defining multi-level relief impostors. Key ingredients of our approach include the pre-computation of a small set of zenithal and oblique relief maps that capture the geometry and appearance of the buildings inside each node, a rendering algorithm combining relief mapping with projective texture mapping which uses only a small subset of the pre-computed relief maps, and the use of wavelet compression to simulate two additional levels of the tree. Our scheme runs considerably faster than polygonal-based approaches while producing images with higher quality than competing relief-mapping techniques. We show both analytically and empirically that multi-level relief impostors are suitable for interactive navigation through large urban models.
Argelaguet, Ferran; Andújar, Carlos
10th International Symposium on Smart Graphics, pp 115--126, 2010.
DOI: http://dx.doi.org/10.1007/978-3-642-13544-6_11
Predefined camera paths are a valuable tool for the exploration of complex virtual environments. The speed at which the virtual camera travels along different path segments is key for allowing users to perceive and understand the scene while maintaining their attention. Current tools for speed adjustment of camera motion along predefined paths, such as keyframing, interpolation types and speed curve editors, provide the animators with a great deal of flexibility but offer little support for the animator to decide which speed is better for each point along the path. In this paper we address the problem of computing a suitable speed curve for a predefined camera path through an arbitrary scene. We strive to adapt speed along the path to produce non-fatiguing, informative, interesting and concise animations. Key elements of our approach include a new metric based on optical flow for quantifying the amount of change between two consecutive frames, the use of perceptual metrics to disregard optical flow in areas with low image saliency, and the incorporation of habituation metrics to keep the user attention. We also present the results of a preliminary user study comparing user response with alternative approaches for computing speed curves.
Argelaguet, Ferran; Kunert, André; Kulik, Alexander; Froehlich, Bernd
IEEE Symposium on 3D User Interfaces, pp 55--62, 2010.
DOI: http://dx.doi.org/10.1109/3DUI.2010.5444719
Multi-user virtual reality systems enable natural interaction with shared virtual worlds. Users can talk to each other, gesture and point into the virtual scenery as if it were real. As in reality, referring to objects by pointing often results in situations where the objects are occluded from the other users' viewpoints. While in reality this problem can only be solved by adapting the viewing position, specialized individual views of the shared virtual scene enable various other solutions. As one such solution we propose show-through techniques to make sure that the objects one is pointing to can be seen by others. We analyzed the influence of such augmented viewing techniques on the spatial understanding of the scene, the rapidity of mutual information exchange as well as the social behavior of users. The results of our user study revealed that show-through techniques support spatial understanding on a similar level as walking around to achieve a non-occluded view of specified objects. However, advantages in terms of comfort, user acceptance and compliance to social protocols could be shown, which suggest that virtual reality techniques can in fact be better than 3D reality.
Real-Time Path-Based Surface Detail
Bosch, Carles; Patow, Gustavo A.
Computers & Graphics, Vol. 34, Num. 4, pp 430--440, 2010.
DOI: http://dx.doi.org/10.1016/j.cag.2010.04.001
We present a GPU algorithm to render path-based 3D surface detail in real time. Our method models these features using a vector representation that is efficiently stored in two textures. The first texture specifies the position of the features, while the second contains their paths, profiles and material information. A fragment shader is then proposed to evaluate this data on the GPU by performing an accurate and fast rendering of the details, including visibility computations and antialiasing. Some of our main contributions include a CSG approach to efficiently deal with intersections and similar cases, and an efficient antialiasing method for the GPU. This technique allows applying path-based features such as grooves and similar details just like traditional textures, and thus can be used on general surfaces.
Coll, Narcís; Madern, Narcís; Sellarès, J. Antoni
Visual Computer, Vol. 26, Num. 2, pp 109-120, 2010.
DOI: http://dx.doi.org/10.1007/s00371-009-0380-y
Given a set V of viewpoints and a set S of obstacles in an environmental space, the good-visibility depth of a point q in relation to V and S is a measure of how deep or central q is with respect to the points in V that see q while minding the obstacles of S. The good-visibility map determined by V and S is the subdivision of the environmental space in good-visibility regions where all points have the same fixed good-visibility depth. In this paper we present algorithms for computing and efficiently visualizing, using graphics hardware capabilities, good-visibility maps in the plane as well as on triangulated terrains, where the obstacles are the terrain faces. Finally, we present experimental results obtained with the implementation of our algorithms.
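The visibility predicate underlying such maps is easy to state in 2D. Below is a minimal CPU sketch with segment obstacles; the good-visibility depth is then a centrality measure over the visible subset, and the paper evaluates whole maps on graphics hardware rather than point by point as done here (function names are chosen for illustration):

```python
def _ccw(a, b, c):
    # Twice the signed area of triangle abc; sign gives orientation.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p, q, r, s):
    # Proper intersection of segments pq and rs (collinear touches
    # and shared endpoints are ignored in this sketch).
    return (_ccw(p, q, r) * _ccw(p, q, s) < 0 and
            _ccw(r, s, p) * _ccw(r, s, q) < 0)

def visible_viewpoints(q, viewpoints, obstacles):
    """Viewpoints whose sight line to q crosses no obstacle segment."""
    return [v for v in viewpoints
            if not any(segments_cross(v, q, a, b) for a, b in obstacles)]
```

For example, a vertical wall between a viewpoint and a query point removes that viewpoint from the visible set, while viewpoints on the same side still see the point.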
Coll, Narcís; Paradinas, Teresa
Computer Graphics Forum, Vol. 29, Num. 6, pp 1842-1853, 2010.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2010.01652.x
Scanning and acquisition methods produce highly detailed surface meshes that need multi-chart parameterizations to reduce stretching and distortion. From these complex shape surfaces, high-quality approximations are automatically generated by using surface simplification techniques. Multi-chart textures hinder the quality of the simplification of these techniques for two reasons: either the chart boundaries cannot be simplified leading to a lack of geometric fidelity; or texture distortions and artefacts appear near the simplified boundaries. In this paper, we present an edge-collapse based simplification method that provides an accurate, low-resolution approximation from a multi-chart textured model. For each collapse, the model is reparameterized by local bijective mappings to avoid texture distortions and chart boundary artefacts on the simplified mesh due to the geometry changes. To better apply the appearance attributes and to guarantee geometric fidelity, we drive the simplification process with the quadric error metrics weighted by a local area distortion measure.
Díaz, Jose; Vázquez, Pere-Pau; Navazo, Isabel; Duguet, Florent
Computers & Graphics, Vol. 34, Num. 4, pp 337--350, 2010.
DOI: http://dx.doi.org/10.1016/j.cag.2010.03.005
Volume models often show high depth complexity. This poses difficulties to the observer in judging the spatial relationships accurately. Illustrators usually use certain techniques such as improving the shading through shadows, halos, or edge darkening in order to enhance depth perception of certain structures. Both effects are difficult to generate in real time for volumetric models: either they may have an important impact on rendering time, or they require precomputation that prevents changing the transfer function interactively, as it determines the occlusions. In this paper we present two methods for the fast generation of ambient occlusion on volumetric models. The first is a screen-space approach that does not require any precomputed data structure. The second is a view-independent method that stores volumetric information in the form of a Summed Area Table of the density values, and thus allows the interactive change of transfer functions on demand, although at the expense of memory space. Despite the fact that similar quality results are obtained with both approaches, the 3D version is more suitable for objects with discontinuous structures such as a vessel tree or the intestines, and it yields better framerates. The screen-space version is more suitable in limited GPU memory environments because it does not need extra 3D texture storage. As an extra result, our screen-space technique also allows for the computation of view-dependent, interactively configurable halos using the same data structure. We have also implemented both methods using CUDA and have analyzed their efficiency.
Hétroy, Frank; Rey, Stéphanie; Andújar, Carlos; Brunet, Pere; Vinacua, Àlvar
Computer-Aided Design, Vol. 43, Num. 1, pp 101--113, 2010.
DOI: http://dx.doi.org/10.1016/j.cad.2010.09.012
Limitations of current 3D acquisition technology often lead to polygonal meshes exhibiting a number of geometrical and topological defects which prevent them from widespread use. In this paper we present a new method for model repair which takes as input an arbitrary polygonal mesh and outputs a valid 2-manifold triangle mesh. Unlike previous work, our method allows users to quickly identify areas with potential topological errors and to choose how to fix them in a user-friendly manner. Key steps of our algorithm include the conversion of the input model into a set of voxels, the use of morphological operators to allow the user to modify the topology of the discrete model, and the conversion of the corrected voxel set back into a 2-manifold triangle mesh. Our experiments demonstrate that the proposed algorithm is suitable for repairing meshes of a large class of shapes.
Brunet, Pere; Chica, Antoni; Navazo, Isabel; Vinacua, Àlvar
Computing, Vol. 86, Num. 2, pp 101--115, 2009.
DOI: http://dx.doi.org/10.1007/s00607-009-0052-9
In constructing a model of a large twelfth century monument, we face the repair of a huge number of small to medium-sized defects in the mesh. The total size of the mesh after registration was in the vicinity of 173M triangles, and presented 14,622 holes of different sizes. Although other algorithms have been presented in the literature to fix these defects, in this case a fully automatic algorithm able to fix most of the defects is needed. In this paper we present the algorithms developed for this purpose, together with examples and results to measure the final surface quality. The algorithm is based on the iteration of smoothing and fitting steps on a uniform B-Spline defined on a 3D box domain bounding the hole. Tricubic and trilinear B-Splines are compared and their respective effectiveness is discussed.
El-Hajjar, Jean-François; Jolivet, Vincent; Ghazanfarpour, Djamchid; Pueyo, Xavier
The Visual Computer , Vol. 25, Num. 2, pp 87--100, 2009.
DOI: http://dx.doi.org/10.1007/s00371-007-0207-7
We present a novel empirical method for the animation of liquid droplets lying on a flat surface, the core of our technique being a simulation operating on a 2D grid which is implementable on the GPU. The wetted surface can freely be oriented in space and is not limited to translucent materials, the liquid flow being governed by external forces, the viscosity parameter and the presence of obstacles. Furthermore, we show how to simply incorporate in our simulation scheme two enriching visual effects, namely absorption and ink transport. Rendering can be achieved from an arbitrary viewpoint using a GPU image-based ray-casting approach and takes into account the refraction and reflection of light. Even though our method does not draw on the fluid mechanics literature, we show that animations convincing in terms of realism can be achieved in real time.
Fort, Marta; Sellarès, J. Antoni
Applied Mathematics and Computation, Vol. 215, Num. 1, pp 235 -- 250, 2009.
DOI: http://dx.doi.org/10.1016/j.amc.2009.04.075
We present an algorithm for computing exact shortest paths, and consequently distance functions, from a generalized source (point, segment, polygonal chain or polygonal region) on a possibly non-convex triangulated polyhedral surface. The algorithm is generalized to the case when a set of generalized sites is considered, providing their distance field that implicitly represents the Voronoi diagram of the sites. Next, we present an algorithm to compute a discrete representation of the distance function and the distance field. Then, by using the discrete distance field, we obtain the Voronoi diagram of a set of generalized sites (points, segments, polygonal chains or polygons) and visualize it on the triangulated surface. We also provide algorithms that, by using the discrete distance functions, provide the closest, furthest and k-order Voronoi diagrams and an approximate 1-Center and 1-Median.
Fort, Marta; Sellarès, J. Antoni; Cabello, Sergio
Information Processing Letters, Vol. 109, Num. 9, pp 440--445, 2009.
DOI: http://dx.doi.org/10.1016/j.ipl.2009.01.001
We study the complexity of higher-order Voronoi diagrams on triangulated surfaces under the geodesic distance, when the sites may be polygonal domains of constant complexity. More precisely, we show that on a surface defined by n triangles the sum of the combinatorial complexities of the order-j Voronoi diagrams of m sites, for j = 1, …, k, is O(k²n² + k²m + knm), which is asymptotically tight in the worst case.
Gonzalez Garcia, Francisco; Patow, Gustavo A.
ACM Transactions on Graphics, Vol. 28, Num. 5, pp 1--8, 2009.
DOI: http://dx.doi.org/10.1145/1618452.1618455
It is well known that multi-chart parameterizations introduce seams over meshes, causing serious problems for applications like texture filtering, relief mapping and simulations in the texture domain. Here we present two techniques, collectively known as Continuity Mapping, that together make any multi-chart parameterization seamless: Traveler's Map is used for solving the spatial discontinuities of multi-chart parameterizations in texture space thanks to a bidirectional mapping between areas outside the charts and the corresponding areas inside; and Sewing the Seams addresses the sampling mismatch at chart boundaries using a set of stitching triangles that are not true geometry, but merely evaluated on a per-fragment basis to perform consistent linear interpolation between non-adjacent texel values. Continuity Mapping does not require any modification of the artist-provided textures or models, it is fully automatic, and achieves continuity with small memory and computational costs.
László Szirmay-Kalos; Tamás Umenhoffer; Patow, Gustavo A.; László Szécsi; Mateu Sbert
Computer Graphics Forum, Vol. 28, Num. 6, pp 1586--1617, 2009.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2009.01350.x
This survey reviews algorithms that can render specular effects, i.e. mirror reflections, refractions, and caustics, on the GPU. We establish a taxonomy of methods based on the three main different ways of representing the scene and computing ray intersections with the aid of the GPU, including ray tracing in the original geometry, ray tracing in the sampled geometry, and geometry transformation. Having discussed the possibilities of implementing ray tracing, we consider the generation of single reflections/refractions, inter-object multiple reflections/refractions, and the general case which also includes self reflections or refractions. Moving the focus from the eye to the light sources, caustic effect generation approaches are also examined.
Mas, Albert; Patow, Gustavo A.; Martín, Ignacio
Computer Graphics Forum, Vol. 28, Num. 8, pp 2046--2056, 2009.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2009.01430.x
This paper presents a new inverse reflector design method using a GPU-based computation of outgoing light distribution from reflectors. We propose a fast method to obtain the outgoing light distribution of a parameterized reflector, and then compare it with the desired illumination. The new method works completely in the GPU. We trace millions of rays using a hierarchical height-field representation of the reflector. Multiple reflections are taken into account. The parameters that define the reflector shape are optimized in an iterative procedure in order for the resulting light distribution to be as close as possible to the desired, user-provided one. We show that our method can calculate reflector lighting at least one order of magnitude faster than previous methods, even with millions of rays, complex geometries and light sources.
Vázquez, Pere-Pau
The Visual Computer, Vol. 25, Num. 5, pp 441--449, 2009.
DOI: http://dx.doi.org/10.1007/s00371-009-0326-4
Although the real world is composed of three-dimensional objects, we communicate information using two-dimensional media. The initial 2D view we see of an object has great importance on how we perceive it. Deciding which of all possible 2D representations of 3D objects communicates the maximum information to the user is still challenging, and it may be highly dependent on the addressed task. Psychophysical experiments have shown that three-quarter views (oblique views between frontal view and profile view) are often preferred as representative views for 3D objects; however, for most models, no knowledge of their proper orientation is provided. Our goal is the selection of informative views without any user intervention. In order to do so, we analyze some stability-based view descriptors and present a new one that computes view stability through the use of depth maps, without prior knowledge on the geometry or orientation of the object. We will show that it produces good views that, in most of the analyzed cases, are close to three-quarter views.
Boada, Imma; Coll, Narcís; Sellarès, J. Antoni
International Journal of Computer Mathematics, Vol. 85, Num. 7, pp 1003--1022, 2008.
DOI: http://dx.doi.org/10.1080/00207160701466362
We propose a new approach for efficiently computing polygonal approximations of generalized 2D/3D Voronoi diagrams. The method supports distinct site shapes (points, line-segments, curved-arc segments, polygons, spheres, lines, polyhedra, etc.) and different distance functions (Euclidean distance, convex distance functions, etc.), and is restricted to diagrams with connected Voronoi regions. The presented approach constructs a tree (a quadtree in 2D/an octree in 3D) which encodes in its nodes, in a compact way, all the information required for generating an explicit representation of the boundaries of the Voronoi diagram approximation. Then, by using this hierarchical data structure, a reconstruction strategy creates the diagram approximation. We also present the algorithms required for dynamically maintaining the Voronoi diagram approximation under the insertion or deletion of sites. The main features of our approach are its generality, efficiency, robustness and easy implementation.
A Resolution Independent Approach for the Accurate Rendering of Grooved Surfaces
Bosch, Carles; Pueyo, Xavier; Mérillou, Stéphane; Ghazanfarpour, Djamchid
Computer Graphics Forum, Vol. 27, Num. 7, pp 1937--1944, 2008.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2008.01342.x
This paper presents a method for the accurate rendering of path-based surface details such as grooves, scratches and similar features. The method is based on a continuous representation of the features in texture space, and the rendering is performed by means of two approaches: one for isolated or non-intersecting grooves and another for special situations like intersections or ends. The proposed solutions perform correct antialiasing and take into account visibility and inter-reflections with little computational effort and memory requirements. Compared to anisotropic BRDFs and scratch models, we have no limitations on the distribution of grooves over the surface or their geometry, thus allowing more general patterns. Compared to displacement mapping techniques, we can efficiently simulate features of all sizes without requiring additional geometry or multiple representations.
Chica, Antoni; Williams, Jason; Andújar, Carlos; Brunet, Pere; Navazo, Isabel; Rossignac, Jarek; Vinacua, Àlvar
Computer Graphics Forum, Vol. 27, Num. 1, pp 36--46, 2008.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2007.01039.x
We present "Pressing", an algorithm for smoothing isosurfaces extracted from binary volumes while recovering their large planar regions (flats). Pressing yields a surface that is guaranteed to contain the samples of the volume classified as interior and exclude those classified as exterior. It uses global optimization to identify flats and constrained bilaplacian smoothing to eliminate sharp features and high-frequencies from the rest of the isosurface. It recovers sharp edges between flat regions and between flat and smooth regions. Hence, the resulting isosurface is usually a much more accurate approximation of the original solid than isosurfaces produced by previously proposed approaches. Furthermore, the segmentation of the isosurface into flat and curved faces and the sharp/smooth labelling of their edges may be valuable for shape recognition, simplification, compression, and various reverse engineering and manufacturing applications.
Coll, Narcís; Guerrieri, Marité Ethel; Sellarès, J. Antoni
Applied Mathematics and Computation, Vol. 201, pp 527--546, 2008.
DOI: http://dx.doi.org/10.1016/j.amc.2007.12.040
We propose a framework that combines improvement and Delaunay refinement techniques for incrementally adapting a refined mesh by interactively inserting and removing domain elements. Our algorithms achieve a quality mesh by deleting, moving or inserting Steiner points from or into the mesh. The modifications applied to the mesh are local and the number of Steiner points added during the mesh adaptation process remains low. Moreover, since a mesh generation process can be viewed as a mesh adaptation process when domain elements are inserted one by one, our approach can also be applied to the generation of refined Delaunay quality meshes by incorporating our framework in the main body of Delaunay refinement mesh generation algorithms.
Garcia Fernández, Ismael; Patow, Gustavo A.
ACM Transactions on Graphics, Vol. 27, Num. 5, pp 1--9, 2008.
DOI: http://dx.doi.org/10.1145/1409060.1409090
Preserving details from a high resolution reference model onto lower resolution models is a complex, and sometimes daunting, task as manual intervention is required to correct texture misplacements. Inverse Geometric Textures (IGT) is a parameterization independent texturing technique that allows preservation of texture details from a high resolution reference model onto lower resolutions, generated with a given simplification method. IGT uses a parameterization defined on the reference model to generate an inversely parameterized texture that stores, for each texel, a list of all triangles that mapped onto it. This way, for any valid texture coordinate, IGT can know the point and the triangle of the detailed model that was projected, allowing application of details from the reference model onto the fragment from the low-resolution model. IGT is encoded in compact data structures and can be evaluated quickly. Furthermore, the high resolution model can have its own independent, secondary parameterization, so that no additional effort is required to directly use artist-designed content.
Mas, Albert; Martín, Ignacio; Patow, Gustavo A.
Computer Graphics Forum, Vol. 27, Num. 8, pp 2013--2027, 2008.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2008.01180.x
This paper presents a method for compressing measured datasets of the near-field emission of physical light sources (represented by raysets). We create a mesh on the bounding surface of the light source that stores illumination information. The mesh is augmented with information about directional distribution and energy density. We have developed a new approach to smoothly generate random samples on the illumination distribution represented by the mesh, and to efficiently handle importance sampling of points and directions. We will show that our representation can compress a 10 million particle rayset into a mesh of a few hundred triangles. We also show that the error of this representation is low, even for very close objects.
Bose, Prosenjit; Coll, Narcís; Hurtado, Ferran; Sellarès, J. Antoni
International Journal of Computational Geometry and Applications, Vol. 17, pp 529--554, 2007.
DOI: http://dx.doi.org/10.1142/S0218195907002471
Given an unknown target planar map, we present an algorithm for constructing an approximation of the unknown target based on information gathered from linear probes of the target. Our algorithm is a general purpose reconstruction algorithm that can be applied in many settings. Our algorithm is particularly suited for the setting where computing the intersection of a line with an unknown target is much simpler than computing the unknown target itself. The algorithm maintains a triangulation from which the approximation of the unknown target can be extracted. We evaluate the quality of the approximation with respect to the target both in the topological sense and the metric sense. The correctness of the algorithm and the evaluation of its time complexity are also presented. Finally, we present some experimental results. For example, since generalized Voronoi diagrams are planar maps, our algorithm presents a simpler alternative method for constructing approximations of generalized Voronoi diagrams, which are notoriously difficult to compute.
Coll, Narcís; Fort, Marta; Madern, Narcís; Sellarès, J. Antoni
International Journal of Geographical Information Science, Vol. 21, Num. 10, pp 1115--1134, 2007.
DOI: http://dx.doi.org/10.1080/13658810701300097
Visibility computation on terrain models is an important research topic with many applications in Geographical Information Systems. A multi‐visibility map is the subdivision of the domain of a terrain into regions that, according to different criteria, encodes the visibility with respect to a set of view elements. We present an approach for visualising approximated multi‐visibility maps of a triangulated terrain corresponding to a set of view elements by using graphics hardware. Our method supports heterogeneous sets of view elements containing points, segments, polygonal chains and polygons and works for weak and strong visibility. Moreover, we are also able to efficiently solve approximated point and polygonal region multi‐visibility queries. To illustrate the usefulness of our approach we present results obtained with an implementation of the proposed algorithms.
Fast GPU-based reuse of paths in radiosity
Castro, Francesc; Patow, Gustavo A.; Sbert, Mateu; Halton, J. H.
Monte Carlo Methods and Applications, 2007.
DOI: http://dx.doi.org/10.1515/mcma.2007.014
PDF
We present in this paper a GPU-based strategy that allows a fast reuse of paths in the context of shooting random walks applied to radiosity. Given an environment with diffuse surfaces, we aim at computing a basis of n radiosity solutions, corresponding to n light-source positions. Thanks to the reuse, paths originated at each of the positions are used to also distribute power from every other position. The visibility computations needed to make the reuse of paths possible are drastically accelerated using graphics hardware, resulting in a theoretical speed-up factor of n with regard to the computation of the independent solutions. Our contribution has applications in the fields of interior design, animation, and videogames.
User-Guided Inverse Reflector Design
Patow, Gustavo A.; Pueyo, Xavier; Vinacua, Àlvar
Computers & Graphics, Vol. 31, Num. 3, pp 501--515, 2007.
DOI: http://dx.doi.org/10.1016/j.cag.2006.12.003
PDF
This paper proposes a technique for the design of luminaire reflector shapes from prescribed optical properties (far-field radiance distribution), geometrical constraints, and user knowledge. This is an important problem in the field of Lighting Engineering, more specifically in Luminaire Design. The reflector shape to be found is just one part of a set of pieces called, in Lighting Engineering, an optical set. This is composed of a light bulb (the source), the reflector, and usually a glass that acts as a diffuser for the light and protects the system from dust and other environmental phenomena. Thus, we aim at the design and development of a system capable of automatically generating a reflector shape such that the optical set emits a given, user-defined, far-field radiance distribution for a known bulb. In order to do so, light propagation inside and outside the optical set must be simulated and the resulting radiance distribution compared to the desired one. Constraints on the shape imposed by industry needs and expert knowledge must be taken into account, bounding the set of possible shapes. The general approach taken is based on a minimization procedure over the space of possible reflector shapes, starting from a user-provided initial shape. The algorithm moves towards minimizing the distance, in the L2 metric, between the resulting illumination from the reflector and the prescribed, ideal radiance distribution specified by the user. The initial shape and a provided confidence value are used during the whole process as a boundary for the space of reflectors spanned during the simulation.
Optimizing the topological and combinatorial complexity of isosurfaces
Andújar, Carlos; Brunet, Pere; Chica, Antoni; Navazo, Isabel; Rossignac, Jarek; Vinacua, Àlvar
Computer Aided Design, Vol. 37, Num. 8, pp 847--857, 2005.
DOI: http://dx.doi.org/10.1016/j.cad.2004.09.013
Since the publication of the original Marching Cubes algorithm, numerous variations have been proposed for guaranteeing water-tight constructions of triangulated approximations of isosurfaces. Most approaches divide the 3D space into cubes that each occupy the space between eight neighboring samples of a regular lattice. The portion of the isosurface inside a cube may be computed independently of what happens in the other cubes, provided that the constructions for each pair of neighboring cubes agree along their common face. The portion of the isosurface associated with a cube may consist of one or more connected components, which we call sheets. The topology and combinatorial complexity of the isosurface is influenced by three types of decisions made during its construction: (1) how to connect the four intersection points on each ambiguous face, (2) how to form interpolating sheets for cubes with more than one loop, and (3) how to triangulate each sheet. To determine topological properties, it is only relevant whether the samples are inside or outside the object, and not their precise value, if there is one. Previously reported techniques make these decisions based on local (per-cube) criteria, often using precomputed look-up tables or simple construction rules. Instead, we propose global strategies for optimizing several topological and combinatorial measures of the isosurfaces: triangle count, genus, and number of shells. We describe efficient implementations of these optimizations and the auxiliary data structures developed to support them.
Computing maximal tiles and application to impostor-based simplification
Andújar, Carlos; Brunet, Pere; Chica, Antoni; Navazo, Isabel; Rossignac, Jarek; Vinacua, Àlvar
Computer Graphics Forum, Vol. 23, Num. 3, pp 401--410, 2004.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2004.00771.x
The computation of the largest planar region approximating a 3D object is an important problem with wide applications in modeling and rendering. Given a voxelization of the 3D object, we propose an efficient algorithm to solve a discrete version of this problem. The input of the algorithm is the set of grid edges connecting the interior and the exterior of the object (called sticks). Using a voting-based approach, we compute the plane that slices the largest number of sticks and is orientation-compatible with these sticks. The robustness and efficiency of our approach rests on the use of two different parameterizations of the planes with suitable properties. The first of these is exact and is used to retrieve precomputed local solutions of the problem. The second one is discrete and is used in a hierarchical voting scheme to compute the global maximum. This problem has diverse applications that range from finding object signatures to generating simplified models. Here we demonstrate the merits of the algorithm for efficiently computing an optimized set of textured impostors for a given polygonal model.
Bosch, Carles; Pueyo, Xavier; Mérillou, Stéphane; Ghazanfarpour, Djamchid
Computer Graphics Forum, Vol. 23, Num. 3, pp 361--370, 2004.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2004.00767.x
PDF
Individually visible scratches, also called isolated scratches, are very common in real-world surfaces. Although their microgeometry is not visible, they are individually perceptible by the human eye, lying in a representation scale between BRDF and texture. In order to simulate this kind of scratch in synthetic images we need to know their position over the surface (texture scale), so we can determine where to use the specific scratch BRDF instead of the ordinary surface BRDF. Computing the BRDF of a scratch is difficult because it depends on the scratch’s invisible microgeometry. In this paper, we propose a new physically based model to derive this microgeometry by simulating the formation process of scratches. We allow specifying intuitively the parameters involved in the process, such as the scratching tool, the penetration forces, and the material properties of the object. From these parameters, we derive the microgeometries of the scratches by taking into account the real behaviour of the process. This behaviour has been determined by analysing existing models in the field of materials engineering and some “scratch tests” that we performed on metals. Our method has the advantages of easily simulating scratches with a wide range of microgeometries and taking into account the variability of their microgeometry along the scratch path. Another contribution is related to the location of the scratches over the surface. Instead of using an image of the paths as in previous work, we present a new representation based on curves defining the paths. This offers independence from the image resolution and the distance from the observer, and accurately provides the scratch direction in order to compute scratch BRDFs.
Conferences
Alvarado, E.; Argudo, Oscar; Rohmer, D.; Cani, M-P.; Pelechano, Nuria
The Visual Computer (CGI), pp 1--13, 2024.
DOI: http://dx.doi.org/10.1007/s00371-024-03506-z
Human and animal presence in natural landscapes is initially revealed by the immediate impact of their locomotion, from footprints to crushed grass. In this work, we present an approach to model the effects of virtual characters on natural terrains, focusing on the impact of human locomotion. We introduce a lightweight solution to compute accurate foot placement on uneven ground and infer dynamic foot pressure from kinematic animation data and the mass of the character. A ground and vegetation model enables us to effectively simulate the local impact of locomotion on soft soils and plants over time, resulting in the formation of visible paths. As our results show, we can parameterize various soil materials and vegetation types, validated with real-world data. Our method can be used to significantly increase the realism of populated natural landscapes and the sense of presence in virtual applications and games.
Itatani, Reiya; Pelechano, Nuria
Motion Interaction and Games (MIG'24), 2024.
Stretch your reach: Studying Self-Avatar and Controller Misalignment in Virtual Reality Interaction
Pontón, Jose Luis; Keshavarz, Reza; Beacco, Alejandro; Pelechano, Nuria
CHI '24: Proceedings of the CHI Conference on Human Factors in Computing Systems, Honolulu, Hawaii, pp 1--15, 2024.
DOI: http://dx.doi.org/10.1145/3613904.3642268
Immersive Virtual Reality typically requires a head-mounted display (HMD) to visualize the environment and hand-held controllers to interact with the virtual objects. Recently, many applications display full-body avatars to represent the user and animate the arms to follow the controllers. Embodiment is higher when the self-avatar movements align correctly with the user. However, having a full-body self-avatar following the user’s movements can be challenging due to the disparities between the virtual body and the user’s body. This can lead to misalignments in the hand position that can be noticeable when interacting with virtual objects. In this work, we propose five different interaction modes to allow the user to interact with virtual objects despite the self-avatar and controller misalignment and study their influence on embodiment, proprioception, preference, and task performance. We modify aspects such as whether the virtual controllers are rendered, whether the controllers are rendered in their real physical location or attached to the user’s hand, and whether the avatar arms are stretched to always reach the real controllers. We evaluate the interaction modes both quantitatively (performance metrics) and qualitatively (embodiment, proprioception, and user preference questionnaires). Our results show that the stretching arms solution, which provides body continuity and guarantees that the virtual hands or controllers are in the correct location, offers the best results in embodiment, user preference, proprioception, and performance. Also, rendering the controller does not have an effect on either embodiment or user preference.
Are LLMs ready for Visualization?
Vázquez, Pere-Pau
IEEE 17th Pacific Visualization Conference (PacificVis), pp 343--352, 2024.
DOI: http://dx.doi.org/10.1109/PacificVis60374.2024.00049
PDF
Generative models have received a lot of attention in many areas of academia and the industry. Their capabilities span many areas, from the invention of images given a prompt to the generation of concrete code to solve a certain programming issue. These two paradigmatic cases fall within two distinct categories of requirements, ranging from "creativity" to "precision", as characterized by Bing Chat, which employs ChatGPT-4 as its backbone. Visualization practitioners and researchers have wondered to what extent such systems could accomplish our work in a more efficient way. Several works in the literature have utilized them for the creation of visualizations, and some tools, such as Lida, incorporate them as part of their pipeline. Nevertheless, to the authors’ knowledge, no systematic approach for testing their capabilities has been published that includes both extensive and in-depth evaluation. Our goal is to fill that gap with a systematic approach that analyzes three elements: whether Large Language Models are capable of correctly generating a large variety of charts, what libraries they can deal with effectively, and how far we can go in configuring individual charts. To achieve this objective, we initially selected a diverse set of charts, which are commonly utilized in data visualization. We then developed a set of generic prompts that could be used to generate them, and analyzed the performance of different LLMs and libraries. The results include both the set of prompts and the data sources, as well as an analysis of the performance with different configurations.
Yun, Haoran; Pontón, Jose Luis; Beacco, Alejandro; Andújar, Carlos; Pelechano, Nuria
IEEE Conference Virtual Reality and 3D User Interfaces (IEEE VR), 2024.
DOI: http://dx.doi.org/10.1109/VR58804.2024.00068
An increasing number of virtual reality applications require environments that emulate real-world conditions. These environments often involve dynamic virtual humans showing realistic behaviors. Understanding user perception and navigation among these virtual agents is key for designing realistic and effective environments featuring groups of virtual humans. While collision risk significantly influences human locomotion in the real world, this risk is largely absent in virtual settings. This paper studies the impact of the expected collision feedback on user perception and interaction with virtual crowds. We examine the effectiveness of commonly used collision feedback techniques (auditory cues and tactile vibrations) as well as inducing participants to expect that a physical bump with a real person might occur, as if some virtual humans actually correspond to real persons embodied into them and sharing the same physical space. Our results indicate that the expected collision feedback significantly influences both participant behavior—encompassing global navigation and local movements—and subjective perceptions of presence and copresence. Specifically, the introduction of a perceived risk of actual collision was found to significantly impact global navigation strategies and increase the sense of presence. Auditory cues had a similar effect on global navigation and additionally enhanced the sense of copresence. In contrast, vibrotactile feedback was primarily effective in influencing local movements.
Franco, Juan Jose; Vázquez, Pere-Pau
In Proc. of VISIGRAPP - International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, pp 256--267, 2023.
DOI: http://dx.doi.org/10.5220/0011895300003417
Environmental noise pollution is a problem for cities’ inhabitants that can be especially severe in large cities. To implement measures that can alleviate this problem, it is necessary to understand the extent and impact of different noise sources. Although gathering data is relatively cheap, processing and analyzing the data is still complex. Beyond the lack of an automatic method for labelling city sounds, perhaps more important is the fact that there is no tool that allows domain experts to analytically explore data that has been manually labelled. To solve this problem, we have created a visual analytics application that facilitates the exploration of multiple-labelled temporal data captured at four different corners of a crossing in a populated area of Barcelona, the Eixample neighborhood. Our tool consists of a series of linked interactive views that facilitate top-down (from noise events to labels) and bottom-up (from labels to time slots) exploration of the captured data.
Molina, Elena; Middel, C.; Vázquez, Pere-Pau
EuroVis, Short Papers, 2023.
DOI: http://dx.doi.org/10.2312/evs.20231039
Heatmaps are a widely used technique in visualization. Unfortunately, they have not been investigated in depth and little is known about the best parameterizations so that they are properly interpreted. The effect of different palettes on our ability to read values is still unknown. To address this issue, we conducted a user study, in which we analyzed the effect of two commonly used color palettes, Blues and Viridis, on value estimation and value search. As a result, we provide some suggestions for what to expect from the heatmap configurations analyzed.
Monclús, Eva; Vázquez, Pere-Pau
VCBM 2023: Eurographics Workshop on Visual Computing for Biology and Medicine, 2023.
DOI: http://dx.doi.org/10.2312/vcbm.20232019
The segmentation of medical models is a complex and time-intensive process required for both diagnosis and surgical preparation. Despite the advancements in deep learning, neural networks can only automatically segment a limited number of structures, often requiring further validation by a domain expert. In numerous instances, manual segmentation is still necessary. Virtual Reality (VR) technology can enhance the segmentation process by providing improved perception of segmentation outcomes and enabling interactive supervision by experts. But inspecting how the segmentation algorithm is progressing, and defining new seeds, requires seeing the inner layers of the volume, which can be costly and difficult to achieve with typical metaphors such as clipping planes. In this paper, we introduce a wedge-shaped 3D interaction metaphor designed to facilitate VR-based segmentation through detailed inspection and guidance. User evaluations demonstrated increased satisfaction with usability and faster task completion times using the tool.
Vallespí, Pau; Álvarez, Brian; Monclús, Eva; Fairén, Marta
IMET 2023: International Conference on Interactive Media, Smart Systems and Emerging Technologies, 2023.
DOI: http://dx.doi.org/10.2312/imet.20231248
Organ transplantation has become very important for treating organ diseases and new technologies have been developed to increase the donor pool. Virtual Reality (VR) has been proven to be a valuable tool for training in medical education. This paper presents a VR application designed to simulate the medical protocol needed for the management of a deceased organ donation process called Uncontrolled Donation after Circulatory determination of Death (uDCD). Results from an explorative study on five medical experts in the area are presented, and future opportunities for improvement are discussed in the paper.
Yun, Haoran; Pontón, Jose Luis; Andújar, Carlos; Pelechano, Nuria
IEEE Conference Virtual Reality and 3D User Interfaces (IEEE VR), pp 286--296, 2023.
DOI: http://dx.doi.org/10.1109/VR55154.2023.00044
The use of self-avatars is gaining popularity thanks to affordable VR headsets. Unfortunately, mainstream VR devices often use a small number of trackers and provide low-accuracy animations. Previous studies have shown that the Sense of Embodiment, and in particular the Sense of Agency, depends on the extent to which the avatar's movements mimic the user's movements. However, few works study this effect for tasks requiring a precise interaction with the environment, i.e., tasks that require accurate manipulation, precise foot stepping, or correct body poses. In these cases, users are likely to notice inconsistencies between their self-avatars and their actual pose. In this paper, we study the impact of the animation fidelity of the user avatar on a variety of tasks that focus on arm movement, leg movement and body posture. We compare three different animation techniques: two of them using Inverse Kinematics to reconstruct the pose from sparse input (6 trackers), and a third one using a professional motion capture system with 17 inertial sensors. We evaluate these animation techniques both quantitatively (completion time, unintentional collisions, pose accuracy) and qualitatively (Sense of Embodiment). Our results show that the animation quality affects the Sense of Embodiment. Inertial-based MoCap performs significantly better in mimicking body poses. Surprisingly, IK-based solutions using fewer sensors outperformed MoCap in tasks requiring accurate positioning, which we attribute to the higher latency and the positional drift that causes errors at the end-effectors, which are more noticeable in contact areas such as the feet.
Beacco, Alejandro; Gallego, J.; Slater, M.
ICIP, pp 1--16, 2022.
DOI: http://dx.doi.org/10.1007/s00371-022-02669-x
This work deals with the automatic 3D reconstruction of objects from frontal RGB images. This aims at a better understanding of the reconstruction of 3D objects from RGB images and their use in immersive virtual environments. We propose a complete workflow that can be easily adapted to almost any other family of rigid objects. To explain and validate our method, we focus on guitars. First, we detect and segment the guitars present in the image using semantic segmentation methods based on convolutional neural networks. In a second step, we perform the final 3D reconstruction of the guitar by warping the rendered depth maps of a fitted 3D template in 2D image space to match the input silhouette. We validated our method by obtaining guitar reconstructions from real input images and renders of all guitar models available in the ShapeNet database. Numerical results for different object families were obtained by computing standard mesh evaluation metrics such as Intersection over Union, Chamfer Distance, and the F-score. The results of this study show that our method can automatically generate high-quality 3D object reconstructions from frontal images using various segmentation and 3D reconstruction techniques.
Beacco, Alejandro; Gallego, J.; Slater, M.
IEEE VR, 2022.
DOI: http://dx.doi.org/10.1109/VRW55335.2022.00233
We present a complete automatic system to obtain a realistic 3D avatar reconstruction of a person using only a frontal RGB image. Our proposed workflow first determines the pose, shape and semantic information from the input image. All this information is processed to create the skeleton and the 3D skinned textured mesh that forms the final avatar. We use a specific head reconstruction method to correctly match our final mesh to a realistic avatar. Our pipeline focuses on three main aspects: automation of the process, identification of the person, and usability of the avatar.
Visual Analysis of Environmental Noise Data
Franco, Juan Jose; Alsina-Pages, R.; Vázquez, Pere-Pau
Proc. 16th International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing (CGVCVIP 2022), IADIS Press, pp 45--53, 2022.
Smart cities generate a large amount of information that, if used properly, can help improve people's quality of life while providing environmental support. Decreasing noise pollution is a prevailing concern in today's society. A group of experts has collected data in different areas of Escaldes-Engordany, in Andorra. These areas have been separated according to traffic policies, i.e., the fully pedestrian ones, the fully vehicular ones, and the combined ones. Analyzing the resulting data may be a complex task for which optimal tools are not available; hence, we have developed a visualization application that allows comparisons and analysis. The main purpose is to improve the conditions for decision-making, by having accurate information so that policymakers may have a better understanding of how different transit levels configure the noise landscape, and how contrasting sounds are distributed throughout time. To achieve this, the application provides overviews, detailed individual analysis, and comparative views to parameterize the results of the policies adopted and be in a better position to improve them.
Accurate molecular atom selection in VR
Molina, Elena; Vázquez, Pere-Pau
EuroVis - Poster, 2022.
DOI: http://dx.doi.org/10.2312/evp.20221121
Accurate selection in cluttered scenes is complex because a high amount of precision is required. In Virtual Reality environments, it is even worse because it is more difficult to point at a small object with our arms in the air. Not only do our arms move slightly, but pressing the button/trigger further reduces stability. In this paper, we present two alternatives to classical ray pointing intended to facilitate the selection of atoms in molecular environments. We implemented and analyzed these techniques through an informal user study and found that they were highly appreciated by the users. This selection method could also be interesting in other crowded environments beyond molecular visualization.
Digital Reintegration of Distributed Mural Paintings at Different Architectural Phases: the Case of St. Quirze de Pedret
Munoz-Pandiella, Imanol; Argudo, Oscar; Lorés-Otzet, Immaculada; Font-Comas, Joan; Àvila-Casademont, Genís; Pueyo, Xavier; Andújar, Carlos
Eurographics Workshop on Graphics and Cultural Heritage, 2022.
DOI: http://dx.doi.org/10.2312/gch.20221227
Sant Quirze de Pedret is a Romanesque church located in Cercs (Catalonia, Spain) at the foothills of the Pyrenees. Its walls harbored one of the most important examples of mural paintings in Catalan Romanesque Art. However, in two different campaigns (in 1921 and 1937) the paintings were removed using the strappo technique and transferred to museums for safekeeping. This detachment protected the paintings from being sold in the art market, but at the price of breaking the integrity of the monument. Nowadays, the paintings are exhibited in the Museu Nacional d’Art de Catalunya - MNAC (Barcelona, Catalonia) and the Museu Diocesà i Comarcal de Solsona - MDCS (Solsona, Catalonia). Some fragments of the paintings are still on the walls of the church. In this work, we present the methodology to digitally reconstruct the church building at its different phases and group the dispersed paintings in a single virtual church, commissioned by the MDCS. We have combined 3D reconstruction (LIDAR and photogrammetric using portable artificial illumination) and modeling techniques (including texture transfer between different shapes) to recover the integrity of the monument in a single 3D virtual model. Furthermore, we have reconstructed the church building at different significant historical moments and placed actual paintings on its virtual walls, based on archaeological knowledge. This set of 3D models allows experts and visitors to better understand the monument as a whole, the relations between the different paintings, and its evolution over time.
Pontón, Jose Luis; Monclús, Eva; Pelechano, Nuria
Eurographics short papers, 2022.
DOI: http://dx.doi.org/10.48550/arXiv.2209.11482
The use of self-avatars in a VR application can enhance presence and embodiment which leads to a better user experience. In collaborative VR it also facilitates non-verbal communication. Currently it is possible to track a few body parts with cheap trackers and then apply IK methods to animate a character. However, the correspondence between trackers and avatar joints is typically fixed ad-hoc, which is enough to animate the avatar, but causes noticeable mismatches between the user’s body pose and the avatar. In this paper we present a fast and easy to set up system to compute exact offset values, unique for each user, which leads to improvements in avatar movement. Our user study shows that the Sense of Embodiment increased significantly when using exact offsets as opposed to fixed ones. We also allowed the users to see a semitransparent avatar overlaid with their real body to objectively evaluate the quality of the avatar movement with our technique.
Comino, Marc; Andújar, Carlos; Bosch, Carles; Chica, Antoni; Munoz-Pandiella, Imanol
Spanish Computer Graphics Conference (CEIG), pp 15--18, 2021.
DOI: http://dx.doi.org/10.2312/ceig.20211357
Terrestrial Laser Scanners, also known as LiDAR, are often equipped with color cameras so that both infrared and RGB values are measured for each point sample. High-end scanners also provide panoramic High Dynamic Range (HDR) images. Rendering such HDR colors on conventional displays requires a tone-mapping operator, and getting a suitable exposure everywhere on the image can be challenging for 360° indoor scenes with a variety of rooms and illumination sources. In this paper we present a simple-to-implement tone mapping algorithm for HDR panoramas captured by LiDAR equipment. The key idea is to choose, on a per-pixel basis, an exposure correction factor based on the local intensity (infrared reflectivity). Since LiDAR intensity values for indoor scenes are nearly independent from the external illumination, we show that intensity-guided exposure correction often outperforms state-of-the-art tone-mapping operators on this kind of scenes.
Comino, Marc; Andújar, Carlos; Bosch, Carles; Chica, Antoni; Munoz-Pandiella, Imanol
Spanish Computer Graphics Conference (CEIG), pp 9--14, 2021.
DOI: http://dx.doi.org/10.2312/ceig.20211356
Laser scanners enable the digitization of 3D surfaces by generating a point cloud where each point sample includes an intensity (infrared reflectivity) value. Some LiDAR scanners also incorporate cameras to capture the color of the surfaces visible from the scanner location. Getting usable colors everywhere across 360° scans is a challenging task, especially for indoor scenes. LiDAR scanners lack flashes, and placing proper light sources for a 360° indoor scene is either unfeasible or undesirable. As a result, color data from LiDAR scans often lacks adequate quality, either because of poor exposure (too bright or too dark areas) or because of severe illumination changes between scans (e.g. direct sunlight vs. cloudy lighting). In this paper, we present a new method to recover plausible color data from the infrared data available in LiDAR scans. The main idea is to train an adapted image-to-image translation network using color and intensity values on well-exposed areas of scans. At inference time, the network is able to recover plausible color using exclusively the intensity values. The immediate application of our approach is the selective colorization of LiDAR data in those scans or regions with missing or poor color data.
Analysis and Visual Exploration of Prediction Algorithms for Public Bicycle Sharing Systems
Cortez, A.; Vázquez, Pere-Pau
15th International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing (CGVCVIP 2021), pp 61--70, 2021.
Public bicycle sharing systems have become an increasingly popular means of transportation in many cities around the world. However, the information shown in mobile apps or websites is commonly limited to the system’s current status and is of little use for both citizens and responsible planning entities. The vast amount of data produced by these managing systems makes it feasible to elaborate and present predictive models that may help its users in the decision-making process. For example, if a user finds a station empty, the application could provide an estimation of when a new bicycle would be available. In this paper, we explore the suitability of several prediction algorithms applied to this case of bicycle availability, and we present a web-based tool to visually explore their prediction errors under different time frames. Even though a quick quantitative analysis may initially suggest that Random Forest yields a lower error, our visual exploration interface allows us to perform a more thorough analysis and detect subtle but relevant differences between algorithms depending on variables such as the stations' behavior, hourly intervals, days, or types of days (weekdays and weekends). This case illustrates the potential of visual representation together with quantitative metrics to compare prediction algorithms with a higher level of detail, which can, in turn, assist application designers and decision-makers to dynamically adjust the best model for their specific scenarios.
Cortez, A.; Vázquez, Pere-Pau
Computer Science Research Notes (Proc. WSCG 2021), 2021.
DOI: http://dx.doi.org/10.24132/CSRN.2021.3002.23
Nowadays, public bicycle sharing systems have become popular and widespread across the world. Their usefulness largely depends on their ability to synchronize with citizens’ usage patterns and optimize the re-balancing operations that must be carried out to reduce outages. Two crucial factors to tackle this problem are the stations’ characteristics (geography, location, etc.) and the availability of bikes and drop-off slots. Based on the requirements and input from regular users and experts in policy-making, system operation, and urban planning, we have created a web-based visualization system that facilitates the analysis of docking stations’ behavior. This system provides the first group with the availability prediction of both bikes and free slots in docking stations to assist their planning. In addition, the system helps the second group understand patterns of usage and get deeper insights (e.g. the need for resizing or complementary transportation systems) to facilitate decision-making and better fulfill the citizens’ needs. In a final evaluation, both groups found it highly useful, effective, and better suited than other existing applications.
Farràs, Arnau; Comino, Marc; Andújar, Carlos
EG GCH - Eurographics Workshop on Graphics and Cultural Heritage, pp 21--30, 2021.
DOI: http://dx.doi.org/10.2312/gch.20211402
Recent advances in 3D acquisition technologies have facilitated the inexpensive digitization of cultural heritage. In addition to the 3D digital model, in many cases multiple photo collections are also available. These photo collections often provide valuable information not included in the 3D digital model. In this paper we describe a VR-ready web application to simultaneously explore a cultural heritage model together with arbitrary photo collections. At any time, users can define a region of interest either explicitly or implicitly, and the application retrieves, scores, groups and shows a matching subset of the photos. Users can then select a photo to project it onto the 3D model, to inspect the photo separately, or to teleport to the position the photo was taken from. Unlike previous approaches for joint 2D-3D model exploration, our interface has been specifically adapted to VR. We conducted a user study and found that the application greatly facilitates navigation and provides a fast, intuitive access to the available photos. The application supports any modern browser running on desktop, mobile and VR headset systems.
Digital Layered Models of Architecture and Mural Paintings over Time
Guardia, Milagros; Pogliani, Paola; Bordi, Giulia; Charalambous, Panayiotis; Andújar, Carlos; Pueyo, Xavier
XXX Spanish Computer Graphics Conference, CEIG 2021, pp 39--42, 2021.
DOI: http://dx.doi.org/10.2312/ceig.20211363
The European project Enhancement of Heritage Experiences: The Middle Ages. Digital Layered Models of Architecture and Mural Paintings over Time (EHEM) aims to obtain virtual reconstructions of medieval artistic heritage (architecture with mural paintings) that are as close as possible to the original at different times, incorporating historical-artistic knowledge and the diachronic perspective of heritage. The project also aims to capture not only how these painted buildings are and how they were, but also what function they had, how they were used and how they were perceived by their different users. EHEM will offer an instrument for researchers, restorers and heritage curators and will "humanize" the heritage, offering the 21st-century spectator an experience close to that of users in the Middle Ages.
Hermosilla, Pedro; Schäfer, M.; Lang, M.; Fackelmann, G.; Vázquez, Pere-Pau; Kozlikova, B.; Krone, M.; Ritschel, T.; Ropinski, T.
International Conference on Learning Representations, ICLR 2021: Vienna, Austria, pp 1--16, 2021.
DOI: http://dx.doi.org
Proteins perform a large variety of functions in living organisms and thus play a key role in biology. However, commonly used algorithms in protein learning were not specifically designed for protein data, and are therefore not able to capture all relevant structural levels of a protein during learning. To fill this gap, we propose two new learning operators, specifically designed to process protein structures. First, we introduce a novel convolution operator that considers the primary, secondary, and tertiary structure of a protein by using n-D convolutions defined on both the Euclidean distance, as well as multiple geodesic distances between the atoms in a multi-graph. Second, we introduce a set of hierarchical pooling operators that enable multi-scale protein analysis. We further evaluate the accuracy of our algorithms on common downstream tasks, where we outperform state-of-the-art protein learning algorithms.
Kamal, Ahmed; Andújar, Carlos
Web3D '21: The 26th International Conference on 3D Web Technology, pp 1--6, 2021.
DOI: http://dx.doi.org/10.1145/3485444.3487643
Navigating through a virtual environment is one of the major user tasks in the 3D web. Although hundreds of interaction techniques have been proposed to navigate through 3D scenes in desktop, mobile and VR headset systems, 3D navigation still poses a high entry barrier for many potential users. In this paper we discuss the design and implementation of a test platform to facilitate the creation and fine-tuning of interaction techniques for 3D navigation. We support the most common navigation metaphors (walking, flying, teleportation). The key idea is to let developers specify, at runtime, the exact mapping between user actions and virtual camera changes, for any of the supported metaphors. We demonstrate through many examples how this method can be used to adapt the navigation techniques to various people including persons with no previous 3D navigation skills, elderly people, and people with disabilities.
The impact of animations in the perception of a simulated crowd
Molina, Elena; Rios, Àlex; Pelechano, Nuria
Computer Graphics International, 2021.
Simulating virtual crowds is an important challenge in many areas such as games and virtual reality applications. A lot of effort has been dedicated to improving pathfinding, collision avoidance, or decision making, to achieve more realistic human-like behavior. However, crowd simulation will be far from appearing realistic as long as virtual humans are limited to walking animations. Including animation variety could greatly enhance the plausibility of the populated environment. In this paper, we evaluated to what extent animation variety can affect the perceived level of realism of a crowd, regardless of the appearance of the virtual agents (bots vs. humanoids). The goal of this study is to provide recommendations for crowd animation and rendering when simulating crowds. Our results show that the perceived realism of the crowd trajectories and animations is significantly higher when using a variety of animations as opposed to simply having locomotion animations, but only if we render realistic humanoids. If we can only render agents as bots, then there is not much gain from having animation variety; in fact, it could potentially lower the perceived quality of the trajectories.
Andújar, Carlos; Chica, Antoni; Comino, Marc
EuroVis 2020, Eurographics/IEEE VGTC Conference on Visualization 2020, pp 151--155, 2020.
DOI: http://dx.doi.org/10.2312/evs.20201064
Finding robust correspondences between images is a crucial step in photogrammetry applications. The traditional approach to visualize sparse matches between two images is to place them side-by-side and draw link segments connecting pixels with matching features. In this paper we present new visualization techniques for sparse correspondences between image pairs. Key ingredients of our techniques include (a) the clustering of consistent matches, (b) the optimization of the image layout to minimize occlusions due to the super-imposed links, (c) a color mapping to minimize color interference among links, (d) a criterion for giving visibility priority to isolated links, (e) the bending of link segments to put apart nearby links, and (f) the use of glyphs to facilitate the identification of matching keypoints. We show that our technique substantially reduces the clutter in the final composite image and thus makes it easier to detect and inspect both inlier and outlier matches. Potential applications include the validation of image pairs in difficult setups and the visual comparison of feature detection/matching algorithms.
Andújar, Carlos; Comino, Marc; Fairén, Marta; Vinacua, Àlvar
EuroVis 2020, Eurographics / IEEE VGTC Conference on Visualization 2020 - Posters, pp 33--35, 2020.
DOI: http://dx.doi.org/10.2312/eurp.20201122
Programming exercises are a cornerstone in Computer Science courses. If used properly, these exercises provide valuable feedback both to students and instructors. Unfortunately, the assessment of student submissions through code inspection requires a considerable amount of time. In this work we present an interactive tool to support the analysis of code submissions before, during, and after grading. The key idea is to compute a dissimilarity matrix for code submissions, using a metric that incorporates syntactic, semantic and functional aspects of the code. This matrix is used to embed the submissions in 2D space, so that similar submissions are mapped to nearby locations. The tool allows users to visually identify clusters, inspect individual submissions, and perform detailed pair-wise and abridged n-way comparisons. Finally, our approach facilitates comparative scoring by presenting submissions in a nearly-optimal order, i.e. similar submissions appear close in the sequence. Our initial evaluation indicates that the tool (currently supporting C++/GLSL code) provides clear benefits both to students (fairer scores, less bias, more consistent feedback) and instructors (less effort, better feedback on student performance).
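The pipeline sketched in the abstract (dissimilarity matrix, 2D embedding, nearly-optimal grading order) can be approximated in a few lines. This is an illustrative stand-in, not the paper's tool: classical MDS replaces whatever embedding the authors use, and a greedy nearest-neighbour tour stands in for their ordering step.

```python
import numpy as np

def embed_and_order(D):
    """Given a symmetric dissimilarity matrix D (n x n) between code
    submissions, return (a) 2D coordinates via classical MDS and (b) a
    greedy nearest-neighbour ordering so similar submissions end up
    adjacent in the grading sequence.
    """
    n = D.shape[0]
    # Classical MDS: double-center the squared dissimilarities ...
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    # ... and take the top-2 eigenpairs as 2D coordinates.
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:2]
    coords = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

    # Greedy tour: repeatedly visit the most similar unvisited submission.
    order = [0]
    remaining = set(range(1, n))
    while remaining:
        last = order[-1]
        nxt = min(remaining, key=lambda j: D[last, j])
        order.append(nxt)
        remaining.remove(nxt)
    return coords, order
```

A greedy tour is a rough heuristic for the "similar submissions appear close in the sequence" goal; a proper seriation or TSP solver would give a better ordering at higher cost.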
Comino, Marc; Chica, Antoni; Andújar, Carlos
18th Eurographics Workshop on Graphics and Cultural Heritage, pp 23--32, 2020.
DOI: http://dx.doi.org/10.2312/gch.20201289
Visual storytelling is a powerful tool for Cultural Heritage communication. However, traditional authoring tools either produce videos that cannot be fully integrated with 3D scanned models, or require 3D content creation skills that imply a high entry barrier for Cultural Heritage experts. In this paper we present an image-supported, video-based authoring tool allowing non-3D-experts to create rich narrative content that can be fully integrated in immersive virtual reality experiences. Given an existing 3D scanned model, each story is based on a user-provided photo or system-proposed image. First, the system automatically registers the image against the 3D model, and creates an undistorted version that will serve as a fixed background image for the story. Authors can then use their favorite presentation software to annotate or edit the image while recording their voice. The resulting video is processed automatically to detect per-frame regions-of-interest. At visualization time, videos are projected onto the 3D scanned model, allowing the audience to watch the narrative piece in its surrounding spatial context. We discuss multiple color blending techniques, inspired by detail textures, to provide high-resolution detail. The system uses the image-to-model registration data to find suitable locations for triggers and avatars that draw the user's attention towards the 3D model parts being referred to by the presenter. We conducted an informal user study to evaluate the quality of the immersive experience. Our findings suggest that our approach is a valuable tool for fast and easy creation of fully-immersive visual storytelling experiences.
Fons, Joan; Chica, Antoni; Andújar, Carlos
GRAPP, pp 71--82, 2020.
DOI: http://dx.doi.org/10.5220/0008935900710082
The popularization of inexpensive 3D scanning, 3D printing, 3D publishing and AR/VR display technologies has renewed the interest in open-source tools providing the geometry processing algorithms required to clean, repair, enrich, optimize and modify point-based and polygonal-based models. Nowadays, there is a large variety of such open-source tools whose user community includes 3D experts but also 3D enthusiasts and professionals from other disciplines. In this paper we present a Python-based tool that addresses two major caveats of current solutions: the lack of easy-to-use methods for the creation of custom geometry processing pipelines (automation), and the lack of a suitable visual interface for quickly testing, comparing and sharing different pipelines, supporting rapid iterations and providing dynamic feedback to the user (demonstration). From the user's point of view, the tool is a 3D viewer with an integrated Python console from which internal or external Python code can be executed. We provide an easy-to-use but powerful API for element selection and geometry processing. Key algorithms are provided by a high-level C library exposed to the viewer via Python-C bindings. Unlike competing open-source alternatives, our tool has a minimal learning curve and typical pipelines can be written in a few lines of Python code.
Guardia, Milagros; Pogliani, Paola; Bordi, Giulia; Charalambous, Panayiotis; Andújar, Carlos; Pueyo, Xavier
18th Eurographics Workshop on Graphics and Cultural Heritage, pp 79, 2020.
DOI: http://dx.doi.org/10.2312/gch.20201295
The European project Enhancement of Heritage Experiences: The Middle Ages. Digital Layered Models of Architecture and Mural Paintings over Time (EHEM), approved in the call for JPICH Conservation, Protection and Use (0127) in the year 2020, aims to obtain virtual reconstructions of medieval artistic heritage (architecture with mural paintings) that are as close as possible to the original at different times, incorporating historical-artistic knowledge and the diachronic perspective of heritage, as an instrument for researchers, restorers and heritage curators, and to improve visitors' perceptions and experiences.
Avatars rendering and its effect on perceived realism in Virtual Reality
Molina, Elena; Rios, Àlex; Pelechano, Nuria
MARCH: Modeling and Animating Realistic Crowds and Humans; Workshop in 3rd IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), 2020.
Immersive virtual environments have proven to be a plausible platform for multiple disciplines to simulate different types of scenarios and situations at a low cost. When participants immersed in a virtual environment experience presence, they are more likely to behave as if they were in the real world. Improving the level of realism should provide a more compelling scenario, so that users will experience higher levels of presence and thus be more likely to behave as if they were in the real world. This paper presents preliminary results of an experiment in which participants navigate through two versions of the same scenario with different levels of realism of both the environment and the avatars. Our current results, from a between-subjects experiment, show that the reported levels of quality in the visualization are not significantly different, which means that other aspects of the virtual environment and/or avatars must be taken into account in order to improve the perceived level of realism.
Serrancoli, G.; Sanchez Egea, A.; Torner, J.; Izquierdo, K.; Susin, Antonio
Comunicaciones del I Congreso de la Red Española de Investigación del Rendimiento Deportivo en Ciclismo y Mujer (REDICYM), pp 1--1, 2020.
Image-recognition algorithms are becoming increasingly accurate and can be used to capture the motion of the human skeleton, as with OpenPose (a deep-learning library). However, these algorithms have difficulties when there are occlusions, as is the case when capturing cycling motion (one leg is partially hidden). Objective: First, a correction algorithm was developed to automatically detect the hip, knee, ankle and foot points when they are not visible. Second, the measured kinematics were integrated with a low-cost force-sensor system, which measures the external forces, to perform inverse dynamic analysis and estimate the joint moments. Methods: The pedalling motion and the contact forces at the saddle and pedals were captured for five subjects. Angular kinematics were computed from the OpenPose data and the corrective algorithm. Dynamic analyses were performed in OpenSim to estimate the joint moments and, subsequently, the joint powers. The accuracy of the kinematics was evaluated by comparing the results against manual tracking. Results: The mean RMS error was below 2°, 4.5° and 11° at the hip, knee and ankle, respectively. The kinematics, dynamics and joint powers were biomechanically coherent. Discussion: The joint kinematics, dynamics and powers are comparable with results from the literature. The hypothesis of neglecting tangential forces should be evaluated in detail. The MATLAB functions implementing the described algorithms are available online. Conclusion: The developed marker-less system represents a promising method for analyzing kinematics and dynamics in cycling.
A Parser-based Tool to Assist Instructors in Grading Computer Graphics Assignments
Andújar, Carlos; Vijulie, CristinaRaluca; Vinacua, Àlvar
40th Annual Conference of the European Association for Computer Graphics, Education papers, pp 21--28, 2019.
DOI: http://dx.doi.org/10.2312/eged.20191025
Colonic content assessment from MRI imaging using a semi-automatic approach
Ceballos, Victor; Monclús, Eva; Vázquez, Pere-Pau; Bendezú, Álvaro; Mego, Marianela; Merino, Xavier; Azpiroz, Fernando; Navazo, Isabel
Eurographics Workshop on Visual Computing for Biology and Medicine. EG VCBM 2019, pp 17--26, 2019.
DOI: http://dx.doi.org/10.2312/vcbm.20191227
The analysis of the morphology and content of the gut is necessary in order to achieve a better understanding of its metabolic and functional activity. Magnetic resonance imaging (MRI) has become an important imaging technique since it is able to visualize soft tissues in an undisturbed bowel using no ionizing radiation. In the last few years, MRI of gastrointestinal function has advanced substantially. However, few studies have focused on the colon, because the analysis of colonic content is time consuming and cumbersome. This paper presents a semi-automatic segmentation tool for the quantitative assessment of the unprepared colon from MRI images. The techniques developed here have been crucial for a number of clinical experiments.
Comino, Marc; Chica, Antoni; Andújar, Carlos
CEIG-Spanish Computer Graphics Conference (2019), pp 51--57, 2019.
DOI: http://dx.doi.org/10.2312/ceig.20191203
Nowadays, there are multiple available range scanning technologies which can capture extremely detailed models of real-world surfaces. The result of such a process is usually a set of point clouds which can contain billions of points. While these point clouds can be used and processed offline for a variety of purposes (such as surface reconstruction and offline rendering), it is unfeasible to interactively visualize the raw point data. The most common approach is to use a hierarchical representation to render varying-size oriented splats, but this method also has its limitations, as usually a single color is encoded for each point sample. Some authors have proposed the use of color-textured splats, but these either have been designed for offline rendering or do not address the efficient encoding of image datasets into textures. In this work, we propose extending point clouds by encoding their color information into textures and using a pruning and scaling rendering algorithm to achieve interactive rendering. Our approach can be combined with hierarchical point-based representations to allow for real-time rendering of massive point clouds on commodity hardware.
Delicado, Luis; Pelechano, Nuria
ACM Conference on Motion Interaction and Games (MIG'19), pp 1--6, 2019.
DOI: http://dx.doi.org/10.1145/3359566.3360063
Achieving realistic virtual humans is crucial in virtual reality applications and video games. Nowadays there are software and game development tools that are of great help to generate and simulate characters. They offer easy-to-use GUIs to create characters by dragging and dropping features and making small modifications. Similarly, there are tools to create animation graphs and set blending parameters, among others. Unfortunately, even though these tools are relatively user-friendly, achieving natural animation transitions is not straightforward, and thus non-expert users tend to spend a large amount of time generating animations that are not completely free of artefacts. In this paper we present a method to automatically generate animation blend spaces in Unreal Engine, which offers two advantages: the first is that it provides a tool to evaluate the quality of an animation set, and the second is that the resulting graph does not depend on user skills and is thus not prone to user errors.
Escolano, Carlos; Costa-jussà, Marta R.; Lacroux, Elora; Vázquez, Pere-Pau
Conference on Empirical Methods in Natural Language Processing (EMNLP) and 9th International Joint Conference on Natural Language Processing (IJCNLP), pp 151–156, 2019.
DOI: http://dx.doi.org/10.18653/v1/D19-3026
The main alternatives nowadays to deal with sequences are Recurrent Neural Networks (RNN), Convolutional Neural Networks (CNN) architectures and the Transformer. In this context, RNNs, CNNs and Transformers have most commonly been used as encoder-decoder architectures with multiple layers in each module. Far beyond this, these architectures are the basis for the contextual word embeddings which are revolutionizing most natural language downstream applications. However, intermediate layer representations in sequence-based architectures can be difficult to interpret. To make each layer representation within these architectures more accessible and meaningful, we introduce a web-based tool that visualizes them both at the sentence and token level. We present three use cases. The first analyses gender issues in contextual word embeddings. The second and third show multilingual intermediate representations for sentences and tokens, and the evolution of these representations across the multiple layers of the decoder, in the context of multilingual machine translation.
Males, Jan; Monclús, Eva; Díaz, Jose; Vázquez, Pere-Pau
Eurographics Workshop on Visual Computing for Biology and Medicine (EG VCBM 2019)-Short papers, pp 27-31, 2019.
DOI: http://dx.doi.org/10.2312/vcbm.20191228
The colon is an organ whose constant motility poses difficulties to its analysis. Although morphological data can be successfully extracted from Computed Tomography, its radiative nature makes it indicated only for patients with disorders. Only recently have acquisition techniques relying on Magnetic Resonance Imaging matured enough to enable the generation of morphological colon data of healthy patients without preparation (i.e., administration of drugs or contrast agents). As a result, a database of colon morphological data for patients under different diets has been created. Currently, the digestologists we collaborate with analyze the measured data of the gut by inspecting a set of spreadsheets. In this paper, we propose a system for the exploratory visual analysis of the whole database of morphological data at once. It provides features for the visual comparison of data correlations, the inspection of the morphological measures, as well as 3D rendering of the segmented colon models. The system solely relies on the use of web technologies, which makes it portable even to mobile devices.
A Level-of-Detail Technique for Urban Physics Calculations in Large Urban Environments
Muñoz, David; Besuievsky, Gonzalo; Patow, Gustavo A.
Spanish Computer Graphics Conference (CEIG), pp 9--17, 2019.
DOI: http://dx.doi.org/10.2312/ceig.20191198
In many applications, such as urban physics simulations or the study of solar impact effects at different scales, complex 3D city models are required to evaluate physical values. In this paper we present a new technique which, through the use of an electrical analogy and the calculation of sky view factors and form factors, makes it possible to simulate and study the thermal behaviour of an urban environment, taking into account the solar and sky radiation, the air and sky temperatures, and even the thermal interaction between nearby buildings. We also show that it is possible, from a 3D recreation of a large urban environment, to simulate the heat exchanges that take place between the buildings of a city and its immediate surroundings. In the same way, taking into account the geographic zone, the altitude and the type of climate with which the simulations are carried out, it is possible to compare the thermal behaviour of a large urban environment under the chosen conditions.
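The sky view factor mentioned above can be estimated by Monte-Carlo sampling of the upper hemisphere. The sketch below is a generic illustration, not the paper's method; the `is_blocked` callback and the cosine weighting are assumptions.

```python
import numpy as np

def sky_view_factor(is_blocked, n_samples=2048, rng=None):
    """Monte-Carlo estimate of the sky view factor at a point: the
    cosine-weighted fraction of the upper hemisphere with an unobstructed
    view of the sky. `is_blocked(d)` should return True if direction d
    (unit vector, z up) hits city geometry.
    """
    rng = rng or np.random.default_rng(0)
    u1, u2 = rng.random(n_samples), rng.random(n_samples)
    # Cosine-weighted hemisphere sampling (z is up), so the plain average
    # of the visibility samples already estimates the cosine-weighted SVF.
    r = np.sqrt(u1)
    phi = 2 * np.pi * u2
    dirs = np.column_stack([r * np.cos(phi), r * np.sin(phi), np.sqrt(1 - u1)])
    visible = np.fromiter((not is_blocked(d) for d in dirs),
                          dtype=bool, count=n_samples)
    return visible.mean()
```

In a full city model, `is_blocked` would be a ray-geometry intersection query against the building meshes; here it is left abstract on purpose.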
The future of avatar‐human interaction in VR, AR and mixed reality applications.
Pelechano, Nuria; Pettré, Julien; Chrysanthou, Yiorgos
Eurographics 2019 Think Tank, 2019.
As HMDs and AR technology have become increasingly popular and cheaper, the number of applications is also rapidly increasing. An important remaining challenge with such environments is the faithful representation of virtual humanoids: not necessarily their visual appearance as much as the naturalness of their motion, behavior and responses. Correctly simulating and animating virtual humanoids for immersive VR and AR sits at the crossing between several research fields: Computer Graphics, Computer Animation, Computer Vision, Machine Learning, Virtual Reality and Mixed Reality. This Think Tank aims at discussing the integration of the latest advancements in the fields mentioned above with the purpose of enhancing VR, AR and mixed reality for populated environments. This session should open the discussion regarding how these different fields could work together to achieve real breakthroughs that go beyond the current state of the art in interaction between avatars and humans.
A Virtual Reality Front-end for City Modeling
Rando, Eduardo; Andújar, Carlos; Patow, Gustavo A.
XXIX Spanish Computer Graphics Conference, CEIG 2019, pp 89--92, 2019.
DOI: http://dx.doi.org/10.2312/ceig.20191210
Salvetti, Isadora; Rios, Àlex; Pelechano, Nuria
CEIG-Spanish Computer Graphics Conference (2019), pp 97--101, 2019.
DOI: http://dx.doi.org/10.2312/ceig.20191212
Virtual navigation should be as similar as possible to how we move in the real world; however, the limitations of hardware and physical space make this a challenging problem. Tracking natural walk is only feasible when the dimensions of the virtual environment match those of the real world. The problem of most navigation techniques is that they produce motion sickness because the optical flow observed does not match the vestibular and proprioceptive information that appears during real physical movement. Walk-in-place is a technique that can successfully reduce motion sickness without losing presence in the virtual environment. It is suitable for navigating a very large virtual environment but is not usually needed in small virtual spaces. Most current work focuses on one specific navigation metaphor; however, in our experience we have observed that if users are given the possibility to use walk-in-place for large distances, they tend to switch to normal walk when they are in a confined virtual area (such as a small room). Therefore, in this paper we present our ongoing work to seamlessly switch between two navigation metaphors based on leg and head tracking to achieve a more intuitive and natural virtual navigation.
Andújar, Carlos; Argudo, Oscar; Besora, Isaac; Brunet, Pere; Chica, Antoni; Comino, Marc
XXVIII Spanish Computer Graphics Conference (CEIG 2018), Madrid, Spain, June 27-29, 2018, pp 25--32, 2018.
DOI: http://dx.doi.org/10.2312/ceig.20181162
Structure-from-motion along with multi-view stereo techniques jointly allow for the inexpensive scanning of 3D objects (e.g. buildings) using just a collection of images taken from commodity cameras. Despite major advances in these fields, a major limitation of dense reconstruction algorithms is that correct depth/normal values are not recovered on specular surfaces (e.g. windows) and parts lacking image features (e.g. flat, textureless parts of the facade). Since these reflective properties are inherent to the surface being acquired, images from different viewpoints hardly contribute to solve this problem. In this paper we present a simple method for detecting, classifying and filling non-valid data regions in depth maps produced by dense stereo algorithms. Triangle meshes reconstructed from our repaired depth maps exhibit much higher quality than those produced by state-of-the-art reconstruction algorithms like Screened Poisson-based techniques.
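The fill step for non-valid depth regions can be illustrated with a simple diffusion-based inpainting sketch. This is a simplified stand-in for the paper's classify-and-fill method (which distinguishes specular from textureless regions); the function name and the diffusion scheme are assumptions.

```python
import numpy as np

def fill_invalid_depth(depth, iters=200):
    """Fill non-valid regions of a depth map by iteratively diffusing
    values in from valid neighbours.

    depth : (H, W) float array; invalid samples are NaN or <= 0.
    NOTE: np.roll wraps around image borders; a real implementation
    would handle edges explicitly.
    """
    d = depth.astype(float).copy()
    invalid = ~np.isfinite(d) | (d <= 0)
    d[invalid] = 0.0
    filled = d.copy()
    known = (~invalid).astype(float)
    for _ in range(iters):
        # Average the 4-neighbourhood, weighting by how "known" each pixel is.
        num = np.zeros_like(filled)
        den = np.zeros_like(filled)
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            num += np.roll(filled * known, shift, axis=(0, 1))
            den += np.roll(known, shift, axis=(0, 1))
        upd = num / np.maximum(den, 1e-9)
        # Only invalid pixels are updated; valid measurements stay fixed.
        filled[invalid] = upd[invalid]
        known[invalid] = np.minimum(den[invalid] / 4.0, 1.0)
    return filled
```

Diffusion gives smooth fills suitable for flat, textureless regions; the paper's classification step matters precisely because specular regions (e.g. windows) call for a different completion strategy.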
Andújar, Carlos; Brunet, Pere; Buxareu, Jerónimo; Fons, Joan; Laguarda, Narcís; Pascual, Jordi; Pelechano, Nuria
EUROGRAPHICS Workshop on Graphics and Cultural Heritage (EG GCH) . November 12-15. Viena (Austria), pp 47--56, 2018.
DOI: http://dx.doi.org/10.2312/gch.20181340
Virtual Reality (VR) simulations have long been proposed to allow users to explore both yet-to-be-built buildings in architectural design, and ancient, remote or disappeared buildings in cultural heritage. In this paper we describe an on-going VR project on a UNESCO World Heritage Site that simultaneously addresses both scenarios: supporting architects in the task of designing the remaining parts of a large unfinished building, and simulating existing parts that define the environment that new designs must conform to. The main challenge for the team of architects is to advance towards the project completion while remaining faithful to Gaudí’s original project, since many plans, drawings and plaster models were lost. We analyze the main requirements for collaborative architectural design in such a unique scenario, describe the main technical challenges, and discuss the lessons learned after one year of use of the system.
GL-Socket: A CG Plugin-based Framework for Teaching and Assessment
Andújar, Carlos; Chica, Antoni; Fairén, Marta; Vinacua, Àlvar
EG 2018 - Education Papers, pp 25--32, 2018.
DOI: http://dx.doi.org/10.2312/eged.20181003
In this paper we describe a plugin-based C++ framework for teaching OpenGL and GLSL in introductory Computer Graphics courses. The main strength of the framework architecture is that student assignments are mostly independent and thus can be completed, tested and evaluated in any order. When students complete a task, the plugin interface forces a clear separation of initialization, interaction and drawing code, which in turn facilitates code reusability. Plugin code can access scene, camera, and OpenGL window methods through a simple API. The plugin interface is flexible enough to allow students to complete tasks requiring shader development, object drawing, and multiple rendering passes. Students are provided with sample plugins with basic scene drawing and camera control features. One of the plugins that the students receive contains a shader development framework with self-assessment features. We describe the lessons learned after using the tool for four years in a Computer Graphics course involving more than one hundred Computer Science students per year.
Díaz, Jose; Meruvia-Pastor, Oscar; Vázquez, Pere-Pau
22nd International Conference Information Visualisation, IV 2018, Fisciano, Italy, July 10-13, 2018, pp 159--168, 2018.
DOI: http://dx.doi.org/10.1109/iV.2018.00037
Bar charts are among the most commonly used visualization graphs. Their main goal is to communicate quantities that can be visually compared. Since they are easy to produce and interpret, they are found in any situation where quantitative data needs to be conveyed (websites, newspapers, etc.). However, depending on the layout, the perceived values can vary substantially. For instance, previous research has shown that the positioning of bars (e.g. stacked vs. separate) may influence the accuracy in bar ratio length estimation. Other works have studied the effects of embellishments on the perception of encoded quantities. However, to the best of the authors' knowledge, the effect of perceptual elements used to reinforce the quantity depicted within the bars, such as contrast and inner lines, has not been studied in depth. In this research we present a study that analyzes the effect of several internal contrast and framing enhancements with respect to the use of basic solid bars. Our results show that the addition of minimal visual elements that are easy to implement with current technology can help users to better recognize the amounts depicted by the bar charts.
Fons, Joan; Monclús, Eva; Vázquez, Pere-Pau; Navazo, Isabel
XXVIII Spanish Computer Graphics Conference (CEIG 2018), Madrid, Spain, June 27-29, 2018, pp 47--50, 2018.
DOI: http://dx.doi.org/10.2312/ceig.20181153
The recent advances in VR headsets, such as the Oculus Rift or HTC Vive, which offer high-resolution displays at affordable prices, have empowered the development of immersive VR applications. In this paper we propose an immersive VR system that uses some well-known acceleration algorithms to achieve real-time rendering of volumetric datasets. Moreover, we have incorporated different basic interaction techniques to facilitate the inspection of the volume dataset. The interaction has been designed to be as natural as possible in order to achieve the most comfortable, user-friendly virtual experience. We have conducted an informal user study to evaluate user preferences. Our evaluation shows that our application is perceived as usable, easy to learn and very effective in terms of the high level of immersion achieved.
LeoMCAD: A Lego-based Mechanical CAD system
Gonzalez, Francisco; Jesús Amador Pérez; Patow, Gustavo A.
Spanish Computer Graphics Conference (CEIG), 2018.
DOI: http://dx.doi.org/10.2312/ceig.20181163
PDF
Mechanical Design (MCAD) tools are used for creating 3D digital prototypes used in the design, visualization, and simulation of products. In this paper we present LeoMCAD, a Lego-based mechanical system designed to be used as an educational tool for both kids and Lego hobbyists, which features a novel solver that naturally and seamlessly computes the interaction between the pieces that build up a given model, solving an otherwise complex forward-kinematic system of equations in a much simpler way. The results show how our system is able to cope with situations that would produce deadlocks in more advanced commercial systems.
Hermosilla, Pedro; Maisch, Sebastian; Vázquez, Pere-Pau; Ropinski, Timo
VCBM 18: Eurographics Workshop on Visual Computing for Biology and Medicine, Granada, Spain, September 20-21, 2018, pp 185--195, 2018.
DOI: http://dx.doi.org/10.2312/vcbm.20181244
PDF
Molecular surfaces are a commonly used representation in the analysis of molecular structures as they provide a compact description of the space occupied by a molecule and its accessibility. However, due to the high abstraction of the atomic data, fine-grain features are hard to identify. Moreover, these representations involve a high degree of occlusion, which prevents the identification of internal features and potentially impacts shape perception. In this paper, we present a set of techniques, inspired by the properties of translucent materials, that have been developed to improve the perception of molecular surfaces. First, we introduce an interactive algorithm to simulate subsurface scattering for molecular surfaces, in order to improve the thickness perception of the molecule. Second, we present a technique to visualize structures just beneath the surface, while still conveying relevant depth information. Lastly, we introduce reflections and refractions into our visualization that improve the shape perception of molecular surfaces. We evaluate the benefits of these methods through crowd-sourced user studies as well as the feedback from several domain experts.
A procedural approach for thermal visualization on buildings
Muñoz, David; Besuievsky, Gonzalo; Patow, Gustavo A.
Spanish Computer Graphics Conference (CEIG), pp 109--117, 2018.
DOI: http://dx.doi.org/10.2312/ceig.20181164
PDF
Thermal behaviour analysis on buildings is an important goal for all tasks involving energy flow simulation in urban environments. One of the most widely used simplified thermal models is based on an electrical analogy, where nodes are set to simulate and solve a circuit network. In this paper we propose a procedural approach to automatically locate the nodes of the circuit according to the building structure. We provide a conceptual technique to efficiently visualize thermal variations over time in buildings. We show that we can simulate and visually represent the variations of the interior temperatures of a building over a period of time. We believe that the technique could be helpful for rapid analysis when changing building parameters, such as materials, dimensions or number of floors.
Orellana, Bernat; Monclús, Eva; Brunet, Pere; Navazo, Isabel; Bendezú, Álvaro; Azpiroz, Fernando
Medical Image Computing and Computer Assisted Intervention - MICCAI 2018 - 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part II, pp 638--647, 2018.
DOI: http://dx.doi.org/10.1007/978-3-030-00934-2_71
About 50% of the patients consulting a gastroenterology clinic report symptoms without detectable cause. Clinical researchers are interested in analyzing the volumetric evolution of colon segments under the effect of different diets and diseases. These studies require noninvasive abdominal MRI scans without using any contrast agent. In this work, we propose a colon segmentation framework designed to support T2-weighted abdominal MRI scans obtained from an unprepared colon. The segmentation process is based on an efficient and accurate quasi-automatic approach that drastically reduces the specialist's interaction and effort with respect to other state-of-the-art solutions, while decreasing the overall segmentation cost. The algorithm relies on a novel probabilistic tubularity filter, the detection of the colon medial line, probabilistic information extracted from a training set and a final unsupervised clustering. The experimental results presented show the benefits of our approach for clinical use.
Follower Behavior in a Virtual Environment
Rios, Àlex; Mateu, D.; Pelechano, Nuria
Virtual Humans and Crowds in Immersive Environments (VHCIE) . March 18. Reutlingen (Germany), 2018.
PDF
Crowd simulation models typically combine low-level and high-level behavior. The low level deals with reactive behavior and collision avoidance, while the high level deals with path finding and decision making. There has been a large amount of work studying collision avoidance manoeuvres for humans, both in virtual reality and from real data. When it comes to high-level behavior, such as decision making when choosing paths, there have been many approaches to try to simulate the large variety of possible human decisions, for instance based on minimizing energy, visibility, or path length combined with terrain constraints. It has long been assumed that in an emergency situation, humans simply follow the behavior of others. This social behavior has been observed in the real world, and thus mimicked in crowd simulation models. However, there is not yet an accurate model to determine under what circumstances this behavior emerges, and to what extent. This paper focuses on studying human behavior regarding following others during an evacuation situation without imminent danger.
Users' locomotor behavior in Collaborative Virtual Reality
Rios, Àlex; Palomar, Marc; Pelechano, Nuria
Proceedings of the 11th Annual International Conference on Motion, Interaction, and Games, MIG 2018, Limassol, Cyprus, November 08-10, 2018, pp 1--9, 2018.
DOI: http://dx.doi.org/10.1145/3274247.3274513
This paper presents a virtual reality experiment in which two participants share both the virtual and the physical space while performing a collaborative task. We are interested in studying the differences in human locomotor behavior between the real world and the VR scenario. For that purpose, participants performed the experiment in both the real and the virtual scenarios. For the VR case, participants can see both their own animated avatar and the avatar of the other participant in the environment. As they move, we store their trajectories to obtain information regarding speeds, clearance distances and task completion times. For the VR scenario, we also wanted to evaluate whether the users were aware of subtle differences in the avatars' animations and footstep sounds. We ran the same experiment under three different conditions: (1) synchronizing the avatars' feet animation and the sound of footsteps with the movement of the participant; (2) synchronizing the animation but not the sound; and finally (3) not synchronizing either one. The results show significant differences in users' presence questionnaires and also different trends in their locomotor behavior between the real world and the VR scenarios. However, the subtle differences in animations and sound tested in our experiment had no impact on the results of the presence questionnaires, although they had a small impact on locomotor behavior in terms of the time to complete the tasks and the clearance distances kept while crossing paths.
Agus, M.; Gobbetti, E.; Marton, F.; Pintore, G.; Vázquez, Pere-Pau
Proceedings in EuroGraphics Tutorials, 2017.
The increased availability and performance of mobile graphics terminals, including smartphones and tablets with high-resolution screens and powerful GPUs, combined with the increased availability of high-speed mobile data connections, is opening the door to a variety of networked graphics applications. In this world, native apps and mobile sites coexist to provide access to a wealth of multimedia information while we are on the move. This half-day tutorial provides a technical introduction to the mobile graphics world spanning the hardware-software spectrum, and explores the state of the art and key advances in specific application domains, including capture and acquisition, real-time high-quality 3D rendering and interactive exploration.
Agus, M.; Gobbetti, E.; Marton, F.; Pintore, G.; Vázquez, Pere-Pau
Siggraph Asia Courses, 2017.
This half-day tutorial provides a technical introduction to the mobile graphics world spanning the hardware-software spectrum, and explores the state of the art and key advances in specific application domains, including capture and acquisition, real-time high-quality 3D rendering, and interactive exploration.
Agus, Marco; Gobbetti, Enrico; Marton, Fabio; Pintore, Giovanni; Vázquez, Pere-Pau
International Conference on 3D Vision, Verona, Italy, Sept. 5-8, 2017.
The hardware for mobile devices, from smartphones and tablets to mobile cameras, continues to be one of the fastest-growing areas of the technology market. Not only are mobile CPUs and GPUs rapidly increasing in power, but a variety of high-quality visual and motion sensors are being embedded in mobile solutions. This, together with the increased availability of high-speed networks at lower prices, has opened the door to a variety of novel VR, AR, vision, and graphics applications. This half-day tutorial provides a technical introduction to the mobile graphics world spanning the hardware-software spectrum, and explores the state of the art and key advances in specific application domains. The five key areas that will be presented are: 1) the evolution of mobile graphics capabilities; 2) the current trends in GPU hardware for mobile devices; 3) the main software development systems; 4) the scalable visualization of large scenes on mobile platforms; and, finally, 5) the use of mobile capture and data fusion for 3D acquisition and reconstruction.
Tree Variations
Argudo, Oscar; Andújar, Carlos; Chica, Antoni
CEIG - Spanish Computer Graphics Conference, pp 121--130, 2017.
DOI: http://dx.doi.org/10.2312/ceig.20171218
The cost-effective generation of realistic vegetation is still a challenging topic in computer graphics. The simplest representation of a tree consists of a single texture-mapped billboard. Although a tree billboard does not support top views, this is the most common representation for still-image generation in areas such as architecture rendering. In this paper we present a new approach to generate tree models from a small collection of RGBA images of trees. Key ingredients of our method are the representation of the tree contour space with a small set of basis vectors, the automatic crown/trunk segmentation, and the continuous transfer of RGBA color from the exemplar images to the synthetic target. Our algorithm allows the efficient generation of an arbitrary number of tree variations and thus provides a fast solution to add variety among trees in outdoor scenes.
Díaz-García, Jesús; Brunet, Pere; Navazo, Isabel; Vázquez, Pere-Pau
CEIG 2017:XXVII Spanish Computer Graphics Conference, pp 51--60, 2017.
DOI: http://dx.doi.org/10.2312/ceig.20171208
The way in which gradients are computed in volume datasets influences both the quality of the shading and the performance obtained in rendering algorithms. In particular, the visualization of coarse datasets in multi-resolution representations is affected when gradients are evaluated on-the-fly in the shader code by accessing neighbouring positions. We propose a downsampling filter for pre-computed gradients that provides improved gradients that better match the originals, such that the aforementioned artifacts disappear. Secondly, to address the storage problem, we present a method for the efficient storage of gradient directions that minimizes the maximum angular deviation among all representable vectors in a space of 3 bytes.
Díaz-García, Jesús; Brunet, Pere; Navazo, Isabel; Vázquez, Pere-Pau
IADIS International Conference Computer Graphics, Visualization, Computer Vision and Image Processing-CGVCVIP, pp 12--20, 2017.
Volume visualization software usually has to deal with datasets that are larger than what the GPU memory may hold. This is especially true in one of the most popular application scenarios: medical visualization. In this paper we explore the quality of different downsampling methods and present a new approach that produces smooth lower-resolution representations, yet still preserves small features that are prone to disappear with other approaches.
Virtual Reality to teach anatomy
Fairén, Marta; Farrés, Mariona; Moyés, Jordi; Insa, Esther
Proceedings in Eurographics Education Papers, pp 51-58, 2017.
DOI: http://dx.doi.org/10.2312/eged.20171026
PDF
Virtual Reality (VR) and Augmented Reality (AR) have been gradually introduced in the curriculum of schools given the benefits they bring to classical education. We present an experiment designed to expose students to a VR session where they can directly inspect 3D models of several human organs by using Virtual Reality systems. Our systems allow the students to see the models directly visualized in 3D and to interact with them as if they were real. The experiment involved 254 students of a Nursing Degree, enrolled in the Human anatomy and physiology course, over two consecutive years. It includes 10 3D models representing different anatomical structures, which have been enhanced with meta-data to help the students understand each structure. In order to evaluate the students' satisfaction with such a new teaching methodology, the students were asked to fill in a questionnaire with two categories. The first measured whether or not the teaching session using VR facilitates the understanding of the structures. The second measured the students' satisfaction with the VR session. From the results we can see that the most valued items are the use of the activity as a learning tool and the satisfaction of the students' expectations. We can therefore conclude that a VR teaching session is a powerful learning tool that helps students understand the anatomical structures.
Julien Pettré; Pelechano, Nuria
EG2017 - Tutorials, 2017.
DOI: http://dx.doi.org/10.2312/egt.20171029
Crowd simulation is today a frequently used computer animation technique in the fields of video games and visual effects for movies. It is used to populate game scenes and make them lively and interactive, or to generate background characters in movies. The topic has received a lot of attention in the research community, and many simulation algorithms have been proposed to simulate crowds. How do they work? How are they concretely used in the field? This tutorial is intended for beginners and anyone curious about the topic, and will present the basics of crowd simulation. It will also address related questions such as the animation and rendering of crowd characters.
Lopez-Garcia, Axel; Susin, Antonio
CEIG 2017:XXVII Spanish Computer Graphics Conference, pp 19-22, 2017.
DOI: http://dx.doi.org/10.2312/ceig.20171203
Skeleton tracking has multiple applications such as games, virtual reality, motion capture and more. One of the main challenges of pose detection is to obtain the best possible quality with a cheap and easy-to-use device. In this work we propose a physically based method to detect errors and tracking issues which appear when we use low-cost tracking devices such as the Kinect, so that we can correct the animation and obtain a smoother movement. We have implemented the Newton-Euler algorithm, which allows us to compute the internal forces involved in a skeleton. In a common movement, forces are usually smooth, without sudden variations. When the tracking yields poor results or invalid poses, the internal forces become very large, with a lot of variation. This allows us to detect when the tracking system fails and the animation needs to be inferred through different methods.
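The core idea of the abstract, flagging frames whose implied internal forces spike, can be sketched in a few lines. This is a hedged, minimal illustration, not the paper's Newton-Euler implementation: it treats each joint independently, approximates acceleration with a finite difference, and uses an illustrative force threshold; all names and values here are assumptions for the sketch.

```python
import numpy as np

def glitch_frames(positions, dt=1/30, mass=1.0, threshold=200.0):
    """Flag frames where the implied per-joint force spikes.

    positions: (frames, joints, 3) array of tracked joint positions.
    Smooth motion yields smooth accelerations; tracking failures show
    up as sudden, large force magnitudes (F = m * a).
    """
    # Second-order central difference approximates acceleration.
    acc = (positions[2:] - 2 * positions[1:-1] + positions[:-2]) / dt**2
    force_mag = mass * np.linalg.norm(acc, axis=2)  # (frames-2, joints)
    # A frame is suspect if any joint's force exceeds the threshold.
    return np.where(force_mag.max(axis=1) > threshold)[0] + 1

# Smooth single-joint motion with one injected tracking glitch.
t = np.linspace(0, 1, 60)
pos = np.stack([np.sin(2 * np.pi * t), np.zeros_like(t), t], axis=1)[:, None, :]
pos[30] += 0.5  # simulated sensor jitter at frame 30
bad = glitch_frames(pos)
```

The flagged frames cluster around the injected glitch and could then be replaced by interpolation, mirroring the paper's "detect, then infer the animation through different methods" strategy.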
Occlusion aware hand pose recovery from sequences of depth images
Meysam Madadi; Sergio Escalera; Carruesco, Alex; Andújar, Carlos; Xavier Baró; Jordi González
12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), pp 230-237, 2017.
DOI: http://dx.doi.org/10.1109/FG.2017.37
State-of-the-art approaches to hand pose estimation from depth images have reported promising results under quite controlled conditions. In this paper we propose a two-step pipeline for recovering the hand pose from a sequence of depth images. The pipeline has been designed to deal with images taken from any viewpoint and exhibiting a high degree of finger occlusion. In the first step we initialize the hand pose using a part-based model, fitting a set of hand components in the depth images. In the second step we consider temporal data and estimate the parameters of a trained bilinear model consisting of shape and trajectory bases. Results on a synthetic, highly-occluded dataset demonstrate that the proposed method outperforms most recent pose recovery approaches, including those based on CNNs.
Munoz-Pandiella, Imanol; Akoglu, Kiraz; Bosch, Carles; Rushmeier, Holly
EUROGRAPHICS Workshop on Graphics and Cultural Heritage, 2017.
DOI: http://dx.doi.org/10.2312/gch.20171291
In Cultural Heritage projects, it is very important to identify and track weathering effects on monuments in order to design and test conservation strategies. Currently, this mapping is manual work performed by experts based on what they observe and their experience. In this paper, we present a workflow to map the weathering effect known as scaling on monuments with very little user interaction. First, we generate a 3D model of the monuments using photogrammetry techniques. Then, we reduce the noise in the acquired data using an adaptive and anisotropic filter. After that, we estimate the original shape of the surface before the weathering effects using the RANSAC algorithm. With this information, we perform a geometrical analysis to detect the features affected by this weathering effect and compute their characteristics. Then, we map the regions that have suffered scaling, using the detected features and a segmentation based on the distance between the mesh and the unweathered surface. The results of our technique can be very useful for understanding the level of weathering of a monument and for automatically tracing the weathered parts through time.
Real-time solar exposure in complex cities
Muñoz-Pandiella, Imanol; Bosch, Carles; Mérillou, Nicolas; Pueyo, Xavier; Mérillou, Stephane
28th Eurographics Symposium on Rendering, 2017.
DOI: http://dx.doi.org/10.1111/cgf.13152
In urban design, estimating solar exposure on complex city models is crucial, but existing solutions typically focus on simplified building models and are too demanding in terms of memory and computational time. In this paper, we propose an interactive technique that estimates solar exposure on detailed urban scenes. Given a directional exposure map computed over a given time period, we estimate the sky visibility factor that serves to evaluate the final exposure at each visible point. This is done using a screen-space method based on a two-scale approach, which is geometry independent and has low storage costs. Our method performs at interactive rates and is designer-oriented. The proposed technique is relevant in architecture and sustainable building design as it provides tools to estimate the energy performance of buildings as well as weathering effects in urban environments.
Rahmani, Vahid; Pelechano, Nuria
10th ACM SIGGRAPH Conference on Motion in Games, pp 8:1-8:6, 2017.
DOI: http://dx.doi.org/10.1145/3136457.3136465
The challenge of path-finding in video games is to compute optimal or near-optimal paths as efficiently as possible. As both the size of the environments and the number of autonomous agents increase, this computation has to be done under hard constraints of memory and CPU resources. Hierarchical approaches, such as HNA*, can compute paths more efficiently, although only for certain configurations of the hierarchy. For other configurations, performance can drop drastically when inserting the start and goal positions into the hierarchy. In this paper we present improvements to HNA* that eliminate these bottlenecks. We propose different methods that rely on further memory storage or on parallelism on both CPU and GPU, and carry out a comparative evaluation. Results show an important speed-up for all tested configurations and scenarios.
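The memory-for-speed trade-off mentioned in the abstract can be illustrated with a toy sketch. This is not HNA* itself: it uses a hypothetical six-node graph split into two partitions, and precomputes each node's distances to the entrance nodes of its partition so that connecting a start or goal to the hierarchy becomes a table lookup rather than a fresh local search. All graph data, names and costs are illustrative assumptions.

```python
import heapq

def dijkstra(adj, src):
    """Shortest distances from src in an adjacency dict {n: [(m, cost), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, n = heapq.heappop(pq)
        if d > dist[n]:
            continue
        for m, c in adj[n]:
            if d + c < dist.get(m, float("inf")):
                dist[m] = d + c
                heapq.heappush(pq, (dist[m], m))
    return dist

# Toy map: partitions {0,1,2} and {3,4,5}, joined by the edge 2-3.
adj = {
    0: [(1, 1), (2, 2)], 1: [(0, 1), (2, 1)], 2: [(0, 2), (1, 1), (3, 1)],
    3: [(2, 1), (4, 1)], 4: [(3, 1), (5, 2)], 5: [(4, 2)],
}
cluster = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
entrances = {"A": [2], "B": [3]}  # nodes incident to inter-partition edges

# Precompute, per node, its distance to every entrance of its partition.
# Online, inserting start/goal into the hierarchy is then a lookup --
# extra memory traded for removing the insertion bottleneck.
to_entrance = {n: {e: dijkstra(adj, n).get(e, float("inf"))
                   for e in entrances[cluster[n]]}
               for n in adj}

def hierarchical_cost(start, goal):
    if cluster[start] == cluster[goal]:
        return dijkstra(adj, start)[goal]
    # start -> own entrance -> goal's entrance -> goal
    return min(to_entrance[start][e1]
               + dijkstra(adj, e1)[e2]
               + to_entrance[goal][e2]
               for e1 in entrances[cluster[start]]
               for e2 in entrances[cluster[goal]])
```

On this toy graph the hierarchical cost from node 0 to node 5 matches the flat shortest path; in a real hierarchy the precomputed table avoids running a local search per entrance at query time, which is the bottleneck the paper targets.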
Rogla, Otger; Pelechano, Nuria
Spanish Computer Graphics Conference (CEIG), pp 113-120, 2017.
DOI: http://dx.doi.org/10.2312/ceig.20171217
Procedural modeling of virtual cities has achieved high levels of realism with little effort from the user. One can rapidly obtain a large city using off-the-shelf software based on procedural techniques, such as the use of CGA. However in order to obtain realistic virtual cities it is necessary to include virtual humanoids that behave realistically adapting