Publications

Journals
Argudo, Oscar; Andújar, Carlos; Chica, Antoni
Computer Graphics Forum, Vol. 39, Num. 1, pp 174--184, 2019.
DOI: http://dx.doi.org/10.1111/cgf.13752
The automatic generation of realistic vegetation closely reproducing the appearance of specific plant species is still a challenging topic in computer graphics. In this paper, we present a new approach to generate new tree models from a small collection of frontal RGBA images of trees. The new models are represented either as single billboards (suitable for still image generation in areas such as architecture rendering) or as billboard clouds (providing parallax effects in interactive applications). Key ingredients of our method include the synthesis of new contours through convex combinations of exemplar contours, the automatic segmentation into crown/trunk classes, and the transfer of RGBA colour from the exemplar images to the synthetic target. We also describe a fully automatic approach to convert a single tree image into a billboard cloud by extracting superpixels and distributing them inside a silhouette-defined 3D volume. Our algorithm allows for the automatic generation of an arbitrary number of tree variations from minimal input, and thus provides a fast solution to add vegetation variety in outdoor scenes.
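The contour-synthesis idea admits a compact sketch. Assuming the exemplar contours have already been resampled to a common number of points with a consistent parameterization (a preprocessing step not shown here), a new contour is simply a convex combination of the exemplars; all names below are illustrative, not taken from the authors' code:

```python
import numpy as np

def blend_contours(contours, weights):
    """Synthesize a new contour as a convex combination of exemplar
    contours. Each contour is a (P, 2) array; all exemplars must share
    the same point count and parameterization (assumed preprocessing)."""
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    stack = np.stack(contours)                   # (N, P, 2)
    return np.tensordot(weights, stack, axes=1)  # weighted sum -> (P, 2)
```

A 50/50 blend of two exemplar contours, for instance, yields the contour halfway between them.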
Julio C. S. Jacques; Yağmur Güçlütürk; Marc Perez; Umut Güçlü; Andújar, Carlos; Xavier Baró; Hugo Jair Escalante; Isabelle Guyon; Marcel A. J. Van Gerven; Rob Van Lier; Sergio Escalera
IEEE Transactions on Affective Computing, pp 1-21, 2019.
DOI: http://dx.doi.org/10.1109/TAFFC.2019.2930058
Personality analysis has been widely studied in psychology, neuropsychology, and signal processing fields, among others. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most considered cues of information for analyzing personality. However, recently there has been an increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and of the potential impact that this sort of method could have on society, we present in this paper an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review aspects of subjectivity in data labeling/evaluation, as well as current datasets and challenges organized to push research in the field.
Argudo, Oscar; Chica, Antoni; Andújar, Carlos
Computer Graphics Forum, Vol. 37, Num. 2, pp 101--110, 2018.
DOI: http://dx.doi.org/10.1111/cgf.13345
Despite recent advances in surveying techniques, publicly available Digital Elevation Models (DEMs) of terrains are low-resolution except for selected places on Earth. In this paper we present a new method to turn low-resolution DEMs into plausible and faithful high-resolution terrains. Unlike other approaches for terrain synthesis/amplification (fractal noise, hydraulic and thermal erosion, multi-resolution dictionaries), we benefit from high-resolution aerial images to produce highly-detailed DEMs mimicking the features of the real terrain. We explore different architectures for Fully Convolutional Neural Networks to learn upsampling patterns for DEMs from detailed training sets (high-resolution DEMs and orthophotos), yielding up to one order of magnitude more resolution. Our comparative results show that our method outperforms competing data amplification approaches in terms of elevation accuracy and terrain plausibility.
Argudo, Oscar; Comino, Marc; Chica, Antoni; Andújar, Carlos; Lumbreras, Felipe
Computers & Graphics, Vol. 71, pp 23 - 34, 2018.
DOI: http://dx.doi.org/10.1016/j.cag.2017.11.004
The visual enrichment of digital terrain models with plausible synthetic detail requires the segmentation of aerial images into a suitable collection of categories. In this paper we present a complete pipeline for segmenting high-resolution aerial images into a user-defined set of categories distinguishing e.g. terrain, sand, snow, water, and different types of vegetation. This segmentation-for-synthesis problem implies that per-pixel categories must be established according to the algorithms chosen for rendering the synthetic detail. This precludes the definition of a universal set of labels and hinders the construction of large training sets. Since artists might choose to add new categories on the fly, the whole pipeline must be robust against unbalanced datasets, and fast for both training and inference. Under these constraints, we analyze the contribution of common per-pixel descriptors, and compare the performance of state-of-the-art supervised learning algorithms. We report the findings of two user studies. The first one was conducted to analyze human accuracy when manually labeling aerial images. The second user study compares detailed terrains built using different segmentation strategies, including official land cover maps. These studies demonstrate that our approach can be used to turn digital elevation models into fully-featured, detailed terrains with minimal authoring efforts.
Comino, Marc; Andújar, Carlos; Chica, Antoni; Brunet, Pere
Computer Graphics Forum, Vol. 37, Num. 5, pp 233--243, 2018.
DOI: http://dx.doi.org/10.1111/cgf.13505
Normal vectors are essential for many point cloud operations, including segmentation, reconstruction and rendering. The robust estimation of normal vectors from 3D range scans is a challenging task due to undersampling and noise, especially when combining points sampled from multiple sensor locations. Our error model assumes a Gaussian distribution of the range error with spatially-varying variances that depend on sensor distance and reflected intensity, mimicking the features of Lidar equipment. In this paper we study the impact of measurement errors on the covariance matrices of point neighborhoods. We show that covariance matrices of the true surface points can be estimated from those of the acquired points plus sensor-dependent directional terms. We derive a lower bound on the neighborhood size to guarantee that estimated matrix coefficients will be within a predefined error with a prescribed probability. This bound is key for achieving an optimal trade-off between smoothness and fine detail preservation. We also propose and compare different strategies for handling neighborhoods with samples coming from multiple materials and sensors. We show analytically that our method provides better normal estimates than competing approaches in noise conditions similar to those found in Lidar equipment.
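The role of neighborhood covariance matrices in normal estimation can be illustrated with the standard PCA estimate, which the paper refines with sensor-dependent directional terms (not modeled in this sketch; the function name is illustrative):

```python
import numpy as np

def pca_normal(points):
    """Estimate a surface normal for a point neighborhood as the
    eigenvector of the covariance matrix with the smallest eigenvalue,
    i.e. the direction of least variance (plain PCA baseline)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # smallest-eigenvalue direction
```

For points sampled from a nearly flat patch, the estimate aligns with the patch normal up to sign; the paper's contribution is correcting `cov` for sensor noise before this eigen-analysis.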
Top-down model fitting for hand pose recovery in sequences of depth images
Madadi, Meysam; Escalera, Sergio; Carruesco, Alex; Andújar, Carlos; Baró, Xavier; González, Jordi
Image and Vision Computing, Vol. 79, pp 63--75, 2018.
DOI: http://dx.doi.org/10.1016/j.imavis.2018.09.006
State-of-the-art approaches to hand pose estimation from depth images have reported promising results under quite controlled conditions. In this paper we propose a two-step pipeline for recovering the hand pose from a sequence of depth images. The pipeline has been designed to deal with images taken from any viewpoint and exhibiting a high degree of finger occlusion. In a first step we initialize the hand pose using a part-based model, fitting a set of hand components in the depth images. In a second step we consider temporal data and estimate the parameters of a trained bilinear model consisting of shape and trajectory bases. We evaluate our approach on a newly created synthetic hand dataset, along with the NYU and MSRA real datasets. Results demonstrate that the proposed method outperforms the most recent pose recovery approaches, including those based on CNNs.
Coherent multi-layer landscape synthesis
Argudo, Oscar; Andújar, Carlos; Chica, Antoni; Guérin, Eric; Digne, Julie; Peytavie, Adrien; Galin, Eric
The Visual Computer, Vol. 33, Num. 6, pp 1005--1015, 2017.
DOI: http://dx.doi.org/10.1007/s00371-017-1393-6
We present an efficient method for generating coherent multi-layer landscapes. We use a dictionary built from exemplars to synthesize high-resolution fully featured terrains from input low-resolution elevation data. Our example-based method consists of analyzing real-world terrain examples and learning the procedural rules directly from these inputs. We take into account not only the elevation of the terrain, but also additional layers such as the slope, orientation, drainage area, the density and distribution of vegetation, and the soil type. By increasing the variety of terrain exemplars, our method allows the user to synthesize and control different types of landscapes and biomes, such as temperate or rain forests, arid deserts and mountains.
Error-aware Construction and Rendering of Multi-scan Panoramas from Massive Point Clouds
Comino, Marc; Andújar, Carlos; Chica, Antoni; Brunet, Pere
Computer Vision and Image Understanding, Vol. 157, pp 43--54, 2017.
DOI: http://dx.doi.org/10.1016/j.cviu.2016.09.011
Obtaining 3D realistic models of urban scenes from accurate range data is nowadays an important research topic, with applications in a variety of fields ranging from Cultural Heritage and digital 3D archiving to monitoring of public works. Processing massive point clouds acquired from laser scanners involves a number of challenges, from data management to noise removal, model compression and interactive visualization and inspection. In this paper, we present a new methodology for the reconstruction of 3D scenes from massive point clouds coming from range lidar sensors. Our proposal includes a panorama-based compact reconstruction where colors and normals are estimated robustly through an error-aware algorithm that takes into account the variance of expected errors in depth measurements. Our representation supports efficient, GPU-based visualization with advanced lighting effects. We discuss the proposed algorithms in a practical application on urban and historical preservation, described by a massive point cloud of 3.5 billion points. We show that we can achieve compression rates higher than 97% with good visual quality during interactive inspections.
Single-picture reconstruction and rendering of trees for plausible vegetation synthesis
Argudo, Oscar; Chica, Antoni; Andújar, Carlos
Computers & Graphics, Vol. 57, pp 55--67, 2016.
DOI: http://dx.doi.org/10.1016/j.cag.2016.03.005
State-of-the-art approaches for tree reconstruction either put limiting constraints on the input side (requiring multiple photographs, a scanned point cloud or intensive user input) or provide a representation only suitable for front views of the tree. In this paper we present a complete pipeline for synthesizing and rendering detailed trees from a single photograph with minimal user effort. Since the overall shape and appearance of each tree is recovered from a single photograph of the tree crown, artists can benefit from georeferenced images to populate landscapes with native tree species. A key element of our approach is a compact representation of dense tree crowns through a radial distance map. Our first contribution is an automatic algorithm for generating such representations from a single exemplar image of a tree. We create a rough estimate of the crown shape by solving a thin-plate energy minimization problem, and then add detail through a simplified shape-from-shading approach. The use of seamless texture synthesis results in an image-based representation that can be rendered from arbitrary view directions at different levels of detail. Distant trees benefit from an output-sensitive algorithm inspired by relief mapping. For close-up trees we use a billboard cloud where leaflets are distributed inside the crown shape through a space colonization algorithm. In both cases our representation ensures efficient preservation of the crown shape. Major benefits of our approach include: it recovers the overall shape from a single tree image, involves no tree modeling knowledge and minimal authoring effort, and the associated image-based representation is easy to compress and thus suitable for network streaming.
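As a rough illustration of the radial-distance-map idea (the paper's construction, which combines thin-plate minimization and shape-from-shading, is more involved), a binary crown silhouette can be collapsed to one distance per angular bin around its centroid; the bin count and names here are arbitrary choices:

```python
import numpy as np

def radial_distance_map(mask, n_angles=64):
    """Collapse a binary silhouette (2D boolean array) into a radial
    distance map: for each angle around the centroid, keep the farthest
    silhouette pixel. A simplified sketch, not the paper's code."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    ang = np.arctan2(ys - cy, xs - cx)            # angles in [-pi, pi]
    r = np.hypot(ys - cy, xs - cx)
    bins = ((ang + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    rmap = np.zeros(n_angles)
    np.maximum.at(rmap, bins, r)                  # max radius per bin
    return rmap
```

For a circular crown silhouette, the map is roughly constant at the circle's radius.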
Beacco, Alejandro; Pelechano, Nuria; Andújar, Carlos
Computer Graphics Forum, Vol. 35, Num. 8, pp 32--50, 2016.
DOI: http://dx.doi.org/10.1111/cgf.12774
In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting performance and realism of crowds such as lighting, shadowing, clothing and variability. Finally we provide an exhaustive comparison of the most relevant approaches in the field.
Immersive data comprehension: visualizing uncertainty in measurable models
Brunet, Pere; Andújar, Carlos
Frontiers in Robotics and AI (Virtual Environments section), Vol. 2, Article 22, 2015.
DOI: http://dx.doi.org/10.3389/frobt.2015.00022
Recent advances in 3D scanning technologies have opened new possibilities in a broad range of applications including cultural heritage, medicine, civil engineering, and urban planning. Virtual Reality systems can provide new tools to professionals that want to understand acquired 3D models. In this review paper, we analyze the concept of data comprehension with an emphasis on visualization and inspection tools on immersive setups. We claim that in most application fields, data comprehension requires model measurements, which in turn should be based on the explicit visualization of uncertainty. As 3D digital representations are not faithful, information on their fidelity at local level should be included in the model itself as uncertainty bounds. We propose the concept of Measurable 3D Models as digital models that explicitly encode such local uncertainty bounds. We claim that professionals and experts can strongly benefit from immersive interaction through new specific, fidelity-aware measurement tools, which can facilitate 3D data comprehension. Since noise and processing errors are ubiquitous in acquired datasets, we discuss the estimation, representation, and visualization of data uncertainty. We show that, based on typical user requirements in Cultural Heritage and other domains, application-oriented measuring tools in 3D models must consider uncertainty and local error bounds. We also discuss the requirements of immersive interaction tools for the comprehension of huge 3D and nD datasets acquired from real objects.
Andújar, Carlos; Chica, Antoni; Vico, Miguel Angel; Moya, Sergio; Brunet, Pere
Computer Graphics Forum, Vol. 33, Num. 6, pp 101--117, 2014.
DOI: http://dx.doi.org/10.1111/cgf.12281
In this paper, we present an inexpensive approach to create highly detailed reconstructions of the landscape surrounding a road. Our method is based on a space-efficient semi-procedural representation of the terrain and vegetation supporting high-quality real-time rendering not only for aerial views but also at road level. We can integrate photographs along selected road stretches. We merge the point clouds extracted from these photographs with a low-resolution digital terrain model through a novel algorithm which is robust against noise and missing data. We pre-compute plausible locations for trees through an algorithm which takes into account perceptual cues. At runtime we render the reconstructed terrain along with plants generated procedurally according to pre-computed parameters. Our rendering algorithm ensures visual consistency with aerial imagery and thus it can be integrated seamlessly with current virtual globes.
Argelaguet, Ferran; Andújar, Carlos
Computers & Graphics, Vol. 37, Num. 3, pp 121-136, 2013.
DOI: http://dx.doi.org/10.1016/j.cag.2012.12.003
Computer graphics applications controlled through natural gestures are gaining increasing popularity these days due to recent developments in low-cost tracking systems and gesture recognition technologies. Although interaction techniques through natural gestures have already demonstrated their benefits in manipulation, navigation and avatar-control tasks, effective selection with pointing gestures remains an open problem. In this paper we survey the state-of-the-art in 3D object selection techniques. We review important findings in human control models, analyze major factors influencing selection performance, and classify existing techniques according to a number of criteria. Unlike other components of the application's user interface, pointing techniques need a close coupling with the rendering pipeline, introducing new elements to be drawn, and potentially modifying the object layout and the way the scene is rendered. Conversely, selection performance is affected by rendering issues such as visual feedback, depth perception, and occlusion management. We thus review existing literature paying special attention to those aspects in the boundary between computer graphics and human–computer interaction.
Andújar, Carlos
Computer Graphics Forum, Vol. 31, Num. 6, pp 1973–1983, 2012.
High-quality texture minification techniques, including trilinear and anisotropic filtering, require texture data to be arranged into a collection of pre-filtered texture maps called mipmaps. In this paper, we present a compression scheme for mipmapped textures which achieves much higher quality than current native schemes by exploiting image coherence across mipmap levels. The basic idea is to use a high-quality native compressed format for the upper levels of the mipmap pyramid (to retain efficient minification filtering) together with a novel compact representation of the detail provided by the highest-resolution mipmap. Key elements of our approach include delta-encoding of the luminance signal, efficient encoding of coherent regions through texel runs following a Hilbert scan, a scheme for run encoding supporting fast random-access, and a predictive approach for encoding indices of variable-length blocks. We show that our scheme clearly outperforms native 6:1 compressed texture formats in terms of image quality while still providing real-time rendering of trilinearly filtered textures.
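The run-encoding-with-random-access ingredient can be sketched independently of the Hilbert scan and the compressed texture formats: store (value, length) runs plus a prefix index of run start offsets, so a single texel can be fetched by binary search instead of sequential decoding. All names are illustrative, not from the paper:

```python
import bisect

def rle_encode(texels):
    """Run-length encode a 1D texel sequence (e.g. ordered along a
    Hilbert scan) and record each run's start offset for random access."""
    runs, starts, pos = [], [], 0
    for t in texels:
        if runs and runs[-1][0] == t:
            runs[-1] = (t, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((t, 1))               # start a new run
            starts.append(pos)
        pos += 1
    return runs, starts

def rle_fetch(runs, starts, i):
    """Fetch texel i without decoding: binary-search its containing run."""
    return runs[bisect.bisect_right(starts, i) - 1][0]
```

The prefix index is what makes the scheme compatible with the fast random access that GPU texture sampling requires.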
Andújar, Carlos; Chica, Antoni; Brunet, Pere
Computers & Graphics, Vol. 36, Num. 1, pp 28--37, 2012.
DOI: http://dx.doi.org/10.1016/j.cag.2011.10.005
Computer Graphics and Virtual Reality technologies provide powerful tools for visualizing, documenting and disseminating cultural heritage. Virtual inspection tools have been used proficiently to show cultural artifacts either through the web or in museum exhibits. The usability of the user interface has been recognized to play a crucial role in overcoming the typical fearful attitude of the cultural heritage community towards 3D graphics. In this paper we discuss the design of the user interface for the virtual inspection of the impressive entrance of the Ripoll Monastery in Spain. The system was exhibited in the National Art Museum of Catalonia (MNAC) during 2008, and since June 2011 it has been part of its Romanesque exhibition. The MNAC is the third most visited art museum in Spain, and features the world's largest collection on Romanesque Art. We analyze the requirements from museum curators and discuss the main interface design decisions. The user interface combines (a) focus-plus-context visualization, with focus (detail view) and context (overview) being shown at separate displays, (b) touch-based camera control techniques, and (c) continuous feedback about the exact location of the detail area within the entrance. The interface allows users to aim the camera at any point of the entrance with centimeter accuracy using a single tap. We provide the results of a user study comparing our user interface with alternative approaches. We also discuss the benefits the exhibition had for the cultural heritage community.
Beacco, Alejandro; Andújar, Carlos; Pelechano, Nuria; Bernhard Spanlang
Journal of Computer Animation and Virtual Worlds, Vol. 23, Num. 2, pp 33-47, 2012.
DOI: http://dx.doi.org/10.1002/cav.1422
In this paper, we present a new impostor-based representation for 3D animated characters supporting real-time rendering of thousands of agents. We maximize rendering performance by using a collection of pre-computed impostors sampled from a discrete set of view directions. Our approach differs from previous work on view-dependent impostors in that we use per-joint rather than per-character impostors. Our characters are animated by applying the joint rotations directly to the impostors, instead of choosing a single impostor for the whole character from a set of pre-defined poses. This offers more flexibility in terms of animation clips, as our representation supports any arbitrary pose, and thus, the agent behavior is not constrained to a small collection of pre-defined clips. Because our impostors are intended to be valid for any pose, a key issue is to define a proper boundary for each impostor to minimize image artifacts while animating the agents. We pose this problem as a variational optimization problem and provide an efficient algorithm for computing a discrete solution as a pre-process. To the best of our knowledge, this is the first time a crowd rendering algorithm encompassing image-based performance, small graphics processing unit footprint, and animation independence is proposed.
The ViRVIG Institute
Andújar, Carlos; Navazo, Isabel; Vázquez, Pere-Pau; Patow, Gustavo A.; Pueyo, Xavier
SBC Journal on 3D Interactive Systems, Vol. 2, Num. 2, 2011.
In this paper we present the ViRVIG Institute, a recently created institution that joins two well-known research groups: MOVING in Barcelona, and GGG in Girona. Our main research topics are Virtual Reality devices and interaction techniques, complex data models, realistic materials and lighting, geometry processing, and medical image visualization. We briefly introduce the history of both research groups and present some representative projects. Finally, we sketch our lines for future research.
Beacco, Alejandro; Andújar, Carlos; Pelechano, Nuria
Computer Graphics Forum, Vol. 30, Num. 8, pp 2328--2340, 2011.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2011.02065.x
Rendering detailed animated characters is a major limiting factor in crowd simulation. In this paper we present a new representation for 3D animated characters which supports output-sensitive rendering. Our approach is flexible in the sense that it does not require us to pre-define the animation sequences beforehand, nor to pre-compute a dense set of pre-rendered views for each animation frame. Each character is encoded through a small collection of textured boxes storing colour and depth values. At runtime, each box is animated according to the rigid transformation of its associated bone and a fragment shader is used to recover the original geometry using a dual-depth version of relief mapping. Unlike competing output-sensitive approaches, our compact representation is able to recover high-frequency surface details and reproduces view-motion parallax effectively. Our approach drastically reduces both the number of primitives being drawn and the number of bones influencing each primitive, at the expense of a very slight per-fragment overhead. We show that, beyond a certain distance threshold, our compact representation is much faster to render than traditional level-of-detail triangle meshes. Our user study demonstrates that replacing polygonal geometry by our impostors produces negligible visual artefacts.
Andújar, Carlos; Brunet, Pere; Chica, Antoni; Navazo, Isabel
Computer Graphics Forum, Vol. 29, Num. 8, pp 2456--2468, 2010.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2010.01757.x
In this paper, we present an efficient approach for the interactive rendering of large-scale urban models, which can be integrated seamlessly with virtual globe applications. Our scheme fills the gap between standard approaches for distant views of digital terrains and the polygonal models required for close-up views. Our work is oriented towards city models with real photographic textures of the building facades. At the heart of our approach is a multi-resolution tree of the scene defining multi-level relief impostors. Key ingredients of our approach include the pre-computation of a small set of zenithal and oblique relief maps that capture the geometry and appearance of the buildings inside each node, a rendering algorithm combining relief mapping with projective texture mapping which uses only a small subset of the pre-computed relief maps, and the use of wavelet compression to simulate two additional levels of the tree. Our scheme runs considerably faster than polygonal-based approaches while producing images with higher quality than competing relief-mapping techniques. We show both analytically and empirically that multi-level relief impostors are suitable for interactive navigation through large urban models.
Argelaguet, Ferran; Andújar, Carlos
10th International Symposium on Smart Graphics, pp 115--126, 2010.
DOI: http://dx.doi.org/10.1007/978-3-642-13544-6_11
Predefined camera paths are a valuable tool for the exploration of complex virtual environments. The speed at which the virtual camera travels along different path segments is key for allowing users to perceive and understand the scene while maintaining their attention. Current tools for speed adjustment of camera motion along predefined paths, such as keyframing, interpolation types and speed curve editors, provide the animators with a great deal of flexibility but offer little support for deciding which speed is better for each point along the path. In this paper we address the problem of computing a suitable speed curve for a predefined camera path through an arbitrary scene. We strive to adapt speed along the path to provide non-fatiguing, informative, interesting and concise animations. Key elements of our approach include a new metric based on optical flow for quantifying the amount of change between two consecutive frames, the use of perceptual metrics to disregard optical flow in areas with low image saliency, and the incorporation of habituation metrics to keep the user's attention. We also present the results of a preliminary user study comparing user response with alternative approaches for computing speed curves.
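The core metric can be sketched as a saliency-weighted mean of optical-flow magnitudes, with camera speed set inversely to the perceived change; the habituation term and the flow computation itself are omitted, and all names and parameters below are illustrative:

```python
import numpy as np

def change_metric(flow, saliency):
    """Perceptually weighted amount of change between two frames:
    mean optical-flow magnitude, weighted by per-pixel saliency so
    that motion in low-saliency regions is disregarded."""
    mag = np.linalg.norm(flow, axis=-1)        # (H, W) flow magnitudes
    w = saliency / (saliency.sum() + 1e-12)    # normalized weights
    return float((mag * w).sum())

def camera_speed(metric, target_change, v_max=10.0):
    """Slow the camera where perceived change is high, capped at v_max."""
    return min(target_change / max(metric, 1e-12), v_max)
```

In this sketch the speed curve simply keeps the per-frame perceived change near a target value, which is one way to realize the paper's goal of non-fatiguing yet concise animations.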
Hétroy, Frank; Rey, Stéphanie; Andújar, Carlos; Brunet, Pere; Vinacua, Àlvar
Computer-Aided Design, Vol. 43, Num. 1, pp 101--113, 2010.
DOI: http://dx.doi.org/10.1016/j.cad.2010.09.012
Limitations of current 3D acquisition technology often lead to polygonal meshes exhibiting a number of geometrical and topological defects which prevent them from widespread use. In this paper we present a new method for model repair which takes as input an arbitrary polygonal mesh and outputs a valid 2-manifold triangle mesh. Unlike previous work, our method allows users to quickly identify areas with potential topological errors and to choose how to fix them in a user-friendly manner. Key steps of our algorithm include the conversion of the input model into a set of voxels, the use of morphological operators to allow the user to modify the topology of the discrete model, and the conversion of the corrected voxel set back into a 2-manifold triangle mesh. Our experiments demonstrate that the proposed algorithm is suitable for repairing meshes of a large class of shapes.
Chica, Antoni; Williams, Jason; Andújar, Carlos; Brunet, Pere; Navazo, Isabel; Rossignac, Jarek; Vinacua, Àlvar
Computer Graphics Forum, Vol. 27, Num. 1, pp 36--46, 2008.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2007.01039.x
We present "Pressing", an algorithm for smoothing isosurfaces extracted from binary volumes while recovering their large planar regions (flats). Pressing yields a surface that is guaranteed to contain the samples of the volume classified as interior and exclude those classified as exterior. It uses global optimization to identify flats and constrained bilaplacian smoothing to eliminate sharp features and high-frequencies from the rest of the isosurface. It recovers sharp edges between flat regions and between flat and smooth regions. Hence, the resulting isosurface is usually a much more accurate approximation of the original solid than isosurfaces produced by previously proposed approaches. Furthermore, the segmentation of the isosurface into flat and curved faces and the sharp/smooth labelling of their edges may be valuable for shape recognition, simplification, compression, and various reverse engineering and manufacturing applications.
Optimizing the topological and combinatorial complexity of isosurfaces
Andújar, Carlos; Brunet, Pere; Chica, Antoni; Navazo, Isabel; Rossignac, Jarek; Vinacua, Àlvar
Computer Aided Design, Vol. 37, Num. 8, pp 847--857, 2005.
DOI: http://dx.doi.org/10.1016/j.cad.2004.09.013
Since the publication of the original Marching Cubes algorithm, numerous variations have been proposed for guaranteeing water-tight constructions of triangulated approximations of isosurfaces. Most approaches divide the 3D space into cubes that each occupy the space between eight neighboring samples of a regular lattice. The portion of the isosurface inside a cube may be computed independently of what happens in the other cubes, provided that the constructions for each pair of neighboring cubes agree along their common face. The portion of the isosurface associated with a cube may consist of one or more connected components, which we call sheets. The topology and combinatorial complexity of the isosurface is influenced by three types of decisions made during its construction: (1) how to connect the four intersection points on each ambiguous face, (2) how to form interpolating sheets for cubes with more than one loop, and (3) how to triangulate each sheet. To determine topological properties, it is only relevant whether the samples are inside or outside the object, and not their precise value, if there is one. Previously reported techniques make these decisions based on local (per-cube) criteria, often using precomputed look-up tables or simple construction rules. Instead, we propose global strategies for optimizing several topological and combinatorial measures of the isosurfaces: triangle count, genus, and number of shells. We describe efficient implementations of these optimizations and the auxiliary data structures developed to support them.
Computing maximal tiles and application to impostor-based simplification
Andújar, Carlos; Brunet, Pere; Chica, Antoni; Navazo, Isabel; Rossignac, Jarek; Vinacua, Àlvar
Computer Graphics Forum, Vol. 23, Num. 3, pp 401--410, 2004.
DOI: http://dx.doi.org/10.1111/j.1467-8659.2004.00771.x
The computation of the largest planar region approximating a 3D object is an important problem with wide applications in modeling and rendering. Given a voxelization of the 3D object, we propose an efficient algorithm to solve a discrete version of this problem. The input of the algorithm is the set of grid edges connecting the interior and the exterior of the object (called sticks). Using a voting-based approach, we compute the plane that slices the largest number of sticks and is orientation-compatible with these sticks. The robustness and efficiency of our approach rests on the use of two different parameterizations of the planes with suitable properties. The first of these is exact and is used to retrieve precomputed local solutions of the problem. The second one is discrete and is used in a hierarchical voting scheme to compute the global maximum. This problem has diverse applications that range from finding object signatures to generating simplified models. Here we demonstrate the merits of the algorithm for efficiently computing an optimized set of textured impostors for a given polygonal model.
Conferences
Andújar, Carlos; Chica, Antoni; Comino, Marc
EuroVis 2020, Eurographics/IEEE VGTC Conference on Visualization 2020, pp 151--155, 2020.
DOI: http://dx.doi.org/10.2312/evs.20201064
Finding robust correspondences between images is a crucial step in photogrammetry applications. The traditional approach to visualize sparse matches between two images is to place them side-by-side and draw link segments connecting pixels with matching features. In this paper we present new visualization techniques for sparse correspondences between image pairs. Key ingredients of our techniques include (a) the clustering of consistent matches, (b) the optimization of the image layout to minimize occlusions due to the superimposed links, (c) a color mapping to minimize color interference among links, (d) a criterion for giving visibility priority to isolated links, (e) the bending of link segments to put apart nearby links, and (f) the use of glyphs to facilitate the identification of matching keypoints. We show that our technique substantially reduces the clutter in the final composite image and thus makes it easier to detect and inspect both inlier and outlier matches. Potential applications include the validation of image pairs in difficult setups and the visual comparison of feature detection/matching algorithms.
Easy Authoring of Image-Supported Short Stories for 3D Scanned Cultural Heritage
Comino, Marc; Chica, Antoni; Andújar, Carlos
Eurographics Workshop on Graphics and Cultural Heritage, 2020.
Visual storytelling is a powerful tool for Cultural Heritage communication. However, traditional authoring tools either produce videos that cannot be fully integrated with 3D scanned models, or require 3D content creation skills that imply a high entry barrier for Cultural Heritage experts. In this paper we present an image-supported, video-based authoring tool allowing non-3D-experts to create rich narrative content that can be fully integrated in immersive virtual reality experiences. Given an existing 3D scanned model, each story is based on a user-provided photo or system-proposed image. First, the system automatically registers the image against the 3D model, and creates an undistorted version that will serve as a fixed background image for the story. Authors can then use their favorite presentation software to annotate or edit the image while recording their voice. The resulting video is processed automatically to detect per-frame regions-of-interest. At visualization time, videos are projected onto the 3D scanned model, allowing the audience to watch the narrative piece in its surrounding spatial context. We discuss multiple color blending techniques, inspired by detail textures, to provide high-resolution detail. The system uses the image-to-model registration data to find suitable locations for triggers and avatars that draw the user's attention towards the 3D model parts being referred to by the presenter. We conducted an informal user study to evaluate the quality of the immersive experience. Our findings suggest that our approach is a valuable tool for fast and easy creation of fully-immersive visual storytelling experiences.
Fons, Joan; Chica, Antoni; Andújar, Carlos
GRAPP, pp 71--82, 2020.
DOI: http://dx.doi.org/10.5220/0008935900710082
The popularization of inexpensive 3D scanning, 3D printing, 3D publishing and AR/VR display technologies has renewed the interest in open-source tools providing the geometry processing algorithms required to clean, repair, enrich, optimize and modify point-based and polygonal-based models. Nowadays, there is a large variety of such open-source tools whose user community includes 3D experts but also 3D enthusiasts and professionals from other disciplines. In this paper we present a Python-based tool that addresses two major caveats of current solutions: the lack of easy-to-use methods for the creation of custom geometry processing pipelines (automation), and the lack of a suitable visual interface for quickly testing, comparing and sharing different pipelines, supporting rapid iterations and providing dynamic feedback to the user (demonstration). From the user's point of view, the tool is a 3D viewer with an integrated Python console from which internal or external Python code can be executed. We provide an easy-to-use but powerful API for element selection and geometry processing. Key algorithms are provided by a high-level C library exposed to the viewer via Python-C bindings. Unlike competing open-source alternatives, our tool has a minimal learning curve and typical pipelines can be written in a few lines of Python code.
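The automation idea, pipelines written in a few lines of Python, might look like the following sketch. The class and step names here are hypothetical illustrations, not the tool's actual API:

```python
# Hypothetical fluent pipeline in the spirit described above.
class Pipeline:
    """Chain geometry-processing steps and run them on a model."""
    def __init__(self):
        self.steps = []

    def then(self, fn, **params):
        self.steps.append((fn, params))
        return self                      # allow fluent chaining

    def run(self, model):
        for fn, params in self.steps:
            model = fn(model, **params)
        return model

# Illustrative steps operating on a toy model represented as a dict.
def remove_duplicates(model):
    model = dict(model)
    model["vertices"] = sorted(set(model["vertices"]))
    return model

def decimate(model, keep=0.5):
    model = dict(model)
    n = max(1, int(len(model["vertices"]) * keep))
    model["vertices"] = model["vertices"][:n]
    return model
```

Usage would read as a single chained expression, e.g. `Pipeline().then(remove_duplicates).then(decimate, keep=0.5).run(model)`, which is the kind of concise scripting the paper's automation goal refers to.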
Comino, Marc; Chica, Antoni; Andújar, Carlos
CEIG-Spanish Computer Graphics Conference (2019), pp 51--57, 2019.
DOI: http://dx.doi.org/10.2312/ceig.20191203
Nowadays, there are multiple available range scanning technologies which can capture extremely detailed models of real-world surfaces. The result of such a process is usually a set of point clouds which can contain billions of points. While these point clouds can be used and processed offline for a variety of purposes (such as surface reconstruction and offline rendering), it is infeasible to interactively visualize the raw point data. The most common approach is to use a hierarchical representation to render varying-size oriented splats, but this method also has limitations, as usually a single color is encoded for each point sample. Some authors have proposed the use of color-textured splats, but these either have been designed for offline rendering or do not address the efficient encoding of image datasets into textures. In this work, we propose extending point clouds by encoding their color information into textures and using a pruning and scaling rendering algorithm to achieve interactive rendering. Our approach can be combined with hierarchical point-based representations to allow for real-time rendering of massive point clouds on commodity hardware.
Andújar, Carlos; Argudo, Oscar; Besora, Isaac; Brunet, Pere; Chica, Antoni; Comino, Marc
XXVIII Spanish Computer Graphics Conference (CEIG 2018), Madrid, Spain, June 27-29, 2018, pp 25--32, 2018.
DOI: http://dx.doi.org/10.2312/ceig.20181162
Structure-from-motion along with multi-view stereo techniques jointly allow for the inexpensive scanning of 3D objects (e.g. buildings) using just a collection of images taken from commodity cameras. Despite major advances in these fields, a major limitation of dense reconstruction algorithms is that correct depth/normal values are not recovered on specular surfaces (e.g. windows) and parts lacking image features (e.g. flat, textureless parts of the facade). Since these reflective properties are inherent to the surface being acquired, images from different viewpoints hardly contribute to solve this problem. In this paper we present a simple method for detecting, classifying and filling non-valid data regions in depth maps produced by dense stereo algorithms. Triangle meshes reconstructed from our repaired depth maps exhibit much higher quality than those produced by state-of-the-art reconstruction algorithms like Screened Poisson-based techniques.
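A diffusion-style hole-filling pass of the kind described, detecting invalid depth values and filling them from valid neighbours, could be sketched as follows. This is a simplified stand-in: the paper's detection and classification of non-valid regions is more involved, and the wrap-around at image borders introduced by `np.roll` is ignored here:

```python
import numpy as np

def fill_invalid_depths(depth, invalid_value=0.0, iterations=50):
    """Repeatedly replace invalid pixels by the mean of their valid
    4-neighbours until the hole is covered (or iterations run out)."""
    d = depth.astype(float).copy()
    valid = d != invalid_value
    for _ in range(iterations):
        if valid.all():
            break
        acc = np.zeros_like(d)          # sum of valid neighbour depths
        cnt = np.zeros_like(d)          # number of valid neighbours
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted = np.roll(d, (dy, dx), axis=(0, 1))
            svalid = np.roll(valid, (dy, dx), axis=(0, 1))
            acc += np.where(svalid, shifted, 0.0)
            cnt += svalid
        fill = ~valid & (cnt > 0)       # invalid pixels with valid neighbours
        d[fill] = acc[fill] / cnt[fill]
        valid |= fill
    return d
```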
Andújar, Carlos; Brunet, Pere; Buxareu, Jerónimo; Fons, Joan; Laguarda, Narcís; Pascual, Jordi; Pelechano, Nuria
EUROGRAPHICS Workshop on Graphics and Cultural Heritage (EG GCH) . November 12-15. Viena (Austria), pp 47--56, 2018.
DOI: http://dx.doi.org/10.2312/gch.20181340
PDF
Virtual Reality (VR) simulations have long been proposed to allow users to explore both yet-to-be-built buildings in architectural design, and ancient, remote or disappeared buildings in cultural heritage. In this paper we describe an on-going VR project on a UNESCO World Heritage Site that simultaneously addresses both scenarios: supporting architects in the task of designing the remaining parts of a large unfinished building, and simulating existing parts that define the environment that new designs must conform to. The main challenge for the team of architects is to advance towards the project completion while being faithful to Gaudí's original project, since many plans, drawings and plaster models were lost. We analyze the main requirements for collaborative architectural design in such a unique scenario, describe the main technical challenges, and discuss the lessons learned after one year of use of the system.
GL-Socket: A CG Plugin-based Framework for Teaching and Assessment
Andújar, Carlos; Chica, Antoni; Fairén, Marta; Vinacua, Àlvar
EG 2018 - Education Papers, pp 25--32, 2018.
DOI: http://dx.doi.org/10.2312/eged.20181003
In this paper we describe a plugin-based C++ framework for teaching OpenGL and GLSL in introductory Computer Graphics courses. The main strength of the framework architecture is that student assignments are mostly independent and thus can be completed, tested and evaluated in any order. When students complete a task, the plugin interface forces a clear separation of initialization, interaction and drawing code, which in turn facilitates code reusability. Plugin code can access scene, camera, and OpenGL window methods through a simple API. The plugin interface is flexible enough to allow students to complete tasks requiring shader development, object drawing, and multiple rendering passes. Students are provided with sample plugins with basic scene drawing and camera control features. One of the plugins that the students receive contains a shader development framework with self-assessment features. We describe the lessons learned after using the tool for four years in a Computer Graphics course involving more than one hundred Computer Science students per year.
Tree Variations
Argudo, Oscar; Andújar, Carlos; Chica, Antoni
CEIG - Spanish Computer Graphics Conference, pp 121--130, 2017.
DOI: http://dx.doi.org/10.2312/ceig.20171218
The cost-effective generation of realistic vegetation is still a challenging topic in computer graphics. The simplest representation of a tree consists of a single texture-mapped billboard. Although a tree billboard does not support top views, this is the most common representation for still image generation in areas such as architecture rendering. In this paper we present a new approach to generate new tree models from a small collection of RGBA images of trees. Key ingredients of our method are the representation of the tree contour space with a small set of basis vectors, the automatic crown/trunk segmentation, and the continuous transfer of RGBA color from the exemplar images to the synthetic target. Our algorithm allows the efficient generation of an arbitrary number of tree variations and thus provides a fast solution to add variety among trees in outdoor scenes.
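The synthesis of new contours as convex combinations of exemplar contours can be sketched as below. The sketch assumes the exemplar contours have already been resampled so that point i of one contour corresponds to point i of every other, which is a precondition this illustration takes for granted:

```python
import numpy as np

def synthesize_contour(exemplars, weights):
    """Blend exemplar contours (each an (n, 2) array of corresponding
    points) with convex weights: non-negative and summing to one."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9, "weights must be convex"
    stack = np.stack(exemplars)             # shape (k, n, 2)
    return np.tensordot(w, stack, axes=1)   # weighted sum -> (n, 2)
```

Because the weights are convex, the synthesized contour always stays inside the convex hull of the exemplars in contour space, which keeps the variations plausible with respect to the input species.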
Occlusion aware hand pose recovery from sequences of depth images
Meysam Madadi; Sergio Escalera; Carruesco, Alex; Andújar, Carlos; Xavier Baró; Jordi González
12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), pp 230-237, 2017.
DOI: http://dx.doi.org/10.1109/FG.2017.37
State-of-the-art approaches on hand pose estimation from depth images have reported promising results under fairly controlled conditions. In this paper we propose a two-step pipeline for recovering the hand pose from a sequence of depth images. The pipeline has been designed to deal with images taken from any viewpoint and exhibiting a high degree of finger occlusion. In a first step we initialize the hand pose using a part-based model, fitting a set of hand components in the depth images. In a second step we consider temporal data and estimate the parameters of a trained bilinear model consisting of shape and trajectory bases. Results on a synthetic, highly-occluded dataset demonstrate that the proposed method outperforms most recent pose recovering approaches, including those based on CNNs.
Yağmur Güçlütürk; Umut Güçlü; Marc Pérez; Hugo Escalante; Xavier Baró; Isabelle Guyon; Andújar, Carlos; Julio Jacques Jr; Meysam Madadi; Sergio Escalera; M. A. J. van Gerven; R. van Lier
ICCVW, 2017.
DOI: http://dx.doi.org/10.1109/ICCVW.2017.367
Automatic prediction of personality traits is a subjective task that has recently received much attention. Specifically, automatic apparent personality trait prediction from multimodal data has emerged as a hot topic within the field of computer vision and, more particularly, the so-called "looking at people" sub-field. Considering "apparent" personality traits as opposed to real ones considerably reduces the subjectivity of the task. Real-world applications are encountered in a wide range of domains, including entertainment, health, human computer interaction, recruitment and security. Predictive models of personality traits are useful for individuals in many scenarios (e.g., preparing for job interviews, preparing for public speaking). However, these predictions in and of themselves might be deemed to be untrustworthy without human understandable supportive evidence. Through a series of experiments on a recently released benchmark dataset for automatic apparent personality trait prediction, this paper characterizes the audio and visual information that is used by a state-of-the-art model while making its predictions, so as to provide such supportive evidence by explaining predictions made. Additionally, the paper describes a new web application, which gives feedback on apparent personality traits of its users by combining model predictions with their explanations.
AdaptiveCave: A new high-resolution, multi-projector VR system
Andújar, Carlos; Brunet, Pere; Díaz-García, Jesús; Vico, Miguel Angel; Vinacua, Àlvar
In Proc. of XXIV Congreso Español de Informática Gráfica (CEIG), pp 11-20, 2014.
In this paper, a novel four-wall, passive-stereo, multi-projector CAVE architecture is presented. It is powered by 40 (possibly different) off-the-shelf DLP projectors controlled by 12 PCs. We have achieved high resolution while significantly reducing the overall cost, resulting in a high-brightness, 2000 x 2000 pixel resolution on each of the 4 walls. The AdaptiveCave VR system offers increased versatility both in terms of projectors and screen architecture. First, the system works with any mix of a wide range of projector models, which can be substituted one by one at any moment for more modern or cheaper ones. Second, the self-calibration software, which guarantees a uniform final image with concordance and continuity, can be adapted to many other wall and screen configurations. The AdaptiveCave project includes the set-up and all related software components: geometric and chromatic calibration, simultaneous rendering on 40 projected viewports, synchronization and interaction. Interaction is based on a cableless, Kinect-based gesture interface with natural interaction paradigms.
Beacco, Alejandro; Andújar, Carlos; Pelechano, Nuria; Spanlang, Bernhard
Eurographics Symposium on Rendering, pp 1, 2013.
PDF
We present two methods for rendering thousands of animated characters in real time. We maximize rendering performance by using a collection of pre-computed impostors sampled from a discrete set of view directions. The first method is based on relief impostors and the second on flat impostors. Our work differs from previous approaches on view-dependent impostors in that we use per-joint rather than per-character impostors. Characters are animated by applying the joint rotations directly to the impostors, instead of choosing a single impostor for the whole character from a set of predefined poses. This representation supports any arbitrary pose, and thus the agent behavior is not constrained to a small collection of predefined clips. To the best of our knowledge, this is the first crowd rendering algorithm to combine image-based performance, a small GPU footprint and animation independence.
Muñoz-Pandiella, Imanol; Andújar, Carlos; Patow, Gustavo A.
In Proc. of Eurographics Workshop on Urban Data Modelling and Visualisation, pp 13--16, 2013.
DOI: http://dx.doi.org/10.2312/UDMV/UDMV13/013-016
Real-time rendering of cities with realistic global illumination is still an open problem. In this paper we propose a two-step algorithm to simulate the nocturnal illumination of a city. The first step computes an approximate aerial solution using simple textured quads for each street light. The second step uses photon mapping to locally compute the global illumination coming from light sources close to the viewer. Then we transfer the local, high-quality solution to the low-resolution buffers used for aerial views, refining the aerial solution with accurate information from the local simulation. Our approach achieves real-time frame rates on commodity hardware.
Interactive rendering of urban models with global illumination
Argudo, Oscar; Andújar, Carlos; Patow, Gustavo A.
In Proc. of Computer Graphics International, pp 1-10, 2012.
PDF
We propose a photon mapping-based technique for the efficient rendering of urban landscapes. Unlike traditional photon mapping approaches, we accumulate the photon energy into a collection of 2D photon buffers encoding the incoming radiance for a superset of the surfaces contributing to the current image. We define an implicit parameterization to map surface points onto photon buffer locations. This is achieved through a cylindrical projection for the building blocks plus an orthogonal projection for the terrain. An adaptive scheme is used to adapt the resolution of the photon buffers to the viewing conditions. Our customized photon mapping algorithm combines multiple acceleration strategies to provide efficient rendering during walkthroughs and flythroughs with minimal temporal artifacts. To the best of our knowledge, the algorithm we present in this paper is the first one to address the problem of interactive global illumination for large urban landscapes.
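The implicit cylindrical parameterization mapping a surface point of a building block to photon-buffer coordinates could be sketched as below. The axis placement and the texel rounding are illustrative assumptions of this sketch, not details taken from the paper:

```python
import math

def cylindrical_buffer_coords(point, center, height, rows, cols):
    """Map a 3D point on a building block to (row, col) texel coordinates
    of its photon buffer via a cylindrical projection around a vertical
    axis through `center` (y is assumed to be the up direction)."""
    x, y, z = (point[i] - center[i] for i in range(3))
    theta = math.atan2(z, x)                 # angle around the axis
    u = (theta + math.pi) / (2.0 * math.pi)  # [0, 1) around the cylinder
    v = min(max(y / height, 0.0), 1.0)       # [0, 1] along the axis
    row = min(int(v * (rows - 1) + 0.5), rows - 1)
    col = min(int(u * cols), cols - 1)
    return row, col
```

Accumulating photon energy at these (row, col) locations, rather than in a world-space kd-tree, is what lets the renderer gather radiance with plain texture look-ups during walkthroughs.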
Output-Sensitive Rendering of Detailed Animated Characters for Crowd Simulation
Beacco, Alejandro; Spanlang, Bernhard; Andújar, Carlos; Pelechano, Nuria
Congreso Español de Informatica Grafica. CEIG'10, 2010.
Rendering detailed animated characters is a major limiting factor in crowd simulation. In this paper we present a new representation for 3D animated characters which supports output-sensitive rendering. Each character is encoded through a small collection of textured boxes storing color and depth values. At runtime, each box is animated according to the rigid transformation of its associated bone. A fragment shader is used to recover the original geometry using an adapted version of relief mapping. Unlike competing output-sensitive approaches, our compact representation is able to recover high-frequency surface details and reproduces view-motion parallax. Furthermore, our approach does not require us to predefine the animation sequences nor to select a subset of discrete views. Our user study demonstrates that our approach allows for many more simulated agents with negligible visual artifacts.
Argelaguet, Ferran; Andújar, Carlos
Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology, pp 163--170, 2009.
DOI: http://dx.doi.org/10.1145/1643928.1643966
The act of pointing to graphical elements is one of the fundamental tasks in Human-Computer Interaction. In this paper we analyze visual feedback techniques for accurate pointing on stereoscopic displays. Visual feedback techniques must provide precise information about the pointing tool and its spatial relationship with potential targets. We show both analytically and empirically that current approaches provide poor feedback on stereoscopic displays, resulting in low user performance when accurate pointing is required. We propose a new feedback technique following a camera viewfinder metaphor. The key idea is to locally flatten the scene objects around the pointing direction to facilitate their selection. We present the results of a user study comparing cursor-based and ray-based visual feedback techniques with our approach. Our user studies indicate that our viewfinder metaphor clearly outperforms competing techniques in terms of user performance and binocular fusion.
Trueba, Ramón; Andújar, Carlos; Argelaguet, Ferran
Proceedings of the 15th Joint virtual reality Eurographics conference on Virtual Environments, pp 93--100, 2009.
DOI: http://dx.doi.org/10.2312/EGVE/JVRC09/093-100
The World in Miniature (WIM) metaphor allows users to select, manipulate and navigate efficiently in virtual environments. In addition to the first-person perspective offered by typical VR applications, the WIM offers a second dynamic viewpoint through a hand-held miniature copy of the environment. In this paper we explore different strategies to allow the user to interact with the miniature replica at multiple levels of scale. Unlike competing approaches, we support complex indoor environments by explicitly handling occlusion. We discuss algorithms for selecting the part of the scene to be included in the replica, and for providing a clear view of the region of interest. Key elements of our approach include an algorithm to recompute the active region from a subdivision of the scene into cells, and a view-dependent algorithm to cull away occluding geometry through a small set of slicing planes roughly oriented along the main occluding surfaces. We present the results of a user study showing that our technique clearly outperforms competing approaches on spatial tasks performed in densely-occluded scenes.