<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Dr Santiago Rentería</title>
    <description>Computer Scientist | Creative Developer</description>
    <link>https://www.renterialab.com/</link>
    <atom:link href="https://www.renterialab.com/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Sat, 07 Mar 2026 05:25:03 +0000</pubDate>
    <lastBuildDate>Sat, 07 Mar 2026 05:25:03 +0000</lastBuildDate>
    <generator>Jekyll v3.10.0</generator>
    
      <item>
        <title>Dubbing Point Processes</title>
        <description>&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/dpp_sampling/hutchinson_924_spectrograms.png&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;p&gt;In this project I play with the implications of sampling and dubbing tape recordings from &lt;a href=&quot;https://slwa.wa.gov.au/stories/slwa-abc-radio/john-hutchinson-birdsong-collection&quot;&gt;John Hutchinson’s sound archive&lt;/a&gt; with stochastic processes. Hutchinson, a self-taught field recordist, began capturing these unique sounds in 1959 during his work with the Department of Agriculture. The archive offers a captivating journey through time and across diverse regions of Western Australia. Given the sheer volume of recordings (over 130 hours!), manual exploration (solely by ear) proved impractical. Instead, I adopted a hybrid data-driven approach to &lt;em&gt;sonic foraging&lt;/em&gt;, implementing a sampling mechanism to extract “aural summaries” from the most intriguing parts of the archive. Imagine creating a sonic “thumbnail”, but instead of compressing images, we compress entire archives by creating representative mosaics from their most salient one-second sound fragments (a spectrogram representation of such a mosaic is shown in the picture above). For this purpose, I used stochastic point processes to select fragments with high diversity. This ensured a representative selection, enabled fast browsing, and avoided the redundancy of (uniform and independent) random sampling. Of course, what counts as representative depends on the task at hand. To address this, I relied on wavelet scattering networks. These are like deep learning networks, but with filters defined a priori in terms of wavelet functions rather than filters approximated from data. Finally, to showcase the potential of this technique, I programmed a bespoke interface to mix the fragments live. The resulting aural exploration was presented at the Data Visualization Institute, University of Technology Sydney.&lt;/p&gt;

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/dpp_sampling/interface_screenshot.png&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;p&gt;The interface has a granular sustain. Each fragment has its own track with a send to FX. The balance of effects can be controlled before the master output. The number box next to ‘T’ is for transposition. Loop start and end are shown in samples. Sample # represents a draw from the archive using DPPs, as described below. Since actual files from the archive are retrieved (as opposed to being synthesised), it is possible to use the retrieved fragments as entry points to previously ‘unlistened’ regions (ie sonic foraging).&lt;/p&gt;
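
&lt;p&gt;Loop points in samples relate to clock time through the sample rate. A minimal sketch of the conversion (the 44.1 kHz rate here is an assumption for illustration, not necessarily the archive’s actual digitisation rate):&lt;/p&gt;

```python
# Convert a loop region from seconds to sample indices.
# SR is an assumed rate; the archive may have been digitised differently.
SR = 44100

def seconds_to_samples(t, sr=SR):
    return int(round(t * sr))

loop_start = seconds_to_samples(12.5)            # position of the fragment
loop_end = loop_start + seconds_to_samples(1.0)  # one-second fragment
print(loop_start, loop_end)
```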

&lt;h1 id=&quot;aural-summaries-unveiling-diversity-through-stochastic-sampling&quot;&gt;Aural summaries: Unveiling diversity through stochastic sampling&lt;/h1&gt;

&lt;p&gt;Imagine exploring a vast sound library. How do you quickly grasp its essence without listening to &lt;em&gt;everything&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;Let’s put it another way: how do you read through the pages of an infinite book of sound without losing your mind? You can’t use bookmarks because the pages fall through your fingers.&lt;/p&gt;

&lt;p&gt;I tackled this challenge (and kept my sanity in return) by employing a method called Determinantal Point Processes (DPPs). DPPs are mathematical models originally used to represent the behaviour of fermions (such as electrons), which naturally repel each other (no two can occupy the same quantum state). This same principle can be applied to sampling sound files! It turns out DPPs can help us select sub-collections of short audio snippets that are as sonically diverse as possible. Think of it like drawing sounds from a bag, but the smart way: you’re guaranteed to pick contrasting textures, avoiding redundancy. Along these lines, a recent &lt;a href=&quot;https://dcase.community/documents/workshop2022/proceedings/DCASE2022Workshop_Outidrarine_34.pdf&quot;&gt;paper&lt;/a&gt; proposed DPPs to address data exploration in very large audio recordings. This is of great value in ecoacoustics, where audio recordings are made longitudinally for years to understand environmental changes through persistent patterns in soundscape dynamics.&lt;/p&gt;
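
&lt;p&gt;To give a flavour of how such a selection can be computed, here is a minimal sketch: a greedy approximation of the most diverse k-subset under a similarity kernel, with toy feature vectors standing in for real scattering outputs. This is an illustration of the principle, not the project’s exact sampler.&lt;/p&gt;

```python
import numpy as np

def greedy_diverse_subset(features, k):
    # Similarity kernel over all fragments; in the project the rows of
    # the feature matrix would be wavelet scattering outputs.
    L = features @ features.T
    chosen, remaining = [], list(range(len(features)))
    for _ in range(k):
        # Take the item that maximises the determinant of L restricted
        # to the chosen subset (a greedy stand-in for k-DPP sampling).
        best = max(remaining, key=lambda i: np.linalg.det(
            L[np.ix_(chosen + [i], chosen + [i])]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Three toy fragments: 0 and 1 sound nearly identical, 2 is different.
X = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
print(greedy_diverse_subset(X, 2))  # picks the two contrasting fragments
```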

&lt;p&gt;So, how does it work? First, a wavelet scattering network (or, for the pros, any adequate feature extractor) is used to analyse each sound fragment, essentially creating a unique “fingerprint” (a feature vector) that captures the sonic characteristics of interest (see the picture below for a 2D representation of such fingerprints). DPPs then leverage these fingerprints to select audio fragments that differ as much as possible (ie repel each other) in the feature space. Unlike methods which sample fragments uniformly at random, DPPs do not get stuck repeating similar sounds. Intriguingly, this property is given by the mathematical structure from which the processes derive their name: the probability of a random subset is assigned according to the determinant of a kernel function. In our case this kernel is expressed as a matrix of numbers built from the wavelet scattering outputs for all sound fragments.&lt;/p&gt;
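
&lt;p&gt;The determinant-rewards-diversity intuition can be checked with toy numbers (illustrative values, not actual archive features): a pair of near-duplicate fragments gets a near-zero determinant, while a contrasting pair scores much higher.&lt;/p&gt;

```python
import numpy as np

# Rows stand in for per-fragment fingerprints (scattering outputs in the project).
F = np.array([[1.0, 0.0],    # fragment A
              [0.98, 0.2],   # fragment B, nearly identical to A
              [0.0, 1.0]])   # fragment C, very different from A
L = F @ F.T  # similarity kernel: entry (i, j) compares fragments i and j

det_similar = np.linalg.det(L[np.ix_([0, 1], [0, 1])])  # subset {A, B}
det_diverse = np.linalg.det(L[np.ix_([0, 2], [0, 2])])  # subset {A, C}
print(round(det_similar, 3), round(det_diverse, 3))
```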

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/dpp_sampling/hutchinson_UMAP.png&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;p&gt;Overall, with this approach, somewhere between higher mathematics and experimental sound sampling, I wanted to push the boundaries of curation and soundscape composition. What better way to do it than to improvise with a whole archive of wild sounds? These days most generative systems are based on prompts and do not reference their own archival origins. In contrast, with this experimental method of sampling, I advance a new form of creating sound mosaics through archival listening. No longer passively prompting with words, but dubbing and remixing fragments from the archive in real time à la musique concrète. A hot pot of sounds, a mishmash of wavelets turned sukiyaki for the ears!&lt;/p&gt;

&lt;p&gt;In a more media-archaeological sense (media archaeology being a particular mode of inquiry into how media work), I performed the role of a machine learning model. By listening to a batch of 20 windowed one-second fragments from a whole archive, I attempted to produce soundscapes via improvisation, a genre reflecting the trial-and-error nature of such models. The main points put forward by this performance are:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Most machine learning models do not implement smart-sampling methods like DPP to create training batches (see &lt;a href=&quot;https://arxiv.org/abs/1705.00607&quot;&gt;this&lt;/a&gt; for an exception). In this case it was me who curated what DPP had already sampled from the archive.&lt;/li&gt;
  &lt;li&gt;Machines ‘listen’ through standardised filtering or data-driven feature extraction and not by tapping into the fully-embodied experience of a human listening and performing before an audience.&lt;/li&gt;
  &lt;li&gt;The live-mixing interface displays the potential of hybridising human and machine capabilities in archive-based soundscape composition. In cultural theory, these cybernetic chimeras have been called assemblages or social machines.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;credits-and-event-documentation&quot;&gt;Credits and event documentation&lt;/h1&gt;

&lt;p&gt;Special thanks to &lt;a href=&quot;https://andrewburrell.net&quot;&gt;Andrew Burrell&lt;/a&gt; and &lt;a href=&quot;https://zoesadokierski.com&quot;&gt;Zoë Sadokierski&lt;/a&gt; for their support and for including my proposal in the programme.&lt;/p&gt;

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/dpp_sampling/uts-eflyer.png&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/dpp_sampling/uts-doc.JPG&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;
</description>
        <pubDate>Thu, 13 Jun 2024 00:00:00 +0000</pubDate>
        <link>https://www.renterialab.com/research/dpp_sampling.html</link>
        <guid isPermaLink="true">https://www.renterialab.com/research/dpp_sampling.html</guid>
        
        <category>Media Art</category>
        
        <category>featured</category>
        
        
        <category>research</category>
        
      </item>
    
      <item>
        <title>Dadamining Datamining</title>
<description>&lt;p&gt;Data mining methods for animal sound are used to produce a multichannel soundscape. Sound fragments of a Western Australian archive are subjected to two machine listening algorithms. The first assigns numeric addresses to soundscape regions by signal similarity (ie neighbouring sounds have similar sound textures). The second segregates the product of algorithmic composition, a monophonic recording, into six audio channels. As a whole, the resulting soundscape displays a mundane industrial process of datamining sound with no analytic purpose but a dadaist impulse of listening to found fragments. Click &lt;a href=&quot;https://www.babbler-research.com/&quot;&gt;here&lt;/a&gt; for more information on the scientific use of this sound archive.&lt;/p&gt;
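
&lt;p&gt;A hypothetical sketch of the addressing idea (the actual algorithm used is not documented here): order fragments along the first principal component of their features, so that neighbouring addresses have similar textures.&lt;/p&gt;

```python
import numpy as np

def similarity_addresses(features):
    # Project per-fragment features onto their first principal direction
    # and number the fragments along that axis, so neighbouring addresses
    # point to similar-sounding regions.
    centred = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    projection = centred @ vt[0]
    order = np.argsort(projection)
    addresses = np.empty(len(features), dtype=int)
    addresses[order] = np.arange(len(features))
    return addresses

# Toy features: fragments 0/1 are alike, as are fragments 2/3.
F = np.array([[0.0, 0.1], [0.05, 0.12], [2.0, 1.9], [2.1, 2.0]])
print(similarity_addresses(F))  # similar fragments get adjacent addresses
```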

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/dadamining/dadamining_diag.png&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;p&gt;In my arts-based research, machine listening is not a double of ear-bodied listening experiences, nor a continuation of computation in the domain of digital audio. As a regime of automation, machine listening constitutes an archival force. Digital sound memories are industrialised with algorithmic addressing schemes and filtering protocols which are never fully autonomous as the myth of artificial intelligence suggests.&lt;/p&gt;

&lt;p&gt;If sound ecologies are open-ended and more-than-human, how can the (re)generative performance of machine listening subvert reductive technical desires and unsettle (ie anarchive) what has already been fully pre-empted and automated in the archive?&lt;/p&gt;

&lt;p&gt;This work was premiered in Melbourne (Australia) at the festival &lt;a href=&quot;https://nowornever.melbourne.vic.gov.au/event/planetary-auditions&quot;&gt;NONSTOP WKND: Planetary Auditions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Binaural recording available &lt;a href=&quot;https://santiagorenteria.bandcamp.com/album/spectral-de-compositions&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Wed, 18 Oct 2023 00:00:00 +0000</pubDate>
        <link>https://www.renterialab.com/works/dadamining.html</link>
        <guid isPermaLink="true">https://www.renterialab.com/works/dadamining.html</guid>
        
        <category>Media Art</category>
        
        <category>featured</category>
        
        
        <category>works</category>
        
      </item>
    
      <item>
        <title>Birdsong Phrase Classification With Siamese Neural Networks</title>
<description>&lt;p&gt;As part of my master’s thesis I developed a “Shazam” for birdsong based on siamese neural networks, a few-shot machine learning technique capable of recognizing &lt;a href=&quot;https://ebird.org/species/casvir&quot;&gt;Cassin’s Vireo&lt;/a&gt; song elements. Below you will find the recorded live stream of my thesis defense.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Abstract&lt;/em&gt;: The process of learning good features to discriminate among numerous and different bird phrases is computationally expensive. Moreover, it might be impossible to achieve acceptable performance in cases where training data is scarce and classes are unbalanced. To address this issue, we propose a few-shot learning task in which an algorithm must make predictions given only a few instances of each class. We compared the performance of different Siamese Neural Networks at metric learning over the set of Cassin’s Vireo syllables. Then, the network features were reused for the few-shot classification task. With this approach we overcame the limitations of data scarcity and class imbalance while achieving state-of-the-art performance.&lt;/p&gt;
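
&lt;p&gt;As a sketch of the few-shot step (not the thesis’s exact pipeline): once a siamese branch maps phrases to embeddings, a query can be labelled by its nearest class prototype. The linear ‘embedding’ below is a stand-in for the trained network, and the phrase labels are hypothetical.&lt;/p&gt;

```python
import numpy as np

def embed(x, W):
    # Stand-in for the shared siamese branch (a trained network in practice).
    return x @ W

def few_shot_classify(query, support, labels, W):
    # Average the support embeddings per class, then return the label of
    # the prototype closest to the query embedding.
    prototypes = {}
    for lab in set(labels):
        members = [embed(s, W) for s, l in zip(support, labels) if l == lab]
        prototypes[lab] = np.mean(members, axis=0)
    dists = {lab: np.linalg.norm(embed(query, W) - p)
             for lab, p in prototypes.items()}
    return min(dists, key=dists.get)

W = np.eye(2)  # identity weights, purely illustrative
support = [np.array([0.0, 1.0]), np.array([0.1, 0.9]),
           np.array([1.0, 0.0]), np.array([0.9, 0.1])]
labels = ['phrase_a', 'phrase_a', 'phrase_b', 'phrase_b']
print(few_shot_classify(np.array([0.05, 0.95]), support, labels, W))
```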

&lt;div class=&quot;embed-container&quot;&gt;
  &lt;iframe src=&quot;https://player.vimeo.com/video/428631931&quot; frameborder=&quot;0&quot; webkitallowfullscreen=&quot;&quot; mozallowfullscreen=&quot;&quot; allowfullscreen=&quot;&quot; height=&quot;600&quot; width=&quot;100%&quot;&gt;&lt;/iframe&gt;
&lt;/div&gt;

</description>
        <pubDate>Wed, 01 Jul 2020 00:00:00 +0000</pubDate>
        <link>https://www.renterialab.com/research/birdsong.html</link>
        <guid isPermaLink="true">https://www.renterialab.com/research/birdsong.html</guid>
        
        <category>Deep Learning</category>
        
        <category>Bioacoustics</category>
        
        <category>featured</category>
        
        
        <category>research</category>
        
      </item>
    
      <item>
        <title>Disecciones Sobre Planos</title>
        <description>&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/disecciones/img1.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;p&gt;Dissections over planes. Essay(s) from Tlatelolco is a transmedia project directed by &lt;a href=&quot;http://pablomz.info/disecciones&quot;&gt;Pablo Martínez Zárate&lt;/a&gt; that explores one of the most emblematic architectural sites in Mexico City. Through a combination of a web documentary, a book, a VR installation and a live cinema performance, incisions are made over Tlatelolco as an element in Mexico City’s landscape. I developed the VR experience using various media, including digitized analogue recordings, drawings and photographs of a model of Tlatelolco. Pablo composed a soundscape that was part of the VR version.&lt;/p&gt;

&lt;p&gt;To explore the web version please visit: &lt;a href=&quot;http://dsctlatelolco.net&quot;&gt;dsctlatelolco.net&lt;/a&gt;.&lt;/p&gt;

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/disecciones/img2.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;h1 id=&quot;posters&quot;&gt;Posters&lt;/h1&gt;

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/disecciones/poster.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/disecciones/poster2.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;h1 id=&quot;award&quot;&gt;Award&lt;/h1&gt;

&lt;p&gt;The project was awarded by &lt;a href=&quot;https://filmfreeway.com/festivalinternacionaldecineconmediosalternativos&quot;&gt;Festival Internacional de Cine con Medios Alternativos (FICMA)&lt;/a&gt;.&lt;/p&gt;

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/disecciones/premio.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;
</description>
        <pubDate>Sun, 01 Dec 2019 00:00:00 +0000</pubDate>
        <link>https://www.renterialab.com/works/disecciones.html</link>
        <guid isPermaLink="true">https://www.renterialab.com/works/disecciones.html</guid>
        
        <category>Virtual Reality</category>
        
        
        <category>works</category>
        
      </item>
    
      <item>
        <title>Noise City</title>
        <description>&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/noisecity/noisecity.png&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;p&gt;This work is under development. One of its objectives is reflecting on the psychological effects of noise in Mexico City. To date we have a Max application that distorts video using the audio signal; in this way acoustic noise is translated into a visual experience. Stay tuned: I want to take this idea to an augmented reality platform.&lt;/p&gt;
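
&lt;p&gt;A toy numpy version of the idea (illustrative parameters, not the Max patch itself): shift each frame sideways by an amount scaled by the instantaneous audio amplitude, so louder noise produces stronger visual displacement.&lt;/p&gt;

```python
import numpy as np

def distort_frame(frame, amplitude, max_shift=2):
    # Louder audio pushes the image further sideways; silence leaves it intact.
    shift = int(round(amplitude * max_shift))
    return np.roll(frame, shift, axis=1)

frame = np.arange(12).reshape(3, 4)  # a tiny 3x4 grayscale frame
loud = distort_frame(frame, 1.0)     # full-amplitude shift
quiet = distort_frame(frame, 0.0)    # untouched
print(loud[0].tolist(), quiet[0].tolist())
```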

&lt;div class=&quot;6u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/noisecity/test.png&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;
</description>
        <pubDate>Wed, 01 May 2019 00:00:00 +0000</pubDate>
        <link>https://www.renterialab.com/works/noisecity.html</link>
        <guid isPermaLink="true">https://www.renterialab.com/works/noisecity.html</guid>
        
        <category>Media Art</category>
        
        
        <category>works</category>
        
      </item>
    
      <item>
        <title>El Cuerpo es un Archivo</title>
        <description>&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/mapping/main_cover.png&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;p&gt;El cuerpo es un archivo is a 360º video montage with multichannel sound inspired by the &lt;a href=&quot;https://en.wikipedia.org/wiki/Mexican_Movement_of_1968&quot;&gt;Mexican Movement of 1968&lt;/a&gt;. It is part of the permanent exhibition at &lt;a href=&quot;http://tlatelolco.unam.mx/&quot;&gt;Centro Cultural Universitario Tlatelolco in Mexico City&lt;/a&gt;. During the performance dancers interact with historical photographs. In response, a dance unfolds, leaving the viewer immersed and inviting the audience to engage with the archive (of the Tlatelolco Massacre).&lt;/p&gt;

&lt;p&gt;The project was directed by &lt;a href=&quot;http://pablomz.info/cuerpoarchivo&quot;&gt;Pablo Martínez Zárate&lt;/a&gt; and involved an indisciplinary team. My job here was setting up multichannel audio and mapping a 360º video onto a cylindrical screen using 6 short-throw projectors. Below you can see the process of wrapping the image onto the curved surface with the &lt;a href=&quot;https://madmapper.com/&quot;&gt;MadMapper&lt;/a&gt; software.&lt;/p&gt;

&lt;p&gt;Format: 8, 16, 35 and 120 mm + HD video &lt;br /&gt;
Duration: 15:00 min&lt;/p&gt;

&lt;h1 id=&quot;process&quot;&gt;Process&lt;/h1&gt;

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/mapping/process.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;h1 id=&quot;final-work&quot;&gt;Final work&lt;/h1&gt;

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/mapping/proyeccion.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;h1 id=&quot;credits&quot;&gt;Credits&lt;/h1&gt;

&lt;ul&gt;
  &lt;li&gt;Director, screenwriting and photography: Pablo Martínez Zárate.&lt;/li&gt;
  &lt;li&gt;Choreography: Sociedad de Carne y Hueso (Aura Arreola, Teresa Carlos, Raquel Salgado, Marcela Vásquez and Rodrigo López).&lt;/li&gt;
  &lt;li&gt;Illustration: Santiago Moyao.&lt;/li&gt;
  &lt;li&gt;Assistant director: José Luis Rangel.&lt;/li&gt;
  &lt;li&gt;Postproduction: José Luis Rangel, Pablo Martínez Zárate, Hernán Perera.&lt;/li&gt;
  &lt;li&gt;Cameras: Pablo Martínez Zárate, Hernán Perera, Leonor Castro Guerra, José Luis Rangel, Pablo García, Luis Suárez.&lt;/li&gt;
  &lt;li&gt;Audio: Pablo García.&lt;/li&gt;
  &lt;li&gt;Programming: Santiago Rentería.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the support of Universidad Iberoamericana, Biblioteca Francisco Xavier Clavijero and Laboratorio Iberoamericano de Documental del Departamento de Comunicación.&lt;/p&gt;
</description>
        <pubDate>Tue, 01 May 2018 00:00:00 +0000</pubDate>
        <link>https://www.renterialab.com/works/mapping.html</link>
        <guid isPermaLink="true">https://www.renterialab.com/works/mapping.html</guid>
        
        <category>Video Mapping</category>
        
        
        <category>works</category>
        
      </item>
    
      <item>
        <title>Bahidora Festival</title>
        <description>&lt;p&gt;&lt;a href=&quot;https://www.spacetime.mx&quot;&gt;Spacetime&lt;/a&gt;, a multidisciplinary architecture and design agency, invited me to collaborate at &lt;a href=&quot;http://bahidora.com/&quot;&gt;Bahidorá Festival&lt;/a&gt; with two art installations.&lt;/p&gt;

&lt;p&gt;The first one, named Colormancy, involved the interplay between code, divination and chance. By mapping people’s names to particular colors, custom ambiences were created out of light and smoke. Eventually a quote based on song-lyric fragments was shown on a screen as a personal fortune. A Raspberry Pi and a touchscreen were used to control the whole interaction: receiving users’ input, and automating multicolor LEDs and smoke machines via DMX and electronic relays.&lt;/p&gt;
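
&lt;p&gt;A hypothetical reconstruction of the name-to-color mapping (the installation’s actual rule is not documented here): hash the name and read off an RGB triple, so the same name always conjures the same ambience.&lt;/p&gt;

```python
import hashlib

def name_to_rgb(name):
    # Deterministic: the same name (ignoring case) always yields the same
    # colour. The sha256 choice is illustrative, not the original rule.
    digest = hashlib.sha256(name.lower().encode('utf-8')).digest()
    return tuple(digest[:3])  # three bytes: red, green, blue in 0..255

print(name_to_rgb('Ada'))  # an (r, g, b) triple
```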

&lt;div class=&quot;10u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/bahidora/colormancy.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;div class=&quot;10u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/bahidora/colormancy01.png&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;div class=&quot;10u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/bahidora/colormancy02.png&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;div class=&quot;8u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/bahidora/light.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;div class=&quot;8u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/bahidora/colormancyblue.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;p&gt;The second installation used no electronics but played with basic shapes translated in space and time.&lt;/p&gt;

&lt;div class=&quot;8u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/bahidora/spacetime.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;
</description>
        <pubDate>Thu, 01 Mar 2018 00:00:00 +0000</pubDate>
        <link>https://www.renterialab.com/works/bahidora.html</link>
        <guid isPermaLink="true">https://www.renterialab.com/works/bahidora.html</guid>
        
        <category>Installation Art</category>
        
        
        <category>works</category>
        
      </item>
    
      <item>
        <title>Migrante</title>
        <description>&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/migrante/migrante02.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;p&gt;Migrante is a stage performance that describes the journey of a character (who could be you) in six stages. Migration, from a metaphorical point of view, is a search in which all human beings are immersed, whether out of need or will. From this phenomenon emerges a sense of migration grounded in the awareness of change, which allows us to question our mission in life, as we assume we are free to chart our own course and write a personal (his)story.&lt;/p&gt;

&lt;p&gt;The project was directed by Bernardo Rubinstein with the goal of generating an interdiscipline of dance, emotions and sound. While its nature remained physical, it also involved theoretical reflection around the concept of migration and its historical meaning.&lt;/p&gt;

&lt;p&gt;I consider Migrante one of the most challenging experiences of my life, because it involved embracing my body as a canvas for emotional expression, a task not very usual for a musician.&lt;/p&gt;

&lt;p&gt;If you want to read the essay I wrote for the performance please click &lt;a href=&quot;http://reflexionarte25.blogspot.com/2017/02/metaforas-de-la-migracion.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/migrante/migrante01.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;h1 id=&quot;poster-and-credits&quot;&gt;Poster and Credits&lt;/h1&gt;

&lt;div class=&quot;10u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/migrante/poster.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;div class=&quot;10u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/migrante/programa.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;
</description>
        <pubDate>Fri, 01 Dec 2017 00:00:00 +0000</pubDate>
        <link>https://www.renterialab.com/works/migrante.html</link>
        <guid isPermaLink="true">https://www.renterialab.com/works/migrante.html</guid>
        
        <category>Theatre</category>
        
        
        <category>works</category>
        
      </item>
    
      <item>
        <title>The Impact of Melody on Short-term Memory</title>
<description>&lt;p&gt;During the Music and Psychoacoustics class, my team and I studied the impact of melodic components on memory. For this purpose we conducted two experiments: the first studied the effects of melody on short-term memory by evaluating the recall of sung and spoken letter sequences; the second used two groups of sung melodies (structured and chaotic) to study how the regularity of melodic and rhythmic features impacts memorization. Both experiments were carried out with students between 12 and 16 years old. The following video explains the methodology and results in detail.&lt;/p&gt;

&lt;div class=&quot;embed-container&quot;&gt;
  &lt;iframe src=&quot;https://www.youtube.com/embed/UvXIP1s1dFc&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;&quot; height=&quot;600&quot; width=&quot;100%&quot;&gt;&lt;/iframe&gt;
&lt;/div&gt;

</description>
        <pubDate>Fri, 01 Dec 2017 00:00:00 +0000</pubDate>
        <link>https://www.renterialab.com/research/music_memory.html</link>
        <guid isPermaLink="true">https://www.renterialab.com/research/music_memory.html</guid>
        
        <category>Cognitive Science</category>
        
        <category>Memory</category>
        
        
        <category>research</category>
        
      </item>
    
      <item>
        <title>Recording and Mixing</title>
<description>&lt;p&gt;Thanks to the Music Production and Recording Techniques class, I had the opportunity to collaborate with various musicians and learn from &lt;a href=&quot;https://www.linkedin.com/in/juan-switalski-9407ba89/&quot;&gt;Juan Switalski&lt;/a&gt;, a friend and talented recording engineer.&lt;/p&gt;

&lt;hr /&gt;

&lt;h1 id=&quot;orchestra-recording&quot;&gt;Orchestra recording&lt;/h1&gt;

&lt;p&gt;Recording and mixing of Escuela Superior de Música Orchestra at Centro Cultural Coyoacanense.&lt;/p&gt;

&lt;iframe width=&quot;100%&quot; height=&quot;265&quot; src=&quot;https://clyp.it/3kqia1ke/widget&quot; frameborder=&quot;0&quot;&gt;&lt;/iframe&gt;

&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;h2 id=&quot;decca-tree-and-ab-recording-techniques&quot;&gt;Decca Tree and AB Recording Techniques&lt;/h2&gt;

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/recording/orchestra_3.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;h2 id=&quot;akg-414-mic&quot;&gt;AKG 414 Mic&lt;/h2&gt;

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/recording/orchestra_1.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;h2 id=&quot;juan-switalski-and-some-colleagues&quot;&gt;Juan Switalski and some colleagues&lt;/h2&gt;

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/recording/orchestra_2.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;hr /&gt;

&lt;h1 id=&quot;studio-recording&quot;&gt;Studio recording&lt;/h1&gt;

&lt;h2 id=&quot;the-risin-sun&quot;&gt;The Risin’ Sun&lt;/h2&gt;

&lt;iframe width=&quot;100%&quot; height=&quot;166&quot; scrolling=&quot;no&quot; frameborder=&quot;no&quot; allow=&quot;autoplay&quot; src=&quot;https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/351166960&amp;amp;color=ff5500&quot;&gt;&lt;/iframe&gt;
&lt;div style=&quot;font-size: 10px; color: #cccccc;line-break: anywhere;word-break: normal;overflow: hidden;white-space: nowrap;text-overflow: ellipsis; font-family: Interstate,Lucida Grande,Lucida Sans Unicode,Lucida Sans,Garuda,Verdana,Tahoma,sans-serif;font-weight: 100;&quot;&gt;&lt;a href=&quot;https://soundcloud.com/santiagorenteria&quot; title=&quot;SantiagoRenteria&quot; target=&quot;_blank&quot; style=&quot;color: #cccccc; text-decoration: none;&quot;&gt; SantiagoRenteria&lt;/a&gt; &lt;a href=&quot;https://soundcloud.com/santiagorenteria/the-ballad-of-being-a-man-the-risin-sun&quot; title=&quot;The Ballad (of Being A Man) - The Risin Sun&quot; target=&quot;_blank&quot; style=&quot;color: #cccccc; text-decoration: none;&quot;&gt;The Ballad (of Being A Man) - The Risin Sun&lt;/a&gt;&lt;/div&gt;

&lt;div class=&quot;12u&quot;&gt;&lt;span class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/recording/rising_sun.jpg&quot; alt=&quot;&quot; /&gt;&lt;/span&gt;&lt;/div&gt;

&lt;h2 id=&quot;why-dont-you-do-right-peggy-lee-cover&quot;&gt;Why don’t you do right (Peggy Lee Cover)&lt;/h2&gt;

&lt;iframe width=&quot;100%&quot; height=&quot;166&quot; scrolling=&quot;no&quot; frameborder=&quot;no&quot; allow=&quot;autoplay&quot; src=&quot;https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/351168049&amp;amp;color=ff5500&quot;&gt;&lt;/iframe&gt;
&lt;div style=&quot;font-size: 10px; color: #cccccc;line-break: anywhere;word-break: normal;overflow: hidden;white-space: nowrap;text-overflow: ellipsis; font-family: Interstate,Lucida Grande,Lucida Sans Unicode,Lucida Sans,Garuda,Verdana,Tahoma,sans-serif;font-weight: 100;&quot;&gt;&lt;a href=&quot;https://soundcloud.com/santiagorenteria&quot; title=&quot;SantiagoRenteria&quot; target=&quot;_blank&quot; style=&quot;color: #cccccc; text-decoration: none;&quot;&gt;SantiagoRenteria&lt;/a&gt; · &lt;a href=&quot;https://soundcloud.com/santiagorenteria/why-dont-you-do-right-peggy-lee-cover&quot; title=&quot;Why don’t you do right - Peggy Lee (Cover)&quot; target=&quot;_blank&quot; style=&quot;color: #cccccc; text-decoration: none;&quot;&gt;Why don’t you do right - Peggy Lee (Cover)&lt;/a&gt;&lt;/div&gt;

&lt;hr /&gt;

&lt;h1 id=&quot;binaural-recording&quot;&gt;Binaural Recording&lt;/h1&gt;

&lt;p&gt;Please use headphones for the best experience.&lt;/p&gt;

&lt;iframe width=&quot;100%&quot; height=&quot;166&quot; scrolling=&quot;no&quot; frameborder=&quot;no&quot; allow=&quot;autoplay&quot; src=&quot;https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/351169774&amp;amp;color=ff5500&quot;&gt;&lt;/iframe&gt;
&lt;div style=&quot;font-size: 10px; color: #cccccc;line-break: anywhere;word-break: normal;overflow: hidden;white-space: nowrap;text-overflow: ellipsis; font-family: Interstate,Lucida Grande,Lucida Sans Unicode,Lucida Sans,Garuda,Verdana,Tahoma,sans-serif;font-weight: 100;&quot;&gt;&lt;a href=&quot;https://soundcloud.com/santiagorenteria&quot; title=&quot;SantiagoRenteria&quot; target=&quot;_blank&quot; style=&quot;color: #cccccc; text-decoration: none;&quot;&gt;SantiagoRenteria&lt;/a&gt; · &lt;a href=&quot;https://soundcloud.com/santiagorenteria/acoustic-intro-live-binaural-recording-contemporary-guitar-ensemble&quot; title=&quot;Acoustic Intro. (Live Binaural Recording) - Contemporary Guitar Ensemble&quot; target=&quot;_blank&quot; style=&quot;color: #cccccc; text-decoration: none;&quot;&gt;Acoustic Intro. (Live Binaural Recording) - Contemporary Guitar Ensemble&lt;/a&gt;&lt;/div&gt;

&lt;iframe width=&quot;100%&quot; height=&quot;166&quot; scrolling=&quot;no&quot; frameborder=&quot;no&quot; allow=&quot;autoplay&quot; src=&quot;https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/351169282&amp;amp;color=ff5500&quot;&gt;&lt;/iframe&gt;
&lt;div style=&quot;font-size: 10px; color: #cccccc;line-break: anywhere;word-break: normal;overflow: hidden;white-space: nowrap;text-overflow: ellipsis; font-family: Interstate,Lucida Grande,Lucida Sans Unicode,Lucida Sans,Garuda,Verdana,Tahoma,sans-serif;font-weight: 100;&quot;&gt;&lt;a href=&quot;https://soundcloud.com/santiagorenteria&quot; title=&quot;SantiagoRenteria&quot; target=&quot;_blank&quot; style=&quot;color: #cccccc; text-decoration: none;&quot;&gt;SantiagoRenteria&lt;/a&gt; · &lt;a href=&quot;https://soundcloud.com/santiagorenteria/asturias-live-binaural-recording-contemporary-guitar-ensemble&quot; title=&quot;Asturias (Live Binaural Recording) - Contemporary Guitar Ensemble&quot; target=&quot;_blank&quot; style=&quot;color: #cccccc; text-decoration: none;&quot;&gt;Asturias (Live Binaural Recording) - Contemporary Guitar Ensemble&lt;/a&gt;&lt;/div&gt;
</description>
        <pubDate>Fri, 01 Sep 2017 00:00:00 +0000</pubDate>
        <link>https://www.renterialab.com/works/recording.html</link>
        <guid isPermaLink="true">https://www.renterialab.com/works/recording.html</guid>
        
        <category>Audio Engineering</category>
        
        
        <category>works</category>
        
      </item>
    
  </channel>
</rss>
