Charla "Robust Sound Recognition in Acoustic Sensor Networks" - Justin Salamon - Adobe Research

by Leonardo Steinfeld -




-------- Forwarded Message --------
Subject: Robust Sound Recognition in Acoustic Sensor Networks - Justin Salamon - Adobe Research
Date: Mon, 6 May 2019 14:37:09 -0300
From: Martín Rocamora <rocamora@fing.edu.uy>
To: todos_iie <todos_iie@fing.edu.uy>


Reminder: the talk is tomorrow.

[please help spread the word]

On the occasion of Pablo Zinemanas's master's thesis defense, we will be hosting Justin Salamon of the Audio Research Group, Adobe Research, San Francisco. He will give a talk entitled "Robust Sound Recognition in Acoustic Sensor Networks" on Tuesday, May 7 at 10 a.m. in room 502 - Azul (5th floor), Facultad de Ingeniería.

More information about the talk and the speaker follows.

We look forward to seeing you there.

Best regards,

Title: Robust Sound Recognition in Acoustic Sensor Networks

Abstract: The combination of remote acoustic sensors with automatic sound recognition represents a powerful emerging technology for studying both natural and urban environments. At NYU we've been working on two projects that aim to develop and leverage this technology: the Sounds of New York City (SONYC) project is using acoustic sensors to understand noise patterns across NYC and improve noise mitigation efforts, and the BirdVox project is using them to track bird migration patterns in collaboration with the Cornell Lab of Ornithology. Acoustic sensors present both unique opportunities and unique challenges for developing machine listening algorithms for automatic sound event detection: they facilitate the collection of large quantities of audio data, but the data are unlabeled, constraining our ability to leverage supervised machine learning algorithms. Training generalizable models becomes particularly challenging when training data come from a limited set of sensor locations (and times), and yet our models must generalize to unseen natural and urban environments with unknown and sometimes surprising confounding factors.

In this talk I will present our work towards tackling these challenges along several different lines with neural network architectures, including novel pooling layers that allow us to better leverage weakly labeled training data, self-supervised audio embeddings that allow us to train high-accuracy models with a limited amount of labeled data, and context-adaptive networks that improve the robustness of our models to heterogeneous acoustic environments.

Bio: Justin Salamon is a research scientist and member of the Audio Research Group at Adobe Research in San Francisco. Previously, he was a senior research scientist at the Music and Audio Research Laboratory and the Center for Urban Science and Progress of New York University. His research focuses on the application of machine learning and signal processing to audio and video signals, with applications in machine listening, music information retrieval, bioacoustics, and environmental sound analysis, as well as open-source software and data. He holds a B.A. in Computer Science from the University of Cambridge (UK), completed his M.Sc. and Ph.D. in Computer Science with the Music Technology Group of Pompeu Fabra University (Spain), and was a visiting researcher at IRCAM (France). Please visit his personal website for a complete list of publications, research topics, updates, and code/data releases.


--
Martín Rocamora
Universidad de la República, Facultad de Ingeniería
Instituto de Ingeniería Eléctrica, Departamento de Procesamiento de Señales
(598) 2 711 0974 ext. 1214  http://iie.fing.edu.uy/
http://iie.fing.edu.uy/~rocamora/