Published on Thu Oct 22 2020

Combination of Deep Speaker Embeddings for Diarisation

Guangzhi Sun, Chao Zhang, Phil Woodland

This paper proposes a method to extract better-performing speaker embeddings. It uses multiple sets of complementary d-vectors derived from different NN components. A neural-based single-pass speaker diarisation pipeline is also proposed. Experiments and detailed analyses are conducted on challenging datasets.

Abstract

Significant progress has recently been made in speaker diarisation after the introduction of d-vectors as speaker embeddings extracted from neural network (NN) speaker classifiers for clustering speech segments. To extract better-performing and more robust speaker embeddings, this paper proposes a c-vector method that combines multiple sets of complementary d-vectors derived from systems with different NN components. Three structures are used to implement the c-vectors, namely 2D self-attentive, gated additive, and bilinear pooling structures, relying on attention mechanisms, a gating mechanism, and a low-rank bilinear pooling mechanism, respectively. Furthermore, a neural-based single-pass speaker diarisation pipeline is also proposed, which uses NNs to perform voice activity detection, speaker change point detection, and speaker embedding extraction. Experiments and detailed analyses are conducted on the challenging AMI and NIST RT05 datasets, which consist of real meetings with 4--10 speakers and a wide range of acoustic conditions. For systems trained on the AMI training set, relative speaker error rate (SER) reductions of 13% and 29% are obtained by using c-vectors instead of d-vectors on the AMI dev and eval sets respectively, and a relative reduction of 15% in SER is observed on RT05, which shows the robustness of the proposed methods. By incorporating VoxCeleb data into the training set, the best c-vector system achieved 7%, 17% and 16% relative SER reductions compared to the d-vector baseline on the AMI dev, eval, and RT05 sets respectively.
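To make the gated additive idea concrete, the sketch below shows one plausible way to fuse several complementary d-vectors into a single combined embedding: a learned gate assigns each d-vector a scalar weight in (0, 1), and the combined vector is the gated sum. This is a minimal illustration under assumed shapes and a hypothetical `gated_additive_combine` helper, not the authors' exact implementation (their gating, dimensions, and training procedure follow the paper).

```python
import numpy as np


def sigmoid(x):
    """Elementwise logistic function, used here as the gate activation."""
    return 1.0 / (1.0 + np.exp(-x))


def gated_additive_combine(d_vectors, W, b):
    """Combine K complementary d-vectors with a learned scalar gate each.

    d_vectors: list of K embeddings, each a NumPy array of shape (D,)
    W:         gate weight matrix of shape (K, K * D)
    b:         gate bias of shape (K,)

    The gate looks at the concatenation of all d-vectors, emits one
    sigmoid weight per d-vector, and returns the weighted sum (a toy
    stand-in for the paper's gated additive c-vector structure).
    """
    x = np.concatenate(d_vectors)          # shape (K * D,)
    gates = sigmoid(W @ x + b)             # shape (K,), one weight per d-vector
    c_vector = sum(g * d for g, d in zip(gates, d_vectors))
    return c_vector, gates


# Toy usage: two 4-dimensional d-vectors from two hypothetical systems.
rng = np.random.default_rng(0)
d1, d2 = rng.standard_normal(4), rng.standard_normal(4)
W = np.zeros((2, 8))                       # zero weights -> both gates = 0.5
b = np.zeros(2)
c, g = gated_additive_combine([d1, d2], W, b)
```

With zero gate parameters each sigmoid outputs 0.5, so the combined vector is simply the average-like sum 0.5·(d1 + d2); in practice `W` and `b` would be trained jointly with the diarisation system so the gates learn when to trust each embedding source.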