Voice filter for vtubers
The following examples come from real videos. As the examples show, the voices of the two vtubers are filtered properly.
Mixed voice | 白上フブキ (Fubuki) | 夏色まつり (Matsuri) |
---|---|---|
Example_1 | Fbk_1 | Matsuri_1 |
Example_2 | Fbk_2 | Matsuri_2 |
Example_3 | Fbk_3 | Matsuri_3 |
Example_4 | Fbk_4 | Matsuri_4 |
Example_5 | Fbk_5 | Matsuri_5 |
Example_6 | Fbk_6 | Matsuri_6 |
When vtubers live stream together, their voices sometimes get mixed. In that situation, it can be hard for a translation group to figure out what the target vtuber is saying. So we propose a model that can filter the voices of different vtubers and relieve some of the heavy burden on the translation groups. In this project, we build a model that filters the mixed voices of two vtubers; more vtubers will be taken into consideration in future work. Besides, we need more people to contribute to this project, so please feel free to contact me if you are willing to waste your time on these things :D
The main idea of the model comes from this Google paper. In that paper, the authors filter a specific person's voice by using a d-vector as an extra input. A PyTorch implementation of the paper exists here. However, we found that their model does not really work for Japanese vtubers; the dataset they used is not suitable for our task. So it became necessary for us to build the dataset from scratch and modify the model to pursue better performance.
The code for this part can be found here.
Suppose we would like to filter the mixed voices of speakers A and B. To do this, we first need audio that contains only A's voice and audio that contains only B's voice. Then, as presented in the voice filter paper, one can easily mix the two speakers' voices and build a synthetic dataset to train the model. Thus, at the very beginning, we need to select the data ourselves: we go to YouTube and find videos that meet the requirement above.
We use youtube-dl here and directly extract opus-format audio with its `--extract-audio` option.
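For reference, here is a minimal sketch of this step using youtube-dl's Python API instead of the CLI; the URL and output template below are placeholders, not the ones used in our scripts:

```python
import youtube_dl

# Roughly equivalent to: youtube-dl --extract-audio --audio-format opus <url>
ydl_opts = {
    "format": "bestaudio/best",
    "outtmpl": "raw_audio/%(id)s.%(ext)s",  # placeholder output path
    "postprocessors": [{
        "key": "FFmpegExtractAudio",
        "preferredcodec": "opus",
    }],
}

with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=XXXXXXXXXXX"])  # placeholder URL
```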
Since the audio may contain background music, the BGM should be removed first. Fortunately, the Spleeter model is ready to use and works well. The audio is then split and downsampled from 48,000 Hz to 8,000 Hz.
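As a rough illustration, assuming the pretrained 2-stems Spleeter model and librosa for resampling (the file paths are placeholders):

```python
from spleeter.separator import Separator
import librosa
import soundfile as sf

# Separate vocals from accompaniment with the pretrained 2-stems model.
separator = Separator("spleeter:2stems")
separator.separate_to_file("fbk_raw.wav", "separated/")  # placeholder paths

# Load the vocal stem and downsample it to 8 kHz.
vocals, _ = librosa.load("separated/fbk_raw/vocals.wav", sr=8000)
sf.write("fbk_vocals_8k.wav", vocals, 8000)
```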
The code for this part can be found here.
We clip the data into 3-second slices this time.
If a speaker speaks for less than 1.5 seconds within an audio slice, we remove that slice. As it turns out, this data cleaning process is quite important for model performance.
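Here is a minimal sketch of the slicing and cleaning step; the energy-based voice-activity check below is an assumption made for illustration, not necessarily what our scripts use:

```python
import numpy as np
import librosa

SR = 8000             # sample rate after downsampling
SLICE_SEC = 3.0       # slice length in seconds
MIN_SPEECH_SEC = 1.5  # minimum amount of speech per slice

def speech_seconds(wav, sr=SR, frame_length=400, hop_length=160, db_threshold=-40.0):
    # Count frames whose RMS energy is within 40 dB of the loudest frame.
    rms = librosa.feature.rms(y=wav, frame_length=frame_length, hop_length=hop_length)[0]
    db = librosa.amplitude_to_db(rms, ref=np.max)
    return (db > db_threshold).sum() * hop_length / sr

def slice_and_clean(wav, sr=SR):
    n = int(SLICE_SEC * sr)
    slices = [wav[i:i + n] for i in range(0, len(wav) - n + 1, n)]
    # Keep only slices in which the speaker talks for at least 1.5 seconds.
    return [s for s in slices if speech_seconds(s, sr) >= MIN_SPEECH_SEC]
```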
For better performance, we perform data augmentation here. That is, for each audio signal, we first normalize it:
```python
# Peak-normalize each speaker's waveform (s1_target: target speaker, s2: the other speaker).
s1_target /= np.max(np.abs(s1_target))
s2 /= np.max(np.abs(s2))
```
Then, we scale the two waveforms by two different ratios sampled from a uniform distribution:
```python
# Draw a random gain for each speaker.
w1_ratio = np.random.uniform(low=0.05, high=1.2)
w2_ratio = np.random.uniform(low=0.05, high=1.2)
w1 = s1_target * w1_ratio
w2 = s2 * w2_ratio
```
After that, the two signals are added up and normalized again:
```python
# Mix the two signals and renormalize with a little headroom to avoid clipping.
mixed = w1 + w2
norm = np.max(np.abs(mixed)) * 1.1
w1, w2, mixed = w1/norm, w2/norm, mixed/norm
```
Additionally, we use the short-time Fourier transform (STFT) to transform the audio signals into the frequency domain.
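For illustration, a spectrogram conversion along these lines; the FFT size and hop length below are assumptions rather than the values used in our code:

```python
import numpy as np
import librosa

def to_spectrogram(wav, n_fft=1024, hop_length=256):
    # Complex STFT -> magnitude (model input/target) and phase (kept for reconstruction).
    spec = librosa.stft(wav, n_fft=n_fft, hop_length=hop_length)
    return np.abs(spec), np.angle(spec)

def to_waveform(mag, phase, hop_length=256):
    # Inverse STFT: recombine magnitude and phase, then go back to the time domain.
    return librosa.istft(mag * np.exp(1j * phase), hop_length=hop_length)
```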
The code for this part can be found here. For the model input, we also need to specify the target speaker; to do so, an embedding vector that identifies the speaker is used as an extra input. For more details of the model, please refer to the original paper and our modified code. Here is the model structure:
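To give a rough, code-level feel for the conditioning idea, here is a simplified sketch: the layer sizes are placeholders and the CNN front-end of the original model is omitted, so this is not our modified architecture.

```python
import torch
import torch.nn as nn

class DVectorConditionedFilter(nn.Module):
    """Sketch of the VoiceFilter idea: attach the speaker d-vector to every
    time frame of the mixed spectrogram and predict a soft mask over it."""
    def __init__(self, n_freq=257, d_vector_dim=256, hidden=400):
        super().__init__()
        self.lstm = nn.LSTM(n_freq + d_vector_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Sequential(nn.Linear(2 * hidden, n_freq), nn.Sigmoid())

    def forward(self, mixed_mag, d_vector):
        # mixed_mag: (batch, time, n_freq); d_vector: (batch, d_vector_dim)
        dvec = d_vector.unsqueeze(1).expand(-1, mixed_mag.size(1), -1)
        x = torch.cat([mixed_mag, dvec], dim=-1)
        mask, _ = self.lstm(x)
        mask = self.fc(mask)
        return mask * mixed_mag  # estimated magnitude of the target speaker
```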
Note that we mainly modify the original model structure in the following ways.