for i in np.random.randint(0, len(predictions), 5):
    print(f"Target    : {targets[i]}")
    print(f"Prediction: {predictions[i]}")
    print("-" * 100)
----------------------------------------------------------------------------------------------------
Word Error Rate: 0.9998
----------------------------------------------------------------------------------------------------
Target    : two of the nine agents returned to their rooms the seven others proceeded to an establishment called the cellar coffee house
Prediction:
----------------------------------------------------------------------------------------------------
Target    : a scaffold was erected in front of that prison for the execution of several convicts named by the recorder
Prediction: sss
----------------------------------------------------------------------------------------------------
Target    : it was perpetrated upon a respectable country solicitor
Prediction: ss
----------------------------------------------------------------------------------------------------
Target    : oswald like all marine recruits received training on the rifle range at distances up to five hundred yards
Prediction:
----------------------------------------------------------------------------------------------------
Target    : chief rowley testified that agents on duty in such a situation usually stay within the building during their relief
Prediction: s
----------------------------------------------------------------------------------------------------
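The near-total error rate above is expected from a model this early in training, as the mostly empty predictions show. For reference, word error rate can be computed with a library such as jiwer (an assumption for illustration; the log above does not show which implementation produced the 0.9998 figure):

from jiwer import wer  # pip install jiwer

targets = ["it was perpetrated upon a respectable country solicitor"]
predictions = ["ss"]
# WER = (substitutions + deletions + insertions) / reference word count
print(f"Word Error Rate: {wer(targets, predictions):.4f}")  # 1.0000 for this pair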
Conclusion

In practice, you should train for around 50 epochs or more. Each epoch takes approximately 5-6 minutes on a GeForce RTX 2080 Ti GPU. The model we trained reaches a Word Error Rate (WER) of roughly 16-17% after 50 epochs.
Some of the transcriptions around epoch 50:

Audio file: LJ017-0009.wav

- Target    : sir thomas overbury was undoubtedly poisoned by lord rochester in the reign of james the first
- Prediction: cer thomas overbery was undoubtedly poisoned by lordrochester in the reign of james the first

Audio file: LJ003-0340.wav

- Target    : the committee does not seem to have yet understood that newgate could be only and properly replaced
- Prediction: the committee does not seem to have yet understood that newgate could be only and proberly replace

Audio file: LJ011-0136.wav

- Target    : still no sentence of death was carried out for the offense and in eighteen thirtytwo
- Prediction: still no sentence of death was carried out for the offense and in eighteen thirtytwo
Training a sequence-to-sequence Transformer for automatic speech recognition.

Introduction
Automatic speech recognition (ASR) consists of transcribing audio speech segments into text. ASR can be treated as a sequence-to-sequence problem, where the audio can be represented as a sequence of feature vectors and the text as a sequence of characters, words, or subword tokens.
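To make "a sequence of feature vectors" concrete, here is a minimal sketch of one common choice, an STFT magnitude spectrogram (the frame parameters are assumptions for illustration, not necessarily the exact preprocessing used in this example):

import tensorflow as tf

audio = tf.random.normal([16000])  # one second of 16 kHz audio (hypothetical)
# 200-sample frames hopped every 80 samples; each frame yields one feature vector
stfts = tf.signal.stft(audio, frame_length=200, frame_step=80, fft_length=256)
features = tf.math.pow(tf.abs(stfts), 0.5)  # magnitude compressed with a square root
print(features.shape)  # (198, 129): 198 frames, each a 129-dimensional vector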
For this demonstration, we will use the LJSpeech dataset from the LibriVox project. It consists of short audio clips of a single speaker reading passages from 7 non-fiction books. Our model will be similar to the original Transformer (both encoder and decoder) as proposed in the paper, "Attention is All You Need".
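A hedged sketch of obtaining the dataset (the URL is the standard LJSpeech 1.1 archive from keithito.com; the exact download code is an assumption):

from tensorflow import keras

# Downloads and extracts the ~2.6 GB LJSpeech 1.1 archive into the Keras cache.
# Where the extracted folder lands varies slightly across Keras versions.
keras.utils.get_file(
    "LJSpeech-1.1.tar.bz2",
    "https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2",
    extract=True,
)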
References:

Attention is All You Need
Very Deep Self-Attention Networks for End-to-End Speech Recognition
Speech Transformers
LJSpeech Dataset
import os
import random
from glob import glob

import numpy as np  # used by the prediction-preview snippet above
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Define the Transformer Input Layer

When processing past target tokens for the decoder, we compute the sum of position embeddings and token embeddings.

When processing audio features, we apply convolutional layers to downsample them (via convolution strides) and process local relationships.
class TokenEmbedding(layers.Layer):
    def __init__(self, num_vocab=1000, maxlen=100, num_hid=64):
        super().__init__()
        # One embedding vector per vocabulary token
        self.emb = tf.keras.layers.Embedding(num_vocab, num_hid)
        # One learned embedding per position, up to maxlen
        self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=num_hid)

    def call(self, x):
        maxlen = tf.shape(x)[-1]
        x = self.emb(x)
        positions = tf.range(start=0, limit=maxlen, delta=1)
        positions = self.pos_emb(positions)
        # Token and position embeddings are summed (broadcast over the batch)
        return x + positions
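A quick shape check with hypothetical values (the vocabulary size, sequence length, and width here are made up for illustration):

x = tf.zeros((2, 10), dtype=tf.int32)  # batch of 2 sequences of 10 token ids
token_emb = TokenEmbedding(num_vocab=34, maxlen=200, num_hid=64)
print(token_emb(x).shape)  # (2, 10, 64): one 64-dim vector per token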
class SpeechFeatureEmbedding(layers.Layer):
    def __init__(self, num_hid=64, maxlen=100):
        super().__init__()
        self.conv1 = tf.keras.layers.Conv1D(
            num_hid, 11, strides=2, padding="same", activation="relu"
        )
        self.conv2 = tf.keras.layers.Conv1D(
            num_hid, 11, strides=2, padding="same", activation="relu"
        )
        self.conv3 = tf.keras.layers.Conv1D(
            num_hid, 11, strides=2, padding="same", activation="relu"
        )
        self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=num_hid)

    def call(self, x):
        # Each strided convolution halves the time axis, for 8x downsampling overall
        x = self.conv1(x)
        x = self.conv2(x)
        return self.conv3(x)
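A matching shape check for the audio side (the input dimensions are hypothetical):

x = tf.zeros((2, 1000, 129))  # batch of 2 spectrograms: 1000 frames, 129 bins each
speech_emb = SpeechFeatureEmbedding(num_hid=64)
print(speech_emb(x).shape)  # (2, 125, 64): time axis downsampled 8x by the strided convs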