I’m trying out the Video SDK for React Native to see whether it fits my use case. Specifically, I want to add text overlays to videos: the idea is to translate sign language into text and display that text as real-time subtitles.
Does the SDK support this? If so, any tips on getting started would be much appreciated.
Thanks!
Hi @alfonsoalejandro023
You can overlay a `Text` component on top of the `ZoomView` like so:
```jsx
<View>
  <ZoomView
    style={{ width: '100%', position: 'relative', alignSelf: 'center', height: '100%', flex: 1, justifyContent: 'center' }}
    userId={user.userId}
    fullScreen
    videoAspect={VideoAspect.PanAndScan}
  />
  <Text style={{ zIndex: 100, position: 'absolute', bottom: 0, left: 0, color: 'white', fontSize: 20 }}>
    YOUR TEXT HERE
  </Text>
</View>
```
Building the actual sign-recognition pipeline, i.e. extracting features from the video frames and turning them into text, will be a much more involved process.
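Once you have a recognition model producing output, one piece you will need either way is turning its stream of detected words into subtitle lines for the overlay. As a rough sketch, suppose your model emits timestamped word events like `{ word: 'hello', timeMs: 1200 }` (this event shape and the `groupIntoCaptions` helper are my assumptions, not part of the SDK or any particular model):

```javascript
// Hypothetical helper: groups timestamped word events from a
// sign-recognition model into caption objects. A new caption is
// started whenever the silence gap between consecutive words
// exceeds maxGapMs; otherwise the word is appended to the
// current caption.
function groupIntoCaptions(events, maxGapMs = 1500) {
  const captions = [];
  let current = null;
  for (const { word, timeMs } of events) {
    if (current && timeMs - current.endMs <= maxGapMs) {
      // Close enough in time: extend the current caption.
      current.text += ' ' + word;
      current.endMs = timeMs;
    } else {
      // Long pause (or first word): start a new caption.
      current = { text: word, startMs: timeMs, endMs: timeMs };
      captions.push(current);
    }
  }
  return captions;
}
```

In the component you would then keep the latest caption in state (e.g. via `useState`) and render it inside the absolutely positioned `Text` element instead of the hard-coded `YOUR TEXT HERE`.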