The SDK ensures compatibility with multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above, and keeps dependencies minimal to prevent version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the primary capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to achieve transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that require immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}"));
transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}"));

await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Harnessing LeMUR for LLM Applications

The SDK integrates with LeMUR to enable developers to build large language model (LLM) applications on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);
```

Audio Intelligence Models

In addition, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

For more information, visit the official AssemblyAI blog.

Image source: Shutterstock
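As a footnote to the compatibility notes at the top of the piece: the supported frameworks correspond to multi-targeting in an SDK-style project file. The sketch below is an assumption about how a consuming project might be set up, not something from the SDK itself; the package version is left as a wildcard and should be pinned from NuGet in practice.

```xml
<!-- Hypothetical consuming project targeting the frameworks the SDK supports. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 -->
    <TargetFrameworks>net6.0;net462;netstandard2.0</TargetFrameworks>
    <!-- The examples above use newer C# syntax (collection expressions),
         so allow the latest language version on all targets. -->
    <LangVersion>latest</LangVersion>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="AssemblyAI" Version="*" />
  </ItemGroup>
</Project>
```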