Customize how the audio captures and renders

Last updated: 2023-11-14 14:43

Introduction

Custom Audio Capture

In the following situations, it is recommended to use the custom audio capture function:

  • You need to feed audio from an existing audio stream, an audio file, or your own capture system into the SDK for transmission.
  • You have special audio processing requirements for a PCM source; after your processing, the audio is passed to the SDK for transmission.

Custom Audio Rendering

If you have your own rendering requirements, such as feeding the audio into a special application or applying custom processing to the raw PCM data pulled from a stream, it is recommended to use the SDK's custom audio rendering function.

Audio capturing and rendering can be divided into three scenarios:

  • Internal capturing and internal rendering
  • Custom capturing and custom rendering
  • Custom capturing and internal rendering

Choose the appropriate capturing and rendering combination based on your business scenario.

Prerequisites

Before implementing custom audio capture and rendering, make sure that:

  • The ZEGO Express SDK has been integrated into your project and basic real-time audio and video functions are working. For details, refer to Quick start.
  • A project has been created in the ZEGOCLOUD Console and a valid AppID and AppSign have been obtained. For details, refer to Console - Project Information.

Enable Custom Audio Capture and Rendering

// Enable custom audio capture and rendering.
// Call this before starting to publish or play streams.
ZegoCustomAudioConfig *audioConfig = [[ZegoCustomAudioConfig alloc] init];
audioConfig.sourceType = ZegoAudioSourceTypeCustom; // audio comes from a custom source

[[ZegoExpressEngine sharedEngine] enableCustomAudioIO:YES config:audioConfig];

Collect Audio Data

After publishing or playing a stream, turn on your audio capture device and pass the captured audio data to the engine through sendCustomAudioCaptureAACData or sendCustomAudioCapturePCMData.
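
For example, a minimal sketch of feeding externally captured PCM frames to the engine. The onAudioFrameCaptured: callback name and the 16 kHz mono 16-bit format are assumptions; adjust them to match your actual capture pipeline.

#import <ZegoExpressEngine/ZegoExpressEngine.h>

// Called by your own capture pipeline whenever a PCM frame is ready
// (the callback name and the 16 kHz mono 16-bit format are assumptions).
- (void)onAudioFrameCaptured:(NSData *)pcmData {
    // Describe the format of the buffer being handed to the SDK.
    ZegoAudioFrameParam *param = [[ZegoAudioFrameParam alloc] init];
    param.sampleRate = ZegoAudioSampleRate16K; // assumed capture sample rate
    param.channel = ZegoAudioChannelMono;      // assumed channel layout

    // Hand the captured frame to the engine for encoding and transmission.
    [[ZegoExpressEngine sharedEngine] sendCustomAudioCapturePCMData:pcmData
                                                              param:param];
}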

Render Audio Data

Call fetchCustomAudioRenderPCMData to fetch the audio data to be rendered from the engine, then play it through your rendering device.
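
A minimal sketch of one fetch, assuming the same 16 kHz mono 16-bit format and a 20 ms frame; the fillRenderBuffer method name and the buffer-size arithmetic are illustrative.

// Fetch one 20 ms frame of playback audio from the engine
// (16 kHz mono 16-bit assumed; sizes are illustrative).
- (void)fillRenderBuffer {
    ZegoAudioFrameParam *param = [[ZegoAudioFrameParam alloc] init];
    param.sampleRate = ZegoAudioSampleRate16K;
    param.channel = ZegoAudioChannelMono;

    // 20 ms of 16-bit mono samples at 16 kHz: 16000 / 50 samples * 2 bytes = 640 bytes.
    unsigned int dataLength = 16000 / 50 * 2;
    unsigned char *buffer = (unsigned char *)malloc(dataLength);

    // The engine copies up to dataLength bytes into the buffer; any shortfall
    // is padded with muted data (see FAQ 3 below).
    [[ZegoExpressEngine sharedEngine] fetchCustomAudioRenderPCMData:buffer
                                                         dataLength:dataLength
                                                              param:param];

    // ... play `buffer` through your rendering device here ...
    free(buffer);
}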

FAQ

  1. When should the interfaces related to custom audio capture and rendering be called?

    The call timing of each interface is as follows: enableCustomAudioIO is called after the engine is created and before stream publishing or playing starts; sendCustomAudioCaptureAACData and sendCustomAudioCapturePCMData are called after stream publishing starts; fetchCustomAudioRenderPCMData is called after stream playing starts.

  2. How often should the interfaces related to custom audio capture and rendering be called?

    Ideally, drive the calls with the clock of the physical audio device: call sendCustomAudioCaptureAACData or sendCustomAudioCapturePCMData whenever the physical capture device produces data, and call fetchCustomAudioRenderPCMData whenever the physical rendering device needs data.

    If there is no specific physical device to drive the calls in your scenario, it is recommended to call the above interfaces every 10 ms to 20 ms (see the timer sketch after this FAQ).

  3. When calling fetchCustomAudioRenderPCMData, what happens if the SDK's internal data is insufficient for "dataLength"?

    As long as "param" is filled in correctly, when the data inside the SDK is less than "dataLength", the SDK fills the remaining length with muted data.
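
When no device clock is available, one way to pace the calls is a GCD timer. Below is a hypothetical sketch that invokes a fetch routine like the fillRenderBuffer method sketched above every 20 ms; the renderTimer property is an assumed strong dispatch_source_t, not an SDK API.

// A hypothetical sketch: drive custom audio rendering with a 20 ms GCD timer
// when no physical device clock is available. `renderTimer` (a strong
// dispatch_source_t property) and `fillRenderBuffer` are illustrative names.
- (void)startRenderTimer {
    dispatch_queue_t queue = dispatch_queue_create("audio.render.queue", DISPATCH_QUEUE_SERIAL);
    self.renderTimer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
    dispatch_source_set_timer(self.renderTimer,
                              dispatch_time(DISPATCH_TIME_NOW, 0),
                              20 * NSEC_PER_MSEC,  // fire every 20 ms
                              1 * NSEC_PER_MSEC);  // allow 1 ms leeway
    dispatch_source_set_event_handler(self.renderTimer, ^{
        [self fillRenderBuffer]; // fetch one frame from the SDK and render it
    });
    dispatch_resume(self.renderTimer);
}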
