
Using Avatars when publishing streams

Last updated: 2022-12-23 13:03


Introduction

The ZEGO Avatar SDK supports exporting avatar textures and allows users to output their avatar renderings. Developers can make custom post-processing effects for the renderings and then use them for RTC stream publishing.

SDKs

  • ZEGO Express SDK: A real-time audio and video SDK developed by ZEGOCLOUD, which provides basic real-time audio and video capabilities, including stream publishing, stream playing, and live co-hosting. It is referred to as the Express SDK for short.
  • ZEGO Avatar SDK: A virtual avatar SDK developed by ZEGOCLOUD, which allows users to customize their avatars by choosing a default avatar or creating a personalized one. It provides facial expression mirroring, speech simulation, and other features for lifelike real-time interaction. It is referred to as the Avatar SDK for short.

Prerequisites

Procedures

1. Initialize ZegoAvatarService

Refer to "1. Apply for authentication" and "2. Initialize ZegoAvatarService" in Create a virtual avatar.

2. Set data collection type for stream publishing

2.1 Initialize the Express SDK and log in to a room

Refer to "3.1 Initialize" (It is recommended that developers initialize the SDK when the app starts) and "3.2 Log in to a room" in Getting started - Implement a basic video call.

2.2 Set data collection type for stream publishing and enable custom video capture

  1. Customize the video capture settings of the Express SDK with ZegoCustomVideoCaptureConfig, and call the enableCustomVideoCapture API to enable custom video capture.
  2. Call the setCustomVideoCaptureHandler API to set the custom video capture callback.
  3. Call the setVideoConfig API to set video configurations.

When you call the setVideoConfig API, keep the video configuration consistent with the width, height, and other parameters of the avatar texture exported in 4. Export avatar texture.

// Custom video capture.
ZegoCustomVideoCaptureConfig videoCaptureConfig = new ZegoCustomVideoCaptureConfig();
// Select video frame data of the GL_TEXTURE_2D type.
videoCaptureConfig.bufferType = ZegoVideoBufferType.GL_TEXTURE_2D;
// Enable custom video capture.
engine.enableCustomVideoCapture(true, videoCaptureConfig, ZegoPublishChannel.MAIN);

// Set the custom video capture callback.
engine.setCustomVideoCaptureHandler(mCustomVideoCaptureHandler);

// Set video configurations and keep the configurations consistent with the avatar output size.
ZegoVideoConfig videoConfig = new ZegoVideoConfig(ZegoVideoConfigPreset.PRESET_720P);
// Configure a square avatar texture; keep the encode resolution consistent with the avatar texture output size.
videoConfig.setEncodeResolution(mVideoWidth, mVideoHeight);
engine.setVideoConfig(videoConfig);

// Event handler passed in when the Express SDK engine is created.
private final IZegoEventHandler mZegoEventHandler = new IZegoEventHandler() {
    // Print SDK debug errors for troubleshooting.
    @Override
    public void onDebugError(int errorCode, String funcName, String info) {
        Log.e("Avatar", "error: " + errorCode + ", info: " + info);
    }
};

// Start and stop avatar expression detection and texture export in the custom video capture handler that the Express SDK calls when RTC stream publishing starts and stops.
private final IZegoCustomVideoCaptureHandler mCustomVideoCaptureHandler = new IZegoCustomVideoCaptureHandler() {

    @Override
    public void onStart(ZegoPublishChannel channel) {
        // After callbacks are received, developers need to implement the business logic of starting video capture. For example, enable the camera.
        AvatarCaptureConfig config = new AvatarCaptureConfig(mVideoWidth, mVideoHeight);
        // Start to capture texture.
        mCharacterHelper.startCaptureAvatar(config, AvatarStreamActivity.this::onCaptureAvatar);
        // Enable facial expression mirroring.
        startExpression();
    }

    @Override
    public void onStop(ZegoPublishChannel channel) {
        // After callbacks are received, developers need to implement the business logic of stopping video capture. For example, disable the camera.
        // Stop capturing texture.
        // Note that this RTC callback may not always be triggered; also call stopCaptureAvatar when the user exits the app.
        mCharacterHelper.stopCaptureAvatar();
        stopExpression();
    }
};

3. Create a virtual avatar and start facial expression detection

  1. After the ZegoAvatarService SDK is initialized, create a ZegoCharacterHelper object, pass in the appearance data, such as the face, clothes, and makeup, and set view parameters, such as the width, height, and position, to create a basic virtual avatar.
  2. After the basic virtual avatar is created, call the startDetectExpression API, set the detection mode to ZegoExpressionDetectMode.Camera, and use the front camera to detect facial expressions. Then, use the setExpression API of ZegoCharacterHelper to set the facial expressions and drive the facial expression changes of the virtual avatar.
  • To export the avatar texture, do not use the setAvatarView API of ZegoCharacterHelper. That is, do not use AvatarView for rendering; otherwise, you cannot obtain avatar texture properly for stream publishing.
  • Before you start the facial expression detection, ensure that the camera permission has been granted (see the permission-request sketch after the following code).
void initAvatar() {
    // Create a ZegoCharacterHelper class to simplify the implementation process for API call.
    // base.bundle is used to configure heads. To configure the whole body, use human.bundle.
    mCharacterHelper = new ZegoCharacterHelper(getFilesDir().getAbsolutePath() + "/assets/base.bundle");
    mCharacterHelper.setExtendPackagePath(getFilesDir().getAbsolutePath() + "/assets/Packages");
    // Set the default avatar appearance.
    mCharacterHelper.setDefaultAvatar(ZegoCharacterHelper.MODEL_ID_MALE);
    // Important: AvatarCapture will be created later. Do not display the avatar on the screen here, or you cannot obtain the avatar texture for stream publishing.
    // mCharacterHelper.setCharacterView(mAvatarView, () -> {
    //     Log.i("ZegoAvatar", "The avatar is displayed on the screen.");
    // });
}

// Enable facial expression detection.
void startExpression() {
    // Request the camera permission before you start facial expression detection. In this sample, the permission is requested in MainActivity.
    ZegoAvatarService.getInteractEngine().startDetectExpression(ZegoExpressionDetectMode.Camera, expression -> {
        // Transfer the facial expressions to the avatar drive.
        mCharacterHelper.setExpression(expression);
    });
}
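
As noted above, the camera permission must be granted before startDetectExpression is called. Below is a minimal runtime-permission sketch, assuming the code runs in an Activity and uses AndroidX; the request code 1001 is arbitrary.

// Check and request the camera permission before starting expression detection.
// This is a sketch; handle the user's choice in onRequestPermissionsResult as usual.
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA}, 1001);
} else {
    startExpression();
}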

4. Export avatar texture

  • If the ZegoCharacterHelper class is used, no APIs related to IZegoCharacter need to be called. For more information, see ZegoCharacterHelper overview.
  • Avatar texture export uses off-screen rendering; the virtual avatar created earlier is no longer rendered on the screen.

After the basic avatar is created, use the startCaptureAvatar API of ZegoCharacterHelper, pass in the AvatarCaptureConfig parameters, such as the width and height of the exported texture, and start exporting the avatar texture.

// Start to export texture.
// Set the width (captureWidth) and height (captureHeight) of the returned avatar in the callback as required.
AvatarCaptureConfig config = new AvatarCaptureConfig(captureWidth, captureHeight);
mCharacterHelper.startCaptureAvatar(config, new OnAvatarCaptureCallback() {
    @Override
    public void onAvatarTextureAvailable(int textureID, int width, int height) {
        // Texture ID, width, and height of the exported avatar texture.
    }
});

(Optional) Custom post-processing

After you start exporting texture data by using startCaptureAvatar, the onAvatarTextureAvailable callback of OnAvatarCaptureCallback reports the export result, including the texture ID, width, and height.

You can make custom post-processing effects for the texture data. For example, add a background image (the default avatar background image is transparent). Then, call the sendCustomVideoCaptureTextureData API to publish streams and send video frame data to the SDK.

To set the background color, you can refer to the sample source code obtained in Download.

// Perform the following operations after you obtain the avatar texture.
// Keep the avatar texture size consistent with the video configuration set in the Express SDK.
public void onCaptureAvatar(int textureId, int width, int height) {

    // The rendered avatar texture is of the RGBA format with a transparent background. In RTC scenarios, the avatar is encoded in YUV format, resulting in the loss of the alpha channel.
    // Before you send the avatar to the Express SDK, paint the background or draw a picture using OpenGL or encapsulated OpenGL libraries.
    if (mAddAvatarBackground) {
        boolean useFBO = true;
        if (mBgRender == null) {
            mBgRender = new TextureBgRender(textureId, useFBO, width, height, Texture2dProgram.ProgramType.TEXTURE_2D_BG);
        }
        mBgRender.setInputTexture(textureId);
        mBgRender.setBgColor(mColor.red(), mColor.green(), mColor.blue(), mColor.alpha());
        mBgRender.draw(useFBO); // To use FBO to render the exported avatar textures, use the reversed coordinates.
        ZegoExpressEngine.getEngine().sendCustomVideoCaptureTextureData(mBgRender.getOutputTextureID(), width, height, System.currentTimeMillis());
    } else {
        // Send to the RTC engine.
        ZegoExpressEngine.getEngine().sendCustomVideoCaptureTextureData(textureId, width, height, System.currentTimeMillis());
    }
}

5. Preview and publish streams

The Avatar SDK outputs the avatar rendering as OpenGL texture data. To publish streams by using the Express SDK, provide the data as video frames of the GL_TEXTURE_2D type (the buffer type configured in 2.2).

You can process the data on your own or refer to the sample code in (Optional) Custom post-processing.

After the loginRoom API is called, you can call the startPublishingStream API, pass in a streamID (unique, generated by your own service), and publish the avatar texture video stream to ZEGOCLOUD. Listen for the onPublisherStateUpdate callback to check whether stream publishing succeeds.

// Start publishing streams.
ZegoExpressEngine.getEngine().startPublishingStream(streamId);
// Start the local preview.
ZegoExpressEngine.getEngine().startPreview(new ZegoCanvas(mLocalPreview));
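
To observe the publishing result mentioned above, the onPublisherStateUpdate callback can be overridden in the mZegoEventHandler passed to createEngine. A minimal sketch (the log tag is illustrative):

// Add to mZegoEventHandler (step 2.2) to observe the stream publishing state.
@Override
public void onPublisherStateUpdate(String streamID, ZegoPublisherState state, int errorCode, JSONObject extendedData) {
    if (state == ZegoPublisherState.PUBLISHING && errorCode == 0) {
        Log.i("Avatar", "publishing started: " + streamID);
    } else if (errorCode != 0) {
        Log.e("Avatar", "publishing error: " + errorCode + ", stream: " + streamID);
    }
}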

Features

1. Stop facial expression detection

When the app switches to the background or the user exits the current page, call the stopDetectExpression API to stop facial expression detection.

// Stop facial expression detection.
void stopExpression() {
    // Stop detection when it is no longer needed.
    ZegoAvatarService.getInteractEngine().stopDetectExpression();
}
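
For example, when detection is tied to an Activity, the stop call can be placed in a lifecycle callback so it also runs when the page goes to the background (a sketch under that assumption):

// Stop facial expression detection when the page is no longer visible.
@Override
protected void onStop() {
    super.onStop();
    stopExpression();
}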

2. Stop exporting texture

Call the stopCaptureAvatar API to stop exporting the avatar texture.

// Stop exporting texture.
mCharacterHelper.stopCaptureAvatar();

3. Stop publishing streams, stop previewing, and log out of the room

Call the stopPublishingStream API to stop sending local audio and video streams to remote users. If local preview is enabled, call the stopPreview API to stop the preview. Call the logoutRoom API to log out of the room.

ZegoExpressEngine engine = ZegoExpressEngine.getEngine();
if (engine != null) {
    engine.stopPreview();
    engine.stopPublishingStream();
    engine.logoutRoom();
}

4. Destroy engine

If users no longer use the audio and video features, call the destroyEngine API to destroy the engine and release the resources, including the microphone, camera, memory, and CPU.

ZegoExpressEngine.destroyEngine(null);