The video call service provided by ZEGOCLOUD enables you to build audio and video applications through its flexible and easy-to-use API. The AI Effects add-on, another ZEGOCLOUD service, uses AI algorithms to implement a series of beautification features such as face beautification and face shape retouching.
By using the two SDKs together, you can combine these services to build a real-time application with beautification features, which is widely applicable to entertainment live streaming, live game streaming, video conferencing, and other live streaming scenarios.
Basic concepts
The ZegoExpress-Video SDK (hereafter called the Express SDK): the video call SDK provided by ZEGOCLOUD. It enables you to implement real-time audio and video features in live streaming, live co-hosting, and other scenarios.
The ZegoEffects SDK (hereafter called the Effects SDK): the AI effects SDK provided by ZEGOCLOUD. It provides AI-based image rendering and algorithm capabilities that enable you to implement face beautification, face shape retouch, background segmentation, face detection, and other features.
The overall process of using the two SDKs together is as follows:
Before you can use the AI features of the Effects SDK, you need to import the resources or models required for these features.
// Specify the absolute path of the face recognition model, which is required for features such as face detection, eyes enlarging, and face slimming.
// Specify the absolute path of the segmentation model, which is required for the background segmentation feature.
char* model_path_list[] = {"D:\\YOUR_APP\\FaceDetectionModel.bundle",
"D:\\YOUR_APP\\Segmentation.bundle"};
// Specify the absolute path of the resources.
char* resource_path_list[] = {"D:\\YOUR_APP\\FaceWhiteningResources.bundle",
"D:\\YOUR_APP\\PendantResources.bundle",
"D:\\YOUR_APP\\RosyResources.bundle",
"D:\\YOUR_APP\\TeethWhiteningResources.bundle"};
// Set the model path list. This must be called before calling the `create` method to create the ZegoEffects object.
zego_effects_set_models(model_path_list, 2);
// Set the resource path list. This must be called before calling the `create` method to create the ZegoEffects object.
zego_effects_set_resources(resource_path_list, 4);
For all resources and models that the Effects SDK supports, see Import resources and models.
To create an Effects SDK object, pass in the authentication file you obtained in the Prerequisites step.
// Pass in the authentication file you obtained.
zego_effects_create(&m_handle,"ABCDEFG");
To initialize the Effects SDK object, call the zego_effects_init_env method, and pass in the width and height of the incoming video data to be processed.
The following sample code processes 1280 × 720 video images:
// Initialize the Effects SDK object, and pass in the width and height of the incoming video data to be processed.
zego_effects_init_env(handle,1280,720);
To initialize the Express SDK, call the createEngine method.
ZegoEngineProfile profile;
profile.appID = appID;
profile.scenario = ZegoScenario::ZEGO_SCENARIO_GENERAL;
// Create a ZegoExpressEngine instance
auto engine = ZegoExpressSDK::createEngine(profile, nullptr);
The Express SDK provides two methods to get the raw video data. The differences between them are as follows; you can choose one based on your actual situation:
Method | Description | Advantages |
---|---|---|
Custom video pre-processing | The Express SDK captures the raw video data. | You do not need to manage the device input sources; simply manipulate the raw data thrown out by the Express SDK and pass it back to the Express SDK. |
Custom video capture | You capture the raw video data yourself. | When you need to integrate with multiple vendors, you can implement your services flexibly and optimize performance. |
To get the raw video data using the custom video pre-processing method, do the following:
a. Select the ZEGO_VIDEO_BUFFER_TYPE_RAW_DATA video frame data type.
b. Call the enableCustomVideoProcessing method to enable custom video pre-processing.
c. The SDK sends out the captured raw video data through the onCapturedUnprocessedRawData callback.
ZegoCustomVideoProcessConfig config;
config.bufferType = ZEGO_VIDEO_BUFFER_TYPE_RAW_DATA;
// Enable the custom video pre-processing.
engine->enableCustomVideoProcessing(true,&config);
For details, see Custom video pre-processing.
For details, see Custom video capture.
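If you choose the custom video capture method instead, the overall flow is to enable custom capture on the engine and then push the frames you collect yourself into the Express SDK. The following is only a minimal sketch; it assumes your Express SDK version exposes ZegoCustomVideoCaptureConfig, enableCustomVideoCapture, setCustomVideoCaptureHandler, and sendCustomVideoCaptureRawData, so verify the exact names against the Custom video capture documentation.
// A minimal custom video capture sketch (verify the API names for your SDK version).
class MyCaptureHandler : public IZegoCustomVideoCaptureHandler {
protected:
    void onStart(ZegoPublishChannel channel) override {
        // Start your own capture (camera, screen, file, etc.) here.
        // For every frame you collect, send it to the Express SDK, for example:
        // engine->sendCustomVideoCaptureRawData(data, dataLength, param, referenceTimeMillisecond);
    }
    void onStop(ZegoPublishChannel channel) override {
        // Stop your own capture here.
    }
};

ZegoCustomVideoCaptureConfig captureConfig;
captureConfig.bufferType = ZEGO_VIDEO_BUFFER_TYPE_RAW_DATA;
// Enable custom video capture before starting to preview or publish.
engine->enableCustomVideoCapture(true, &captureConfig);
engine->setCustomVideoCaptureHandler(std::make_shared<MyCaptureHandler>());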
In the callback that receives the raw video data, use the Effects SDK to implement the AI features.
After the AI features are applied, the Express SDK encodes the processed data and publishes it to the ZEGOCLOUD server, and remote users can then play the processed video streams (a minimal publish/play sketch follows the callback example below).
// Take using the custom video pre-processing method as an example.
// Obtain the raw video data through a callback.
// Listen for and handle the callback.
class MyHandler : public IZegoCustomVideoProcessHandler {
// ......
protected:
void onCapturedUnprocessedRawData(const unsigned char** data, unsigned int* dataLength, ZegoVideoFrameParam param, unsigned long long referenceTimeMillisecond, ZegoPublishChannel channel) override;
};
void MyHandler::onCapturedUnprocessedRawData(const unsigned char** data, unsigned int* dataLength, ZegoVideoFrameParam param, unsigned long long referenceTimeMillisecond, ZegoPublishChannel channel) {
// Receive texture from ZegoExpressEngine
int width = param.width;
int height = param.height;
int stride = param.strides[0];
QImage image(const_cast<unsigned char*>(data[0]), width, height, stride, QImage::Format_RGBA8888);
zego_effects_video_frame_param frameParam;
frameParam.format = zego_effects_video_frame_format_rgba32;
frameParam.width = image.width();
frameParam.height = image.height();
// Process buffer by ZegoEffects
zego_effects_process_image_buffer_rgb(m_handle,image.bits(), image.bytesPerLine() * image.height(),frameParam);
// Send processed texture to ZegoExpressEngine
engine->sendCustomVideoProcessedRawData((const unsigned char**)data,dataLength,param,referenceTimeMillisecond);
}
auto myHandler = std::make_shared<MyHandler>();
engine->setCustomVideoProcessHandler(myHandler);
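After the handler sends the processed frames back with sendCustomVideoProcessedRawData, publishing and playing work the same as in a regular video call. The snippet below is only a minimal sketch of that step; the room ID, user, and stream ID values are placeholders, and it assumes the standard loginRoom, startPublishingStream, and startPlayingStream methods of the Express SDK.
// Minimal publish/play sketch; "room1", "user1", and "stream1" are placeholder IDs.
ZegoUser user;
user.userID = "user1";
user.userName = "user1";
// Log in to a room before publishing.
engine->loginRoom("room1", user);
// Publish the locally captured (and beautified) stream.
engine->startPublishingStream("stream1");
// On the remote side, play the processed stream (pass a ZegoCanvas if you need rendering).
engine->startPlayingStream("stream1");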
To adjust the AI effects during stream publishing and playing, use the Effects SDK to make changes in real time.
// Enable the skin tone enhancement feature.
effects.enableWhiten(handle, true);
// Set the whitening intensity. The value range is [0, 100], and the default value is 50.
ZegoEffectsWhitenParam param;
param.intensity = 100;
effects.setWhitenParam(handle, &param);
For more AI features, see Face beautification, Face shape retouch, Background segmentation, Face detection, Stickers, and Filters.
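For instance, in the same pattern as the whitening snippet above, the following sketch enables skin smoothing. It assumes the Effects SDK exposes enableSmoothing, setSmoothingParam, and a ZegoEffectsSmoothingParam struct analogous to the whitening API; check the exact names in the Face beautification documentation.
// A sketch assuming a smoothing API analogous to the whitening API above.
// Enable the skin smoothing feature.
effects.enableSmoothing(handle, true);
// Set the smoothing intensity; the assumed value range is [0, 100].
ZegoEffectsSmoothingParam smoothingParam;
smoothingParam.intensity = 80;
effects.setSmoothingParam(handle, &smoothingParam);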