This document describes how to implement basic image processing with the ZegoEffects SDK.
Before implementing the basic image processing functionality, make sure you complete the following steps:
The following diagram shows the API call sequence of basic image processing with the ZegoEffects SDK:
Import the AI resources and models.
To use the SDK's AI features, you must import the necessary AI resources or models by calling the setResources
method. For details, see Quick starts - Import resources and models.
// Specify the absolute path of the face recognition model, which is required for various features including Face detection, eyes enlarging, and face slimming.
ArrayList<String> aiModeInfos = new ArrayList<>();
aiModeInfos.add("sdcard/xxx/xxxxx/FaceDetectionModel.bundle");
aiModeInfos.add("sdcard/xxx/xxxxx/Segmentation.bundle");
// Set the list of model paths, which must be called before calling the create method.
ZegoEffects.setResources(aiModeInfos);
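Because setResources takes absolute file paths, a missing or mistyped .bundle path is a common source of failures. The following is a generic sketch (not part of the ZegoEffects SDK) of a helper that filters a path list down to the entries that actually exist on disk, so a missing model file can be caught before calling setResources:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper, not a ZegoEffects API: keeps only the paths that
// exist on disk, so missing model bundles can be logged before setResources.
public class ModelPathChecker {
    public static List<String> existingPaths(List<String> paths) {
        List<String> found = new ArrayList<>();
        for (String p : paths) {
            if (new File(p).exists()) {
                found.add(p);
            }
        }
        return found;
    }
}
```

If the returned list is shorter than the input list, log the missing entries before proceeding; the SDK features backed by a missing model will otherwise fail at runtime.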
Deploy advanced configuration.
Call the setAdvancedConfig method to deploy advanced configuration items, such as the device performance level. For details, see Configure Device Performance Level.
ZegoEffectsAdvancedConfig config = new ZegoEffectsAdvancedConfig();
// Configure the device performance level here if needed.
ZegoEffects.setAdvancedConfig(config);
Create the Effects object.
Pass the AppID and AppSign obtained in Prerequisites to the create method. The SDK authenticates internally, creates an Effects object, and returns the corresponding error code in the callback.
ZegoEffects mEffects;
long appid = *******;
String appSign = "*******";
ZegoEffects.create(appid, appSign, applicationContext, (effects, errorCode) -> {
mEffects = effects;
//Execute custom logic
});
Alternatively, call the create method and pass in the content of the authentication file obtained in Prerequisites to create an Effects object.
// Create the Effects object, passing in the license content from the authentication file (use the actual content you obtained).
String license = "xxxxxxx";
ZegoEffects mEffects = ZegoEffects.create(license, applicationContext);
Call the initEnv
method to initialize the ZegoEffects
object, passing in the width and height of the original image to be processed.
// Initialize the ZegoEffects object, passing in the width and height of the original image to be processed.
mEffects.initEnv(1280, 720);
Call the following methods to enable the AI features you want to use.
enableWhiten
enableBigEyes
setPortraitSegmentationBackgroundPath
enablePortraitSegmentation
// 1. Enable the skin tone enhancement feature.
// 2. Enable the eyes enlarging feature.
// 3. Enable the AI portrait segmentation feature, passing in the absolute path of the segmented background image.
mEffects.enableWhiten(true)
        .enableBigEyes(true)
        .setPortraitSegmentationBackgroundPath("MY_BACKGROUND_PATH", ZegoEffectsScaleMode.ASPECT_FILL)
        .enablePortraitSegmentation(true);
Call the processTexture
method to perform image processing. The SDK also supports YUV, texture, and other formats. For details, see the following table:
Video frame type | Pixel format / Texture ID | Method |
---|---|---|
Buffer | | processImageBufferRGB |
Buffer | | processImageBufferYUV |
Texture | Texture ID | processTexture |
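When passing raw buffers rather than textures, the buffer length must match the declared width, height, and pixel format. As a generic sketch of the underlying arithmetic (this is standard pixel-format math, not a ZegoEffects API), the expected byte sizes for two common format families are:

```java
// Generic buffer-size math for common pixel formats. Not a ZegoEffects API;
// the SDK methods in the table above remain the source of truth.
public class FrameBufferSize {
    // RGBA32: 4 bytes per pixel.
    public static int rgba32Bytes(int width, int height) {
        return width * height * 4;
    }

    // YUV 4:2:0 (e.g. I420 or NV21): a full-resolution luma plane plus
    // two quarter-resolution chroma planes, i.e. 1.5 bytes per pixel.
    public static int yuv420Bytes(int width, int height) {
        return width * height * 3 / 2;
    }
}
```

For a 1280 x 720 frame this gives 3,686,400 bytes for RGBA32 and 1,382,400 bytes for YUV 4:2:0; a buffer of any other length indicates a mismatch between the frame parameters and the actual data.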
The following sample code calls the processTexture
method for image processing:
ZegoEffectsVideoFrameParam zegoEffectsVideoFrameParam = new ZegoEffectsVideoFrameParam();
zegoEffectsVideoFrameParam.setFormat(ZegoEffectsVideoFrameFormat.RGBA32);
zegoEffectsVideoFrameParam.setWidth(width);
zegoEffectsVideoFrameParam.setHeight(height);
// Pass in the textureID of the original video frame to be processed, and return the textureID of the processed video frame.
zegoTextureId = mEffects.processTexture(mTextureId, zegoEffectsVideoFrameParam);