@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class GetContentModerationResult extends AmazonWebServiceResult<ResponseMetadata> implements Serializable, Cloneable
| Constructor and Description |
|---|
| `GetContentModerationResult()` |
| Modifier and Type | Method and Description |
|---|---|
| `GetContentModerationResult` | `clone()` |
| `boolean` | `equals(Object obj)` |
| `String` | `getJobStatus()`: The current status of the unsafe content analysis job. |
| `List<ContentModerationDetection>` | `getModerationLabels()`: The detected unsafe content labels and the time(s) they were detected. |
| `String` | `getModerationModelVersion()`: Version number of the moderation detection model that was used to detect unsafe content. |
| `String` | `getNextToken()`: If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels. |
| `String` | `getStatusMessage()`: If the job fails, StatusMessage provides a descriptive error message. |
| `VideoMetadata` | `getVideoMetadata()`: Information about a video that Amazon Rekognition analyzed. |
| `int` | `hashCode()` |
| `void` | `setJobStatus(String jobStatus)`: The current status of the unsafe content analysis job. |
| `void` | `setModerationLabels(Collection<ContentModerationDetection> moderationLabels)`: The detected unsafe content labels and the time(s) they were detected. |
| `void` | `setModerationModelVersion(String moderationModelVersion)`: Version number of the moderation detection model that was used to detect unsafe content. |
| `void` | `setNextToken(String nextToken)`: If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels. |
| `void` | `setStatusMessage(String statusMessage)`: If the job fails, StatusMessage provides a descriptive error message. |
| `void` | `setVideoMetadata(VideoMetadata videoMetadata)`: Information about a video that Amazon Rekognition analyzed. |
| `String` | `toString()`: Returns a string representation of this object. |
| `GetContentModerationResult` | `withJobStatus(String jobStatus)`: The current status of the unsafe content analysis job. |
| `GetContentModerationResult` | `withJobStatus(VideoJobStatus jobStatus)`: The current status of the unsafe content analysis job. |
| `GetContentModerationResult` | `withModerationLabels(Collection<ContentModerationDetection> moderationLabels)`: The detected unsafe content labels and the time(s) they were detected. |
| `GetContentModerationResult` | `withModerationLabels(ContentModerationDetection... moderationLabels)`: The detected unsafe content labels and the time(s) they were detected. |
| `GetContentModerationResult` | `withModerationModelVersion(String moderationModelVersion)`: Version number of the moderation detection model that was used to detect unsafe content. |
| `GetContentModerationResult` | `withNextToken(String nextToken)`: If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels. |
| `GetContentModerationResult` | `withStatusMessage(String statusMessage)`: If the job fails, StatusMessage provides a descriptive error message. |
| `GetContentModerationResult` | `withVideoMetadata(VideoMetadata videoMetadata)`: Information about a video that Amazon Rekognition analyzed. |
Methods inherited from class com.amazonaws.AmazonWebServiceResult: getSdkHttpMetadata, getSdkResponseMetadata, setSdkHttpMetadata, setSdkResponseMetadata
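For illustration, the fluent with* setters can be chained to populate a result object directly, which is handy when stubbing responses in unit tests. This sketch assumes the AWS SDK for Java 1.x Rekognition module (`com.amazonaws:aws-java-sdk-rekognition`) is on the classpath; the label values shown are made up.

```java
import com.amazonaws.services.rekognition.model.ContentModerationDetection;
import com.amazonaws.services.rekognition.model.GetContentModerationResult;
import com.amazonaws.services.rekognition.model.ModerationLabel;
import com.amazonaws.services.rekognition.model.VideoJobStatus;

public class StubResultExample {
    public static void main(String[] args) {
        // Build a result with chained with* setters, e.g. as a stub in unit tests.
        GetContentModerationResult result = new GetContentModerationResult()
                .withJobStatus(VideoJobStatus.SUCCEEDED)
                .withModerationModelVersion("3.0")
                .withModerationLabels(new ContentModerationDetection()
                        .withTimestamp(1500L)                  // milliseconds into the video
                        .withModerationLabel(new ModerationLabel()
                                .withName("Suggestive")
                                .withConfidence(87.5f)));

        System.out.println(result.getJobStatus());               // "SUCCEEDED"
        System.out.println(result.getModerationLabels().size()); // 1
    }
}
```

Note that `withModerationLabels(ContentModerationDetection...)` appends to any existing list, so repeated calls accumulate labels rather than replacing them.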
public void setJobStatus(String jobStatus)
The current status of the unsafe content analysis job.
Parameters: jobStatus - The current status of the unsafe content analysis job.
See Also: VideoJobStatus

public String getJobStatus()
The current status of the unsafe content analysis job.
See Also: VideoJobStatus

public GetContentModerationResult withJobStatus(String jobStatus)
The current status of the unsafe content analysis job.
Parameters: jobStatus - The current status of the unsafe content analysis job.
See Also: VideoJobStatus

public GetContentModerationResult withJobStatus(VideoJobStatus jobStatus)
The current status of the unsafe content analysis job.
Parameters: jobStatus - The current status of the unsafe content analysis job.
See Also: VideoJobStatus
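The job status methods are typically used to wait for an asynchronous analysis job to finish. A minimal polling sketch, assuming an `AmazonRekognition` client and a job ID returned by `StartContentModeration` (the `jobId` parameter and the 5-second back-off are placeholders):

```java
import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.model.GetContentModerationRequest;
import com.amazonaws.services.rekognition.model.GetContentModerationResult;
import com.amazonaws.services.rekognition.model.VideoJobStatus;

public class JobStatusPoller {
    // Polls GetContentModeration until the job reports SUCCEEDED or FAILED.
    public static GetContentModerationResult waitForJob(AmazonRekognition rekognition, String jobId)
            throws InterruptedException {
        while (true) {
            GetContentModerationResult result = rekognition.getContentModeration(
                    new GetContentModerationRequest().withJobId(jobId));
            if (!VideoJobStatus.IN_PROGRESS.toString().equals(result.getJobStatus())) {
                return result; // SUCCEEDED or FAILED; check getStatusMessage() on failure
            }
            Thread.sleep(5_000); // back off between polls
        }
    }
}
```

In production code, prefer the SNS completion notification that `StartContentModeration` can publish over busy polling.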
public void setStatusMessage(String statusMessage)
If the job fails, StatusMessage provides a descriptive error message.
Parameters: statusMessage - If the job fails, StatusMessage provides a descriptive error message.

public String getStatusMessage()
If the job fails, StatusMessage provides a descriptive error message.
Returns: If the job fails, StatusMessage provides a descriptive error message.

public GetContentModerationResult withStatusMessage(String statusMessage)
If the job fails, StatusMessage provides a descriptive error message.
Parameters: statusMessage - If the job fails, StatusMessage provides a descriptive error message.

public void setVideoMetadata(VideoMetadata videoMetadata)
Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from GetContentModeration.
Parameters: videoMetadata - Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from GetContentModeration.

public VideoMetadata getVideoMetadata()
Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from GetContentModeration.
Returns: Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from GetContentModeration.

public GetContentModerationResult withVideoMetadata(VideoMetadata videoMetadata)
Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from GetContentModeration.
Parameters: videoMetadata - Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from GetContentModeration.

public List<ContentModerationDetection> getModerationLabels()
The detected unsafe content labels and the time(s) they were detected.

public void setModerationLabels(Collection<ContentModerationDetection> moderationLabels)
The detected unsafe content labels and the time(s) they were detected.
Parameters: moderationLabels - The detected unsafe content labels and the time(s) they were detected.

public GetContentModerationResult withModerationLabels(ContentModerationDetection... moderationLabels)
The detected unsafe content labels and the time(s) they were detected.
NOTE: This method appends the values to the existing list (if any). Use setModerationLabels(java.util.Collection) or withModerationLabels(java.util.Collection) if you want to override the existing values.
Parameters: moderationLabels - The detected unsafe content labels and the time(s) they were detected.

public GetContentModerationResult withModerationLabels(Collection<ContentModerationDetection> moderationLabels)
The detected unsafe content labels and the time(s) they were detected.
Parameters: moderationLabels - The detected unsafe content labels and the time(s) they were detected.

public void setNextToken(String nextToken)
If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels.
Parameters: nextToken - If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels.

public String getNextToken()
If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels.

public GetContentModerationResult withNextToken(String nextToken)
If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels.
Parameters: nextToken - If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels.

public void setModerationModelVersion(String moderationModelVersion)
Version number of the moderation detection model that was used to detect unsafe content.
Parameters: moderationModelVersion - Version number of the moderation detection model that was used to detect unsafe content.

public String getModerationModelVersion()
Version number of the moderation detection model that was used to detect unsafe content.

public GetContentModerationResult withModerationModelVersion(String moderationModelVersion)
Version number of the moderation detection model that was used to detect unsafe content.
Parameters: moderationModelVersion - Version number of the moderation detection model that was used to detect unsafe content.

public String toString()
Returns a string representation of this object.
Overrides: toString in class Object
See Also: Object.toString()

public GetContentModerationResult clone()
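Because a single `GetContentModeration` response may be truncated, the NextToken accessors above are normally used in a paging loop. A sketch under the assumption that the job has already completed (the helper name `fetchAll` and the page size of 1000 are illustrative):

```java
import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.model.ContentModerationDetection;
import com.amazonaws.services.rekognition.model.GetContentModerationRequest;
import com.amazonaws.services.rekognition.model.GetContentModerationResult;

import java.util.ArrayList;
import java.util.List;

public class ModerationPager {
    // Collects all moderation detections for a finished job, following NextToken
    // until the service stops returning one.
    public static List<ContentModerationDetection> fetchAll(AmazonRekognition rekognition, String jobId) {
        List<ContentModerationDetection> detections = new ArrayList<>();
        String nextToken = null;
        do {
            GetContentModerationResult page = rekognition.getContentModeration(
                    new GetContentModerationRequest()
                            .withJobId(jobId)
                            .withMaxResults(1000)
                            .withNextToken(nextToken));
            if ("FAILED".equals(page.getJobStatus())) {
                throw new IllegalStateException(page.getStatusMessage());
            }
            detections.addAll(page.getModerationLabels());
            nextToken = page.getNextToken(); // null once the last page has been read
        } while (nextToken != null);
        return detections;
    }
}
```

Each page also carries the same `VideoMetadata` via `getVideoMetadata()`, so it only needs to be read from the first page.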
Copyright © 2013 Amazon Web Services, Inc. All Rights Reserved.