@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class WebCrawlerConfiguration extends Object implements Serializable, Cloneable, StructuredPojo
Provides the configuration information required for the Amazon Kendra web crawler.
| Constructor and Description |
|---|
| WebCrawlerConfiguration() |
| Modifier and Type | Method and Description |
|---|---|
| WebCrawlerConfiguration | clone() |
| boolean | equals(Object obj) |
| AuthenticationConfiguration | getAuthenticationConfiguration() – Provides configuration information required to connect to websites using authentication. |
| Integer | getCrawlDepth() – Specifies the number of levels in a website that you want to crawl. |
| Float | getMaxContentSizePerPageInMegaBytes() – The maximum size (in MB) of a webpage or attachment to crawl. |
| Integer | getMaxLinksPerPage() – The maximum number of URLs on a webpage to include when crawling a website. |
| Integer | getMaxUrlsPerMinuteCrawlRate() – The maximum number of URLs crawled per website host per minute. |
| ProxyConfiguration | getProxyConfiguration() – Provides configuration information required to connect to your internal websites via a web proxy. |
| List<String> | getUrlExclusionPatterns() – The regular expression patterns for selecting URLs to exclude from the crawl. |
| List<String> | getUrlInclusionPatterns() – The regular expression patterns for selecting URLs to include in the crawl. |
| Urls | getUrls() – Specifies the seed or starting point URLs of the websites or the sitemap URLs of the websites you want to crawl. |
| int | hashCode() |
| void | marshall(ProtocolMarshaller protocolMarshaller) – Marshalls this structured data using the given ProtocolMarshaller. |
| void | setAuthenticationConfiguration(AuthenticationConfiguration authenticationConfiguration) – Provides configuration information required to connect to websites using authentication. |
| void | setCrawlDepth(Integer crawlDepth) – Specifies the number of levels in a website that you want to crawl. |
| void | setMaxContentSizePerPageInMegaBytes(Float maxContentSizePerPageInMegaBytes) – The maximum size (in MB) of a webpage or attachment to crawl. |
| void | setMaxLinksPerPage(Integer maxLinksPerPage) – The maximum number of URLs on a webpage to include when crawling a website. |
| void | setMaxUrlsPerMinuteCrawlRate(Integer maxUrlsPerMinuteCrawlRate) – The maximum number of URLs crawled per website host per minute. |
| void | setProxyConfiguration(ProxyConfiguration proxyConfiguration) – Provides configuration information required to connect to your internal websites via a web proxy. |
| void | setUrlExclusionPatterns(Collection<String> urlExclusionPatterns) – The regular expression patterns for selecting URLs to exclude from the crawl. |
| void | setUrlInclusionPatterns(Collection<String> urlInclusionPatterns) – The regular expression patterns for selecting URLs to include in the crawl. |
| void | setUrls(Urls urls) – Specifies the seed or starting point URLs of the websites or the sitemap URLs of the websites you want to crawl. |
| String | toString() – Returns a string representation of this object. |
| WebCrawlerConfiguration | withAuthenticationConfiguration(AuthenticationConfiguration authenticationConfiguration) – Provides configuration information required to connect to websites using authentication. |
| WebCrawlerConfiguration | withCrawlDepth(Integer crawlDepth) – Specifies the number of levels in a website that you want to crawl. |
| WebCrawlerConfiguration | withMaxContentSizePerPageInMegaBytes(Float maxContentSizePerPageInMegaBytes) – The maximum size (in MB) of a webpage or attachment to crawl. |
| WebCrawlerConfiguration | withMaxLinksPerPage(Integer maxLinksPerPage) – The maximum number of URLs on a webpage to include when crawling a website. |
| WebCrawlerConfiguration | withMaxUrlsPerMinuteCrawlRate(Integer maxUrlsPerMinuteCrawlRate) – The maximum number of URLs crawled per website host per minute. |
| WebCrawlerConfiguration | withProxyConfiguration(ProxyConfiguration proxyConfiguration) – Provides configuration information required to connect to your internal websites via a web proxy. |
| WebCrawlerConfiguration | withUrlExclusionPatterns(Collection<String> urlExclusionPatterns) – The regular expression patterns for selecting URLs to exclude from the crawl. |
| WebCrawlerConfiguration | withUrlExclusionPatterns(String... urlExclusionPatterns) – The regular expression patterns for selecting URLs to exclude from the crawl. |
| WebCrawlerConfiguration | withUrlInclusionPatterns(Collection<String> urlInclusionPatterns) – The regular expression patterns for selecting URLs to include in the crawl. |
| WebCrawlerConfiguration | withUrlInclusionPatterns(String... urlInclusionPatterns) – The regular expression patterns for selecting URLs to include in the crawl. |
| WebCrawlerConfiguration | withUrls(Urls urls) – Specifies the seed or starting point URLs of the websites or the sitemap URLs of the websites you want to crawl. |
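All of the with* mutators return this object, so a full crawler configuration can be assembled in a single chained expression. A minimal sketch for illustration; the seed URL is a placeholder, and the companion Urls and SeedUrlConfiguration classes are assumed to come from the same com.amazonaws.services.kendra.model package:

```java
import com.amazonaws.services.kendra.model.SeedUrlConfiguration;
import com.amazonaws.services.kendra.model.Urls;
import com.amazonaws.services.kendra.model.WebCrawlerConfiguration;

public class WebCrawlerConfigurationExample {
    public static void main(String[] args) {
        // Starting point for the crawl; "https://docs.example.com" is a placeholder.
        Urls urls = new Urls().withSeedUrlConfiguration(
                new SeedUrlConfiguration().withSeedUrls("https://docs.example.com"));

        // Each with* call returns this object, so the settings chain together.
        // The values shown mirror the documented defaults.
        WebCrawlerConfiguration config = new WebCrawlerConfiguration()
                .withUrls(urls)
                .withCrawlDepth(2)
                .withMaxLinksPerPage(100)
                .withMaxContentSizePerPageInMegaBytes(50.0f)
                .withMaxUrlsPerMinuteCrawlRate(300);

        System.out.println(config); // toString() gives a readable summary of the POJO
    }
}
```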
public void setUrls(Urls urls)
Specifies the seed or starting point URLs of the websites or the sitemap URLs of the websites you want to crawl.
You can include website subdomains. You can list up to 100 seed URLs and up to three sitemap URLs.
When selecting websites to index, you must adhere to the Amazon Acceptable Use Policy and all other Amazon terms. Remember that you must only use the Amazon Kendra web crawler to index your own webpages, or webpages that you have authorization to index.
Parameters:
urls - Specifies the seed or starting point URLs of the websites or the sitemap URLs of the websites you want to crawl. You can include website subdomains. You can list up to 100 seed URLs and up to three sitemap URLs. When selecting websites to index, you must adhere to the Amazon Acceptable Use Policy and all other Amazon terms. Remember that you must only use the Amazon Kendra web crawler to index your own webpages, or webpages that you have authorization to index.
public Urls getUrls()
Specifies the seed or starting point URLs of the websites or the sitemap URLs of the websites you want to crawl.
You can include website subdomains. You can list up to 100 seed URLs and up to three sitemap URLs.
When selecting websites to index, you must adhere to the Amazon Acceptable Use Policy and all other Amazon terms. Remember that you must only use the Amazon Kendra web crawler to index your own webpages, or webpages that you have authorization to index.
Returns:
The seed or starting point URLs of the websites or the sitemap URLs of the websites you want to crawl. You can include website subdomains. You can list up to 100 seed URLs and up to three sitemap URLs. When selecting websites to index, you must adhere to the Amazon Acceptable Use Policy and all other Amazon terms. Remember that you must only use the Amazon Kendra web crawler to index your own webpages, or webpages that you have authorization to index.
public WebCrawlerConfiguration withUrls(Urls urls)
Specifies the seed or starting point URLs of the websites or the sitemap URLs of the websites you want to crawl.
You can include website subdomains. You can list up to 100 seed URLs and up to three sitemap URLs.
When selecting websites to index, you must adhere to the Amazon Acceptable Use Policy and all other Amazon terms. Remember that you must only use the Amazon Kendra web crawler to index your own webpages, or webpages that you have authorization to index.
Parameters:
urls - Specifies the seed or starting point URLs of the websites or the sitemap URLs of the websites you want to crawl. You can include website subdomains. You can list up to 100 seed URLs and up to three sitemap URLs. When selecting websites to index, you must adhere to the Amazon Acceptable Use Policy and all other Amazon terms. Remember that you must only use the Amazon Kendra web crawler to index your own webpages, or webpages that you have authorization to index.
Returns:
Returns a reference to this object so that method calls can be chained together.
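A sketch of supplying both seed URLs and a sitemap URL, assuming the companion SeedUrlConfiguration and SiteMapsConfiguration classes from the same model package; the "SUBDOMAINS" crawler mode and every URL below are illustrative placeholders:

```java
import com.amazonaws.services.kendra.model.SeedUrlConfiguration;
import com.amazonaws.services.kendra.model.SiteMapsConfiguration;
import com.amazonaws.services.kendra.model.Urls;
import com.amazonaws.services.kendra.model.WebCrawlerConfiguration;

public class UrlsExample {
    public static void main(String[] args) {
        Urls urls = new Urls()
                // Up to 100 seed URLs can be listed.
                .withSeedUrlConfiguration(new SeedUrlConfiguration()
                        .withSeedUrls("https://www.example.com", "https://blog.example.com")
                        // "SUBDOMAINS" asks the crawler to include website subdomains.
                        .withWebCrawlerMode("SUBDOMAINS"))
                // Up to three sitemap URLs can be listed.
                .withSiteMapsConfiguration(new SiteMapsConfiguration()
                        .withSiteMaps("https://www.example.com/sitemap.xml"));

        WebCrawlerConfiguration config = new WebCrawlerConfiguration().withUrls(urls);
    }
}
```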
public void setCrawlDepth(Integer crawlDepth)
Specifies the number of levels in a website that you want to crawl.
The first level begins from the website seed or starting point URL. For example, if a website has 3 levels – index level (i.e. seed in this example), sections level, and subsections level – and you are only interested in crawling information up to the sections level (i.e. levels 0-1), you can set your depth to 1.
The default crawl depth is set to 2.
Parameters:
crawlDepth - Specifies the number of levels in a website that you want to crawl. The first level begins from the website seed or starting point URL. For example, if a website has 3 levels – index level (i.e. seed in this example), sections level, and subsections level – and you are only interested in crawling information up to the sections level (i.e. levels 0-1), you can set your depth to 1. The default crawl depth is set to 2.
public Integer getCrawlDepth()
Specifies the number of levels in a website that you want to crawl.
The first level begins from the website seed or starting point URL. For example, if a website has 3 levels – index level (i.e. seed in this example), sections level, and subsections level – and you are only interested in crawling information up to the sections level (i.e. levels 0-1), you can set your depth to 1.
The default crawl depth is set to 2.
Returns:
The number of levels in a website that you want to crawl. The first level begins from the website seed or starting point URL. For example, if a website has 3 levels – index level (i.e. seed in this example), sections level, and subsections level – and you are only interested in crawling information up to the sections level (i.e. levels 0-1), you can set your depth to 1. The default crawl depth is set to 2.
public WebCrawlerConfiguration withCrawlDepth(Integer crawlDepth)
Specifies the number of levels in a website that you want to crawl.
The first level begins from the website seed or starting point URL. For example, if a website has 3 levels – index level (i.e. seed in this example), sections level, and subsections level – and you are only interested in crawling information up to the sections level (i.e. levels 0-1), you can set your depth to 1.
The default crawl depth is set to 2.
Parameters:
crawlDepth - Specifies the number of levels in a website that you want to crawl. The first level begins from the website seed or starting point URL. For example, if a website has 3 levels – index level (i.e. seed in this example), sections level, and subsections level – and you are only interested in crawling information up to the sections level (i.e. levels 0-1), you can set your depth to 1. The default crawl depth is set to 2.
Returns:
Returns a reference to this object so that method calls can be chained together.
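For instance, to stop at the sections level of the three-level site described above, the depth would be set to 1; a sketch using only methods documented on this page:

```java
import com.amazonaws.services.kendra.model.WebCrawlerConfiguration;

public class CrawlDepthExample {
    public static void main(String[] args) {
        // Depth 1 covers the seed page (level 0) plus one level of linked pages.
        WebCrawlerConfiguration config = new WebCrawlerConfiguration().withCrawlDepth(1);
        System.out.println(config.getCrawlDepth()); // prints 1
    }
}
```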
public void setMaxLinksPerPage(Integer maxLinksPerPage)
The maximum number of URLs on a webpage to include when crawling a website. This number is per webpage.
As a website’s webpages are crawled, any URLs the webpages link to are also crawled. URLs on a webpage are crawled in order of appearance.
The default maximum links per page is 100.
Parameters:
maxLinksPerPage - The maximum number of URLs on a webpage to include when crawling a website. This number is per webpage. As a website’s webpages are crawled, any URLs the webpages link to are also crawled. URLs on a webpage are crawled in order of appearance. The default maximum links per page is 100.
public Integer getMaxLinksPerPage()
The maximum number of URLs on a webpage to include when crawling a website. This number is per webpage.
As a website’s webpages are crawled, any URLs the webpages link to are also crawled. URLs on a webpage are crawled in order of appearance.
The default maximum links per page is 100.
Returns:
The maximum number of URLs on a webpage to include when crawling a website. This number is per webpage. As a website’s webpages are crawled, any URLs the webpages link to are also crawled. URLs on a webpage are crawled in order of appearance. The default maximum links per page is 100.
public WebCrawlerConfiguration withMaxLinksPerPage(Integer maxLinksPerPage)
The maximum number of URLs on a webpage to include when crawling a website. This number is per webpage.
As a website’s webpages are crawled, any URLs the webpages link to are also crawled. URLs on a webpage are crawled in order of appearance.
The default maximum links per page is 100.
Parameters:
maxLinksPerPage - The maximum number of URLs on a webpage to include when crawling a website. This number is per webpage. As a website’s webpages are crawled, any URLs the webpages link to are also crawled. URLs on a webpage are crawled in order of appearance. The default maximum links per page is 100.
Returns:
Returns a reference to this object so that method calls can be chained together.
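A short sketch of tightening this limit; the plain setter and the fluent with* variant are interchangeable here:

```java
import com.amazonaws.services.kendra.model.WebCrawlerConfiguration;

public class MaxLinksPerPageExample {
    public static void main(String[] args) {
        WebCrawlerConfiguration config = new WebCrawlerConfiguration();
        // Follow only the first 25 links on each page, in order of appearance.
        config.setMaxLinksPerPage(25);
        System.out.println(config.getMaxLinksPerPage()); // prints 25
    }
}
```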
public void setMaxContentSizePerPageInMegaBytes(Float maxContentSizePerPageInMegaBytes)
The maximum size (in MB) of a webpage or attachment to crawl.
Files larger than this size (in MB) are skipped and not crawled.
The default maximum size of a webpage or attachment is set to 50 MB.
Parameters:
maxContentSizePerPageInMegaBytes - The maximum size (in MB) of a webpage or attachment to crawl. Files larger than this size (in MB) are skipped and not crawled. The default maximum size of a webpage or attachment is set to 50 MB.
public Float getMaxContentSizePerPageInMegaBytes()
The maximum size (in MB) of a webpage or attachment to crawl.
Files larger than this size (in MB) are skipped and not crawled.
The default maximum size of a webpage or attachment is set to 50 MB.
Returns:
The maximum size (in MB) of a webpage or attachment to crawl. Files larger than this size (in MB) are skipped and not crawled. The default maximum size of a webpage or attachment is set to 50 MB.
public WebCrawlerConfiguration withMaxContentSizePerPageInMegaBytes(Float maxContentSizePerPageInMegaBytes)
The maximum size (in MB) of a webpage or attachment to crawl.
Files larger than this size (in MB) are skipped and not crawled.
The default maximum size of a webpage or attachment is set to 50 MB.
Parameters:
maxContentSizePerPageInMegaBytes - The maximum size (in MB) of a webpage or attachment to crawl. Files larger than this size (in MB) are skipped and not crawled. The default maximum size of a webpage or attachment is set to 50 MB.
Returns:
Returns a reference to this object so that method calls can be chained together.
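Because the limit is a Float, fractional sizes are expressible; a sketch that lowers the 50 MB default to 2.5 MB:

```java
import com.amazonaws.services.kendra.model.WebCrawlerConfiguration;

public class MaxContentSizeExample {
    public static void main(String[] args) {
        // Pages or attachments larger than 2.5 MB will be skipped.
        WebCrawlerConfiguration config = new WebCrawlerConfiguration()
                .withMaxContentSizePerPageInMegaBytes(2.5f);
        System.out.println(config.getMaxContentSizePerPageInMegaBytes()); // prints 2.5
    }
}
```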
public void setMaxUrlsPerMinuteCrawlRate(Integer maxUrlsPerMinuteCrawlRate)
The maximum number of URLs crawled per website host per minute.
A minimum of one URL is required.
The default maximum number of URLs crawled per website host per minute is 300.
Parameters:
maxUrlsPerMinuteCrawlRate - The maximum number of URLs crawled per website host per minute. A minimum of one URL is required. The default maximum number of URLs crawled per website host per minute is 300.
public Integer getMaxUrlsPerMinuteCrawlRate()
The maximum number of URLs crawled per website host per minute.
A minimum of one URL is required.
The default maximum number of URLs crawled per website host per minute is 300.
Returns:
The maximum number of URLs crawled per website host per minute. A minimum of one URL is required. The default maximum number of URLs crawled per website host per minute is 300.
public WebCrawlerConfiguration withMaxUrlsPerMinuteCrawlRate(Integer maxUrlsPerMinuteCrawlRate)
The maximum number of URLs crawled per website host per minute.
A minimum of one URL is required.
The default maximum number of URLs crawled per website host per minute is 300.
Parameters:
maxUrlsPerMinuteCrawlRate - The maximum number of URLs crawled per website host per minute. A minimum of one URL is required. The default maximum number of URLs crawled per website host per minute is 300.
Returns:
Returns a reference to this object so that method calls can be chained together.
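A sketch that throttles the crawler below the 300-URL-per-minute default, for example to be gentle on a small internal host:

```java
import com.amazonaws.services.kendra.model.WebCrawlerConfiguration;

public class CrawlRateExample {
    public static void main(String[] args) {
        // At most 60 URLs per website host per minute; the minimum allowed is 1.
        WebCrawlerConfiguration config = new WebCrawlerConfiguration()
                .withMaxUrlsPerMinuteCrawlRate(60);
    }
}
```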
public List<String> getUrlInclusionPatterns()
The regular expression patterns for selecting URLs to include in the crawl.
If an exclusion pattern conflicts with an inclusion pattern, the exclusion pattern takes precedence.
Returns:
The regular expression patterns for selecting URLs to include in the crawl. If an exclusion pattern conflicts with an inclusion pattern, the exclusion pattern takes precedence.
public void setUrlInclusionPatterns(Collection<String> urlInclusionPatterns)
The regular expression patterns for selecting URLs to include in the crawl.
If an exclusion pattern conflicts with an inclusion pattern, the exclusion pattern takes precedence.
Parameters:
urlInclusionPatterns - The regular expression patterns for selecting URLs to include in the crawl. If an exclusion pattern conflicts with an inclusion pattern, the exclusion pattern takes precedence.
public WebCrawlerConfiguration withUrlInclusionPatterns(String... urlInclusionPatterns)
The regular expression patterns for selecting URLs to include in the crawl.
If an exclusion pattern conflicts with an inclusion pattern, the exclusion pattern takes precedence.
NOTE: This method appends the values to the existing list (if any). Use setUrlInclusionPatterns(java.util.Collection) or withUrlInclusionPatterns(java.util.Collection) if you want to override the existing values.
Parameters:
urlInclusionPatterns - The regular expression patterns for selecting URLs to include in the crawl. If an exclusion pattern conflicts with an inclusion pattern, the exclusion pattern takes precedence.
Returns:
Returns a reference to this object so that method calls can be chained together.
public WebCrawlerConfiguration withUrlInclusionPatterns(Collection<String> urlInclusionPatterns)
The regular expression patterns for selecting URLs to include in the crawl.
If an exclusion pattern conflicts with an inclusion pattern, the exclusion pattern takes precedence.
Parameters:
urlInclusionPatterns - The regular expression patterns for selecting URLs to include in the crawl. If an exclusion pattern conflicts with an inclusion pattern, the exclusion pattern takes precedence.
Returns:
Returns a reference to this object so that method calls can be chained together.
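The append-versus-override distinction between the varargs and Collection overloads is easy to trip over; a sketch with placeholder patterns:

```java
import java.util.Arrays;

import com.amazonaws.services.kendra.model.WebCrawlerConfiguration;

public class InclusionPatternsExample {
    public static void main(String[] args) {
        WebCrawlerConfiguration config = new WebCrawlerConfiguration();

        // The varargs overload appends to the existing list.
        config.withUrlInclusionPatterns(".*/docs/.*");
        config.withUrlInclusionPatterns(".*/blog/.*");
        System.out.println(config.getUrlInclusionPatterns()); // [.*/docs/.*, .*/blog/.*]

        // The Collection overloads replace the list wholesale.
        config.setUrlInclusionPatterns(Arrays.asList(".*/docs/.*"));
        System.out.println(config.getUrlInclusionPatterns()); // [.*/docs/.*]
    }
}
```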
public List<String> getUrlExclusionPatterns()
The regular expression patterns for selecting URLs to exclude from the crawl.
If an inclusion pattern conflicts with an exclusion pattern, the exclusion pattern takes precedence.
Returns:
The regular expression patterns for selecting URLs to exclude from the crawl. If an inclusion pattern conflicts with an exclusion pattern, the exclusion pattern takes precedence.
public void setUrlExclusionPatterns(Collection<String> urlExclusionPatterns)
The regular expression patterns for selecting URLs to exclude from the crawl.
If an inclusion pattern conflicts with an exclusion pattern, the exclusion pattern takes precedence.
Parameters:
urlExclusionPatterns - The regular expression patterns for selecting URLs to exclude from the crawl. If an inclusion pattern conflicts with an exclusion pattern, the exclusion pattern takes precedence.
public WebCrawlerConfiguration withUrlExclusionPatterns(String... urlExclusionPatterns)
The regular expression patterns for selecting URLs to exclude from the crawl.
If an inclusion pattern conflicts with an exclusion pattern, the exclusion pattern takes precedence.
NOTE: This method appends the values to the existing list (if any). Use setUrlExclusionPatterns(java.util.Collection) or withUrlExclusionPatterns(java.util.Collection) if you want to override the existing values.
Parameters:
urlExclusionPatterns - The regular expression patterns for selecting URLs to exclude from the crawl. If an inclusion pattern conflicts with an exclusion pattern, the exclusion pattern takes precedence.
Returns:
Returns a reference to this object so that method calls can be chained together.
public WebCrawlerConfiguration withUrlExclusionPatterns(Collection<String> urlExclusionPatterns)
The regular expression patterns for selecting URLs to exclude from the crawl.
If an inclusion pattern conflicts with an exclusion pattern, the exclusion pattern takes precedence.
Parameters:
urlExclusionPatterns - The regular expression patterns for selecting URLs to exclude from the crawl. If an inclusion pattern conflicts with an exclusion pattern, the exclusion pattern takes precedence.
Returns:
Returns a reference to this object so that method calls can be chained together.
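A sketch of combining both pattern lists; by the precedence rule above, a URL such as https://www.example.com/docs/archive/old.html matches both patterns and is therefore excluded (all patterns are placeholders):

```java
import com.amazonaws.services.kendra.model.WebCrawlerConfiguration;

public class ExclusionPatternsExample {
    public static void main(String[] args) {
        WebCrawlerConfiguration config = new WebCrawlerConfiguration()
                .withUrlInclusionPatterns(".*/docs/.*")     // crawl documentation pages...
                .withUrlExclusionPatterns(".*/archive/.*"); // ...but skip archived ones
    }
}
```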
public void setProxyConfiguration(ProxyConfiguration proxyConfiguration)
Provides configuration information required to connect to your internal websites via a web proxy.
You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is "a.example.com" and the port is 443, the standard port for HTTPS.
Web proxy credentials are optional and you can use them to connect to a web proxy server that requires basic authentication. To store web proxy credentials, you use a secret in AWS Secrets Manager.
Parameters:
proxyConfiguration - Provides configuration information required to connect to your internal websites via a web proxy. You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is "a.example.com" and the port is 443, the standard port for HTTPS. Web proxy credentials are optional and you can use them to connect to a web proxy server that requires basic authentication. To store web proxy credentials, you use a secret in AWS Secrets Manager.
public ProxyConfiguration getProxyConfiguration()
Provides configuration information required to connect to your internal websites via a web proxy.
You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is "a.example.com" and the port is 443, the standard port for HTTPS.
Web proxy credentials are optional and you can use them to connect to a web proxy server that requires basic authentication. To store web proxy credentials, you use a secret in AWS Secrets Manager.
Returns:
Provides configuration information required to connect to your internal websites via a web proxy. You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is "a.example.com" and the port is 443, the standard port for HTTPS. Web proxy credentials are optional and you can use them to connect to a web proxy server that requires basic authentication. To store web proxy credentials, you use a secret in AWS Secrets Manager.
public WebCrawlerConfiguration withProxyConfiguration(ProxyConfiguration proxyConfiguration)
Provides configuration information required to connect to your internal websites via a web proxy.
You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is "a.example.com" and the port is 443, the standard port for HTTPS.
Web proxy credentials are optional and you can use them to connect to a web proxy server that requires basic authentication. To store web proxy credentials, you use a secret in AWS Secrets Manager.
Parameters:
proxyConfiguration - Provides configuration information required to connect to your internal websites via a web proxy. You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is "a.example.com" and the port is 443, the standard port for HTTPS. Web proxy credentials are optional and you can use them to connect to a web proxy server that requires basic authentication. To store web proxy credentials, you use a secret in AWS Secrets Manager.
Returns:
Returns a reference to this object so that method calls can be chained together.
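A sketch of wiring in a proxy, assuming the companion ProxyConfiguration class exposes host, port, and credentials members as described above; the host name and the Secrets Manager secret ARN are placeholders:

```java
import com.amazonaws.services.kendra.model.ProxyConfiguration;
import com.amazonaws.services.kendra.model.WebCrawlerConfiguration;

public class ProxyExample {
    public static void main(String[] args) {
        ProxyConfiguration proxy = new ProxyConfiguration()
                .withHost("proxy.example.com") // host name only, no scheme or path
                .withPort(443)                 // standard HTTPS port
                // Optional: ARN of an AWS Secrets Manager secret holding basic-auth credentials.
                .withCredentials("arn:aws:secretsmanager:us-east-1:123456789012:secret:proxy-creds");

        WebCrawlerConfiguration config = new WebCrawlerConfiguration()
                .withProxyConfiguration(proxy);
    }
}
```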
public void setAuthenticationConfiguration(AuthenticationConfiguration authenticationConfiguration)
Provides configuration information required to connect to websites using authentication.
You can connect to websites using basic authentication with a user name and password.
You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is "a.example.com" and the port is 443, the standard port for HTTPS. You use a secret in AWS Secrets Manager to store your authentication credentials.
Parameters:
authenticationConfiguration - Provides configuration information required to connect to websites using authentication. You can connect to websites using basic authentication with a user name and password. You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is "a.example.com" and the port is 443, the standard port for HTTPS. You use a secret in AWS Secrets Manager to store your authentication credentials.
public AuthenticationConfiguration getAuthenticationConfiguration()
Provides configuration information required to connect to websites using authentication.
You can connect to websites using basic authentication with a user name and password.
You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is "a.example.com" and the port is 443, the standard port for HTTPS. You use a secret in AWS Secrets Manager to store your authentication credentials.
Returns:
Provides configuration information required to connect to websites using authentication. You can connect to websites using basic authentication with a user name and password. You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is "a.example.com" and the port is 443, the standard port for HTTPS. You use a secret in AWS Secrets Manager to store your authentication credentials.
public WebCrawlerConfiguration withAuthenticationConfiguration(AuthenticationConfiguration authenticationConfiguration)
Provides configuration information required to connect to websites using authentication.
You can connect to websites using basic authentication with a user name and password.
You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is "a.example.com" and the port is 443, the standard port for HTTPS. You use a secret in AWS Secrets Manager to store your authentication credentials.
Parameters:
authenticationConfiguration - Provides configuration information required to connect to websites using authentication. You can connect to websites using basic authentication with a user name and password. You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is "a.example.com" and the port is 443, the standard port for HTTPS. You use a secret in AWS Secrets Manager to store your authentication credentials.
Returns:
Returns a reference to this object so that method calls can be chained together.
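A sketch of basic authentication for one protected site, assuming the companion AuthenticationConfiguration and BasicAuthenticationConfiguration classes follow the same fluent pattern; the host and secret ARN are placeholders:

```java
import com.amazonaws.services.kendra.model.AuthenticationConfiguration;
import com.amazonaws.services.kendra.model.BasicAuthenticationConfiguration;
import com.amazonaws.services.kendra.model.WebCrawlerConfiguration;

public class AuthenticationExample {
    public static void main(String[] args) {
        AuthenticationConfiguration auth = new AuthenticationConfiguration()
                .withBasicAuthentication(new BasicAuthenticationConfiguration()
                        .withHost("a.example.com") // host name of the protected site
                        .withPort(443)             // standard HTTPS port
                        // ARN of the AWS Secrets Manager secret with the user name and password.
                        .withCredentials("arn:aws:secretsmanager:us-east-1:123456789012:secret:site-creds"));

        WebCrawlerConfiguration config = new WebCrawlerConfiguration()
                .withAuthenticationConfiguration(auth);
    }
}
```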
public String toString()
Returns a string representation of this object.
Overrides:
toString in class Object
See Also:
Object.toString()
public WebCrawlerConfiguration clone()
public void marshall(ProtocolMarshaller protocolMarshaller)
Description copied from interface: StructuredPojo
Marshalls this structured data using the given ProtocolMarshaller.
Specified by:
marshall in interface StructuredPojo
Parameters:
protocolMarshaller - Implementation of ProtocolMarshaller used to marshall this object's data.