
Snowplow collector cannot use HTTPS on port 443

How to fix the Snowplow collector failing to use HTTPS on port 443

I am trying to enable SSL at the collector level and run it on port 443, but I cannot get it to work. The collector runs fine on plain HTTP ports (port 80 or any other port), but not on port 443 with SSL enabled. I followed the instructions in the official Snowplow collector documentation, but I am still missing something. Below is the configuration I am using for the collector. I have already created a self-signed certificate and converted it to PKCS12 format.
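For reference, a self-signed certificate can be generated and converted to PKCS12 roughly as follows; the file names, subject, and keystore password here are illustrative assumptions, not the exact values from my setup:

```sh
# Generate a self-signed certificate and private key (subject is a placeholder)
openssl req -x509 -newkey rsa:4096 -nodes \
  -keyout collector.key -out collector.crt \
  -days 365 -subj "/CN=devopslearn.online"

# Bundle the key and certificate into a PKCS12 keystore for the JVM
# (the password must match whatever the collector's ssl-config expects)
openssl pkcs12 -export \
  -inkey collector.key -in collector.crt \
  -out collector.p12 -name collector \
  -passout pass:changeme
```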

If I comment out the port 80 line, the collector throws an error on that line. If I leave it uncommented, the collector starts on port 80. Please help me figure out how to configure the collector for HTTPS.

```
#
# This program is licensed to you under the Apache License Version 2.0, and
# you may not use this file except in compliance with the Apache License
# Version 2.0.  You may obtain a copy of the Apache License Version 2.0 at
# http://www.apache.org/licenses/LICENSE-2.0.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the Apache License Version 2.0 is distributed on an "AS
# IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.  See the Apache License Version 2.0 for the specific language
# governing permissions and limitations there under.

# This file (config.hocon.sample) contains a template with
# configuration options for the Scala Stream Collector.
#
# To use, copy this to 'application.conf' and modify the configuration options.

# 'collector' contains configuration options for the main Scala collector.
collector {
# The collector runs as a web service specified on the following interface and port.
interface = "0.0.0.0"
port = 80

# optional SSL/TLS configuration
ssl {
  enable = true
  # whether to redirect HTTP to HTTPS
  redirect = true
  port = 443
}

paths {
  # "/com.acme/track" = "/com.sNowplowanalytics.sNowplow/tp2"
  # "/com.acme/redirect" = "/r/tp2"
  # "/com.acme/iglu" = "/com.sNowplowanalytics.iglu/v1"
}

# Configure the P3P policy header.
p3p {
  policyRef = "/w3c/p3p.xml"
  CP = "NOI DSP COR NID PSA OUR IND COM NAV STA"
}

# Cross domain policy configuration.
# If "enabled" is set to "false",the collector will respond with a 404 to the /crossdomain.xml
# route.
crossDomain {
  enabled = false
  # Domains that are granted access, *.acme.com will match http://acme.com and
  # http://sub.acme.com
  domains = [ "*" ]
  # Whether to only grant access to HTTPS or both HTTPS and HTTP sources
  secure = true
}

# The collector returns a cookie to clients for user identification
# with the following domain and expiration.
cookie {
  enabled = true
  expiration = "365 days" # e.g. "365 days"
  # Network cookie name
  name = sp
  domains = ["devopslearn.online" # e.g. "domain.com" -> any origin domain ending with this will be 
  matched and domain.com will be returned
    # ... more domains
 ]
# ... more domains
  # If specified, the fallback domain will be used if none of the Origin header hosts matches the list of
  # cookie domains configured above. (For example, if there is no Origin header.)
  fallbackDomain = "devopslearn.online"
  secure = true
  httpOnly = false
  # The sameSite is optional. You can choose to not specify the attribute, or you can use `Strict`,
  # `Lax` or `None` to limit the cookie sent context.
  #   Strict: the cookie will only be sent along with "same-site" requests.
  #   Lax: the cookie will be sent with same-site requests, and with cross-site top-level navigation.
  #   None: the cookie will be sent with same-site and cross-site requests.
  sameSite = "None"
}

# If you have a do not track cookie in place, the Scala Stream Collector can respect it by
# completely bypassing the processing of an incoming request carrying this cookie, the collector
# will simply reply by a 200 saying "do not track".
# The cookie name and value must match the configuration below, where the names of the cookies must
# match entirely and the value could be a regular expression.
doNotTrackCookie {
  enabled = false
  name = dnt
  value = "[Tt][Rr][Uu][Ee]"
}

# When enabled and the cookie specified above is missing, performs a redirect to itself to check
# if third-party cookies are blocked using the specified name. If they are indeed blocked,
# fallbackNetworkId is used instead of generating a new random one.
cookieBounce {
  enabled = false
  # The name of the request parameter which will be used on redirects checking that third-party
  # cookies work.
  name = "n3pc"
  name = ${?COLLECTOR_COOKIE_BOUNCE_NAME}
  # Network user id to fallback to when third-party cookies are blocked.
  fallbackNetworkUserId = "00000000-0000-4000-A000-000000000000"
  # Optionally, specify the name of the header containing the originating protocol for use in the
  # bounce redirect location. Use this if behind a load balancer that performs SSL termination.
  # The value of this header must be http or https. Example, if behind an AWS Classic ELB.
  forwardedProtocolHeader = "X-Forwarded-Proto"
}

# When enabled, redirect prefix `r/` will be enabled and its query parameters resolved.
# Otherwise the request prefixed with `r/` will be dropped with `404 Not Found`
# Custom redirects configured in `paths` can still be used.
enableDefaultRedirect = false

# When enabled, the redirect url passed via the `u` query parameter is scanned for a placeholder
# token. All instances of that token are replaced with the network ID. If the placeholder isn't
# specified, the default value is `${SP_NUID}`.
redirectMacro {
  enabled = false
  # Optional custom placeholder token (defaults to the literal `${SP_NUID}`)
  placeholder = "[TOKEN]"
}

# Customize response handling for requests for the root path ("/").
# Useful if you need to redirect to web content or privacy policies regarding the use of this collector.
rootResponse {
  enabled = false
  statusCode = 302
  # Optional, defaults to empty map
  headers = {
    Location = "https://127.0.0.1/", X-Custom = "something"
  }
  # Optional, defaults to empty string
  body = "302, redirecting"
}

# Configuration related to CORS preflight requests
cors {
  # The Access-Control-Max-Age response header indicates how long the results of a preflight
  # request can be cached. -1 seconds disables the cache. Chromium max is 10m, Firefox is 24h.
  accessControlMaxAge = 5 seconds
}

# Configuration of prometheus http metrics
prometheusMetrics {
  # If metrics are enabled then all requests will be logged as prometheus metrics
  # and '/metrics' endpoint will return the report about the requests
  enabled = false
  # Custom buckets for http_request_duration_seconds_bucket duration metric
  #durationBucketsInSeconds = [0.1, 3, 10]
}

streams {
  # Events which have successfully been collected will be stored in the good stream/topic
  good = test-raw-good

  # Events that are too big (w.r.t Kinesis 1MB limit) will be stored in the bad stream/topic
  bad = test-raw-bad

# Whether to use the incoming event's ip as the partition key for the good stream/topic
# Note: Nsq does not make use of partition key.
useIpAddressAsPartitionKey = false

# Enable the chosen sink by uncommenting the appropriate configuration
sink {
  # Choose between kinesis, google-pub-sub, kafka, nsq, or stdout.
  # To use stdout, comment or remove everything in the "collector.streams.sink" section except
  # "enabled" which should be set to "stdout".
  enabled = google-pub-sub

  # Or Google Pubsub
  googleProjectId = test-learn-gcp
  ## Minimum, maximum and total backoff periods, in milliseconds
  ## and multiplier between two backoff
  backoffPolicy {
    minBackoff = 1000
    maxBackoff = 5000
    totalBackoff = 10000 # must be >= 10000
    multiplier = 2
  }
}

# Incoming events are stored in a buffer before being sent to Kinesis/Kafka.
# Note: Buffering is not supported by NSQ.
# The buffer is emptied whenever:
# - the number of stored records reaches record-limit or
# - the combined size of the stored records reaches byte-limit or
# - the time in milliseconds since the buffer was last emptied reaches time-limit
buffer {
  byteLimit = 1
  recordLimit = 1 # Not supported by Kafka; will be ignored
  timeLimit = 1
}
}

}

# Akka has a variety of possible configuration options defined at
# http://doc.akka.io/docs/akka/current/scala/general/configuration.html
akka {
 loglevel = OFF # 'OFF' for no logging, 'DEBUG' for all logging.
 loggers = ["akka.event.slf4j.Slf4jLogger"]

# akka-http is the server the Stream collector uses and has configurable options defined at
# http://doc.akka.io/docs/akka-http/current/scala/http/configuration.html
http.server {
  # To obtain the hostname in the collector, the 'remote-address' header
  # should be set. By default, this is disabled, and enabling it
  # adds the 'Remote-Address' header to every request automatically.
  remote-address-header = on

  raw-request-uri-header = on

  # Define the maximum request length (the default is 2048)
  parsing {
    max-uri-length = 32768
    uri-parsing-mode = relaxed
  }
}

# By default setting `collector.ssl` relies on JSSE (Java Secure Socket
# Extension) to enable secure communication.
# To override the default settings set the following section as per
# https://lightbend.github.io/ssl-config/ExampleSSLConfig.html
 ssl-config {
   debug = {
     ssl = true
   }
   keyManager = {
     stores = [
       {type = "PKCS12",classpath = false,path = "/root/certificate/collector.p12",password = "" }
     ]
   }
   loose {
     disableHostnameVerification = true
   }
 }
}
```
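For context, the collector is launched by passing this file on the command line; the jar name below is an assumption based on the Google Pub/Sub sink configured above:

```sh
# Start the Stream Collector with the configuration above
# (jar name is illustrative; use the artifact matching your sink and version)
java -jar snowplow-stream-collector-google-pubsub-0.16.0.jar --config application.conf
```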

Solution

My problem is solved. The reason I could not bind to port 443 turned out to be the collector jar version (0.16.0). The newer collector jar (0.17.0) fixed it.
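Worth noting for anyone hitting a similar symptom: on Linux, binding to ports below 1024 requires elevated privileges, so a collector running as an unprivileged user cannot bind port 443 even with a correct configuration. One common workaround (the java binary path below is an assumption; adjust it to your JVM install):

```sh
# Allow the JVM binary to bind privileged ports (< 1024) without running as root
sudo setcap 'cap_net_bind_service=+ep' /usr/lib/jvm/java-8-openjdk-amd64/bin/java
```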
