
Add trailing slash for posts

How to solve: adding a trailing slash for posts

I am using the following .htaccess code to add a trailing slash to every URL except the homepage.

## Base Redirects ##

# Turn on Rewrite Engine
RewriteEngine On

# Include trailing slash on non-filepath urls
RewriteCond %{REQUEST_URI} !(.+)/$
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule (.*)$ https://hamilekadin.net/$1/ [R=301,L]

# Remove trailing slash from directory
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^(.+)/$ https://hamilekadin.net/$1 [R=301,L]

# Force HTTPS and remove WWW
RewriteCond %{HTTP_HOST} ^www\.(.*)$ [OR,NC]
RewriteCond %{HTTPS} off  
RewriteRule ^(.*)$ https://hamilekadin.net/$1 [R=301,L]

I want non-www URLs, the https protocol, and a trailing slash at the end of post and page URLs.

With this .htaccess I get 404 errors on categories, pages, and posts.

Also, my permalink structure is: /%postname%/

Solution

Here is the code for the trailing slash.
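A minimal sketch of those rules, assuming a standard WordPress setup (the domain is kept from the question; substitute your own):

# Add a trailing slash to any URL that is not an existing file
# and does not already end in a slash
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_URI} !(.+)/$
RewriteRule ^(.*[^/])$ https://hamilekadin.net/$1/ [L,R=301]

Keep this block above the standard WordPress rewrite block so the redirect fires before WordPress routes the request to index.php.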


For non-www URLs, there are two options.

Option 1:

You need to run a SQL query to remove www from the URLs.
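For WordPress this usually means rewriting the URLs stored in the database. A rough sketch, assuming the default wp_ table prefix and the domain from the question (back up the database before running it):

-- Point the site and home URLs at the bare domain
UPDATE wp_options
SET option_value = REPLACE(option_value, '://www.hamilekadin.net', '://hamilekadin.net')
WHERE option_name IN ('siteurl', 'home');

-- Fix the www URLs baked into post GUIDs and content
UPDATE wp_posts
SET guid = REPLACE(guid, '://www.hamilekadin.net', '://hamilekadin.net');

UPDATE wp_posts
SET post_content = REPLACE(post_content, '://www.hamilekadin.net', '://hamilekadin.net');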

Option 2:

Add the following code to your .htaccess:

# Redirect any www host to the bare domain
RewriteCond %{HTTP_HOST} ^www\. [NC]
RewriteRule ^(.*)$ http://yourdomainname.com/$1 [L,R=301]
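If you also force HTTPS, point this redirect straight at the https:// bare domain and keep it above the trailing-slash rules, so visitors get a single redirect instead of a chain.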

In my opinion, the best way to remove www is Option 1; you only need to run the query.
