* Fix ArgumentError when a proxy is used
* Update app/lib/request.rb
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
Co-authored-by: Eugen Rochko <eugen@zeonfederated.com>
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
Switch to monkey-patching http.rb rather than a runtime extend of each
response, so as to avoid busting the global method cache. A guard is
included that will provide developer feedback in development and test
environments should the monkey patch ever collide.
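A rough sketch of that approach, assuming the patched-in helper is called truncated_body (the method name, environment handling, and error message below are illustrative, not the project's exact code):

    # Sketch: patch HTTP::Response once via a module include, instead of
    # calling `extend` on every response object (which busts the global
    # method cache each time).
    require 'http'

    module ResponseExtensions
      # Illustrative helper; a size-limited body reader is sketched near
      # the end of this log.
      def truncated_body(limit = 1024 * 1024)
        body.readpartial(limit)
      end
    end

    # Guard: outside production, fail loudly if http.rb ever ships a method
    # with the same name, so the collision is caught in development/test.
    if %w(development test).include?(ENV.fetch('RAILS_ENV', 'development')) &&
       HTTP::Response.method_defined?(:truncated_body)
      raise 'HTTP::Response#truncated_body is already defined, update the monkey patch'
    end

    HTTP::Response.include(ResponseExtensions)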
* Move ActivityPub::FetchRemoteAccountService to ActivityPub::FetchRemoteActorService
ActivityPub::FetchRemoteAccountService is kept as a wrapper for when the actor is
specifically required to be an Account
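One plausible shape for that wrapper, sketched under the assumption that it simply narrows the actor service's result (the real classes live in app/services and are more involved):

    # Sketch: the Account-specific service delegates to the actor service
    # and only returns the result when it is actually an Account.
    class ActivityPub::FetchRemoteAccountService < ActivityPub::FetchRemoteActorService
      def call(uri, **options)
        actor = super
        actor if actor.is_a?(Account)
      end
    end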
* Refactor SignatureVerification to allow non-Account actors
* fixup! Move ActivityPub::FetchRemoteAccountService to ActivityPub::FetchRemoteActorService
* Refactor ActivityPub::FetchRemoteKeyService to potentially return non-Account actors
* Refactor inbound ActivityPub payload processing to accept non-Account actors
* Refactor inbound ActivityPub processing to accept activities relayed through non-Account actors
* Refactor how Account key URIs are built
* Refactor Request and drop unused key_id_format parameter
* Rename ActivityPub::Dereferencer `signature_account` to `signature_actor`
* Add a more descriptive PrivateNetworkAddressError exception class
* Remove unnecessary exception class from rescue clause
* Remove unnecessary include of JsonLdHelper
* Give a more neutral error message when there are too many webfinger redirects
* Remove unnecessary guard condition
* Rework how “ActivityPub::FetchRemoteAccountService” handles errors
Add “suppress_errors” keyword argument to avoid raising errors in
ActivityPub::FetchRemoteAccountService#call (default/previous behavior).
* Rework how “ActivityPub::FetchRemoteKeyService” handles errors
Add “suppress_errors” keyword argument to avoid raising errors in
ActivityPub::FetchRemoteKeyService#call (default/previous behavior).
* Fix Webfinger::RedirectError not being a subclass of Webfinger::Error
* Add suppress_errors option to ResolveAccountService
Defaults to true (to preserve previous behavior). If set to false,
errors will be raised instead of caught, allowing the caller to be
informed of what went wrong.
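The suppress_errors pattern shared by these services looks roughly like this (a simplified sketch with a stand-in error class, not the services' actual bodies):

    # Sketch of the keyword-argument pattern: by default errors are
    # swallowed and nil is returned (previous behavior); callers that need
    # to know what went wrong pass suppress_errors: false and rescue.
    class ResolveAccountService
      class ResolutionError < StandardError; end # stand-in error class

      def call(uri, suppress_errors: true)
        resolve(uri)
      rescue ResolutionError
        raise unless suppress_errors

        nil
      end

      private

      def resolve(uri)
        # The actual webfinger/ActivityPub resolution work would happen here.
        raise ResolutionError, "could not resolve #{uri}"
      end
    end

    # A caller that wants to report the failure reason:
    #   ResolveAccountService.new.call('user@example.com', suppress_errors: false)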
* Return more precise error when failing to fetch account signing AP payloads
* Add tests
* Fixes
* Refactor error handling a bit
* Fix various issues
* Add specific error when provided Digest is not 256 bits of base64-encoded data
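For illustration, the check boils down to confirming the header decodes as strict Base64 into exactly 32 bytes (256 bits); a sketch with made-up error messages:

    require 'base64'

    # Sketch: raise a specific, descriptive error when the Digest header is
    # not 256 bits of base64-encoded data (error text is illustrative).
    def validate_sha256_digest!(digest_value)
      decoded = begin
        Base64.strict_decode64(digest_value)
      rescue ArgumentError
        raise 'Invalid Digest value. The provided Digest value is not valid base64.'
      end

      unless decoded.bytesize == 32
        raise "Invalid Digest value. The provided digest is #{decoded.bytesize * 8} bits, expected 256 bits."
      end

      decoded
    end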
* Please CodeClimate
* Improve webfinger error reporting
- update http gem to avoid errors
- update blurhash gem to avoid shared object loading error
- update goldfinger gem so the http gem could be updated
- update json gem to avoid warnings
* Disable incorrect check for hidden services in Socket
Hidden services can only be accessed with an HTTP proxy, in which
case the host seen by the Socket class will be the proxy, not the
target host.
Hidden services are already filtered in `Request#initialize`.
* Use our Socket class to connect to HTTP proxies
Avoid the timeout logic being bypassed
* Add support for IP addresses in Request::Socket
* Refactor a bit, no need to keep the DNS resolver around
* Add request pool to improve delivery performance
Fix #7909
* Ensure connection is closed when exception interrupts execution
* Remove Timeout#timeout from socket connection
* Fix infinite retrial loop on HTTP::ConnectionError
* Close sockets on failure, reduce idle time to 90 seconds
* Add MAX_REQUEST_POOL_SIZE option to limit concurrent connections to the same server
* Use a shared pool size, 512 by default, to stay below open file limit
* Add some tests
* Add more tests
* Reduce MAX_IDLE_TIME from 90 to 30 seconds, reap every 30 seconds
* Use a shared pool that returns preferred connection but re-purposes other ones when needed
* Fix wrong connection being returned on subsequent calls within the same thread
* Reduce mutex calls on flushes from 2 to 1 and add test for reaping
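A very rough sketch of the pooling idea (names and locking are simplified; the real pool is shared across sites, capped at 512 connections by default, reaps entries idle for more than 30 seconds, and can re-purpose idle connections from other sites):

    require 'monitor'

    # Sketch: per-site connection reuse with reaping of idle entries.
    class RequestPoolSketch
      MAX_IDLE_TIME = 30 # seconds a connection may sit unused before reaping

      Entry = Struct.new(:connection, :last_used_at)

      def initialize(&connection_factory)
        @factory = connection_factory # how to open a connection for a site
        @pools   = Hash.new { |hash, site| hash[site] = [] }
        @lock    = Monitor.new
      end

      # Check out (or open) a connection for the site, yield it, return it.
      def with(site)
        entry = @lock.synchronize { @pools[site].pop } || Entry.new(@factory.call(site), Time.now)

        begin
          yield entry.connection
        rescue
          # On failure, close the connection instead of returning it to the pool.
          entry.connection.close if entry.connection.respond_to?(:close)
          entry = nil
          raise
        ensure
          if entry
            entry.last_used_at = Time.now
            @lock.synchronize { @pools[site] << entry }
          end
        end
      end

      # Reap connections that have been idle longer than MAX_IDLE_TIME.
      def flush!
        @lock.synchronize do
          @pools.each_value do |entries|
            entries.reject! do |entry|
              stale = Time.now - entry.last_used_at > MAX_IDLE_TIME
              entry.connection.close if stale && entry.connection.respond_to?(:close)
              stale
            end
          end
        end
      end
    end

    # Usage sketch:
    #   pool = RequestPoolSketch.new { |site| HTTP.persistent(site) }
    #   pool.with('https://example.com') { |http| http.get('/inbox').flush }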
* Fix connect timeout not being enforced
The loop was catching the timeout exception that should have stopped execution, so the attempt on the next IP was no longer inside a timed block, which led to requests taking much longer than 10 seconds.
* Use timeout on each IP attempt, but limit to 2 attempts
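A hedged sketch of that connect strategy, with each address getting its own timeout and at most two addresses tried (constants and error handling are illustrative; the real code uses its own Socket class with a non-blocking connect):

    require 'socket'

    CONNECT_TIMEOUT      = 10 # seconds, applied to each attempt individually
    MAX_ADDRESS_ATTEMPTS = 2  # do not walk the whole address list on failure

    # Try at most two resolved addresses, timing each connect attempt on
    # its own instead of wrapping the whole loop in a single timeout.
    def open_socket(port, addresses)
      last_error = nil

      addresses.take(MAX_ADDRESS_ATTEMPTS).each do |address|
        begin
          return Socket.tcp(address.to_s, port, connect_timeout: CONNECT_TIMEOUT)
        rescue Errno::ETIMEDOUT, Errno::ECONNREFUSED, Errno::EHOSTUNREACH, Errno::ENETUNREACH => e
          last_error = e
        end
      end

      raise last_error || Errno::EHOSTUNREACH
    end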
* Fix code style issue
* Do not break Request#perform if no block given
* Update method stub in spec for Request
* Move timeout inside the begin/rescue block
* Use Resolv::DNS with timeout of 1 to get IP addresses
* Update Request spec to stub Resolv::DNS instead of Addrinfo
* Fix Resolv::DNS stubs in Request spec
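For context, resolving with a 1-second DNS timeout per query looks roughly like this (a sketch; the surrounding caching and fallback logic is omitted):

    require 'resolv'

    # Resolve the host with a hard 1-second timeout per DNS query, so a
    # slow resolver cannot stall the whole outgoing request.
    def resolve_addresses(host)
      Resolv::DNS.open do |dns|
        dns.timeouts = 1
        dns.getaddresses(host).map(&:to_s)
      end
    end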
* If an Update is signed with known key, skip re-following procedure
Because it means the remote actor did *not* lose their database
* Add CLI method for rotating keys
bin/tootctl accounts rotate [USERNAME]
Generates a new RSA key per account and sends out an Update activity
signed with the old key.
* Key rotation: Space out Update fan-outs every 5 minutes per 1000 accounts
* Skip suspended accounts in key rotation
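A sketch of one rotation step, assuming a simplified fan-out worker (UpdateDistributionWorker and its sign_with argument are stand-ins here, not necessarily the exact internals):

    require 'openssl'

    # Sketch: rotate one account's RSA keypair and schedule the Update
    # fan-out, spacing batches of 1000 accounts five minutes apart.
    def rotate_key!(account, batch_index)
      return if account.suspended? # suspended accounts are skipped entirely

      old_key = account.private_key
      keypair = OpenSSL::PKey::RSA.new(2048)

      account.update!(private_key: keypair.to_pem, public_key: keypair.public_key.to_pem)

      delay = (batch_index / 1000) * 5 * 60 # seconds; one extra 5-minute step per 1000 accounts
      UpdateDistributionWorker.perform_in(delay, account.id, sign_with: old_key)
    end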
When Mastodon reaches a hidden service through a transparent proxy, the private-address check has to be skipped, because `.onion` hostnames resolve to private addresses.
I previously used `HIDDEN_SERVICE_VIA_TRANSPARENT_PROXY` to provide that behavior. However, it turned out to be redundant, since it is only ever used together with `ALLOW_ACCESS_TO_HIDDEN_SERVICE`. Its setting is therefore folded into `ALLOW_ACCESS_TO_HIDDEN_SERVICE`.
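Conceptually, the consolidated behavior amounts to something like the following (a sketch with an abbreviated private-range list; the real guard lives in the request/socket code, and the error class matches the one introduced earlier in this log):

    require 'ipaddr'

    class PrivateNetworkAddressError < StandardError; end # see the error class added above

    # Abbreviated list of private/loopback ranges for illustration.
    PRIVATE_RANGES = %w(10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 127.0.0.0/8 fc00::/7 ::1/128)
                     .map { |range| IPAddr.new(range) }

    # With ALLOW_ACCESS_TO_HIDDEN_SERVICE set, skip the private-address
    # check for .onion hosts, since the transparent proxy resolves them to
    # private addresses by design.
    def check_private_address!(ip, host)
      return if ENV['ALLOW_ACCESS_TO_HIDDEN_SERVICE'] == 'true' && host.end_with?('.onion')

      raise PrivateNetworkAddressError, host if PRIVATE_RANGES.any? { |range| range.include?(ip) }
    end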
* Add support for HTTP client proxy
* Add access control for darknet
Suppress errors when accessing the darknet via a transparent proxy
* Fix the code issues that were pointed out
* Lint
* Fix an omission + lint
* any? -> include?
* Change detection method to regexp to avoid test failures
The `to_s` method of HTTP::Response blocks until the entire content has been received, no matter how large it is. That can waste time downloading unacceptably large files and consume excessive memory and disk space in the process. This change fixes the inefficiency by checking the response length while the body is being received.
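In sketch form, checking the size while receiving looks like this (the limit and error class are illustrative):

    class ResponseLengthError < StandardError; end # illustrative error class

    # Accumulate the body chunk by chunk and abort as soon as the declared
    # or observed size exceeds the limit, instead of buffering the whole
    # response with to_s.
    def read_body_with_limit(response, limit = 1024 * 1024)
      raise ResponseLengthError if response.content_length && response.content_length > limit

      String.new.tap do |buffer|
        response.body.each do |chunk|
          buffer << chunk
          raise ResponseLengthError if buffer.bytesize > limit
        end
      end
    end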
HTTP connections must be explicitly closed in many cases; letting the `perform` method close the connection removes redundant code from its callers and prevents them from forgetting to close connections.
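A minimal sketch of that shape for `perform`, with made-up timeout values and a plain http.rb client standing in for the real request plumbing:

    require 'http'

    # Sketch: perform owns the connection lifecycle. The client is closed
    # in the ensure clause, so callers cannot leak connections or forget to
    # close them, even when an exception interrupts execution.
    def perform(url)
      client   = HTTP.timeout(connect: 10, read: 10, write: 10)
      response = client.get(url)

      # Yield when a block is given; otherwise flush the body so it is safe
      # to close the connection (keeps the no-block case working, as noted above).
      block_given? ? yield(response) : response.flush
    ensure
      client&.close
    end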