HDDS-10889. Remove certificate revocation related code. #6725
dombizita merged 3 commits into apache:master
Conversation
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMCertStore.java
...ds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMSecurityProtocolServer.java
Thank you for the review @Galsza! If you spot any other issue, please feel free to note it; I have addressed the two comments.
FYI: This feature is tracked in https://issues.apache.org/jira/browse/HDDS-7333
conf.set(scmHttpAddrKey, localhostWithFreePort());
conf.set(scmHttpsAddrKey, localhostWithFreePort());
conf.set(scmSecurityAddrKey, localhostWithFreePort());
conf.set("ozone.scm.update.service.port", "0");
Is this related to this PR?
Hi @kerneltime,
yes, it is related. The CRL distribution mechanism uses the SCMUpdateService (in fact, CRL distribution is the only thing that uses it), and it has a config object, UpdateServiceConfig, which defines this config.
Since this PR removes the update service, which nothing else uses, it is reasonable to remove the config option and its usages as well.
LGTM +1
...op-hdds/container-service/src/test/java/org/apache/hadoop/ozone/TestHddsDatanodeService.java
...rvice/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/protocol/SCMSecurityProtocol.java
...rc/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/DefaultCAServer.java
...c/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateStore.java
...ds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMSecurityProtocolServer.java
@kerneltime @dombizita @Galsza
Thanks for working on this @fapifta! Thanks for the reviews @adoroszlai, @kerneltime and @Galsza!
What changes were proposed in this pull request?
In this PR, I propose to remove and stop maintaining Certificate Revocation related code in its current form.
The main reason for this move is that the feature is unfinished: it does not currently work, several pieces are still missing, and it is unreachable for any regular user.
The secondary reason is that simpler mechanisms already exist in the SSL layer for CRL distribution: a CRL distribution endpoint can be encoded into the certificates, and CRL checks can be pushed down to the SSL layer during communication.
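As a sketch of what pushing revocation checking down to the platform looks like (standard JDK APIs, not Ozone code), a PKIXRevocationChecker can be configured to fetch CRLs from the distribution points embedded in the certificates themselves:

```java
import java.security.cert.CertPathValidator;
import java.security.cert.PKIXRevocationChecker;
import java.security.cert.PKIXRevocationChecker.Option;
import java.util.EnumSet;

public class CrlCheckSketch {
  public static void main(String[] args) throws Exception {
    CertPathValidator validator = CertPathValidator.getInstance("PKIX");

    // The checker consults the CRL Distribution Points extension of each
    // certificate in the chain; no application-level CRL plumbing is needed.
    PKIXRevocationChecker checker =
        (PKIXRevocationChecker) validator.getRevocationChecker();
    checker.setOptions(EnumSet.of(Option.PREFER_CRLS, Option.NO_FALLBACK));

    // In a real validation, the checker would be registered on the
    // PKIXParameters via params.addCertPathChecker(checker) before
    // calling validator.validate(certPath, params).
    boolean prefersCrls = checker.getOptions().contains(Option.PREFER_CRLS);
    System.out.println(prefersCrls);
  }
}
```

This keeps revocation entirely inside the JDK's PKIX machinery, which is the kind of push-down the paragraph above refers to.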
Moreover, the current code still uses these certificates to verify the signature of delegation tokens, but we plan to change that to use symmetric keys, similarly to block and container tokens. As of now, the JCE code does not check the CRL distribution endpoint when it verifies a signature, which is exactly the situation we have today, so functionally the system does not degrade with this move.
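As a rough sketch of the symmetric alternative mentioned above (not Ozone's actual token code; the key and token bytes here are made up for illustration), verifying a token with an HMAC requires only a shared secret, so certificates and CRLs drop out of the picture entirely:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HmacTokenSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical shared secret and token identifier, for illustration only.
    byte[] secret = "a-shared-scm-secret".getBytes(StandardCharsets.UTF_8);
    byte[] tokenIdentifier = "owner=scm,renewer=om".getBytes(StandardCharsets.UTF_8);

    // Issue: the token issuer computes an HMAC over the token identifier.
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(secret, "HmacSHA256"));
    byte[] password = mac.doFinal(tokenIdentifier);

    // Verify: any party holding the same secret recomputes and compares in
    // constant time; no certificate (and hence no CRL check) is involved.
    mac.init(new SecretKeySpec(secret, "HmacSHA256"));
    boolean valid = MessageDigest.isEqual(password, mac.doFinal(tokenIdentifier));
    System.out.println(valid);
  }
}
```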
One more reason is the burden of maintaining this code in the compliance-mode related development, as it also contains some hard-to-replace BouncyCastle usages.
With all that said, since we still use protoc 2.5 and have no safe way to reserve fields that were once declared and prevent accidental identifier re-use, I kept the relevant declarations in our proto files, with some comments.
Ideally, in a protoc 3 world we could use the reserved keyword for unused fields and simply remove them from messages, or remove all affected (and thereby emptied) messages entirely. As of now, however, we compile quite a few things with protoc 2.5, because the RPC libraries in the Hadoop 2 filesystem lib cannot use protoc 3 compiled classes without the proto 3 runtime, which we cannot reliably force Hadoop 2 to load.
After we migrate to proto 3 we can properly remove the proto fields; this patch therefore does not change the proto definitions, it only marks them.
If the resulting code meets an old client that sends extra field data, that data is simply not processed anywhere: the protoc parsing will either read those fields without our code using them (with protoc 2.5), or treat them as unknown fields (with the reserved tags provided by protoc 3, as suggested by the comments).
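For illustration, once on protoc 3 the kept field numbers could be retired with the reserved keyword, roughly like this (the message and field names here are hypothetical, not the actual Ozone proto definitions):

```proto
syntax = "proto3";

// Hypothetical sketch of retiring removed CRL-related fields in proto3.
// Reserving the tag numbers and names prevents accidental re-use, and an
// old client that still sends these fields is parsed safely: the data
// lands in the message's unknown-field set.
message SCMSecurityResponse {
  reserved 5, 6;                     // former CRL-related field numbers
  reserved "crlId", "revokedCerts";  // former CRL-related field names
  string certificate = 1;
}
```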
What is the link to the Apache JIRA?
https://issues.apache.org/jira/browse/HDDS-10889
How was this patch tested?
No new logic was added, and the removed code is dead code; current tests should cover the change.