Name |
Description |
CVE-2023-46674 |
An issue was identified that allowed unsafe deserialization of Java objects from Hadoop or Spark configuration properties that could have been modified by authenticated users. Elastic would like to thank Yakov Shafranovich of Amazon Web Services for reporting this issue.
|
CVE-2023-38188 |
Azure Apache Hadoop Spoofing Vulnerability
|
CVE-2023-26031 |
Relative library resolution in the Linux container-executor binary in Apache Hadoop 3.3.1-3.3.4 allows a local user to gain root privileges. If the YARN cluster is accepting work from remote (authenticated) users, this MAY permit remote users to gain root privileges.

Hadoop 3.3.0 updated "YARN Secure Containers" (https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/SecureContainer.html) to add a feature for executing user-submitted applications in isolated Linux containers. The native binary HADOOP_HOME/bin/container-executor is used to launch these containers; it must be owned by root and have the suid bit set in order for the YARN processes to run the containers as the specific users submitting the jobs.

The patch YARN-10495, "Make the rpath of container-executor configurable" (https://issues.apache.org/jira/browse/YARN-10495), modified the library loading path for .so files from "$ORIGIN/" to "$ORIGIN/:../lib/native/". This is a path through which libcrypto.so is located. It is therefore possible for a user with reduced privileges to install a malicious libcrypto library into a path to which they have write access, invoke the container-executor command, and have their modified library executed as root. If the YARN cluster is accepting work from remote (authenticated) users, and these users' submitted jobs are executed on the physical host rather than in a container, then the CVE permits remote users to gain root privileges.

The fix for the vulnerability is to revert the change, which is done in YARN-11441, "Revert YARN-10495" (https://issues.apache.org/jira/browse/YARN-11441). This patch is in hadoop-3.3.5.

To determine whether a version of container-executor is vulnerable, use the readelf command. If the RUNPATH or RPATH value contains the relative path "../lib/native/" then it is at risk:

    $ readelf -d container-executor | grep 'RUNPATH\|RPATH'
    0x000000000000001d (RUNPATH)  Library runpath: [$ORIGIN/:../lib/native/]

If it does not, then it is safe:

    $ readelf -d container-executor | grep 'RUNPATH\|RPATH'
    0x000000000000001d (RUNPATH)  Library runpath: [$ORIGIN/]

For an at-risk version of container-executor to enable privilege escalation, the owner must be root and the suid bit must be set:

    $ ls -laF /opt/hadoop/bin/container-executor
    ---Sr-s---. 1 root hadoop 802968 May 9 20:21 /opt/hadoop/bin/container-executor

A safe installation lacks the suid bit; ideally it is also not owned by root:

    $ ls -laF /opt/hadoop/bin/container-executor
    -rwxr-xr-x. 1 yarn hadoop 802968 May 9 20:21 /opt/hadoop/bin/container-executor

This configuration does not support YARN Secure Containers, but all other Hadoop services, including YARN job execution outside secure containers, continue to work.
|
CVE-2023-2358 |
Hitachi Vantara Pentaho Business Analytics Server prior to versions 9.5.0.0 and 9.3.0.4, including 8.3.x.x, saves passwords of the Hadoop Copy Files step in plaintext.
|
CVE-2022-39379 |
Fluentd collects events from various data sources and writes them to files, RDBMS, NoSQL, IaaS, SaaS, Hadoop and so on. A remote code execution (RCE) vulnerability in non-default configurations of Fluentd allows unauthenticated attackers to execute arbitrary code via specially crafted JSON payloads. Fluentd setups are only affected if the environment variable `FLUENT_OJ_OPTION_MODE` is explicitly set to `object`. Please note: The option FLUENT_OJ_OPTION_MODE was introduced in Fluentd version 1.13.2. Earlier versions of Fluentd are not affected by this vulnerability. This issue was patched in version 1.15.3. As a workaround do not use `FLUENT_OJ_OPTION_MODE=object`.
|
CVE-2022-26612 |
In Apache Hadoop, the unTar function uses the unTarUsingJava function on Windows and the built-in tar utility on Unix and other OSes. As a result, a TAR entry may create a symlink under the expected extraction directory which points to an external directory. A subsequent TAR entry may then extract an arbitrary file into the external directory using the symlink name. On Unix this would be caught by the targetDirPath check because of the getCanonicalPath call; however, on Windows, getCanonicalPath doesn't resolve symbolic links, which bypasses the check. unpackEntries during TAR extraction follows symbolic links, which allows writing outside the expected base directory on Windows. This was addressed in Apache Hadoop 3.2.3.
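A minimal sketch of the containment check described above (names such as ensureInside and baseDir are illustrative, not Hadoop's actual code) shows why the bypass is Windows-specific:

    import java.io.File;
    import java.io.IOException;

    public class ExtractionCheck {
        // Reject an extraction target that resolves outside the base directory.
        static void ensureInside(File baseDir, File entryDest) throws IOException {
            String base = baseDir.getCanonicalPath() + File.separator;
            // On Unix, getCanonicalPath() resolves symlinks, so a destination
            // reached through a malicious link fails this prefix test. On
            // Windows it does not resolve symlinks, which is the bypass.
            if (!entryDest.getCanonicalPath().startsWith(base)) {
                throw new IOException("TAR entry escapes extraction directory: " + entryDest);
            }
        }
    }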
|
CVE-2022-25168 |
Apache Hadoop's FileUtil.unTar(File, File) API does not escape the input file name before passing it to the shell, so an attacker can inject arbitrary commands. In Hadoop 3.3 this is only used in InMemoryAliasMap.completeBootstrapTransfer, which is only ever run by a local user. It has been used in Hadoop 2.x for YARN localization, which does enable remote code execution. It is used in Apache Spark via the SQL command ADD ARCHIVE; as the ADD ARCHIVE command adds new binaries to the classpath, being able to execute shell scripts does not confer new permissions on the caller. SPARK-38305 ("Check existence of file before untarring/zipping"), which is included in Spark 3.3.0, 3.1.4, and 3.2.2, prevents shell commands from being executed regardless of which version of the Hadoop libraries is in use. Users should upgrade to Apache Hadoop 2.10.2, 3.2.4, 3.3.3 or later (including HADOOP-18136).
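The bug class is easiest to see in a sketch (not Hadoop's actual code; names are illustrative): concatenating an untrusted file name into a shell command lets metacharacters in the name inject extra commands, whereas passing the name as a discrete argv element avoids the shell entirely.

    import java.io.IOException;

    public class UntarSketch {
        // Unsafe: the file name reaches a shell unescaped, so a name such as
        // "x.tar; touch /tmp/pwned" runs the injected command.
        static void untarUnsafe(String name) throws IOException {
            Runtime.getRuntime().exec(new String[] {"bash", "-c", "tar -xf " + name});
        }

        // Safer: no shell involved; the name is a single argument to tar itself.
        static void untarSafer(String name) throws IOException {
            new ProcessBuilder("tar", "-xf", name).start();
        }
    }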
|
CVE-2021-37404 |
There is a potential heap buffer overflow in Apache Hadoop libhdfs native code. Opening a file path provided by user without validation may result in a denial of service or arbitrary code execution. Users should upgrade to Apache Hadoop 2.10.2, 3.2.3, 3.3.2 or higher.
|
CVE-2021-36151 |
In Apache Gobblin, the Hadoop token is written to a temp file that is visible to all local users on Unix-like systems. This affects versions <= 0.15.0. Users should update to version 0.16.0 which addresses this issue.
|
CVE-2021-33036 |
In Apache Hadoop 2.2.0 to 2.10.1, 3.0.0-alpha1 to 3.1.4, 3.2.0 to 3.2.2, and 3.3.0 to 3.3.1, a user who can escalate to yarn user can possibly run arbitrary commands as root user. Users should upgrade to Apache Hadoop 2.10.2, 3.2.3, 3.3.2 or higher.
|
CVE-2021-25642 |
ZKConfigurationStore which is optionally used by CapacityScheduler of Apache Hadoop YARN deserializes data obtained from ZooKeeper without validation. An attacker having access to ZooKeeper can run arbitrary commands as YARN user by exploiting this. Users should upgrade to Apache Hadoop 2.10.2, 3.2.4, 3.3.4 or later (containing YARN-11126) if ZKConfigurationStore is used.
|
CVE-2020-9492 |
In Apache Hadoop 3.2.0 to 3.2.1, 3.0.0-alpha1 to 3.1.3, and 2.0.0-alpha to 2.10.0, WebHDFS client might send SPNEGO authorization header to remote URL without proper verification.
|
CVE-2020-8908 |
A temp directory creation vulnerability exists in all versions of Guava, allowing an attacker with access to the machine to potentially access data in a temporary directory created by the Guava API com.google.common.io.Files.createTempDir(). By default, on unix-like systems, the created directory is world-readable (readable by an attacker with access to the system). The method in question has been marked @Deprecated in versions 30.0 and later and should not be used. For Android developers, we recommend choosing a temporary directory API provided by Android, such as context.getCacheDir(). For other Java developers, we recommend migrating to the Java 7 API java.nio.file.Files.createTempDirectory() which explicitly configures permissions of 700, or configuring the Java runtime's java.io.tmpdir system property to point to a location whose permissions are appropriately configured.
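A minimal sketch of the migration recommended above, replacing the deprecated Guava call with the java.nio.file API:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class TempDirMigration {
        public static void main(String[] args) throws IOException {
            // Instead of com.google.common.io.Files.createTempDir(): on POSIX
            // systems this creates the directory with owner-only (700)
            // permissions, so other local users cannot read its contents.
            Path dir = Files.createTempDirectory("app-");
            System.out.println("created " + dir);
        }
    }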
|
CVE-2019-20444 |
HttpObjectDecoder.java in Netty before 4.1.44 allows an HTTP header that lacks a colon, which might be interpreted as a separate header with an incorrect syntax, or might be interpreted as an "invalid fold."
|
CVE-2019-19354 |
An insecure modification vulnerability in the /etc/passwd file was found in the operator-framework/hadoop as shipped in Red Hat Openshift 4. An attacker with access to the container could use this flaw to modify /etc/passwd and escalate their privileges.
|
CVE-2019-16869 |
Netty before 4.1.42.Final mishandles whitespace before the colon in HTTP headers (such as a "Transfer-Encoding : chunked" line), which leads to HTTP request smuggling.
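A hedged illustration of the malformed header (host and request body are placeholders): a front end that ignores the whitespace-bearing Transfer-Encoding line and falls back to Content-Length will disagree with a pre-4.1.42.Final Netty back end that honors it about where the request ends, and that disagreement is what enables smuggling.

    public class SmugglingIllustration {
        // Note the space before the colon in the Transfer-Encoding line.
        static final String RAW_REQUEST =
                "POST / HTTP/1.1\r\n"
                + "Host: example.com\r\n"
                + "Transfer-Encoding : chunked\r\n"
                + "Content-Length: 5\r\n"
                + "\r\n"
                + "0\r\n\r\n";
    }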
|
CVE-2019-10172 |
A flaw was found in the org.codehaus.jackson:jackson-mapper-asl:1.9.x libraries. An XML external entity (XXE) vulnerability similar to CVE-2016-3720 also affects the Codehaus jackson-mapper-asl libraries, but in different classes.
|
CVE-2019-0201 |
An issue is present in Apache ZooKeeper 1.0.0 to 3.4.13 and 3.5.0-alpha to 3.5.4-beta. ZooKeeper's getACL() command doesn't check any permission when it retrieves the ACLs of the requested node, and returns all information contained in the ACL Id field as a plaintext string. DigestAuthenticationProvider overloads the Id field with the hash value that is used for user authentication. As a consequence, if Digest Authentication is in use, the unsalted hash value will be disclosed by a getACL() request to unauthenticated or unprivileged users.
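A sketch of the disclosure using the standard ZooKeeper client API (the connection string and znode path are placeholders): any client able to issue getACL() receives the digest Id field, which embeds the unsalted credential hash.

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.ACL;
    import org.apache.zookeeper.data.Stat;

    public class GetAclProbe {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("zk-host:2181", 30000, event -> {});
            List<ACL> acls = zk.getACL("/some/znode", new Stat());
            for (ACL acl : acls) {
                // With digest auth, prints e.g. "digest -> alice:<base64 SHA-1>",
                // exposing the unsalted hash to an unprivileged caller.
                System.out.println(acl.getId().getScheme() + " -> " + acl.getId().getId());
            }
            zk.close();
        }
    }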
|
CVE-2018-8088 |
org.slf4j.ext.EventData in the slf4j-ext module in QOS.CH SLF4J before 1.8.0-beta2 allows remote attackers to bypass intended access restrictions via crafted data. The issue has been fixed in SLF4J versions 1.7.26 and later, and in the 2.0.x series.
|
CVE-2018-8042 |
In Apache Ambari versions 2.5.0 to 2.6.2, passwords for Hadoop credential stores are exposed in Ambari Agent informational log messages when the credential store feature is enabled for eligible services (for example, Hive and Oozie).
|
CVE-2018-8029 |
In Apache Hadoop versions 3.0.0-alpha1 to 3.1.0, 2.9.0 to 2.9.1, and 2.2.0 to 2.8.4, a user who can escalate to yarn user can possibly run arbitrary commands as root user.
|
CVE-2018-8009 |
Apache Hadoop versions 3.1.0, 3.0.0-alpha to 3.0.2, 2.9.0 to 2.9.1, 2.8.0 to 2.8.4, 2.0.0-alpha to 2.7.6, and 0.23.0 to 0.23.11 are exploitable via the zip slip vulnerability in places that accept a zip file.
|
CVE-2018-6185 |
In Cloudera Navigator Key Trustee KMS 5.12 and 5.13, incorrect default ACL values allow remote access to purge and undelete API calls on encryption zone keys. The Navigator Key Trustee KMS includes 2 API calls in addition to those in Apache Hadoop KMS: purge and undelete. The KMS ACL values for these commands are keytrustee.kms.acl.PURGE and keytrustee.kms.acl.UNDELETE respectively. The default value for the ACLs in Key Trustee KMS 5.12.0 and 5.13.0 is "*" which allows anyone with knowledge of the name of an encryption zone key and network access to the Key Trustee KMS to make those calls against known encryption zone keys. This can result in the recovery of a previously deleted, but not purged, key (undelete) or the deletion of a key in active use (purge) resulting in loss of access to encrypted HDFS data.
|
CVE-2018-1296 |
In Apache Hadoop 3.0.0-alpha1 to 3.0.0, 2.9.0, 2.8.0 to 2.8.3, and 2.5.0 to 2.7.5, HDFS exposes extended attribute key/value pairs during listXAttrs, verifying only path-level search access to the directory rather than path-level read permission to the referent.
|
CVE-2018-11768 |
In Apache Hadoop 3.1.0 to 3.1.1, 3.0.0-alpha1 to 3.0.3, 2.9.0 to 2.9.1, and 2.0.0-alpha to 2.8.4, the user/group information can be corrupted across storing in fsimage and reading back from fsimage.
|
CVE-2018-11767 |
In Apache Hadoop 2.9.0 to 2.9.1, 2.8.3 to 2.8.4, and 2.7.5 to 2.7.6, the KMS may incorrectly block users or grant them access if the system uses a non-default group mapping mechanism.
|
CVE-2018-11766 |
In Apache Hadoop 2.7.4 to 2.7.6, the security fix for CVE-2016-6811 is incomplete. A user who can escalate to yarn user can possibly run arbitrary commands as root user.
|
CVE-2018-11765 |
In Apache Hadoop versions 3.0.0-alpha2 to 3.0.0, 2.9.0 to 2.9.2, 2.8.0 to 2.8.5, any users can access some servlets without authentication when Kerberos authentication is enabled and SPNEGO through HTTP is not enabled.
|
CVE-2018-11764 |
Web endpoint authentication check is broken in Apache Hadoop 3.0.0-alpha4, 3.0.0-beta1, and 3.0.0. Authenticated users may impersonate any user even if no proxy user is configured.
|
CVE-2018-10237 |
Unbounded memory allocation in Google Guava 11.0 through 24.x before 24.1.1 allows remote attackers to conduct denial of service attacks against servers that depend on this library and deserialize attacker-provided data, because the AtomicDoubleArray class (when serialized with Java serialization) and the CompoundOrdering class (when serialized with GWT serialization) perform eager allocation without appropriate checks on what a client has sent and whether the data size is reasonable.
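Beyond upgrading Guava, a defense-in-depth sketch using the JEP 290 serialization filter (Java 9+; the limits shown are illustrative, not prescriptive) rejects oversized arrays and payloads before objects are materialized:

    import java.io.ByteArrayInputStream;
    import java.io.ObjectInputFilter;
    import java.io.ObjectInputStream;

    public class FilteredDeserialization {
        // Deserialize untrusted bytes under caps on array size, graph depth,
        // and total stream bytes, so eager allocations like the one in
        // AtomicDoubleArray are refused up front.
        static Object readWithLimits(byte[] untrusted) throws Exception {
            ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(untrusted));
            in.setObjectInputFilter(
                    ObjectInputFilter.Config.createFilter("maxarray=100000;maxdepth=20;maxbytes=1048576"));
            return in.readObject();
        }
    }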
|
CVE-2017-7669 |
In Apache Hadoop 2.8.0, 3.0.0-alpha1, and 3.0.0-alpha2, the LinuxContainerExecutor runs docker commands as root with insufficient input validation. When the docker feature is enabled, authenticated users can run commands as root.
|
CVE-2017-7565 |
Splunk Hadoop Connect App has a path traversal vulnerability that allows remote authenticated users to execute arbitrary code, aka ERP-2041.
|
CVE-2017-3166 |
In Apache Hadoop versions 2.6.1 to 2.6.5, 2.7.0 to 2.7.3, and 3.0.0-alpha1, if a file in an encryption zone with access permissions that make it world readable is localized via YARN's localization mechanism, that file will be stored in a world-readable location and can be shared freely with any application that requests to localize that file.
|
CVE-2017-3162 |
HDFS clients interact with a servlet on the DataNode to browse the HDFS namespace. The NameNode is provided as a query parameter that is not validated in Apache Hadoop before 2.7.0.
|
CVE-2017-3161 |
The HDFS web UI in Apache Hadoop before 2.7.0 is vulnerable to a cross-site scripting (XSS) attack through an unescaped query parameter.
|
CVE-2017-18640 |
The Alias feature in SnakeYAML before 1.26 allows entity expansion during a load operation, a related issue to CVE-2003-1564.
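For SnakeYAML 1.26 and later, the fix is a cap on alias expansion during load; a minimal sketch, assuming the 1.26-era LoaderOptions API (the cap of 50 matches the library default and is shown only for explicitness):

    import org.yaml.snakeyaml.LoaderOptions;
    import org.yaml.snakeyaml.Yaml;

    public class SafeYamlLoad {
        static Object load(String untrustedYaml) {
            LoaderOptions opts = new LoaderOptions();
            // 1.26 introduced this limit to stop billion-laughs-style
            // alias expansion while the document is being loaded.
            opts.setMaxAliasesForCollections(50);
            return new Yaml(opts).load(untrustedYaml);
        }
    }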
|
CVE-2017-15718 |
The YARN NodeManager in Apache Hadoop 2.7.3 and 2.7.4 can leak the password for the credential store provider used by the NodeManager to YARN applications.
|
CVE-2017-15713 |
Vulnerability in Apache Hadoop 0.23.x, 2.x before 2.7.5, 2.8.x before 2.8.3, and 3.0.0-alpha through 3.0.0-beta1 allows a cluster user to expose private files owned by the user running the MapReduce job history server process. The malicious user can construct a configuration file containing XML directives that reference sensitive files on the MapReduce job history server host.
|
CVE-2016-6811 |
In Apache Hadoop 2.x before 2.7.4, a user who can escalate to yarn user can possibly run arbitrary commands as root user.
|
CVE-2016-5393 |
In Apache Hadoop 2.6.x before 2.6.5 and 2.7.x before 2.7.3, a remote user who can authenticate with the HDFS NameNode can possibly run arbitrary commands with the same privileges as the HDFS service.
|
CVE-2016-5001 |
This is an information disclosure vulnerability in Apache Hadoop before 2.6.4 and 2.7.x before 2.7.2 in the short-circuit reads feature of HDFS. A local user on an HDFS DataNode may be able to craft a block token that grants unauthorized read access to random files by guessing certain fields in the token.
|
CVE-2016-3086 |
The YARN NodeManager in Apache Hadoop 2.6.x before 2.6.5 and 2.7.x before 2.7.3 can leak the password for the credential store provider used by the NodeManager to YARN applications.
|
CVE-2015-7430 |
The Hadoop connector 1.1.1, 2.4, 2.5, and 2.7.0-0 before 2.7.0-3 for IBM Spectrum Scale and General Parallel File System (GPFS) allows local users to read or write to arbitrary GPFS data via unspecified vectors.
|
CVE-2015-1889 |
The Big SQL component in IBM InfoSphere BigInsights 3.0 through 3.0.0.2 allows remote authenticated users to bypass intended HDFS data-access restrictions via (1) a crafted CREATE HADOOP TABLE statement referencing the data of an arbitrary user or (2) an import of a certain Hive table definition with the HCAT_SYNC_OBJECTS procedure.
|
CVE-2015-1776 |
Apache Hadoop 2.6.x encrypts intermediate data generated by a MapReduce job and stores it along with the encryption key in a credentials file on disk when the Intermediate data encryption feature is enabled, which allows local users to obtain sensitive information by reading the file.
|
CVE-2014-8733 |
Cloudera Manager 5.2.0, 5.2.1, and 5.3.0 stores the LDAP bind password in plaintext in unspecified world-readable files under /etc/hadoop, which allows local users to obtain this password.
|
CVE-2014-4611 |
Integer overflow in the LZ4 algorithm implementation, as used in Yann Collet LZ4 before r118 and in the lz4_uncompress function in lib/lz4/lz4_decompress.c in the Linux kernel before 3.15.2, on 32-bit platforms might allow context-dependent attackers to cause a denial of service (memory corruption) or possibly have unspecified other impact via a crafted Literal Run that would be improperly handled by programs not complying with an API limitation, a different vulnerability than CVE-2014-4715.
|
CVE-2014-3627 |
The YARN NodeManager daemon in Apache Hadoop 0.23.0 through 0.23.11 and 2.x before 2.5.2, when using Kerberos authentication, allows remote cluster users to change the permissions of certain files to world-readable via a symlink attack in a public tar archive, which is not properly handled during localization, related to distributed cache.
|
CVE-2014-0229 |
Apache Hadoop 0.23.x before 0.23.11 and 2.x before 2.4.1, as used in Cloudera CDH 5.0.x before 5.0.2, do not check authorization for the (1) refreshNamenodes, (2) deleteBlockPool, and (3) shutdownDatanode HDFS admin commands, which allows remote authenticated users to cause a denial of service (DataNodes shutdown) or perform unnecessary operations by issuing a command.
|
CVE-2013-2192 |
The RPC protocol implementation in Apache Hadoop 2.x before 2.0.6-alpha, 0.23.x before 0.23.9, and 1.x before 1.2.1, when the Kerberos security features are enabled, allows man-in-the-middle attackers to disable bidirectional authentication and obtain sensitive information by forcing a downgrade to simple authentication.
|
CVE-2012-4449 |
Apache Hadoop before 0.23.4, 1.x before 1.0.4, and 2.x before 2.0.2 generate token passwords using a 20-bit secret when Kerberos security features are enabled, which makes it easier for context-dependent attackers to crack secret keys via a brute-force attack.
|
CVE-2012-3376 |
DataNodes in Apache Hadoop 2.0.0 alpha do not check the BlockTokens of clients when Kerberos is enabled and the DataNode has checked out the same BlockPool twice from a NameNode, which might allow remote clients to read arbitrary blocks, write to blocks to which they only have read access, and have other unspecified impacts.
|
CVE-2012-2945 |
Hadoop 1.0.3 contains a symlink vulnerability.
|
CVE-2012-1574 |
The Kerberos/MapReduce security functionality in Apache Hadoop 0.20.203.0 through 0.20.205.0, 0.23.x before 0.23.2, and 1.0.x before 1.0.2, as used in Cloudera CDH CDH3u0 through CDH3u2, Cloudera hadoop-0.20-sbin before 0.20.2+923.197, and other products, allows remote authenticated users to impersonate arbitrary cluster user accounts via unspecified vectors.
|
CVE-2012-0881 |
Apache Xerces2 Java Parser before 2.12.0 allows remote attackers to cause a denial of service (CPU consumption) via a crafted message to an XML service, which triggers hash table collisions.
|