This page contains all of the release notes for the JDK 9 General Availability (GA) releases:
January 16, 2018
The full version string for this update release is 9.0.4+11 (where "+" means "build"). The version number is 9.0.4.
For the January CPU, two different JDK 9 bundles were released:
This page provides release notes for both bundles. Content that only applies to a specific bundle is presented in sections that contain either OpenJDK or Oracle JDK in their titles. Changes that apply to both bundles are presented in sections that do not have OpenJDK or Oracle JDK in their titles.
NOTE: This is the final planned release for JDK 9.
Users of JDK 9 should update to JDK 10 between its release in March 2018 and the next planned Critical Update Release in April 2018.
JDK 9.0.4 contains IANA time zone data version 2017c. For more information, refer to Timezone Data Versions in the JRE Software.
The security baselines for the Java Runtime Environment (JRE) at the time of the release of JDK 9.0.4 are specified in the following table:
JRE Family Version | JRE Security Baseline (Full Version String)
---|---
9 | 9.0.4+11 |
8 | 1.8.0_161-b12 |
7 | 1.7.0_171-b11 |
6 | 1.6.0_181-b10 |
The JRE expires whenever a new release with security vulnerability fixes becomes available. Critical patch updates, which contain security vulnerability fixes, are announced one year in advance on Critical Patch Updates, Security Alerts and Third Party Bulletin. This JRE (version 9.0.4) will expire with the release of the next critical patch update scheduled for April 17, 2018.
For systems unable to reach the Oracle Servers, a secondary mechanism expires this JRE (version 9.0.4) on May 17, 2018. After either condition is met (new release becoming available or expiration date reached), the JRE will provide additional warnings and reminders to users to update to the newer version. For more information, see JRE Expiration Date.
The OpenJDK 9 binary for Linux x64 contains an empty cacerts keystore. This prevents TLS connections from being established because there are no Trusted Root Certificate Authorities installed. As a workaround for OpenJDK 9 binaries, users had to set the javax.net.ssl.trustStore system property to use a different keystore.

"JEP 319: Root Certificates" [1] addresses this problem by populating the cacerts keystore with a set of root certificates issued by the CAs of Oracle's Java SE Root CA Program. As a prerequisite, each CA must sign the Oracle Contributor Agreement (OCA), http://www.oracle.com/technical-resources/oracle-contributor-agreement.html, or an equivalent agreement, to grant Oracle the right to open-source its certificates.

[1] JDK-8191486
Support has been added for the TLS session hash and extended master secret extension (RFC 7627) in the JDK JSSE provider. Note that, in general, a server certificate change is restricted if endpoint identification is not enabled and the previous handshake is a session-resumption abbreviated initial handshake, unless the identities represented by both certificates can be regarded as the same. However, if the extension is enabled or negotiated, this restriction on server certificate changes is unnecessary and is discarded accordingly. In case of compatibility issues, an application may disable negotiation of this extension by setting the system property jdk.tls.useExtendedMasterSecret to false. By setting the system property jdk.tls.allowLegacyResumption to false, an application can reject abbreviated handshakes when the session hash and extended master secret extension is not negotiated. By setting the system property jdk.tls.allowLegacyMasterSecret to false, an application can reject connections that do not support the session hash and extended master secret extension.
The JDK SunJSSE implementation now supports the TLS FFDHE mechanisms defined in RFC 7919. If a server cannot process the supported_groups TLS extension or the named groups in the extension, applications can either customize the supported group names with jdk.tls.namedGroups, or turn off the FFDHE mechanisms by setting the system property jsse.enableFFDHEExtension to false.
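All of these switches are ordinary system properties, so they can be set on the command line with -D or programmatically before the first TLS handshake. A minimal sketch (the values shown are the compatibility fallbacks described above, not recommended defaults):

```java
public class TlsCompatSwitches {
    public static void main(String[] args) {
        // Disable negotiation of the extended master secret extension
        // (RFC 7627) for peers that cannot handle it:
        System.setProperty("jdk.tls.useExtendedMasterSecret", "false");
        // Reject abbreviated (resumption) handshakes when the extension
        // was not negotiated:
        System.setProperty("jdk.tls.allowLegacyResumption", "false");
        // Turn off the FFDHE mechanisms (RFC 7919) for servers that
        // mishandle the supported_groups extension:
        System.setProperty("jsse.enableFFDHEExtension", "false");
        System.out.println("TLS compatibility properties set");
    }
}
```

Note that these properties are read by JSSE during initialization, so they must be in place before any TLS connection is attempted.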
Applications that either explicitly or implicitly call org.omg.CORBA.ORB.string_to_object, and wish to ensure the integrity of the IDL stub type involved in the ORB::string_to_object call flow, should specify additional IDL stub type checking. This is an "opt in" feature and is not enabled by default.

To take advantage of the additional type checking, the list of valid IDL interface class names of IDL stub classes is configured by one of the following:

- Specifying the security property com.sun.CORBA.ORBIorTypeCheckRegistryFilter, located in the file conf/security/java.security in Java SE 9, or in jre/lib/security/java.security in Java SE 8 and earlier.

- Specifying the system property com.sun.CORBA.ORBIorTypeCheckRegistryFilter with the list of classes. If the system property is set, its value overrides the corresponding property defined in the java.security configuration.

If the com.sun.CORBA.ORBIorTypeCheckRegistryFilter property is not set, the type checking is performed only against a set of class names of the IDL interface types corresponding to the built-in IDL stub classes.
In 9.0.4, the RSA implementation in the SunRsaSign provider will reject any RSA public key that has an exponent that is not in the valid range as defined by PKCS#1 version 2.2. This change will affect JSSE connections as well as applications built on JCE.
This change updates the JDK providers to use 2048 bits as the default key size for DSA, instead of 1024 bits, when applications have not explicitly initialized the java.security.KeyPairGenerator and java.security.AlgorithmParameterGenerator objects with a key size.

If compatibility issues arise, existing applications can set the system property jdk.security.defaultKeySize, introduced in JDK-8181048, with the algorithm and its desired default key size.
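The new default is visible whenever a key pair generator is used without an explicit initialize call. A small sketch:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.DSAPublicKey;

public class DsaDefaultKeySize {
    public static void main(String[] args) throws Exception {
        // No explicit initialize(...) call, so the provider default applies.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
        KeyPair kp = kpg.generateKeyPair();
        DSAPublicKey pub = (DSAPublicKey) kp.getPublic();
        // As of this release the default DSA modulus is 2048 bits, not 1024:
        System.out.println("p bit length: " + pub.getParams().getP().bitLength());
    }
}
```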
The generateSecret(String) method has been mostly disabled in the javax.crypto.KeyAgreement services of the SunJCE and SunPKCS11 providers. Invoking this method for these providers will result in a NoSuchAlgorithmException for most algorithm string arguments. The previous behavior of this method can be re-enabled by setting the value of the jdk.crypto.KeyAgreement.legacyKDF system property to true (case insensitive). Re-enabling this method by setting this system property is not recommended.
Prior to this change, the following code could be used to produce secret keys for AES using Diffie-Hellman:
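The original snippet did not survive in this copy of the notes; the pattern in question looked roughly like the following (party names are illustrative). Note that on 9.0.1 and later the final call throws unless the legacy KDF property is set:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.KeyAgreement;
import javax.crypto.SecretKey;

public class LegacyDhKdf {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DiffieHellman");
        kpg.initialize(2048);
        KeyPair alice = kpg.generateKeyPair();
        KeyPair bob = kpg.generateKeyPair();

        KeyAgreement ka = KeyAgreement.getInstance("DiffieHellman");
        ka.init(alice.getPrivate());
        ka.doPhase(bob.getPublic(), true);

        // Before this change the provider applied an unspecified KDF and
        // returned an AES key here; since 9.0.1 this throws
        // NoSuchAlgorithmException unless
        // jdk.crypto.KeyAgreement.legacyKDF=true is set.
        SecretKey aesKey = ka.generateSecret("AES");
        System.out.println(aesKey.getAlgorithm());
    }
}
```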
The issue with this code is that it is unspecified how the provider should derive a secret key from the output of the Diffie-Hellman operation. There are several options for how this key derivation function can work, and each of these options has different security properties. For example, the key derivation function may bind the secret key to some information about the context or the parties involved in the key agreement. Without a clear specification of the behavior of this method, there is a risk that the key derivation function will not have some security property that is expected by the client.
To address this risk, the generateSecret(String) method of KeyAgreement was mostly disabled in the DiffieHellman services, and code like the example above will now result in a java.security.NoSuchAlgorithmException. Clients may still use the no-argument generateSecret method to obtain the raw Diffie-Hellman output, which can be used with an appropriate key derivation function to produce a secret key.
Existing applications that use the generateSecret(String) method of this service will need to be modified. Here are a few options:
A) Implement the key derivation function from an appropriate standard. For example, NIST SP 800-56Ar2[1] section 5.8 describes how to derive keys from Diffie-Hellman output.
B) Implement the following simple key derivation function:

1. Call KeyAgreement.generateSecret() to get the shared secret as a byte array.
2. Use the byte array to construct a SecretKeySpec. This constructor also requires the standard name of the secret-key algorithm (e.g. "AES").

This is a simple key derivation function that may provide adequate security in a typical application. Developers should note that this method provides no protection against the reuse of key agreement output in different contexts, so it is not appropriate for all applications. Also, some additional effort may be required to enforce key size restrictions like the ones in Table 2 of NIST SP 800-57pt1r4 [2].
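A sketch of option B, with the caveats above (party names and the 128-bit key length are illustrative choices, not prescribed values):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.KeyAgreement;
import javax.crypto.spec.SecretKeySpec;

public class SimpleDhKdf {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DiffieHellman");
        kpg.initialize(2048);
        KeyPair alice = kpg.generateKeyPair();
        KeyPair bob = kpg.generateKeyPair();

        KeyAgreement ka = KeyAgreement.getInstance("DiffieHellman");
        ka.init(alice.getPrivate());
        ka.doPhase(bob.getPublic(), true);

        // Step 1: raw shared secret from the no-argument generateSecret().
        byte[] shared = ka.generateSecret();
        // Step 2: wrap a slice of it in a SecretKeySpec, naming the
        // secret-key algorithm explicitly (here a 128-bit AES key).
        SecretKeySpec aesKey = new SecretKeySpec(shared, 0, 16, "AES");
        System.out.println("derived " + aesKey.getAlgorithm() + " key, "
                + aesKey.getEncoded().length * 8 + " bits");
    }
}
```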
C) Set the jdk.crypto.KeyAgreement.legacyKDF system property to "true". This will restore the previous behavior of this KeyAgreement service. This solution should only be used as a last resort if the application code cannot be modified, or if the application must interoperate with a system that cannot be modified. The "legacy" key derivation function and its security are unspecified.
To improve the strength of SSL/TLS connections, exportable cipher suites have been disabled in SSL/TLS connections in the JDK by the jdk.tls.disabledAlgorithms security property.
New public attributes, RMIConnectorServer.CREDENTIALS_FILTER_PATTERN and RMIConnectorServer.SERIAL_FILTER_PATTERN, have been added to RMIConnectorServer.java. With these new attributes, users can specify the deserialization filter pattern strings to be used while making an RMIServer.newClient() remote call and while deserializing parameters sent over RMI to the server, respectively.

The user can also provide a filter pattern string to the default agent via management.properties. As a result, a new attribute has been added to management.properties.

The existing attribute RMIConnectorServer.CREDENTIAL_TYPES is superseded by RMIConnectorServer.CREDENTIALS_FILTER_PATTERN and has been removed.
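The new attributes are passed through the environment map when creating a connector server. A minimal sketch (the filter pattern shown, which accepts only Strings as credentials, is an illustrative choice; the server is constructed but not started):

```java
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;
import javax.management.remote.rmi.RMIConnectorServer;

public class FilteredConnector {
    public static void main(String[] args) throws Exception {
        Map<String, Object> env = new HashMap<>();
        // Only allow Strings (and reject everything else) when
        // deserializing credentials passed to RMIServer.newClient():
        env.put(RMIConnectorServer.CREDENTIALS_FILTER_PATTERN,
                "java.lang.String;!*");

        MBeanServer mbs = MBeanServerFactory.newMBeanServer();
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi://");
        JMXConnectorServer cs =
                JMXConnectorServerFactory.newJMXConnectorServer(url, env, mbs);
        System.out.println("connector created, active=" + cs.isActive());
    }
}
```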
Java SE 9 changes the JDK's Transform, Validation, and XPath implementations to use the JDK's system-default parser even when a third-party parser is on the classpath. In order to override the JDK system-default parser, applications need to explicitly set the new system property jdk.xml.overrideDefaultParser.
Support through the API

The overrideDefaultParser property is supported by the following APIs:

Support as a System property

The overrideDefaultParser property can be set through System.setProperty.

Support as a JAXP system property

The overrideDefaultParser property can be set in the JAXP configuration file jaxp.properties.

Scope and order

The overrideDefaultParser property follows the same rule as other JDK JAXP properties, in that a setting of a narrower scope takes precedence over that of a wider scope. A setting through the API overrides the System property, which in turn overrides the setting in the jaxp.properties file.
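The scope rule above can be sketched as follows: the process-wide System property sets the default, and a per-factory setFeature call (the API scope) takes precedence over it:

```java
import javax.xml.transform.TransformerFactory;

public class OverrideDefaultParser {
    public static void main(String[] args) throws Exception {
        // Widest scope: process-wide System property.
        System.setProperty("jdk.xml.overrideDefaultParser", "true");

        // Narrower scope: this factory's setting takes precedence over
        // the System property for work done through this factory.
        TransformerFactory tf = TransformerFactory.newInstance();
        tf.setFeature("jdk.xml.overrideDefaultParser", false);

        System.out.println("feature configured");
    }
}
```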
The following are some of the notable bug fixes included in this release:
Web Start applications cannot be launched when clicking a JNLP link from IE 11 on the Windows 10 Creators Update when a 64-bit JRE is installed. The workaround is to uninstall the 64-bit JRE and use only the 32-bit JRE.
This release also contains fixes for security vulnerabilities described in the Oracle Critical Patch Update. For a more complete list of the bug fixes included in this release, see the JDK 9.0.4 Bug Fixes page.
October 17, 2017
The full version string for this update release is 9.0.1+11 (where "+" means "build"). The version number is 9.0.1.
JDK 9.0.1 contains IANA time zone data version 2017b. For more information, refer to Timezone Data Versions in the JRE Software.
The security baselines for the Java Runtime Environment (JRE) at the time of the release of JDK 9.0.1 are specified in the following table:
JRE Family Version | JRE Security Baseline (Full Version String)
---|---
9 | 9.0.1+11 |
8 | 1.8.0_151-b12 |
7 | 1.7.0_161-b13 |
6 | 1.6.0_171-b13 |
The JRE expires whenever a new release with security vulnerability fixes becomes available. Critical patch updates, which contain security vulnerability fixes, are announced one year in advance on Critical Patch Updates, Security Alerts and Third Party Bulletin. This JRE (version 9.0.1) will expire with the release of the next critical patch update scheduled for January 16, 2018.
For systems unable to reach the Oracle Servers, a secondary mechanism expires this JRE (version 9.0.1) on February 16, 2018. After either condition is met (new release becoming available or expiration date reached), the JRE will provide additional warnings and reminders to users to update to the newer version. For more information, see JRE Expiration Date.
Timeouts used by the FTP URL protocol handler have been changed from infinite to 5 minutes. This will result in an IOException from connect and read operations if the FTP server is unresponsive. For example, new URL("ftp://example.com").openStream().read() will fail with java.net.SocketTimeoutException if a connection or read could not be completed within 5 minutes.

To revert this behavior to that of previous releases, the following system properties may be used: sun.net.client.defaultReadTimeout=0, sun.net.client.defaultConnectTimeout=0
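These are plain system properties; a value of 0 means "infinite" (the pre-9.0.1 behavior), and any positive value is a timeout in milliseconds. They must be set before the first FTP connection is made:

```java
public class FtpTimeoutDefaults {
    public static void main(String[] args) {
        // Restore the pre-9.0.1 infinite timeouts for the FTP URL
        // protocol handler (0 = infinite; set before first use).
        System.setProperty("sun.net.client.defaultConnectTimeout", "0");
        System.setProperty("sun.net.client.defaultReadTimeout", "0");
        System.out.println("FTP URL handler timeouts restored to infinite");
    }
}
```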
The OpenJDK 9 binary for Linux x64 contains an empty cacerts keystore. This prevents TLS connections from being established because there are no Trusted Root Certificate Authorities installed. You may see an exception like the following:

javax.net.ssl.SSLException: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty

As a workaround, users can set the javax.net.ssl.trustStore system property to use a different keystore. For example, the ca-certificates package on Oracle Linux 7 contains the set of Root CA certificates chosen by the Mozilla Foundation for use with the Internet PKI. This package installs a trust store at /etc/pki/java/cacerts, which can be used by OpenJDK 9.

Only the OpenJDK 64-bit Linux download is impacted. This issue does not apply to any Oracle JRE/JDK download.

Progress on open-sourcing the Oracle JDK Root CAs can be tracked through the issue JDK-8189131.
One Swisscom root certificate has been revoked by Swisscom and has been removed:
Swisscom Root EV CA 2
alias: "swisscomrootevca2 [jdk]"
DN: CN=Swisscom Root EV CA 2, OU=Digital Certificate Services, O=Swisscom, C=ch
Two important changes have been made for this issue:

1. A new system property has been introduced that allows users to configure the default key size used by the JDK provider implementations of KeyPairGenerator and AlgorithmParameterGenerator. This property is named "jdk.security.defaultKeySize", and its value is a list of comma-separated entries. Each entry consists of a case-insensitive algorithm name and the corresponding default key size (in decimal), separated by ":". In addition, white space is ignored.
By default, this property will not have a value, and JDK providers will use their own default values. Entries containing an unrecognized algorithm name will be ignored. If the specified default key size is not a parseable decimal integer, that entry will be ignored as well.
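The entry format can be sketched as follows (the sizes chosen here are illustrative; the property must be set before the providers first read it, i.e. before any key generation):

```java
import java.security.KeyPairGenerator;
import java.security.interfaces.DSAPublicKey;

public class DefaultKeySizeOverride {
    public static void main(String[] args) throws Exception {
        // Comma-separated "algorithm:size" entries; algorithm names are
        // case-insensitive and surrounding white space is ignored.
        System.setProperty("jdk.security.defaultKeySize", "dsa:1024, RSA:3072");

        // With the override in place, an uninitialized DSA generator
        // falls back to 1024 bits instead of the new 2048-bit default.
        DSAPublicKey pub = (DSAPublicKey) KeyPairGenerator.getInstance("DSA")
                .generateKeyPair().getPublic();
        System.out.println("DSA p: " + pub.getParams().getP().bitLength() + " bits");
    }
}
```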
2. The DSA KeyPairGenerator implementation of the SUN provider no longer implements java.security.interfaces.DSAKeyPairGenerator. Applications that cast the SUN provider's DSA KeyPairGenerator object to java.security.interfaces.DSAKeyPairGenerator can set the system property "jdk.security.legacyDSAKeyPairGenerator". If the value of this property is "true", the SUN provider will return a DSA KeyPairGenerator object that implements the java.security.interfaces.DSAKeyPairGenerator interface. This legacy implementation will use the same default value as specified by the javadoc of the interface.

By default, this property will not have a value, and the SUN provider will return a DSA KeyPairGenerator object that does not implement the aforementioned interface and thus can determine its own provider-specific default value, as stated in the java.security.KeyPairGenerator class or by the "jdk.security.defaultKeySize" system property if set.
Deserialization of certain collection instances will cause arrays to be allocated. The ObjectInputFilter.checkInput() method is now called prior to the allocation of these arrays. Deserializing instances of ArrayDeque, ArrayList, IdentityHashMap, PriorityQueue, java.util.concurrent.CopyOnWriteArrayList, and the immutable collections (as returned by List.of, Set.of, and Map.of) will call checkInput() with a FilterInfo instance whose serialClass() method returns Object[].class. Deserializing instances of HashMap, HashSet, Hashtable, and Properties will call checkInput() with a FilterInfo instance whose serialClass() method returns Map.Entry[].class. In both cases, the FilterInfo.arrayLength() method will return the actual length of the array to be allocated. The exact circumstances under which the serialization filter is called, and with what information, are subject to change in future releases.
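The effect is that array limits in a filter now apply to these internal allocations. A sketch (the filter pattern and sizes are illustrative): deserializing a 20-element ArrayList under a maxarray=10 filter is rejected, because the list's backing Object[] allocation is now presented to the filter.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InvalidClassException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.ArrayList;
import java.util.List;

public class ArrayFilterDemo {
    public static void main(String[] args) throws Exception {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < 20; i++) list.add(i);

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(list);
        }

        // ArrayList's readObject allocates its Object[] backing array; the
        // filter sees that allocation as serialClass = Object[].class with
        // arrayLength = 20, so maxarray=10 rejects the stream.
        ObjectInputFilter filter = ObjectInputFilter.Config.createFilter(
                "maxarray=10;java.util.**;java.lang.**;!*");
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            ois.setObjectInputFilter(filter);
            ois.readObject();
            System.out.println("unexpectedly accepted");
        } catch (InvalidClassException e) {
            System.out.println("rejected by filter: " + e.getMessage());
        }
    }
}
```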
When keytool is operating on a JKS or JCEKS keystore, a warning may be shown that the keystore uses a proprietary format and that migrating to PKCS12 is recommended. The keytool -importkeystore command has also been updated so that it can convert a keystore from one type to another if the source and destination point to the same file.
Applications that either explicitly or implicitly call org.omg.CORBA.ORB.string_to_object, and wish to ensure the integrity of the IDL stub type involved in the ORB::string_to_object call flow, should specify additional IDL stub type checking. This is an "opt in" feature and is not enabled by default.

To take advantage of the additional type checking, the list of valid IDL interface class names of IDL stub classes is configured by one of the following:

- Specifying the security property com.sun.CORBA.ORBIorTypeCheckRegistryFilter, located in the file conf/security/java.security in Java SE 9, or in jre/lib/security/java.security in Java SE 8 and earlier.

- Specifying the system property com.sun.CORBA.ORBIorTypeCheckRegistryFilter with the list of classes. If the system property is set, its value overrides the corresponding property defined in the java.security configuration.

If the com.sun.CORBA.ORBIorTypeCheckRegistryFilter property is not set, the type checking is performed only against a set of class names of the IDL interface types corresponding to the built-in IDL stub classes.
This release contains fixes for security vulnerabilities described in the Oracle Critical Patch Update. For a more complete list of the bug fixes included in this release, see the JDK 9.0.1 Bug Fixes page.
The Java Platform, Standard Edition 9 Development Kit (JDK 9) is a feature release of the Java SE platform. It contains new features and enhancements in many functional areas.
You can use the links on this page to view the Release Notes describing important changes, enhancements, removed APIs and features, deprecated APIs and features, and other information about JDK 9 and Java SE 9.
Links to other sources of information about JDK 9 are also provided. The JDK Guides and Reference Documentation link below displays a page containing links to the user guides, troubleshooting information, and specific information of interest to users moving from previous versions of the JDK. Links to the JDK 9 API Specification and the Java Language and Virtual Machine Specifications are provided below in the JDK 9 Specifications group.
Note: The Release Notes files are located only on our website.
The following sections are included in these Release Notes:
The following items describe important changes and information about this release. In some cases, the descriptions provide links to additional detailed information about an issue or a change. This page does not duplicate the descriptions provided by the other JDK 9 Release Notes pages and:
You should be aware of the content in those documents as well as the items described in this page.
The descriptions below also identify potential compatibility issues that you might encounter when migrating to JDK 9. See the JDK 9 Migration Guide for descriptions of specific compatibility issues.
The Kinds of Compatibility page on the OpenJDK wiki identifies three types of potential compatibility issues for Java programs used in these descriptions:
Source: Source compatibility concerns translating Java source code into class files.
Binary: Binary compatibility is defined in The Java Language Specification as preserving the ability to link without error.
Behavioral: Behavioral compatibility includes the semantics of the code that is executed at runtime.
See the Compatibility & Specification Review (CSR) page on the OpenJDK wiki for more information about compatibility as it relates to JDK 9.
JDK 9 uses a new version string format. The most notable changes are the removal of the "1." from the beginning of the version string and the use of three or more separate elements to specify major, minor, and security updates. All code that parses the value of the system properties java.version, java.specification.version, or java.vm.specification.version should be examined to ensure that it works with the new scheme. Maintainers of code that parses these properties should also be aware of the new Runtime.version() API.

Details of the new version string format can be found in JEP 223: New Version-String Scheme.
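Instead of hand-rolled string splitting on java.version, the structured API can be used directly. A small sketch using this release's own version string:

```java
public class VersionParsing {
    public static void main(String[] args) {
        // Parse a JDK 9-style version string; no leading "1." element.
        Runtime.Version v = Runtime.Version.parse("9.0.4+11");
        System.out.println("major=" + v.major()
                + " minor=" + v.minor()
                + " security=" + v.security()
                + " build=" + v.build().orElse(-1));
        // The running JVM's own version:
        System.out.println(Runtime.version());
    }
}
```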
Java SE and the JDK have been significantly updated by the introduction of the Java Platform Module System (JSR 376) and using the module system to modularize the Java SE Platform and the JDK. The compatibility issues due to the changes are documented in the "Risks and Assumptions" section of JEP 261 and also summarized here.
All JDK internal classes are now encapsulated at compile time. Using javac to compile source code with references to JDK internal classes will now fail. This differs from previous releases, where javac emitted warnings of the form "XXX is an internal proprietary API and may be removed in a future release". JEP 261 documents the --add-exports option, which may be used as a temporary workaround to compile source code with references to JDK internal classes.
All JDK internal classes are also encapsulated at run time, but most remain accessible to applications and libraries on the class path. Specifically, all public classes in JDK internal packages that existed in JDK 8 remain accessible to code on the class path. Furthermore, these JDK internal packages, and the standard packages in Java SE 8, are open in JDK 9 for so-called deep reflection by code on the class path. This allows existing code on the class path that relies on the use of setAccessible to break into JDK internals, or to do other illegal access on members of classes in these packages, to work as in previous releases. A future JDK release will change this policy so that these packages will not be open, and illegal access to members of classes in these packages will be denied. To help identify code that needs to be fixed, the JDK emits a warning to standard error on the first use of core reflection that performs an illegal access. The warning is not suppressible.
Developers of applications that observe "illegal access" warnings caused by code in libraries that they use are encouraged to submit bugs to the library maintainers.
Developers of libraries using core reflection that may rely on illegal access are encouraged to test with --illegal-access=warn or --illegal-access=debug to identify code in their libraries that may need updating.
All developers are encouraged to use the jdeps tool to identify any static references to JDK internal classes. The jdeps tool was introduced in JDK 8 and has many significant improvements in JDK 9.
In preparation for a JDK release that denies illegal access, applications and libraries should be tested with --illegal-access=deny. As documented in JEP 261, running with -Dsun.reflect.debugModuleAccessChecks=access may help to locate code that silently ignores IllegalAccessException or InaccessibleObjectException.
As detailed in JEP 261, the default set of root modules for applications on the class path is the java.se module rather than the java.se.ee module. Applications and libraries that make use of classes in module java.xml.bind (JAXB), module java.xml.ws (JAX-WS), module java.corba (CORBA), or other modules shared between Java SE and Java EE may need changes to how they are compiled and deployed. Furthermore, these modules have been deprecated in Java SE 9 for removal in a future release, so applications and libraries using these APIs will eventually need to migrate to the standalone releases of these modules. The JDK 9 Migration Guide details the options for applications and libraries using these APIs.
As documented in JEP 261, if a package is defined in both a named module and on the class path then the package on the class path will be ignored. This may impact applications that have (perhaps unknowingly) added classes to Java SE or JDK packages by means of the class path.
The boot class path has been mostly removed in this release. The java -Xbootclasspath and -Xbootclasspath/p options have been removed. The javac -bootclasspath option can only be used when compiling to JDK 8 or older. The system property sun.boot.class.path has been removed. Deployments that rely on overriding platform classes for testing purposes with -Xbootclasspath/p will need to be changed to use the --patch-module option that is documented in JEP 261. The -Xbootclasspath/a option is unchanged.
The application class loader is no longer an instance of java.net.URLClassLoader (an implementation detail that was never specified in previous releases). Code that assumes that ClassLoader::getSystemClassLoader returns a URLClassLoader object will need to be updated. Note that Java SE and the JDK do not provide an API for applications or libraries to dynamically augment the class path at run time.
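The change is easy to observe directly:

```java
import java.net.URLClassLoader;

public class SystemLoaderCheck {
    public static void main(String[] args) {
        ClassLoader scl = ClassLoader.getSystemClassLoader();
        // On JDK 9 and later this prints an internal application class
        // loader, and the instanceof check is false; a cast to
        // URLClassLoader would throw ClassCastException.
        System.out.println(scl.getClass().getName());
        System.out.println(scl instanceof URLClassLoader);
    }
}
```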
The classes in many non-core modules are now defined to the platform class loader rather than the boot class loader. This may impact code that creates class loaders with null as the parent class loader and assumes that all platform classes are visible to the parent. Such code may need to be changed to use the platform class loader as the parent (see java.lang.ClassLoader::getPlatformClassLoader). Tool agents that add supporting classes to the boot class path may also assume that all platform classes are visible to the boot class loader. The java.lang.instrument package description provides more information on this topic for maintainers of Java agents.
The java.lang.Package API has been updated to represent a run-time package. The Class::getPackage method returns a Package object whose name is an empty string for a class in the unnamed package. This may impact code that expects Class::getPackage to return null for a class in the unnamed package. In addition, Package::getPackages and ClassLoader::getPackages may return an array with more than one Package object of the same package name, each defined by a different class loader in the class loader hierarchy. This differs from previous releases, in which only one Package object per package name was included in the returned array.
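A small sketch of the first point (the class below is itself in the unnamed package when compiled standalone):

```java
public class PackageNames {
    public static void main(String[] args) {
        // For a class in the unnamed package, getPackage() now returns a
        // Package whose name is the empty string rather than null:
        Package p = PackageNames.class.getPackage();
        System.out.println("unnamed package name=\"" + p.getName() + "\"");
        // Platform classes report their named package as before:
        System.out.println(String.class.getPackage().getName());
    }
}
```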
The java.lang.Package objects created by the built-in class loaders for packages in named modules do not have specification or implementation versioning information. This differs from previous releases, where specification and implementation versioning information was read from the main manifest of rt.jar. This change may impact code that invokes getPackage on a platform class and expects the Package::getSpecificationXXX or Package::getImplementationXXX methods to return non-null values.
The Class::getResource and Class::getResourceAsStream methods have been updated in Java SE 9 so that invoking them on a Class in a named module will only locate the resource in that module. This may impact code that invokes these methods on platform classes on the assumption that the class path will be searched. Code that needs to search the class path for a resource should be changed to use ClassLoader::getSystemResource or ClassLoader::getSystemResourceAsStream.
JDK internal resources, other than class files, in the standard and JDK modules can no longer be located with the ClassLoader::getResourceXXX APIs. This may impact code that relies on these APIs to get at JDK internal properties files or other resources. The Class::getResourceXXX APIs will continue to locate JDK internal resources in packages that are open for illegal access (see above). A further change to these APIs is that the permission needed, when running with a security manager, to locate resources in the run-time image has changed: the permission needed is now RuntimePermission("accessSystemModules"), which differs from previous releases, where permission to read ${java.home}/lib/rt.jar was needed.
Stack traces have been updated to include module names in the stack trace elements for classes in named modules. Code that parses stack traces may need to be updated.
The JDK may not start (meaning java -version fails) in some unsupported configurations. In particular, if the system property file.encoding is set on the command line with the name of a charset that is not in the java.base module, then startup will fail with an error message indicating that the charset is not supported.
In previous releases, the "double equals" syntax could be used when setting the security policy to override the JDK security policy file (e.g. -Djava.security.policy==appserver.policy
). This has changed in JDK 9 so that it augments the permissions granted to the standard and JDK modules. This change means that application servers that override the JDK policy file do not need to copy the permissions granted to standard and JDK modules. More details on this issue can be found in JDK-8159752.
Maintainers of JVM TI agents that instrument or profile code executing early in VM startup should review the changes in JEP 261 and the changes in the JVM TI specification. The default behavior has changed so that the ClassFileLoadHook event is not sent during the primordial phase, and the VMStart event (which signals the beginning of the start phase) is delayed until the module system is initialized. The JVM TI specification has been updated to define new capabilities for agents that need events for code that is executed before the VM is fully initialized.
Source Compatibility Issues
Java SE 9 adds the Module class to the java.lang package, which is implicitly imported on demand (i.e., import java.lang.*). If code in an existing source file imports some other package on demand, and that package declares a Module type, and the existing code refers to that type, then the source file will not compile without being changed to use a single-type-import declaration (i.e., import otherlib.Module).
Java SE 9 adds two abstract methods to java.lang.instrument.Instrumentation. This interface isn't intended to be implemented outside of the java.instrument module, but if there are implementations, they will not compile with JDK 9 until they are updated to implement the new methods added in Java SE 9.
Java SE 9 adds a six-parameter transform method to java.lang.instrument.ClassFileTransformer as a default method. This means that ClassFileTransformer is no longer a functional interface (JLS 9.8). Existing source code that uses the pre-existing five-parameter transform method as a functional interface will no longer compile.
In JDK 9, the default locale data uses data derived from the Unicode Consortium's Common Locale Data Repository (CLDR). As a result, users may see differences in locale-sensitive services behavior and/or translations. For example, CLDR does not provide localized display names for most 3-letter time zone IDs, so the display names may differ from JDK 8 and older. The JDK continues to ship with the legacy JRE locale data, and the system property java.locale.providers can be used to configure the lookup order. To enable behavior compatible with JDK 8, set the system property as follows:

-Djava.locale.providers=COMPAT,SPI
For more detail, refer to JEP 252.
In JDK 9, the default garbage collector is G1 when a garbage collector is not explicitly specified. G1 provides a better overall experience for most users when compared to a throughput-oriented collector such as the Parallel GC, which was previously the default.
The options to configure the G1 collector are documented on the java command page. See also JEP 248 for more information on this change.
The JDK and JRE run-time images have been restructured as documented in JEP 220. The compatibility issues due to the changes are documented in the "Risks and Assumptions" section of the JEP and also summarized here.
Tools and libraries that rely on the existence of the jre
directory, rt.jar
, or tools.jar
may need to be updated to work with the new layout and/or may need to be updated to use the jrt
file system provider to access class files and other resources in the runtime image.
All user-editable configuration files are now located in the JDK/JRE conf
directory. This includes the security policy file and other properties files. Scripts or procedures that rely on the old location of these files may need to be updated.
On Linux and Solaris, libjvm.so
is now located in the JDK/JRE lib
directory (it was located in lib/$ARCH
in previous releases). Applications that use the JNI invocation API to create the VM may need to be updated to locate libjvm.so
in its new location.
src.zip
has moved from the top-level directory to the lib
directory and now includes both the JDK and JavaFX source files in module directories. IDEs or tools that open this zip file may need to be updated.
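Code that previously opened rt.jar to read class files can use the jrt file system provider mentioned above instead. A minimal sketch (the /modules/&lt;module&gt;/... layout is the standard jrt layout):

```java
import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;

public class JrtDemo {
    public static void main(String[] args) throws Exception {
        // The built-in "jrt:/" file system exposes the runtime image.
        FileSystem jrt = FileSystems.getFileSystem(URI.create("jrt:/"));
        // Class files are laid out under /modules/<module-name>/<package-path>
        Path object = jrt.getPath("/modules/java.base/java/lang/Object.class");
        System.out.println(Files.exists(object));      // true
        System.out.println(Files.size(object) > 0);    // true
    }
}
```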
The deprecated Endorsed-Standards Override Mechanism has been removed in this release. The runtime will refuse to start if ${java.home}/lib/endorsed
exists or the system property java.endorsed.dirs
is specified on the command line. The javac
compiler will only accept the -endorseddirs
option when compiling to JDK 8 or older. Applications that rely on this mechanism should migrate to the upgradeable modules mechanism, documented in JEP 261.
The deprecated Extensions Mechanism has been removed. The runtime will refuse to start if ${java.home}/lib/ext
exists or the system property java.ext.dirs
is specified on the command line. The javac
compiler will only accept the -extdirs
option when compiling to JDK 8 or older. The rmic
compiler will no longer accept the -extdirs
option. Applications that rely on this mechanism should consider deploying the libraries on the class path or as modules on the module path.
The javac
command no longer supports -source
or -target
values for releases before 6/1.6. However, older class files are still readable by javac
. Source code for older releases can be ported to a newer source level. To generate class files usable by releases older than JDK 6, a javac
from a JDK 6, 7, or 8 release family can be used.
JEP 182 documents the policy for retiring old -source
and -target
options.
When generating class files in conjunction with -target 9
(specified either explicitly or implicitly), javac
will generate class files with a major version number of 53. For details of version 53 class files, see the Java Virtual Machine Specification.
The JDK classes themselves mostly use version 53 class files.
Tools or libraries that rely on ASM or other bytecode manipulation libraries may need updated versions of these libraries to work with version 53 class files.
Serialization Filtering introduces a new mechanism which allows incoming streams of object-serialization data to be filtered in order to improve both security and robustness. Every ObjectInputStream applies a filter, if configured, to the stream contents during deserialization. Filters are set using either a system property or a configured security property. The value of the "jdk.serialFilter" patterns are described in JEP 290 Serialization Filtering and in <JRE>/lib/security/java.security. Filter actions are logged to the 'java.io.serialization' logger, if enabled.
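Beyond the jdk.serialFilter property, a filter can be set on an individual stream. A hedged sketch (class and filter pattern here are illustrative, using the JEP 290 pattern syntax):

```java
import java.io.*;
import java.util.ArrayList;

public class FilterDemo {
    public static void main(String[] args) throws Exception {
        // Serialize an ArrayList, then reject it on deserialization.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new ArrayList<String>());
        }
        // Same pattern syntax as jdk.serialFilter: allow java.lang.*, reject the rest.
        ObjectInputFilter filter =
            ObjectInputFilter.Config.createFilter("java.lang.*;!*");
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            in.setObjectInputFilter(filter);
            in.readObject();
            System.out.println("accepted");
        } catch (InvalidClassException rejected) {
            System.out.println("rejected");   // filter vetoed java.util.ArrayList
        }
    }
}
```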
In JDK 9, the internal character storage of the java.lang.String, StringBuilder and StringBuffer classes has been changed from a UTF-16 char array to a byte array plus a one-byte encoding-flag field. The new representation stores the characters either as ISO-8859-1/Latin-1 (one byte per character) or as UTF-16 (two bytes per character), based on the contents of the string; the encoding-flag field indicates which encoding is in use. For String objects that contain only single-byte/Latin-1 characters, this feature reduces the space required to store the characters by 50%.
A new JVM option, -XX:-CompactStrings, has been introduced in JDK 9 to disable this feature; doing so may be worth considering for applications whose strings are predominantly UTF-16 encoded and therefore gain nothing from the compact representation.
Several APIs have been deprecated in Java SE 9. This will cause javac
to emit a variety of warnings during compilation. A deprecation warning will be emitted at the use site of an API deprecated with forRemoval=false
. A removal warning will be emitted at the use site of an API deprecated with forRemoval=true
.
A deprecation or removal warning is a recommendation that code be migrated away from the deprecated API. A removal warning is particularly urgent, as it indicates that the deprecated API will generally be removed from the next major release of the platform. However, it is not always practical to migrate code immediately. Therefore, two mechanisms have been provided for controlling the warnings that are emitted by javac
: command-line options and annotations in source code.
The javac
command-line options -Xlint:deprecation
and -Xlint:removal
will enable the respective warning types, and -Xlint:-deprecation
and -Xlint:-removal
will disable the respective warning types. Note that removal warnings are enabled by default.
The other mechanism is to add the @SuppressWarnings("deprecation")
or @SuppressWarnings("removal")
annotation to the source code. This annotation can be added at the declaration of a module, class, method, field, or local variable to suppress the respective warning types emitted within that declaration.
For further information about deprecation, see JEP 277 and the documentation for the java.lang.Deprecated
annotation type.
The JDK 9 release includes support for Unicode 8.0. Since JDK 8, which supported Unicode 6.2.0, Unicode 8.0 has introduced the following new features, now included in JDK 9:
The system property jdk.nio.maxCachedBufferSize
has been introduced in JDK 9 to limit the memory used by the "temporary buffer cache". The temporary buffer cache is a per-thread cache of direct memory used by the NIO implementation to support applications that do I/O with buffers backed by arrays in the Java heap. The value of the property is the maximum capacity of a direct buffer that can be cached. If the property is not set, then no limit is put on the size of buffers that are cached. Applications with certain patterns of I/O usage may benefit from using this property. In particular, an application that does I/O with large multi-megabyte buffers at startup but thereafter does I/O with small buffers may see a benefit. Applications that do I/O only with direct buffers will not see any benefit from this system property.
Applications running on server editions of Microsoft Windows that make heavy use of loopback connections may see latency and performance improvements if SIO_LOOPBACK_FAST_PATH is enabled. The system property "jdk.net.useFastTcpLoopback" controls whether the JDK enables SIO_LOOPBACK_FAST_PATH on Microsoft Windows. It is disabled by default but can be enabled by setting the system property on the command line with -Djdk.net.useFastTcpLoopback
or -Djdk.net.useFastTcpLoopback=true
.
Applications running on server editions of Microsoft Windows that make heavy use of java.nio.channels.FileChannel.transferTo
may see performance improvements if the implementation uses TransmitFile
. TransmitFile
makes use of the Windows cache manager to provide high-performance file data transfer over sockets. The system property "jdk.nio.enableFastFileTransfer
" controls whether the JDK uses TransmitFile
on Microsoft Windows. It is disabled by default but can be enabled by setting the system property on the command line with -Djdk.nio.enableFastFileTransfer
or -Djdk.nio.enableFastFileTransfer=true
.
This release adds the IBM1166 character set. It provides support for Cyrillic multilingual with euro for Kazakhstan. Aliases for this new character set include "cp1166", "ibm1166", "ibm-1166", and "1166".
RMI Registry and Distributed Garbage Collection use the mechanisms of JEP 290 Serialization Filtering to improve service robustness. RMI Registry and DGC implement built-in white-list filters for the typical classes expected to be used with each service. Additional filter patterns can be configured using either a system property or a security property. The "sun.rmi.registry.registryFilter" and "sun.rmi.transport.dgcFilter" property pattern syntax is described in JEP 290 and in <JRE>/lib/security/java.security.
Properties files in UTF-8 encoding are now supported by ResourceBundle, with automatic fallback to ISO-8859-1 encoding if needed. For more detail, refer to the PropertyResourceBundle class description.
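A minimal sketch of the new behavior, under the assumption that on JDK 9+ the PropertyResourceBundle constructor decodes its input stream as UTF-8 by default (key and value are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.PropertyResourceBundle;

public class Utf8BundleDemo {
    public static void main(String[] args) throws Exception {
        // UTF-8 bytes for a value containing non-ASCII characters,
        // written without \uXXXX escapes.
        byte[] props = "greeting=gr\u00fc\u00df dich\n".getBytes(StandardCharsets.UTF_8);
        PropertyResourceBundle bundle =
            new PropertyResourceBundle(new ByteArrayInputStream(props));
        // On JDK 9+ the non-ASCII characters survive the round trip.
        System.out.println(bundle.getString("greeting"));
    }
}
```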
The constructors for the utility visitors in javax.lang.model.util that correspond to the RELEASE_6 source level have been deprecated since the reference implementation regards -source 6 as obsolete. Authors of annotation processors should update their processors to support newer source versions.
New JMX agent property - jmxremote.host
A new property, com.sun.management.jmxremote.host
, is introduced that specifies the bind address for the default JMX agent. If the latter is not specified, the default JMX agent will listen on all interfaces (0.0.0.0) and the host value placed in the agent service URL (JMXServiceURL) is the IP address returned from invocation of InetAddress.getLocalHost()
method.
The com.sun.management.jmxremote.host property can be specified on the command line or in the JMX agent configuration file (management.properties).
A new Java attribute is defined for the environment to allow a JMX RMI JRMP server to specify a list of class names. These names correspond to the closure of class names that are expected by the server when deserializing credentials. For instance, if the expected credentials were a List<String>, the closure would include all the concrete classes expected in the serial form of a list of Strings.
By default this attribute is used only by the default agent with { "[Ljava.lang.String;", "java.lang.String" }, so that only arrays of Strings and Strings will be accepted when deserializing the credentials.
The attribute name is: "jmx.remote.rmi.server.credential.types"
Here is an example of starting a server with the specified credentials class names:

    Map<String, Object> env = new HashMap<>(1);
    env.put("jmx.remote.rmi.server.credential.types",
            new String[]{ String[].class.getName(), String.class.getName() });
    JMXConnectorServer server =
        JMXConnectorServerFactory.newJMXConnectorServer(url, env, mbeanServer);
To use the new feature, specify the "jmx.remote.rmi.server.credential.types" attribute directly.
A new ManagementAgent.status diagnostic command is introduced for querying the JMX agent's status.
The status will be relayed to the user in the following form:
Agent: <enabled|disabled>
(
ConnectionType: <local|remote>
Protocol: <rmi|...>
Host: <IP or host name>
URL: <valid JMX connector URL>
(
Properties:
(
<propertyname>=<propertyvalue>
)+
)?
)+
Where:
<name> means an arbitrary value
| means 'or'
( and ) denote a block
+ block repeats one or more times
? block appears at most once
Web Start applications can now specify requested JREs with their arch attributes, and select the first one available that matches, even if it is not the same arch (32 bit vs 64 bit) as the currently running JRE. For example, the JNLP content below would place first preference on 64 bit JDK8, and if not available, 32 bit JDK9:
<resources arch="x86_64">
<java version="1.8"/>
</resources>
<resources arch="x86">
<java version="1.9"/>
</resources>
Note that in the above example, in order to launch a 64 bit 1.8 JRE, a 64 bit 9 JRE must be installed. If only a 32 bit 9 JRE is installed, the 64 bit 1.8 JRE is unavailable.
The ability to specify a preference to launch a Java Web Start application in 64-bit or 32-bit architectures is now supported, by adding the 'arch' attribute to the JNLP resources block.
G1 now tries to collect humongous objects of primitive type (char, integer, long, double) that have few or no references from other objects at any young collection. During a young collection, G1 checks whether any incoming references to these humongous objects remain, and reclaims any humongous object that has none.
Three new experimental JVM options have been added with this change to control this behavior:
On platforms that support the concept of a thread name on their native threads, the java.lang.Thread.setName()
method will also set that native thread name. However, this will only occur when called by the current thread, and only for threads started through the java.lang.Thread
class (not for native threads that have attached via JNI). The presence of a native thread name can be useful for debugging and monitoring purposes. Some platforms may limit the native thread name to a length much shorter than that used by the java.lang.Thread
, which may result in some threads having the same native name.
Two new JVM flags have been added:
A non-ASN.1 encoded form for DSA and ECDSA signatures has been implemented. This new signature output format concatenates the r and s values from the signature in conformance with IEEE P1363. Signature objects using this format must provide one of the following algorithm Strings to the Signature.getInstance() method:
For DSA: NONEwithDSAinP1363Format SHA1withDSAinP1363Format SHA224withDSAinP1363Format SHA256withDSAinP1363Format
For ECDSA: NONEwithECDSAinP1363Format SHA1withECDSAinP1363Format SHA224withECDSAinP1363Format SHA256withECDSAinP1363Format SHA384withECDSAinP1363Format SHA512withECDSAinP1363Format
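A sketch using one of the new ECDSA algorithm names (the default EC provider and the P-256 curve selected by initialize(256) are assumptions of this example):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class P1363Demo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256);                       // NIST P-256
        KeyPair kp = kpg.generateKeyPair();
        byte[] msg = "hello".getBytes("UTF-8");

        Signature signer = Signature.getInstance("SHA256withECDSAinP1363Format");
        signer.initSign(kp.getPrivate());
        signer.update(msg);
        byte[] sig = signer.sign();
        // P1363 output is the raw concatenation r || s: 32 + 32 bytes for P-256
        System.out.println(sig.length);            // 64

        Signature verifier = Signature.getInstance("SHA256withECDSAinP1363Format");
        verifier.initVerify(kp.getPublic());
        verifier.update(msg);
        System.out.println(verifier.verify(sig));  // true
    }
}
```

Unlike the default ASN.1 encoding, the P1363 form has a fixed length determined by the key size.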
New certpath constraint: jdkCA
In the java.security
file, an additional constraint named "jdkCA" is added to the jdk.certpath.disabledAlgorithms
property. This constraint prohibits the specified algorithm only if the algorithm is used in a certificate chain that terminates at a marked trust anchor in the lib/security/cacerts keystore. If the jdkCA constraint is not set, then all chains using the specified algorithm are restricted. jdkCA may only be used once in a DisabledAlgorithm expression.
Example: To apply this constraint to SHA-1 certificates, include the following: SHA1 jdkCA
The JDK security providers have been enhanced to support 3072-bit Diffie-Hellman and DSA parameter generation, pre-computed Diffie-Hellman parameters up to 8192 bits, and pre-computed DSA parameters up to 3072 bits.
The system property jdk.tls.client.cipherSuites
can be used to customize the default enabled cipher suites for the client side of SSL/TLS connections. In a similar way, the system property jdk.tls.server.cipherSuites
can be used for customization on the server side.
The system properties contain a comma-separated list of supported cipher suite names that specify the default enabled cipher suites. All other supported cipher suites are disabled for this default setting. Unrecognized or unsupported cipher suite names specified in properties are ignored. Explicitly setting enabled cipher suites will override the system properties.
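For illustration only (the suite names are standard JSSE names; the class name is hypothetical, and this is not a recommendation of any particular suites):

```
java -Djdk.tls.client.cipherSuites="TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" com.example.Client
```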
Refer to the Java Cryptography Architecture Standard Algorithm Name Documentation for the standard JSSE cipher suite names, and the Java Cryptography Architecture Oracle Providers Documentation for the cipher suite names supported by the SunJSSE provider.
Note that the actual use of enabled cipher suites is restricted by algorithm constraints.
Note also that these system properties are currently supported by the JDK Reference Implementation. They are not guaranteed to be supported by other implementations.
Warning: These system properties can be used to configure weak cipher suites, and the configured cipher suites may weaken over time. We do not recommend using these system properties unless you understand the security implications. Use them at your own risk.
The SHA224withDSA and SHA256withDSA algorithms are now supported in the TLS 1.2 "signature_algorithms" extension in the SunJSSE provider. Note that this extension does not apply to TLS 1.1 and previous versions.
JEP 244 has enhanced the Java Secure Socket Extension (JSSE) to provide support for the TLS Application-Layer Protocol Negotiation (ALPN) Extension (RFC 7301). New methods have been added to the javax.net.ssl
classes SSLEngine
, SSLSocket
, and SSLParameters
to allow clients and servers to negotiate an application layer value as part of the TLS handshake.
The output of ExtendedGSSContext.inquireSecContext()
is now available as negotiated properties for the SASL GSSAPI mechanism using the name "com.sun.security.jgss.inquiretype.<type_name>", where "type_name" is the string form of the InquireType
enum parameter in lower case. For example, "com.sun.security.jgss.inquiretype.krb5_get_session_key_ex" for the session key of an established Kerberos 5 security context.
A new security property named jdk.xml.dsig.secureValidationPolicy
has been added that allows you to configure the individual restrictions that are enforced when the secure validation mode of XML Signature is enabled. The default value for this property in the java.security
configuration file is:
jdk.xml.dsig.secureValidationPolicy=\
disallowAlg http://www.w3.org/TR/1999/REC-xslt-19991116,\
disallowAlg http://www.w3.org/2001/04/xmldsig-more#rsa-md5,\
disallowAlg http://www.w3.org/2001/04/xmldsig-more#hmac-md5,\
disallowAlg http://www.w3.org/2001/04/xmldsig-more#md5,\
maxTransforms 5,\
maxReferences 30,\
disallowReferenceUriSchemes file http https,\
noDuplicateIds,\
noRetrievalMethodLoops
Please refer to the definition of the property in the java.security
file for more information.
A new jdk.security.jarsigner.JarSigner
API is added to the jdk.jartool
module which can be used to sign a jar file.
Besides "true" and "false", krb5.conf now also accepts "yes" and "no" for boolean-valued settings.
The krb5.conf file now supports including other files using either the "include FILENAME" or "includedir DIRNAME" directives. FILENAME or DIRNAME must be an absolute path. The named file or directory must exist and be readable. Including a directory includes all files within the directory whose names consist solely of alphanumeric characters, dashes, or underscores. An included file can include other files but no recursion is allowed.
Also, before this change, when the same setting for a single-valued option (for example, default_realm) was defined more than once in krb5.conf, the last value was chosen. After this change, the first value is chosen, for consistency with other krb5 vendors.
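A sketch of the new include directives and yes/no booleans in a krb5.conf file (all paths and the realm name are illustrative):

```
include /etc/krb5-site.conf
includedir /etc/krb5.conf.d

[libdefaults]
    default_realm = EXAMPLE.COM
    dns_lookup_kdc = yes
```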
If the javadoc @deprecated tag is used on an element that is not also deprecated with the @Deprecated annotation, the compiler now produces a warning to this effect by default.
The new warning can be suppressed either by adding the command line option -Xlint:-dep-ann to the javac command line or by using @SuppressWarnings("dep-ann") annotation (as with any other warning-suppressing annotation, it is always a good practice to add such an annotation as close to the member being deprecated as possible).
In a future version of Java SE, the compiler may no longer treat @deprecated javadoc tag as indicating formal deprecation.
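A sketch of the convention the dep-ann warning enforces (names are illustrative): pairing the @deprecated javadoc tag with the @Deprecated annotation keeps javac quiet.

```java
public class DepAnnDemo {
    /**
     * @deprecated Use {@link #replacement()} instead.
     */
    @Deprecated                  // matching annotation: no dep-ann warning
    public static int legacy() { return replacement(); }

    public static int replacement() { return 42; }

    public static void main(String[] args) throws Exception {
        // Unlike the javadoc tag, the annotation is visible at run time.
        System.out.println(
            DepAnnDemo.class.getMethod("legacy").isAnnotationPresent(Deprecated.class));
    }
}
```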
Provide an interactive tool to evaluate declarations, statements, and expressions of the Java programming language, together with an API so that other applications can leverage this functionality. Adds Read-Eval-Print Loop (REPL) functionality for Java.
The jshell
tool accepts "snippets" of Java code, evaluating them and immediately displaying the results. Snippets include variable and method declarations without enclosing class. An expression snippet immediately shows its value. The jshell
tool also accepts commands for displaying and controlling snippets.
The jshell
tool is built on the JShell API, making the evaluation of snippets of Java code available to any Java program.
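A minimal sketch of embedding the evaluator through the JShell API (the snippet text is illustrative):

```java
import jdk.jshell.JShell;
import jdk.jshell.SnippetEvent;

public class JShellApiDemo {
    public static void main(String[] args) {
        // Create an evaluation engine and run a snippet programmatically.
        try (JShell shell = JShell.create()) {
            for (SnippetEvent e : shell.eval("2 + 2")) {
                System.out.println(e.status() + " -> " + e.value());
            }
        }
    }
}
```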
See:
The java
launcher now supports reading arguments from "argument files" specified on the command line. It is not uncommon that the java
launcher is invoked with very long command lines (a long class path, for example). Many operating systems impose a limit on the length of a command line; argument files can be used to work around that limit.
In JDK 9, the java launcher can read arguments from the specified files as if they had been placed on the command line. See the java command reference and java Command-Line Argument Files for more details.
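A sketch of the mechanism (file name, paths, and class name are illustrative):

```
# contents of the argument file "opts"
-cp lib/app.jar:lib/util.jar
-Xmx2g
com.example.Main

# invocation: expands to the options above
java @opts
```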
JDK 9 supports a new environment variable JDK_JAVA_OPTIONS
to prepend options to those specified on the command line. The new environment variable has several advantages over the legacy/unsupported _JAVA_OPTIONS
environment variable including the ability to include java
launcher options and @file
support. The new environment variable may also be useful when migrating from JDK 8 to JDK 9 for cases where new command line options (that are not supported by JDK 8) are needed.
For more details, see java launcher reference guide.
Java SE 9 improves the javax.xml.xpath
API with new APIs that make use of modern language features to facilitate ease of use and extend support of the XPath specification.
javax.xml.xpath
supported explicit data types defined by the XPath specification. However, it was missing the important ANY
type, without which the XPath API assumes that an explicit type is always known, which is not true in some circumstances. The new API now supports the ANY
type so that an XPath evaluation can be performed when the return type is unknown.
For ease of use, four new
evaluateExpression
methods are added to the javax.xml.xpath.XPath
and javax.xml.xpath.XPathExpression
interfaces to allow specifying explicit types as follows:
When specified explicitly, the new methods return the specific types, including
Boolean
, Double
, Integer
, Long
, String
, and org.w3c.dom.Node
.
When the return type is expected to be
NODESET
, the new methods will return a new XPathNodes
type. XPathNodes
is a new interface that extends Iterable<Node>
, which makes it easier to use than the traditional org.w3c.dom.NodeList
.
When the return type is unknown or
ANY
, the new methods return a new XPathEvaluationResult
type. XPathEvaluationResult
provides an XPathResultType
enum that defines the supported types: ANY
, BOOLEAN
, NUMBER
, STRING
, NODESET
, and NODE
.
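The typed methods described above can be sketched as follows (document content and class name are illustrative):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import javax.xml.xpath.XPathNodes;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class XPathDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<root><item>a</item><item>b</item></root>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        XPath xp = XPathFactory.newInstance().newXPath();

        // Explicit return type: no casting from Object as with evaluate()
        Integer count = xp.evaluateExpression("count(//item)", doc, Integer.class);
        System.out.println(count);                    // 2

        // NODESET results as the new Iterable-friendly XPathNodes
        XPathNodes items = xp.evaluateExpression("//item", doc, XPathNodes.class);
        for (Node n : items) {
            System.out.println(n.getTextContent());   // a, then b
        }
    }
}
```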
Java SE 9 introduces a standard XML Catalog API that supports the OASIS XML Catalogs version 1.1 standard. The API defines catalog and catalog-resolver abstractions that can be used as an intrinsic or external resolver with the JAXP processors that accept resolvers.
Existing libraries or applications that use the internal catalog API should consider migrating to the new API in order to take advantage of the new features.
A new property "maxXMLNameLimit" is added to limit the maximum size of XML names, including element name, attribute name and namespace prefix and URI. It is recommended that users set the limit to the smallest possible number so that malformed XML files can be caught quickly. For more about XML processing limits, please see The Java Tutorials, Processing Limits.
All methods that refer to types defined in the java.awt.peer and java.awt.dnd.peer packages (the "peer types") were removed from the Java API in Java SE 9. Application code that calls any method accepting or returning a type defined in these packages will no longer link. This is a BINARY incompatible change.
Additional information is provided here: http://mail.openjdk.java.net/pipermail/awt-dev/2015-February/008924.html
com.sun.image.codec.jpeg has been shipped as a non-standard API since JDK 1.2. It was always advertised as a stop-gap measure until a proper standard equivalent was provided. That replacement (javax.imageio) has been available since JDK 1.4. As a result, JDK 9 finally removes the long-deprecated com.sun.image.codec.jpeg
API, which has been flagged as intended for removal for several releases. Applications that still depend on it will need to be re-coded in order to run on JDK 9.
The public static constant JFrame.EXIT_ON_CLOSE was removed in favor of WindowConstants.EXIT_ON_CLOSE.
The default java.policy
no longer grants stopThread
runtime permission in JDK 9.
In previous releases, untrusted code had the stopThread
runtime permission by default. This allows untrusted code to call Thread::stop
(on threads other than the current one). Having an arbitrary exception thrown asynchronously is not something that trusted code should be expected to handle gracefully, so this permission is removed by default in JDK 9. The following line is deleted from the file conf/security/java.policy
: permission java.lang.RuntimePermission "stopThread";
The system property sun.lang.ClassLoader.allowArraySyntax
was introduced as a temporary workaround to give customers time to remove their source dependency on calling ClassLoader.loadClass
with the array syntax that is not supported since JDK 6. This temporary workaround is removed in JDK 9 and setting sun.lang.ClassLoader.allowArraySyntax
system property has no effect on ClassLoader.loadClass
. Existing code that calls ClassLoader.loadClass
to create a Class
object of an array class should be replaced with Class.forName
; otherwise it will get ClassNotFoundException
.
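The difference can be sketched as follows (class name is illustrative):

```java
public class ArrayClassDemo {
    public static void main(String[] args) throws Exception {
        // Class.forName understands JVM array-class descriptors ...
        Class<?> c = Class.forName("[Ljava.lang.String;");
        System.out.println(c.isArray());              // true

        // ... whereas ClassLoader.loadClass has rejected them since JDK 6
        try {
            ClassLoader.getSystemClassLoader().loadClass("[Ljava.lang.String;");
        } catch (ClassNotFoundException expected) {
            System.out.println("loadClass rejects array syntax");
        }
    }
}
```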
The netdoc
protocol handler has been removed in JDK 9. Code that attempts to construct a java.net.URL
with the netdoc
protocol, for example "netdoc:http://foo.com/index.html" will throw a MalformedURLException
.
The netdoc
protocol was used to point to network documents either on the local file system or externally through an HTTP URL. This capability is essentially defunct and is not supported by Safari, Firefox, and other major browsers.
The lib/content-types.properties
file has been removed from the Java run-time image. The lib/content-types.properties
file contained the default MIME content-types table, used to map content types to file extensions, and was used primarily by the URLConnection API. The lib/content-types.properties
file was never intended to be user editable. Instead there is a system property, content.types.user.table
, that allows users to define their own content types.
See JDK-8039362, for further details on the use of content.types.user.table
.
Previous JDK releases documented how to configure java.net.InetAddress
to use the JNDI DNS service provider as the name service. This mechanism, and the system properties to configure it, have been removed in JDK 9.
A new mechanism to configure the use of a hosts file has been introduced.
A new system property jdk.net.hosts.file
has been defined. When this system property is set, the name and address resolution calls of InetAddress
, i.e., getByXXX
, retrieve the relevant mapping from the specified file. The structure of this file is equivalent to that of the /etc/hosts
file.
When the system property jdk.net.hosts.file
is set and the specified file doesn't exist, the name or address lookup will result in an UnknownHostException; a non-existent hosts file is thus handled as if the file were empty.
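A sketch of such a file (path and host names are illustrative; the layout matches /etc/hosts):

```
# selected with -Djdk.net.hosts.file=/path/to/test-hosts
# address followed by host names
127.0.0.1   app.test.example
::1         app.test.example
```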
The poll based SelectorProvider sun.nio.ch.PollSelectorProvider
has been removed in JDK 9. It has been superseded for several releases by improved or higher performance implementations on all supported platforms.
The mechanism of proxying RMI requests through HTTP, which was deprecated in Java SE 8, has been removed in Java SE 9. This mechanism used a web CGI script called java-rmi.cgi
. This script has also been removed. The default mechanism for transmitting RMI requests is now simply a direct socket connection.
The deprecated addPropertyListener
and removePropertyListener
methods have been removed from java.util.jar.Pack200.Packer
and java.util.jar.Pack200.Unpacker
. Applications that need to monitor progress of a packer or unpacker should poll the value of the PROGRESS
property instead.
The zip library implementation has been improved in JDK 9. The new java.util.zip.ZipFile implementation no longer uses mmap to map the ZIP file central directory into memory. As a result, the sun.zip.disableMemoryMapping
system property is no longer needed and has been removed.
The deprecated addPropertyListener
and removePropertyListener
methods have been removed from java.util.logging.LogManager
. Code that relies on a listener to be invoked when logging configuration changes should use the new addConfigurationListener
and removeConfigurationListener
methods instead.
javax.naming.Context.APPLET
has been deprecated. If the environment specified when creating an InitialContext
contains Context.APPLET
then it is ignored. Applets with JNDI configuration in applet parameters should use the Applet.getParameter(String)
method to read the parameters and use the values to create the JNDI context.
The ability to provide subclasses of jdk.nashorn.internal.runtime.CodeStore through the java.util.ServiceLoader API has been removed in JDK 9.
The methods monitorEnter, monitorExit and tryMonitorEnter on sun.misc.Unsafe are removed in JDK 9. These methods are not used within the JDK itself and are very rarely used outside of the JDK.
The following unsupported APIs are removed:
com.sun.tracing
com.sun.tracing.dtrace
The Serviceability Agent (SA) Core and PID debugger Connectors have been removed in this release. It is no longer possible to use a Java Debugger to attach to a core file or process with the SA mechanism.
The JMX RMIConnector only supports the JRMP transport in JDK 9. Support for the optional IIOP transport has been removed in this release.
The native2ascii tool has been removed in JDK 9. JDK 9 supports UTF-8 based properties resource bundles (see JEP 226), and conversion of UTF-8 based properties resource bundles to ISO-8859-1 is no longer needed.
management-agent.jar has been removed. Tools that have been using the Attach API to load this agent into a running VM should be aware that the Attach API has been updated in JDK 9 to define two new methods for starting a management agent:
com.sun.tools.attach.VirtualMachine.startManagementAgent(Properties agentProperties)
com.sun.tools.attach.VirtualMachine.startLocalManagementAgent()
The experimental/unsupported jhat
tool has been removed.
The serialver -show
option has been removed in this release.
The extcheck
tool has been removed in this release.
The experimental rmic -Xnew
option has been disabled for this release.
Support for serialized applets has been removed. The "OBJECT" attribute of the <APPLET>
tag and "object" and "java_object" applet parameter tags will no longer be recognized during applet launching, and will be ignored.
The -XX:SafepointPollOffset
flag has been removed because it was introduced only to reproduce a problem with the C1 compiler and is no longer needed.
The -XX:BackEdgeThreshold
flag has been removed because it is no longer supported. Users now need to use -XX:OnStackReplacePercentage
instead.
The -XX:EnableInvokeDynamic
flag has been removed because the VM no longer supports execution without invokedynamic.
The -XX:+Use486InstrsOnly
flag has been removed because it is no longer supported.
Per-thread compiler performance counters have been removed because they became obsolete in the presence of more fine-grained and precise compilation events. The corresponding interface in sun.management.*
has been deprecated since it will no longer provide information without the performance counters. Users can get similar or more fine-grained information via global performance counters, the event tracing API (JFR) or -XX:+PrintCompilation
.
These internal command line flags, which have been deprecated or aliased since JDK 6, have been removed:
CMSParPromoteBlocksToClaim, ParCMSPromoteBlocksToClaim, ParallelGCOldGenAllocBufferSize, ParallelGCToSpaceAllocBufferSize, UseGCTimeLimit, CMSPermGenSweepingEnabled, ResizeTLE, PrintTLE, TLESize, UseTLE, MaxTLERatio, TLEFragmentationRatio, TLEThreadRatio
In addition to this, these internal flags have been deprecated:
CMSMarkStackSizeMax, ParallelMarkingThreads, ParallelCMSThreads, CMSMarkStackSize, G1MarkStackSize
The GC combinations that were deprecated in JDK 8 have now been removed. This means that the following GC combinations no longer exist:
The command line flags that were removed are: -Xincgc, -XX:+CMSIncrementalMode, -XX:+UseCMSCompactAtFullCollection, -XX:+CMSFullGCsBeforeCompaction, and -XX:+UseCMSCollectionPassing.
The command line flag -XX:+UseParNewGC
no longer has any effect. ParNew can only be used with CMS and CMS requires ParNew. Thus, the -XX:+UseParNewGC
flag has been deprecated and will likely be removed in a future release.
The VM options -XX:AdjustConcurrency and -XX:PrintJVMWarnings have been removed in JDK 9.
The VM option -XX:AdjustConcurrency
was only needed on Solaris 8/9 (when using the T1 threading library).
The VM option -XX:PrintJVMWarnings
was a development option only used by unimplemented VM functions that have themselves been removed in JDK 9.
On Oracle Solaris, the JDK and JRE no longer have an ISA (Instruction Specific Architecture) bin directory. The $JAVA_HOME/bin/sparcv9
and $JAVA_HOME/bin/amd64
directories, and the sym links in the directories, were present in JDK 8 to aid migration after 32-bit support was removed. Scripts or applications that rely on these locations should be updated to use $JAVA_HOME/bin
.
The lib/$ARCH directory, which used to contain native-code shared objects (.so files) for the VM and the libraries, has been removed, and its contents have moved up one level into the lib/ directory.
Several deprecated and undocumented "impl_*" methods have been removed from JDK 9.
In prior releases, many public JavaFX classes in exported packages had public or protected implementation methods that were named with "impl_*" in the name, marked as "@Deprecated" with the stated intention of removing them, and hidden from the API documentation with the "@treatAsPrivate" javadoc tag.
These methods were never supported and were not intended to be used by applications. JavaFX applications that were using these undocumented methods will need to stop calling them.
The JavaFX builder classes, which were previously deprecated in JDK 8 with the stated intention to remove them, have been removed from JDK 9. JavaFX applications that use the builder classes should instead construct the needed scene graph objects directly and set the desired properties with the equivalent method calls.
The com.apple.concurrent.Dispatch
API was a Mac-only API and was carried into JDK 7u4 with the port of Apple's JDK 6 code. This seldom-used and unsupported API has been removed in JDK 9. Developers are encouraged to use the standard java.util.concurrent.Executor
and java.util.concurrent.ExecutorService
APIs instead.
The AppleScript engine implementing the javax.script engine API has been removed without a replacement. The AppleScript engine worked inconsistently: its services configuration (META-INF/services) file was missing, and it only worked by accident when installing JDK 7 or JDK 8 on systems that already had Apple's version of AppleScriptEngine.jar on the system.
The JDK-specific annotation @jdk.Exported
has been removed in JDK 9. The information that @jdk.Exported
conveyed is now recorded in the exports declarations of modules. Tools that scan for this annotation should be updated to make use of the new API support in javax.lang.model
and java.lang.module
.
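A minimal sketch of the replacement approach: instead of scanning classes for @jdk.Exported, a tool can read a module's exports declarations through the java.lang.module API. This example prints the unqualified exports of java.base.

```java
import java.lang.module.ModuleDescriptor;

public class ExportsScan {
    public static void main(String[] args) {
        // Every package exported without qualification from java.base is
        // "exported API" in the sense that @jdk.Exported used to convey.
        ModuleDescriptor base = ModuleLayer.boot()
                .findModule("java.base")
                .get()
                .getDescriptor();
        base.exports().stream()
                .filter(e -> !e.isQualified())
                .map(ModuleDescriptor.Exports::source)
                .sorted()
                .forEach(System.out::println);
    }
}
```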
The javax.crypto.ExemptionMechanism.finalize() method has been removed from both the specification and the implementation.
The com.sun.security.auth.callback.DialogCallbackHandler
class has been removed. This class, which is in the JDK-specific extensions to JAAS, was deprecated in JDK 8 and previously flagged for removal.
The Launch-Time JRE Version Selection functionality, also known as Multiple JRE or mJRE, is no longer available with the java launcher. This means the java launcher will not invoke another JRE version, and will exit with an error.
The presence of "-version:x.y.z", "-jre-restrict-search" and "-jre-no-restrict-search" on the java launcher's command-line will cause it to exit with an error message. The environment variable "JRE_VERSION_PATH" will be ignored.
The Java Archive (jar) manifest entry "JRE-version" will cause the java launcher to emit a warning, and "JRE-Restrict-Search" will be ignored.
Visual VM is a tool that provides information about code running on a Java Virtual Machine. It was provided with Oracle JDK 6, Oracle JDK 7, and Oracle JDK 8.
Starting with JDK 9, the tool (jvisualvm) is no longer included in Oracle JDK. Users can still download the tool from the official project website, https://visualvm.github.io.
The AppletViewer tool was deprecated as part of "JEP C161: Deprecate the Java Plug-in", and its use isn't recommended.
For more information about AppletViewer, see: appletviewer
The method sun.misc.Unsafe.defineClass
is deprecated for removal. Use the method java.lang.invoke.MethodHandles.Lookup.defineClass
to define a class to the same class loader and in the same runtime package and protection domain of a given Lookup
's lookup class.
Classes Boolean, Byte, Short, Character, Integer, Long, Float, and Double are "box" classes that correspond to primitive types. The constructors of these classes have been deprecated.
Given a value of the corresponding primitive type, it is generally unnecessary to construct new instances of these box classes. The recommended alternatives to construction are autoboxing or the valueOf
static factory methods. In most cases, autoboxing will work, so an expression whose type is a primitive can be used in locations where a box class is required. This is covered in the Java Language Specification, section 5.1.7, "Boxing Conversion." For example, given List<Integer> intList
, the code to add an Integer
might be as follows:
intList.add(new Integer(347));
This can be replaced with:
intList.add(347);
Autoboxing should not be used in places where it might affect overload resolution. For example, there are two overloads of the List.remove
method:
List.remove(int i) // removes the element at index i
List.remove(Object obj) // removes an element equal to obj
The code to remove the Integer
value 347 might be as follows:
intList.remove(new Integer(347));
If this code is changed in an attempt to use autoboxing:
intList.remove(347);
This will not remove the Integer
value 347, but instead it will resolve to the other overloaded method, and it will attempt to remove the element at index 347.
Autoboxing cannot be used in such cases. Instead, code should be changed to use the valueOf
static factory method:
intList.remove(Integer.valueOf(347));
Autoboxing is preferable from a readability standpoint, but a safer transformation is to replace calls to the box constructors with calls to the valueOf
static factory method.
Using autoboxing or the valueOf
method reduces memory footprint compared to the constructors, as the integral box types will generally cache and reuse instances corresponding to small values. The special case of Boolean
has static fields for the two cached instances, namely Boolean.FALSE
and Boolean.TRUE
.
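The caching behavior described above can be demonstrated directly; the Java Language Specification guarantees a cache for values in at least the range -128 to 127, so repeated valueOf calls for small values return the same instance.

```java
public class BoxCaching {
    public static void main(String[] args) {
        // Integer.valueOf caches instances for small values, so repeated
        // calls return the same object.
        Integer a = Integer.valueOf(100);
        Integer b = Integer.valueOf(100);
        System.out.println(a == b);   // same cached instance

        // Boolean has exactly two cached instances.
        System.out.println(Boolean.valueOf(true) == Boolean.TRUE);

        // Autoboxing compiles to valueOf, so it benefits from the cache too.
        Integer c = 100;
        System.out.println(a == c);
    }
}
```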
With the exception of Character
, the box classes also have constructors that take a String
argument. These parse and convert the string value and return a new instance of the box class. A valueOf
overload taking a String
is the equivalent static factory method for this constructor. Usually it's preferable to call one of the parse
methods (Integer.parseInt
, Double.parseDouble
, etc.) which convert the string and return primitive values instead of boxed instances.
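A short illustration of the difference: parse methods return primitives while valueOf(String) returns a boxed instance, replacing the deprecated String-taking constructors.

```java
public class ParseVsValueOf {
    public static void main(String[] args) {
        // parseXxx returns a primitive: no box object is needed at all.
        int i = Integer.parseInt("347");
        double d = Double.parseDouble("2.5");

        // valueOf(String) returns a (possibly cached) box instance, and is
        // the static-factory replacement for the deprecated new Integer("347").
        Integer boxed = Integer.valueOf("347");

        System.out.println(i + " " + d + " " + boxed);
    }
}
```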
The java.lang.Object.finalize
method has been deprecated. The finalization mechanism is inherently problematic and can lead to performance issues, deadlocks, and hangs. The java.lang.ref.Cleaner
and java.lang.ref.PhantomReference
classes provide more flexible and efficient ways to release resources when an object becomes unreachable. For further information, see the java.lang.Object.finalize
method specification.
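A minimal sketch of replacing a finalizer with java.lang.ref.Cleaner. The Resource class here is hypothetical; the key points are that the cleaning action must not capture the tracked object itself, and that close() can run it deterministically via Cleanable.clean().

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicBoolean;

public class CleanerSketch {
    private static final Cleaner CLEANER = Cleaner.create();

    static class Resource implements AutoCloseable {
        private final Cleaner.Cleanable cleanable;

        Resource(AtomicBoolean released) {
            // The cleaning action must not capture 'this', or the Resource
            // would never become phantom reachable.
            AtomicBoolean flag = released;
            this.cleanable = CLEANER.register(this, () -> flag.set(true));
        }

        @Override
        public void close() {
            // Deterministic release; also cancels the GC-triggered cleanup.
            cleanable.clean();
        }
    }

    public static void main(String[] args) {
        AtomicBoolean released = new AtomicBoolean(false);
        try (Resource r = new Resource(released)) {
            // use the resource
        }
        System.out.println("released: " + released.get());   // released: true
    }
}
```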
The policytool
security tool is deprecated in JDK 9. It will be removed in a future release.
The ability to double-jar (jarjar) a set of class files in Java deployment technologies has been deprecated. The following warning will be issued if a jarjar file is downloaded:
"WARNING: A jarjar file has been loaded. Jarjar files are deprecated and will be removed in a future Java release. This application may not function properly in the future. Jarjar file URL: {URL}"
Java Applet and Web Start functionality, including the Applet API, the Java plug-in, the Java Applet Viewer, JNLP, and Java Web Start (including the javaws tool), are all deprecated in JDK 9 and will be removed in a future release.
The '-makeall' argument of the Java Packager's command line interface has been deprecated. Use of '-makeall' will result in a warning. In lieu of the '-makeall' command, independent commands should be issued to perform the compilation, createjar, and deploy steps.
The com.sun.java.browser.plugin2.DOM and sun.plugin.dom.DOMObject APIs have been deprecated and will be removed in a future release of Java. Applications can use netscape.javascript.JSObject to manipulate the DOM.
The flag -XX:ExplicitGCInvokesConcurrentAndUnloadsClasses
has been deprecated and will be removed in a future release. A user can enable the same functionality by setting the two flags -XX:+ExplicitGCInvokesConcurrent
and -XX:+ClassUnloadingWithConcurrentMark
.
This option was deprecated in JDK 9, along with the -XX:AutoGCSelectPauseMillis option.
The CMS garbage collector was deprecated in JDK 9. For more information, see -XX:+UseConcMarkSweepGC
This option was deprecated in JDK 9, following the deprecation of the -XX:+UseAutoGCSelectPolicy option.
The VM Option "-Xprof" is deprecated in JDK 9 and will be removed in a future release. The option provides some profiling data for code being executed to standard output. Better data can be gathered with other tools such as Java Flight Recorder and therefore "-Xprof" will no longer be available in a future release.
The HostServices.getWebContext method is deprecated in JDK 9 and is marked as forRemoval=true indicating that it will be removed in a future version of the JDK. Applets are deprecated in JDK 9, and this method is only used when running an FX application as an Applet in a browser.
Support for the VP6 video encoding format and the FXM/FLV container is deprecated in JavaFX Media and will be removed in a future release. Users are encouraged to use H.264/AVC1 in an MP4 container, or HTTP Live Streaming, instead.
The java.security.acl API has been deprecated. The classes in this package should no longer be used. The java.security package contains suitable replacements. See Policy and related classes for details.
The following pre-1.2 deprecated java.lang.SecurityManager methods and fields have been marked with forRemoval=true: the inCheck field, and the getInCheck, classDepth, classLoaderDepth, currentClassLoader, currentLoadedClass, inClass, and inClassLoader methods. This field and these methods should no longer be used and are subject to removal in a future version of Java SE.
The com.sun.jarsigner
package is now deprecated. This includes the ContentSigner
class, the ContentSignerParameters
interface, and the jarsigner command's "-altsigner" and "-altsignerpath" options.
The classes and interfaces in the java.security.acl
and javax.security.cert
packages have been superseded by replacements for a long time and are deprecated in JDK 9. Two methods javax.net.ssl.HandshakeCompletedEvent.getPeerCertificateChain()
and javax.net.ssl.SSLSession.getPeerCertificateChain()
are also deprecated since they return the javax.security.cert.X509Certificate
type.
The javax.security.cert API has been deprecated. The classes in this package should no longer be used. The java.security.cert package contains suitable replacements.
The java.net.ssl.HandshakeCompletedEvent.getPeerCertificateChain and java.net.ssl.SSLSession.getPeerCertificateChain methods have been deprecated. New applications should use the getPeerCertificates method instead.
The standard doclet is the doclet in the JDK that produces the default HTML-formatted API output. The version that was available in previous releases (com.sun.tools.doclets.standard.Standard) has been replaced by a new version (jdk.javadoc.doclet.Standard). The old version is now deprecated and is subject to removal in a future version of Java SE. For more details, see JEP 221. For more details on the new Doclet API, see the jdk.javadoc module.
The java launcher's data model switches, -d32 and -d64, were used primarily on Solaris platforms. With the removal of the 32-bit JDK/JRE on Solaris in JDK 8, these options are now obsolete and will be removed in a future release, after which the launcher will fail with an invalid option error.
The value of the static final int field java.awt.font.OpenType.TAG_OPBD was incorrect: it erroneously used the same value as TAG_MORT (0x6D6F7274UL), and it has been changed to the correct value, 0x6F706264UL.
Although this is strictly an incompatible binary change, the likelihood of any practical impact on applications is near zero. The opbd table is used only in AAT fonts (https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6opbd.html), which are likely to be extremely rare in the wild, as they are natively understood only by macOS and iOS. This table is not critical to the rendering of text by Java or anything else; as such, nothing goes looking for the table and nothing inside the JDK utilizes any part of this class.
The JDK does not provide a way to directly utilize these values. No Java API currently exists that accepts them and the class can not become useful unless an additional Java API is added.
Even if an application were to use it by passing the Java field's value to some custom native code to look up a table, the lookup is likely to return "null" both before and after this change: a representative sampling of 6 OS X fonts found none of them to have either table.
The lifecycle management of AWT menu components exposed problems on certain platforms. This fix improves state synchronization between menus and their containers.
There are some platforms like Mac OS X 10.11 that may not support showing the user-specified title in a file dialog.
The following description is added to the java.awt.FileDialog class constructors and setTitle(String) method: "Note: Some platforms may not support showing the user-specified title in a file dialog. In this situation, either no title will be displayed in the file dialog's title bar or, on some systems, the file dialog's title bar will not be displayed".
Three static fields exposing event listener instances, whose types are internal and whose intended use was internal, are now private. These are very unlikely to have been used by many applications, as until recently they were shipped only as part of an unbundled component.
Since Java SE 1.4 javax.imageio.spi.ServiceRegistry
provided a facility roughly equivalent to the Java SE 1.6 java.util.ServiceLoader
. This image i/o facility is now restricted to supporting SPIs defined as part of javax.imageio
. Applications which use it for other purposes need to be re-coded to use ServiceLoader
.
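A minimal sketch of the required recoding, using a hypothetical MyCodec service interface (any non-image-I/O SPI that was previously looked up via ServiceRegistry). The ServiceLoader mechanism uses the same META-INF/services provider-configuration files.

```java
import java.util.ServiceLoader;

public class ServiceLoaderMigration {
    // A hypothetical non-image-I/O SPI that might previously have been
    // looked up via javax.imageio.spi.ServiceRegistry.lookupProviders(...).
    public interface MyCodec {
        String name();
    }

    public static void main(String[] args) {
        // The ServiceLoader equivalent; providers are still declared in
        // META-INF/services/<fully.qualified.MyCodec> (or in module-info).
        ServiceLoader<MyCodec> codecs = ServiceLoader.load(MyCodec.class);
        for (MyCodec codec : codecs) {
            System.out.println("found provider: " + codec.name());
        }
        // With no providers on the class path, the loop body never runs.
    }
}
```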
The MouseWheelEvent.getWheelRotation()
method returned rounded native NSEvent deltaX/Y values on Mac OS X. The latest macOS Sierra 10.12 produces very small NSEvent deltaX/Y values, so rounding and summing them led to huge values being returned from MouseWheelEvent.getWheelRotation(). The JDK-8166591 fix accumulates the NSEvent deltaX/Y values, and the MouseWheelEvent.getWheelRotation() method now returns a non-zero value only when the accumulated value exceeds a threshold, and zero otherwise. This is compliant with the MouseWheelEvent.getWheelRotation() specification:
https://docs.oracle.com/javase/8/docs/api/java/awt/event/MouseWheelEvent.html#getWheelRotation--
Returns the number of "clicks" the mouse wheel was rotated, as an integer. A partial rotation may occur if the mouse supports a high-resolution wheel. In this case, the method returns zero until a full "click" has been accumulated.
For the precise wheel rotation values, use the MouseWheelEvent.getPreciseWheelRotation()
method instead.
The focus behavior of Swing toggle button controls (JRadioButton and JCheckBox) changed when they belonged to a button group. Now, if the input focus is requested to any toggle button in the group through either focus traversal or window activation, the currently selected toggle button is focused regardless of the focus traversal policy used in the container. If the selected toggle button is not eligible to be a focus owner, the focus is set according to the focus traversal policy.
The ProgressMonitor dialog can be closed in the following ways:
If the ProgressMonitor dialog is closed, the ProgressMonitor.isCanceled() method used to return 'true' only in cases (1) and (2) above. This fix corrects the behavior so that ProgressMonitor.isCanceled() also returns 'true' when the ProgressMonitor dialog is closed by pressing the Escape key.
This fix has a low compatibility impact: it may affect user code that (incorrectly) assumes ProgressMonitor.isCanceled() will return false even if the ProgressMonitor dialog is closed as a result of pressing the Escape key. Also, with this change, there is now no way to get the ProgressMonitor dialog out of the way while having progress continue.
Some applications have used core reflection to instantiate JDK internal Swing L&Fs, i.e. system L&Fs such as the Windows L&F: Class.forName("com.sun.java.swing.plaf.windows.WindowsLookAndFeel")
These classes are internal to the JDK and applications should have always treated them as such.
As of JDK 9 whether these are accessible to applications depends on the configuration of the Java Platform Module System and the value of the --illegal-access setting. By default in JDK 9 its value is "permit", but this is expected to change to "deny" in a future release.
Applications which need to create a system L&F must migrate to the new method javax.swing.UIManager.createLookAndFeel(String name).
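A minimal sketch of the migration. "Metal" is used here because it is available on every platform; a system L&F name such as "Windows" would only resolve on that platform.

```java
import javax.swing.LookAndFeel;
import javax.swing.UIManager;

public class CreateLafSketch {
    public static void main(String[] args) throws Exception {
        // Instead of Class.forName("com.sun.java.swing.plaf.windows...")
        // followed by newInstance(), ask UIManager by the L&F's short name.
        LookAndFeel laf = UIManager.createLookAndFeel("Metal");
        System.out.println(laf.getName());
        UIManager.setLookAndFeel(laf);
    }
}
```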
The java.io
classes CharArrayReader
, PushbackReader
, and StringReader
might now block in close()
if there is another thread holding the Reader.lock
lock.
The read()
method of these classes could previously throw a NullPointerException
if the internal state of the instance had become inconsistent. This was caused by a race condition due to close()
not obtaining a lock before modifying the internal state of the Reader
. This lock is now obtained which can result in close()
blocking if another thread simultaneously holds the same lock on the Reader
.
Prior to JDK 9, creating a FilePermission object canonicalized its pathname, and the implies and equals methods were based on this canonicalized pathname. For example, if "file" and "/path/to/current/directory/file" point to the same file in the file system, two FilePermission objects from these pathnames are equal and imply each other if their actions are also the same.
In JDK 9, the pathname will not be canonicalized by default. This means two FilePermission objects will not equal each other if one uses an absolute path and the other a relative path, or one uses a symbolic link and the other the target, or one uses a Windows long name and the other a DOS-style 8.3 name, even if they point to the same file in the file system.
A compatibility layer has been added to ensure that granting a FilePermission for a relative path will still permit applications to access the file with an absolute path (and vice versa). This works for the default Policy provider and the limited doPrivileged (http://openjdk.java.net/jeps/140) calls. For example, although a FilePermission on a file with a relative pathname of "a" no longer implies a FilePermission on the same file with an absolute pathname of "/pwd/a" (suppose "pwd" is the current working directory), granting code a FilePermission to read "a" allows that code to also read "/pwd/a" when a Security Manager is enabled. This compatibility layer does not cover translations between symbolic links and targets, or Windows long names and DOS-style 8.3 names, or any other different name forms that can be canonicalized to the same name.
A system property named jdk.io.permissionsUseCanonicalPath has been introduced. When it is set to "true", FilePermission will canonicalize its pathname as it did before JDK 9. The default value of this property is "false".
Another system property named jdk.security.filePermCompat has also been introduced. When set to "true", the compatibility layer described above will also apply to third-party Policy implementations. The default value of this property is "false".
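The default (non-canonicalizing) behavior can be observed directly; assuming the current working directory is not the file system root, a relative and an absolute name for the same file now compare as different permissions.

```java
import java.io.File;
import java.io.FilePermission;

public class FilePermCompare {
    public static void main(String[] args) {
        FilePermission relative = new FilePermission("a", "read");
        FilePermission absolute =
                new FilePermission(new File("a").getAbsolutePath(), "read");

        // Since JDK 9 the pathname is not canonicalized by default, so these
        // differ even though both names denote the same file.
        System.out.println("equals:  " + relative.equals(absolute));
        System.out.println("implies: " + relative.implies(absolute));
    }
}
```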
Class.getSimpleName() was changed to use the name recorded in the InnerClasses attribute of the class file. This change may affect applications which generate custom bytecode with incomplete or incorrect information recorded in the InnerClasses attribute.
This enhancement changes phantom references to be automatically cleared by the garbage collector, just like soft and weak references.
An object becomes phantom reachable after it has been finalized. This change may cause phantom reachable objects to be GC'ed earlier; previously the referent was kept alive until the PhantomReference object itself was GC'ed or cleared by the application. This potential behavioral change might only impact existing code that depends on when a PhantomReference is enqueued rather than on when the referent is freed from the heap.
The deprecated checkTopLevelWindow, checkSystemClipboardAccess, and checkAwtEventQueueAccess methods in java.lang.SecurityManager have been changed to check AllPermission; they no longer check AWTPermission. Libraries that invoke these SecurityManager methods to do permission checks may require users of the library to change their policy files.
The spec of the following java.lang.ClassLoader
methods for locating a resource by name are updated to throw NullPointerException
when the specified name is null:
getResource(String)
getResourceAsStream(String)
getResources(String)
Custom class loader implementations that override these methods should be updated accordingly to conform to this spec.
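The specified behavior can be demonstrated with a short check against the system class loader:

```java
public class NullResourceName {
    public static void main(String[] args) {
        ClassLoader cl = ClassLoader.getSystemClassLoader();
        try {
            cl.getResource(null);
            System.out.println("no exception (pre-JDK 9 behavior)");
        } catch (NullPointerException expected) {
            System.out.println("NullPointerException, as specified in JDK 9");
        }
    }
}
```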
The java.lang.ref.Reference.enqueue method clears the reference object before it is added to the registered queue. In JDK 9, when the enqueue method is called, the reference object is cleared and its get() method will return null.
Typically, when a reference object is enqueued, it is expected that the reference object is cleared explicitly via the clear method to avoid a memory leak, because its referent is no longer referenced. In other words, the get method is not expected to be called in common cases once the enqueue method is called. In the case where the get method is called on an enqueued reference object and existing code attempts to access members of the referent, NullPointerException may be thrown. Such code will need to be updated.
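The new behavior is deterministic and easy to observe; a strong reference to the referent is held throughout, so the clearing is done by enqueue() itself, not by the garbage collector.

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class EnqueueClears {
    public static void main(String[] args) {
        Object referent = new Object();   // strong reference kept alive
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        WeakReference<Object> ref = new WeakReference<>(referent, queue);

        System.out.println("before enqueue: " + (ref.get() != null));
        boolean added = ref.enqueue();    // explicitly enqueue
        System.out.println("enqueued:       " + added);
        // Since JDK 9, enqueue() clears the reference first:
        System.out.println("after enqueue:  " + ref.get());
        System.out.println("polled same:    " + (queue.poll() == ref));
    }
}
```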
The internal package sun.invoke.anon has been removed. The functionality it used to provide, namely anonymous class loading with possible constant pool patches, is available via the Unsafe.defineAnonymousClass() method.
A behavioural change has been made to class java.lang.invoke.LambdaMetafactory
so that it is no longer possible to construct an instance. This class only has static methods to create "function objects" (commonly utilized as bootstrap methods) and should not be instantiated. The risk of source and binary incompatibility is very low; analysis of existing code bases found no instantiations.
The invokedynamic
byte code instruction is no longer specified by the Java Virtual Machine Specification to wrap any Throwable
thrown during linking in java.lang.invoke.BootstrapMethodError
, which is then thrown to the caller.
If during linking an instance of Error, or a subclass thereof, is thrown, then that Error is no longer wrapped and is thrown directly to the caller. Any other instance of Throwable, or a subclass thereof, is still wrapped in java.lang.invoke.BootstrapMethodError.
This change in behaviour ensures that errors such as OutOfMemoryError
or ThreadDeath
are thrown unwrapped and may be acted on or reported directly, thereby enabling more uniform replacement of byte code with an invokedynamic
instruction whose call site performs the same functionality as the replaced byte code (and may throw the same errors).
The method java.lang.invoke.MethodHandles.bind
has been fixed to correctly obey the access rules when binding a receiver object to a protected
method.
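For context, a short example of what bind does: it looks up a virtual method and pre-binds the receiver, yielding a handle that takes no leading receiver argument. (The access-rule fix concerns protected methods; a public method is used here so the example is self-contained.)

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class BindSketch {
    public static void main(String[] args) throws Throwable {
        // Bind the receiver "hello" to String.length(), producing a
        // handle of type ()int — no receiver argument remains.
        MethodHandle length = MethodHandles.lookup()
                .bind("hello", "length", MethodType.methodType(int.class));
        int n = (int) length.invokeExact();
        System.out.println(n);   // prints 5
    }
}
```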
The javadoc for the Class.getMethod and Class.getMethods methods refers to the definition of inheritance in the Java Language Specification. Java SE 8 changed these rules in order to support default methods and reduce the number of redundant methods inherited from superinterfaces (see JLS 8, 8.4.8).
Class.getMethod and Class.getMethods were not updated with the 8 release to match the new inheritance definition (both may return non-inherited superinterface methods). The implementation has now been changed to filter out methods that are not members of the class.
java.lang.reflect.Field.get(), Field.get{primitive}() and java.lang.reflect.Method.invoke() have been updated to use the primitive wrapper classes' valueOf() (for example Integer.valueOf()) instead of always creating new wrappers with "new" (for example new Integer()) after the reflection libraries have (potentially) optimised the Field/Method instance. This can affect applications that depended on two wrappers being != while still being .equals().
The behavior of getAnnotatedReceiverType()
has been clarified to return an empty AnnotatedType object only for a method/constructor which could conceptually have a receiver parameter but does not have one at present. (Since there is no receiver parameter, there are no annotations to return.) In addition, the behavior of getAnnotatedReceiverType() has been clarified to return null for a method/constructor which cannot ever have a receiver parameter (and therefore cannot have annotations on the type of a receiver parameter): static methods, and constructors of non-inner classes. Incompatibility: Behavioral
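The clarified behavior can be seen with any instance method versus any static method:

```java
import java.lang.reflect.AnnotatedType;
import java.lang.reflect.Method;

public class ReceiverTypeDemo {
    void instanceMethod() {}
    static void staticMethod() {}

    public static void main(String[] args) throws Exception {
        Method m = ReceiverTypeDemo.class.getDeclaredMethod("instanceMethod");
        Method s = ReceiverTypeDemo.class.getDeclaredMethod("staticMethod");

        // An instance method could conceptually have a receiver parameter,
        // so an (empty) AnnotatedType is returned:
        AnnotatedType receiver = m.getAnnotatedReceiverType();
        System.out.println("instance method: " + (receiver != null));

        // A static method can never have one, so the result is null:
        System.out.println("static method:   " + s.getAnnotatedReceiverType());
    }
}
```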
The exact toString output of an annotation is deliberately not specified; from java.lang.annotation.Annotation.toString():
Returns a string representation of this annotation. The details of the representation are implementation-dependent [...]
Previously, the toString format of an annotation did not output certain information in a way that would be usable as a source code representation of an annotation: string values were not surrounded by double quote characters, array values were surrounded by brackets ("[]") rather than braces ("{}"), etc.
As a behavioral change, the annotation output has been updated to be faithful to a source code representation of the annotation.
In Java SE 9 the requirement to support multicasting has been somewhat relaxed, in order to support a small number of platforms where multicasting is not available. The specification for the java.net.MulticastSocket::joinGroup
and the java.nio.channels.MulticastChannel::join
methods has been updated to indicate that an UnsupportedOperationException
will be thrown if invoked on a platform that does not support multicasting.
There is no impact to Oracle JDK platforms, since they do support multicasting.
In some environments certain authentication schemes may be undesirable when proxying HTTPS. Accordingly, the Basic
authentication scheme has been deactivated, by default, in the Oracle Java Runtime, by adding Basic
to the jdk.http.auth.tunneling.disabledSchemes
networking property in the net.properties file. Now, proxies requiring Basic
authentication when setting up a tunnel for HTTPS will no longer succeed by default. If required, this authentication scheme can be reactivated by removing Basic
from the jdk.http.auth.tunneling.disabledSchemes
networking property, or by setting a system property of the same name to "" (empty) on the command line.
Additionally, the jdk.http.auth.tunneling.disabledSchemes
and jdk.http.auth.proxying.disabledSchemes
networking properties, and system properties of the same name, can be used to disable other authentication schemes that may be active when setting up a tunnel for HTTPS, or proxying plain HTTP, respectively.
The behavior of HttpURLConnection when using a ProxySelector has been modified in this JDK release. HttpURLConnection used to fall back to a DIRECT connection attempt if the configured proxy(s) failed to make a connection. This release introduces a change whereby no DIRECT connection will be attempted in such a scenario. Instead, the HttpURLConnection.connect() method will fail and throw the IOException that occurred from the last proxy tested.
Class loaders created by the java.net.URLClassLoader.newInstance
methods can be used to load classes from a list of given URLs. If the calling code does not have access to one or more of the URLs, and the URL artifacts that can be accessed do not contain the required class, then a ClassNotFoundException, or similar, will be thrown. Previously, a SecurityException would have been thrown when access to a URL was denied. If required to revert to the old behavior, this change can be disabled by setting the jdk.net.URLClassPath.disableRestrictedPermissions
system property.
A new JDK implementation specific system property to control caching for HTTP NTLM connection is introduced. Caching for HTTP NTLM connection remains enabled by default, so if the property is not explicitly specified, there will be no behavior change.
On some platforms, the HTTP NTLM implementation in the JDK can support transparent authentication, where the system user credentials are used at the system level. When transparent authentication is not available or unsuccessful, the JDK only supports getting credentials from a global authenticator. If connection to the server is successful, the authentication information will then be cached and reused for further connections to the same server. In addition, connecting to an HTTP NTLM server usually involves keeping the underlying connection alive and reusing it for further requests to the same server. In some applications, it may be desirable to disable all caching for the HTTP NTLM protocol in order to force requesting new authentication with each new request to the server.
With this fix, we now provide a new system property that will allow control of the caching policy for HTTP NTLM connections. If jdk.ntlm.cache
is defined and evaluates to false
, then all caching will be disabled for HTTP NTLM connections. Setting this system property to false may, however, result in undesirable side effects; for instance, obtaining credentials through a global Authenticator implementation may result in a popup asking the user for credentials for every new request.
The current implementation of java.net.HttpCookie
can only be used to parse cookie headers generated by a server and sent in an HTTP response as a Set-Cookie
or Set-Cookie2
header. It does not support parsing of client generated cookie headers.
This is not completely clear from the API documentation of that class. The documentation could be updated to make the current behavior clearer, or preferably, the implementation could be updated to support both behaviors in a future release.
A new JDK implementation specific system property to control caching for HTTP SPNEGO (Negotiate/Kerberos) connections is introduced. Caching for HTTP SPNEGO connections remains enabled by default, so if the property is not explicitly specified, there will be no behavior change.
When connecting to an HTTP server which uses SPNEGO to negotiate authentication, and when connection and authentication with the server is successful, the authentication information will then be cached and reused for further connections to the same server. In addition, connecting to an HTTP server using SPNEGO usually involves keeping the underlying connection alive and reusing it for further requests to the same server. In some applications, it may be desirable to disable all caching for the HTTP SPNEGO (Negotiate/Kerberos) protocol in order to force requesting new authentication with each new request to the server.
With this fix, we now provide a new system property that allows control of the caching policy for HTTP SPNEGO connections. If jdk.spnego.cache is defined and evaluates to false, then all caching will be disabled for HTTP SPNEGO connections. Setting this system property to false may, however, result in undesirable side effects: for instance, requesting credentials from the global Authenticator implementation may result in a popup asking the user for credentials for every new request.
The sentence "The SecurityManager.checkDelete(String) method is invoked to check delete access if the file is opened with the DELETE_ON_CLOSE option." was appended to the verbiage of the SecurityException throws clause in the specifications of the newBufferedWriter() and write() methods of java.nio.file.Files.
The java.nio.channels.FileLock constructors will now throw a NullPointerException if called with a null Channel parameter. To avoid an unexpected behavior change, subclasses of FileLock should therefore ensure that the Channel they pass to the superclass constructor is non-null.
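A minimal sketch of the new constructor behavior — DummyLock is an illustrative subclass, not a JDK class:

```java
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class FileLockNullDemo {
    // Trivial FileLock subclass used only to probe the superclass constructor.
    static class DummyLock extends FileLock {
        DummyLock(FileChannel channel) {
            super(channel, 0L, 1L, false);
        }
        @Override public boolean isValid() { return false; }
        @Override public void release() {}
    }

    // Since JDK 9, passing a null channel fails at construction time.
    public static boolean rejectsNullChannel() {
        try {
            new DummyLock(null);
            return false;
        } catch (NullPointerException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(rejectsNullChannel());
    }
}
```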
The RMI multiplex protocol is disabled by default. It can be re-enabled by setting the system property "sun.rmi.transport.tcp.enableMultiplexProtocol" to "true".
The performance of java.time.zone.ZoneRulesProvider.getAvailableZoneIds() is improved by returning an unmodifiable set of zone ids; previously the set was modifiable.
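A small sketch showing the returned set is now unmodifiable (class and zone-id names are illustrative):

```java
import java.time.zone.ZoneRulesProvider;
import java.util.Set;

public class ZoneIdsDemo {
    // Since JDK 9, the returned set rejects mutation.
    public static boolean isUnmodifiable() {
        Set<String> ids = ZoneRulesProvider.getAvailableZoneIds();
        try {
            ids.add("Hypothetical/Zone"); // not a real zone id
            return false;
        } catch (UnsupportedOperationException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(isUnmodifiable());
    }
}
```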
Boundaries specified by java.time.temporal.ChronoField.EPOCH_DAY have been corrected to match the epoch day of LocalDate.MIN and LocalDate.MAX.
The Java SE 8 specification for java.time.Clock states that "The system factory methods provide clocks based on the best available system clock. This may use System.currentTimeMillis(), or a higher resolution clock if one is available." In JDK 8 the implementation of the clock returned was based on System.currentTimeMillis(), and thus had only millisecond resolution. In JDK 9, the implementation is based on the underlying native clock that System.currentTimeMillis() is using, providing the maximum resolution available from that clock. On most systems this can be microseconds, or sometimes even tenths of a microsecond.
An application that assumes the clock returned by these system factory methods will always have millisecond precision, and actively depends on it, may therefore need to be updated to take into account the possibility of a greater resolution, as was stated in the API documentation. It is also worth noting that a new Clock.tickMillis(zoneId) method has been added to allow time to be obtained at only millisecond precision - see: http://download.java.net/java/jdk9/docs/api/java/time/Clock.html#tickMillis-java.time.ZoneId-.
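A brief sketch of the millisecond-only clock (class name illustrative):

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneId;

public class ClockResolutionDemo {
    // Clock.tickMillis (new in Java 9) truncates to whole milliseconds,
    // so the nano-of-second field is always a multiple of 1,000,000.
    public static boolean millisOnly() {
        Instant t = Clock.tickMillis(ZoneId.systemDefault()).instant();
        return t.getNano() % 1_000_000 == 0;
    }

    public static void main(String[] args) {
        // The default system clock may show sub-millisecond digits on JDK 9+.
        System.out.println(Clock.systemUTC().instant());
        System.out.println(millisOnly());
    }
}
```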
JDK 9 contains IANA time zone data version 2016j. For more information, refer to Timezone Data Versions in the JRE Software.
JDK 9 contains IANA time zone data version 2016d. For more information, refer to Timezone Data Versions in the JRE Software.
JDK 9 contains IANA time zone data version 2016f. For more information, refer to Timezone Data Versions in the JRE Software.
JDK 9 contains IANA time zone data version 2016i. For more information, refer to Timezone Data Versions in the JRE Software.
java.util.Properties is a subclass of the legacy Hashtable class, which synchronizes on itself for any access. System properties are stored in a Properties object. They are a common way to change default settings, and sometimes must be read during class loading.
System.getProperties() returns the same Properties instance accessed by the system, which any application code might synchronize on. This situation has led to deadlocks in the past, such as JDK-6977738.
The Properties class has been updated to store its values in an internal ConcurrentHashMap (instead of using the inherited Hashtable mechanism), and its getter methods and legacy Enumerations are no longer synchronized. This should reduce the potential for deadlocks. It also means that, since Properties' Iterators are now generated by ConcurrentHashMap, they are not fail-fast: ConcurrentModificationExceptions are no longer thrown.
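A sketch of the weakly consistent iteration (class name and keys are illustrative) — mutating while iterating no longer triggers a ConcurrentModificationException:

```java
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.Map;
import java.util.Properties;

public class PropsIterationDemo {
    // On JDK 9+, Properties is backed by a ConcurrentHashMap, so its
    // iterators are weakly consistent rather than fail-fast.
    public static boolean noFailFast() {
        Properties p = new Properties();
        p.setProperty("a", "1");
        p.setProperty("b", "2");
        try {
            Iterator<Map.Entry<Object, Object>> it = p.entrySet().iterator();
            while (it.hasNext()) {
                it.next();
                p.setProperty("extra", "3"); // mutate while iterating
            }
            return true;
        } catch (ConcurrentModificationException e) {
            return false; // pre-JDK 9 fail-fast behavior
        }
    }

    public static void main(String[] args) {
        System.out.println(noFailFast());
    }
}
```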
The specification of the class java.util.prefs.Preferences was modified to disallow the use of any String containing the null control character, code point U+0000, in any String used as the key or value parameter in any of the abstract put*(), get*(), and remove methods. If such a character is detected, an IllegalArgumentException shall be thrown.
The specification of the class java.util.prefs.AbstractPreferences was modified according to the corresponding change in its superclass java.util.prefs.Preferences to disallow the use of any String containing the null control character, code point U+0000, in any String used as the key or value parameter in any of the put*(), get*(), and remove() method implementations. These method implementations were modified to throw an IllegalArgumentException upon encountering such a character in a key or value String in these contexts. Also, the class specification was modified to correct the erroneous reference to the flush() and sync() methods as returning a boolean value when they are in fact void.
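A sketch of the new rejection (the node name and key are illustrative; the exception is thrown before the backing store is touched):

```java
import java.util.prefs.Preferences;

public class PrefsNullCharDemo {
    // Keys or values containing U+0000 are rejected up front with
    // IllegalArgumentException since JDK 9.
    public static boolean rejectsNullChar() {
        Preferences node = Preferences.userRoot().node("nullchar-demo");
        try {
            node.put("bad\u0000key", "value");
            return false;
        } catch (IllegalArgumentException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(rejectsNullChar());
    }
}
```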
java.util.Properties defines the loadFromXML and storeToXML methods for Properties stored in XML documents. XML specifications only require XML processors to read entities in UTF-8 and UTF-16, and the API docs for these methods only require an implementation to support UTF-8 and UTF-16. The implementation of these methods has changed in JDK 9 to use a smaller XML parser, which may impact applications that have been using these methods with other encodings. The new implementation does not support all encodings that the legacy implementation supported; in particular, it does not support UTF-32/UCS-4, IBM* or x-IBM-* encodings. For maximum portability, applications are encouraged to use UTF-8 and UTF-16.
As part of the fix for JDK-8006627, a check of the String parameter of java.util.UUID.fromString(String) was added which will result in an IllegalArgumentException being thrown if the length of the parameter is greater than 36.
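A short sketch of the new length check (class name and input strings are illustrative):

```java
import java.util.UUID;

public class UuidParseDemo {
    // Since JDK 9 (JDK-8006627), inputs longer than 36 characters
    // are rejected with IllegalArgumentException.
    public static boolean rejectsOverlongInput() {
        try {
            UUID.fromString("123e4567-e89b-12d3-a456-426655440000-garbage");
            return false;
        } catch (IllegalArgumentException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        // A canonical 36-character representation still parses fine.
        System.out.println(UUID.fromString("123e4567-e89b-12d3-a456-426655440000"));
        System.out.println(rejectsOverlongInput());
    }
}
```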
The specification of the default locales used in Formatter related classes has been clarified to designate the default locale for formatting (Locale.Category.FORMAT).
In Java SE 9, threads that are part of the fork/join common pool will always return the system class loader as their thread context class loader. In previous releases, the thread context class loader may have been inherited from whatever thread causes the creation of the fork/join common pool thread, e.g. by submitting a task. An application cannot reliably depend on when, or how, threads are created by the fork/join common pool, and as such cannot reliably depend on a custom defined class loader to be set as the thread context class loader.
The ZipFile implementation has changed significantly in JDK 9 to improve reliability. A consequence of these changes is that the implementation now rejects ZIP files where the month or day in an MS-DOS date/time field is 0. While technically invalid, these ZIP files were not rejected in previous releases. A future release will address this issue.
zlib issue #275 tracks an issue in zlib 1.2.11 that may impact applications using the java.util.zip.Deflater API when this version of zlib is installed (Ubuntu 17.04, for example). Specifically, it may impact code that changes the compression level or strategy and then resets the deflater. More details can be found in JDK-8184306. The JDK includes a patched version of zlib on Microsoft Windows, so this issue does not impact that platform.
java.util.zip.ZipEntry API doc specifies "A directory entry is defined to be one whose name ends with a '/'". However, in previous JDK releases java.util.zip.ZipFile.getEntry(String entryName) may return a ZipEntry instance with an entry name that does not end with '/' for an existing zip directory entry when the passed in argument entryName does not end with a '/' and there is a matching zip directory entry with name entryName + '/' in the zip file. With JDK 9 the name of the ZipEntry instance returned from java.util.zip.ZipFile.getEntry() always ends with '/' for any zip directory entry.
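A sketch of the normalized lookup (class name and entry name are illustrative; the zip is written to a temp file):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class ZipDirEntryDemo {
    // Writes a zip containing one directory entry, then looks it up
    // WITHOUT the trailing '/'. Since JDK 9, the returned entry's
    // name always ends with '/'.
    public static String lookupDirName() {
        try {
            File zip = File.createTempFile("dir-entry-demo", ".zip");
            zip.deleteOnExit();
            try (ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zip))) {
                out.putNextEntry(new ZipEntry("dir/"));
                out.closeEntry();
            }
            try (ZipFile zf = new ZipFile(zip)) {
                return zf.getEntry("dir").getName();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(lookupDirName());
    }
}
```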
An ArrayIndexOutOfBoundsException will be thrown in java.util.jar.JarFile if the Java run-time encounters the backtick (`) character in a JAR file's manifest. This can be worked around by removing backtick characters from the JAR file's manifest.
LogRecord now stores the event time in the form of a java.time.Instant. The XMLFormatter DTD is upgraded to print the new higher time resolution.
In Java SE 9, java.util.logging is updated to use java.time instead of System.currentTimeMillis() and java.util.Date. This allows for higher time stamp precision in LogRecord.
As a consequence, the implementation of the methods getMillis() and setMillis(long) in java.util.logging.LogRecord has been changed to use java.time.Instant, and the method setMillis(long) has been deprecated in favor of the new method LogRecord.setInstant(java.time.Instant). The java.util.logging.SimpleFormatter has been updated to pass a java.time.ZonedDateTime object instead of java.util.Date to String.format. The java.util.logging.XMLFormatter has been updated to print a new optional <nanos> XML element after the <millis> element. The <nanos> element contains a nanoseconds adjustment to the number of milliseconds printed in the <millis> element. The XMLFormatter will also print the full java.time.Instant in the <date> field, using the java.time.format.DateTimeFormatter.ISO_INSTANT formatter.
Compatibility with previous releases:
The LogRecord serial form, while remaining fully backward/forward compatible, now contains an additional serial nanoAdjustment field of type int, which corresponds to a nanoseconds adjustment to the number of milliseconds contained in the serial millis field. If a LogRecord is serialized and transmitted to an application running on a previous release of the JDK, the application will simply see a LogRecord with a time truncated at millisecond resolution. Similarly, if a LogRecord serialized by an application running on a previous release of the JDK is transmitted to an application running on Java SE 9 or later, only the millisecond resolution will be available.
Applications that parse logs produced by the XMLFormatter, and which perform validation, may need to be upgraded with the newer version of the logger.dtd, available in appendix A of the Logging Overview. In order to mitigate the compatibility risks, the XMLFormatter class (and subclasses) can be configured to revert to the old XML format from Java SE 8 and before. See the java.util.logging.XMLFormatter API documentation for more details.
There could also be an issue if a subclass of LogRecord overrides getMillis/setMillis without calling the implementation of the super class. In that case, the event time as seen by the formatters and other classes may be wrong, as these have been updated to no longer call getMillis() but use getInstant() instead.
LogManager.readConfiguration calls Properties.load, which may throw IllegalArgumentException if it encounters an invalid Unicode escape sequence in the input stream. In previous versions of the JDK, the IllegalArgumentException was simply propagated to the caller. This was, however, in violation of the specification, since LogManager.readConfiguration is not specified to throw IllegalArgumentException. Instead, it is specified to throw IOException "if there are problems reading from the stream". In Java SE 9, LogManager.readConfiguration will no longer propagate such an IllegalArgumentException directly, but will wrap it inside an IOException in order to conform to the specification.
A new "java.util.logging.FileHandler.maxLocks" configurable property is added to java.util.logging.FileHandler.
This new logging property can be defined in the logging configuration file and makes it possible to configure the maximum number of concurrent log file locks a FileHandler can handle. The default value is 100.
In a highly concurrent environment where multiple (more than 101) standalone client applications are using the JDK Logging API with FileHandler simultaneously, it may happen that the default limit of 100 is reached, resulting in a failure to acquire FileHandler file locks and causing an IOException to be thrown. In such a case, the new logging property can be used to increase the maximum number of locks before deploying the application.
If not overridden, the default value of maxLocks (100) remains unchanged. See java.util.logging.LogManager and java.util.logging.FileHandler API documentation for more details.
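An illustrative logging configuration fragment (the value 200 is an arbitrary example, not a recommendation):

```properties
# logging.properties: raise the limit on concurrent FileHandler file locks
java.util.logging.FileHandler.maxLocks = 200
```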
When a logger has a handler configured in the logging configuration file (using the <logger>.handlers property), a reference to that logger will be kept internally by the LogManager until LogManager.reset() is called, in order to ensure that the associated handlers are properly closed on reset. As a consequence, such loggers won't be garbage collected until LogManager.reset() is called. An application that needs to allow garbage collection of these loggers before reset is called can revert to the old behaviour by additionally specifying <logger>.handlers.ensureCloseOnReset=false in the logging configuration file. Note however that doing so will reintroduce the resource leak that JDK-8060132 is fixing. Such an application must therefore take responsibility for keeping the logger alive as long as it is needed, and for closing any handler attached to it before the logger gets garbage collected. See the LogManager API documentation for more details.
A new JDK implementation specific system property jdk.internal.FileHandlerLogging.maxLocks has been introduced to control the java.util.logging.FileHandler MAX_LOCKS limit. The default value of the current MAX_LOCKS (100) is retained if this new system property is not set or an invalid value is provided. Valid values for this property are integers ranging from 1 to Integer.MAX_VALUE-1.
The java.util.logging.Formatter.formatMessage API specification stated that MessageFormat would be called if the message string contained "{0". In practice, MessageFormat was called if the message string contained either "{0", "{1", "{2" or "{3".
In Java SE 9, the specification and implementation of this method have been changed to call MessageFormat if the message string contains "{<digit>", where <digit> is in [0..9].
In practice, this should be transparent for calling applications.
The only case where an application might see a behaviour change is if the application passes a format string that does not contain any formatter of the form "{0", "{1", "{2" or "{3", but contains "{<digit>" with <digit> within [4..9], along with an array of parameters that contains at least <digit>+1 elements, and depends on MessageFormat not being called. In that case the method will return a formatted message instead of the format string.
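A sketch of the new behavior with a high placeholder index (class name and message are illustrative):

```java
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.SimpleFormatter;

public class FormatMessageDemo {
    // Since JDK 9, any "{<digit>}" placeholder (0-9) triggers MessageFormat,
    // not just {0}..{3}, so "{4}" below is substituted.
    public static String formatHighIndex() {
        LogRecord record = new LogRecord(Level.INFO, "value is {4}");
        record.setParameters(new Object[] {"a", "b", "c", "d", "e"});
        return new SimpleFormatter().formatMessage(record);
    }

    public static void main(String[] args) {
        System.out.println(formatHighIndex());
    }
}
```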
In java.util.regex.Pattern, using a character class of the form [^a-b[c-d]], the negation ^ negates the entire class, not just the first range. The negation operator "^" has the lowest precedence among the character class operators — intersection "&&", union, range "-" and nested class "[ ]" — so it is always applied last.
Previously, the negation was applied only to the first range or group, leading to inconsistent and misunderstood matches. Details and examples can be found in the issue and at http://mail.openjdk.java.net/pipermail/core-libs-dev/2011-June/006957.html.
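A short sketch of the whole-class negation (class name and test strings are illustrative):

```java
import java.util.regex.Pattern;

public class CharClassDemo {
    // '^' negates the entire union (a-b plus c-d), so 'c' is excluded
    // even though it appears inside the nested class.
    private static final Pattern NEGATED = Pattern.compile("[^a-b[c-d]]");

    public static boolean matches(String s) {
        return NEGATED.matcher(s).matches();
    }

    public static void main(String[] args) {
        System.out.println(matches("c")); // inside the negated union
        System.out.println(matches("e")); // outside the union
    }
}
```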
Pattern.compile(String, int) will throw IllegalArgumentException if anything other than a combination of predefined values is passed as the second argument, in accordance with the specification.
The Arrays.asList() API returns an instance of List. Calling the toArray() method on that List instance is specified always to return Object[], that is, an array of Object. In previous releases, it would sometimes return an array of some subtype. Note that the declared return type of Collection.toArray() is Object[], which permits an instance of an array of a subtype to be returned. The specification wording, however, clearly requires an array of Object to be returned.
The toArray() method has been changed to conform to the specification, and it now always returns Object[]. This may cause code that was expecting the old behavior to fail with a ClassCastException. An example of code that worked in previous releases but that now fails is the following:
List<String> list = Arrays.asList("a", "b", "c");
String[] array = (String[]) list.toArray();
If this problem occurs, rewrite the code to use the one-arg form toArray(T[]), and provide an instance of the desired array type. This will also eliminate the need for a cast.
String[] array = list.toArray(new String[0]);
Before the JDK 9 release, invocation of the method Collections.asLifoQueue with a null argument value would not throw a NullPointerException as specified by the class documentation. Instead a NullPointerException would be thrown when operating on the returned Queue. The JDK 9 release corrects the implementation of Collections.asLifoQueue to conform to the specification. Behavioral compatibility is not preserved but it is expected that the impact will be minimal given analysis of existing usages.
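A minimal sketch of the corrected eager check (class name illustrative):

```java
import java.util.Collections;

public class LifoQueueDemo {
    // Since JDK 9, a null argument fails immediately with
    // NullPointerException, as the class documentation specifies,
    // rather than failing later when the returned Queue is used.
    public static boolean throwsEagerly() {
        try {
            Collections.asLifoQueue(null);
            return false;
        } catch (NullPointerException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(throwsEagerly());
    }
}
```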
Previously the default implementation of List.spliterator derived a Spliterator from the List's iterator, which splits poorly and affects the performance of a parallel stream returned by List.parallelStream. The default implementation of List.spliterator now returns an optimal splitting Spliterator implementation for List implementations that implement java.util.RandomAccess. As a result, parallel stream performance may be improved for third-party List implementations, such as those provided by Eclipse Collections, that do not override List.spliterator for compatibility across multiple major versions of the Java platform. This enhancement is a trade-off: it requires that the List.get method of such lists implementing RandomAccess have no side effects, ensuring safe concurrent execution of the method when a parallel stream pipeline is executed.
The locale data based on the Unicode Consortium's CLDR (Common Locale Data Repository) has been upgraded in JDK 9 to release 29. See http://cldr.unicode.org/index/downloads/cldr-29 for more detail.
Prior to JDK 9, SPI implementations of the java.awt.im.spi, java.text.spi, and java.util.spi packages used the Java Extension Mechanism. In JDK 9, this mechanism has been removed. SPI implementations should now be deployed on the application class path or as a module on the module path.
In releases through JDK 8, SPI implementations of java.util.spi.ResourceBundleControlProvider were loaded using Java Extension Mechanism. In JDK 9, this mechanism is no longer available. Instead, SPI implementations may be placed on an application's class path.
The default locale data provider lookup does not load SPI based locale sensitive services. If it is needed, the system property "java.locale.providers" needs to designate "SPI" explicitly. For more detail, refer to LocaleServiceProvider.
Remote class loading via JNDI object factories stored in naming and directory services, is disabled by default. To enable remote class loading by the RMI Registry or COS Naming service provider, set the following system property to the string "true", as appropriate:
com.sun.jndi.rmi.object.trustURLCodebase
com.sun.jndi.cosnaming.object.trustURLCodebase
The javax.naming.CompoundName, an extensible type, has a protected member, impl, whose type, javax.naming.NameImpl, is package-private. This is a long-standing issue where an inaccessible implementation type has mistakenly made its way into the public Java SE API.
The new javac lint option javac -Xlint helped identify this issue. In Java SE 9, this protected member has been removed from the public API.
Since the type of the member is package-private, it cannot be directly referenced by non-JDK code. The member type does not implement or extend any super type directly, therefore any non-JDK subtype of javax.naming.CompoundName could only refer to this member as Object. It is possible that such a subtype might invoke toString, or any of Object's methods, on this member, or even synchronize on it. In such cases, subtypes of javax.naming.CompoundName will require updating.
Code making a static reference to the member will fail to compile, e.g. error: impl has private access in CompoundName. Previously compiled code executed with JDK 9 that accesses the member directly will fail, e.g. java.lang.IllegalAccessError: tried to access field javax.naming.CompoundName.impl from class CompoundName$MyCompoundName.
The JDK was throwing a NullPointerException when a non-compliant REFERRAL status result was sent but no referral values were included. With this change, a NamingException with message value of "Illegal encoding: referral is empty" will be thrown in such circumstances. See JDK-8149450 and JDK-8154304 for more details.
The JDWP socket connector has been changed to bind to localhost only if no IP address or hostname is specified on the agent command line. A hostname of asterisk (*) may be used to achieve the old behavior, which is to bind the JDWP socket connector to all available interfaces; this is not secure and not recommended.
When running a Java application with the options "-javaagent:myagent.jar -Djava.system.class.loader=MyClassLoader", myagent.jar is added to the custom system class loader rather than the application class loader.
In addition, the java.lang.instrument package description has a small update making it clear that a custom system class loader needs to define appendToClassPathForInstrumentation in order to load the agent at startup. Previously, custom system class loaders were required to implement this method only if agents were started in the live phase (Agent_OnAttach).
In Java SE 9, the java.util.logging.LoggingMXBean interface is deprecated in favor of the java.lang.management.PlatformLoggingMXBean interface. The java.util.logging.LogManager.getLoggingMXBean() method is also deprecated in favor of java.lang.management.ManagementFactory.getPlatformMXBean(PlatformLoggingMXBean.class).
The concrete implementation of the logging MXBean registered in the MBeanServer and obtained from the ManagementFactory will only implement java.lang.management.PlatformLoggingMXBean, and no longer java.util.logging.LoggingMXBean. It must be noted that PlatformLoggingMXBean and LoggingMXBean attributes are exactly the same. The PlatformLoggingMXBean interface has all the methods defined in LoggingMXBean, and so PlatformLoggingMXBean by itself provides the full management capability of the logging facility.
This should be mostly transparent to remote and local clients of the API.
Compatibility:
Calls to ManagementFactory.newPlatformMXBeanProxy(MBeanServerConnection, ObjectName, java.util.logging.LoggingMXBean.class) and calls to JMX.newMXBeanProxy(MBeanServerConnection, ObjectName, java.util.logging.LoggingMXBean.class) will continue to work as before.
Remote clients running any version of the JDK should see no changes, except for the interface name in MBeanInfo, and the change in isInstanceOf reported in 1. and 2. below.
The behavioral change and source incompatibility due to this change are as follows:
1. ManagementFactory.getPlatformMBeanServer().isInstanceOf(ObjectName, "java.util.logging.LoggingMXBean") will now return 'false' instead of 'true'. If an application depends on this, then a workaround is to change the source of the calling code to check for java.lang.management.PlatformLoggingMXBean instead.
2. The Logging MXBean MBeanInfo will now report that its management interface is java.lang.management.PlatformLoggingMXBean instead of the non-standard sun.management.ManagementFactoryHelper$LoggingMXBean name it used to display. The new behavior has the advantage that the reported interface name is now a standard class.
3. Local clients which obtain an instance of the logging MXBean by calling ManagementFactory.getPlatformMXBean(PlatformLoggingMXBean.class) will no longer be able to cast the result to java.util.logging.LoggingMXBean. PlatformLoggingMXBean already has all the methods defined in LoggingMXBean, therefore a simple workaround is to change the code to accept PlatformLoggingMXBean instead, or to use the deprecated LogManager.getLoggingMXBean() instead.
com.sun.management.HotSpotDiagnostic::dumpHeap API is modified to throw IllegalArgumentException if the supplied file name does not end with “.hprof” suffix. Existing applications which do not provide a file name ending with the “.hprof” extension will fail with IllegalArgumentException. In that case, applications can either choose to handle the exception or restore old behaviour by setting system property 'jdk.management.heapdump.allowAnyFileSuffix' to true.
A new annotation @javax.management.ConstructorParameters in the java.management module is introduced.
The newly introduced annotation is a 1:1 copy of @java.beans.ConstructorProperties. Constructors annotated by @java.beans.ConstructorProperties will still be recognized and processed.
If a constructor is annotated by both @javax.management.ConstructorParameters and @java.beans.ConstructorProperties, only @javax.management.ConstructorParameters will be used.
The JMX ObjectName class was refactored, reducing its class member metadata by 8 bytes; each ObjectName instance is 8 bytes smaller in memory than a JDK 8 ObjectName instance.
A new restriction on domain name length is introduced: the domain name is now a case-sensitive string of limited length, with a limit of Integer.MAX_VALUE/4.
The Javadoc Standard Doclet documentation has been enhanced to specify that it doesn't validate the content of documentation comments for conformance, nor does it attempt to correct any errors in documentation comments. See the Conformance section in the Doclet documentation.
The implementation of the Attach API has changed in JDK 9 to disallow attaching to the current VM by default. This change should have no impact on tools that use the Attach API to attach to a running VM. It may impact libraries that mis-use this API as a way to get at the java.lang.instrument API. The system property jdk.attach.allowAttachSelf may be set on the command line to mitigate any compatibility issues with this change.
A warning has been added to the plugin authentication dialog in cases where HTTP Basic authentication (credentials are sent unencrypted) is used while using a proxy or while not using SSL/TLS protocols:
"WARNING: Basic authentication scheme will effectively transmit your credentials in clear text. Do you really want to do this?"
JDK 9 no longer contains samples, including the JnlpDownloadServlet. If you need to use the JnlpDownloadServlet, you can get it from the latest update of JDK 8.
The Deployment Toolkit API installLatestJRE() and installJRE(requestedVersion) methods from deployJava.js and the install() method from dtjava.js no longer install the JRE. If a user's version of Java is below the security baseline, they redirect the user to java.com to get an updated JRE.
Starting with JDK 9, support for deployment technologies designed to access Java applications through a web browser is limited to client and development platforms. Use of deployment technologies on server platforms such as Oracle Linux, SUSE Linux, Windows Server 2016, and others is not supported. See the JDK 9 and JRE 9 Certified System Configurations page for a complete list.
JDK-8080977 introduced a delay on applet launch that appeared only in Internet Explorer and lasted about 20 seconds. JDK-8136759 removed this delay.
Documentation for the Java Packager states that the -srcfiles argument is not mandatory, and if omitted all files in the directory specified by the -srcdir argument will be used. This is not functioning as expected. When -srcfiles is omitted, the resultant bundle may issue a class not found error.
New option "Use roaming profile" added in JCP (Windows only).
When the option is set, the following data is stored in the roaming profile:
The rest of the cache (the cache without LAP), temp, and log folders are always stored in LocalLow regardless of the roaming profile settings.
Web Start applications cannot be launched when clicking a JNLP link from IE 11 on Windows 10 Creators Update when a 64-bit JRE is installed. The workaround is to uninstall the 64-bit JRE and use only the 32-bit JRE.
Both jcontrol and javaws -viewer do not work on Oracle Linux 6. Java Control Panel functionality is dependent on JavaFX technology, which is not supported on Oracle Linux 6 in the JDK 9 release. Users reliant on the Java Control Panel are encouraged to use the most up-to-date JDK 8 release.
JavaFX applications deployed with
<application-desc type="JavaFX"> <param name="param1" value="foo"/> </application-desc>
will have their <param> elements ignored. It is recommended that JavaFX applications relying on parameter values continue to use the <javafx-desc> element of the xml extension until this is resolved.
In 8u20, the custom XML parser that was used in Java Web Start to parse the JNLP file was replaced with the standard SAX parser. When a parsing error occurred, the code would print a warning message to the Java Console and trace file, and then try again using the custom XML parser. In JDK 9 this fallback has been removed. If the JNLP file cannot be parsed by the SAX parser, an error dialog is shown and the app will not run. This could cause compatibility errors with existing JNLP files that don't follow the XML rules enforced by the SAX parser.
New-style JVM arguments, those with embedded spaces (e.g., "--add-modules <module>" and "--add-exports <module>" instead of, "--add-modules=<module>" and "--add-exports=<module>") will not be supported when passed through Java Web Start or Plug-in. If arguments with embedded spaces are passed, they could be processed incorrectly.
In JDK 9, Java Web Start applications are prohibited from using URLStreamHandlerFactory. Using URLStreamHandlerFactory via javaws will result in an exception with the message "factory already defined." Applications launched directly with the java command are not impacted.
JDK 9 will support code generation for the AVX-512 (AVX3) instruction set on x86 CPUs, but not by default. A maximum of AVX2 is supported by default in JDK 9. The flag -XX:UseAVX=3 can be used to enable AVX-512 code generation on CPUs that support it.
The 32-bit Client VM was removed from Linux x86 and Windows. As a result, the -client flag is ignored with 32-bit versions of Java on these platforms, and the 32-bit Server VM is used instead. However, due to the limited virtual address space on Windows in 32-bit mode, by default the Server VM emulates the behavior of the Client VM and only uses the C1 JIT compiler, the Serial GC, and a 32 MB code cache. To revert to server mode, the flag -XX:{+|-}TieredCompilation can be used. On Linux x86 there is no Client VM mode emulation.
When performing OSR on loops with huge stride and/or initial values, in very rare cases, the tiered/server compilers could produce non-canonical loop shapes that produce nondeterministic answers when the answers should be deterministic. This issue has now been fixed.
In 8u40, and 7u80, a new feature was introduced to use the PICL library on Solaris to get some system information. If this library was not found, we printed an error message:
Java HotSpot(TM) Server VM warning: PICL (libpicl.so.1) is missing. Performance will not be optimal.
This warning was misleading. Not finding the PICL library is a very minor issue, and the warnings mostly lead to confusion. In this release, the warning was removed.
According to the Java VM Specification, final fields can be modified by the putfield bytecode instruction only if the instruction appears in the instance initializer method <init> of the field's declaring class. Similarly, static final fields can be modified by a putstatic instruction only if the instruction appears in the class initializer method <clinit> of the field's declaring class. With the JDK 9 release, the HotSpot VM fully enforces these restrictions, but only for class files with version number >= 53. For class files with version number < 53, the restrictions are only partially enforced, as in releases preceding JDK 9: final fields can be modified in any method of the class declaring the field (not only class/instance initializers).
We have implemented changes that improve the performance of several security algorithms, especially when using ciphers with key lengths of 2048 bits or greater. To turn on these improvements, use the options -XX:+UseMontgomeryMultiplyIntrinsic and -XX:+UseMontgomerySquareIntrinsic. This improvement applies only to Linux and Solaris on the x86_64 architecture.
The IEEE 754 standard distinguishes between signaling and quiet NaNs. When executing floating point operations, some processors silently convert signaling NaNs to quiet NaNs. The 32-bit x86 version of the HotSpot JVM allows silent conversions to happen. With JVM releases preceding JDK 9, silent conversions happen depending on whether the floating point operations are part of compiled or interpreted code. With the JDK 9 release, interpreted and compiled code behaves consistently with respect to signaling and quiet NaNs.
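The distinction between the two NaN kinds is visible from Java through the raw bit pattern: a quiet double NaN has the most significant fraction bit set. As a hedged sketch (the class name NaNBits and the helper are ours), one can inspect this bit directly:

```java
public class NaNBits {
    // The "quiet" bit is the most significant fraction bit of an IEEE 754 double.
    static final long QUIET_BIT = 0x0008000000000000L;

    public static boolean isQuiet(double d) {
        // doubleToRawLongBits preserves the exact NaN payload, unlike doubleToLongBits.
        return Double.isNaN(d) && (Double.doubleToRawLongBits(d) & QUIET_BIT) != 0;
    }

    public static void main(String[] args) {
        // The canonical Java NaN (0x7ff8000000000000) is a quiet NaN.
        System.out.println(isQuiet(Double.NaN));
        // A signaling NaN pattern: exponent all ones, quiet bit clear, nonzero payload.
        // On some hardware, merely moving this value may already quiet it.
        double snan = Double.longBitsToDouble(0x7ff0000000000001L);
        System.out.println(isQuiet(snan));
    }
}
```

Note that the JLS explicitly permits longBitsToDouble to return a quiet NaN for a signaling-NaN input on some platforms, which is exactly the silent-conversion behavior this release note describes.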
This enhancement provides a way to specify more granular levels for the GC verification enabled by the "VerifyBeforeGC", "VerifyAfterGC" and "VerifyDuringGC" diagnostic options. It introduces a new diagnostic option, VerifySubSet, with which one can specify the subset of the memory system that should be verified.
With this new option, one or more sub-systems can be specified in a comma separated string. Valid memory sub-systems are: threads, heap, symbol_table, string_table, codecache, dictionary, classloader_data_graph, metaspace, jni_handles, c-heap and codecache_oops.
During the GC verification, only the sub-systems specified using VerifySubSet get verified:
D:\tests>java -XX:+UnlockDiagnosticVMOptions -XX:+VerifyBeforeGC -XX:VerifySubSet="threads,c-heap" -Xlog:gc+verify=debug Test
[0.095s][debug ][gc,verify] Threads
[0.099s][debug ][gc,verify] C-heap
[0.105s][info ][gc,verify] Verifying Before GC (0.095s, 0.105s) 10.751ms
[0.120s][debug ][gc,verify] Threads
[0.124s][debug ][gc,verify] C-heap
[0.130s][info ][gc,verify] Verifying Before GC (0.120s, 0.130s) 9.951ms
[0.148s][debug ][gc,verify] Threads
[0.152s][debug ][gc,verify] C-heap
If any invalid memory sub-systems are specified with VerifySubSet, Java process exits with the following error message:
D:\tests>java -XX:+UnlockDiagnosticVMOptions -XX:+VerifyBeforeGC -XX:VerifySubSet="threads,c-heap,hello" -Xlog:gc+verify=debug oom
Error occurred during initialization of VM
VerifySubSet: 'hello' memory sub-system is unknown, please correct it
The logging for all garbage collectors in HotSpot has been changed to use a new logging framework that is configured through the -Xlog command line option. The command line flags -XX:+PrintGC, -XX:+PrintGCDetails and -Xloggc have been deprecated and will likely be removed in a future release. They are currently mapped to similar -Xlog configurations. All other flags that were used to control garbage collection logging have been removed. See the documentation for -Xlog for details on how to configure and control the logging. These are the flags that were removed:
CMSDumpAtPromotionFailure, CMSPrintEdenSurvivorChunks, G1LogLevel, G1PrintHeapRegions, G1PrintRegionLivenessInfo, G1SummarizeConcMark, G1SummarizeRSetStats, G1TraceConcRefinement, G1TraceEagerReclaimHumongousObjects, G1TraceStringSymbolTableScrubbing, GCLogFileSize, NumberOfGCLogFiles, PrintAdaptiveSizePolicy, PrintClassHistogramAfterFullGC, PrintClassHistogramBeforeFullGC, PrintCMSInitiationStatistics, PrintCMSStatistics, PrintFLSCensus, PrintFLSStatistics, PrintGCApplicationConcurrentTime, PrintGCApplicationStoppedTime, PrintGCCause, PrintGCDateStamps, PrintGCID, PrintGCTaskTimeStamps, PrintGCTimeStamps, PrintHeapAtGC, PrintHeapAtGCExtended, PrintJNIGCStalls, PrintOldPLAB, PrintParallelOldGCPhaseTimes, PrintPLAB, PrintPromotionFailure, PrintReferenceGC, PrintStringDeduplicationStatistics, PrintTaskqueue, PrintTenuringDistribution, PrintTerminationStats, PrintTLAB, TraceDynamicGCThreads, TraceMetadataHumongousAllocation, UseGCLogFileRotation, VerifySilently
On Linux kernels 2.6 and later, the JDK would include time spent waiting for IO completion as "CPU usage". During periods of heavy IO activity, this could result in misleadingly high values reported as CPU consumption in various tools like Flight Recorder and performance counters. This issue has been resolved.
Some Linux kernel versions (including, but not limited to, 3.13.0-121-generic and 4.4.0-81-generic) are known to contain an incorrect fix for a Linux kernel stack overflow issue (see CVE-2017-1000364). The incorrect fix can trigger crashes in the Java Virtual Machine. Upgrading the kernel to a version that includes the corrected fix addresses the problem.
This change enforces the unqualified name format checks for NameAndType strings as outlined in the JVM specification sections 4.4.6 and 4.2.2, meaning that some illegal names and descriptors that users may be utilizing in their class files will now be caught with a ClassFormatError. This includes format checking for all strings under non-referenced NameAndType entries. Users will see a change if they (A) are using Java classfile version 6 or below and have an illegal NameAndType descriptor with no Methodref or Fieldref reference to it; or (B) are using any Java classfile version and have an illegal NameAndType name with no Methodref or Fieldref reference to it.
In both (A) and (B) the users will now receive a ClassFormatError for those illegal strings, which is an enforcement of unqualified name formats as delineated in JVMS 4.2.2.
The current version of the Java Native Interface (JNI) has been updated due to the addition of new application programming interfaces to support Jigsaw. JNI_VERSION_9 was added with a value of 0x00090000, and CurrentVersion was changed to this new value.
The JVM has been fixed to check that the constant pool types JVM_CONSTANT_Methodref or JVM_CONSTANT_InterfaceMethodref are consistent with the type of method referenced. These checks are made during method resolution and are also checked for methods that are referenced by JVM_CONSTANT_MethodHandle.
If consistency checks fail an IncompatibleClassChangeError is thrown.
javac has never generated inconsistent constant pool entries, but some bytecode-generating software may. In many cases, if ASM is embedded in the application, upgrading to the latest version, ASM 5.1, resolves the exception. After upgrading ASM, be sure to replace all uses of deprecated functions with calls to the new functions, particularly the new variants of visitMethodInsn and Handle that take a boolean indicating whether the method is an interface method.
JDK 8 and below offered a client JVM and a server JVM for Windows 32-bit systems with the default being the client JVM. JDK 9 will offer only the server JVM.
The server JVM has better performance although it might require more resources. The change is made to reduce complexity and to benefit from the increased capabilities of computers.
The JNI function DetachCurrentThread has been added to the list of JNI functions that can safely be called with an exception pending. The HotSpot Virtual Machine has always supported this, as it reports that the exception occurred in a similar manner to the default handling of uncaught exceptions at the Java level. Other implementations are not obligated to do anything with the pending exception.
The VM Options "-Xoss", "-Xsqnopause", "-Xoptimize" and "-Xboundthreads" are obsolete in JDK 9 and are ignored. Use of these options will result in a warning being issued in JDK 9 and they may be removed completely in a future release.
The "-Xoss", "-Xsqnopause", and "-Xoptimize" options had already been silently ignored for a long time.
The VM Option "-Xboundthreads" was only needed on Solaris 8/9 (when using the T1 threading library).
The -XX:-JNIDetachReleasesMonitors flag requested that the VM run in a pre-JDK 6 compatibility mode with regard to not releasing monitors when a JNI attached thread detaches. This option is obsolete in JDK 9 and is ignored, as the VM always conforms to the JNI Specification and releases monitors. Use of this option will result in a warning being issued in JDK 9 and it may be removed completely in a future release.
The VM Options -XX:AdaptiveSizePausePolicy and -XX:ParallelGCRetainPLAB are obsolete in JDK 9 and are ignored. Use of these options will result in a warning being issued in JDK 9 and they may be removed completely in a future release. The VM Option -XX:AdaptiveSizePausePolicy has been unused for some time. The VM Option -XX:ParallelGCRetainPLAB was a diagnostic flag relating to garbage collector combinations that no longer exist.
When a large TLS (thread-local storage) size is set for threads, the JVM can fail with a stack overflow exception. The reason for this behavior is that the reaper thread was created with a small stack size of 32768 bytes. When a large TLS size is set, it steals space from the thread's stack, which eventually results in a stack overflow. This is a known glibc bug. To overcome this issue, we have introduced a workaround (jdk.lang.processReaperUseDefaultStackSize) with which the user can set the reaper thread's stack size to the default instead of 32768 bytes. This gives the reaper thread a bigger stack, so that with a large TLS size, such as 32k, the process will not fail. Users can set this system property either on the command line (with -D) or programmatically via System.setProperty.
The problem has been observed only when the JVM is started from JNI code in which TLS is declared using "__thread".
When dumping the heap in binary format, HPROF format 1.0.2 is always used now. Previously, format 1.0.1 was used for heaps smaller than 2GB. HPROF format 1.0.2 is also used by jhsdb jmap for the serviceability agent.
The jsadebugd command to start the remote debug server can now be launched from the common SA launcher, jhsdb. The new command to start the remote debug server is jhsdb debugd.
The Java runtime now uses the system zlib library (the zlib library installed on the underlying operating system) for its zlib compression support (for example, the deflation and inflation functionality in java.util.zip) on Solaris and Linux platforms.
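This change is transparent to applications: the java.util.zip API is unchanged regardless of which zlib implementation backs it. A minimal round-trip sketch (class name ZlibRoundTrip is ours) that exercises the deflation and inflation paths:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class ZlibRoundTrip {
    // Compresses and then decompresses text, returning the recovered string.
    public static String roundTrip(String text) throws Exception {
        byte[] input = text.getBytes(StandardCharsets.UTF_8);

        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] compressed = new byte[input.length + 64];  // zlib adds a small header/trailer
        int clen = deflater.deflate(compressed);
        deflater.end();

        Inflater inflater = new Inflater();
        inflater.setInput(compressed, 0, clen);
        byte[] restored = new byte[input.length];
        int rlen = inflater.inflate(restored);  // may throw DataFormatException
        inflater.end();

        return new String(restored, 0, rlen, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello zlib"));
    }
}
```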
The OS_VERSION property has been removed from the release file. Scripts or tools that read the release file may need to be updated to handle this change.
The REMOVEOUTOFDATEJRES feature does not work when the install is run by the LocalSystem user. The LocalSystem user is not a domain user and therefore does not have network access to get the list of out-of-date JREs.
Users running Internet Explorer Enhanced Security Configuration (ESC) on Windows Server 2008 R2 may have experienced issues installing Java in interactive mode. This issue was resolved in the 8u71 release. Installers executed in interactive mode will no longer appear to be stalled on ESC configurations.
Demos were removed from the package tar.Z bundle (JDK-7066713). There is a separate Demos & Samples bundle beginning with 7u2 b08 and 6u32 b04, but Solaris patches still contain SUNWj7dmo/SUNWj6dmo. The 64-bit packages are SUNWj7dmx/SUNWj6dmx.
Demo packages remain in the existing Solaris patches; however, their presence in a patch does not mean that they are installed. They will be patched only if the end user has them installed on the system.
http://docs.oracle.com/javase/7/docs/webnotes/install/solaris/solaris-jdk.html
The link above is to the Solaris OS install directions for the JDK. The SUNWj7dmx package is mentioned in the tar.Z portion of the directions. This is confusing to some because, according to the cited bug, the SUNWj7dmx package shouldn't be part of the tar.Z bundle.
Starting with the JDK 9 release, a Stage on Mac and Linux platforms will be initially filled using the Fill property of the Scene if its Fill is a Color. An average color, computed within the stops range, will be used if the Fill is a LinearGradient or RadialGradient. Previously, it was initially filled with WHITE, irrespective of the Fill in the Scene. This change in behavior will reduce the flashing that can be seen with a dark Scene background, but applications should be aware of this change in behavior so they can set an appropriate Fill color for their Scene.
The bug fix for JDK-8089861, which was first integrated in JDK 8u102, fixes a memory leak when Java objects are passed into JavaScript. Prior to JDK 8u102, the WebView JavaScript runtime held a strong reference to such bound objects, which prevented them from being garbage collected. After the fix for JDK-8089861, the WebView JavaScript runtime uses weak references to refer to bound Java objects. The specification was updated to make it clear that this is the intended behavior.
Applications which rely on the previously unspecified behavior might be affected by the updated behavior if the application does not hold a strong reference to an object passed to JavaScript. In such case, the Java object might be garbage collected prematurely. The solution is to modify the application to hold a strong reference in Java code for objects that should remain live after being passed into JavaScript.
The javax.rmi.CORBA.Util class provides methods that can be used by stubs and ties to perform common operations. It also acts as a factory for ValueHandlers. The javax.rmi.CORBA.ValueHandler interface provides services to support the reading and writing of value types to GIOP streams. The security awareness of these utilities has been enhanced with the introduction of a permission java.io.SerializablePermission("enableCustomValueHanlder"). This is used to establish a trust relationship between the users of the javax.rmi.CORBA.Util and javax.rmi.CORBA.ValueHandler APIs.
The required permission is "enableCustomValueHanlder" SerializablePermission. Third party code running with a SecurityManager installed, but not having the new permission while invoking Util.createValueHandler(), will fail with an AccessControlException.
This permission check behaviour can be overridden, in JDK8u and previous releases, by defining a system property, "jdk.rmi.CORBA.allowCustomValueHandler".
As such, external applications that explicitly call javax.rmi.CORBA.Util.createValueHandler require a configuration change to function when a SecurityManager is installed and neither of the following two requirements is met:
1. The java.io.SerializablePermission("enableCustomValueHanlder") is granted by the SecurityManager.
2. For applications running on JDK 8u and earlier, the system property "jdk.rmi.CORBA.allowCustomValueHandler" is defined and set to a value other than "false" (case insensitive).
Please note that the "enableCustomValueHanlder" typo will be corrected in the October 2016 releases. In those and future JDK releases, "enableCustomValueHandler" will be the correct SerializablePermission to use.
If the singleton ORB is configured with the system property org.omg.CORBA.ORBSingletonClass or the equivalent key in orb.properties, then the class must be visible to the system class loader. Previous releases incorrectly attempted to load the class using the Thread Context Class Loader (TCCL). An @implNote has been added to org.omg.CORBA.ORB to document the behavior.
The change does not impact the loading of ORB implementations configured with the system property org.omg.CORBA.ORBClass or the equivalent key in orb.properties. The ORB implementation configured with this property is loaded using the TCCL to allow for applications that bundle an ORB implementation with the application.
orb.idl and ir.idl have moved from the JDK lib directory to the include directory. Applications that use a CORBA IDL compiler in their build may need to change the include path from $JAVA_HOME/lib to $JAVA_HOME/include.
org.omg.CORBA.ORB specifies the search order used to locate an ORB's orb.properties file, and this includes searching ${java.home}/lib. The JDK 9 release includes a ${java.home}/conf directory as the location for properties files. Accordingly, ORB.init processing has been amended to include the ${java.home}/conf directory in its search path for an orb.properties file. The ${java.home}/conf directory is now the preferred location for an orb.properties file, in preference to ${java.home}/lib.
With one exception, keytool will always print a warning if the certificate, certificate request, or CRL it is parsing, verifying, or generating uses a weak algorithm or key. The exception is when a certificate is from an existing TrustedCertificateEntry, either in the keystore directly operated on or in the cacerts keystore when the -trustcacerts option is specified for the -importcert command; in that case, keytool will not print a warning if it is signed with a weak signature algorithm. For example, suppose the file cert contains a CA certificate signed with a weak signature algorithm. Both keytool -printcert -file cert and keytool -importcert -file cert -alias ca -keystore ks will print a warning, but after the last command imports it into the keystore, keytool -list -alias ca -keystore ks will no longer show a warning.
An algorithm or a key is weak if it matches the value of the jdk.certpath.disabledAlgorithms security property defined in the conf/security/java.security file.
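The effective value of this property can be inspected at runtime via java.security.Security. A small sketch (the class name DisabledAlgorithms is ours):

```java
import java.security.Security;

public class DisabledAlgorithms {
    // Returns the current value of the certpath disabled-algorithms property,
    // as loaded from conf/security/java.security (or overridden by the app).
    public static String certpathDisabled() {
        return Security.getProperty("jdk.certpath.disabledAlgorithms");
    }

    public static void main(String[] args) {
        System.out.println(certpathDisabled());
    }
}
```

On a stock JDK the returned list includes entries such as MD2 and size constraints like "RSA keySize < 1024".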
One new root certificate has been added:
ISRG Root X1
alias: letsencryptisrgx1
DN: CN=ISRG Root X1, O=Internet Security Research Group, C=US
Classes loaded from the extensions directory are no longer granted AllPermission by default. See JDK-8040059.
A custom java.security.Policy provider that was using the extensions mechanism may be depending on the policy grant statement that had previously granted it AllPermission. If the policy provider does anything that requires a permission check, the local policy file may need to be adjusted to grant those permissions. Also, custom policy providers are loaded by the system class loader. The classpath may need to be configured to allow the provider to be located.
When using a SecurityManager, the permissions required by JDK modules are granted by default and are not dependent on the policy.url properties that are set in the java.security file. This also applies if you are setting the java.security.policy system property with either the '=' or '==' option.
Two new root certificates have been added:
D-TRUST Root Class 3 CA 2 2009 alias: dtrustclass3ca2 DN: CN=D-TRUST Root Class 3 CA 2 2009, O=D-Trust GmbH, C=DE
D-TRUST Root Class 3 CA 2 EV 2009 alias: dtrustclass3ca2ev DN: CN=D-TRUST Root Class 3 CA 2 EV 2009, O=D-Trust GmbH, C=DE
Three new root certificates have been added:
IdenTrust Public Sector Root CA 1 alias: identrustpublicca DN: CN=IdenTrust Public Sector Root CA 1, O=IdenTrust, C=US
IdenTrust Commercial Root CA 1 alias: identrustcommercial DN: CN=IdenTrust Commercial Root CA 1, O=IdenTrust, C=US
IdenTrust DST Root CA X3 alias: identrustdstx3 DN: CN=DST Root CA X3, O=Digital Signature Trust Co.
This JDK release introduces a new restriction on how MD5 signed JAR files are verified. If the signed JAR file uses MD5, signature verification operations will ignore the signature and treat the JAR as if it were unsigned. This can potentially occur in the following types of applications that use signed JAR files:
The list of disabled algorithms is controlled via the security property, jdk.jar.disabledAlgorithms, in the java.security file. This property contains a list of disabled algorithms and key sizes for cryptographically signed JAR files.
To check if a weak algorithm or key was used to sign a JAR file, one can use the jarsigner binary that ships with this JDK. Running jarsigner -verify on a JAR file signed with a weak algorithm or key will print more information about the disabled algorithm or key.
For example, to check a JAR file named test.jar, use the following command: jarsigner -verify test.jar
If the file in this example was signed with a weak signature algorithm like MD5withRSA, this output would be displayed:
"The jar will be treated as unsigned, because it is signed with a weak algorithm that is now disabled. Re-run jarsigner with the -verbose option for more details."
More details can be seen with the verbose option: jarsigner -verify -verbose test.jar
The following output would be displayed:
- Signed by "CN=weak_signer"
Digest algorithm: MD5 (weak)
Signature algorithm: MD5withRSA (weak), 512-bit key (weak)
Timestamped by "CN=strong_tsa" on Mon Sep 26 08:59:39 CST 2016
Timestamp digest algorithm: SHA-256
Timestamp signature algorithm: SHA256withRSA, 2048-bit key
To address the issue, the JAR file will need to be re-signed with a stronger algorithm or key size. Alternatively, the restrictions can be reverted by removing the applicable weak algorithms or key sizes from the jdk.jar.disabledAlgorithms security property; however, this option is not recommended. Before re-signing affected JARs, the existing signature(s) should be removed from the JAR file. This can be done with the zip utility, as follows:
zip -d test.jar 'META-INF/*.SF' 'META-INF/*.RSA' 'META-INF/*.DSA'
Please periodically check the Oracle JRE and JDK Cryptographic Roadmap at http://java.com/cryptoroadmap for planned restrictions to signed JARs and other security components.
The OpenJDK 9 binary for Linux x64 contains an empty cacerts keystore. This prevents TLS connections from being established because there are no Trusted Root Certificate Authorities installed. You may see an exception like the following:

javax.net.ssl.SSLException: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty

As a workaround, users can set the javax.net.ssl.trustStore system property to use a different keystore. For example, the ca-certificates package on Oracle Linux 7 contains the set of Root CA certificates chosen by the Mozilla Foundation for use with the Internet PKI. This package installs a trust store at /etc/pki/java/cacerts, which can be used by OpenJDK 9.
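The property can be passed on the command line (-Djavax.net.ssl.trustStore=/etc/pki/java/cacerts) or set programmatically before any TLS code runs. A minimal sketch (the class name TrustStoreOverride is ours; the path is the Oracle Linux 7 location mentioned above and should be adjusted per distribution):

```java
public class TrustStoreOverride {
    // Point the JSSE default trust manager at an alternative trust store.
    // Must run before the first TLS handshake, since the default SSLContext
    // reads this property when it is initialized.
    public static void configure(String path) {
        System.setProperty("javax.net.ssl.trustStore", path);
    }

    public static void main(String[] args) {
        configure("/etc/pki/java/cacerts");
        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
    }
}
```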
Only the OpenJDK 64 bit Linux download is impacted. This issue does not apply to any Oracle JRE/JDK download.
Progress on open-sourcing the Oracle JDK Root CAs can be tracked through the issue JDK-8189131.
The following have been added to the security algorithm requirements for JDK implementations (key size in parentheses):

Signature: SHA256withDSA
KeyPairGenerator: DSA (2048), DiffieHellman (2048, 4096), RSA (4096)
AlgorithmParameterGenerator: DSA (2048), DiffieHellman (2048)
Cipher: AES/GCM/NoPadding (128), AES/GCM/PKCS5Padding (128)
SSLContext: TLSv1.1, TLSv1.2
TrustManagerFactory: PKIX
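Whether a given JDK provides these algorithms can be probed with the standard getInstance factory methods, each of which throws NoSuchAlgorithmException if the name is unsupported. A sketch (class name RequiredAlgorithms is ours, covering a subset of the list above):

```java
import java.security.KeyPairGenerator;
import java.security.Signature;
import javax.crypto.Cipher;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class RequiredAlgorithms {
    // Returns true if every probe succeeds; any missing algorithm throws.
    public static boolean allPresent() throws Exception {
        Signature.getInstance("SHA256withDSA");
        KeyPairGenerator.getInstance("DSA");
        KeyPairGenerator.getInstance("DiffieHellman");
        Cipher.getInstance("AES/GCM/NoPadding");
        SSLContext.getInstance("TLSv1.2");
        TrustManagerFactory.getInstance("PKIX");
        return true;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(allPresent());
    }
}
```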
This JDK release introduces new restrictions on how signed JAR files are verified. If the signed JAR file uses a disabled algorithm or key size less than the minimum length, signature verification operations will ignore the signature and treat the JAR as if it were unsigned. This can potentially occur in the following types of applications that use signed JAR files:
The list of disabled algorithms is controlled via a new security property, jdk.jar.disabledAlgorithms, in the java.security file. This property contains a list of disabled algorithms and key sizes for cryptographically signed JAR files.
The following algorithms and key sizes are restricted in this release:
1. MD2 (in either the digest or signature algorithm)
2. RSA keys less than 1024 bits
NOTE: We are planning to restrict MD5-based signatures in signed JARs in the January 2017 CPU.
To check if a weak algorithm or key was used to sign a JAR file, one can use the jarsigner binary that ships with this JDK. Running jarsigner -verify -J-Djava.security.debug=jar on a JAR file signed with a weak algorithm or key will print more information about the disabled algorithm or key.
For example, to check a JAR file named test.jar, use this command: jarsigner -verify -J-Djava.security.debug=jar test.jar
If the file in this example was signed with a weak signature algorithm like MD2withRSA, this output would be seen:

jar: beginEntry META-INF/my_sig.RSA
jar: processEntry: processing block
jar: processEntry caught: java.security.SignatureException: Signature check failed. Disabled algorithm used: MD2withRSA
jar: done with meta!
The updated jarsigner command will exit with this warning printed to standard output: "Signature not parsable or verifiable. The jar will be treated as unsigned. The jar may have been signed with a weak algorithm that is now disabled. For more information, rerun jarsigner with debug enabled (-J-Djava.security.debug=jar)"
To address the issue, the jar file will need to be re-signed with a stronger algorithm or key size. Alternatively, the restrictions can be reverted by removing the applicable weak algorithms or key sizes from the jdk.jar.disabledAlgorithms security property; however, this option is not recommended. Before re-signing affected JARs, the existing signature(s) should be removed from the JAR. This can be done with the zip utility, as follows:
zip -d test.jar 'META-INF/*.SF' 'META-INF/*.RSA' 'META-INF/*.DSA'
Please periodically check the Oracle JRE and JDK Cryptographic Roadmap at http://java.com/cryptoroadmap for planned restrictions to signed JARs and other security components. In particular, please note the current plan to restrict MD5-based signatures in signed JARs in the January 2017 CPU.
To test if your JARs have been signed with MD5, add "MD5" to the jdk.jar.disabledAlgorithms security property, for example:
jdk.jar.disabledAlgorithms=MD2, MD5, RSA keySize < 1024
and then run jarsigner -verify -J-Djava.security.debug=jar on your JARs as described above.
DSA keys less than 1024 bits are not strong enough and should be restricted in certification path building and validation. Accordingly, DSA keys less than 1024 bits have been deactivated by default by adding "DSA keySize < 1024" to the "jdk.certpath.disabledAlgorithms" security property. Applications can update this restriction in the security property ("jdk.certpath.disabledAlgorithms") and permit smaller key sizes if really needed (for example, "DSA keySize < 768").
The implementation of the checkPackageAccess and checkPackageDefinition methods of java.lang.SecurityManager now automatically restricts all non-exported packages of JDK modules loaded by the platform class loader or its ancestors. This is in addition to any packages listed in the package.access and package.definition security properties. A "non-exported package" refers to a package that is not exported to all modules: specifically, a package that either is not exported at all by its containing module or is exported in a qualified fashion by its containing module.
If your application is running with a SecurityManager, it will need to be granted an appropriate accessClassInPackage.{package} RuntimePermission to access any internal JDK APIs (in addition to specifying an appropriate --add-exports option). If the application has not been granted access, a SecurityException will be thrown.
Note that an upgraded JDK module may have a different set of internal packages than the corresponding system module, and therefore may require a different set of permissions.
The package.access and package.definition properties no longer contain internal JDK packages that are not exported. Therefore, if an application calls Security.getProperty("package.access"), it will not include the built-in non-exported JDK packages.
Also, when running under a SecurityManager, an attempt to access a type in a restricted package that does not contain any classes now throws a ClassNotFoundException instead of an AccessControlException. For example, loading sun.Foo now throws a ClassNotFoundException instead of an AccessControlException because there are no classes in the sun package.
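The resulting exception can be observed directly; note that without a SecurityManager installed, loading a nonexistent type such as sun.Foo has always produced ClassNotFoundException, so the change described above only aligns the SecurityManager case with that behavior. A sketch (class name RestrictedLookup is ours):

```java
public class RestrictedLookup {
    // Classifies the failure mode of loading a class by name.
    public static String failureKind(String name) {
        try {
            Class.forName(name);
            return "loaded";
        } catch (ClassNotFoundException e) {
            return "ClassNotFoundException";
        } catch (SecurityException e) {
            return "SecurityException";
        }
    }

    public static void main(String[] args) {
        System.out.println(failureKind("sun.Foo"));
    }
}
```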
An error was corrected for PBE using 256-bit AES ciphers, such that the derived key may be different from, and not equivalent to, keys previously derived from the same password.
To improve security, the default key size for the RSA and DiffieHellman KeyPairGenerator implementations and the DiffieHellman AlgorithmParameterGenerator implementation has been increased from 1024 bits to 2048 bits. The default key size for the DSA KeyPairGenerator and AlgorithmParameterGenerator implementations remains at 1024 bits to preserve compatibility with applications that are using keys of that size with the SHA1withDSA signature algorithm.
With increases in computing power and advances in cryptography, the minimum recommended key size increases over time. Therefore, future versions of the platform may increase the default size.
For signature generation, if the security strength of the digest algorithm is weaker than the security strength of the key used to sign the signature (e.g. using (2048, 256)-bit DSA keys with SHA1withDSA signature), the operation will fail with the error message: "The security strength of SHA1 digest algorithm is not sufficient for this key size."
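The provider default is what applies when a KeyPairGenerator is used without an explicit initialize() call, so it can be observed directly. A sketch (class name DefaultKeySize is ours); per the note above, the default is at least 2048 bits, and future releases may raise it further:

```java
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPublicKey;

public class DefaultKeySize {
    // Generates an RSA key pair without initializing the generator,
    // so the provider's default key size is used, and reports its size.
    public static int defaultRsaBits() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        RSAPublicKey pub = (RSAPublicKey) kpg.generateKeyPair().getPublic();
        return pub.getModulus().bitLength();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(defaultRsaBits());
    }
}
```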
The Comodo "UTN - DATACorp SGC" root CA certificate has been removed from the cacerts file.
As of JDK 9, the default keystore type (format) is "pkcs12" which is based on the RSA PKCS12 Personal Information Exchange Syntax Standard. Previously, the default keystore type was "jks" which is a proprietary format. Other keystore formats are available, such as "jceks" which is an alternate proprietary keystore format with stronger encryption than "jks" and "pkcs11", which is based on the RSA PKCS11 Standard and supports access to cryptographic tokens such as hardware security modules and smartcards.
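The default type comes from the keystore.type entry in the java.security configuration and can be queried through the KeyStore API. A minimal sketch (class name DefaultKeyStoreType is ours):

```java
import java.security.KeyStore;

public class DefaultKeyStoreType {
    // Returns the JVM's default keystore type, as configured by the
    // keystore.type entry in conf/security/java.security.
    public static String defaultType() {
        return KeyStore.getDefaultType();
    }

    public static void main(String[] args) {
        System.out.println(defaultType());
    }
}
```

On JDK 9 and later this returns "pkcs12"; on JDK 8 and earlier it returned "jks".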
Due to the more rigorous procedure of reading a keystore content, some keystores (particularly, those created with old versions of the JDK or with a JDK from other vendors) might need to be regenerated.
The following procedure can be used to import the keystore:
Before you start, create a backup of your keystore. For example, if your keystore file is /DIR/KEYSTORE, make a copy of it:
cp /DIR/KEYSTORE /DIR/KEYSTORE.BK
Download an older release of the JDK, prior to CPU 17_04, and install it in a separate location; for example: 6u161, 7u151, or 8u141. Suppose that older JDK is installed in the directory /JDK8U141.
Make sure that the keystore can be successfully read with the keytool from that older directory. For example, if the keystore file is located in /DIR/KEYSTORE, the following command should successfully list its content:
/JDK8U141/bin/keytool -list -keystore /DIR/KEYSTORE
Import the keystore. For example:
/JDK8U141/bin/keytool -importkeystore \
-srckeystore /DIR/KEYSTORE \
-srcstoretype JCEKS \
-srcstorepass PASSWORD \
-destkeystore /DIR/KEYSTORE.NEW \
-deststoretype JCEKS \
-deststorepass PASSWORD
Verify that the newly created keystore is correct. At the very least, make sure that the keystore can be read with keytool from a newer JDK:
/NEW_JDK/bin/keytool -list -keystore /DIR/KEYSTORE.NEW
After successful verification, replace the old keystore with the new one:
mv /DIR/KEYSTORE.NEW /DIR/KEYSTORE
Keep the backup copy of the keystore at least until you are sure the imported keystore is correct.
A new constraint named 'usage' has been added to the 'jdk.certpath.disabledAlgorithms' security property, that when set, restricts the algorithm if it is used in a certificate chain for the specified usage(s). Three usages are initially supported: 'TLSServer' for restricting authentication of TLS server certificate chains, 'TLSClient' for restricting authentication of TLS client certificate chains, and 'SignedJAR' for restricting certificate chains used with signed JARs. This should be used when disabling an algorithm for all usages is not practical. The usage type follows the keyword and more than one usage type can be specified with a whitespace delimiter. For example, to disable SHA1 for TLS server and client certificate chains, add the following to the property: "SHA1 usage TLSServer TLSClient"
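The property can also be adjusted at runtime, before any certification path validation takes place, via java.security.Security. A minimal sketch appending the usage constraint from the example above (the "MD2" fallback value is for illustration only):

```java
import java.security.Security;

public class DisabledAlgsDemo {
    public static void main(String[] args) {
        String name = "jdk.certpath.disabledAlgorithms";
        String current = Security.getProperty(name);
        if (current == null || current.isEmpty()) {
            current = "MD2";                     // fallback for illustration only
        }
        // Must run before any certification path validation takes place
        Security.setProperty(name, current + ", SHA1 usage TLSServer TLSClient");
        System.out.println(
            Security.getProperty(name).endsWith("SHA1 usage TLSServer TLSClient"));
    }
}
```

Editing the java.security file itself is the more common way to make this change persistent.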
The 'denyAfter' constraint has been added to the 'jdk.jar.disabledAlgorithms' security property. When set, it restricts the specified algorithm if it is used in a signed JAR after the specified date, as follows:
a. If the JAR is not timestamped, it will be restricted (treated as unsigned) after the specified date.
b. If the JAR is timestamped, it will not be restricted if it was timestamped before the specified date.
For example, to restrict usage of SHA1 in jar files signed after January 1, 2018, add the following to the property: "SHA1 denyAfter 2018-01-01".
Applications which use static ProtectionDomain objects (created using the 2-arg constructor) with an insufficient set of permissions may now get an AccessControlException with this fix. They should either replace the static ProtectionDomain objects with dynamic ones (using the 4-arg constructor) whose permission set will be expanded by the current Policy or construct the static ProtectionDomain object with all the necessary permissions.
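The difference between the two constructors can be observed directly; since JDK 9, ProtectionDomain.staticPermissionsOnly() reports which kind was created. A minimal sketch (the file URL is a placeholder):

```java
import java.net.URL;
import java.security.CodeSource;
import java.security.Permissions;
import java.security.ProtectionDomain;
import java.security.cert.Certificate;

public class PdDemo {
    public static void main(String[] args) throws Exception {
        CodeSource cs = new CodeSource(new URL("file:/demo"), (Certificate[]) null);
        Permissions perms = new Permissions();
        // 2-arg constructor: a static ProtectionDomain; its permission set is final
        ProtectionDomain staticPd = new ProtectionDomain(cs, perms);
        // 4-arg constructor: a dynamic ProtectionDomain; the current Policy may
        // expand its permission set at access-check time
        ProtectionDomain dynamicPd = new ProtectionDomain(cs, perms, null, null);
        System.out.println(staticPd.staticPermissionsOnly() + " "
                + dynamicPd.staticPermissionsOnly());
    }
}
```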
Default signature algorithms for jarsigner and keytool are determined by both the algorithm and the key size of the private key, making use of comparable strengths as defined in Tables 2 and 3 of NIST SP 800-57 Part 1 Rev. 4. Specifically, for a DSA or RSA key with a key size greater than 7680 bits, or an EC key with a key size greater than or equal to 512 bits, SHA-512 will be used as the hash function for the signature algorithm. For a DSA or RSA key with a key size greater than 3072 bits, or an EC key with a key size greater than or equal to 384 bits, SHA-384 will be used. Otherwise, SHA-256 will be used. These values may change in the future.
For DSA keys, the default key size for keytool has changed from 1024 bits to 2048 bits.
There are a few potential compatibility risks associated with these changes:
If you use jarsigner to sign JARs with the new defaults, releases earlier than this one might not support the stronger defaults and will not be able to verify the JAR. jarsigner -verify on such a release will output the following error:
jar is unsigned. (signatures missing or not parsable)
If you add -J-Djava.security.debug=jar to the jarsigner command line, the cause will be output:
jar: processEntry caught: java.security.NoSuchAlgorithmException: SHA256withDSA Signature not available
If compatibility with earlier releases is important, you can, at your own risk, use the -sigalg option of jarsigner and specify the weaker SHA1withDSA algorithm.
If you use a PKCS11 keystore, the SunPKCS11 provider may not support the SHA256withDSA algorithm. jarsigner and some keytool commands may fail with the following exception if PKCS11 is specified with the -storetype option, for example:
keytool error: java.security.InvalidKeyException: No installed provider supports this key: sun.security.pkcs11.P11Key$P11PrivateKey
A similar error may occur if you are using NSS with the SunPKCS11 provider. The workaround is to use the -sigalg option of keytool and specify SHA1withDSA.
If you have a script that uses the default key size of keytool to generate a DSA keypair but then subsequently specifies a specific signature algorithm, for example:
keytool -genkeypair -keyalg DSA -keystore keystore -alias mykey ...
keytool -certreq -sigalg SHA1withDSA -keystore keystore -alias mykey ...
it will fail with one of the following exceptions, because the new 2048-bit keysize default is too strong for SHA1withDSA:
keytool error: java.security.InvalidKeyException: The security strength of SHA-1 digest algorithm is not sufficient for this key size
keytool error: java.security.InvalidKeyException: DSA key must be at most 1024 bits
You will see a similar error if you use jarsigner to sign JARs using the new 2048-bit DSA key with -sigalg SHA1withDSA set.
The workaround is to remove the -sigalg option and use the stronger SHA256withDSA default or, at your own risk, use the -keysize option of keytool to create new keys of a smaller key size (1024).
See JDK-8057810, JDK-8056174 and JDK-8138766 for more details.
In order to support longer key lengths and stronger signature algorithms, a new JCE Provider Code Signing root certificate authority has been created and its certificate added to Oracle JDK. New JCE provider code signing certificates issued from this CA will be used to sign JCE providers at a date in the near future. By default, new requests for JCE provider code signing certificates will be issued from this CA.
Existing certificates from the current JCE provider code signing root will continue to validate. However, this root CA may be disabled at some point in the future. We recommend that new certificates be requested and existing provider JARs be re-signed.
For details on the JCE provider signing process, please refer to the "How to Implement a Provider in the Java Cryptography Architecture" documentation.
The javax.security.auth.Subject class now prohibits null values in its constructors and in modification operations on the Principal and credential Set objects returned by Subject methods.
For the non-default constructor, the principals, pubCredentials, and privCredentials parameters may not be null, nor may any element within the Sets be null. A NullPointerException will be thrown if null values are provided.
For operations performed on the Set objects returned by getPrincipals(), getPrivateCredentials(), and getPublicCredentials(), a NullPointerException is thrown if a null element is added, queried, or removed.
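The new behavior can be seen with a one-line modification attempt on the principal set:

```java
import javax.security.auth.Subject;

public class SubjectNullDemo {
    public static void main(String[] args) {
        Subject s = new Subject();
        try {
            // Null elements in the returned Set are now rejected
            s.getPrincipals().add(null);
            System.out.println("accepted");
        } catch (NullPointerException e) {
            System.out.println("NullPointerException");
        }
    }
}
```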
The jarsigner tool has been enhanced to show details of the algorithms and keys used to generate a signed JAR file and will also provide an indication if any of them are considered weak.
Specifically, when "jarsigner -verify -verbose filename.jar" is called, a separate section is printed out showing information of the signature and timestamp (if it exists) inside the signed JAR file, even if it is treated as unsigned for various reasons. If any algorithm or key used is considered weak, as specified in the Security property jdk.jar.disabledAlgorithms, it will be labeled with "(weak)".
For example:
- Signed by "CN=weak_signer"
Digest algorithm: MD2 (weak)
Signature algorithm: MD2withRSA (weak), 512-bit key (weak)
Timestamped by "CN=strong_tsa" on Mon Sep 26 08:59:39 CST 2016
Timestamp digest algorithm: SHA-256
Timestamp signature algorithm: SHA256withRSA, 2048-bit key
SecureRandom objects are safe for use by multiple concurrent threads. A SecureRandom service provider can advertise that it is thread-safe by setting the service provider attribute "ThreadSafe" to "true" when registering the provider. Otherwise, the SecureRandom class will synchronize access to the following SecureRandomSpi methods: SecureRandomSpi.engineSetSeed(byte[]), SecureRandomSpi.engineNextBytes(byte[]), SecureRandomSpi.engineNextBytes(byte[], SecureRandomParameters), SecureRandomSpi.engineGenerateSeed(int), and SecureRandomSpi.engineReseed(SecureRandomParameters).
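Advertising the attribute is done when the provider registers its SecureRandom service. A minimal sketch, where the provider name, algorithm name, and SPI class are all hypothetical:

```java
import java.security.Provider;

public class ThreadSafeRngProvider extends Provider {
    public ThreadSafeRngProvider() {
        // Provider name, version, and SPI class are hypothetical placeholders
        super("DemoProvider", "1.0", "Demo SecureRandom provider");
        put("SecureRandom.DemoPRNG", "com.example.DemoPrngSpi");
        // Advertise that the SPI handles its own synchronization, so the
        // SecureRandom class will not serialize access to the engine methods
        put("SecureRandom.DemoPRNG ThreadSafe", "true");
    }

    public static void main(String[] args) {
        Provider p = new ThreadSafeRngProvider();
        System.out.println(p.getProperty("SecureRandom.DemoPRNG ThreadSafe"));
    }
}
```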
More checks are added to the DER encoding parsing code to catch various encoding errors. In addition, signatures which contain constructed indefinite length encoding will now lead to IOException during parsing. Note that signatures generated using JDK default providers are not affected by this change.
Keytool now prints out the key algorithm and key size of a certificate's public key, in the form "Subject Public Key Algorithm: <size>-bit RSA key", where <size> is the key size in bits (for example, 2048).
As part of the work for JEP 220 "Modular Run-Time Images", the security provider loading mechanism has been enhanced to support modular providers through java.util.ServiceLoader. The default JDK security providers have been refactored into modular providers and are registered in the java.security file by provider name instead of provider class name. Providers that have not been reworked into modules should still be registered by provider class name in the java.security file.
SecureRandom.PKCS11 from the SunPKCS11 provider is disabled by default on Solaris because the native PKCS11 implementation has poor performance and is not recommended. If your application requires SecureRandom.PKCS11, you can re-enable it by removing "SecureRandom" from the disabledMechanisms list in conf/security/sunpkcs11-solaris.cfg.
Performance improvements have also been made in the java.security.SecureRandom class. Improvements in the JDK implementation have allowed synchronization to be removed from the java.security.SecureRandom.nextBytes(byte[] bytes) method.
The "Sonera Class1 CA" root CA certificate has been removed from the cacerts file.
A new -tsadigestalg option is added to jarsigner to specify the message digest algorithm that is used to generate the message imprint to be sent to the TSA server. In older JDK releases, the message digest algorithm used was SHA-1. If this new option is not specified, SHA-256 will be used on JDK 7 Updates and later JDK family versions. On JDK 6 Updates, SHA-1 will remain the default but a warning will be printed to the standard output stream.
If a JAR file was signed with a timestamp while the signer certificate was still valid, it should remain valid even after the signer certificate expires. However, jarsigner will incorrectly show a warning that the signer's certificate chain is not validated. This will be fixed in a future release.
In this update, MD5 is added to the jdk.certpath.disabledAlgorithms security property, and the use of the MD5 hash algorithm in certification path processing is restricted in the Oracle JRE. Applications using certificates signed with an MD5 hash algorithm should upgrade their certificates as soon as possible.
Note that this is a behavior change of the Oracle JRE. It is not guaranteed that the security property (jdk.certpath.disabledAlgorithms) is examined and used by other JRE implementations.
Eight new root certificates have been added:
QuoVadis Root CA 1 G3 alias: quovadisrootca1g3 DN: CN=QuoVadis Root CA 1 G3, O=QuoVadis Limited, C=BM
QuoVadis Root CA 2 G3 alias: quovadisrootca2g3 DN: CN=QuoVadis Root CA 2 G3
QuoVadis Root CA 3 G3 alias: quovadisrootca3g3 DN: CN=QuoVadis Root CA 3 G3, O=QuoVadis Limited, C=BM
DigiCert Assured ID Root G2 alias: digicertassuredidg2 DN: CN=DigiCert Assured ID Root G2, OU=www.digicert.com, O=DigiCert Inc, C=US
DigiCert Assured ID Root G3 alias: digicertassuredidg3 DN: CN=DigiCert Assured ID Root G3, OU=www.digicert.com, O=DigiCert Inc, C=US
DigiCert Global Root G2 alias: digicertglobalrootg2 DN: CN=DigiCert Global Root G2, OU=www.digicert.com, O=DigiCert Inc, C=US
DigiCert Global Root G3 alias: digicertglobalrootg3 DN: CN=DigiCert Global Root G3, OU=www.digicert.com, O=DigiCert Inc, C=US
DigiCert Trusted Root G4 alias: digicerttrustedrootg4 DN: CN=DigiCert Trusted Root G4, OU=www.digicert.com, O=DigiCert Inc, C=US
The JDK uses the Java Cryptography Extension (JCE) Jurisdiction Policy files to configure cryptographic algorithm restrictions. Previously, the Policy files in the JDK placed limits on various algorithms. This release ships with both the limited and unlimited jurisdiction policy files, with unlimited being the default. The behavior can be controlled via the new crypto.policy Security property found in the <java-home>/lib/java.security file. Refer to that file for more information on this property.
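The effective policy can be checked at runtime: Cipher.getMaxAllowedKeyLength reports Integer.MAX_VALUE when the unlimited policy is active. A minimal sketch:

```java
import javax.crypto.Cipher;
import java.security.Security;

public class CryptoPolicyCheck {
    public static void main(String[] args) throws Exception {
        // "unlimited" is the new default; "limited" restores the old behavior
        System.out.println("crypto.policy=" + Security.getProperty("crypto.policy"));
        // Integer.MAX_VALUE (2147483647) indicates the unlimited policy
        System.out.println("AES max key bits: " + Cipher.getMaxAllowedKeyLength("AES"));
    }
}
```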
To meet cryptographic regulations in previous releases, 3rd Party Java Cryptography Extension (JCE) Provider code is required to be packaged in a JAR file and properly signed by a public key/certificate issued by a JCE Certificate Authority (CA). This requirement also exists in JDK 9.
JDK 9 introduces the concept of modules along with some new file formats to support features such as custom images, but signed modules (e.g., signed JMOD files) are not currently supported. The jlink tool also does not preserve signature information when creating such custom run-time images. Thus, all 3rd party JCE providers must still be packaged as either signed JAR or signed modular JAR files, and deployed by placing them either on the class path (unnamed modules) or module path (automatic/named modules).
This release introduces several changes to the JCE Jurisdiction Policy files.
Previously, to allow unlimited cryptography in the JDK, separate JCE Jurisdiction Policy files had to be downloaded and installed. The download and install steps are no longer necessary.
Both the strong but "limited" (traditional default) and the "unlimited" policy files are included in this release.
A new Security property (crypto.policy) was introduced to control which policy files are active. The new default is "unlimited".
The files are now user-editable to allow for customized Policy configuration.
Please see the Java Cryptography Architecture (JCA) Reference Guide for more information.
Also see:
JDK-8186093: java.security configuration file still says that "strong but limited" is the default value
Java SE KeyStore does not allow multiple certificates with the same alias (see http://docs.oracle.com/javase/8/docs/api/java/security/KeyStore.html).
However, on Windows, multiple certificates stored in one keystore are allowed to have non-unique friendly names.
The fix for JDK-6483657 makes it possible to operate on such non-uniquely named certificates through the Java API by artificially making the visible aliases unique.
Please note, this fix does not enable creating same-named certificates with the Java API. It only allows you to deal with same-named certificates that were added to the keystore by 3rd party tools.
It is still recommended that your design not use multiple certificates with the same name. In particular, the following sentence will not be removed from the Java documentation: "In order to avoid problems, it is recommended not to use aliases in a KeyStore that only differ in case." http://docs.oracle.com/javase/8/docs/api/java/security/KeyStore.html
The SunPKCS11 provider has re-enabled support for various message digest algorithms such as MD5, SHA1, and SHA2 on Solaris. If you are using Solaris 10 and experience a CloneNotSupportedException or PKCS11 error CKR_SAVED_STATE_INVALID, you should verify and apply the following patches, or newer versions of them: 150531-02 on SPARC, 150636-01 on x86.
For the SSL/TLS/DTLS protocols, the security strength of 3DES cipher suites is not sufficient for persistent connections. By adding "3DES_EDE_CBC" to the "jdk.tls.legacyAlgorithms" security property by default in the JDK, 3DES cipher suites will not be negotiated unless there are no other candidates when establishing SSL/TLS/DTLS connections.
At their own risk, applications can update this restriction in the security property ("jdk.tls.legacyAlgorithms") if 3DES cipher suites are really preferred.
Diffie-Hellman keys less than 1024 bits are considered too weak to use in practice and should be restricted by default in SSL/TLS/DTLS connections. Accordingly, Diffie-Hellman keys less than 1024 bits have been disabled by default by adding DH keySize < 1024 to the jdk.tls.disabledAlgorithms security property in the java.security file. Although it is not recommended, administrators can update the security property (jdk.tls.disabledAlgorithms) and permit smaller key sizes (for example, by setting DH keySize < 768).
To improve the default strength of EC cryptography, EC keys less than 224 bits have been deactivated in certification path processing (via the "jdk.certpath.disabledAlgorithms" Security Property) and SSL/TLS/DTLS connections (via the "jdk.tls.disabledAlgorithms" Security Property) in JDK. Applications can update this restriction in the Security Properties and permit smaller key sizes if really needed (for example, "EC keySize < 192").
EC curves less than 256 bits are removed from the SSL/TLS/DTLS implementation in JDK. The new System Property, "jdk.tls.namedGroups", defines a list of enabled named curves for EC cipher suites in order of preference. If an application needs to customize the default enabled EC curves or the curves preference, please update the System Property accordingly. For example:
jdk.tls.namedGroups="secp256r1, secp384r1, secp521r1"
Note that the default enabled or customized EC curves follow the algorithm constraints. For example, the customized EC curves cannot re-activate the disabled EC keys defined by the Java Security Properties.
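Because jdk.tls.namedGroups is a System Property, it can also be set programmatically, as long as this happens before the first TLS handshake. A minimal sketch:

```java
public class NamedGroupsDemo {
    public static void main(String[] args) {
        // Must be set before the JSSE provider performs its first handshake
        System.setProperty("jdk.tls.namedGroups", "secp256r1, secp384r1, secp521r1");
        System.out.println(System.getProperty("jdk.tls.namedGroups"));
    }
}
```

Setting it on the command line with -Djdk.tls.namedGroups=... is equivalent and avoids any ordering concerns.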
Recent JDK updates introduced an issue for applications that depend on having a delayed provider selection mechanism. The issue was introduced in JDK 8u71, JDK 7u95 and JDK 6u111. The main error seen corresponded to an exception like the following:
handling exception: javax.net.ssl.SSLProtocolException: Unable to process PreMasterSecret, may be too big
A recent change from the JDK-8148516 fix can cause issues for some TLS servers. The problem originates from an IllegalArgumentException thrown by the TLS handshaker code:
java.lang.IllegalArgumentException: System property jdk.tls.namedGroups(null) contains no supported elliptic curves
The issue can arise when the server doesn't have elliptic curve cryptography support to handle an elliptic curve name extension field (if present). Users are advised to upgrade to this release. By default, JDK 7 Updates and later JDK families ship with the SunEC security provider which provides elliptic curve cryptography support. Those releases should not be impacted unless security providers are modified.
The MD5withRSA signature algorithm is now considered insecure and should no longer be used. Accordingly, MD5withRSA has been deactivated by default in the Oracle JSSE implementation by adding "MD5withRSA" to the "jdk.tls.disabledAlgorithms" security property. Now, both TLS handshake messages and X.509 certificates signed with MD5withRSA algorithm are no longer acceptable by default. This change extends the previous MD5-based certificate restriction ("jdk.certpath.disabledAlgorithms") to also include handshake messages in TLS version 1.2. If required, this algorithm can be reactivated by removing "MD5withRSA" from the "jdk.tls.disabledAlgorithms" security property.
The requirement to have the Authority Key Identifier (AKID) and Subject Key Identifier (SKID) fields matching when building X509 certificate chains has been modified for some cases.
SunJSSE allows SHA224 as an available signature and hash algorithm for TLS 1.2 connections. However, the current implementation of SunMSCAPI does not support SHA224 yet. This can cause problems if SHA224 and SunMSCAPI private keys are used at the same time.
To mitigate the problem, we remove SHA224 from the default support list if SunMSCAPI is enabled.
Ephemeral DH keys less than 768 bits are deactivated in JDK. New algorithm restriction "DH keySize < 768" is added to Security Property "jdk.tls.disabledAlgorithms".
In TLS, a ciphersuite defines a specific set of cryptography algorithms used in a TLS connection. JSSE maintains a prioritized list of ciphersuites. In this update, GCM-based cipher suites are configured as the most preferable default cipher suites in the SunJSSE provider.
In the SunJSSE provider, the following ciphersuites are now the most preferred by default:
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
TLS_DHE_DSS_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
TLS_DHE_DSS_WITH_AES_128_GCM_SHA256
Note that this is a behavior change of the SunJSSE provider in the JDK; it is not guaranteed to be examined and used by other JSSE providers. There is no guarantee that the cipher suite priorities will remain the same in future updates or releases.
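The current default preference order can be inspected through the JSSE API; the exact list and order vary by JDK version and provider. A minimal sketch printing the first few suites:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class CipherOrder {
    public static void main(String[] args) throws Exception {
        SSLParameters params = SSLContext.getDefault().getDefaultSSLParameters();
        String[] suites = params.getCipherSuites();
        // Suites are returned in preference order, most preferred first
        for (int i = 0; i < Math.min(3, suites.length); i++) {
            System.out.println(suites[i]);
        }
    }
}
```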
After this change, besides implementing the necessary methods (initialize, login, logout, commit, abort), any login module must implement the LoginModule interface. Otherwise, a LoginException will be thrown when the login module is used.
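A minimal skeleton of a conforming login module looks like the following; the class name is hypothetical and the method bodies are stubs:

```java
import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;

// The "implements LoginModule" clause is now mandatory; merely declaring
// the five methods without it is no longer sufficient.
public class DemoLoginModule implements LoginModule {
    public void initialize(Subject subject, CallbackHandler handler,
                           Map<String, ?> sharedState, Map<String, ?> options) { }
    public boolean login()  throws LoginException { return true; }
    public boolean commit() throws LoginException { return true; }
    public boolean abort()  throws LoginException { return false; }
    public boolean logout() throws LoginException { return true; }

    public static void main(String[] args) {
        System.out.println(new DemoLoginModule() instanceof LoginModule);
    }
}
```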
The secure validation mode of the XML Signature implementation has been enhanced to restrict RSA and DSA keys less than 1024 bits by default, as they are no longer secure enough for digital signatures. Additionally, a new security property named jdk.xml.dsig.SecureValidationPolicy has been added to the java.security file and can be used to control the different restrictions enforced when the secure validation mode is enabled. The secure validation mode is enabled either by setting the XML signature property org.jcp.xml.dsig.secureValidation to true with the javax.xml.crypto.XMLCryptoContext.setProperty method, or by running the code with a SecurityManager.
If an XML Signature is generated or validated with a weak RSA or DSA key, an XMLSignatureException will be thrown with the message "RSA keys less than 1024 bits are forbidden when secure validation is enabled" or "DSA keys less than 1024 bits are forbidden when secure validation is enabled".
The XML Digital Signature APIs (the javax.xml.crypto package and subpackages) have been enhanced to better support generics, as follows: Collection and Iterator parameters and return types have been changed to parameterized types, and the javax.xml.crypto.NodeSetData interface has been changed to a generic type that implements Iterable so that it can be used in for-each loops.
An interoperability issue was found between Java and the native Kerberos implementation on BSD (including macOS) regarding the kdc_timeout setting in krb5.conf: Java interpreted it as milliseconds while BSD interprets it as seconds when no unit is specified. This code change adds support for the "s" (seconds) unit. Therefore, if the timeout is 5 seconds, Java accepts both "5000" and "5s". Customers concerned about interoperability between Java and BSD should use "5s".
This JDK release introduces some changes to how Kerberos requests are handled when a security manager is present.
Note that if a security manager is installed while a KerberosPrincipal is being created, a ServicePermission must be granted and the service principal of the permission must minimally be inside the KerberosPrincipal's realm. For example, if the result of new KerberosPrincipal("user") is user@EXAMPLE.COM, then a ServicePermission with service principal host/www.example.com@EXAMPLE.COM (and any action) must be granted.
Also note that if a single GSS-API principal entity that contains a Kerberos name element without providing its realm is being created via the org.ietf.jgss.GSSName interface and a security manager is installed, then this release introduces a new requirement. A javax.security.auth.kerberos.ServicePermission must be granted, and the service principal of the permission must minimally be inside the Kerberos name element's realm. For example, if the result of GSSManager.createName("user", NT_USER_NAME) contains a Kerberos name element user@EXAMPLE.COM, then a ServicePermission with service principal host/www.example.com@EXAMPLE.COM (and any action) must be granted. Otherwise, the creation will throw a GSSException containing the GSSException.FAILURE error code.
The hash algorithm used in the Kerberos 5 replay cache file (rcache) is updated from MD5 to SHA256 with this change. This is also the algorithm used by MIT krb5-1.15. This change is interoperable with earlier releases of MIT krb5, which means Kerberos 5 acceptors from JDK 9 and MIT krb5-1.14 can share the same rcache file.
A new system property named jdk.krb5.rcache.useMD5 is introduced. If the system property is set to "true", JDK 9 will still use the MD5 hash algorithm in rcache. This is useful when both of the following conditions are true: 1) the system has a very coarse clock and has to depend on hash values in replay attack detection, and 2) interoperability with earlier versions of JDK for rcache files is required. The default value of this system property is "false".
The end times for native TGTs (ticket-granting tickets) are now compared with UTC time stamps.
javac was erroneously accepting receiver parameters in annotation methods. This implied that test cases like the one below were being accepted:
@interface MethodRun {
int value(MethodRun this);
}
JLS 8 (see §9.6.1) doesn't allow any formal parameters in annotation methods; this extends to receiver parameters. More specifically, the grammar for annotation types does not allow arbitrary method declarations, instead allowing only AnnotationTypeElementDeclarations. The allowed syntax is:
AnnotationTypeElementDeclaration:
{AnnotationTypeElementModifier} UnannType Identifier ( ) [Dims] [DefaultValue];
Note that nothing is allowed between the parentheses.
The language specification (see JLS 8 §18.5.2) modified the treatment of nested generic method invocations for which the return type is an inference variable. The compiler has been adapted to implement the new logic. This is important to minimize incompatibility with the javac 7 inference algorithm.
The compiler update implies an eager resolution for generic method invocations, provided that the return type is an inference variable.
Prior to JDK 9, javac set the 'static' modifier on anonymous classes declared in a static context, e.g., in static methods or static initialization blocks. This contradicts the Java Language Specification, which states that anonymous classes are never static. In JDK 9, javac does not mark anonymous classes 'static', whether they are declared in a static context or not.
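The change is observable through reflection, since Class.getModifiers() reads the flags javac recorded for the nested class. A minimal sketch; run on JDK 9 or later it prints false:

```java
import java.lang.reflect.Modifier;

public class AnonDemo {
    // Anonymous class declared in a static context
    static Runnable r = new Runnable() { public void run() { } };

    public static void main(String[] args) {
        // With JDK 9 javac, anonymous classes are never marked static
        System.out.println(Modifier.isStatic(r.getClass().getModifiers()));
    }
}
```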
Support for "argument files" on the command lines for javac, javadoc, and javah has been updated to better align with the support for argument files on the launcher command line. This includes the following two new features:
Some obscure, undocumented escape sequences are no longer supported. The files are still read using the default platform file encoding, whereas argument files on the launcher command line should use ASCII or an ASCII-compatible encoding, such as UTF-8.
Output directories required by javac, specified with the -d, -s, -h options, will be created if they do not already exist.
The classfile format (see JVMS section 4.7.2) defines an attribute called ConstantValue, which is used to describe the constant value associated with a given (constant) field. The layout of this attribute is as follows:
ConstantValue_attribute {
u2 attribute_name_index;
u4 attribute_length;
u2 constantvalue_index;
}
Historically, javac has never performed any kind of range validation of the value contained in the constant pool entry at constantvalue_index. As such, it is possible for a constant field of type e.g. boolean to have a constant value other than 0 or 1 (the only legal values allowed for a boolean). Starting from JDK 9, javac will detect ill-formed ConstantValue attributes, and report errors if out-of-range values are found.
Previously, javac did not generate unchecked warnings when checking method reference return types:
import java.util.function.*;
import java.util.*;
class Test {
void m() {
IntFunction<List<String>[]> sls = List[]::new; //warning
Supplier<List<String>> sl = this::l; //warning
}
List l() { return null; }
}
Starting from JDK 9, javac will emit a warning when unchecked conversion is required for a method reference to be compatible with a functional interface target.
This change brings the compiler in sync with JLS section 15.13.2:
A compile-time unchecked warning occurs if unchecked conversion was necessary for the compile-time declaration to be applicable, and this conversion would cause an unchecked warning in an invocation context.
and,
A compile-time unchecked warning occurs if unchecked conversion was necessary for the return type R', described above, to be compatible with the function type's return type, R, and this conversion would cause an unchecked warning in an assignment context.
Javac was not in sync with JLS 8 §15.12.1, specifically:
If the form is TypeName . super . [TypeArguments] Identifier, then: ...
Let T be the type declaration immediately enclosing the method invocation. It is a compile-time error if I is not a direct superinterface of T, or if there exists some other direct superclass or direct superinterface of T, J, such that J is a subtype of I.
So javac was not issuing a compiler error for cases like:
interface I {
default int f(){return 0;}
}
class J implements I {}
class T extends J implements I {
public int f() {
return I.super.f();
}
}
The compiler had some checks for method invocations of the form:
TypeName . super . [TypeArguments] Identifier
but there was one issue. If TypeName is an interface I, and T is the type declaration immediately enclosing the method invocation, the compiler must issue a compile-time error if there exists some other direct superclass or superinterface of T, call it J, such that J is a subtype of I, as in the example above.
Reporting previously silent errors found during incorporation (JLS 8 §18.3) was supposed to be a clean-up with performance-only implications. But consider the test case:
import java.util.Arrays;
import java.util.List;
class Klass {
public static <A> List<List<A>> foo(List<? extends A>... lists) {
return foo(Arrays.asList(lists));
}
public static <B> List<List<B>> foo(List<? extends List<? extends B>> lists) {
return null;
}
}
This code was not accepted before the patch for [1], but after this patch the compiler accepts it. Accepting this code is the right behavior, as not reporting incorporation errors was a bug in the compiler.
While determining the applicability of the method:
<B> List<List<B>> foo(List<? extends List<? extends B>> lists)
we have the following constraints:
b <: Object
t <: List<? extends B>
t <: Object
List<? extends A> <: t
First, inference variable b is selected for instantiation:
b = CAP1 of ? extends A
so this implies that:
t <: List<? extends CAP1 of ? extends A>
t <: Object
List<? extends A> <: t
Now all the bounds are checked for consistency. While checking if List<? extends A> is a subtype of List<? extends CAP1 of ? extends A>, a bound error is reported. Previously, the compiler was just swallowing it. As the error is now reported while inference variable b is being instantiated, the bound set is rolled back to its initial state, b is instantiated to Object, and with this instantiation the constraint set is solvable. The method is applicable, it is the only applicable one, and the code is accepted as correct. The compiler behavior in this case is defined at JLS 8 §18.4.
This fix has a source compatibility impact: code that was previously rejected is now accepted by the javac compiler. Currently there are no reports of any other kind of incompatibility.
[1] https://bugs.openjdk.java.net/browse/JDK-8078024
The javac compiler's behavior when handling wildcards and "capture" type variables has been improved for conformance to the language specification. This improves type checking behavior in certain unusual circumstances. It is also a source-incompatible change: certain uses of wildcards that have compiled in the past may fail to compile because of a program's reliance on the javac bug.
The javadoc tool now rejects any occurrence of JavaScript code in documentation comments and command-line options, unless the command-line option --allow-script-in-comments is specified. With --allow-script-in-comments, the javadoc tool preserves JavaScript code in documentation comments and command-line options. If JavaScript code is found and the option is not set, the javadoc tool reports an error.
If any errors are encountered while reading or analyzing the source code, the javadoc tool will treat them as unrecoverable errors and exit.
Previously, javadoc would emit the "public" and "abstract" modifiers for methods and fields in annotation types. These modifiers are not needed in source code and are elided for non-annotation interface types. With this change, they are also omitted for methods and fields defined in annotation types.
Previously, javadoc would include "value=" when displaying annotations, even when that text was not necessary in source code because the annotations were of a single-element annotation type (JLS §9.6, Annotation Type Elements). The extraneous "value=" text is now omitted, leading to a more concise annotation display.
In previous releases, on platforms that supported more than one VM, the launcher could use ergonomics to select the Server VM over the Client VM. Ergonomics would identify a "server-class" machine based on the number of CPUs and the amount of memory. With modern hardware platforms most machines are identified as server-class, and so now, only the Server VM is provided on most platforms. Consequently the ergonomic selection is redundant and has been removed. Users are advised to use the appropriate launcher VM selection flag on those systems where multiple VMs still exist.
The @Deprecated annotation was incorrectly added to the newFactory() method in javax.xml.stream.XMLInputFactory; the method should not be deprecated. The newInstance() method can be used to avoid the deprecation warning. A future release will correct this.
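As a sketch of the workaround, the following program obtains a StAX factory through newInstance() instead of the mistakenly deprecated newFactory() and reads a small document with it (the sample XML is illustrative only):

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxFactoryDemo {
    public static void main(String[] args) throws Exception {
        // newInstance() behaves the same as newFactory() but does not
        // trigger the (mistakenly applied) deprecation warning in JDK 9.
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader =
            factory.createXMLStreamReader(new StringReader("<root>hi</root>"));
        StringBuilder text = new StringBuilder();
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.CHARACTERS) {
                text.append(reader.getText());
            }
        }
        reader.close();
        System.out.println(text); // prints: hi
    }
}
```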
In accordance with XSL Transformations (XSLT) Version 1.0 (http://www.w3.org/TR/xslt), the xsl:import element is only allowed as a top-level element. The xsl:import element children must precede all other element children of an xsl:stylesheet element, including any xsl:include element children.
The JDK implementation previously allowed an xsl:import element to be placed anywhere in a stylesheet. This issue has been fixed in the JDK 9 release: the implementation now rejects any XSLT stylesheet with erroneously placed import elements.
The defining class loader of the java.xml.ws, java.xml.bind, and java.activation modules and their classes has been changed to the platform class loader (non-null); see the specification for java.lang.ClassLoader::getPlatformClassLoader. Existing code that makes assumptions about the defining class loader of JAX-WS, JAXB, and JAF classes may be affected by this change (for example, custom class loader delegation to the bootstrap class loader that skips the extension class loader).
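Since the java.xml.ws and java.xml.bind modules are not resolved by default in JDK 9, the following sketch illustrates the JDK 9 class loader hierarchy in general terms rather than loading a JAXB class directly:

```java
public class PlatformLoaderDemo {
    public static void main(String[] args) {
        // Since JDK 9, the platform class loader (non-null) sits between
        // the system (application) class loader and the boot loader.
        ClassLoader platform = ClassLoader.getPlatformClassLoader();
        ClassLoader system = ClassLoader.getSystemClassLoader();
        System.out.println("platform is parent of system: "
            + (system.getParent() == platform));
        // Classes defined to the boot loader still report a null loader.
        System.out.println("String loader: " + String.class.getClassLoader());
    }
}
```

Code that previously assumed these classes were defined to the boot loader (a null result from getClassLoader()) now sees the non-null platform loader instead.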
The wsimport tool has been changed to disallow DTDs in Web Service descriptions, specifically:
To restore the previous behavior:
JAXBContext specifies that classes annotated with @XmlRootElement should be passed at context creation time to JAXBContext.newInstance(Class[] classesToBeBound, ...). If the client classes are in a named module, then openness of the packages containing these classes is not propagated correctly when the root JAXB classes reference JAXB types in another package.
For example, given the following Java classes:
@XmlRootElement class Foo { Bar b; }
@XmlType class Bar { FooBar fb; }
@XmlType class FooBar { int x; }
The invocation JAXBContext.newInstance(Foo.class) registers Foo and the statically referenced classes Bar and FooBar. If Bar and FooBar are in a different package than Foo, then openness is not propagated for them with the current implementation.
The issue can be worked around by opening the package with the opens directive in the module declaration. Alternatively, the --add-opens command-line option can be used to open the package, for example:
--add-opens foo.mymodule/bar.baz=ALL-UNNAMED (for the JAXB RI on the class path)
--add-opens foo.mymodule/bar.baz=<jaxb-impl> (for JAXB implementations on the application module path)
Event-based XML parsers may return character data in chunks. The SAX specification states that SAX parsers may return all contiguous character data in a single chunk, or split it into several chunks. The StAX specification does not address this explicitly.
Before JDK 9, the JDK implementation returned all character data in a CDATA section in a single chunk by default. As of JDK 9, an implementation-only property, jdk.xml.cdataChunkSize, instructs the parser to return the data in a CDATA section in a single chunk when the property is zero or unspecified, or in multiple chunks when it is greater than zero. The parser splits the data at line breaks, and splits any chunk larger than the specified size into chunks equal to or smaller than that size.
The property jdk.xml.cdataChunkSize is supported through the following means:
An API setting on SAXParser or XMLReader for SAX, and on XMLInputFactory for StAX. If the property is set this way, its value takes precedence over any of the other settings.
A system property of the same name.
An entry in the jaxp.properties file. The value in jaxp.properties may be overridden by the system property or an API setting.
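The property can be sketched in use via the system property route, which avoids any implementation-specific API signature. The chunk size and sample document below are illustrative assumptions; exact chunk boundaries are JDK-implementation-defined:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class CdataChunkDemo {
    public static void main(String[] args) throws Exception {
        // Ask the JDK parser for CDATA chunks of at most 10 characters.
        // A value of zero (or leaving it unset) restores single-chunk delivery.
        System.setProperty("jdk.xml.cdataChunkSize", "10");

        String xml = "<doc><![CDATA["
            + "0123456789012345678901234567890123456789" // 40 characters
            + "]]></doc>";
        XMLStreamReader reader = XMLInputFactory.newInstance()
            .createXMLStreamReader(new StringReader(xml));
        int chunks = 0;
        StringBuilder data = new StringBuilder();
        while (reader.hasNext()) {
            int event = reader.next();
            if (event == XMLStreamConstants.CHARACTERS
                    || event == XMLStreamConstants.CDATA) {
                chunks++;                      // one event per chunk
                data.append(reader.getText()); // reassemble the full text
            }
        }
        reader.close();
        // The total character count is unchanged regardless of chunking.
        System.out.println("chunks=" + chunks + " length=" + data.length());
    }
}
```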
The JAXP library, through Transformer and LSSerializer, supports a pretty-print feature that formats output by adding whitespace and newlines to produce a more readable form of an XML document. As of the JDK 9 release, this feature has been enhanced to generate a format similar to that of the major web browsers. In addition, the xml:space attribute as defined in the XML specification (https://www.w3.org/TR/2006/REC-xml-20060816/#sec-white-space) is now supported.
The pretty-print feature does not define the actual format. The output format can change over time or vary between implementations, and therefore should not be relied on for exact text comparison. Applications that need such comparisons should turn off the pretty-print feature and perform an XML-to-XML comparison.
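As a minimal sketch, pretty printing is enabled on a Transformer through the standard OutputKeys.INDENT property; note that, as stated above, the exact layout of the result is implementation-defined and should not be compared verbatim:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class PrettyPrintDemo {
    public static void main(String[] args) throws Exception {
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.INDENT, "yes");               // pretty print
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes"); // body only
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader("<a><b>text</b></a>")),
                    new StreamResult(out));
        // Indentation and line breaks are added; their exact form may vary.
        System.out.println(out);
    }
}
```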
Before the Java SE 9 release, the DOM API package org.w3c.dom included sub-packages that were not defined as part of the Java SE API. As of Java SE 9, these sub-packages have been moved out of the java.xml module into a separate module called jdk.xml.dom. These packages are as follows:
org.w3c.dom.css
org.w3c.dom.html
org.w3c.dom.stylesheets
org.w3c.dom.xpath
The JAXP library in JDK 9 has been updated to Xerces-J 2.11.0 release in the following areas:
This update includes improvements and bug fixes in the above areas up to the Xerces-J 2.11.0 release, but not the experimental support for XML Schema 1.1 features. Refer to the Xerces-J 2.11.0 Release Notes for more details.
The class path specified to the java launcher is expected to be a sequence of file paths. Previous releases incorrectly accepted a Windows path with a leading slash, for example -classpath /C:/classes, which was not the intended behavior. In JDK 9, the implementation of the application class loader has been changed to use the new file system API, which detects that /C:/classes is not a valid file path. Existing applications that specify a file URI on the class path will need to change to specify a valid file path.