Which Part of the JDK Source Code Touches You the Most


1. JDK source code: how variable-length arrays are implemented


An array's length is fixed at initialization, and operations on its elements cannot go out of bounds. Yet many of the collection classes the JDK provides for us are variable-length. This point is easy to overlook when first learning Java, and so is how those variable-length arrays are actually implemented.
I traced the implementation of variable-length arrays starting from the commonly used ArrayList.add(E e).

/**
 * Appends the specified element to the end of this list.
 *
 * @param e element to be appended to this list
 * @return true (as specified by {@link Collection#add})
 */
public boolean add(E e) {
    ensureCapacityInternal(size + 1);  // Increments modCount!!
    elementData[size++] = e;
    return true;
}


/**
 * The array buffer into which the elements of the ArrayList are stored.
 * The capacity of the ArrayList is the length of this array buffer. Any
 * empty ArrayList with elementData == DEFAULTCAPACITY_EMPTY_ELEMENTDATA
 * will be expanded to DEFAULT_CAPACITY when the first element is added.
 */
transient Object[] elementData; // non-private to simplify nested class access

ArrayList actually operates on its internal Object[] elementData; all data is stored there, and the variable-length behavior is implemented by expanding the array that elementData points to.
After an ArrayList is initialized with the no-argument constructor, elementData points to a shared empty array. When the add method executes, it first checks whether the size after adding one more element would exceed the length of the array elementData points to. If it would, the expansion mechanism kicks in.
The expansion first proposes a new length of 1.5 times the existing length. If that proposed length exceeds MAX_ARRAY_SIZE (Integer.MAX_VALUE - 8), the array is extended to at most Integer.MAX_VALUE. In other words, the array cannot grow without bound; its length is capped at Integer.MAX_VALUE.
Once the new length is determined, the native method System.arraycopy is called (via Arrays.copyOf) to copy the old array that elementData points to into the new array. When the copy completes, elementData is pointed at the new array, and the expansion is done.
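As a rough illustration of the steps above (this is a simplified sketch, not the actual JDK code; the class and method names are made up):

```java
import java.util.Arrays;

class GrowSketch {
    // Simplified expansion: propose 1.5x the old length, fall back to the
    // requested minimum if 1.5x is still too small, then copy into a new array.
    static Object[] grow(Object[] old, int minCapacity) {
        int newCapacity = old.length + (old.length >> 1); // 1.5x
        if (newCapacity < minCapacity)
            newCapacity = minCapacity;
        // Arrays.copyOf uses System.arraycopy internally
        return Arrays.copyOf(old, newCapacity);
    }
}
```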

After all of this completes, the slot just past the current last element is simply set to the element being added. Reading this code, it is hard not to notice that every expansion carries a fairly large system overhead, and triggering the expansion operation frequently will inevitably cost the system performance.

Therefore, variable-length arrays should be used with restraint. Before using one, estimate the maximum length needed in that scenario and pass it as the initial capacity rather than relying on the default, especially when the list will hold many elements later. Initialize a reasonably appropriate capacity so that expansion is triggered as rarely as possible; of course, do not make it excessively large either, since enough is enough. Where possible, using addAll is even better than repeated add calls.
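For instance, the advice can be sketched like this (the class and helper names here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

class PresizeDemo {
    // Build a list whose final size is known up front: the capacity is set
    // once in the constructor, so add() never triggers grow()/arraycopy.
    static List<Integer> presized(int expectedSize) {
        List<Integer> list = new ArrayList<>(expectedSize);
        for (int i = 0; i < expectedSize; i++) {
            list.add(i); // stays within the initial capacity
        }
        return list;
    }
}
```

Similarly, when merging collections, list.addAll(other) lets the list compute the required capacity once instead of potentially expanding several times during element-by-element adds.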

2. JDK source code: ArrayList's methods for adding elements


public boolean add(E e) {
    // Ensure the backing array is large enough; expand it if it is not
    ensureCapacityInternal(size + 1);  // Increments modCount!!
    // Direct assignment; not thread-safe
    elementData[size++] = e;
    return true;
}
private void ensureCapacityInternal(int minCapacity) {
    // For an empty array, take the larger of the minimum capacity and the default capacity of 10
    if (elementData == DEFAULTCAPACITY_EMPTY_ELEMENTDATA) {
        minCapacity = Math.max(DEFAULT_CAPACITY, minCapacity);
    }
    // Make sure the capacity is sufficient
    ensureExplicitCapacity(minCapacity);
}
private void ensureExplicitCapacity(int minCapacity) {
    // Record that the array was structurally modified
    modCount++;
    // If the minimum capacity we need is greater than the current array length, expand
    if (minCapacity - elementData.length > 0)
        grow(minCapacity);
}
// Grow the old array to 1.5 times its size, then copy the existing data into the new array
private void grow(int minCapacity) {
    // overflow-conscious code
    int oldCapacity = elementData.length;
    // oldCapacity >> 1 means oldCapacity / 2
    int newCapacity = oldCapacity + (oldCapacity >> 1);

    // If the grown value < the value we need, use the value we need instead
    if (newCapacity - minCapacity < 0)
        newCapacity = minCapacity;

    // If the grown value exceeds the largest array size allocatable by the JVM, fall back to hugeCapacity
    if (newCapacity - MAX_ARRAY_SIZE > 0)
        newCapacity = hugeCapacity(minCapacity);
    // minCapacity is usually close to size, so this is a win:
    // Expand by copying
    elementData = Arrays.copyOf(elementData, newCapacity);
}

// Remove by array index
public E remove(int index) {
    rangeCheck(index);
    modCount++;
    E oldValue = elementData(index);

    int numMoved = size - index - 1;
    if (numMoved > 0)
        System.arraycopy(elementData, index + 1, elementData, index,
                         numMoved);
    elementData[--size] = null; // clear to let GC do its work

    return oldValue;
}
First of all, ArrayList has two methods for adding elements: public boolean add(E e) and public void add(int index, E element). Why look at these two methods?
1. Because reading them verifies a rule from the "Java Development Manual": when initializing an ArrayList, you should specify the capacity.
2. What is the benefit of specifying a capacity?
In both methods, the array size is confirmed first. If no capacity was set, it defaults to 10; if the array is too small, it is expanded to 1.5 times the original size. Why exactly 1.5? Material online suggests that 1.5 can make full use of the shift operation, avoiding floating-point arithmetic and reducing both the cost and the number of operations. Seeing this, is the shift operation an extreme optimization point for performance? Looking deeper, if the capacity is insufficient, the list will expand frequently, and the array will be repeatedly copied to newly allocated memory, which hurts efficiency. This verifies the benefit of specifying the size when initializing the ArrayList.
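The 1.5x growth arithmetic can be seen directly (a small sketch of the same formula used in grow(); the class name is made up):

```java
class GrowthFactorDemo {
    // 1.5x growth expressed with a shift, as in ArrayList's grow():
    // oldCapacity >> 1 equals oldCapacity / 2 for non-negative values,
    // so the result is floor(oldCapacity * 1.5) with integer arithmetic only.
    static int nextCapacity(int oldCapacity) {
        return oldCapacity + (oldCapacity >> 1);
    }
}
```

Starting from the default capacity of 10, the sequence of capacities is 10, 15, 22, 33, and so on.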

At the same time, a detail of the expansion stands out: the code is written to be overflow-aware. After expansion, the array size cannot be negative and cannot exceed the maximum value of Integer.
Compared with add(E e), the method that adds an element at an arbitrary position differs in a few ways:
1. add(int index, E element) inserts at the specified position, so all the elements after it must be shifted.
2. add(E e) appends directly to the end; aside from possible expansion, no elements need to be moved.
3. Here you will notice something when comparing with LinkedList: if the ArrayList's capacity is initialized large enough, ArrayList's add can outperform LinkedList's add.
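The shifting in point 1 can be sketched as follows (a simplified illustration of what add(int index, E element) does internally, assuming capacity is already sufficient; the class and method names are made up):

```java
class InsertSketch {
    // Insert value at index into a backing array that already has spare room.
    // size is the number of elements currently in use.
    static int[] insertAt(int[] a, int size, int index, int value) {
        // Shift elements [index, size) one slot to the right
        System.arraycopy(a, index, a, index + 1, size - index);
        a[index] = value;
        return a;
    }
}
```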

3. JDK source code: binary search in Arrays


(Take double as an example)
// Like public version, but without range checks.
private static int binarySearch0(double[] a, int fromIndex, int toIndex,
                                 double key) {
    int low = fromIndex;
    int high = toIndex - 1;

    while (low <= high) {
        int mid = (low + high) >>> 1;
        double midVal = a[mid];

        if (midVal < key)
            low = mid + 1;  // Neither val is NaN, thisVal is smaller
        else if (midVal > key)
            high = mid - 1; // Neither val is NaN, thisVal is larger
        else {
            long midBits = Double.doubleToLongBits(midVal);
            long keyBits = Double.doubleToLongBits(key);
            if (midBits == keyBits)     // Values are equal
                return mid;             // Key found
            else if (midBits < keyBits) // (-0.0, 0.0) or (!NaN, NaN)
                low = mid + 1;
            else                        // (0.0, -0.0) or (NaN, !NaN)
                high = mid - 1;
        }
    }
    return -(low + 1);  // key not found.
}
// Fragment 2: code from the Double class
public static long doubleToLongBits(double value) {
    long result = doubleToRawLongBits(value);
    // Check for NaN based on values of bit fields, maximum
    // exponent and nonzero significand.
    if ( ((result & DoubleConsts.EXP_BIT_MASK) ==
          DoubleConsts.EXP_BIT_MASK) &&
         (result & DoubleConsts.SIGNIF_BIT_MASK) != 0L)
        result = 0x7ff8000000000000L;
    return result;
}

#reason#
1.Binary search can hardly be avoided in interviews. Examples found online vary in quality; the examples in the JDK are standard and well disciplined.
2.For the midpoint, an unsigned shift is used: int mid = (low + high) >>> 1;. The conventional (low + high) / 2 can overflow, and low + (high - low) / 2 is not efficient enough. The JDK's form solves both problems perfectly.
3.The code above handles comparison of double values. Due to the precision of floating-point numbers, comparing doubles is a problem we often encounter. The comments in fragment 2 reflect the IEEE 754 double-precision storage format in detail: bit 63 (sign bit) represents the sign of the floating-point number; bits 62-52 (exponent bits) represent the exponent; bits 51-0 represent the significand (sometimes called the mantissa).
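The overflow hazard in point 2 is easy to demonstrate (a small sketch; the class and method names are made up):

```java
class MidpointDemo {
    // Naive midpoint: (low + high) can overflow int and go negative
    static int naiveMid(int low, int high) {
        return (low + high) / 2;
    }

    // JDK-style midpoint: the unsigned right shift reinterprets the
    // wrapped-around sum as an unsigned value, so the result stays correct
    // for any non-negative low <= high
    static int safeMid(int low, int high) {
        return (low + high) >>> 1;
    }
}
```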
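Point 3 can also be checked directly: doubleToLongBits gives a total ordering that primitive == does not, distinguishing -0.0 from 0.0 and making NaN equal to itself (a small sketch; the class name is made up):

```java
class DoubleBitsDemo {
    // Primitive comparison: 0.0 == -0.0 is true, NaN == NaN is false
    static boolean primitiveEquals(double a, double b) {
        return a == b;
    }

    // Bit-pattern comparison, as binarySearch0 uses in its tie-breaking branch:
    // -0.0 and 0.0 have different bit patterns, and all NaNs are canonicalized
    // to the same bits by doubleToLongBits
    static boolean bitsEqual(double a, double b) {
        return Double.doubleToLongBits(a) == Double.doubleToLongBits(b);
    }
}
```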

#feeling#
1.Mastering the necessary computer fundamentals is very important for understanding source code.
2.When you run into a problem, try looking at how the JDK implements it.

4. JDK source code: overflow in container code


The thing in the JDK that strikes me most is the overflow awareness in common container code. ArrayList, Vector, Hashtable, AbstractStringBuilder, and other classes all take this into account; the following uses ArrayList as an example.
ArrayList relies mainly on methods such as newCapacity() and hugeCapacity() for capacity expansion. When expanding, the JDK strictly controls the new capacity:
•For the lower bound, the JDK repeatedly checks minCapacity < 0 to prevent a negative capacity from being returned.
•For the upper bound, depending on the capacity required, the JDK limits it to:
  o newCapacity after the 1.5x expansion;
  o the MAX_ARRAY_SIZE constant (value Integer.MAX_VALUE - 8);
  o Integer.MAX_VALUE at most, so the capacity never exceeds the range an Integer can represent.
It is worth learning from JDK's careful handling of boundary conditions and special cases.
Attached are the comments made when reading the source code before:
private int newCapacity(int minCapacity) {
    // The following code is overflow-aware
    // oldCapacity is the old capacity, newCapacity the new one
    int oldCapacity = elementData.length;
    // Use a bit shift to grow by 50% (shifting right by 1 divides by 2)
    // Before JDK 6 this rounded up; later versions round down
    int newCapacity = oldCapacity + (oldCapacity >> 1);
    // Check whether the 50% expansion is enough; if not, make the following judgments
    if (newCapacity - minCapacity <= 0) {
        // First special case: capacity set by the no-argument constructor (updated to 10)
        if (elementData == DEFAULTCAPACITY_EMPTY_ELEMENTDATA)
            return Math.max(DEFAULT_CAPACITY, minCapacity);
        // Second special case: overflow
        if (minCapacity < 0)
            throw new OutOfMemoryError();
        // Otherwise the capacity truly is not enough; return the minimum required capacity
        return minCapacity;
    }
    // If the 50% expansion is enough, one more judgment is needed:
    // compare the ideal newCapacity with MAX_ARRAY_SIZE.
    // If newCapacity is within MAX_ARRAY_SIZE, return it directly;
    // otherwise call hugeCapacity with minCapacity to request a huge capacity
    return (newCapacity - MAX_ARRAY_SIZE <= 0)
        ? newCapacity
        : hugeCapacity(minCapacity);
}

private static int hugeCapacity(int minCapacity) {
    if (minCapacity < 0) // overflow
        throw new OutOfMemoryError();
    // If minCapacity exceeds MAX_ARRAY_SIZE, use Integer.MAX_VALUE as the capacity;
    // otherwise use MAX_ARRAY_SIZE
    return (minCapacity > MAX_ARRAY_SIZE)
        ? Integer.MAX_VALUE
        : MAX_ARRAY_SIZE;
}

The fail-fast mechanism of collections.
I remember that early in my career I often ran into ConcurrentModificationException when operating on collections. Later, after reading the List source code, I understood that concurrent modification is detected by maintaining a modCount.
This fail-fast idea suits daily development well: when a failure or an unmet condition occurs, stop as early as possible to save system resources. It also improves the readability and robustness of the code.
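The modCount check can be triggered deliberately (a small sketch; the class and method names are made up):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

class FailFastDemo {
    // Structurally modifying a list while iterating it bumps modCount,
    // so the iterator's next() detects the mismatch and fails fast.
    static boolean triggersCme() {
        List<Integer> list = new ArrayList<>(List.of(1, 2, 3));
        try {
            for (Integer i : list) {
                if (i == 1) {
                    list.remove(i); // remove(Object): modCount++ without the iterator knowing
                }
            }
        } catch (ConcurrentModificationException e) {
            return true;
        }
        return false;
    }
}
```

To remove elements safely during iteration, use Iterator.remove() (or removeIf), which keeps the iterator's expected modCount in sync.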

5. Fail-fast for example


1. When a client sends a request with invalid business data, the server can fail the request through validation instead of letting it continue into the business layer and incur unnecessary service overhead.
2. When writing a method, put the invalid/error scenarios first, such as null checks and permission verification.
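Point 2 looks like this in practice (a sketch with hypothetical names; OrderService and placeOrder are made up for illustration):

```java
class OrderService {
    // Fail fast: validate inputs up front, before any business logic runs
    static String placeOrder(String userId, int quantity) {
        if (userId == null || userId.isEmpty())
            throw new IllegalArgumentException("userId must not be empty");
        if (quantity <= 0)
            throw new IllegalArgumentException("quantity must be positive");
        // ... business logic only runs for valid input ...
        return "order accepted for " + userId;
    }
}
```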



