Java Fork and Join using ForkJoinPool

The ForkJoinPool was added to Java in Java 7. The ForkJoinPool is similar to the Java ExecutorService but with one difference: the ForkJoinPool makes it easy for tasks to split their work up into smaller tasks which are then submitted to the ForkJoinPool too. Tasks can keep splitting their work into smaller subtasks for as long as it makes sense to split up the task. That may sound a bit abstract, so in this fork and join tutorial I will explain how the ForkJoinPool works, and how splitting tasks up works.

Fork and Join Explained

Before we look at the ForkJoinPool I want to explain how the fork and join principle works in general. The fork and join principle consists of two steps which are performed recursively: the fork step and the join step.

Fork

A task that uses the fork and join principle can fork (split) itself into smaller subtasks which can be executed concurrently. This is illustrated in the diagram below:

By splitting itself up into subtasks, each subtask can be executed in parallel by different CPUs, or by different threads on the same CPU. A task only splits itself up into subtasks if the work it was given is large enough for this to make sense. There is an overhead to splitting a task into subtasks, so for small amounts of work this overhead may be greater than the speedup achieved by executing the subtasks concurrently.

The limit at which it makes sense to fork a task into subtasks is called a threshold. It is up to each task to decide on a sensible threshold; it depends very much on the kind of work being done.

Join

When a task has split itself up into subtasks, the task waits until the subtasks have finished executing. Once the subtasks have finished executing, the task may join (merge) all the results into one result. This is illustrated in the diagram below:

Of course, not all types of tasks return a result.
If the tasks do not return a result, then a task just waits for its subtasks to complete. No result merging takes place.

The ForkJoinPool

The ForkJoinPool is a special thread pool which is designed to work well with fork-and-join task splitting. The ForkJoinPool is located in the java.util.concurrent package, so the full class name is java.util.concurrent.ForkJoinPool.

Creating a ForkJoinPool

You create a ForkJoinPool using its constructor. As a parameter to the ForkJoinPool constructor you pass the desired level of parallelism. The parallelism level indicates how many threads or CPUs you want to work concurrently on tasks passed to the ForkJoinPool. Here is a ForkJoinPool creation example:

ForkJoinPool forkJoinPool = new ForkJoinPool(4);

This example creates a ForkJoinPool with a parallelism level of 4.

Submitting Tasks to the ForkJoinPool

You submit tasks to a ForkJoinPool similarly to how you submit tasks to an ExecutorService. You can submit two types of tasks: a task that does not return any result (an "action"), and a task which does return a result (a "task"). These two types of tasks are represented by the RecursiveAction and RecursiveTask classes. How to use both of these tasks and how to submit them is covered in the following sections.

RecursiveAction

A RecursiveAction is a task which does not return any value. It just does some work, e.g. writing data to disk, and then exits. A RecursiveAction may still need to break up its work into smaller chunks which can be executed by independent threads or CPUs. You implement a RecursiveAction by subclassing it.
Here is a RecursiveAction example:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.RecursiveAction;

public class MyRecursiveAction extends RecursiveAction {

    private long workLoad = 0;

    public MyRecursiveAction(long workLoad) {
        this.workLoad = workLoad;
    }

    @Override
    protected void compute() {
        //if work is above threshold, break tasks up into smaller tasks
        if(this.workLoad > 16) {
            System.out.println("Splitting workLoad : " + this.workLoad);

            List<MyRecursiveAction> subtasks = new ArrayList<MyRecursiveAction>();
            subtasks.addAll(createSubtasks());

            for(RecursiveAction subtask : subtasks){
                subtask.fork();
            }
        } else {
            System.out.println("Doing workLoad myself: " + this.workLoad);
        }
    }

    private List<MyRecursiveAction> createSubtasks() {
        List<MyRecursiveAction> subtasks = new ArrayList<MyRecursiveAction>();

        MyRecursiveAction subtask1 = new MyRecursiveAction(this.workLoad / 2);
        MyRecursiveAction subtask2 = new MyRecursiveAction(this.workLoad / 2);

        subtasks.add(subtask1);
        subtasks.add(subtask2);

        return subtasks;
    }
}

This example is very simplified. The MyRecursiveAction simply takes a fictive workLoad as a parameter to its constructor. If the workLoad is above a certain threshold, the work is split into subtasks, which are also scheduled for execution (via the fork() method of the subtasks). If the workLoad is below the threshold, the work is carried out by the MyRecursiveAction itself.

You can schedule a MyRecursiveAction for execution like this:

MyRecursiveAction myRecursiveAction = new MyRecursiveAction(24);
forkJoinPool.invoke(myRecursiveAction);

RecursiveTask

A RecursiveTask is a task that returns a result. It may split its work up into smaller tasks, and merge the results of these smaller tasks into a collective result. The splitting and merging may take place on several levels. Here is a RecursiveTask example:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.RecursiveTask;

public class MyRecursiveTask extends RecursiveTask<Long> {

    private long workLoad = 0;

    public MyRecursiveTask(long workLoad) {
        this.workLoad = workLoad;
    }

    @Override
    protected Long compute() {
        //if work is above threshold, break tasks up into smaller tasks
        if(this.workLoad > 16) {
            System.out.println("Splitting workLoad : " + this.workLoad);

            List<MyRecursiveTask> subtasks = new ArrayList<MyRecursiveTask>();
            subtasks.addAll(createSubtasks());

            for(MyRecursiveTask subtask : subtasks){
                subtask.fork();
            }

            long result = 0;
            for(MyRecursiveTask subtask : subtasks) {
                result += subtask.join();
            }
            return result;
        } else {
            System.out.println("Doing workLoad myself: " + this.workLoad);
            return workLoad * 3;
        }
    }

    private List<MyRecursiveTask> createSubtasks() {
        List<MyRecursiveTask> subtasks = new ArrayList<MyRecursiveTask>();

        MyRecursiveTask subtask1 = new MyRecursiveTask(this.workLoad / 2);
        MyRecursiveTask subtask2 = new MyRecursiveTask(this.workLoad / 2);

        subtasks.add(subtask1);
        subtasks.add(subtask2);

        return subtasks;
    }
}

This example is similar to the RecursiveAction example, except it returns a result. The class MyRecursiveTask extends RecursiveTask<Long>, which means that the result returned from the task is a Long. The MyRecursiveTask example also breaks the work down into subtasks, and schedules these subtasks for execution using their fork() method. Additionally, this example receives the result returned by each subtask by calling the join() method of each subtask. The subtask results are merged into a bigger result which is then returned. This kind of joining/merging of subtask results may occur recursively for several levels of recursion.
You can schedule a RecursiveTask like this:

MyRecursiveTask myRecursiveTask = new MyRecursiveTask(128);
long mergedResult = forkJoinPool.invoke(myRecursiveTask);
System.out.println("mergedResult = " + mergedResult);

Notice how you get the final result out of the ForkJoinPool.invoke() method call.

ForkJoinPool Critique

It seems not everyone is equally happy with the new ForkJoinPool in Java 7. While searching for experiences with, and opinions about, the ForkJoinPool, I came across some critique: A Java Fork-Join Calamity. It is well worth a read before you plan to use the ForkJoinPool in your own projects.
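Before leaving the ForkJoinPool, here is a complete, runnable sketch that ties fork(), join() and invoke() together by summing a long array. The class name ArraySumTask, the threshold value and the array contents are my own illustrative choices, not part of the tutorial above:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums a slice of an array, splitting whenever the slice exceeds a threshold.
public class ArraySumTask extends RecursiveTask<Long> {

    private static final int THRESHOLD = 1_000;
    private final long[] numbers;
    private final int from, to; // from inclusive, to exclusive

    public ArraySumTask(long[] numbers, int from, int to) {
        this.numbers = numbers;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += numbers[i];
            return sum;
        }
        int mid = (from + to) / 2;
        ArraySumTask left  = new ArraySumTask(numbers, from, mid);
        ArraySumTask right = new ArraySumTask(numbers, mid, to);
        left.fork();                        // schedule the left half asynchronously
        long rightResult = right.compute(); // compute the right half in this thread
        return rightResult + left.join();   // wait for the left half and merge
    }

    public static long parallelSum(long[] numbers) {
        return new ForkJoinPool(4).invoke(new ArraySumTask(numbers, 0, numbers.length));
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(ArraySumTask.parallelSum(data)); // sum of 1..10000
    }
}
```

Note that computing one half in the current thread instead of forking both subtasks avoids leaving the calling worker idle, which fits the work-stealing design of the pool.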
Java 8 New Features

New Features

There are dozens of features added to Java 8; the most significant ones are mentioned below −

Lambda expressions − Add functional processing capability to Java.
Method references − Reference functions by their names instead of invoking them directly; functions can be used as parameters.
Default methods − Interfaces can have default method implementations.
New tools − New compiler tools and utilities are added, like 'jdeps', to figure out dependencies.
Stream API − A new stream API to facilitate pipeline processing.
Date Time API − An improved date-time API.
Optional − Emphasis on best practices to handle null values properly.
Nashorn, JavaScript engine − A Java-based engine to execute JavaScript code.

Along with these new features, lots of feature enhancements are done under the hood, at both the compiler and JVM level.

Here is a sample for Java 8:

import java.util.Collections;
import java.util.List;
import java.util.ArrayList;
import java.util.Comparator;

public class Java8Tester {

    public static void main(String args[]){
        List<String> names1 = new ArrayList<String>();
        names1.add("Mahesh ");
        names1.add("Suresh ");
        names1.add("Ramesh ");
        names1.add("Naresh ");
        names1.add("Kalpesh ");

        List<String> names2 = new ArrayList<String>();
        names2.add("Mahesh ");
        names2.add("Suresh ");
        names2.add("Ramesh ");
        names2.add("Naresh ");
        names2.add("Kalpesh ");

        Java8Tester tester = new Java8Tester();

        System.out.println("Sort using Java 7 syntax: ");
        tester.sortUsingJava7(names1);
        System.out.println(names1);

        System.out.println("Sort using Java 8 syntax: ");
        tester.sortUsingJava8(names2);
        System.out.println(names2);
    }

    //sort using java 7
    private void sortUsingJava7(List<String> names){
        Collections.sort(names, new Comparator<String>() {
            @Override
            public int compare(String s1, String s2) {
                return s1.compareTo(s2);
            }
        });
    }

    //sort using java 8
    private void sortUsingJava8(List<String> names){
        Collections.sort(names, (s1, s2) -> s1.compareTo(s2));
    }
}

Lambda expressions facilitate functional programming, and simplify development a lot.
A lambda expression is characterized by the following syntax −

parameter -> expression body

Following are the important characteristics of a lambda expression −

Optional type declaration − No need to declare the type of a parameter; the compiler can infer it from the value of the parameter.
Optional parentheses around the parameter − No need to put a single parameter in parentheses. For multiple parameters, parentheses are required.
Optional curly braces − No need to use curly braces in the expression body if the body contains a single statement.
Optional return keyword − The compiler automatically returns the value if the body is a single expression. Curly braces are required (together with an explicit return) when the body is a block that returns a value.

public class Java8Tester {

    public static void main(String args[]){
        Java8Tester tester = new Java8Tester();

        //with type declaration
        MathOperation addition = (int a, int b) -> a + b;

        //without type declaration
        MathOperation subtraction = (a, b) -> a - b;

        //with return statement along with curly braces
        MathOperation multiplication = (int a, int b) -> { return a * b; };

        //without return statement and without curly braces
        MathOperation division = (int a, int b) -> a / b;

        System.out.println("10 + 5 = " + tester.operate(10, 5, addition));
        System.out.println("10 - 5 = " + tester.operate(10, 5, subtraction));
        System.out.println("10 x 5 = " + tester.operate(10, 5, multiplication));
        System.out.println("10 / 5 = " + tester.operate(10, 5, division));

        //without parentheses
        GreetingService greetService1 = message -> System.out.println("Hello " + message);

        //with parentheses
        GreetingService greetService2 = (message) -> System.out.println("Hello " + message);

        greetService1.sayMessage("Mahesh");
        greetService2.sayMessage("Suresh");
    }

    interface MathOperation {
        int operation(int a, int b);
    }

    interface GreetingService {
        void sayMessage(String message);
    }

    private int operate(int a, int b, MathOperation mathOperation){
        return mathOperation.operation(a, b);
    }
}

Lambda expressions are used primarily to define the inline implementation of a functional interface, i.e., an interface with a single method only. In the above example, we've used various kinds of lambda expressions to define the operation method of the MathOperation interface. Then we have defined the implementation of sayMessage of GreetingService. Lambda expressions eliminate the need for anonymous classes and give a very simple yet powerful functional programming capability to Java.

Method references help to point to methods by their names. A method reference is described using the :: (double colon) symbol. A method reference can be used to point to the following types of methods −

Static methods
Instance methods
Constructors, using the new operator (TreeSet::new)

Method Reference Example

Let's look into an example of method referencing to get a clearer picture. Write the following program in a code editor and match the results.

import java.util.List;
import java.util.ArrayList;

public class Java8Tester {

    public static void main(String args[]){
        List<String> names = new ArrayList<String>();

        names.add("Mahesh");
        names.add("Suresh");
        names.add("Ramesh");
        names.add("Naresh");
        names.add("Kalpesh");

        names.forEach(System.out::println);
    }
}
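The three kinds of method references listed above (static methods, instance methods, constructors) can be sketched in one small program. The class name MethodRefDemo and the particular target methods are my own illustrative picks, not from the original text:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Supplier;

public class MethodRefDemo {

    // Builds a list using the three kinds of method references.
    public static List<String> demo() {
        // Constructor reference (same idea as TreeSet::new): ArrayList::new
        Supplier<List<String>> listMaker = ArrayList::new;

        // Instance method reference: calls toUpperCase() on the argument.
        Function<String, String> upper = String::toUpperCase;

        // Static method reference: String.valueOf.
        Function<Integer, String> asText = String::valueOf;

        List<String> names = listMaker.get();
        names.add(upper.apply("mahesh"));
        names.add(asText.apply(42));
        return names;
    }

    public static void main(String[] args) {
        // Instance method reference used as a forEach consumer, as in the example above.
        demo().forEach(System.out::println);
    }
}
```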
Vector or ArrayList -- which is better?

Sometimes Vector is better; sometimes ArrayList is better; sometimes you don't want to use either. I hope you weren't looking for an easy answer, because the answer depends upon what you are doing. There are four factors to consider:

API
Synchronization
Data growth
Usage patterns

Let's explore each in turn.

API

In The Java Programming Language (Addison-Wesley, June 2000), Ken Arnold, James Gosling, and David Holmes describe the Vector as an analog to the ArrayList. So, from an API perspective, the two classes are very similar. However, there are still some major differences between the two classes.

Synchronization

Vectors are synchronized: any method that touches a Vector's contents is thread safe. ArrayList, on the other hand, is unsynchronized, and therefore not thread safe. With that difference in mind, remember that synchronization incurs a performance hit. So if you don't need a thread-safe collection, use the ArrayList. Why pay the price of synchronization unnecessarily?

Data growth

Internally, both the ArrayList and Vector hold their contents in an array. You need to keep this fact in mind while using either in your programs. When you insert an element into an ArrayList or a Vector, the object will need to expand its internal array if it runs out of room. A Vector defaults to doubling the size of its array, while the ArrayList increases its array size by 50 percent. Depending on how you use these classes, you could end up taking a large performance hit while adding new elements. It's always best to set the object's initial capacity to the largest capacity that your program will need. By carefully setting the capacity, you can avoid paying the penalty needed to resize the internal array later. If you don't know how much data you'll have, but you do know the rate at which it grows, Vector does possess a slight advantage since you can set the increment value.
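The pre-sizing advice above can be sketched as follows. The class name CapacityDemo and the capacities chosen are my own illustrative values:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

public class CapacityDemo {

    // Fills a pre-sized ArrayList and an increment-controlled Vector; returns both sizes.
    public static int[] fill(int count) {
        // Sized up front: the backing array is allocated once, so no resizing while adding.
        List<Integer> sized = new ArrayList<Integer>(count);

        // Vector's two-argument constructor controls growth: it grows by the second
        // argument (10_000 slots at a time here) instead of doubling.
        Vector<Integer> vector = new Vector<Integer>(10_000, 10_000);

        for (int i = 0; i < count; i++) {
            sized.add(i);
            vector.add(i);
        }
        return new int[] { sized.size(), vector.size() };
    }

    public static void main(String[] args) {
        int[] sizes = fill(100_000);
        System.out.println(sizes[0] + " " + sizes[1]);
    }
}
```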
Usage patterns

Both the ArrayList and Vector are good for retrieving elements from a specific position in the container, and for adding and removing elements at the end of the container. All of these operations can be performed in constant time -- O(1). However, adding and removing elements from any other position proves more expensive -- linear, to be exact: O(n-i), where n is the number of elements and i is the index of the element added or removed. These operations are more expensive because you have to shift all elements at index i and higher over by one element.

So what does this all mean? It means that if you want to index elements, or add and remove elements at the end of the array, use either a Vector or an ArrayList. If you want to do anything else to the contents, go find yourself another container class. For example, the LinkedList can add or remove an element at either end in constant time -- O(1) -- and at any position where you already hold an iterator. However, indexing an element is slower -- O(i), where i is the index of the element. Traversing an ArrayList is also easier, since you can simply use an index instead of having to create an iterator. The LinkedList also creates an internal node object for each element inserted, so you have to be aware of the extra garbage being created.

Finally, in "PRAXIS 41" from Practical Java (Addison-Wesley, Feb. 2000), Peter Haggar suggests that you use a plain old array in place of either Vector or ArrayList -- especially for performance-critical code. By using an array you can avoid synchronization, extra method calls, and suboptimal resizing. You just pay the cost of extra development time.
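A minimal sketch of the usage patterns discussed above; the class name ListUsageDemo and the element values are mine:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListUsageDemo {

    // Builds both list types to illustrate where each is cheap.
    public static List<String> build() {
        // ArrayList: fast random access and fast append at the end.
        List<String> arrayList = new ArrayList<String>();
        arrayList.add("a");
        arrayList.add("c");
        // Inserting at index 1 shifts every later element over by one: O(n - i).
        arrayList.add(1, "b");

        // LinkedList: adding or removing at either end is O(1),
        // but get(i) has to walk the list node by node.
        LinkedList<String> linkedList = new LinkedList<String>(arrayList);
        linkedList.addFirst("start");
        linkedList.addLast("end");
        return linkedList;
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}
```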
What to do when your Eclipse Maven project shows errors in Eclipse but compiles successfully

When your Maven project shows errors in the Eclipse editor even though it compiles fine with Maven from the console, there is most likely a project configuration issue in Eclipse. This sometimes happens when you open and close multiple projects. Close Eclipse, run mvn eclipse:eclipse from your command line, then open Eclipse again and refresh the projects; the errors will go away.
Java 7's new Features

There are a number of features in Java 7 that will please developers. Things such as strings in switch statements, multi-catch exception handling, try-with-resources statements, the new file system API, extensions of the JVM, support for dynamically-typed languages, the fork and join framework for task parallelism, and a few others will certainly be embraced by the community. Below I outline the features and provide examples where appropriate.

Language enhancements

Java 7 includes a few new language features via Project Coin. These features are quite handy for a developer.

Diamond Operator

You may have noted on many occasions your IDE complaining about types when working with generics. For example, if we have to declare a map of trades using generics, we write the code as follows:

Map<String, List<Trade>> trades = new TreeMap<String, List<Trade>>();

The not-so-nice thing about this declaration is that we must declare the types on both sides, although the right-hand side seems a bit redundant. Can the compiler infer the types by looking at the left-hand-side declaration? Not unless you're using Java 7. In 7, it's written like this:

Map<String, List<Trade>> trades = new TreeMap<>();

How cool is that? You don't have to type the whole list of types for the instantiation. Instead you use the <> symbol, which is called the diamond operator. Note that while omitting the type arguments entirely is legal, as in trades = new TreeMap(), it will make the compiler generate a couple of type-safety warnings.

Using strings in switch statements

Switch statements work either with primitive types or enumerated types. Java 7 introduced another type that we can use in switch statements: the String type. Say we have a requirement to process a Trade based on its status. Until now we used to do this by using if-else statements.
private void processTrade(Trade t) {
    String status = t.getStatus();
    if (status.equalsIgnoreCase(NEW)) {
        newTrade(t);
    } else if (status.equalsIgnoreCase(EXECUTE)) {
        executeTrade(t);
    } else if (status.equalsIgnoreCase(PENDING)) {
        pendingTrade(t);
    }
}

This way of working with strings is crude. In Java 7, we can improve the program by utilizing the enhanced switch statement, which takes a String type as an argument.

public void processTrade(Trade t) {
    String status = t.getStatus();
    switch (status) {
        case NEW:
            newTrade(t);
            break;
        case EXECUTE:
            executeTrade(t);
            break;
        case PENDING:
            pendingTrade(t);
            break;
        default:
            break;
    }
}

In the above program, the status field is always compared against the case label by using the String.equals() method.

Automatic resource management

Resources such as Connections, Files and Input/OutputStreams should be closed manually by the developer by writing bog-standard code. Usually we use a try-finally block to close the respective resources. See the current practice of creating a resource, using it, and finally closing it:

public void oldTry() {
    FileOutputStream fos = null;
    DataOutputStream dos = null;
    try {
        fos = new FileOutputStream("movies.txt");
        dos = new DataOutputStream(fos);
        dos.writeUTF("Java 7 Block Buster");
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        try {
            dos.close();
            fos.close();
        } catch (IOException e) {
            // log the exception
        }
    }
}

However, Java 7 has introduced another cool feature to manage resources automatically. It is simple in operation, too.
All we have to do is declare the resources in the try as follows:

try (resources_to_be_closed) {
    // your code
}

The above method with the old try-finally can be rewritten using this new feature as shown below:

public void newTry() {
    try (FileOutputStream fos = new FileOutputStream("movies.txt");
         DataOutputStream dos = new DataOutputStream(fos)) {
        dos.writeUTF("Java 7 Block Buster");
    } catch (IOException e) {
        // log the exception
    }
}

The above code also demonstrates another aspect of this feature: working with multiple resources. The FileOutputStream and DataOutputStream resources are declared in the try statement one after the other, separated by a semicolon (;). We do not have to nullify or close the streams manually, as they are closed automatically once control exits the try block. Behind the scenes, resources that should be auto-closed must implement the java.lang.AutoCloseable interface. Any resource that implements the AutoCloseable interface is a candidate for automatic resource management. AutoCloseable is the parent of the java.io.Closeable interface and has just one method, close(), which is called when control comes out of the try block.

Numeric literals with underscores

Numeric literals are definitely eye strainers. I am sure you would start counting the zeroes like me if you were given a number with, say, ten zeros. It's quite error-prone and cumbersome to tell whether a literal is a million or a billion unless you count the places from right to left. Not anymore. Java 7 introduced underscores for marking the places. For example, you can declare 1000 as shown below:

int thousand = 1_000;

or 1000000 (one million) as follows:

int million = 1_000_000;

Note that binary literals are also introduced in this release, for example 0b1, so developers don't have to convert them to hexadecimal any more.

Improved exception handling

There are a couple of improvements in the exception handling area.
Java 7 introduced multi-catch functionality to catch multiple exception types using a single catch block. Let's say you have a method that throws three exceptions. Previously, you would deal with them individually as shown below:

public void oldMultiCatch() {
    try {
        methodThatThrowsThreeExceptions();
    } catch (ExceptionOne e) {
        // log and deal with ExceptionOne
    } catch (ExceptionTwo e) {
        // log and deal with ExceptionTwo
    } catch (ExceptionThree e) {
        // log and deal with ExceptionThree
    }
}

Catching an endless number of exceptions one after the other in catch blocks looks cluttered, and I have seen code that catches a dozen exceptions, too. This is verbose and error-prone. Java 7 has brought in a new language change to address this ugly duckling. See the improved version of the oldMultiCatch method below:

public void newMultiCatch() {
    try {
        methodThatThrowsThreeExceptions();
    } catch (ExceptionOne | ExceptionTwo | ExceptionThree e) {
        // log and deal with all Exceptions
    }
}

The multiple exceptions are caught in one catch block by using the '|' operator. This way, you do not have to write dozens of exception catches. However, if you have a bunch of exceptions that belong to different types and want to handle some of them together, you can mix multi-catch blocks with ordinary catch blocks. The following snippet illustrates this:

public void newMultiMultiCatch() {
    try {
        methodThatThrowsThreeExceptions();
    } catch (ExceptionOne e) {
        // log and deal with ExceptionOne
    } catch (ExceptionTwo | ExceptionThree e) {
        // log and deal with ExceptionTwo and ExceptionThree
    }
}

In the above case, ExceptionTwo and ExceptionThree belong to a different hierarchy than ExceptionOne, so you want to handle them differently, but with a single catch block for the pair.

New file system API (NIO 2.0)

Those who have worked with Java IO may still remember the headaches that framework caused. It was never easy to work seamlessly across operating systems or multiple file systems.
There were methods such as delete or rename that behaved unexpectedly in most cases. Working with symbolic links was another issue. In essence, the API needed an overhaul. With the intention of solving these problems, Java 7 introduced an overhauled and in many places new API. NIO 2.0 has come forward with many enhancements. It also introduced new classes to ease the life of a developer working with multiple file systems.

Working with Path

A new java.nio.file package consists of classes and interfaces such as Path, Paths, FileSystem, FileSystems and others. A Path is simply a reference to a file path. It is the equivalent of (and has more features than) java.io.File. The following snippet shows how to obtain a path reference to a "temp.txt" file:

public void pathInfo() {
    Path path = Paths.get("c:\\Temp\\temp.txt");
    System.out.println("Number of Nodes:" + path.getNameCount());
    System.out.println("File Name:" + path.getFileName());
    System.out.println("File Root:" + path.getRoot());
    System.out.println("File Parent:" + path.getParent());
}

The console output would be:

Number of Nodes:2
File Name:temp.txt
File Root:c:\
File Parent:c:\Temp

Deleting a file or directory is as simple as invoking the delete method on the Files (note the plural) class. The Files class exposes two delete methods, one that throws NoSuchFileException and one that does not. The following delete method invocation throws NoSuchFileException if the file does not exist, so you have to handle it:

Files.delete(path);

Whereas Files.deleteIfExists(path) does not throw an exception (as expected) if the file/directory does not exist. You can use other utility methods such as Files.copy(..) and Files.move(..) to act on a file system efficiently. Similarly, use the createSymbolicLink(..) method to create symbolic links from your code.

File change notifications

One of my favorite improvements in the JDK 7 release is the addition of File Change Notifications.
This has been a long-awaited feature that finally made it into NIO 2.0. The WatchService API lets you receive notification events upon changes to a watched subject (directory or file). The steps involved in using the API are:

Create a WatchService. This service holds a queue of WatchKeys.
Register the directory/file you wish to monitor with this WatchService.
While registering, specify the types of events you wish to receive (create, modify or delete events).
Start an infinite loop to listen for events.
When an event occurs, a WatchKey is placed into the queue.
Consume the WatchKey and invoke queries on it.

Let's follow this via an example. We create a DirPolice Java program whose responsibility is to police a particular directory. The steps are provided below:

1. Create a WatchService object:

WatchService watchService = FileSystems.getDefault().newWatchService();

2. Obtain a path reference to your watchable directory. I suggest you parameterize this directory so you don't hard code the file name:

path = Paths.get("C:\\Temp\\temp\\");

3. The next step is to register the directory with the WatchService for all types of events:

path.register(watchService, ENTRY_CREATE, ENTRY_MODIFY, ENTRY_DELETE);

These are java.nio.file.StandardWatchEventKinds event types.

4. Initiate the infinite loop and start taking the events:

while(true) {
    WatchKey key = watchService.take(); // blocks until a key is available
    …
}

5.
Run through the events on the key:

for (WatchEvent<?> event : key.pollEvents()) {
    WatchEvent.Kind<?> kind = event.kind();
    System.out.println("Event on " + event.context().toString() + " is " + kind);
}

For example, if you modify or delete the temp directory, you would see statements like these on the console:

Event on temp is ENTRY_MODIFY
Event on temp is ENTRY_DELETE

The relevant methods of the DirPolice source code are posted below (download the full source code):

/**
 * This initiates the police
 */
private void init() {
    path = Paths.get("C:\\Temp\\temp\\");
    try {
        watchService = FileSystems.getDefault().newWatchService();
        path.register(watchService, ENTRY_CREATE, ENTRY_DELETE, ENTRY_MODIFY);
    } catch (IOException e) {
        System.out.println("IOException: " + e.getMessage());
    }
}

/**
 * The police will start making rounds
 */
private void doRounds() {
    WatchKey key = null;
    while(true) {
        try {
            key = watchService.take();
            for (WatchEvent<?> event : key.pollEvents()) {
                WatchEvent.Kind<?> kind = event.kind();
                System.out.println("Event on " + event.context().toString() + " is " + kind);
            }
        } catch (InterruptedException e) {
            System.out.println("InterruptedException: " + e.getMessage());
        }
        boolean reset = key.reset();
        if(!reset) break;
    }
}

Fork and Join

The effective use of parallel cores in a Java program has always been a challenge. There were a few home-grown frameworks that would distribute the work across multiple cores and then join the results. Java 7 has incorporated this capability as the fork and join framework.

Basically the fork-join framework breaks the task at hand into mini-tasks until a mini-task is simple enough to be solved without further breakups. It's like a divide-and-conquer algorithm. One important concept in this framework is that ideally no worker thread is idle: it implements a work-stealing algorithm, in which idle workers "steal" work from workers who are busy. The core classes supporting the fork-join mechanism are ForkJoinPool and ForkJoinTask.
The ForkJoinPool is basically a specialized implementation of ExecutorService implementing the work-stealing algorithm we talked about above. We create an instance of ForkJoinPool by providing the target parallelism level, i.e. the number of processors, as shown below:

ForkJoinPool pool = new ForkJoinPool(numberOfProcessors);

where

int numberOfProcessors = Runtime.getRuntime().availableProcessors();

The default ForkJoinPool constructor sets the parallelism level to this same number.

The problem that needs to be solved is coded in a ForkJoinTask. Two implementations of this class come out of the box: RecursiveAction and RecursiveTask. The only difference between these two classes is that the former does not return a value while the latter returns an object of a specified type. Here's how to create a RecursiveAction or RecursiveTask class that represents your problem (I use the RecursiveAction class):

public class MyBigProblemTask extends RecursiveAction {
    @Override
    protected void compute() {
        . . . // your problem invocation goes here
    }
}

You have to override the compute method, in which you provide the computing functionality. Now, hand this ForkJoinTask to the executor by calling the invoke method on the ForkJoinPool:

pool.invoke(task);

Supporting dynamism

Java is a statically typed language: the type checking of variables, methods and return values is performed at compile time. The JVM executes this strongly-typed bytecode at runtime without having to worry about finding the type information.

There's another breed of typed languages: the dynamically typed languages. Ruby, Python and Clojure are in this category. The type information is unresolved until runtime in these languages. This is not straightforward on the JVM, as the bytecode would not carry the necessary type information. There has been increasing pressure on the Java platform to run dynamic languages efficiently.
Although it is possible to run these languages on the JVM (for example via reflection), it is not without constraints and restrictions. Java 7 introduces a new feature called invokedynamic: a JVM-level change that accommodates non-Java language requirements. A new package, java.lang.invoke, consisting of classes such as MethodHandle, CallSite and others, has been created to extend the support for dynamic languages.
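To make java.lang.invoke a little more concrete, here is a minimal sketch that looks up String.length() as a MethodHandle and invokes it. This is a plain illustration of the API, not an invokedynamic call site itself (those are normally emitted by language compilers):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class MethodHandleDemo {
    public static void main(String[] args) throws Throwable {
        // Look up String.length() as a virtual method handle:
        // receiver type String, return type int, no other arguments.
        MethodHandle length = MethodHandles.lookup()
                .findVirtual(String.class, "length", MethodType.methodType(int.class));

        // invokeExact requires the call signature to match the handle's type exactly
        int n = (int) length.invokeExact("hello");
        System.out.println(n); // prints 5
    }
}
```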
What is java.lang.OutOfMemoryError in Java? OutOfMemoryError in Java is a subclass of java.lang.VirtualMachineError; the JVM throws java.lang.OutOfMemoryError when it runs out of memory in the heap. OutOfMemoryError can occur at any time, mostly when you try to create an object and there is not enough space on the heap to allocate it. The Javadoc of OutOfMemoryError is not very informative about this, though. I have seen mainly two types of OutOfMemoryError in Java:

1) java.lang.OutOfMemoryError: Java heap space
2) java.lang.OutOfMemoryError: PermGen space

Though both occur because the JVM ran out of memory, they are quite different from each other and their solutions are independent. Since in most JVMs the default size of Perm space is around 64MB, you can easily run out of memory if you have too many classes or a huge number of Strings in your project.

How to solve java.lang.OutOfMemoryError: Java heap space

1) An easy way to solve this OutOfMemoryError is to increase the maximum heap size using the JVM option "-Xmx512M"; this will often immediately resolve the error. This is my preferred solution when I get OutOfMemoryError in Eclipse, Maven or ANT while building a project, because depending on the size of the project you can easily run out of memory. Here is an example of increasing the maximum heap size of the JVM; it is also better to keep the -Xms to -Xmx ratio at either 1:1 or 1:1.5 if you are setting the heap size in your Java application:

export JVM_ARGS="-Xms1024m -Xmx1024m"

2) The second way to resolve OutOfMemoryError in Java is harder, and applies when you don't have much memory and still get java.lang.OutOfMemoryError even after increasing the maximum heap size. In this case, you probably want to profile your application and look for memory leaks. You can use Eclipse Memory Analyzer to examine your heap dump, or any profiler such as NetBeans or JProbe.
This is a tougher solution and requires some time to analyze and find memory leaks.

How to solve java.lang.OutOfMemoryError: PermGen space

As explained above, this OutOfMemoryError occurs when the permanent generation of the heap fills up. To fix it, you need to increase the size of Perm space using the JVM option "-XX:MaxPermSize". You can also specify the initial size of Perm space using "-XX:PermSize"; by setting both the initial and maximum Perm space you can prevent some full garbage collections that may occur when Perm space gets resized. Here is how you can specify initial and maximum Perm size in Java:

export JVM_ARGS="-XX:PermSize=64M -XX:MaxPermSize=256m"

Sometimes java.lang.OutOfMemoryError gets tricky, and in those cases profiling remains the ultimate solution. Though you have the freedom to increase the heap size in Java, it is recommended to follow memory-management practices while coding, such as setting unused references to null.
Print strings in sequence with multiple threads

public class MTest {
    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 1000; i++) {
                    System.out.println("hello " + i);
                }
            }
        });
        Thread t2 = new Thread(new Runnable() {
            public void run() {
                System.out.println("firstName");
            }
        });
        Thread t3 = new Thread(new Runnable() {
            public void run() {
                System.out.println("lastName");
            }
        });
        t1.start();
        t1.join();   // wait for t1 to finish before starting t2
        t2.start();
        t2.join();   // wait for t2 to finish before starting t3
        t3.start();
    }
}
Create DeadLock in Java

public class MyDeadlock {
    String str1 = "Java";
    String str2 = "UNIX";

    Thread trd1 = new Thread("My Thread 1") {
        public void run() {
            while (true) {
                synchronized (str1) {
                    synchronized (str2) {
                        System.out.println(str1 + str2);
                    }
                }
            }
        }
    };

    Thread trd2 = new Thread("My Thread 2") {
        public void run() {
            while (true) {
                synchronized (str2) {
                    synchronized (str1) {
                        System.out.println(str2 + str1);
                    }
                }
            }
        }
    };

    public static void main(String a[]) {
        MyDeadlock mdl = new MyDeadlock();
        mdl.trd1.start();
        mdl.trd2.start();
    }
}
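The deadlock above arises because the two threads acquire the same two locks in opposite order. A common remedy is to impose a single global lock order. The sketch below (class and lock names are illustrative, not from the original) acquires the locks in the same order in both tasks, so no circular wait can form; it also locks on dedicated Object instances rather than interned String literals, which can be shared JVM-wide:

```java
public class NoDeadlock {
    private final Object lockA = new Object(); // avoid locking on String literals
    private final Object lockB = new Object();

    // Both methods take lockA before lockB, so circular wait is impossible.
    public void taskOne() {
        synchronized (lockA) {
            synchronized (lockB) {
                System.out.println("taskOne holds both locks");
            }
        }
    }

    public void taskTwo() {
        synchronized (lockA) {   // same acquisition order as taskOne
            synchronized (lockB) {
                System.out.println("taskTwo holds both locks");
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        NoDeadlock d = new NoDeadlock();
        Thread t1 = new Thread(d::taskOne);
        Thread t2 = new Thread(d::taskTwo);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("done"); // always completes; no deadlock
    }
}
```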
ThreadPoolExecutor

The java.util.concurrent.ThreadPoolExecutor is an implementation of the ExecutorService interface. The ThreadPoolExecutor executes a given task (Callable or Runnable) using one of its internally pooled threads. The thread pool contained inside the ThreadPoolExecutor can contain a varying number of threads. The number of threads in the pool is determined by these variables: corePoolSize and maximumPoolSize. If fewer than corePoolSize threads exist in the thread pool when a task is delegated to it, a new thread is created, even if idle threads exist in the pool. If the internal queue of tasks is full, and corePoolSize or more threads are running but fewer than maximumPoolSize, then a new thread is created to execute the task. However, unless you need to specify all these parameters explicitly for your ThreadPoolExecutor, it is often easier to use one of the factory methods in the java.util.concurrent.Executors class, as shown in the ExecutorService text.

ExecutorService Example

Here is a simple Java ExecutorService example:

ExecutorService executorService = Executors.newFixedThreadPool(10);
executorService.execute(new Runnable() {
    public void run() {
        System.out.println("Asynchronous task");
    }
});
executorService.shutdown();

First an ExecutorService is created using the newFixedThreadPool() factory method. This creates a thread pool with 10 threads executing tasks. Second, an anonymous implementation of the Runnable interface is passed to the execute() method. This causes the Runnable to be executed by one of the threads in the ExecutorService.
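For completeness, here is a sketch of constructing a ThreadPoolExecutor with explicit corePoolSize and maximumPoolSize values, as described above. The specific numbers, queue capacity, and keep-alive time are illustrative choices, not from the original text:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // corePoolSize = 2, maximumPoolSize = 4,
        // idle threads above the core count die after 60 seconds,
        // pending tasks wait in a bounded queue of capacity 10.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(10));

        for (int i = 0; i < 5; i++) {
            final int id = i;
            executor.execute(() -> System.out.println(
                    "task " + id + " on " + Thread.currentThread().getName()));
        }

        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

With these settings, extra threads beyond the core two are only created once the queue of 10 tasks fills up, matching the queueing rule described in the paragraph above.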
Life cycle of a Thread (Thread States)

A thread can be in one of five states. According to Sun, there are only four states in the Java thread life cycle: new, runnable, non-runnable and terminated; there is no separate running state. But for a better understanding of threads, we explain them using five states. The life cycle of a thread in Java is controlled by the JVM. The Java thread states are as follows:

1. New
2. Runnable
3. Running
4. Non-Runnable (Blocked)
5. Terminated

1) New: The thread is in the new state if you have created an instance of the Thread class but have not yet invoked the start() method.
2) Runnable: The thread is in the runnable state after invocation of the start() method, but the thread scheduler has not yet selected it to be the running thread.
3) Running: The thread is in the running state if the thread scheduler has selected it.
4) Non-Runnable (Blocked): This is the state when the thread is still alive but is currently not eligible to run.
5) Terminated: A thread is in the terminated or dead state when its run() method exits.
Java.lang.Thread.yield() Method

Description
The java.lang.Thread.yield() method causes the currently executing thread object to temporarily pause and allow other threads to execute.

Declaration
Following is the declaration for the java.lang.Thread.yield() method:

public static void yield()

Parameters
NA

Return Value
This method does not return any value.

Exception
NA

Example

import java.lang.*;

public class ThreadDemo implements Runnable {
    Thread t;

    ThreadDemo(String str) {
        t = new Thread(this, str);
        // this will call the run() method
        t.start();
    }

    public void run() {
        for (int i = 0; i < 5; i++) {
            // yields control at the start of every batch of 5 iterations
            if ((i % 5) == 0) {
                System.out.println(Thread.currentThread().getName() + " yielding control...");
                /* causes the currently executing thread object to temporarily
                   pause and allow other threads to execute */
                Thread.yield();
            }
        }
        System.out.println(Thread.currentThread().getName() + " has finished executing.");
    }

    public static void main(String[] args) {
        new ThreadDemo("Thread 1");
        new ThreadDemo("Thread 2");
        new ThreadDemo("Thread 3");
    }
}

A possible output (the exact interleaving may vary):

Thread 1 yielding control...
Thread 2 yielding control...
Thread 3 yielding control...
Thread 1 has finished executing.
Thread 2 has finished executing.
Thread 3 has finished executing.
Java.lang.Thread.join() Method

Description
The java.lang.Thread.join() method waits for this thread to die.

Declaration
Following is the declaration for the java.lang.Thread.join() method:

public final void join() throws InterruptedException

Parameters
NA

Return Value
This method does not return any value.

Exception
InterruptedException − if any thread has interrupted the current thread. The interrupted status of the current thread is cleared when this exception is thrown.

Example

import java.lang.*;

public class ThreadDemo implements Runnable {
    public void run() {
        Thread t = Thread.currentThread();
        System.out.print(t.getName());
        // checks if this thread is alive
        System.out.println(", status = " + t.isAlive());
    }

    public static void main(String args[]) throws Exception {
        Thread t = new Thread(new ThreadDemo());
        // this will call the run() method
        t.start();
        // waits for this thread to die
        t.join();
        System.out.print(t.getName());
        // checks if this thread is alive
        System.out.println(", status = " + t.isAlive());
    }
}

Output:

Thread-0, status = true
Thread-0, status = false

join method with time arguments:

/**
 * Waits at most {@code millis} milliseconds for this thread to
 * die. A timeout of {@code 0} means to wait forever.
 *
 * This implementation uses a loop of {@code this.wait} calls
 * conditioned on {@code this.isAlive}. As a thread terminates the
 * {@code this.notifyAll} method is invoked. It is recommended that
 * applications not use {@code wait}, {@code notify}, or
 * {@code notifyAll} on {@code Thread} instances.
 *
 * @param millis the time to wait in milliseconds
 *
 * @throws IllegalArgumentException
 *         if the value of {@code millis} is negative
 *
 * @throws InterruptedException
 *         if any thread has interrupted the current thread. The
 *         interrupted status of the current thread is cleared when
 *         this exception is thrown.
 */
public final synchronized void join(long millis)
What is the difference between preemptive scheduling and time slicing? Preemptive scheduling: The highest priority task executes until it enters the waiting or dead states or a higher priority task comes into existence. Time slicing: A task executes for a predefined slice of time and then reenters the pool of ready tasks. The scheduler then determines which task should execute next, based on priority and other factors.
What is Inheritance in Java?

Inheritance in Java, or in OOPS (object-oriented programming) generally, is a feature that allows code reusability. In other words, inheritance means acquiring something from others. Along with abstraction, encapsulation and polymorphism, inheritance forms the backbone of object-oriented programming and Java. In Java, we use the term inheritance when one object acquires some property from another object. In Java, inheritance is defined in terms of superclass and subclass. It is normally used when some object wants to use an existing feature of some class and also wants to provide some special feature, so we can say inheritance gives the advantage of reusability. Inheritance between a superclass and a subclass forms an IS-A relationship, which means you can use any subclass object in place of the superclass object; e.g. if a method expects a superclass object, you can pass a subclass object to it. Inheritance in Java is also used to provide concrete implementations of abstract classes and interfaces.

Inheritance in Java - Things to remember

Here are some important points about inheritance in Java which are worth remembering:
- A subclass can extend only one superclass in Java, but it can implement multiple interfaces.
- A private member of the superclass cannot be inherited by a subclass, e.g. private fields and private methods.
- A default (package-private) member can only be inherited by a subclass in the same package, not in another package.
- Constructors in Java are not inherited by the subclass.
- If a class implements an interface or extends an abstract class, it needs to override all abstract methods unless the class itself is declared abstract.
- Multiple inheritance of classes is not supported in Java, but we can achieve something similar by using interfaces: one class can implement multiple interfaces.
- In Java a class never extends an interface; rather, it implements the interface.
- One interface can extend another interface in Java.
Why is multiple inheritance not supported in Java? The diamond problem

The "diamond problem" (sometimes referred to as the "deadly diamond of death") is an ambiguity that arises when two classes B and C inherit from A, and class D inherits from both B and C. If there is a method in A that B and C have overridden, and D does not override it, then which version of the method does D inherit: that of B, or that of C? For example, in the context of GUI software development, a class Button may inherit from both classes Rectangle (for appearance) and Clickable (for functionality/input handling), and classes Rectangle and Clickable both inherit from the Object class. Now if the equals method is called for a Button object and there is no such method in the Button class, but there is an overridden equals method in Rectangle or Clickable (or both), which method should eventually be called? It is called the "diamond problem" because of the shape of the class inheritance diagram in this situation: class A sits at the top, B and C sit separately beneath it, and D joins the two together at the bottom to form a diamond shape. Java 8 introduces default methods on interfaces. If A, B and C are interfaces, B and C can each provide a different implementation of an abstract method of A, causing the diamond problem. Either class D must reimplement the method (the body of which can simply forward the call to one of the super implementations), or the ambiguity will be rejected as a compile error. Prior to Java 8, Java was not subject to the diamond problem, as Java does not support multiple inheritance of classes.

How does HashMap work in Java?

Single-statement answer: if anybody asks me to describe "How does HashMap work?", I simply answer: "On the principle of hashing". As simple as that. Before going any further, one must be sure to know at least the basics of hashing. Right?
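The Java 8 default-method case just described can be shown in a few lines. The interface and class names below are illustrative:

```java
// Two interfaces provide conflicting default implementations of hello().
interface B {
    default String hello() { return "from B"; }
}

interface C {
    default String hello() { return "from C"; }
}

// D implements both; without an override this is a compile error
// ("inherits unrelated defaults for hello()"), so D must resolve it.
class D implements B, C {
    @Override
    public String hello() {
        // Explicitly forward to one parent using Interface.super
        return B.super.hello();
    }
}

public class DiamondDemo {
    public static void main(String[] args) {
        System.out.println(new D().hello()); // prints "from B"
    }
}
```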
What is Hashing?

Hashing, in its simplest form, is a way of assigning a unique code to any variable/object after applying a formula/algorithm to its properties. A true hash function must follow this rule: the hash function should return the same hash code each and every time it is applied to the same or equal objects. In other words, two equal objects must consistently produce the same hash code. All objects in Java inherit a default implementation of the hashCode() function defined in the Object class. This function produces a hash code by typically converting the internal address of the object into an integer, thus producing different hash codes for different objects.

A little about the Entry class

A map, by definition, is "an object that maps keys to values". Very easy, right? So there must be some mechanism in HashMap to store these key-value pairs. The answer is YES. HashMap has an inner class Entry, which looks like this:

static class Entry<K,V> implements Map.Entry<K,V> {
    final K key;
    V value;
    Entry<K,V> next;
    final int hash;
    ... // More code goes here
}

What the put() method actually does

Before going into the put() method's implementation, it is very important to know that instances of the Entry class are stored in an array. The HashMap class defines this variable as:

/**
 * The table, resized as necessary. Length MUST always be a power of two.
 */
transient Entry[] table;

Now look at the code implementation of the put() method:

/**
 * Associates the specified value with the specified key in this map. If the
 * map previously contained a mapping for the key, the old value is
 * replaced.
 *
 * @param key key with which the specified value is to be associated
 * @param value value to be associated with the specified key
 * @return the previous value associated with key, or null if there was no
 *         mapping for key. (A null return can also indicate that the map
 *         previously associated null with key.)
 */
public V put(K key, V value) {
    if (key == null)
        return putForNullKey(value);
    int hash = hash(key.hashCode());
    int i = indexFor(hash, table.length);
    for (Entry<K,V> e = table[i]; e != null; e = e.next) {
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;
        }
    }
    modCount++;
    addEntry(hash, key, value, i);
    return null;
}

Let's note down the steps one by one:

1) First of all, the key object is checked for null. If the key is null, the value is stored at the table[0] position, because the hash code for null is always 0.
2) Next, a hash value is calculated from the key's hash code by calling its hashCode() method. This hash value is used to calculate the index in the array for storing the Entry object. The JDK designers assumed that there might be some poorly written hashCode() functions that return very high or low hash code values. To solve this issue, they introduced another hash() function and passed the object's hash code through it to bring the hash value into the range of the array's index size.
3) Now the indexFor(hash, table.length) function is called to calculate the exact index position for storing the Entry object.
4) Here comes the main part. As we know, two unequal objects can have the same hash code value, so how will two different objects be stored in the same array location [called a bucket]? The answer is a linked list. If you remember, the Entry class has an attribute "next". This attribute always points to the next object in the chain; this is exactly the behavior of a linked list. So, in case of collision, Entry objects are stored in linked-list form. When an Entry object needs to be stored at a particular index, HashMap checks whether there is already an entry there. If there is no entry present, the Entry object is stored at this location. If there is already an object at the calculated index, its next attribute is checked. If it is null, the current Entry object becomes the next node in the linked list.
If the next variable is not null, the procedure is followed until next evaluates to null. What if we add another value object with the same key as entered before? Logically, it should replace the old value. How is that done? Well, after determining the index position of the Entry object, while iterating over the linked list at the calculated index, HashMap calls the equals method on the key object for each Entry object. All these Entry objects in the linked list have the same hash code, but the equals() method tests for true equality. If key.equals(k) is true, then both keys are treated as the same key object, and this causes the replacement of the value object inside the Entry object only. In this way, HashMap ensures the uniqueness of keys.

How the get() method works internally

Now we have an idea of how key-value pairs are stored in HashMap. The next big question is: what happens when an object is passed to the get method of HashMap? How is the value object determined? The answer: the same logic used to determine key uniqueness in the put() method is applied in the get() method as well. The moment HashMap identifies an exact match for the key object passed as an argument, it simply returns the value object stored in the current Entry object. If no match is found, the get() method returns null. Let's have a look at the code:

/**
 * Returns the value to which the specified key is mapped, or {@code null}
 * if this map contains no mapping for the key.
 *
 * More formally, if this map contains a mapping from a key {@code k} to a
 * value {@code v} such that {@code (key==null ? k==null : key.equals(k))},
 * then this method returns {@code v}; otherwise it returns {@code null}.
 * (There can be at most one such mapping.)
 *
 * A return value of {@code null} does not necessarily indicate that
 * the map contains no mapping for the key; it's also possible that the map
 * explicitly maps the key to {@code null}. The {@link #containsKey
 * containsKey} operation may be used to distinguish these two cases.
 *
 * @see #put(Object, Object)
 */
public V get(Object key) {
    if (key == null)
        return getForNullKey();
    int hash = hash(key.hashCode());
    for (Entry<K,V> e = table[indexFor(hash, table.length)]; e != null; e = e.next) {
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k)))
            return e.value;
    }
    return null;
}

The code above is the same as the put() method up to if (e.hash == hash && ((k = e.key) == key || key.equals(k))); after this, the value object is simply returned.

Key Notes
- The data structure used to store Entry objects is an array named table, of type Entry[].
- A particular index location in the array is referred to as a bucket, because it can hold the first element of a linked list of Entry objects.
- The key object's hashCode() is required to calculate the index location of the Entry object.
- The key object's equals() method is used to maintain the uniqueness of keys in the map.
- The value object's hashCode() and equals() methods are not used in HashMap's get() and put() methods.
- The hash code for null keys is always zero, and such an Entry object is always stored at index zero in Entry[].

[Update] Improvements in Java 8

As part of the work for JEP 180, there is a performance improvement for HashMap objects where there are lots of collisions in the keys, by using balanced trees rather than linked lists to store map entries. The principal idea is that once the number of items in a hash bucket grows beyond a certain threshold, that bucket switches from using a linked list of entries to a balanced tree. In the case of high hash collisions, this improves worst-case performance from O(n) to O(log n). Basically, when a bucket becomes too big (currently TREEIFY_THRESHOLD = 8), HashMap dynamically replaces it with an ad-hoc implementation of a tree map. This way, rather than pessimistic O(n), we get much better O(log n). Bins (elements or nodes) of TreeNodes may be traversed and used like any others, but additionally support faster lookup when overpopulated. However, since the vast majority of bins in normal use are not overpopulated, checking for the existence of tree bins may be delayed in the course of table methods.
Tree bins (i.e., bins whose elements are all TreeNodes) are ordered primarily by hashCode, but in the case of ties, if two elements are of the same "class C implements Comparable<C>" type, then their compareTo() method is used for ordering. Because TreeNodes are about twice the size of regular nodes, they are used only when bins contain enough nodes. And when they become too small (due to removal or resizing) they are converted back to plain bins (currently UNTREEIFY_THRESHOLD = 6). In usages with well-distributed user hashCodes, tree bins are rarely used. I hope I have correctly communicated my thoughts in this article. If you find anything different or need any help on any point, please drop a comment. Happy Learning !! http://howtodoinjava.com/core-java/collections/how-hashmap-works-in-java/
Working with hashCode and equals methods in Java

The general contract of hashCode is:
- Whenever it is invoked on the same object more than once during an execution of a Java application, the hashCode method must consistently return the same integer, provided no information used in equals comparisons on the object is modified. This integer need not remain consistent from one execution of an application to another execution of the same application.
- If two objects are equal according to the equals(Object) method, then calling the hashCode method on each of the two objects must produce the same integer result.
- It is not required that if two objects are unequal according to the equals(java.lang.Object) method, then calling the hashCode method on each of the two objects must produce distinct integer results. However, the programmer should be aware that producing distinct integer results for unequal objects may improve the performance of hash tables.

Usage of hashCode() and equals()

The hashCode() method is used to get an integer code for a given object. This integer is used to determine the bucket location when the object needs to be stored in a hash-table-like data structure. By default, Object's hashCode() method returns an integer representation of the memory address where the object is stored. The equals() method, as the name suggests, is used simply to verify the equality of two objects. The default implementation simply checks the object references of the two objects to verify their equality.

Overriding the default behavior

Everything works fine as long as you do not override either of these methods in your classes. But sometimes an application needs to change the default behavior of some objects. Let's take an example where your application has an Employee object.
Let's create a minimal possible structure of the Employee class:

public class Employee {
    private Integer id;
    private String firstname;
    private String lastName;
    private String department;

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getFirstname() { return firstname; }
    public void setFirstname(String firstname) { this.firstname = firstname; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
    public String getDepartment() { return department; }
    public void setDepartment(String department) { this.department = department; }
}

The Employee class above has some very basic attributes and their accessor methods. Now consider a simple situation where you need to compare two Employee objects:

public class EqualsTest {
    public static void main(String[] args) {
        Employee e1 = new Employee();
        Employee e2 = new Employee();
        e1.setId(100);
        e2.setId(100);
        // Prints false in console
        System.out.println(e1.equals(e2));
    }
}

No prize for guessing. The code above will print "false". But is that really correct, given that both objects represent the same employee? In a real application, this should return true. To achieve the correct behavior, we need to override the equals method as below:

public boolean equals(Object o) {
    if (o == null) {
        return false;
    }
    if (o == this) {
        return true;
    }
    if (getClass() != o.getClass()) {
        return false;
    }
    Employee e = (Employee) o;
    // compare the Integer ids with equals(), not ==, to avoid reference comparison
    return this.getId().equals(e.getId());
}

Add this method to your Employee class, and EqualsTest will start returning "true". So are we done? Not yet. Let's test the modified Employee class again in a different way.
import java.util.HashSet;
import java.util.Set;

public class EqualsTest {
    public static void main(String[] args) {
        Employee e1 = new Employee();
        Employee e2 = new Employee();
        e1.setId(100);
        e2.setId(100);
        // Prints 'true'
        System.out.println(e1.equals(e2));

        Set<Employee> employees = new HashSet<Employee>();
        employees.add(e1);
        employees.add(e2);
        // Prints two objects
        System.out.println(employees);
    }
}

The class above prints two objects in the second print statement. If both employee objects were equal, then in a Set, which stores only unique objects, there should be only one instance; after all, both objects refer to the same employee. What are we missing? We are missing the second important method: hashCode(). As the Java docs say, if you override the equals() method then you must also override the hashCode() method. So let's add another method to our Employee class:

@Override
public int hashCode() {
    final int PRIME = 31;
    int result = 1;
    result = PRIME * result + getId();
    return result;
}

Once the method above is added to the Employee class, the second statement starts printing only a single object, thus validating the true equality of e1 and e2.

Overriding hashCode() and equals() using Apache Commons Lang

Apache Commons provides two excellent utility classes, HashCodeBuilder and EqualsBuilder, for generating hash code and equals methods.
Below is its usage:

import org.apache.commons.lang3.builder.EqualsBuilder;
import org.apache.commons.lang3.builder.HashCodeBuilder;

public class Employee {
    private Integer id;
    private String firstname;
    private String lastName;
    private String department;

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getFirstname() { return firstname; }
    public void setFirstname(String firstname) { this.firstname = firstname; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
    public String getDepartment() { return department; }
    public void setDepartment(String department) { this.department = department; }

    @Override
    public int hashCode() {
        // both constructor arguments must be odd numbers
        return new HashCodeBuilder(17, 31).append(getId()).toHashCode();
    }

    @Override
    public boolean equals(Object o) {
        if (o == null) return false;
        if (o == this) return true;
        if (o.getClass() != getClass()) return false;
        Employee e = (Employee) o;
        return new EqualsBuilder()
                .append(getId(), e.getId())
                .isEquals();
    }
}

Alternatively, if you are using a code editor, it is most likely capable of generating a good implementation for you. For example, Eclipse IDE has an option, under right-click on class >> Source >> Generate hashCode() and equals()..., that will generate a very good implementation.

Important things to remember

1) Always use the same attributes of an object to generate both hashCode() and equals(). In our case, we used the employee id.
2) equals() must be consistent (if the objects are not modified, then it must keep returning the same value).
3) Whenever a.equals(b) is true, a.hashCode() must be the same as b.hashCode().
4) If you override one, then you should override the other.

Special Attention When Using an ORM

If you're dealing with an ORM, make sure to always use getters, and never field references, in hashCode() and equals().
The reason is that in an ORM, fields are occasionally lazily loaded and not available until their getter methods are called. For example, in our Employee class, if we use e1.id == e2.id, it is quite possible that the id field is lazily loaded; in that case one side might be zero or null, resulting in incorrect behavior. But if we use e1.getId() == e2.getId(), we can be sure that even if the field is lazily loaded, calling the getter will populate it first. This is all I know about the hashCode() and equals() methods. I hope it will help someone somewhere. If you feel I am missing something or am wrong somewhere, please leave a comment. I will update this post again to help others. Happy Learning !! http://howtodoinjava.com/core-java/basics/working-with-hashcode-and-equals-methods-in-java/

Using HashMap in non-synchronized code in a multi-threaded application

In normal cases, this can leave the HashMap in an inconsistent state where the key-value pairs added and retrieved differ. Apart from this, other surprising behavior such as NullPointerException can come into the picture. In the worst case, it can cause an infinite loop. Yes, you read that right: it can cause an infinite loop. How? Well, here is the reason. HashMap has the concept of rehashing when it reaches its upper size limit. Rehashing is the process of creating a new memory area and copying all the already-present key-value pairs into the new memory area. Let's say thread A tries to put a key-value pair into the map and rehashing starts; at the same time, thread B comes along and starts manipulating the buckets with a put operation. During the rehashing process, there is a chance of generating a cyclic dependency, where an element in a linked list [in some bucket] can point to a previous node in the same bucket. This results in an infinite loop, because the lookup code contains a "while (true) { // get next node; }" block, and with a cyclic dependency it runs forever.
To see this closely, look at the source code of the old (pre-Java 8) get() method; its while (true) loop never terminates if concurrent rehashing has produced a cycle in a bucket's linked list:

public Object get(Object key) {
    Object k = maskNull(key);
    int hash = hash(k);
    int i = indexFor(hash, table.length);
    Entry e = table[i];
    // with a cyclic bucket list, e never becomes null and this loops forever
    while (true) {
        if (e == null)
            return e;
        if (e.hash == hash && eq(k, e.key))
            return e.value;
        e = e.next;
    }
}

I will write a more detailed article on this in the future. I hope I was able to put some more items in your knowledge bucket. If you found this article helpful, please consider sharing it with your friends. Happy Learning !!
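When a map really must be shared across threads, the standard remedy (not covered in the text above) is java.util.concurrent.ConcurrentHashMap, which is safe under concurrent access and does not suffer from the cyclic-rehash problem. A minimal sketch, with an arbitrary iteration count:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SafeCounts {
    // Two threads each add 1000 to the same key; the result is always 2000.
    static int run() throws InterruptedException {
        Map<String, Integer> counts = new ConcurrentHashMap<>();

        Runnable work = () -> {
            for (int i = 0; i < 1_000; i++) {
                // merge() is an atomic read-modify-write; no external locking needed
                counts.merge("hits", 1, Integer::sum);
            }
        };

        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return counts.get("hits");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("hits = " + run()); // always 2000
    }
}
```

With a plain HashMap in place of ConcurrentHashMap, the same code could lose updates, throw unexpectedly, or (pre-Java 8) loop forever as described above.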
Volatile vs Atomic The effect of the volatile keyword is that each individual read or write of that variable is atomic and immediately visible to other threads. Notably, however, an operation that requires more than one read/write -- such as i++, which is equivalent to i = i + 1 and does one read and one write -- is not atomic, since another thread may write to i between the read and the write. The atomic classes, such as AtomicInteger and AtomicReference, provide a wider variety of operations atomically, specifically including increment for AtomicInteger.
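The difference is easy to demonstrate. A sketch (class and method names are mine): two threads increment a volatile int and an AtomicInteger the same number of times; the atomic counter always ends up exact, while the volatile one may lose updates.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {

    // volatile guarantees visibility, but count++ is still a separate
    // read and write, so updates from two threads can be lost.
    static volatile int volatileCount = 0;

    // AtomicInteger performs read-modify-write as a single atomic step.
    static final AtomicInteger atomicCount = new AtomicInteger();

    static int raceBothCounters() {
        volatileCount = 0;
        atomicCount.set(0);
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                volatileCount++;               // NOT atomic: may lose updates
                atomicCount.incrementAndGet(); // atomic: never loses updates
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return atomicCount.get();
    }

    public static void main(String[] args) {
        System.out.println("atomic   = " + raceBothCounters()); // always 200000
        System.out.println("volatile = " + volatileCount);      // often less than 200000
    }
}
```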
How can I sort a LinkedHashMap based on its values, given that the LinkedHashMap maps String keys to Integer values? So I need to sort it based on the values, which are Integers.

map.entrySet().stream()
    .sorted(Map.Entry.comparingByValue())
    .forEach(entry -> System.out.println(entry.getKey() + "=" + entry.getValue()));
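If you want a sorted map back rather than just iterating the sorted entries, you can collect into a fresh LinkedHashMap, which preserves insertion order. A sketch (class and method names are mine):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class SortByValue {

    // Returns a new LinkedHashMap whose entries are ordered by ascending
    // value. LinkedHashMap preserves insertion order, so inserting the
    // sorted entries into a fresh one keeps them sorted.
    static LinkedHashMap<String, Integer> sortedByValue(Map<String, Integer> map) {
        return map.entrySet().stream()
                .sorted(Map.Entry.comparingByValue())
                .collect(Collectors.toMap(
                        Map.Entry::getKey,
                        Map.Entry::getValue,
                        (a, b) -> a,          // merge function (keys are unique anyway)
                        LinkedHashMap::new)); // keep the sorted encounter order
    }

    public static void main(String[] args) {
        Map<String, Integer> map = new LinkedHashMap<>();
        map.put("banana", 3);
        map.put("apple", 1);
        map.put("cherry", 2);
        System.out.println(sortedByValue(map)); // {apple=1, cherry=2, banana=3}
    }
}
```

The four-argument Collectors.toMap overload is needed because the default collector returns an unordered HashMap.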
Immutable Objects An object is considered immutable if its state cannot change after it is constructed. Maximum reliance on immutable objects is widely accepted as a sound strategy for creating simple, reliable code. Immutable objects are particularly useful in concurrent applications. Since they cannot change state, they cannot be corrupted by thread interference or observed in an inconsistent state. Programmers are often reluctant to employ immutable objects, because they worry about the cost of creating a new object as opposed to updating an object in place. The impact of object creation is often overestimated, and can be offset by some of the efficiencies associated with immutable objects. These include decreased overhead due to garbage collection, and the elimination of code needed to protect mutable objects from corruption. The following subsections take a class whose instances are mutable and derive a class with immutable instances from it. In doing so, they give general rules for this kind of conversion and demonstrate some of the advantages of immutable objects. Immutable objects are simply objects whose state (the object's data) cannot change after construction. Examples of immutable objects from the JDK include String and Integer.
Immutable objects greatly simplify your program, since they:
- are simple to construct, test, and use
- are automatically thread-safe and have no synchronization issues
- don't need a copy constructor
- don't need an implementation of clone
- allow hashCode to use lazy initialization, and to cache its return value
- don't need to be copied defensively when used as a field
- make good Map keys and Set elements (these objects must not change state while in the collection)
- have their class invariant established once upon construction, and it never needs to be checked again
- always have "failure atomicity" (a term used by Joshua Bloch): if an immutable object throws an exception, it's never left in an undesirable or indeterminate state

Immutable objects have a very compelling list of positive qualities. Without question, they are among the simplest and most robust kinds of classes you can possibly build. When you create immutable classes, entire categories of problems simply disappear. Make a class immutable by following these guidelines:
- ensure the class cannot be overridden - make the class final, or use static factories and keep constructors private
- make fields private and final
- force callers to construct an object completely in a single step, instead of using a no-argument constructor combined with subsequent calls to setXXX methods (that is, avoid the JavaBeans convention)
- do not provide any methods which can change the state of the object in any way - not just setXXX methods, but any method which can change state
- if the class has any mutable object fields, then they must be defensively copied when they pass between the class and its caller

In Effective Java, Joshua Bloch makes this compelling recommendation: "Classes should be immutable unless there's a very good reason to make them mutable....If a class cannot be made immutable, limit its mutability as much as possible." It's interesting to note that BigDecimal is technically not immutable, since it's not final.
Example

import java.util.Date;

/**
 * Planet is an immutable class, since there is no way to change
 * its state after construction.
 */
public final class Planet {

    public Planet(double aMass, String aName, Date aDateOfDiscovery) {
        fMass = aMass;
        fName = aName;
        // make a private copy of aDateOfDiscovery
        // this is the only way to keep the fDateOfDiscovery
        // field private, and shields this class from any changes that
        // the caller may make to the original aDateOfDiscovery object
        fDateOfDiscovery = new Date(aDateOfDiscovery.getTime());
    }

    /**
     * Returns a primitive value.
     *
     * The caller can do whatever they want with the return value, without
     * affecting the internals of this class. Why? Because this is a primitive
     * value. The caller sees its "own" double that simply has the
     * same value as fMass.
     */
    public double getMass() {
        return fMass;
    }

    /**
     * Returns an immutable object.
     *
     * The caller gets a direct reference to the internal field. But this is not
     * dangerous, since String is immutable and cannot be changed.
     */
    public String getName() {
        return fName;
    }

    // /**
    //  * Returns a mutable object - likely bad style.
    //  *
    //  * The caller gets a direct reference to the internal field. This is usually dangerous,
    //  * since the Date object state can be changed both by this class and its caller.
    //  * That is, this class is no longer in complete control of fDate.
    //  */
    // public Date getDateOfDiscovery() {
    //     return fDateOfDiscovery;
    // }

    /**
     * Returns a mutable object - good style.
     *
     * Returns a defensive copy of the field.
     * The caller of this method can do anything they want with the
     * returned Date object, without affecting the internals of this
     * class in any way. Why? Because they do not have a reference to
     * fDate. Rather, they are playing with a second Date that initially has the
     * same data as fDate.
     */
    public Date getDateOfDiscovery() {
        return new Date(fDateOfDiscovery.getTime());
    }

    // PRIVATE

    /**
     * Final primitive data is always immutable.
     */
    private final double fMass;

    /**
     * An immutable object field. (String objects never change state.)
     */
    private final String fName;

    /**
     * A mutable object field. In this case, the state of this mutable field
     * is to be changed only by this class. (In other cases, it makes perfect
     * sense to allow the state of a field to be changed outside the native
     * class; this is the case when a field acts as a "pointer" to an object
     * created elsewhere.)
     */
    private final Date fDateOfDiscovery;
}

If immutable objects are good, why do people keep creating mutable objects? Both mutable and immutable objects have their own uses, pros and cons. Immutable objects do indeed make life simpler in many cases. They are especially applicable for value types, where objects don't have an identity, so they can be easily replaced. And they can make concurrent programming way safer and cleaner (most of the notoriously hard-to-find concurrency bugs are ultimately caused by mutable state shared between threads). However, for large and/or complex objects, creating a new copy of the object for every single change can be very costly and/or tedious. And for objects with a distinct identity, changing an existing object is much simpler and more intuitive than creating a new, modified copy of it. Think about a game character. In games, speed is top priority, so representing your game characters with mutable objects will most likely make your game run significantly faster than an alternative implementation where a new copy of the game character is spawned for every little change. Moreover, our perception of the real world is inevitably based on mutable objects. When you fill up your car with fuel at the gas station, you perceive it as the same object all along (i.e. its identity is maintained while its state is changing) - not as if the old car with an empty tank got replaced with consecutive new car instances having their tank gradually more and more full.
So whenever we are modeling some real-world domain in a program, it is usually more straightforward and easier to implement the domain model using mutable objects to represent real-world entities. Apart from all these legitimate reasons, alas, the most probable cause of why people keep creating mutable objects is inertia of mind, a.k.a. resistance to change. Note that most developers of today were trained well before immutability (and its containing paradigm, functional programming) became "trendy" in their sphere of influence, and don't keep their knowledge up to date about new tools and methods of our trade - in fact, many of us humans positively resist new ideas and processes. "I have been programming like this for nn years and I don't care about the latest stupid fads!"
Difference between final, finally and finalize

final is used to apply restrictions on classes, methods and variables. A final class can't be inherited, a final method can't be overridden, and a final variable's value can't be changed. finally is used to place important code; it will be executed whether the exception is handled or not. finalize is used to perform clean-up processing just before an object is garbage collected. final is a keyword. finally is a block. finalize is a method.

class FinalExample {
    public static void main(String[] args) {
        final int x = 100;
        x = 200; // Compile Time Error
    }
}

class FinallyExample {
    public static void main(String[] args) {
        try {
            int x = 300;
        } catch (Exception e) {
            System.out.println(e);
        } finally {
            System.out.println("finally block is executed");
        }
    }
}

class FinalizeExample {
    public void finalize() {
        System.out.println("finalize called");
    }

    public static void main(String[] args) {
        FinalizeExample f1 = new FinalizeExample();
        FinalizeExample f2 = new FinalizeExample();
        f1 = null;
        f2 = null;
        System.gc();
    }
}
Callable and Future Example in Java

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/**
 * Simple Java program to demonstrate how to use Callable and Future in
 * Java. You can also use FutureTask for asynchronous processing.
 */
public class CallableAndFuture {

    public static void main(String... args) throws InterruptedException, ExecutionException {
        // creating thread pool to execute task which implements Callable
        ExecutorService es = Executors.newSingleThreadExecutor();

        System.out.println("submitted callable task to calculate factorial of 10");
        Future<Long> result10 = es.submit(new FactorialCalculator(10L));

        System.out.println("submitted callable task to calculate factorial of 15");
        Future<Long> result15 = es.submit(new FactorialCalculator(15L));

        System.out.println("submitted callable task to calculate factorial of 20");
        Future<Long> result20 = es.submit(new FactorialCalculator(20L));

        es.shutdown();

        System.out.println("Calling get method of Future to fetch result of factorial 10");
        Long factorialof10 = result10.get();
        System.out.println("factorial of 10 is : " + factorialof10);

        System.out.println("Calling get method of Future to get result of factorial 15");
        Long factorialof15 = result15.get();
        System.out.println("factorial of 15 is : " + factorialof15);

        System.out.println("Calling get method of Future to get result of factorial 20");
        Long factorialof20 = result20.get();
        System.out.println("factorial of 20 is : " + factorialof20);
    }
}

class FactorialCalculator implements Callable<Long> {

    private final Long number;

    public FactorialCalculator(Long number) {
        this.number = number;
    }

    @Override
    public Long call() throws Exception {
        return factorial(number);
    }

    private long factorial(Long n) throws InterruptedException {
        long result = 1;
        while (n != 0) {
            result = n * result;
            n = n - 1;
            Thread.sleep(100);
        }
        return result;
    }
}

Output
submitted
callable task to calculate factorial of 10
submitted callable task to calculate factorial of 15
submitted callable task to calculate factorial of 20
Calling get method of Future to fetch result of factorial 10
factorial of 10 is : 3628800
Calling get method of Future to get result of factorial 15
factorial of 15 is : 1307674368000
Calling get method of Future to get result of factorial 20
factorial of 20 is : 2432902008176640000

1) Callable is a SAM type interface, so it can be used in lambda expressions.
2) Callable has just one method, call(), which holds all the code that needs to be executed asynchronously.
3) The Runnable interface provides no way to return the result of a computation or throw a checked exception, but with Callable you can both return a value and throw a checked exception.
4) You can use the get() method of Future to retrieve the result once the computation is done. You can check whether the computation is finished by using the isDone() method.
5) You can cancel the computation by using the Future.cancel() method.
6) get() is a blocking call; it blocks until the computation is completed.
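Points 4) to 6) above can be sketched in a few lines. This is a minimal illustration with names of my own choosing, not part of the original example:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureControlDemo {

    // Submit a quick task and a slow task, then exercise the Future
    // methods described above: blocking get(), isDone(), and cancel().
    static boolean demo() {
        ExecutorService es = Executors.newFixedThreadPool(2);
        Future<Integer> quick = es.submit(() -> 6 * 7);
        Future<Integer> slow = es.submit(() -> {
            Thread.sleep(60_000); // stands in for an expensive computation
            return 0;
        });
        try {
            int answer = quick.get();              // blocks until the quick task is done
            boolean done = quick.isDone();         // true once get() has returned
            boolean cancelled = slow.cancel(true); // interrupt the still-running slow task
            return done && cancelled && answer == 42;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            es.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```

cancel(true) interrupts the worker thread, so the 60-second sleep is cut short and the pool shuts down promptly.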
Java PriorityQueue Example

import java.util.PriorityQueue;

class Book implements Comparable<Book> {
    int id;
    String name, author, publisher;
    int quantity;

    public Book(int id, String name, String author, String publisher, int quantity) {
        this.id = id;
        this.name = name;
        this.author = author;
        this.publisher = publisher;
        this.quantity = quantity;
    }

    public int compareTo(Book b) {
        if (id > b.id) {
            return 1;
        } else if (id < b.id) {
            return -1;
        } else {
            return 0;
        }
    }
}

public class PriorityQueueExample {
    public static void main(String[] args) {
        // Creating a priority queue ordered by Book id
        PriorityQueue<Book> queue = new PriorityQueue<>();

        // Creating Books
        Book b1 = new Book(121, "Let us C", "Yashwant Kanetkar", "BPB", 8);
        Book b2 = new Book(233, "Operating System", "Galvin", "Wiley", 6);
        Book b3 = new Book(101, "Data Communications & Networking", "Forouzan", "Mc Graw Hill", 4);
        Book b4 = new Book(121, "Operating System 2", "Galvin", "Wiley", 6);

        // Adding Books to the queue
        queue.add(b1);
        queue.add(b2);
        queue.add(b3);
        queue.add(b4);

        System.out.println("Traversing the queue elements:");
        // Traversing queue elements
        for (Book b : queue) {
            System.out.println(b.id + " " + b.name + " " + b.author + " " + b.publisher + " " + b.quantity);
        }

        queue.remove(); // removes the head, i.e. the book with the smallest id

        System.out.println("After removing one book record:");
        for (Book b : queue) {
            System.out.println(b.id + " " + b.name + " " + b.author + " " + b.publisher + " " + b.quantity);
        }
    }
}

--Output
Traversing the queue elements:
101 Data Communications & Networking Forouzan Mc Graw Hill 4
121 Operating System 2 Galvin Wiley 6
121 Let us C Yashwant Kanetkar BPB 8
233 Operating System Galvin Wiley 6
After removing one book record:
121 Operating System 2 Galvin Wiley 6
233 Operating System Galvin Wiley 6
121 Let us C Yashwant Kanetkar BPB 8
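One caveat about the for-each traversal in the example above: iterating a PriorityQueue walks the backing heap array, not sorted order. Only the head (peek/poll) is guaranteed to be the smallest element, so to consume elements in priority order you drain the queue with poll(). A small sketch with names of my own choosing:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class DrainDemo {

    // A for-each loop over a PriorityQueue visits elements in the heap's
    // internal order. Only the head is guaranteed to be the minimum, so
    // repeatedly calling poll() is the way to get sorted output.
    static List<Integer> drainInOrder(PriorityQueue<Integer> queue) {
        List<Integer> result = new ArrayList<>();
        while (!queue.isEmpty()) {
            result.add(queue.poll()); // always removes the current minimum
        }
        return result;
    }

    public static void main(String[] args) {
        PriorityQueue<Integer> q = new PriorityQueue<>(List.of(233, 121, 101, 121));
        System.out.println(drainInOrder(q)); // [101, 121, 121, 233]
    }
}
```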
What GCs were introduced in Java 7 and 8?

Garbage Collection in earlier versions
Java 1.5
The major enhancement in the Java 1.5 Garbage Collector (GC) is that the default changed from the previous serial collector (-XX:+UseSerialGC) to a parallel collector (-XX:+UseParallelGC). You can override this default by passing the -XX:+UseSerialGC command-line option to the java command. Significant changes were also made to the heap size. Before J2SE 5.0, the default initial heap size was a reasonable minimum, which varies by platform. It has since been increased, and we can override the default maximum heap size using the -Xmx command-line option. The implementation of certain policies for the parallel collector changed to consider three goals:
- a desired maximum GC pause goal
- a desired application throughput goal
- a minimum footprint
The Garbage Collector in Java 1.5 came with specific strategies an application could follow to attain the above goals.

Java 1.6
The major enhancements in Java 1.6 are:

Parallel Compaction Collector: a feature that enables the Garbage Collector to perform collections in parallel, leading to less garbage collection overhead and increased performance for applications which require larger heaps. Platforms with more than two processors are best suited for it.

Two notable GC-related features in Java 1.6:
1) Ergonomics in the 6.0 Java Virtual Machine
2) Concurrent Low Pause Collector: the Concurrent Mark Sweep collector

More at https://javabeat.net/g1-garbage-collector-in-java-7-0/

Java 1.7 - G1 Garbage Collector
Java 1.7 brings a new garbage collection strategy. It is called G1, which is short for Garbage First. It was launched experimentally in Java 1.6 update 14 to replace the regular Concurrent Mark and Sweep garbage collector with increased performance. G1 is considered "server centric", with the following attributes:
- G1 uses the parallelism common in today's hardware. It is designed to make use of all available CPUs, utilizing their processing power to increase the performance and speed of garbage collection.
- The concurrency features of G1 allow Java threads to keep running, minimizing stop-the-world pauses for heap operations.
- Another feature which plays a key role is treating young objects (newly created) and old objects (which have lived for some time) differently. G1 mainly focuses on young objects, as they are usually the most easily reclaimable.
- Heap compaction is done to eliminate fragmentation problems.
- G1 can be more predictable when compared to CMS.

Features of the G1 Garbage Collector:
- A single contiguous heap which is split into same-sized regions, with no physical separation between younger and older regions.
- G1 uses evacuation pauses. Evacuation pauses are done in parallel, using all the available processors.
- G1 uses a pause prediction model to meet user-defined pause time targets.
- Like CMS, G1 periodically performs a concurrent marking phase. Unlike CMS, G1 does not perform a concurrent sweeping phase.
Making a Singleton Thread Safe

Thread Safety in Java Singleton Classes
In general we follow the steps below to create a singleton class:
1) Make the constructor private to prevent any new object creation with the new operator.
2) Declare a private static instance of the same class.
3) Provide a public static method that returns the singleton class instance variable. If the variable is not initialized, initialize it; otherwise simply return the instance variable.

package com.journaldev.designpatterns;

public class ASingleton {

    private static ASingleton instance = null;

    private ASingleton() {
    }

    public static ASingleton getInstance() {
        if (instance == null) {
            instance = new ASingleton();
        }
        return instance;
    }
}
In the above code, the getInstance() method is not thread safe. Multiple threads can access it at the same time, and for the first few threads, while the instance variable is not yet initialized, multiple threads can enter the if block and create multiple instances, breaking our singleton implementation. There are three ways through which we can achieve thread safety.

1) Create the instance variable at the time of class loading.
Pros: thread safety without synchronization; easy to implement.
Cons: early creation of a resource that might never be used by the application; the client application can't pass any arguments, so we can't reuse it. For example, a generic singleton class for a database connection where the client application supplies database server properties.

2) Synchronize the getInstance() method.
Pros: thread safety is guaranteed; the client application can pass parameters; lazy initialization is achieved.
Cons: slow performance because of locking overhead; unnecessary synchronization that is no longer required once the instance variable is initialized.

3) Use a synchronized block inside the if block, with a volatile variable.
Pros: thread safety is guaranteed; the client application can pass arguments; lazy initialization is achieved; synchronization overhead is minimal and applies only to the first few threads, while the variable is null.
Cons: an extra if condition.

Looking at all three ways to achieve thread safety, I think the third is the best option, and in that case the modified class will look like:

package com.journaldev.designpatterns;

public class ASingleton {

    private static volatile ASingleton instance;
    private static final Object mutex = new Object();

    private ASingleton() {
    }

    public static ASingleton getInstance() {
        ASingleton result = instance;
        if (result == null) {
            synchronized (mutex) {
                result = instance;
                if (result == null) {
                    instance = result = new ASingleton();
                }
            }
        }
        return result;
    }
}
Local variable result seems unnecessary. But it’s there to improve performance of our code. In cases where instance is already initialized (most of the time), the volatile field is only accessed once (due to “return result;” instead of “return instance;”). This can improve the method’s overall performance by as much as 25 percent. If you think there are better ways to achieve this or the thread safety is compromised in the above implementation, please comment and share with all of us.
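For comparison, here is a minimal sketch of the first approach from the list above (create the instance at class-load time); the class name is my own, not from the article:

```java
public class EagerSingleton {

    // Created when the class is loaded. The JVM runs static initializers
    // exactly once, under its own internal lock, so this is thread safe
    // without any synchronization in getInstance().
    private static final EagerSingleton INSTANCE = new EagerSingleton();

    private EagerSingleton() {
    }

    public static EagerSingleton getInstance() {
        return INSTANCE; // every caller sees the same, safely published instance
    }
}
```

The trade-off, as noted above, is that the instance is created even if it is never used, and the constructor cannot take caller-supplied arguments.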
LinkedList vs ArrayList
Added on 8 July, 2018

LinkedList and ArrayList are two different implementations of the List interface. LinkedList implements it with a doubly-linked list. ArrayList implements it with a dynamically re-sizing array. As with standard linked list and array operations, the various methods will have different algorithmic runtimes.

For LinkedList:
- get(int index) is O(n) (with n/4 steps on average)
- add(E element) is O(1)
- add(int index, E element) is O(n) (with n/4 steps on average), but O(1) when index = 0 <--- main benefit of LinkedList
- remove(int index) is O(n) (with n/4 steps on average)
- Iterator.remove() is O(1) <--- main benefit of LinkedList
- ListIterator.add(E element) is O(1) <--- main benefit of LinkedList

Note: Many of the operations need n/4 steps on average, a constant number of steps in the best case (e.g. index = 0), and n/2 steps in the worst case (middle of list).

For ArrayList:
- get(int index) is O(1) <--- main benefit of ArrayList
- add(E element) is O(1) amortized, but O(n) worst-case since the array must be resized and copied
- add(int index, E element) is O(n) (with n/2 steps on average)
- remove(int index) is O(n) (with n/2 steps on average)
- Iterator.remove() is O(n) (with n/2 steps on average)
- ListIterator.add(E element) is O(n) (with n/2 steps on average)

Note: Many of the operations need n/2 steps on average, a constant number of steps in the best case (end of list), and n steps in the worst case (start of list).

LinkedList allows for constant-time insertions or removals using iterators, but only sequential access of elements. In other words, you can walk the list forwards or backwards, but finding a position in the list takes time proportional to the size of the list. The Javadoc says "operations that index into the list will traverse the list from the beginning or the end, whichever is closer", so those methods are O(n) (n/4 steps) on average, though O(1) for index = 0.
ArrayList, on the other hand, allows fast random read access, so you can grab any element in constant time. But adding or removing from anywhere but the end requires shifting all the latter elements over, either to make an opening or fill the gap. Also, if you add more elements than the capacity of the underlying array, a new array (1.5 times the size) is allocated, and the old array is copied to the new one, so adding to an ArrayList is O(n) in the worst case but constant on average. So, depending on the operations you intend to do, you should choose the implementation accordingly. Iterating over either kind of List is practically equally cheap. (Iterating over an ArrayList is technically faster, but unless you're doing something really performance-sensitive, you shouldn't worry about this -- both are constant per element.) The main benefit of using a LinkedList arises when you re-use existing iterators to insert and remove elements. These operations can then be done in O(1) by changing the list locally only. In an ArrayList, the remainder of the array needs to be moved (i.e. copied). On the other side, seeking in a LinkedList means following the links in O(n) (n/2 steps) in the worst case, whereas in an ArrayList the desired position can be computed mathematically and accessed in O(1). Another benefit of using a LinkedList arises when you add or remove from the head of the list, since those operations are O(1), while they are O(n) for ArrayList. Note that ArrayDeque may be a good alternative to LinkedList for adding and removing from the head, but it is not a List. Also, if you have large lists, keep in mind that memory usage is also different. Each element of a LinkedList has more overhead, since pointers to the next and previous elements are also stored. ArrayLists don't have this overhead. However, ArrayLists take up as much memory as is allocated for the capacity, regardless of whether elements have actually been added.
The default initial capacity of an ArrayList is pretty small (10 from Java 1.4 - 1.8). But since the underlying implementation is an array, the array must be resized if you add a lot of elements. To avoid the high cost of resizing when you know you're going to add a lot of elements, construct the ArrayList with a higher initial capacity.
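Pre-sizing is a one-line change. A sketch (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class CapacityDemo {

    // Passing the expected element count to the constructor avoids the
    // repeated grow-and-copy cycles described above. Note that capacity
    // is internal bookkeeping only; size() still starts at 0.
    static List<Integer> buildLarge(int n) {
        List<Integer> list = new ArrayList<>(n); // one allocation up front
        for (int i = 0; i < n; i++) {
            list.add(i); // no resizing happens for the first n adds
        }
        return list;
    }

    public static void main(String[] args) {
        System.out.println(buildLarge(1_000_000).size()); // 1000000
    }
}
```

If the list already exists, ensureCapacity(int) achieves the same effect before a bulk insert.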