Threads III : Synchronization - 2020
Without a clear understanding of priorities, it is hard to understand the yield() method. A thread always runs with a priority, usually represented as a number between 1 and 10.
The scheduler uses priority-based scheduling, and many implementations also use some form of time slicing. This does not mean, however, that all JVMs use time slicing: the JVM specification does not require a JVM to implement a time-slicing scheduler, where each thread is allocated a fair amount of time and then sent back to runnable to give another thread a chance.
If a thread enters the runnable state, and it has a higher priority than any of the threads in the pool and a higher priority than the currently running thread, the lower-priority running thread usually will be bumped back to runnable and the highest-priority thread will be chosen to run. In other words, at any given time the currently running thread usually will not have a priority that is lower than any of the threads in the pool. In most cases, the running thread will be of equal or greater priority than the highest priority threads in the pool.
This is as close to a guarantee about the scheduling as we'll get from the JVM specification, so we must not rely on thread priorities to guarantee the correct behavior of our program.
Don't rely on thread priorities when designing your multithreaded application. Because thread-scheduling priority behavior is not guaranteed, use thread priorities as a way to improve the efficiency of your code, but just be sure your program doesn't depend on that behavior for correctness.
Here is code that starts three threads, each with a different priority: a (minimum), b (normal), and c (maximum).
class MyRunnable implements Runnable {
    public static void main(String[] args) {
        MyRunnable r = new MyRunnable();
        Thread a = new Thread(r);
        Thread b = new Thread(r);
        Thread c = new Thread(r);
        a.setName("thread a ");
        b.setName("thread b ");
        c.setName("thread c ");
        a.setPriority(Thread.MIN_PRIORITY);
        b.setPriority(Thread.NORM_PRIORITY);
        c.setPriority(Thread.MAX_PRIORITY);
        a.start();
        b.start();
        c.start();
    }

    public void run() {
        for (int i = 1; i <= 10; i++) {
            System.out.println(Thread.currentThread().getName() + i + " is running");
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
Output from the code is:
thread a 1 is running
thread c 1 is running
thread b 1 is running
thread c 2 is running
thread b 2 is running
thread a 2 is running
thread c 3 is running
thread b 3 is running
thread a 3 is running
thread c 4 is running
thread a 4 is running
thread b 4 is running
thread b 5 is running
thread a 5 is running
thread c 5 is running
thread c 6 is running
thread a 6 is running
thread b 6 is running
thread c 7 is running
thread a 7 is running
thread b 7 is running
thread c 8 is running
thread b 8 is running
thread a 8 is running
thread c 9 is running
thread b 9 is running
thread a 9 is running
thread c 10 is running
thread a 10 is running
thread b 10 is running
Although thread c, with the highest priority, tends to run first in each round, priority clearly does not give us any guarantee about the order in which the threads run.
What yield() is supposed to do is send the currently running thread back to runnable so that other threads of the same priority can get their turn. So the intention is to use yield() to promote turn-taking among equal-priority threads. But the yield() method isn't guaranteed to do what it claims: no guarantee!
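As a quick illustration (the class and thread names below are made up), a minimal sketch of yield() might look like this; even with the calls to Thread.yield(), the interleaving of the two threads is still entirely up to the scheduler:

class YieldDemo implements Runnable {
    public void run() {
        for (int i = 1; i <= 5; i++) {
            System.out.println(Thread.currentThread().getName() + " " + i);
            // Hint to the scheduler: let another runnable thread of the same priority run.
            Thread.yield();
        }
    }

    public static void main(String[] args) {
        YieldDemo r = new YieldDemo();
        new Thread(r, "thread x").start();
        new Thread(r, "thread y").start();
    }
}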
The non-static join() method of class Thread lets one thread join onto the end of another thread. If thread B can't do its work until another thread A has completed its work, then we want thread B to join thread A. This means that thread B will not become runnable until thread A has finished.
The following code takes the currently running thread and joins it to the end of the thread t, blocking the current thread from becoming runnable until the thread t is no longer alive:

Thread t = new Thread();
t.start();
t.join();

In other words, t.join() means "join me (the current thread) to the end of t, so that t must finish before I (the current thread) can run again."
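Here is a minimal sketch (the names JoinDemo, a, and b are made up for illustration) in which thread B joins thread A, so B does its work only after A is no longer alive:

class JoinDemo {
    public static void main(String[] args) {
        Thread a = new Thread(() -> System.out.println("thread A is doing its work"), "A");

        Thread b = new Thread(() -> {
            try {
                a.join();   // thread B blocks here until thread A has finished
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("thread B runs only after A has finished");
        }, "B");

        a.start();
        b.start();          // B starts, but waits inside join() until A is done
    }
}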
Concurrency issues lead to race conditions. Race conditions lead to data corruption. And so on....
It all comes down to one potentially deadly scenario: two or more threads have access to a single object's data or resource. In other words, methods executing on two different stacks are both calling getters or setters on a single object on the heap.
Here is a brief description from Wikipedia:
Because computations in a concurrent system can interact with each other while they are executing, the number of possible execution paths in the system can be extremely large, and the resulting outcome can be indeterminate. Concurrent use of shared resources can be a source of indeterminacy leading to issues such as deadlock and starvation.
The design of concurrent systems often entails finding reliable techniques for coordinating their execution, data exchange, memory allocation, and execution scheduling to minimize response time and maximize throughput.
Here is the well-known bank account example. The two main characters of John Steinbeck's novel "The Pearl" share an account. They were puzzled to find that the balance had gone negative even though they checked the balance before making any withdrawal. So, what happened?
public class AsyncAccount implements Runnable {
    class Account {
        private int balance = 100;

        public int getBalance() {
            return balance;
        }

        public void withdraw(int amount) {
            balance -= amount;
        }
    }

    private Account ac = new Account();

    public static void main(String[] args) {
        AsyncAccount r = new AsyncAccount();
        Thread kino = new Thread(r);
        Thread juana = new Thread(r);
        kino.setName("Kino");
        juana.setName("Juana");
        kino.start();
        juana.start();
    }

    public void run() {
        for (int i = 1; i < 5; i++) {
            makeWithdrawal(20);
            if (ac.getBalance() < 0) {
                System.out.println("Overdrawn!");
            }
        }
    }

    private void makeWithdrawal(int amount) {
        if (ac.getBalance() >= amount) {
            System.out.println(Thread.currentThread().getName() + " is about to withdraw");
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            ac.withdraw(amount);
            System.out.println(Thread.currentThread().getName()
                    + " completes the withdrawal: " + ac.getBalance());
        } else {
            System.out.println(Thread.currentThread().getName()
                    + ", sorry, not enough fund:"
                    + " your current balance is " + ac.getBalance());
        }
    }
}
The output is:
1. Kino is about to withdraw
2. Juana is about to withdraw
3. Juana completes the withdrawal: 80
4. Kino completes the withdrawal: 60
5. Juana is about to withdraw
6. Kino is about to withdraw
7. Kino completes the withdrawal: 40
8. Kino is about to withdraw
9. Juana completes the withdrawal: 20
10. Juana is about to withdraw
11. Juana completes the withdrawal: 0
12. Kino completes the withdrawal: -20
13. Overdrawn!
14. Kino, sorry, not enough fund: your current balance is -20
15. Overdrawn!
16. Overdrawn!
17. Juana, sorry, not enough fund: your current balance is -20
18. Overdrawn!
On line 8, Kino checked the balance, found that he could safely withdraw, and then fell asleep. Meanwhile, Juana withdrew the money and they had no balance left in their account. Then Kino withdrew 20, thinking he still had money because he had checked the balance before he fell asleep. So, after all their efforts, they failed. Why?
The account is now overdrawn by 20.
This problem is known as a race condition: multiple threads can access the same resource and can produce corrupted data when one thread races in before an operation that is supposed to be atomic has completed.
The solution is that we must guarantee that the two steps of the withdrawal, checking the balance and making the withdrawal, are never split apart. The two steps need to be performed as one operation; in other words, they must be an atomic operation. That means no other thread should be able to act on the same data until the operation has completed.
The issue here is that we can't guarantee that a single thread will stay running throughout the entire atomic operation.
However, we can guarantee that even if the thread running the atomic operation moves in and out of the running state, no other running thread will be able to act on the same data. In other words, if Kino falls asleep after checking the balance, we can stop Juana from checking the balance until after Kino wakes up and completes his withdrawal. The best solution is to:
- Mark the variables private, and
- Synchronize the code that modifies the variables.
Here is the new code for the makeWithdrawal() method; we use the synchronized keyword to modify the method:
private synchronized void makeWithdrawal(int amount) {
    if (ac.getBalance() >= amount) {
        System.out.println(Thread.currentThread().getName() + " is about to withdraw");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        ac.withdraw(amount);
        System.out.println(Thread.currentThread().getName()
                + " completes the withdrawal: " + ac.getBalance());
    } else {
        System.out.println(Thread.currentThread().getName()
                + ", sorry, not enough fund:"
                + " your current balance is " + ac.getBalance());
    }
}
With the change, we have a new output: no more overdrafts!
Kino is about to withdraw
Kino completes the withdrawal: 80
Kino is about to withdraw
Kino completes the withdrawal: 60
Kino is about to withdraw
Kino completes the withdrawal: 40
Kino is about to withdraw
Kino completes the withdrawal: 20
Juana is about to withdraw
Juana completes the withdrawal: 0
Juana, sorry, not enough fund: your current balance is 0
Juana, sorry, not enough fund: your current balance is 0
Juana, sorry, not enough fund: your current balance is 0
Now we've guaranteed that once a thread starts the withdrawal process, the other thread cannot enter makeWithdrawal() until the first one completes the process.
That's how we protect the bank account. We don't put a lock on the bank account itself; we lock the method that does the banking transaction. That way, one thread gets to complete the whole transaction, start to finish, even if that thread falls asleep in the middle of the method.
So if we don't lock the bank account, then what exactly is locked?
Every object has a lock, and most of the time it's unlocked. The lock only comes into play when the object has synchronized method code. When we enter a non-static synchronized method, we automatically acquire the lock associated with the current instance of the class whose code we're executing (the this instance). Acquiring a lock for an object is also known as getting the lock, locking the object, locking on the object, or synchronizing on the object.
The locks are not per method; they are per object. If one thread has the lock, no other thread can acquire it until the first thread releases it. This means no other thread can enter any synchronized method of that object until the lock has been returned. Releasing a lock usually means the thread holding the lock exits the synchronized method; at that point, the lock is free until some other thread enters a synchronized method on that object.
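To make the "one lock per object" point concrete, here is a small sketch (the class and method names are made up): two threads calling different synchronized methods on the same instance contend for the same lock, while a call on a different instance is not affected.

class LockDemo {
    public synchronized void first() {
        System.out.println(Thread.currentThread().getName() + " entered first()");
        try {
            Thread.sleep(1000);   // hold this object's lock for a while
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public synchronized void second() {
        System.out.println(Thread.currentThread().getName() + " entered second()");
    }

    public static void main(String[] args) {
        LockDemo shared = new LockDemo();
        // t1 and t2 use the SAME object, so whichever thread acquires the lock first
        // makes the other wait, even though they call different synchronized methods.
        new Thread(() -> shared.first(), "t1").start();
        new Thread(() -> shared.second(), "t2").start();
        // t3 uses a DIFFERENT object, which has its own lock, so it is not blocked by t1 or t2.
        new Thread(() -> new LockDemo().second(), "t3").start();
    }
}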
Here are some key points on locks and synchronization:
- Only methods can be synchronized, not variables or classes.
- Each object has just one lock.
- Not all methods in a class need to be synchronized.
- Once a thread acquires the lock on an object, no other thread can enter any of the synchronized methods in that class for that object.
- If a thread goes to sleep, it holds any locks it has and it doesn't release them.
- We can synchronize a block of code rather than a whole method, as shown in the sketch below.
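For instance, makeWithdrawal() from the example above could synchronize just a block instead of marking the whole method synchronized. This is only a sketch, and locking on the shared Account object ac rather than on this is a design choice; the point is that the lock is held only for the section of code that must be atomic:

private void makeWithdrawal(int amount) {
    synchronized (ac) {   // hold the Account object's lock only for the check-then-withdraw steps
        if (ac.getBalance() >= amount) {
            System.out.println(Thread.currentThread().getName() + " is about to withdraw");
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            ac.withdraw(amount);
            System.out.println(Thread.currentThread().getName()
                    + " completes the withdrawal: " + ac.getBalance());
        } else {
            System.out.println(Thread.currentThread().getName()
                    + ", sorry, not enough fund: your current balance is " + ac.getBalance());
        }
    }
}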
There are several different ways to partition behavior in a concurrent application. To discuss them, we need some basic definitions.
- Deadlock: We should be careful when we synchronize our code, because nothing will bring our code to its knees like thread deadlock. Thread deadlock happens when we have two threads, each holding a lock the other thread wants. There's no way out of this scenario, so the two threads will simply sit and wait. And wait. And wait... (a sketch of this situation appears after this list).
- Bound Resources: Resources of a fixed size or number used in a concurrent environment. Examples include database connections and fixed-size read/write buffers.
- Mutual Exclusion: Only one thread can access shared data or a shared resource at a time.
- Starvation: One thread or a group of threads is prohibited from proceeding for an excessively long time or forever. For example, always letting fast-running threads through first could starve out longer-running threads if there is no end to the fast-running threads.
- Livelock: Threads in lockstep, each trying to do work but finding another "in the way." Due to resonance, threads continue trying to make progress but are unable to for an excessively long time or forever.
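As a concrete (and deliberately broken) sketch of deadlock, the hypothetical program below has each thread grab one lock and then wait for the lock the other thread is holding; with the sleep in place it will almost always hang forever. The usual cure is to make every thread acquire the locks in the same order.

class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                System.out.println("t1 holds lockA, waiting for lockB");
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                synchronized (lockB) {   // t1 blocks here: t2 already holds lockB
                    System.out.println("t1 holds both locks");
                }
            }
        }, "t1").start();

        new Thread(() -> {
            synchronized (lockB) {
                System.out.println("t2 holds lockB, waiting for lockA");
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                synchronized (lockA) {   // t2 blocks here: t1 already holds lockA
                    System.out.println("t2 holds both locks");
                }
            }
        }, "t2").start();
    }
}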
When a class has been carefully synchronized to protect its data, we say the class is "thread-safe." Many classes in the Java APIs already use synchronization internally in order to make the class "thread-safe." For example, StringBuffer and java.util.Vector synchronize their methods, and Collections.synchronizedList() wraps an ordinary List in a thread-safe one.
However, even when a class is "thread-safe," it is often dangerous to rely on these classes to provide the thread protection we need.
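As an illustration (the class and method names below are made up), Collections.synchronizedList() makes each individual call synchronized, but a check-then-act sequence across two calls is still not atomic, so we need our own lock around the compound operation:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class SafeListDemo {
    private static final List<String> names = Collections.synchronizedList(new ArrayList<>());

    public static void addIfAbsent(String name) {
        // Without this block, two threads could both pass the contains() check
        // before either add() runs, and the name would end up in the list twice.
        synchronized (names) {
            if (!names.contains(name)) {
                names.add(name);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> addIfAbsent("Kino"));
        Thread t2 = new Thread(() -> addIfAbsent("Kino"));
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(names);   // prints [Kino] exactly once
    }
}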