Commit Graph

18 Commits

Author SHA1 Message Date
kleines Filmröllchen
d0e614c045 Kernel: Don't check that interrupts are enabled during early boot
The interrupts-enabled check in the Kernel mutex is there so that we
don't lock mutexes while holding a spinlock, because mutexes reenable
interrupts, and that will mess up the spinlock in more ways than one if
the thread moves processors. This check is guarded behind a debug flag
because it's too hard to fix all the problems at once, but we regressed
and weren't even getting to init stage 2 with it enabled. With this
commit, we get to stage 2 again. In early boot, no interrupts are
enabled and no spinlocks are in use, so we can more or less safely
ignore the interrupt state. There might be a better solution with
another boot state flag that checks whether the APs are up (because
they have interrupts enabled from the start), but that seems like
overkill.
2022-07-19 12:12:13 +01:00
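
A minimal sketch of the bypass described above, assuming a hypothetical
g_in_early_boot flag and a stubbed interrupts_enabled() predicate; the
kernel's actual flag names and lock path differ:

    // Sketch only: skip the interrupts-enabled sanity check while we are still
    // in early boot, where no spinlocks are held and no APs are running yet.
    #include <cassert>

    static bool g_in_early_boot = true; // would be cleared as boot progresses (assumption)

    static bool interrupts_enabled() { return true; } // stub; real code reads the CPU flags

    void mutex_lock_sanity_check()
    {
        if (g_in_early_boot)
            return; // nothing to check yet: interrupts are off and no spinlocks exist

        // Past early boot, locking a mutex with interrupts disabled usually means
        // we are inside a spinlock, which is exactly what the check should catch.
        assert(interrupts_enabled());
    }
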
Jelle Raaijmakers
14fc05e912 Kernel: Verify mutex big lock behavior
These two methods are big-lock specific, so verify our mutex's behavior.
2022-04-09 15:55:20 +02:00
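
One way such a verification could look; a sketch assuming the
MutexBehavior::BigLock distinction from the commit below, with assert()
standing in for the kernel's VERIFY():

    // Sketch: big-lock-only entry points assert they really run on the big lock.
    #include <cassert>

    enum class MutexBehavior { Regular, BigLock };

    class Mutex {
    public:
        explicit Mutex(MutexBehavior behavior = MutexBehavior::Regular)
            : m_behavior(behavior)
        {
        }

        void restore_exclusive_lock() // big-lock specific (illustrative)
        {
            assert(m_behavior == MutexBehavior::BigLock);
            // ... re-take the lock with the previously saved lock count ...
        }

    private:
        MutexBehavior m_behavior;
    };
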
Jelle Raaijmakers
bb02e9a7b9 Kernel: Unblock big lock waiters correctly
If the regular exclusive and shared lists were empty (which they
always should be for the big lock), we were not unblocking any waiters.
2022-04-09 15:55:20 +02:00
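
A sketch of the unblocking side, with plain containers standing in for the
kernel's intrusive thread lists; the point is that the big lock's dedicated
waiter list must be consulted as well, since the regular lists stay empty
for it:

    // Sketch: pick the next waiter to unblock, also looking at the big-lock list.
    #include <deque>

    struct Thread {
        void unblock() { /* would wake the thread in real code */ }
    };

    struct BlockedThreadLists {
        std::deque<Thread*> exclusive;
        std::deque<Thread*> shared;
        std::deque<Thread*> exclusive_big_lock; // big-lock waiters end up here
    };

    Thread* take_next_waiter(BlockedThreadLists& lists)
    {
        for (auto* list : { &lists.exclusive, &lists.shared, &lists.exclusive_big_lock }) {
            if (!list->empty()) {
                auto* thread = list->front();
                list->pop_front();
                return thread; // caller would unblock() this thread
            }
        }
        return nullptr; // nobody is waiting on this mutex
    }
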
Jelle Raaijmakers
7826729ab2 Kernel: Track big lock blocked threads in separate list
When we lock a mutex, eventually `Thread::block` is invoked which could
in turn invoke `Process::big_lock().restore_exclusive_lock()`. This
would then try to add the current thread to a different blocked thread
list than the one in use for the original mutex being locked, and
because it's an intrusive list, the thread is removed from its original
list during the `.append()`. When the original mutex eventually
unblocks, we no longer have the thread in the intrusive blocked threads
list and we panic.

Solve this by making the big lock mutex special and giving it its own
blocked thread list. Because the process big lock is temporary and is
being actively removed from e.g. syscalls, it's a matter of time before
we can also remove the fix introduced by this commit.

Fixes issue #9401.
2022-04-06 18:27:19 +02:00
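
A sketch of the data-structure side of this fix; std::deque stands in for the
kernel's intrusive lists, and the member and method names are illustrative:

    // Sketch: a thread may only sit in one intrusive blocked-thread list at a
    // time, so the big lock tracks its waiters in a dedicated list instead of
    // reusing the regular exclusive list.
    #include <deque>

    struct Thread {};

    enum class MutexBehavior { Regular, BigLock };

    class Mutex {
    public:
        explicit Mutex(MutexBehavior behavior)
            : m_behavior(behavior)
        {
        }

        void append_blocked_thread(Thread& thread)
        {
            // Appending to the big lock's own list means restore_exclusive_lock()
            // can no longer yank the thread out of the list of the mutex it is
            // actually blocking on.
            blocked_list().push_back(&thread);
        }

    private:
        std::deque<Thread*>& blocked_list()
        {
            return m_behavior == MutexBehavior::BigLock
                ? m_blocked_threads_big_lock
                : m_blocked_threads_exclusive;
        }

        MutexBehavior m_behavior;
        std::deque<Thread*> m_blocked_threads_exclusive;
        std::deque<Thread*> m_blocked_threads_big_lock;
    };
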
Andreas Kling
b28beb691e Kernel: Protect Mutex's thread lists with a spinlock 2022-04-05 14:44:50 +02:00
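
A sketch of that pattern, with std::mutex and std::lock_guard standing in for
the kernel's Spinlock and SpinlockLocker:

    // Sketch: every access to the blocked-thread lists goes through one lock so
    // that concurrent lock/unlock paths on different CPUs cannot corrupt them.
    #include <deque>
    #include <mutex>

    struct Thread {};

    class Mutex {
    public:
        void append_blocked_thread(Thread& thread)
        {
            std::lock_guard guard(m_blocked_thread_lists_lock); // SpinlockLocker in the kernel
            m_blocked_threads.push_back(&thread);
        }

    private:
        std::mutex m_blocked_thread_lists_lock; // Spinlock in the kernel
        std::deque<Thread*> m_blocked_threads;
    };
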
Andreas Kling
b0e5406ae2 Kernel: Update terminology around Thread's "blocking mutex"
It's more accurate to say that we're blocking on a mutex, rather than
blocking on a lock. The previous terminology made sense when this code
was using something called Kernel::Lock, but since it was renamed to
Kernel::Mutex, this update brings the language back in sync.
2022-01-30 16:21:59 +01:00
Idan Horowitz
e28af4a2fc Kernel: Stop using HashMap in Mutex
This commit removes the usage of HashMap in Mutex, thereby making
Mutex allocation-free.

In order to achieve this, several simplifications were made to Mutex,
removing unused code paths and extra VERIFYs:
 * We no longer support 'upgrading' a shared lock holder to an
   exclusive holder when it is the only shared holder and it did not
   unlock the lock before relocking it as exclusive. NOTE: Unlike the
   rest of these changes, this scenario is not VERIFY-able in an
   allocation-free way; as a result, the new LOCK_SHARED_UPGRADE_DEBUG
   debug flag was added. This flag lets Mutex allocate in order to
   detect such cases when debugging a deadlock.
 * We no longer support checking if a Mutex is locked by the current
   thread when the Mutex was not locked exclusively; the shared version
   of this check was not used anywhere.
 * We no longer support force unlocking/relocking a Mutex if the Mutex
   was not locked exclusively; the shared version of these functions
   was not used anywhere.
2022-01-29 16:45:39 +01:00
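
A sketch of the first point above: a plain counter replaces the per-thread
HashMap, and the map only comes back behind LOCK_SHARED_UPGRADE_DEBUG. The
flag name is from the message; the member names are illustrative:

    // Sketch: shared holders become a simple count (no allocation); the
    // per-thread map is only kept when LOCK_SHARED_UPGRADE_DEBUG is set,
    // to diagnose deadlocks.
    #include <cstddef>
    #if LOCK_SHARED_UPGRADE_DEBUG
    #    include <unordered_map>
    #endif

    struct Thread {};

    class Mutex {
    public:
        void did_lock_shared(Thread& thread)
        {
            ++m_shared_holder_count; // allocation-free bookkeeping
    #if LOCK_SHARED_UPGRADE_DEBUG
            ++m_shared_holders[&thread]; // may allocate, but only while debugging
    #endif
            (void)thread;
        }

    private:
        std::size_t m_shared_holder_count { 0 };
    #if LOCK_SHARED_UPGRADE_DEBUG
        std::unordered_map<Thread*, unsigned> m_shared_holders;
    #endif
    };
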
Hendiadyoin1
e5cf395a54 Kernel: Collapse blocking logic for exclusive Mutex' restore_lock()
Clang-tidy pointed out that the `need_to_block = true;` block was
duplicated, and if we collapse these if statements, we should do so
fully.
2021-12-15 23:34:11 -08:00
Hendiadyoin1
1ad4a190b5 Kernel: Add implied auto-specifiers in Locking
As per clang-tidy.
2021-12-15 23:34:11 -08:00
Brian Gianforcaro
bb58a4d943 Kernel: Make all Spinlocks use u8 for storage, remove template
The default template argument is only used in one place, and it
looks like it was probably just an oversight. The rest of the Kernel
code all uses u8 as the type. So let's make that the default and remove
the unused template argument, as there doesn't seem to be a reason to
allow the size to be customizable.
2021-09-05 20:46:02 +02:00
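
A sketch of a spinlock whose storage is always a u8, with std::atomic standing
in for AK::Atomic; the kernel's Spinlock additionally saves and restores the
interrupt flag, which is omitted here:

    // Sketch: the lock word is a fixed u8; there is no template parameter to
    // pick a different storage size.
    #include <atomic>
    #include <cstdint>

    class Spinlock {
    public:
        void lock()
        {
            // Spin until we are the one who flips the byte from 0 to 1.
            while (m_lock.exchange(1, std::memory_order_acquire) != 0) {
                // busy-wait; the kernel would hint the CPU (pause) here
            }
        }

        void unlock()
        {
            m_lock.store(0, std::memory_order_release);
        }

    private:
        std::atomic<std::uint8_t> m_lock { 0 };
    };
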
Andrew Kaster
72de228695 Kernel: Verify interrupts are enabled when interacting with Mutexes
This should help prevent deadlocks where a thread blocks on a Mutex
while interrupts are disabled, making it impossible for the holder of
the Mutex to make forward progress because it cannot be scheduled in.

Hide it behind a new debug macro LOCK_IN_CRITICAL_DEBUG for now, because
Ext2FS takes a series of Mutexes from the page fault handler, which
executes with interrupts disabled.
2021-08-28 20:53:38 +02:00
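
A sketch of the guarded check, with assert() standing in for VERIFY(); the
LOCK_IN_CRITICAL_DEBUG name comes from the message above, the predicate is a
stub:

    // Sketch: behind LOCK_IN_CRITICAL_DEBUG, refuse to block on a Mutex while
    // interrupts are off, since the holder could then never be scheduled in to
    // release it.
    #include <cassert>

    static bool interrupts_enabled() { return true; } // stub; real code reads the CPU flags

    void mutex_block_sanity_check()
    {
    #if LOCK_IN_CRITICAL_DEBUG
        assert(interrupts_enabled());
    #endif
    }
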
Andreas Kling
d60635cb9d Kernel: Convert Processor::in_irq() to static current_in_irq()
This closes the race window where a context switch could occur between
the call to Processor::current() and the subsequent in_irq() call.
2021-08-23 00:02:09 +02:00
Andreas Kling
c922a7da09 Kernel: Rename ScopedSpinlock => SpinlockLocker
This matches MutexLocker, and doesn't sound like it's a lock itself.
2021-08-22 03:34:10 +02:00
Andreas Kling
55adace359 Kernel: Rename SpinLock => Spinlock 2021-08-22 03:34:10 +02:00
Brian Gianforcaro
464dc42640 Kernel: Convert lock debug APIs to east const 2021-08-13 20:42:39 +02:00
Brian Gianforcaro
bea74f4b77 Kernel: Reduce LOCK_DEBUG ifdefs by utilizing Kernel::LockLocation
The LOCK_DEBUG conditional code is pretty ugly for a feature that we
only use rarely. We can remove a significant amount of this code by
utilizing a zero sized fake type when not building in LOCK_DEBUG mode.

This lets us keep the same API, while letting the compiler optimize it
away when we don't actually care about the location the caller came from.
2021-08-13 20:42:39 +02:00
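
A sketch of the zero-sized-type trick, with std::source_location standing in
for the kernel's SourceLocation; the LockLocation name comes from the message
above:

    // Sketch: LockLocation is a real source location in LOCK_DEBUG builds and
    // an empty struct otherwise, so callers keep one signature and the compiler
    // can drop the parameter entirely in release builds.
    #if LOCK_DEBUG
    #    include <source_location>
    using LockLocation = std::source_location;
    #else
    struct LockLocation {
        static constexpr LockLocation current() { return {}; }
    };
    #endif

    void lock(LockLocation location = LockLocation::current())
    {
        // ... take the lock; 'location' only carries data when LOCK_DEBUG is set ...
        (void)location;
    }
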
Jean-Baptiste Boric
2c3b0baf76 Kernel: Move SpinLock.h into Locking/ 2021-08-07 11:48:00 +02:00
Jean-Baptiste Boric
f7f794e74a Kernel: Move Mutex into Locking/ 2021-08-07 11:48:00 +02:00