# Giant Locked?



## unitrunker (Aug 7, 2021)

What does "giant locked" mean in dmesg?

Example:



> WARNING: Device "psm" is Giant locked and may be deleted before FreeBSD 14.0.



If the device gets deleted, what - if anything - replaces it?


----------



## Tieks (Aug 7, 2021)

> What does "giant locked" mean in dmesg?



It's a single lock over the whole kernel, used to serialize access to kernel data structures in a multi-CPU environment. If it gets deleted in a future version, I suppose there will be a replacement that provides more fine-grained locking.


----------



## mer (Aug 7, 2021)

Tieks gave a good concise summary of what it is.
Pretty much all operating systems start out with a single lock on syscalls: a userland program making a call into the kernel has to acquire the lock, data is copied into kernel space, and the lock is released.  The lock is then taken again when data needs to go the other way.
As hardware made it easier to run multiple threads in the kernel, the "BKL" (Big Kernel Lock) became the biggest obstacle to performance, so developers started moving locks inward and downward so they lock the smallest needed bit of code or data (finer grained).  Sun did a lot of work in Solaris to get to a "fully preemptible kernel".  It's a non-trivial exercise: the more locks that need to be taken on a given execution path, the greater the probability of deadlock.

FreeBSD has very few things still under Giant, psm and kbd are two that come immediately to mind.  I think work in 14-CURRENT is getting rid of more of them.  Just like "vt" replaced "sc", there will be something that replaces the psm and kbd devices when they are ready.  May be the same name, may be different.


----------



## Phishfry (Nov 12, 2021)

I see the same message for the speaker module: Giant locked, to be removed in FreeBSD 14.

What will be a replacement for speaker? I use it for a startup tune.








> Beep on successful bootup (forums.freebsd.org)
>
> I have several headless machines and I would like to have a beep "song" play on successful bootup. Just like pfSense. On my APUs it beeps on BIOS POST but I want more.  What is recommended? Do I need to make an rc.d daemon that starts last?  Any tutorials or similar threads? I can't find...


----------



## ct85711 (Nov 12, 2021)

One thing to keep in mind: the version the message says it will be removed in is a goal, not a guarantee. If the goal isn't met, the message will simply be updated to name the next version.


----------



## Andriy (Nov 12, 2021)

Don't worry about psm (or kbd), it won't get removed.

The warning is about any use of the Giant lock. In some cases that warning makes sense, but in others it is just noise (or rather a reminder to developers to get rid of the Giant in a driver).


----------



## grahamperrin@ (Nov 13, 2021)

> Please explain Giant-locked (forums.freebsd.org)
>
> I've been wondering for some time now just what are the entries in dmesg followed by [GIANT-LOCKED], [ITHREAD], [FILTER]...  On my server, FreeBSD 8.0-RELEASE-p2, CPU P4 3ghz, 4gb memory, X, fluxbox... In particular, the [GIANT-LOCKED] entries refer to nvidia0, atkbd0, psm0.




There are more recent explanations/contexts but I don't have the links handy. Oh, this in my browser history:






> Giant lock - Wikipedia (en.wikipedia.org)




Also <https://books.google.co.uk/books?id=vebgS-r9fP8C&q=giant+lock#v=snippet&q=giant lock&f=false>


----------



## dr3mro (Mar 19, 2022)

Very informative, thank you all.
I got the same warning and panicked, thinking my device would become unsupported or some hardware would stop working. But this sounds great: it seems these changes will make FreeBSD more responsive than ever.


----------



## gpw928 (Mar 20, 2022)

Here is something I wrote about GIANT, some time ago.

With multi-CPU systems, any CPU might be modifying any kernel data structure at any time.  Hence read-modify-write actions on a single memory location may get tangled when multiple CPUs act simultaneously.  So a sophisticated set of locking mechanisms is required to arbitrate write access to the kernel's data structures.

However, back in the days of Unix on the PDP11 and VAX, each system had just one CPU.  This was quite "normal" for its time, comparable, for example, to a car having just one engine.  [Unlike electric cars today.]

The only way a process could lose its exclusive control of the CPU was:

- if it got usurped by a hardware interrupt, in which case the CPU was stolen away to run the interrupt service routine of the hardware device that caused the interrupt; or
- if a system call became blocked, usually waiting on some future event (e.g. I/O completion), in which case arrangements would be made for a wakeup call (see WCHAN in ps(1)), and the CPU was surrendered by calling the scheduler.

In the case of an interrupt, the process would usually get control of the CPU back at the exact point where the interrupt happened, except when the clock caused the interrupt, in which case the scheduler might be called to perform a context switch (if the incumbent's time slice had expired).

Mutual exclusion was implemented by raising the priority of the CPU to lock out (mask) devices from interrupting during critical sections of code.  With only one CPU present, and its priority set so high that no similar device could interrupt, exclusive access to the device's kernel data structures could be guaranteed.  [The real time clock was a special case exception to that rule -- see the footnote.]

The CPU priorities were based on the PDP11 Bus Request (BR) levels.

The real time clock interrupted at a CPU priority of 6.

The ttys, tapes, and disks, and most other devices interrupted at a CPU priority of 5.

The line printer interrupted at a CPU priority of 4.

CPU priority 3 was used by BSD (but not USG) kernels for "software interrupts" (higher priority than user processes, but lower priority than any "hardware interrupt"), but they complicate the story unnecessarily, and can be examined separately.

Once the CPU priority was set, all device interrupts at or below the set level were masked (delayed) until the CPU priority was dropped to below the BR level of the interrupting device(s).  For devices interrupting at the same BR level, interrupts were serviced one at a time, and priority was determined by electrical proximity of the device on the bus.

The kernel can be entered in two distinctly different ways.  Because of this, the terms "top half" of the kernel (entry by system call) and "bottom half" of the kernel (entry by interrupt) were coined.

Access to the data structures modified asynchronously by the interrupt service routines in the "bottom half" had to be arbitrated for mutual exclusion, because the "top half" also needed to access and modify them.

Interrupt service routines always ran at a CPU priority appropriate to the device being serviced (e.g. BR5 for a disk).  This meant that no other interrupt could happen for the same, or similar, device while the interrupt service routine was running.  So the "bottom half" didn't need to take any extra measures to arbitrate access to data structures.  It had control of the only CPU at a priority that (practically) guaranteed exclusive access.  [But see the note on the real time clock at BR6 below.]

On the other hand, a user process runs at CPU priority zero.  A user process dropping into a system call in the "top half", will generally remain at CPU priority zero.  The kernel system call routines were written in the sure knowledge that there was only one CPU, so no two system calls could be executed at the same time -- and concurrence was not an issue, *except* for a hardware interrupt occurring at a priority greater than that at which the system was currently running.

Since system calls generally ran at CPU priority zero, they had to be aware that a device could interrupt at any time, and demand to have its interrupt service routine executed.

The code in device drivers that implemented system calls for the "top half" (open, close, read, write, and ioctl) used to be festooned with critical code sections protected from interrupts.  They looked like:
```
spl5();  /* mask all interrupts for similar devices */
fiddle_some_data_structure_modified_by_the_bottom_half();
spl0();  /* allow all interrupts */
```
Thus mutual exclusion was managed by setting the CPU priority to a level that blocked out all other potential access to data structures shared by the top and bottom halves for any given type of device (BR5 in the example above).

The GIANT lock was introduced to prevent more than one CPU entering the kernel at any one time.  So, in a multi-CPU system, you could have many CPUs executing code in user processes, but only one operating inside the kernel.

That allowed the original assumptions regarding concurrency on single-CPU systems to continue to hold on multi-CPU systems -- allowing traditional kernels to be quickly and easily adapted to multi-CPU hardware.

The task of kernel developers since the introduction of GIANT (circa 1998) has been to completely re-engineer concurrency locking -- allowing multiple CPUs to operate in both the top and bottom half of the kernel simultaneously.  The cute thing is that GIANT can be selectively retained, while work continues implementing the new locking mechanisms on a device-by-device basis.

NOTE: The real time clock is a special case.  At BR6, it needed to be aware that it could potentially be interrupting a critical section of code running at elevated priority (e.g. BR5).  To implement "concurrence protection", the clock's interrupt service routine was curtailed if the CPU priority was non-zero at the time the clock interrupted.  Basically, it didn't touch shared "bottom half" data structures unless it was completely sure that nothing important had been interrupted.


----------

