# Nvidia Optimus Driver for FreeBSD



## pouya-eghbali (Jul 17, 2019)

Hi, I made a modified version of x11/nvidia-driver that works with Optimus laptops/devices (muxless). It provides an `optimus` service (which manages the extra X server we need, the Nvidia kernel modules, configs, etc.) and an `optirun` command to run programs on the Nvidia GPU:






Right now I have only been able to test it on my own laptop (an ASUS G56JK: muxless Haswell + GTX 850M, running FreeBSD 12); Nvidia driver versions 390 and 430 both work perfectly. If anyone is interested in testing and/or helping, feel free to leave a message or contact me. You can find the Makefile and binary releases here: https://github.com/pouya-eghbali/freebsd-nvidia-optimus

If anyone has any questions or needs help to make this work on their devices, I'll be glad to help. If you try this and it works for you, sharing your hardware info is appreciated and can probably help others. Please note this is a pre-release.
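For reference, a typical session with the service and `optirun` might look like the sketch below. The service and command names come from this post; the rc.conf knob name and the exact invocations are my assumptions about how the port wires things up:

```shell
# enable the optimus service at boot, then start it now;
# the optimus_enable knob name is an assumption based on the service name
sysrc optimus_enable="YES"
service optimus start

# run a program on the Nvidia GPU and confirm which renderer is active
optirun glxinfo | grep "OpenGL renderer"
```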


----------



## SirDice (Jul 17, 2019)

pouya-eghbali said:


> If anyone has any questions or needs help to make this work on their devices, I'll be glad to help.


Have a look through https://bugs.freebsd.org/bugzilla/ for open PRs regarding this. I'm fairly certain there are one or more open cases about this. If you can't find an open case, open one yourself. Help out there and we all benefit. You do need to register but anyone can respond, provide patches or help out in any other way.


----------



## pouya-eghbali (Jul 17, 2019)

SirDice said:


> Have a look through https://bugs.freebsd.org/bugzilla/ for open PRs regarding this. I'm fairly certain there are one or more open cases about this.



I'm aware of an open PR that has been a _work in progress_ since 2014. That's a long time, and I suspect the patches provided there no longer work; also, one of the solutions there makes the Intel GPU unusable while something is running on the Nvidia GPU. Given all this, I decided to start my own version. Another issue is the version of the Nvidia driver: there are open PRs (which I used to patch my Makefile and make it v430-ready), but they were posted a long time ago and still haven't been merged.

I will definitely respond to these PRs and provide my patches so at least people can study them.


----------



## SirDice (Jul 17, 2019)

pouya-eghbali said:


> I will definitely respond to these PRs and provide my patches so at least people can study them.


Yes, please do. And yes, leave the really old ones; they're likely too old to be useful. The bug tracking system we have isn't perfect, and sometimes things get a little cluttered.


----------



## shkhln (Jul 17, 2019)

pouya-eghbali, have you seen https://github.com/therontarigo/freebsd-gpu-headless?



pouya-eghbali said:


> Another issue is the version of the nvidia driver, there are open PRs (which I used to patch my Makefile and make it v430 ready) but it's been a long time since they're posted and they're not merged yet.



Looks like the issue is still below the maintainer's annoyance threshold. Just write in that PR that the patch works for you and that you wouldn't mind seeing it merged.


----------



## pouya-eghbali (Jul 18, 2019)

shkhln said:


> have you seen https://github.com/therontarigo/freebsd-gpu-headless



No, I didn't know of it. I reviewed what they have done, and it's very similar to what I did (this is basically how Bumblebee works on Linux). I searched a lot to see if there was an easy solution to this but couldn't find one; surprisingly, my empty 'freebsd nvidia' repository is result #4 if you google 'freebsd nvidia optimus'. Well, after 2 years I returned to FreeBSD and decided to make this work for real.


----------



## alfix (Jul 22, 2019)

Nice! It runs on an Acer 5742G with an Nvidia GeForce GT540M (I wrote a GL test and played trigger-rally).


----------



## rigoletto@ (Jul 22, 2019)

I advise you to also contact the x11/nvidia-driver maintainer directly (after filing the PR); that is danfe@.


----------



## shkhln (Jul 22, 2019)

pouya-eghbali said:


> surprisingly my empty 'freebsd nvidia' repository is result #4 when if you google 'freebsd nvidia optimus'. Well, after 2 years I returned to freebsd and decided to make this work for real.



Interesting… I didn't realize there was another repository.

Now, regarding your (current) work:

1. Bundling everything together (in one single commit, even) makes it difficult to see the changes. If you really must, then keep the port files _unchanged_ for the initial commit and commit your work on top of that.

2. "Nvidia Optimus Driver" makes it sound like it is the definitive solution to that problem, while in reality besides VirtualGL there are also Primus and Primus-VK as well as reverse PRIME GPU offloading, which is the closest thing to the "real" Optimus. Although I am unsure to what extent PRIME is actually usable.


----------



## pouya-eghbali (Jul 23, 2019)

shkhln said:


> 1. Bundling everything together (in one single commit, even) makes it difficult to see the changes. If you really must, then keep the port files _unchanged_ for the initial commit and commit your work on top of that.



That's why I still haven't made a PR to the nvidia-driver port. I didn't want to do all this for the v390 driver; I wanted the latest 430, so I had to patch the nvidia-driver Makefile. Otherwise it would be possible to provide an optimus option and make a slave port, or just make a slave port and move the files after the build is finished.



shkhln said:


> 2. "Nvidia Optimus Driver" makes it sound like it is the definitive solution to that problem, while in reality besides VirtualGL there are also Primus and Primus-VK as well as reverse PRIME GPU offloading, which is the closest thing to the "real" Optimus. Although I am unsure to what extent PRIME is actually usable.



I'm aware of these. Bumblebee on Linux has several backends: VirtualGL was the default, but they added Primus because VirtualGL has some performance issues. However, Primus was last updated 7 years ago and isn't available for FreeBSD; maybe we could make it work there, but I'm not really willing to port a dead project. As for PRIME, it uses RandR, so it might be possible under FreeBSD; I'll do some experiments on that.

What I'd prefer is adding an optimus option to the nvidia-driver port, making a slave port for nvidia-optimus (which will install the required Nvidia libraries), and having some optimus-utilities-${backend} ports, where the backend can be VirtualGL, Primus, PRIME, or whatever else works.


----------



## Theron Tarigo (Jul 23, 2019)

Usually I communicate my freebsd-related work on the mailing lists and bug trackers, but I've just registered to the forum as well.

I'd like to bring to your attention a similar effort I began over a year ago.

Now the situation that we have both worked on this and not shared efforts so far is entirely my fault; however I hope we can combine the efforts for the best way forward.

Version 0.1 of freebsd-gpu-headless had two parts:
 - A slave port of nvidia-driver, called nvidia-headless-driver, with no changes other than file paths fixed up to avoid breaking Mesa GLX.
 - A set of scripts, called nvidia-headless-utils, with an nvidia-xconfig wrapper to ease configuration and a VirtualGL wrapper to enable rendering to the integrated GPU's display.

Support was only basic and required a bit of manual configuration.  I never got this version polished well enough to submit to FreeBSD, but maybe it would have been good enough.  I did a very poor job of publicizing the work; its only mention is at https://lists.freebsd.org/pipermail/freebsd-x11/2018-March/020675.html

Since then (for the last year) I have had a "version 0.2" as local changes to my installation, with working nvidia-xconfig, ondemand power switching with acpi_call, and Linux support (both 32- and 64-bit).  Unfortunately I did not have sufficient time and motivation to give this the attention it deserved towards making it official.

Realizing, as I did this last weekend, that others are moving ahead on this and potentially duplicating this effort, I've taken the time to wrap this up and publish a version 0.3.  It is essentially ready to be submitted for inclusion in Ports, pending your feedback.









https://github.com/therontarigo/freebsd-gpu-headless
				




Some parts I have had working are not quite ready, including the Linux support and power switching.  The rest is there, including the rc.d script for running Nvidia in the background.  In fact, keeping Xorg attached achieves the majority of the power savings, as it prevents the GPU from spinning up and wasting power.  The consumption difference on my Dell XPS between running with Xorg idle vs. disabling the device via ACPI is only ~0.5 Watt, whereas leaving it in its power-on state, kernel module loaded or not, with no Xorg, wastes about 5 Watts!

Other things I have addressed:
 - User expects nvidia-xconfig to just work.  This is accomplished with a very minor change to that port.
 - Display number used for Nvidia is configurable, and may be overridden in environment.
 - Two ports for two use-cases:
   - nvidia-headless-utils: For using Nvidia as a compute resource on a headless server.
   - nvidia-hybrid-graphics: For notebook computers with iGPU+dGPU ("Optimus")
 - nvidia-headless-driver is just a minimal slave port.  The changes to nvidia-driver needed are also minimal and noninvasive.  This is very important for the future maintainability.

Since you have gotten this far with your "optimus" port, I gather that you have a good understanding of what is needed and might have the time to help integrate what I have here.  I strongly recommend breaking it down into small, understandable components with discrete purposes, rather than a single port for everything.  I agree with shkhln that having all the changes in a single Makefile is hard to follow (and it is all in there alongside the nvidia-driver internals).

I've avoided calling mine "Optimus" (although I do include that name in the description so it can be searched) since, as others have noted, Optimus is the proprietary switching software, something that will never be on FreeBSD unless provided by Nvidia themselves.  "Hybrid" seems an acceptable genericized name.

So far, nvidia-hybrid-graphics depends on VirtualGL because currently that is the only working solution.

Real PRIME support would be great!


----------



## pouya-eghbali (Jul 23, 2019)

Hi Theron. I wasn't aware of your port; I searched a lot in the past (a few years ago), couldn't find a solution, and so decided to make my own. I managed to make it work a few years ago, but it wasn't clean. Now I've returned to FreeBSD and I want to stay at all costs (well, it's difficult to be a FreeBSD desktop user... but things are getting better).

My current port is just a quick port to test how things would work; I also wanted the latest drivers, so I had to patch the nvidia-driver port. What I planned, though, was making several different ports for this, possibly with different backends and for different uses (with defaults that make sense and don't require the user to do extensive research). I am aware my port isn't perfect, and neither are the scripts I wrote.

The idea of having `nvidia-headless-utils`, `nvidia-hybrid-graphics` and `nvidia-headless-driver` is really nice, I like it and it's close to what I had in mind. I'll be glad to work together on this and maintain it together.

About the power savings: I was planning to provide methods to turn the GPU on/off (possibly automatically, before and after we start the X server), but I noticed very strange behavior. For me, calling the correct turn-off method using acpi_call (I extracted my DSDT to find the correct method, though there's the well-known https://people.freebsd.org/~xmj/turn_off_gpu.sh which does the exact same thing anyway) makes the GPU fan go crazy, and I can't even make it stop by rebooting; I have to power off. That's why I decided not to include this in my port.
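For anyone following along, the acpi_call approach being discussed looks roughly like the sketch below. The ACPI method path is only an example; the correct one must be read out of your own DSDT, which is exactly why results vary per machine:

```shell
# load the acpi_call kernel module (from sysutils/acpi_call)
kldload acpi_call

# invoke the DSDT power-off method for the discrete GPU;
# \_SB.PCI0.PEG0.PEGP._OFF is a commonly seen path, but yours may differ
acpi_call -p '\_SB.PCI0.PEG0.PEGP._OFF'
```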

Anyway, we can share our experience and put together a solution that really works.


----------



## shkhln (Jul 23, 2019)

pouya-eghbali said:


> About PRIME, it uses randr, might be possible under freebsd, I'll do some experiments on that.





Theron Tarigo said:


> Real PRIME support would be great!



I believe it is limited to the Linux driver at the moment.


----------



## Theron Tarigo (Jul 25, 2019)

pouya-eghbali said:


> Hi Theron. I wasn't aware of your port, I searched a lot in past (a few years ago) and couldn't find a solution so I decided to make my own, I managed to make it work a few years ago but it wasn't clean. Now I've returned to FreeBSD and I want to stay at all costs (well it's difficult to be a FreeBSD desktop user... but things are getting better).
> 
> My current port, is just a quick port to test how things will work, and also I wanted the latest drivers so I had to patch the nvidia driver port, but what I planned was making several different ports for this, possibly with different backends and for different uses (with defaults that make sense and doesn't require the user to do extensive research), I am aware my port isn't perfect, neither are the scripts I wrote.
> 
> The idea of having `nvidia-headless-utils`, `nvidia-hybrid-graphics` and `nvidia-headless-driver` is really nice, I like it and it's close to what I had in mind. I'll be glad to work together on this and maintain it together.


I thought this was the most sensible thing, and I am glad to hear that you agree.  In that case, I think it makes sense to go with those ports and add in your features.  Having the daemon load the kernel module is good, since it is one fewer rc.conf entry to require of the user.  Modeset vs. non-modeset option is also good, and Xorg server flags might be useful to someone.  I have Xorg display number handled by ${LOCALBASE}/etc/nvidia-headless.conf but maybe there is also a legitimate reason to override it from rc.conf; I don't know.

I avoided using "Optimus", "opti", etc. for the reasons discussed, but there may be significant value in keeping the "optirun" name for the sake of familiarity to ex-Linux users.

I'd like to find a better name for the daemon than either "nvidia_headless_xorg" (annoying to type) or "optimus".
I do like keeping things general: a FreeBSD user/admin on a headless Nvidia-powered compute server might find headless-utils useful without wanting to worry about anything laptop-related.  However, I would imagine that the existing nvidia-driver is perfectly adequate in that situation, so maybe this is a silly concern.



pouya-eghbali said:


> About the power savings, I was planning to provide methods to turn on/off the gpu (possibly automatically, before and after we start the X server) but I noticed very strange behavior. For me calling the correct turn off method using acpi_call (I extracted my DSDT to find out the correct method, but there's the famous https://people.freebsd.org/~xmj/turn_off_gpu.sh which does the exact same thing anyways) makes the GPU fan go crazy, I cannot even make it stop by rebooting I have to power off (that's why I decided to not include this in my port).


That is unfortunate.  I think I have heard of this issue in a discussion of macOS on unlicensed hardware (usually a good source of information on DSDTs).  If I recall correctly, that user's fix was to get the Nvidia driver working rather than power off the GPU.  Do you know how Linux and/or Microsoft handle this?  Do the other _OFF or similarly named ACPI methods do anything useful?



pouya-eghbali said:


> Anyways, we can share our experience and make a working, perfect solution.


Other than responding to messages, (and wasting some time fiddling with PRIME, see below), I don't know when I will next be able to work on this.  If you have time, could you try the freebsd-gpu-headless ports and look into adding the missing features?  I probably have time at least to review contributions.  Feedback from others would be useful too.

Waiting for PR 232645 makes sense.  Does the oldest GPU supported by 430 predate hybrid mobile graphics, or will someone appreciate a 390 or earlier flavor of nvidia-headless?



shkhln said:


> I believe it is limited to the Linux driver at the moment.


I tried creating an xorg.conf mostly following the documentation relevant to PRIME.  Xorg attaches to both GPUs (each in its own Screen, i.e. :0.0 and :0.1), but there aren't any special RandR resources available for outputting from one to the other.  Or is that not really how PRIME is supposed to work?
PRIME's performance edge comes from handling the DMAs directly in the kernel, yes?  Maybe something can be done without explicit cooperation from Nvidia, close to the binary interface between the closed-source Nvidia modules and the rest of the stack.  Primus ("PRIME in User Space"?) performed reasonably well for me on Linux, but wouldn't build with the FreeBSD toolchain.


----------



## shkhln (Jul 25, 2019)

Theron Tarigo said:


> Does the oldest GPU supported by 430 predate hybrid mobile graphics, or will someone appreciate a 390 or earlier flavor of nvidia-headless?



I have to agree here. With muxless laptops being in vogue since roughly 2012, there are likely to be very few models based on Fermi GPUs. Not to mention that, given the usual laptop build quality, by this point in time I would expect most of them to have mechanically broken into multiple separate laptop pieces.



Theron Tarigo said:


> I tried creating an xorg.conf mostly following the documentation relevant to PRIME.  Xorg attaches to both GPUs (each in its own Screen, i.e. :0.0 and :0.1) but there aren't any special RandR resources available for outputting from one to the other.



The Linux documentation mentions DRM, which I believe is provided by the _nvidia-drm_ kernel module available on Linux. That's presumably a hard requirement. The module is not that large (~4000 LOC), it is available with full source code, and it seems well suited for porting with linuxkpi. The big question is, of course, who would do that work?

So far there has been only one, quickly abandoned, attempt at porting _nvidia-drm_. I wouldn't mind trying my hand at it either, despite my lack of kernel development knowledge, very modest C skill, and being a lousy programmer in general, but I can't stand crashing my desktop environment every five minutes, and getting GPU passthrough working under bhyve (for development) is an even bigger puzzle.



Theron Tarigo said:


> Or is that not really how PRIME is supposed to work?



There is some news going around that makes me think I don't really understand what is already supported and what is yet to be supported. See also the PRIME and PRIME Synchronization thread on Nvidia's forum.


----------



## twschulz (Jul 26, 2019)

First of all, I want to say thanks to everyone who is looking at this. It's been one of those "if I have time" things that I really wanted to spend time looking at. I have a Thinkpad T420 and a T560 that both have Nvidia graphics and would like to see this work in FreeBSD.



pouya-eghbali said:


> About the power savings, I was planning to provide methods to turn on/off the gpu (possibly automatically, before and after we start the X server) but I noticed very strange behavior. For me calling the correct turn off method using acpi_call (I extracted my DSDT to find out the correct method, but there's the famous https://people.freebsd.org/~xmj/turn_off_gpu.sh which does the exact same thing anyways) makes the GPU fan go crazy, I cannot even make it stop by rebooting I have to power off (that's why I decided to not include this in my port).
> 
> Anyways, we can share our experience and make a working, perfect solution.



For me, when I run the `turn_off_gpu` script on my T420, it finds an ACPI command that turns off one of my USB buses and appears to leave the Nvidia card alone; at least that's what I can see from the dmesg "detach" output.  So it may be that these addresses need updating too, but I have not investigated further.

Anyway, I'll keep watching and hope that I might get some time this weekend to try the work out on the T420.


----------



## Theron Tarigo (Jul 26, 2019)

twschulz said:


> For me, when I run `turn_off_gpu` script on my T420, it finds an ACPI command that turns off one of my USB buses and appears to leave the NVidia card alone—at least that's what I can see from the dmesg "detach" output.  So, it may that these addresses need updating too, but I have not investigated more.


One way to see all the ACPI methods: `acpidump -o dsdt && acpiexec -b "Methods;" dsdt`
`acpiexec` is sysutils/acpica-tools.
Then use https://github.com/Bumblebee-Project/Bumblebee/wiki/ACPI-for-Developers#naming as a guide for names to look for.
The killing of USB by that turn_off_gpu script is an old problem, but I don't remember where I heard of it.


----------



## twschulz (Jul 30, 2019)

Hi again,

Thank you for the post about the ACPI methods. I looked, and there were some extra items for a Thinkpad W520, which seems close to what I have. I will need to experiment a bit more with that.
In other news, I tried the nvidia_gpu_headless port. It built and installed OK. I had to make some hand adjustments to the config using pouya's xorg.conf.nv (I needed to add the ConnectedMonitor option).

However, I can't seem to get anything to show up on the screen. In my regular X session (i.e., the one I start through SDDM), when I run `nvrun glxgears`, I get output on stderr about synchronizing the rate and a message about the frame rate every 5 seconds, but I get no window. Is there anything special I'm missing? Does VirtualGL need some sort of configuration?

I can send config and log files if desired.


----------



## Theron Tarigo (Jul 31, 2019)

Hi, can you try nvidia-hybrid-graphics port and `nvrun-vgl` in place of `nvrun`?  That should work with VirtualGL without extra configuration, unless I have missed something that is only on my system.

No window with `nvrun` is expected: it uses Nvidia as a headless device, so there is nothing to show the window on.  I should document this better.  I use it for GPGPU applications, where VirtualGL would only contribute a slowdown.

Thanks for helping to test.  What does ConnectedMonitor option do?  I'd like to bring Pouya's contributions into the freebsd-gpu ports I have.


----------



## twschulz (Jul 31, 2019)

OK. It seems that nvidia-headless-utils didn't install nvrun-vgl (though it was in the GitHub repo). There were also a couple of other files (nvidia-hybrid*) that weren't installed either. I updated the pkg-plist, rebuilt and reinstalled the port, and running glxgears with `nvrun-vgl` works. Hooray! Sorry, I didn't have a screenshot tool handy though.

I sent you a pull request for the missing files in the pkg-plist.

As for the ConnectedMonitor option: `nvidia-headless-xconfig` fails when trying to start the server, and the error in the log was something like "Can't find a display to connect to." So I combined the config that was left in /var/cache with Pouya's Xorg config (basically the ConnectedMonitor and UseEDID options) and put it in /usr/local/etc/X11/xorg-nvidia-headless.conf. Then I was able to start the server. It could be that later Nvidia cards don't care about this, but at least the one in my machine wanted a monitor to connect to.
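The resulting Device section is presumably along these lines. This is a sketch based on the options named above; the ConnectedMonitor value and BusID are illustrative, not taken from the actual files:

```
Section "Device"
    Identifier "nvidia-headless"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"            # example bus ID; check pciconf -lv
    # Pretend a monitor is attached: a muxless dGPU has no physical
    # outputs, so the server otherwise refuses to start
    Option     "ConnectedMonitor" "DFP-0"
    Option     "UseEDID" "false"
EndSection
```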

It was a little bit fiddly, but it seems to work now. I'll have to examine the ACPI stuff another time. Maybe it will be as you say: as soon as X11 connects to it, the power drops; we'll see. Regardless, at least I can use the Nvidia card on the T420 without switching things in the BIOS, something that had bothered me for a while, and something to experiment with further. Thank you for the effort; I very much appreciate it and hope that you can get the port committed.


----------



## Theron Tarigo (Aug 1, 2019)

Theron Tarigo said:


> Does the oldest GPU supported by 430 predate hybrid mobile graphics, or will someone appreciate a 390 or earlier flavor of nvidia-headless?





shkhln said:


> I have to agree here. With muxless laptops being in vogue since roughly 2012, there likely to be very few models based on Fermi GPUs. Not to mention that with the usual laptop build quality by this point in time I would expect most of them to be mechanically broken into multiple separate laptop pieces.


My laptop from 2012 still works and is held together by metal pieces I have glued in at the right places...
Evidently the oldest Optimus-capable GPU is the GT 540M: https://www.geforce.com/hardware/technology/optimus/supported-gpus?field_gpu_type_value=All&page=2
It is still supported by 390.87: https://www.geforce.com/drivers/results/137279
Similarly, the 650M in my 2012 laptop (which I do use on occasion) is supported by 390.87 and 418.x but not 430.x.
Therefore someone might appreciate nvidia-headless-driver-390 once nvidia-driver goes to 430 (what is holding up that PR?).


----------



## shkhln (Aug 1, 2019)

Theron Tarigo said:


> It is still supported by 390.87



The 390 driver for FreeBSD is extra annoying since it doesn't work with libglvnd; other than that, I don't care.



Theron Tarigo said:


> (what is holding up that PR?)



Why don't you ask the maintainer?


----------



## Theron Tarigo (Aug 9, 2019)

Replying here primarily to reassure others that this hasn't been forgotten.

danfe@ says he needs several more days (as of Aug. 3) to move ahead with Nvidia 430.x: PR 232645
In the mean time, I will rebase my nvidia-hybrid Github on Tomoaki AOKI's 430.40 / 390.129 patch.
390.129 uses GLVND but also supports oldest Optimus-capable card, so the nvidia-headless-driver-390 slave port is trivial once I work around the complication of GLVND in 430.40.
That said, I can't be exactly sure when I can next work on this.  Forks, patches, and/or pull requests are welcome; I am more likely to have time to at least review these.

GLVND may be useful as an alternative to the way I currently propose to handle the conflicting libGL, but it really should be done for all GL on FreeBSD, so for now I will keep it separate so we don't get dragged down by concerns about GLVND from Intel and AMD users who are happy with Mesa as-is.

Next step from there would be to have the Nvidia driver port take responsibility for installing GLVND as the libGL for all apps and without breaking Mesa, but similarly I don't want to see that get in the way of hybrid Intel/Nvidia support, which really does not need GLVND at all as long as users are okay with executing `optirun`.  Eventually GLVND should make nvidia-headless-driver (as opposed to nvidia-driver) obsolete, but I think it is needed until then.


----------



## shkhln (Aug 9, 2019)

Theron Tarigo said:


> 390.129 uses GLVND



Are you sure?



Theron Tarigo said:


> Next step from there would be to have the Nvidia driver port take responsibility for installing GLVND



430.40 already does that. It can't work any other way.



Theron Tarigo said:


> and without breaking Mesa



Somebody has to enable libglvnd in Mesa port. In fact, for now we can forget about nvidia-driver port and its libGL.so override habit. If Mesa is compiled with libglvnd support, it will work with that configuration anyway. It just means we would have two copies of every libglvnd binary installed (one from nvidia-driver and another from libglvnd port), which is a bit messy.


----------



## Theron Tarigo (Aug 9, 2019)

shkhln said:


> Are you sure?


No, more likely I got the 430.x parts confused with the 390.x parts of the patch.

The requirement that the actual libGL implementation explicitly support GLVND (and the inability of such an implementation to work without GLVND once it supports it) seems like a serious design bug in GLVND itself, but what can we do?  There are other engineering decisions from Nvidia that I don't question, so if this is what they give us with their drivers, we have to accept it in that context, just not everywhere.



shkhln said:


> 430.40 already does that. It can't work any other way.


430.40 needs libglvnd _somewhere_; it doesn't have to be /usr/local/lib/libGL.so.1.



shkhln said:


> Somebody has to enable libglvnd in Mesa port. In fact, for now we can forget about nvidia-driver port and its libGL.so override habit. If Mesa is compiled with libglvnd support, it will work with that configuration anyway. It just means we would have two copies of every libglvnd binary installed (one from nvidia-driver and another from libglvnd port), which is a bit messy.


No, it can work without cooperation from the Mesa port; VirtualGL has always made that possible (unless for some reason GLVND breaks VirtualGL, but it was working with GLVND and VirtualGL in Linux compat, so I don't think that is a problem).

From the perspective of the FreeBSD project, what does GLVND accomplish?
The ability to switch GL providers simply by specifying a different DISPLAY variable?  How was needing `optirun` worse than that?
The ability for one program to use multiple GPUs with differing drivers?  That is significantly more useful, but hardly anything uses it (I've done this only in a research setting; it was pre-GLVND and required one procedure table per GL implementation, essentially a customized OpenGL loading library).
The ability for an application to be linked once against a system GL library and then work with any implementation?  That has already worked for a long time thanks to ELF.

My point is that GLVND is marginally useful but neither significantly helps nor hinders the work that is needed.


----------



## shkhln (Aug 10, 2019)

Theron Tarigo said:


> My point is that GLVND is marginally useful but neither significantly helps nor hinders the work that is needed.



I'm not talking about libglvnd's usefulness for your work. It is relevant to you in the sense that you have to support both libglvnd and legacy configurations; whether libglvnd is actually useful to you is not for me to decide.



Theron Tarigo said:


> From the perspective of FreeBSD project, what does GLVND accomplish?



Libmap overrides are static, while libglvnd selects the appropriate OpenGL implementation dynamically. That doesn't seem like much, but once you have actually tried to explain to multiple people why OpenGL applications break on the Intel GPU with the nvidia-driver port installed (with a zero success rate, in my case), it's pretty clear that this functionality is indeed necessary.
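For context, the static override being contrasted here is a libmap rule of the kind the nvidia-driver port installs. The file location and paths below are illustrative, not necessarily what the port actually writes:

```
# /usr/local/etc/libmap.d/nvidia.conf (illustrative paths)
# Statically redirect every libGL.so.1 load to Nvidia's copy.
# The rule applies to all processes, which is why Mesa-only
# applications running on the Intel GPU break once it is in place.
libGL.so.1    nvidia/libGL.so.1
```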



Theron Tarigo said:


> 430.40 needs libglvnd _somewhere_; it doesn't have to be /usr/local/lib/libGL.so.1.



Yes, I know that; otherwise I wouldn't have referred to the nvidia-driver shenanigans.


----------



## shkhln (Aug 13, 2019)

Theron Tarigo said:


> Requirement of the actual libGL implementation to explicitly support GLVND (and inability of such an implementation to work without GLVND once it supports it) seem like serious design bugs in GLVND itself, but what can we do?



Let's take a closer look. Both the legacy libGL.so.430.40 and the GLVND-enabled libGLX_nvidia.so.430.40 from the 430.40 Linux driver have the same size in bytes, although they are not exactly identical. Swapping one for the other seems to work fine in either direction. Whatever explicit support there is for libglvnd, it likely consists of some very subtle changes, if any.

This might or might not work for 390.x FreeBSD drivers.


----------



## shak (Aug 24, 2019)

Can someone explain to me why the CPU usage skyrockets when I start the optirun service? It won't start automatically, and if I start it manually the CPU and GPU fans go crazy. I have to stop the service to fix it.

Any ideas?


----------



## miahshin (Sep 6, 2019)

I was battling with starting up the service.
According to https://www.freebsd.org/doc/en/articles/rc-scripting/rcng-hookup.html, you must have the field 

```
# PROVIDE: optimus
```
in your rc script in order for it to hook up with rc.d and start at boot time, otherwise you can only start it manually.
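For reference, a minimal rc.d script skeleton showing where that line goes (the name and the start routine here are illustrative, not the actual port's script):

```sh
#!/bin/sh
#
# PROVIDE: optimus
# REQUIRE: LOGIN
# KEYWORD: shutdown

. /etc/rc.subr

name="optimus"
rcvar="optimus_enable"
start_cmd="${name}_start"

load_rc_config $name
: ${optimus_enable:="NO"}

optimus_start()
{
    # Illustrative placeholder: the real script would load the nvidia
    # kernel modules and launch the secondary X server here.
    echo "Starting ${name}."
}

run_rc_command "$1"
```

Without the `# PROVIDE:` line, rcorder(8) never sees the script, which is why it can only be started manually.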


----------



## Theron Tarigo (Sep 10, 2019)

Looks like I should be moving ahead on bringing my ports up to date with everyone's contributions and ready for PR on the assumption that PR 232645 is stalled indefinitely.



shkhln said:


> Libmap overrides are static, while libglvnd selects the appropriate OpenGL implementation dynamically. That doesn't seem like much, but once you have actually tried to explain to multiple people why OpenGL applications break on an Intel GPU with the nvidia-driver port installed (with a zero success rate in my case), it's pretty clear that this functionality is indeed necessary.



What I meant by suggesting GLVND is unnecessary is this:

The only reason nvidia-driver port "needs" to break Mesa libGL graphics is libmap and nothing more.  Libmap is needed because otherwise the user would need LD_LIBRARY_PATH.  Now, what about a hybrid graphics system?  Some change of environment is always needed to select the other device, no matter whether it is "optirun" command or an environment variable such as DISPLAY or DRI_PRIME.  Then, something like LD_LIBRARY_PATH is not such a big deal (and software such as VirtualGL and Bumblebee have worked fine without GLVND).  Sure, GLVND becomes very useful when a single application wants to use multiple devices.  But in that case it is reasonable to explicitly build the application against it, rather than to use GLVND system-wide.
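To make the libmap point concrete: the static override the nvidia-driver port installs looks roughly like this (file location and target paths are illustrative; the real file varies by driver version):

```
# /usr/local/etc/libmap.d/nvidia.conf (illustrative sketch)
# Every process is redirected to the Nvidia libGL, regardless of which
# GPU it actually renders on -- which is exactly what breaks Mesa/Intel
# OpenGL applications system-wide.
libGL.so.1    nvidia/libGL.so.1
libGL.so      nvidia/libGL.so
```

Because the mapping applies to every process, there is no per-application escape hatch short of LD_LIBRARY_PATH or GLVND.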

So long as the default Mesa libGL configuration on FreeBSD doesn't use GLVND, I certainly wouldn't try to enable it just to make Nvidia happy.

So, it should be possible to move forward with everything that previously was working (64-bit freebsd, 64-bit linux, and 32-bit linux OpenGL apps on Nvidia through VirtualGL) without making any special accommodation for GLVND's new layers.


----------



## shkhln (Sep 10, 2019)

Theron Tarigo said:


> Looks like I should be moving ahead on bringing my ports up to date with everyone's contributions and ready for PR on the assumption that



Ok, doesn't hurt to try that.



Theron Tarigo said:


> PR 232645 is stalled indefinitely.



Just my luck, I suppose.


----------



## Theron Tarigo (Oct 13, 2019)

I am currently taking some time to work on this.  As I start, here is a list of what I plan to change/add:

Rename nvidia_headless_xorg to nvidia_xorg
Add kmod load to service startup
Add nvidia vs. nvidia-modeset option
Add ConnectedMonitor and UseEDID options to nvidia xorg conf template
Add port option to symlink "optirun" to nvrun-vgl
Test nvrun-vgl with x11/linux-virtualgl to graphically run Linux apps (I've had this working in the past)

Did I miss anything important?  In particular I don't want to be leaving out any critical features of Pouya's port.


----------



## Theron Tarigo (Oct 14, 2019)

Update: I've got this all done with the exception of the Linux compat, but I need to figure out what changed such that nvidia-headless-driver is once again replacing Xorg's libglx.so with its own.


----------



## shkhln (Oct 14, 2019)

Theron Tarigo said:


> that nvidia-headless-driver is once again replacing Xorg's libglx.so with its own.



It really shouldn't, https://svnweb.freebsd.org/ports?view=revision&revision=503722.


----------



## Theron Tarigo (Oct 14, 2019)

shkhln said:


> It really shouldn't, https://svnweb.freebsd.org/ports?view=revision&revision=503722.


I was referencing libglx.so, which is server-side (part of Xorg), not relevant to the client-side GL libraries or to Linux libs at all.


----------



## shkhln (Oct 14, 2019)

Ah, I forgot about that one. It has been renamed to libglxserver_nvidia.so, however my patch still keeps the bit that copies it over Xorg's libglx.so for a smaller diff to 390.87.


----------



## shkhln (Oct 14, 2019)

Oh, wait. Actually it doesn't. 390.x does, but not later versions. I think for 390.x pouya-eghbali has modified pkg-install.in for his port to avoid overwriting libglx.so.


----------



## Theron Tarigo (Oct 14, 2019)

shkhln said:


> Ah, I forgot about that one. It has been renamed to libglxserver_nvidia.so, however my patch still keeps the bit that copies it over Xorg's libglx.so for a smaller diff to 390.87.


What patch is this?


----------



## shkhln (Oct 14, 2019)

Theron Tarigo said:


> What patch is this?



https://bugs.freebsd.org/bugzilla/attachment.cgi?id=206516&action=edit


----------



## Theron Tarigo (Oct 14, 2019)

shkhln said:


> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=206516&action=edit


Forgive me, I find it unclear, is this meant to coexist with Intel/AMD (i.e. Mesa) GLX or is it simply an update from what FreeBSD has now to 430.x?


----------



## Theron Tarigo (Oct 14, 2019)

Frustratingly, it is x11-servers/xorg-server whose install script is doing the wrong thing in this case, see https://svnweb.freebsd.org/ports/he...es/pkg-install.in?revision=303429&view=markup.


----------



## Theron Tarigo (Oct 15, 2019)

It's a little thing, but the name "headless" for the ports is bothering me, and I think maybe I should call it "secondary" or something else.  So "nvidia-headless-driver" becomes "nvidia-secondary-driver".  I bring it up now because once it is committed as a port the name will never change.

Right now, the only direct use of nvidia-headless-driver is by a headless Xorg process (hence the name), but with PRIME that could change.

So, with the principal difference between the two driver ports being that one asserts itself as the default GL/GLX provider (conflicting with Intel) and the other does not, how best to name the new driver port?


----------



## shkhln (Oct 15, 2019)

Theron Tarigo said:


> Forgive me, I find it unclear, is this meant to coexist with Intel/AMD (i.e. Mesa) GLX or is it simply an update from what FreeBSD has now to 430.x?



The latter. I don't personally need anything else and I'm not interested in refactoring that port.


----------



## shkhln (Oct 15, 2019)

Theron Tarigo said:


> Frustratingly, it is x11-servers/xorg-server whose install script is doing the wrong thing in this case, see https://svnweb.freebsd.org/ports/he...es/pkg-install.in?revision=303429&view=markup.



I see. This is quite an annoyance indeed.


----------



## Theron Tarigo (Oct 26, 2019)

I've committed to my Github the changes I discussed: https://github.com/therontarigo/freebsd-gpu-headless/commit/7aaa72f6f19b2ab4ed91251a99b0423c7f04f6e7
The patch to x11/nvidia-driver contained therein is applicable to last month's revision of the port.  I'll need more time to look at the recent changes and update my patch accordingly.


----------



## Theron Tarigo (Nov 10, 2019)

I've updated my nvidia-driver patch to work with the latest ports svn, see https://github.com/therontarigo/freebsd-gpu-headless/blob/440/ports/x11_nvidia-driver.diff
It's switched to a new branch, "440", since right now it breaks 390.x, which is something I will want to fix.
Please help test.


----------



## pbp_jackd (Nov 18, 2019)

@Theron Tarigo

Testsetup:
OS: FreeBSD 13.0-CURRENT
Ports: ports version head ( 2019-11-18)
Graphics Card: MX150
Device: Huawei Matebook X Pro

Thumbs up! Just tested against ports head as of today using your 440 branch. The only minor hiccup was that I had to reboot after the installation because nvrun-vgl wasn't able to run an X server on display :8. I've only tested the nvrun-vgl command so far.

Would love to see you bring this into the ports tree. Keep up the good work. 

Thank you, and also shkhln!


----------



## shkhln (Nov 18, 2019)

I'm not involved in this work at all.


----------



## Theron Tarigo (Nov 18, 2019)

After installation: did you `service nvidia_xorg (one)start` before trying `nvrun(-vgl)`, and it still did not work until reboot?

I myself am not having such a good experience with it since switching to 440.x (these problems never existed before):

Having Nvidia loaded prevents suspend from working (it seems to start to suspend but bounces); having nvidia _not_ loaded prevents it from working after suspend until a dance with acpi_call and more suspend/resumes to unstick its power state.

Nvidia Xorg seems to "fall asleep" causing all users of it to suffer very low performance until I use x11vnc to send input to the server, then it recovers.  Maybe someone knows there is an Xorg config or command option to fix this?

Granted, if this is the best Nvidia can do, and it doesn't indicate a problem with the port itself, I guess there is nothing I can do but complain to Nvidia and wait for an update.

Just barely relevant: Firefox worked flawlessly with VirtualGL until some recent versions, now a webpage creating a WebGL context hangs the browser more often than not.  It's distressing to see a major app break compatibility with the only currently working solution on FreeBSD for piping graphics from one GPU to another.


----------



## pbp_jackd (Nov 19, 2019)

Theron Tarigo 


> After installation: did you service nvidia_xorg (one)start before trying nvrun(-vgl), and it still did not work until reboot?


I did do that. I guess irrelevant but there was no nvidia driver installed before on this system.



> Having Nvidia loaded prevents suspend from working (it seems to start to suspend but bounces); having nvidia _not_ loaded prevents it from working after suspend until a dance with acpi_call and more suspend/resumes to unstick its power state.


No issue with that so far. zzz or closing the lid just works. Resuming also works without any noticeable issue.
I did not enable the ACPI power-saving support option while installing the driver; I wanted to test with all defaults first.
Would enabling this option make a difference?



> Nvidia Xorg seems to "fall asleep" causing all users of it to suffer very low performance until I use x11vnc to send input to the server, then it recovers. Maybe someone knows there is an Xorg config or command option to fix this?


Also not seen yet. How to test that ?



> Just barely relevant: Firefox worked flawlessly with VirtualGL until some recent versions, now a webpage creating a WebGL context hangs the browser more often than not. It's distressing to see a major app break compatibility with the only currently working solution on FreeBSD for piping graphics from one GPU to another.


Just tested Firefox 70.0.1 real quick. No issue so far with WebGL.


----------



## Theron Tarigo (Nov 26, 2019)

Nvidia hybrid graphics ports are currently under review at https://reviews.freebsd.org/D22521.


----------



## shkhln (Dec 6, 2019)

Theron Tarigo said:


> Nvidia Xorg seems to "fall asleep" causing all users of it to suffer very low performance until I use x11vnc to send input to the server, then it recovers.  Maybe someone knows there is an Xorg config or command option to fix this?



Likely this issue: https://devtalk.nvidia.com/default/...-drops-to-1-fps-after-running-for-10-minutes/.


----------



## Theron Tarigo (Dec 7, 2019)

shkhln said:


> Likely this issue: https://devtalk.nvidia.com/default/...-drops-to-1-fps-after-running-for-10-minutes/.


Thanks, looks like I should include

```
Option "HardDPMS" "false"
```

in the xorg template.
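In context, that option would sit in the template's nvidia Device section, along the lines of the following sketch (the Identifier and BusID here are illustrative and machine-specific):

```
Section "Device"
    Identifier "Card1"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"          # machine-specific; check pciconf output
    Option     "HardDPMS" "false"   # work around the DPMS framerate-throttling issue
EndSection
```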



Theron Tarigo said:


> Having Nvidia loaded prevents suspend from working (it seems to start to suspend but bounces); having nvidia _not_ loaded prevents it from working after suspend until a dance with acpi_call and more suspend/resumes to unstick its power state.


I discovered this was my fault, caused by some leftover devd hooks running acpi_call from when I was experimenting with ondemand Nvidia power switching.




pbp_jackd said:


> Just tested Firefox 70.0.1 real quick. No issue so far with WebGL.


The website Shadertoy.com reliably hangs Firefox tabs after a shader page reload in Firefox 70 but not in Firefox 68.


----------



## Hakaba (Jan 2, 2020)

How can I test the nvidia hybrid graphics port?
I have a laptop with a recent Intel processor (8th-gen i7) and an Nvidia discrete GPU (1060 Max-Q).
I just updated my laptop to the 13.0-CURRENT version.

```
#uname -a
FreeBSD msi 13.0-CURRENT FreeBSD 13.0-CURRENT #0 8d00ce82bf9-c265417(master): Mon Dec 30 21:56:43 CET 2019     hakaba@msi:/usr/obj/usr/src/amd64.amd64/sys/GENERIC  amd64
```

I presume I have to deinstall nvidia-driver from ports, update /usr/ports/, apply the patch in /usr/ports/x11/nvidia-driver, and make install.
But maybe there is a tool to do this step for me?


----------



## SirDice (Jan 2, 2020)

Hakaba said:


> I just update my laptop to 13.0-CURRENT version


Topics about unsupported FreeBSD versions


----------



## Hakaba (Jan 2, 2020)

Is the nvidia optimus driver available on 12.1? I did not find it (that is why I upgraded to 13.0-CURRENT).


----------



## twschulz (Jan 3, 2020)

The port is currently under review (see post #51 in this thread). So, it's not currently in the ports tree yet.

You should be able to test it on 12.1. The steps would be something like the following (just guessing, I haven't done this… yet)

1) Get the ports tree (for example by using portsnap), assuming that ends up in /usr/ports
2) Download the raw diff from reviews (D22521.diff)
3) apply the raw diff to your ports tree (cd /usr/ports && patch < D22521.diff)
4) assuming the patch applies cleanly: cd /usr/ports/x11/nvidia-hybrid-graphics && make

Keep in mind you are kind of "on your own" with this. But, you can probably add comments here or on the actual review.


----------



## Hakaba (Jan 3, 2020)

I did that before testing 13.0, but I did not have the nvidia-hybrid-graphics port in /usr/ports/x11/, nor the nvidia-headless-utils mentioned in the diff file.
That is why I am lost.


----------



## SirDice (Jan 3, 2020)

Hakaba, all versions of FreeBSD use the exact same ports tree.


----------



## twschulz (Jan 6, 2020)

Hakaba said:


> I did that before testing 13.0, but I did not have the nvidia-hybrid-graphics port in /usr/ports/x11/, nor the nvidia-headless-utils mentioned in the diff file.
> That is why I am lost.



Ah, OK. The diff creates new files (the new ports). You may need to touch those files referenced in the ports tree ahead of time and then apply the patch.  You can always run patch with --dry-run to see what happens and it may give you a guide to knowing which files to touch.

Good luck!


----------



## Theron Tarigo (Jan 17, 2020)

I haven't been keeping up well with this, I'll check now that patches are still applicable and try to address the remaining concerns.

Firefox's problems with VirtualGL seem to have gone away, hopefully Mozilla do not reintroduce the problem but that is out of my hands.


----------



## Hakaba (Feb 22, 2020)

Little update for my MSI laptop:
Back on FreeBSD 12.1, I applied the patch and rebuilt everything from scratch.
I found a conf template in the /usr/local/etc/X11/xorg.conf folder named xorg-nvidia-headless-template.conf.
Without this config file, Xorg works only on the laptop screen with scfb.

With this config file, the HDMI screen receives video. The laptop screen is black with a blank square in the top left.
If I type some text on the keyboard, the text appears on the laptop screen.

No errors from the X server, so the logs do not help me.
I do not have a config in the /etc/X11 folder.
I changed the input device config in xorg-nvidia-headless-template.conf to get a working mouse and keyboard.


----------



## Theron Tarigo (Feb 24, 2020)

Hi Hakaba, it sounds like your laptop is wired such that the internal display uses Intel graphics while the HDMI port uses Nvidia. First be sure the scfb or intel driver is working well for the laptop display (your primary Xorg session should not have the nvidia driver in it). Now, if your displays are wired as I suspect, `env DISPLAY=:0 xev` should show a window on the laptop and `env DISPLAY=:8 xev` should show one on the HDMI display. Is that the case?

There is not any way that I know for these two displays to share one set of input devices.  A single Xorg server attached to both GPUs with PRIME would be needed for that, but PRIME is either missing or undocumented and broken on FreeBSD.
Possibly your BIOS has an option for changing the wiring of GPUs to outputs?

(These FreeBSD Optimus support drivers are only working for laptops where Intel GPU manages all displays, but if you can select Nvidia to manage all displays, then you will only need nvidia-driver package.)


----------



## Hakaba (Feb 28, 2020)

Theron Tarigo said:


> env DISPLAY=:0 xev should show a window on the laptop and  env DISPLAY=:8 xev should show on the HDMI display. Is that the case?



DISPLAY=:0 works well.
`DISPLAY=:8 xev`
xev: unable to open display ':8'
(With scfb on the laptop screen or nvidia on the HDMI screen)

I probably have to install the Intel driver; scfb only allows 800x600 (per xrandr).

What surprises me is: if I want to use the HDMI screen without X, is that not possible? Why is that screen tied to X and the WM rather than to the system?

With nvidia, when my HDMI screen works:
`nvidia-settings`
Could not open display :8

More tests tomorrow...


----------



## Theron Tarigo (Mar 1, 2020)

Hakaba said:


> `DISPLAY=:8 xev`
> xev: unable to open display ':8'





Hakaba said:


> `nvidia-settings`
> Could not open display :8


Only after `service nvidia_xorg start`
It's an instance of Xorg intended to run exclusively on whichever GPU has no displays connected, but it seems your system is more complex than that.


----------



## Hakaba (Mar 1, 2020)

Thanks. This time I can see the window on the HDMI screen, so xev works.

`env DISPLAY=:8 optirun glxgears` shows the gears on my HDMI screen.
Without optirun it fails.
I'm looking for a way to change the max resolution and to be sure the NVidia graphics is used (on the laptop display, I get the same performance with glxgears and optirun glxgears).

Thanks a lot, I will be able to test this patch.


----------



## Theron Tarigo (Mar 1, 2020)

Hakaba said:


> to be sure the NVidia graphics is used (on the laptop display, I get the same performance with glxgears and optirun glxgears).



`glxgears` is not a test of overall GPU power, the limiting factor is usually the framebuffer throughput of the method used for transporting frames to the display (naturally, direct Integrated graphics typically outperforms Nvidia->Integrated proxy).  FPS for an on-screen demo at or above monitor refresh rate means nearly nothing for GPU rendering power.

You can use `glxinfo -B` or `glxgears -info` to see a summary of OpenGL support including which GPU is used in that environment.
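For example, comparing the renderer string in each environment (these output lines are illustrative for a typical Optimus setup, not captured from a real run):

```
$ glxinfo -B | grep "OpenGL renderer"
OpenGL renderer string: Mesa DRI Intel(R) HD Graphics
$ optirun glxinfo -B | grep "OpenGL renderer"
OpenGL renderer string: GeForce GTX 1060/PCIe/SSE2
```

If both commands report the same renderer, the offload is not actually happening.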


----------



## Hakaba (Mar 1, 2020)

I get different scores with glmark2:
`glmark2` 487
`optirun glmark2` 826
`env DISPLAY=:8 optirun glmark2` 798

`glxinfo -B` shows me something strange... "OpenGL vendor string: VMware"
`optirun glxinfo -B` uses NVidia


----------



## u666sa (Mar 5, 2020)

How do I start the optimus service automatically? I have optimus_enable="YES" in rc.conf, but after a reboot the service is not started and I have to start it manually.

Acer Aspire 5742G with GeForce 420M works.


----------



## Theron Tarigo (Mar 7, 2020)

u666sa said:


> How do I start the optimus service automatically? I have optimus_enable="YES" in rc.conf, but after a reboot the service is not started and I have to start it manually.
> 
> Acer Aspire 5742G with GeForce 420M works.


Hi, are you using https://github.com/pouya-eghbali/freebsd-nvidia-optimus (deprecated) or https://reviews.freebsd.org/D22521 ?
For the latter, which I will try to get submitted to ports tree (I've been neglecting some little remaining issues), the service name is "nvidia_xorg".

(Currently waiting on danfe@FreeBSD.org for review)


----------



## Hakaba (Apr 18, 2020)

Hello, some news here...

I installed the Intel graphics driver that matches my CPU and now use it as the default.

With glmark2, I get a very good score with Intel graphics (2350) and still 760 with the NVidia card.
Does that mean I have an issue with the NVidia driver too (I installed the latest version for an NVidia 1060 Max-Q)?


----------



## Theron Tarigo (Apr 23, 2020)

What does glmark2 measure?  If it has anything to do with framerate then it won't be useful here.  Rendering directly from Intel graphics should have a higher frame throughput than with nvrun-vgl/optirun since needing to transfer frames from Nvidia to Intel over the PCIe bus can become the limiting factor.  However in either case it should keep up with your monitor refresh, otherwise there is a real problem.

To reiterate: Optimus GPU performance is expected to be worse than Integrated for simple workloads, but still "fast enough" for display.  For complex workloads, Nvidia should outperform the Integrated.


----------



## Hakaba (Apr 23, 2020)

OK, I will find another benchmark to measure Intel/NVidia performance, based not on framerate but on computation.
I notice no bugs with programs launched via optimus.


----------



## blitztide (Nov 1, 2021)

I have tried to install the nvidia-hybrid-graphics driver, and it appears to work:
I get a desktop session, and the output of `nvrun glxinfo -B` indicates that I can offload to the nvidia card. However, I am unable to output via HDMI to an external monitor on this laptop.
Is this functionality supported by the driver?



Spoiler: pciconf



hostb0@pci0:0:0:0:      class=0x060000 rev=0x07 hdr=0x00 vendor=0x8086 device=0x1910 subvendor=0x103c subdevice=0x8257
    vendor     = 'Intel Corporation'
    device     = 'Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Host Bridge/DRAM Registers'
    class      = bridge
    subclass   = HOST-PCI
pcib1@pci0:0:1:0:       class=0x060400 rev=0x07 hdr=0x01 vendor=0x8086 device=0x1901 subvendor=0x103c subdevice=0x8257
    vendor     = 'Intel Corporation'
    device     = '6th-10th Gen Core Processor PCIe Controller (x16)'
    class      = bridge
    subclass   = PCI-PCI
vgapci1@pci0:0:2:0:     class=0x030000 rev=0x06 hdr=0x00 vendor=0x8086 device=0x191b subvendor=0x103c subdevice=0x8257
    vendor     = 'Intel Corporation'
    device     = 'HD Graphics 530'
    class      = display
    subclass   = VGA
none0@pci0:0:4:0:       class=0x118000 rev=0x07 hdr=0x00 vendor=0x8086 device=0x1903 subvendor=0x103c subdevice=0x8257
    vendor     = 'Intel Corporation'
    device     = 'Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Thermal Subsystem'
    class      = dasp
xhci0@pci0:0:20:0:      class=0x0c0330 rev=0x31 hdr=0x00 vendor=0x8086 device=0xa12f subvendor=0x103c subdevice=0x8257
    vendor     = 'Intel Corporation'
    device     = '100 Series/C230 Series Chipset Family USB 3.0 xHCI Controller'
    class      = serial bus
    subclass   = USB
pchtherm0@pci0:0:20:2:  class=0x118000 rev=0x31 hdr=0x00 vendor=0x8086 device=0xa131 subvendor=0x103c subdevice=0x8257
    vendor     = 'Intel Corporation'
    device     = '100 Series/C230 Series Chipset Family Thermal Subsystem'
    class      = dasp
none1@pci0:0:22:0:      class=0x078000 rev=0x31 hdr=0x00 vendor=0x8086 device=0xa13a subvendor=0x103c subdevice=0x8257
    vendor     = 'Intel Corporation'
    device     = '100 Series/C230 Series Chipset Family MEI Controller'
    class      = simple comms
ahci0@pci0:0:23:0:      class=0x010601 rev=0x31 hdr=0x00 vendor=0x8086 device=0xa103 subvendor=0x103c subdevice=0x8257
    vendor     = 'Intel Corporation'
    device     = 'HM170/QM170 Chipset SATA Controller [AHCI Mode]'
    class      = mass storage
    subclass   = SATA
pcib2@pci0:0:28:0:      class=0x060400 rev=0xf1 hdr=0x01 vendor=0x8086 device=0xa114 subvendor=0x103c subdevice=0x8257
    vendor     = 'Intel Corporation'
    device     = '100 Series/C230 Series Chipset Family PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib3@pci0:0:28:5:      class=0x060400 rev=0xf1 hdr=0x01 vendor=0x8086 device=0xa115 subvendor=0x103c subdevice=0x8257
    vendor     = 'Intel Corporation'
    device     = '100 Series/C230 Series Chipset Family PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib4@pci0:0:28:6:      class=0x060400 rev=0xf1 hdr=0x01 vendor=0x8086 device=0xa116 subvendor=0x103c subdevice=0x8257
    vendor     = 'Intel Corporation'
    device     = '100 Series/C230 Series Chipset Family PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
isab0@pci0:0:31:0:      class=0x060100 rev=0x31 hdr=0x00 vendor=0x8086 device=0xa14e subvendor=0x103c subdevice=0x8257
    vendor     = 'Intel Corporation'
    device     = 'HM170 Chipset LPC/eSPI Controller'
    class      = bridge
    subclass   = PCI-ISA
none2@pci0:0:31:2:      class=0x058000 rev=0x31 hdr=0x00 vendor=0x8086 device=0xa121 subvendor=0x103c subdevice=0x8257
    vendor     = 'Intel Corporation'
    device     = '100 Series/C230 Series Chipset Family Power Management Controller'
    class      = memory
hdac0@pci0:0:31:3:      class=0x040300 rev=0x31 hdr=0x00 vendor=0x8086 device=0xa170 subvendor=0x103c subdevice=0x8257
    vendor     = 'Intel Corporation'
    device     = '100 Series/C230 Series Chipset Family HD Audio Controller'
    class      = multimedia
    subclass   = HDA
ichsmb0@pci0:0:31:4:    class=0x0c0500 rev=0x31 hdr=0x00 vendor=0x8086 device=0xa123 subvendor=0x103c subdevice=0x8257
    vendor     = 'Intel Corporation'
    device     = '100 Series/C230 Series Chipset Family SMBus'
    class      = serial bus
    subclass   = SMBus
vgapci0@pci0:1:0:0:     class=0x030200 rev=0xa1 hdr=0x00 vendor=0x10de device=0x1427 subvendor=0x103c subdevice=0x8257
    vendor     = 'NVIDIA Corporation'
    device     = 'GM206M [GeForce GTX 965M]'
    class      = display
    subclass   = 3D
rtsx0@pci0:7:0:0:       class=0xff0000 rev=0x01 hdr=0x00 vendor=0x10ec device=0x522a subvendor=0x103c subdevice=0x8257
    vendor     = 'Realtek Semiconductor Co., Ltd.'
    device     = 'RTS522A PCI Express Card Reader'
iwm0@pci0:8:0:0:        class=0x028000 rev=0x61 hdr=0x00 vendor=0x8086 device=0x095a subvendor=0x8086 subdevice=0x5010
    vendor     = 'Intel Corporation'
    device     = 'Wireless 7265'
    class      = network
re0@pci0:9:0:0: class=0x020000 rev=0x15 hdr=0x00 vendor=0x10ec device=0x8168 subvendor=0x103c subdevice=0x8257
    vendor     = 'Realtek Semiconductor Co., Ltd.'
    device     = 'RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller'
    class      = network
    subclass   = ethernet





Spoiler: xrandr



Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 16384 x 16384
eDP-1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 344mm x 194mm
   1920x1080     60.02*+  59.93    40.03  
   1680x1050     59.95    59.88  
   1400x1050     59.98  
   1600x900      59.95    59.82  
   1280x1024     60.02  
   1400x900      59.96    59.88  
   1280x960      60.00  
   1368x768      59.88    59.85  
   1280x800      59.97    59.81    59.91  
   1280x720      59.99    59.86    59.74  
   1024x768      60.04    60.00  
   960x720       60.00  
   928x696       60.05  
   896x672       60.01  
   1024x576      59.95    59.96    59.90    59.82  
   960x600       59.93    60.00  
   960x540       59.96    59.99    59.63    59.82  
   800x600       60.00    60.32    56.25  
   840x525       60.01    59.88  
   864x486       59.92    59.57  
   700x525       59.98  
   800x450       59.95    59.82  
   640x512       60.02  
   700x450       59.96    59.88  
   640x480       60.00    59.94  
   720x405       59.51    58.99  
   684x384       59.88    59.85  
   640x400       59.88    59.98  
   640x360       59.86    59.83    59.84    59.32  
   512x384       60.00  
   512x288       60.00    59.92  
   480x270       59.63    59.82  
   400x300       60.32    56.34  
   432x243       59.92    59.57  
   320x240       60.05  
   360x202       59.51    59.13  
   320x180       59.84    59.32  
HDMI-1 disconnected (normal left inverted right x axis y axis)





Spoiler: rc.conf



kld_list="i915kms"
clear_tmp_enable="YES"
sendmail_enable="NONE"
hostname="Polaris"
keymap="uk.kbd"
wlans_iwm0="wlan0"
ifconfig_wlan0="WPA DHCP"
ifconfig_wlan0_ipv6="inet6 accept_rtadv"
create_args_wlan0="country GB"
sshd_enable="YES"
powerd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
nvidia_xorg_enable="YES"
dbus_enable="YES"
sddm_enable="YES"


----------



## Phishfry (Nov 2, 2021)

blitztide said:


> Is this functionality supported by the driver?


Some of this is going to depend on the hardware.
Some will wire the HDMI to Optimus and some to base Intel Graphics.
Looks like your main display is eDP and HDMI1 is not connected.
If it is wired to the NVidia chip then you might be able to use the NVidia control panel.
x11/nvidia-settings
If not you might need to make a custom conf for it. Same with Intel Graphics.


----------



## shkhln (Nov 2, 2021)

This is not going to work no matter what.


----------



## blitztide (Nov 2, 2021)

shkhln said:


> This is not going to work no matter what.


I can get it to output to individual screens, but not both at the same time.
I have modified the xorg.conf as follows:


```
Section "ServerLayout"
        Identifier "Xorg"
        Screen  0 "Screen0"
        Screen  1 "Screen1"
EndSection

Section "Module"
        Load "modesetting"
EndSection

Section "Device"
        Identifier "Card0"
        Driver "intel"
        BusID "PCI:0:2:0"
EndSection

Section "Device"
        Identifier "Card1"
        Driver "nvidia"
        BusID "PCI:1:0:0"
        Option "AllowEmptyInitialConfiguration"
EndSection

Section "Monitor"
        Identifier "eDP1"
        Option "Primary" "true"
EndSection

Section "Monitor"
        Identifier "HDMI-1"
EndSection

Section "Screen"
        Identifier "Screen0"
        Device "Card0"
        Option "Monitor-eDP1" "eDP1"
EndSection

Section "Screen"
        Identifier "Screen1"
        Device "Card1"
EndSection
```

When I change the Intel driver to vesa, I get eDP1 to output, but it will not load the nvidia driver; if left configured as above, it loads the nvidia driver and I get HDMI-1 output.
I've been searching heavily through the forums, and I am not sure what the differences in implementation between a Linux host and a FreeBSD host are.


----------



## shkhln (Nov 2, 2021)

Presumably http://download.nvidia.com/XFree86/Linux-x86_64/495.44/README/randr14.html depends on nvidia-drm.ko, which is yet to be ported properly. (There is an experimental port at https://github.com/amshafer/nvidia-driver.) On the other hand, http://download.nvidia.com/XFree86/Linux-x86_64/495.44/README/primerenderoffload.html works with the latest (495.xx) driver, but that's not what you want.


----------



## blitztide (Nov 3, 2021)

I am able to get Xinerama to use dual screens, but at a performance cost.
Without Xinerama I can get Xorg to create one screen for integrated graphics and one for dedicated, however I can only display on one at a time.
An improvement from before: the dedicated GPU screen will detect and display a black desktop over HDMI, but my display manager will not load a desktop on it. A weird thing is that TWM allows use of both desktops, but XFCE and KDE do not.


----------

