Thread: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

  1. Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support


    Greetings (from a new lurker to the list),

    To your question Greg, "yes" and "sort of" ;-). I have started taking a look at these patches with a strong interest in understanding how they work. I've built a kernel with them and tried out a few things with real SR-IOV hardware.

    --
    Lance Hartmann




    --- On Wed, 11/5/08, Greg KH wrote:
    >
    > Are there any actual users of this API around yet? How was it tested
    > as there is no hardware to test on? Which drivers are going to have
    > to be rewritten to take advantage of this new interface?
    >
    > thanks,
    >
    > greg k-h






  2. Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

    On Thu, Nov 06, 2008 at 07:40:12AM -0800, H L wrote:
    >
    > Greetings (from a new lurker to the list),


    Welcome!

    > To your question Greg, "yes" and "sort of" ;-). I have started taking
    > a look at these patches with a strong interest in understanding how
    > they work. I've built a kernel with them and tried out a few things
    > with real SR-IOV hardware.


    Did you have to modify individual drivers to take advantage of this
    code? It looks like the core code will run on this type of hardware,
    but there seems to be no real advantage until a driver is modified to
    use it, right?

    Or am I missing some great advantage to having this code without
    modified drivers?

    thanks,

    greg k-h

  3. Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

    I have not modified any existing drivers, but instead I threw together a bare-bones module enabling me to make a call to pci_iov_register() and then poke at an SR-IOV adapter's /sys entries for which no driver was loaded.

    It appears from my perusal thus far that drivers using these new SR-IOV patches will require modification; i.e. the driver associated with the Physical Function (PF) will be required to make the pci_iov_register() call along with the requisite notify() function. Essentially this suggests to me a model for the PF driver to perform any "global actions" or setup on behalf of VFs before enabling them, after which VF drivers could be associated.
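
    For a flavor of what that bare-bones module looked like, a sketch is below. Note that the pci_iov_register() prototype and the notify() semantics are whatever Yu's patches actually define -- the signature and the iov_test_* names here are my illustrative guesses, not a confirmed interface:

        /*
         * Hypothetical bare-bones PF module. The pci_iov_register()
         * signature below is assumed, not taken from the patches.
         */
        #include <linux/module.h>
        #include <linux/pci.h>

        static int iov_test_notify(struct pci_dev *pf, u32 event)
        {
        	/* device-specific "global actions" on behalf of VFs go here */
        	return 0;
        }

        static int iov_test_probe(struct pci_dev *dev,
        			  const struct pci_device_id *id)
        {
        	int err = pci_enable_device(dev);

        	if (err)
        		return err;

        	/* register the PF with the SR-IOV core (assumed signature) */
        	return pci_iov_register(dev, iov_test_notify);
        }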

    I have so far only seen Yu Zhao's "7-patch" set. I've not yet looked at his subsequently tendered "15-patch" set, so I don't know what has changed. The hardware/firmware implementation for any given SR-IOV compatible device will determine the extent of differences required between a PF driver and a VF driver.

    --
    Lance Hartmann


    --- On Thu, 11/6/08, Greg KH wrote:

    > Date: Thursday, November 6, 2008, 9:43 AM
    > On Thu, Nov 06, 2008 at 07:40:12AM -0800, H L wrote:
    > >
    > > Greetings (from a new lurker to the list),
    >
    > Welcome!
    >
    > > To your question Greg, "yes" and "sort of" ;-). I have started taking
    > > a look at these patches with a strong interest in understanding how
    > > they work. I've built a kernel with them and tried out a few things
    > > with real SR-IOV hardware.
    >
    > Did you have to modify individual drivers to take advantage of this
    > code? It looks like the core code will run on this type of hardware,
    > but there seems to be no real advantage until a driver is modified to
    > use it, right?
    >
    > Or am I missing some great advantage to having this code without
    > modified drivers?
    >
    > thanks,
    >
    > greg k-h






  4. Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support


    A: No.
    Q: Should I include quotations after my reply?

    On Thu, Nov 06, 2008 at 08:41:53AM -0800, H L wrote:
    > I have not modified any existing drivers, but instead I threw together
    > a bare-bones module enabling me to make a call to pci_iov_register()
    > and then poke at an SR-IOV adapter's /sys entries for which no driver
    > was loaded.
    >
    > It appears from my perusal thus far that drivers using these new
    > SR-IOV patches will require modification; i.e. the driver associated
    > with the Physical Function (PF) will be required to make the
    > pci_iov_register() call along with the requisite notify() function.
    > Essentially this suggests to me a model for the PF driver to perform
    > any "global actions" or setup on behalf of VFs before enabling them
    > after which VF drivers could be associated.


    Where would the VF drivers have to be associated? On the "pci_dev"
    level or on a higher one?

    Will all drivers that want to bind to a "VF" device need to be
    rewritten?

    > I have so far only seen Yu Zhao's "7-patch" set. I've not yet looked
    > at his subsequently tendered "15-patch" set so I don't know what has
    > changed. The hardware/firmware implementation for any given SR-IOV
    > compatible device, will determine the extent of differences required
    > between a PF driver and a VF driver.


    Yeah, that's what I'm worried/curious about. Without seeing the code
    for such a driver, how can we properly evaluate if this infrastructure
    is the correct one and the proper way to do all of this?

    thanks,

    greg k-h

  5. Re: git repository for SR-IOV development?

    On Thu, Nov 06, 2008 at 08:51:09AM -0800, H L wrote:
    >
    > Has anyone initiated or given consideration to the creation of a git
    > repository (say, on kernel.org) for SR-IOV development?


    Why? It's only a few patches, right? Why would it need a whole new git
    tree?

    thanks,

    greg k-h

  6. RE: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

    > On Thu, Nov 06, 2008 at 08:41:53AM -0800, H L wrote:
    > > I have not modified any existing drivers, but instead I threw together
    > > a bare-bones module enabling me to make a call to pci_iov_register()
    > > and then poke at an SR-IOV adapter's /sys entries for which no driver
    > > was loaded.
    > >
    > > It appears from my perusal thus far that drivers using these new
    > > SR-IOV patches will require modification; i.e. the driver associated
    > > with the Physical Function (PF) will be required to make the
    > > pci_iov_register() call along with the requisite notify() function.
    > > Essentially this suggests to me a model for the PF driver to perform
    > > any "global actions" or setup on behalf of VFs before enabling them
    > > after which VF drivers could be associated.
    >
    > Where would the VF drivers have to be associated? On the "pci_dev"
    > level or on a higher one?

    A VF appears to the Linux OS as a standard (full, additional) PCI device. The driver is associated in the same way as for a normal PCI device. Ideally, you would use SR-IOV devices on a virtualized system, for example, using Xen. A VF can then be assigned to a guest domain as a full PCI device.
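
    To make that concrete, a VF driver binds through the ordinary pci_driver machinery; the vendor/device IDs below are placeholders for whatever IDs a real VF advertises:

        #include <linux/module.h>
        #include <linux/pci.h>

        /* placeholder IDs: a VF typically has its own device ID,
         * distinct from the PF's, so normal driver matching applies */
        static const struct pci_device_id my_vf_ids[] = {
        	{ PCI_DEVICE(0x8086, 0x10ca) },	/* example VF ID */
        	{ 0, }
        };
        MODULE_DEVICE_TABLE(pci, my_vf_ids);

        static int my_vf_probe(struct pci_dev *dev,
        		       const struct pci_device_id *id)
        {
        	return pci_enable_device(dev);	/* usual PCI bring-up */
        }

        static struct pci_driver my_vf_driver = {
        	.name     = "my_vf",
        	.id_table = my_vf_ids,
        	.probe    = my_vf_probe,
        };

        static int __init my_vf_init(void)
        {
        	return pci_register_driver(&my_vf_driver);
        }
        module_init(my_vf_init);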

    > Will all drivers that want to bind to a "VF" device need to be
    > rewritten?


    Currently, any vendor providing an SR-IOV device needs to provide a PF driver and a VF driver that runs on their hardware. A VF driver does not necessarily need to know much about SR-IOV but just run on the presented PCI device. You might want to have a communication channel between PF and VF driver though, for various reasons, if such a channel is not provided in hardware.

    > > I have so far only seen Yu Zhao's "7-patch" set. I've not yet looked
    > > at his subsequently tendered "15-patch" set so I don't know what has
    > > changed. The hardware/firmware implementation for any given SR-IOV
    > > compatible device will determine the extent of differences required
    > > between a PF driver and a VF driver.
    >
    > Yeah, that's what I'm worried/curious about. Without seeing the code
    > for such a driver, how can we properly evaluate if this infrastructure
    > is the correct one and the proper way to do all of this?

    Yu's API allows a PF driver to register with the Linux PCI code and use it to activate VFs and allocate their resources. The PF driver needs to be modified to work with that API. While you can argue about what that API is supposed to look like, it is clear that such an API is required in some form. The PF driver needs to know when VFs are active as it might want to allocate further (device-specific) resources to VFs or initiate further (device-specific) configurations. While probably a lot of SR-IOV specific code has to be in the PF driver, there is also support required from the Linux PCI subsystem, which is to some extent provided by Yu's patches.
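
    As a rough sketch of that interaction (the event encoding and the pf_*_vf_resources() helpers here are hypothetical, not from Yu's patches), the PF driver side might look like:

        /* hypothetical notify() handler in a PF driver */
        static int pf_notify(struct pci_dev *pf, u32 event)
        {
        	switch (event) {
        	case 0:	/* e.g. "VFs are about to be enabled" */
        		/* allocate device-specific resources per VF:
        		 * queues, MAC filters, doorbells, ... */
        		return pf_alloc_vf_resources(pf);
        	case 1:	/* e.g. "VFs have been disabled" */
        		pf_free_vf_resources(pf);
        		return 0;
        	default:
        		return 0;
        	}
        }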

    Anna

  7. Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

    On Thu, Nov 06, 2008 at 08:49:19AM -0800, Greg KH wrote:
    > On Thu, Nov 06, 2008 at 08:41:53AM -0800, H L wrote:
    > > I have not modified any existing drivers, but instead I threw together
    > > a bare-bones module enabling me to make a call to pci_iov_register()
    > > and then poke at an SR-IOV adapter's /sys entries for which no driver
    > > was loaded.
    > >
    > > It appears from my perusal thus far that drivers using these new
    > > SR-IOV patches will require modification; i.e. the driver associated
    > > with the Physical Function (PF) will be required to make the
    > > pci_iov_register() call along with the requisite notify() function.
    > > Essentially this suggests to me a model for the PF driver to perform
    > > any "global actions" or setup on behalf of VFs before enabling them
    > > after which VF drivers could be associated.

    >
    > Where would the VF drivers have to be associated? On the "pci_dev"
    > level or on a higher one?
    >
    > Will all drivers that want to bind to a "VF" device need to be
    > rewritten?


    The current model being implemented by my colleagues has separate
    drivers for the PF (aka native) and VF devices. I don't personally
    believe this is the correct path, but I'm reserving judgement until I
    see some code.

    I don't think we really know what the One True Usage model is for VF
    devices. Chris Wright has some ideas, I have some ideas and Yu Zhao has
    some ideas. I bet there's other people who have other ideas too.

    --
    Matthew Wilcox Intel Open Source Technology Centre
    "Bill, look, we understand that you're interested in selling us this
    operating system, but compare it to ours. We can't possibly take such
    a retrograde step."

  8. Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

    On Thu, Nov 06, 2008 at 10:47:41AM -0700, Matthew Wilcox wrote:
    > On Thu, Nov 06, 2008 at 08:49:19AM -0800, Greg KH wrote:
    > > On Thu, Nov 06, 2008 at 08:41:53AM -0800, H L wrote:
    > > > I have not modified any existing drivers, but instead I threw together
    > > > a bare-bones module enabling me to make a call to pci_iov_register()
    > > > and then poke at an SR-IOV adapter's /sys entries for which no driver
    > > > was loaded.
    > > >
    > > > It appears from my perusal thus far that drivers using these new
    > > > SR-IOV patches will require modification; i.e. the driver associated
    > > > with the Physical Function (PF) will be required to make the
    > > > pci_iov_register() call along with the requisite notify() function.
    > > > Essentially this suggests to me a model for the PF driver to perform
    > > > any "global actions" or setup on behalf of VFs before enabling them
    > > > after which VF drivers could be associated.
    > >
    > > Where would the VF drivers have to be associated? On the "pci_dev"
    > > level or on a higher one?
    > >
    > > Will all drivers that want to bind to a "VF" device need to be
    > > rewritten?
    >
    > The current model being implemented by my colleagues has separate
    > drivers for the PF (aka native) and VF devices. I don't personally
    > believe this is the correct path, but I'm reserving judgement until I
    > see some code.


    Hm, I would like to see that code before we can properly evaluate this
    interface, especially as they are all tightly tied together.

    > I don't think we really know what the One True Usage model is for VF
    > devices. Chris Wright has some ideas, I have some ideas and Yu Zhao has
    > some ideas. I bet there's other people who have other ideas too.


    I'd love to hear those ideas.

    Rumor has it, there is some Xen code floating around to support this
    already, is that true?

    thanks,

    greg k-h

  9. Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

    On Thu, Nov 06, 2008 at 05:38:16PM +0000, Fischer, Anna wrote:
    > > On Thu, Nov 06, 2008 at 08:41:53AM -0800, H L wrote:
    > > > I have not modified any existing drivers, but instead I threw together
    > > > a bare-bones module enabling me to make a call to pci_iov_register()
    > > > and then poke at an SR-IOV adapter's /sys entries for which no driver
    > > > was loaded.
    > > >
    > > > It appears from my perusal thus far that drivers using these new
    > > > SR-IOV patches will require modification; i.e. the driver associated
    > > > with the Physical Function (PF) will be required to make the
    > > > pci_iov_register() call along with the requisite notify() function.
    > > > Essentially this suggests to me a model for the PF driver to perform
    > > > any "global actions" or setup on behalf of VFs before enabling them
    > > > after which VF drivers could be associated.
    > >
    > > Where would the VF drivers have to be associated? On the "pci_dev"
    > > level or on a higher one?
    >
    > A VF appears to the Linux OS as a standard (full, additional) PCI
    > device. The driver is associated in the same way as for a normal PCI
    > device. Ideally, you would use SR-IOV devices on a virtualized system,
    > for example, using Xen. A VF can then be assigned to a guest domain as
    > a full PCI device.

    It's that "second" part that I'm worried about. How is that going to
    happen? Do you have any patches that show this kind of "assignment"?

    > > Will all drivers that want to bind to a "VF" device need to be
    > > rewritten?
    >
    > Currently, any vendor providing an SR-IOV device needs to provide a PF
    > driver and a VF driver that runs on their hardware.

    Are there any such drivers available yet?

    > A VF driver does not necessarily need to know much about SR-IOV but
    > just run on the presented PCI device. You might want to have a
    > communication channel between PF and VF driver though, for various
    > reasons, if such a channel is not provided in hardware.

    Agreed, but what does that channel look like in Linux?

    I have some ideas of what I think it should look like, but if people
    already have code, I'd love to see that as well.

    > > > I have so far only seen Yu Zhao's "7-patch" set. I've not yet looked
    > > > at his subsequently tendered "15-patch" set so I don't know what has
    > > > changed. The hardware/firmware implementation for any given SR-IOV
    > > > compatible device will determine the extent of differences required
    > > > between a PF driver and a VF driver.
    > >
    > > Yeah, that's what I'm worried/curious about. Without seeing the code
    > > for such a driver, how can we properly evaluate if this infrastructure
    > > is the correct one and the proper way to do all of this?
    >
    > Yu's API allows a PF driver to register with the Linux PCI code and
    > use it to activate VFs and allocate their resources. The PF driver
    > needs to be modified to work with that API. While you can argue about
    > what that API is supposed to look like, it is clear that such an API
    > is required in some form.


    I totally agree; I'm arguing about what that API looks like.

    I want to see some code...

    thanks,

    greg k-h

  10. Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support


    [Anna, can you fix your word-wrapping please? Your lines appear to be
    infinitely long which is most unpleasant to reply to]

    On Thu, Nov 06, 2008 at 05:38:16PM +0000, Fischer, Anna wrote:
    > > Where would the VF drivers have to be associated? On the "pci_dev"
    > > level or on a higher one?
    >
    > A VF appears to the Linux OS as a standard (full, additional) PCI
    > device. The driver is associated in the same way as for a normal PCI
    > device. Ideally, you would use SR-IOV devices on a virtualized system,
    > for example, using Xen. A VF can then be assigned to a guest domain as
    > a full PCI device.

    It's not clear that's the right solution. If the VF devices are _only_
    going to be used by the guest, then arguably, we don't want to create
    pci_devs for them in the host. (I think it _is_ the right answer, but I
    want to make it clear there's multiple opinions on this).

    > > Will all drivers that want to bind to a "VF" device need to be
    > > rewritten?
    >
    > Currently, any vendor providing an SR-IOV device needs to provide a PF
    > driver and a VF driver that runs on their hardware. A VF driver does not
    > necessarily need to know much about SR-IOV but just run on the presented
    > PCI device. You might want to have a communication channel between PF
    > and VF driver though, for various reasons, if such a channel is not
    > provided in hardware.

    That is one model. Another model is to provide one driver that can
    handle both PF and VF devices. A third model is to provide, say, a
    Windows VF driver and a Xen PF driver and only support Windows-on-Xen.
    (This last would probably be an exercise in foot-shooting, but
    nevertheless, I've heard it mooted).

    > > Yeah, that's what I'm worried/curious about. Without seeing the code
    > > for such a driver, how can we properly evaluate if this infrastructure
    > > is the correct one and the proper way to do all of this?
    >
    > Yu's API allows a PF driver to register with the Linux PCI code and use
    > it to activate VFs and allocate their resources. The PF driver needs to
    > be modified to work with that API. While you can argue about what that
    > API is supposed to look like, it is clear that such an API is required
    > in some form. The PF driver needs to know when VFs are active as it
    > might want to allocate further (device-specific) resources to VFs or
    > initiate further (device-specific) configurations. While probably a lot
    > of SR-IOV specific code has to be in the PF driver, there is also
    > support required from the Linux PCI subsystem, which is to some extent
    > provided by Yu's patches.


    Everyone agrees that some support is necessary. The question is exactly
    what it looks like. I must confess to not having reviewed this latest
    patch series yet -- I'm a little burned out on patch review.

    --
    Matthew Wilcox Intel Open Source Technology Centre
    "Bill, look, we understand that you're interested in selling us this
    operating system, but compare it to ours. We can't possibly take such
    a retrograde step."

  11. Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

    On Thu, Nov 06, 2008 at 10:05:39AM -0800, H L wrote:
    >
    > --- On Thu, 11/6/08, Greg KH wrote:
    >
    > > On Thu, Nov 06, 2008 at 08:41:53AM -0800, H L wrote:
    > > > I have not modified any existing drivers, but instead I threw together
    > > > a bare-bones module enabling me to make a call to pci_iov_register()
    > > > and then poke at an SR-IOV adapter's /sys entries for which no driver
    > > > was loaded.
    > > >
    > > > It appears from my perusal thus far that drivers using these new
    > > > SR-IOV patches will require modification; i.e. the driver associated
    > > > with the Physical Function (PF) will be required to make the
    > > > pci_iov_register() call along with the requisite notify() function.
    > > > Essentially this suggests to me a model for the PF driver to perform
    > > > any "global actions" or setup on behalf of VFs before enabling them
    > > > after which VF drivers could be associated.
    > >
    > > Where would the VF drivers have to be associated? On the "pci_dev"
    > > level or on a higher one?
    >
    > I have not yet fully grokked Yu Zhao's model to answer this. That
    > said, I would *hope* to find it on the "pci_dev" level.


    Me too.

    > > Will all drivers that want to bind to a "VF" device need to be
    > > rewritten?
    >
    > Not necessarily, or perhaps minimally; depends on hardware/firmware
    > and actions the driver wants to take. An example here might assist.
    > Let's just say someone has created, oh, I don't know, maybe an SR-IOV
    > NIC. Now, for 'general' I/O operations to pass network traffic back
    > and forth there would ideally be no difference in the actions and
    > therefore behavior of a PF driver and a VF driver. But, what do you
    > do in the instance a VF wants to change link-speed? As that physical
    > characteristic affects all VFs, how do you handle that? This is where
    > the hardware/firmware implementation part comes to play. If a VF
    > driver performs some actions to initiate the change in link speed, the
    > logic in the adapter could be anything like:




    Yes, I agree that all of this needs to be done, somehow.

    It's that "somehow" that I am interested in trying to see how it works
    out.

    > > > I have so far only seen Yu Zhao's "7-patch" set. I've not yet looked
    > > > at his subsequently tendered "15-patch" set so I don't know what has
    > > > changed. The hardware/firmware implementation for any given SR-IOV
    > > > compatible device will determine the extent of differences required
    > > > between a PF driver and a VF driver.
    > >
    > > Yeah, that's what I'm worried/curious about. Without seeing the code
    > > for such a driver, how can we properly evaluate if this infrastructure
    > > is the correct one and the proper way to do all of this?
    >
    > As the example above demonstrates, that's a tough question to answer.
    > Ideally, in my view, there would only be one driver written per SR-IOV
    > device and it would contain the logic to "do the right things" based
    > on whether it's running as a PF or VF with that determination easily
    > accomplished by testing the existence of the SR-IOV extended
    > capability. Then, in an effort to minimize (if not eliminate) the
    > complexities of driver-to-driver actions for fielding "global events",
    > contain as much of the logic as is possible within the adapter.
    > Minimizing the efforts required for the device driver writers in my
    > opinion paves the way to greater adoption of this technology.
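
    That runtime determination could indeed be cheap; a minimal sketch of the probe-time check, assuming the PCI_EXT_CAP_ID_SRIOV define from these patches and hypothetical setup_as_*() helpers:

        /* inside the driver's probe routine: decide PF vs. VF role by
         * testing for the SR-IOV extended capability (the define comes
         * from the SR-IOV patches, the helpers are placeholders) */
        int pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_SRIOV);

        if (pos)
        	setup_as_pf(dev, pos);	/* SR-IOV capability present: PF */
        else
        	setup_as_vf(dev);	/* capability absent: VF */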


    Yes, making things easier is the key here.

    Perhaps some of this could be hidden with a new bus type for these kinds
    of devices? Or a "virtual" bus of pci devices that the original SR-IOV
    device creates that correspond to the individual virtual PCI devices?
    If that were the case, then it might be a lot easier in the end.
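
    A minimal sketch of what registering such a virtual bus could look like with the existing driver core (the "sriov_virtual" name is purely illustrative):

        #include <linux/device.h>

        /* a virtual bus the PF driver could populate with one device
         * per VF, so VF drivers bind here instead of to raw pci_devs */
        static struct bus_type sriov_virtual_bus = {
        	.name = "sriov_virtual",
        };

        static int __init sriov_bus_init(void)
        {
        	return bus_register(&sriov_virtual_bus);
        }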

    thanks,

    greg k-h

  12. Re: git repository for SR-IOV development?

    --- On Thu, 11/6/08, Greg KH wrote:

    > On Thu, Nov 06, 2008 at 08:51:09AM -0800, H L wrote:
    > >
    > > Has anyone initiated or given consideration to the creation of a git
    > > repository (say, on kernel.org) for SR-IOV development?
    >
    > Why? It's only a few patches, right? Why would it need a whole new git
    > tree?


    So as to minimize the time and effort of patching a kernel, especially when the tree (and/or commit hash) against which the patches were created isn't specified on the mailing list. Plus, there appear to be open questions about how, precisely, the implementation should ultimately be modeled, and given that, who knows how many patches will ultimately be submitted? I know I've built the "7-patch" set (painfully, by the way), and I'm aware there's another 15-patch set out there which I've not yet examined.






  13. RE: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

    > Subject: Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support
    >
    > On Thu, Nov 06, 2008 at 05:38:16PM +0000, Fischer, Anna wrote:
    > > > On Thu, Nov 06, 2008 at 08:41:53AM -0800, H L wrote:
    > > > > I have not modified any existing drivers, but instead I threw
    > > > > together a bare-bones module enabling me to make a call to
    > > > > pci_iov_register() and then poke at an SR-IOV adapter's /sys
    > > > > entries for which no driver was loaded.
    > > > >
    > > > > It appears from my perusal thus far that drivers using these new
    > > > > SR-IOV patches will require modification; i.e. the driver
    > > > > associated with the Physical Function (PF) will be required to
    > > > > make the pci_iov_register() call along with the requisite
    > > > > notify() function. Essentially this suggests to me a model for
    > > > > the PF driver to perform any "global actions" or setup on behalf
    > > > > of VFs before enabling them after which VF drivers could be
    > > > > associated.
    > > >
    > > > Where would the VF drivers have to be associated? On the "pci_dev"
    > > > level or on a higher one?
    > >
    > > A VF appears to the Linux OS as a standard (full, additional) PCI
    > > device. The driver is associated in the same way as for a normal PCI
    > > device. Ideally, you would use SR-IOV devices on a virtualized
    > > system, for example, using Xen. A VF can then be assigned to a guest
    > > domain as a full PCI device.
    >
    > It's that "second" part that I'm worried about. How is that going to
    > happen? Do you have any patches that show this kind of "assignment"?

    That depends on your setup. Using Xen, you could assign the VF to a guest domain like any other PCI device, e.g. using PCI pass-through. For VMware, KVM, there are standard ways to do that, too. I currently don't see why SR-IOV devices would need any specific, non-standard mechanism for device assignment.


    > > > Will all drivers that want to bind to a "VF" device need to be
    > > > rewritten?
    > >
    > > Currently, any vendor providing a SR-IOV device needs to provide a PF
    > > driver and a VF driver that runs on their hardware.
    >
    > Are there any such drivers available yet?


    I don't know.


    > > A VF driver does not necessarily need to know much about SR-IOV but
    > > just run on the presented PCI device. You might want to have a
    > > communication channel between PF and VF driver though, for various
    > > reasons, if such a channel is not provided in hardware.
    >
    > Agreed, but what does that channel look like in Linux?
    >
    > I have some ideas of what I think it should look like, but if people
    > already have code, I'd love to see that as well.

    At this point I would guess that this code is vendor specific, as are the drivers. The issue I see is that most likely drivers will run in different environments, for example, in Xen the PF driver runs in a driver domain while a VF driver runs in a guest VM. So a communication channel would need to be either Xen specific, or vendor specific. Also, a guest using the VF might run Windows while the PF might be controlled under Linux.

    Anna

  14. RE: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

    > Subject: Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support
    >
    > On Thu, Nov 06, 2008 at 10:05:39AM -0800, H L wrote:
    > >
    > > --- On Thu, 11/6/08, Greg KH wrote:
    > >
    > > > On Thu, Nov 06, 2008 at 08:41:53AM -0800, H L wrote:
    > > > > I have not modified any existing drivers, but instead I threw
    > > > > together a bare-bones module enabling me to make a call to
    > > > > pci_iov_register() and then poke at an SR-IOV adapter's /sys
    > > > > entries for which no driver was loaded.
    > > > >
    > > > > It appears from my perusal thus far that drivers using these new
    > > > > SR-IOV patches will require modification; i.e. the driver
    > > > > associated with the Physical Function (PF) will be required to
    > > > > make the pci_iov_register() call along with the requisite
    > > > > notify() function. Essentially this suggests to me a model for
    > > > > the PF driver to perform any "global actions" or setup on behalf
    > > > > of VFs before enabling them after which VF drivers could be
    > > > > associated.
    > > >
    > > > Where would the VF drivers have to be associated? On the "pci_dev"
    > > > level or on a higher one?
    > >
    > > I have not yet fully grokked Yu Zhao's model to answer this. That
    > > said, I would *hope* to find it on the "pci_dev" level.
    >
    > Me too.
    >
    > > > Will all drivers that want to bind to a "VF" device need to be
    > > > rewritten?
    > >
    > > Not necessarily, or perhaps minimally; depends on hardware/firmware
    > > and actions the driver wants to take. An example here might assist.
    > > Let's just say someone has created, oh, I don't know, maybe an SR-IOV
    > > NIC. Now, for 'general' I/O operations to pass network traffic back
    > > and forth there would ideally be no difference in the actions and
    > > therefore behavior of a PF driver and a VF driver. But, what do you
    > > do in the instance a VF wants to change link-speed? As that physical
    > > characteristic affects all VFs, how do you handle that? This is where
    > > the hardware/firmware implementation part comes to play. If a VF
    > > driver performs some actions to initiate the change in link speed,
    > > the logic in the adapter could be anything like:
    >
    > Yes, I agree that all of this needs to be done, somehow.
    >
    > It's that "somehow" that I am interested in trying to see how it works
    > out.
    >
    > > > > I have so far only seen Yu Zhao's "7-patch" set. I've not yet
    > > > > looked at his subsequently tendered "15-patch" set so I don't
    > > > > know what has changed. The hardware/firmware implementation for
    > > > > any given SR-IOV compatible device will determine the extent of
    > > > > differences required between a PF driver and a VF driver.
    > > >
    > > > Yeah, that's what I'm worried/curious about. Without seeing the
    > > > code for such a driver, how can we properly evaluate if this
    > > > infrastructure is the correct one and the proper way to do all of
    > > > this?
    > >
    > > As the example above demonstrates, that's a tough question to answer.
    > > Ideally, in my view, there would only be one driver written per
    > > SR-IOV device and it would contain the logic to "do the right things"
    > > based on whether it's running as a PF or VF with that determination
    > > easily accomplished by testing the existence of the SR-IOV extended
    > > capability. Then, in an effort to minimize (if not eliminate) the
    > > complexities of driver-to-driver actions for fielding "global
    > > events", contain as much of the logic as is possible within the
    > > adapter. Minimizing the efforts required for the device driver
    > > writers in my opinion paves the way to greater adoption of this
    > > technology.
    >
    > Yes, making things easier is the key here.
    >
    > Perhaps some of this could be hidden with a new bus type for these
    > kinds of devices? Or a "virtual" bus of pci devices that the original
    > SR-IOV device creates that correspond to the individual virtual PCI
    > devices? If that were the case, then it might be a lot easier in the
    > end.

    I think a standard communication channel in Linux for SR-IOV devices
    would be a good start, and would help adoption of the technology --
    something like the virtual bus you are describing. It means that
    vendors do not need to write their own communication channel in the
    drivers. It would need to have well-defined APIs though, as I guess
    that devices will have very different capabilities and hardware
    implementations for PFs and VFs, and so they might have very
    different events and information to propagate.
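
    Purely as a strawman (none of these names exist today), such an API might be a small ops structure that a vendor PF driver backs with its hardware mailbox, with a software fallback in the core:

        /* strawman PF<->VF channel API -- hypothetical, for discussion */
        struct pf_vf_channel_ops {
        	int  (*send)(struct pci_dev *dev, const void *msg, size_t len);
        	int  (*recv)(struct pci_dev *dev, void *msg, size_t len);
        	void (*notify)(struct pci_dev *dev, u32 event);
        };

        int pci_iov_channel_register(struct pci_dev *pf,
        			     const struct pf_vf_channel_ops *ops);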

    Anna

  15. Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

    On Thu, Nov 06, 2008 at 09:53:08AM -0800, Greg KH wrote:
    > On Thu, Nov 06, 2008 at 10:47:41AM -0700, Matthew Wilcox wrote:
    > > On Thu, Nov 06, 2008 at 08:49:19AM -0800, Greg KH wrote:
    > > > On Thu, Nov 06, 2008 at 08:41:53AM -0800, H L wrote:
    > > > > I have not modified any existing drivers, but instead I threw together
    > > > > a bare-bones module enabling me to make a call to pci_iov_register()
    > > > > and then poke at an SR-IOV adapter's /sys entries for which no driver
    > > > > was loaded.
    > > > >
    > > > > It appears from my perusal thus far that drivers using these new
    > > > > SR-IOV patches will require modification; i.e. the driver associated
    > > > > with the Physical Function (PF) will be required to make the
    > > > > pci_iov_register() call along with the requisite notify() function.
    > > > > Essentially this suggests to me a model for the PF driver to perform
    > > > > any "global actions" or setup on behalf of VFs before enabling them
    > > > > after which VF drivers could be associated.
    > > >
    > > > Where would the VF drivers have to be associated? On the "pci_dev"
    > > > level or on a higher one?
    > > >
    > > > Will all drivers that want to bind to a "VF" device need to be
    > > > rewritten?
    > >
    > > The current model being implemented by my colleagues has separate
    > > drivers for the PF (aka native) and VF devices. I don't personally
    > > believe this is the correct path, but I'm reserving judgement until I
    > > see some code.
    >
    > Hm, I would like to see that code before we can properly evaluate this
    > interface. Especially as they are all tightly tied together.
    >
    > > I don't think we really know what the One True Usage model is for VF
    > > devices. Chris Wright has some ideas, I have some ideas and Yu Zhao has
    > > some ideas. I bet there's other people who have other ideas too.
    >
    > I'd love to hear those ideas.
    >
    > Rumor has it, there is some Xen code floating around to support this
    > already, is that true?


    Xen patches were posted to xen-devel by Yu Zhao on the 29th of September [1].
    Unfortunately the only responses that I can find are a) that the patches
    were mangled and b) they seem to include changes (by others) that have
    been merged into Linux. I have confirmed that both of these concerns
    are valid.

    I have not yet examined the difference, if any, in the approach taken by Yu
    to SR-IOV in Linux and Xen. Unfortunately comparison is less than trivial
    due to the wide gap in kernel versions between Linux-Xen (2.6.18.8) and
    Linux itself.

    One approach that I was considering in order to familiarise myself with the
    code was to backport the v6 Linux patches (this thread) to Linux-Xen. I made a
    start on that, but again due to kernel version differences it is non-trivial.

    [1] http://lists.xensource.com/archives/.../msg00923.html

    --
    Simon Horman
    VA Linux Systems Japan K.K., Sydney, Australia Satellite Office
    H: www.vergenet.net/~horms/ W: www.valinux.co.jp/en


  16. Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

    Matthew Wilcox wrote:
    > [Anna, can you fix your word-wrapping please? Your lines appear to be
    > infinitely long which is most unpleasant to reply to]
    >
    > On Thu, Nov 06, 2008 at 05:38:16PM +0000, Fischer, Anna wrote:
    >
    >>> Where would the VF drivers have to be associated? On the "pci_dev"
    >>> level or on a higher one?
    >>
    >> A VF appears to the Linux OS as a standard (full, additional) PCI
    >> device. The driver is associated in the same way as for a normal PCI
    >> device. Ideally, you would use SR-IOV devices on a virtualized system,
    >> for example, using Xen. A VF can then be assigned to a guest domain as
    >> a full PCI device.
    >
    > It's not clear that's the right solution. If the VF devices are _only_
    > going to be used by the guest, then arguably, we don't want to create
    > pci_devs for them in the host. (I think it _is_ the right answer, but I
    > want to make it clear there's multiple opinions on this).


    The VFs shouldn't be limited to being used by the guest.

    SR-IOV is actually an incredibly painful thing. You need to have a VF
    driver in the guest, do hardware pass through, have a PV driver stub in
    the guest that's hypervisor specific (a VF is not usable on its own),
    have a device specific backend in the VMM, and if you want to do live
    migration, have another PV driver in the guest that you can do teaming
    with. Just a mess.

    What we would rather do in KVM, is have the VFs appear in the host as
    standard network devices. We would then like to back our existing PV
    driver to this VF directly bypassing the host networking stack. A key
    feature here is being able to fill the VF's receive queue with guest
    memory instead of host kernel memory so that you can get zero-copy
    receive traffic. This will perform just as well as doing passthrough
    (at least) and avoid all that ugliness of dealing with SR-IOV in the guest.

    This eliminates all of the mess of various drivers in the guest and all
    the associated baggage of doing hardware passthrough.

    So IMHO, having VFs be usable in the host is absolutely critical because
    I think it's the only reasonable usage model.

    Regards,

    Anthony Liguori

  17. Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

    Greg KH wrote:
    > On Thu, Nov 06, 2008 at 10:47:41AM -0700, Matthew Wilcox wrote:
    >
    >> I don't think we really know what the One True Usage model is for VF
    >> devices. Chris Wright has some ideas, I have some ideas and Yu Zhao has
    >> some ideas. I bet there's other people who have other ideas too.
    >
    > I'd love to hear those ideas.

    We've been talking about avoiding hardware passthrough entirely and just
    backing a virtio-net backend driver by a dedicated VF in the host. That
    avoids a huge amount of guest-facing complexity, lets migration Just
    Work, and should give the same level of performance.

    Regards,

    Anthony Liguori

    > Rumor has it, there is some Xen code floating around to support this
    > already, is that true?
    >
    > thanks,
    >
    > greg k-h



  18. Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

    On Thu, Nov 06, 2008 at 04:38:40PM -0600, Anthony Liguori wrote:
    > > It's not clear that's the right solution. If the VF devices are _only_
    > > going to be used by the guest, then arguably, we don't want to create
    > > pci_devs for them in the host. (I think it _is_ the right answer, but I
    > > want to make it clear there's multiple opinions on this).
    >
    > The VFs shouldn't be limited to being used by the guest.
    >
    > SR-IOV is actually an incredibly painful thing. You need to have a VF
    > driver in the guest, do hardware pass through, have a PV driver stub in
    > the guest that's hypervisor specific (a VF is not usable on its own),
    > have a device specific backend in the VMM, and if you want to do live
    > migration, have another PV driver in the guest that you can do teaming
    > with. Just a mess.


    Not to mention that you basically have to statically allocate them up
    front.

    > What we would rather do in KVM, is have the VFs appear in the host as
    > standard network devices. We would then like to back our existing PV
    > driver to this VF directly bypassing the host networking stack. A key
    > feature here is being able to fill the VF's receive queue with guest
    > memory instead of host kernel memory so that you can get zero-copy
    > receive traffic. This will perform just as well as doing passthrough
    > (at least) and avoid all that ugliness of dealing with SR-IOV in the guest.


    This argues for ignoring the SR-IOV mess completely. Just have the
    host driver expose multiple 'ethN' devices.
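
    Something like this in the PF driver, with no pci_devs for the VFs at all (a sketch only: error unwinding is simplified and 'struct my_priv' is a stand-in for the driver's per-slice state):

        #include <linux/etherdevice.h>
        #include <linux/netdevice.h>

        /* register one netdev per VF's worth of hardware resources,
         * so the host simply sees eth0..ethN */
        static int pf_create_netdevs(struct pci_dev *pf, int nr)
        {
        	int i;

        	for (i = 0; i < nr; i++) {
        		struct net_device *nd =
        			alloc_etherdev(sizeof(struct my_priv));

        		if (!nd)
        			return -ENOMEM;
        		/* ... point this netdev at slice i's queues ... */
        		if (register_netdev(nd)) {
        			free_netdev(nd);
        			return -EIO;
        		}
        	}
        	return 0;
        }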

    > This eliminates all of the mess of various drivers in the guest and all
    > the associated baggage of doing hardware passthrough.
    >
    > So IMHO, having VFs be usable in the host is absolutely critical because
    > I think it's the only reasonable usage model.
    >
    > Regards,
    >
    > Anthony Liguori


    --
    Matthew Wilcox Intel Open Source Technology Centre
    "Bill, look, we understand that you're interested in selling us this
    operating system, but compare it to ours. We can't possibly take such
    a retrograde step."

  19. Re: git repository for SR-IOV development?

    On Thu, Nov 06, 2008 at 11:58:25AM -0800, H L wrote:
    > --- On Thu, 11/6/08, Greg KH wrote:
    >
    > > On Thu, Nov 06, 2008 at 08:51:09AM -0800, H L wrote:
    > > >
    > > > Has anyone initiated or given consideration to the creation of a git
    > > > repository (say, on kernel.org) for SR-IOV development?
    > >
    > > Why? It's only a few patches, right? Why would it need a whole new git
    > > tree?
    >
    > So as to minimize the time and effort of patching a kernel, especially
    > when the tree (and/or commit hash) against which the patches were
    > created isn't specified on the mailing list. Plus, there appear to be
    > open questions about how, precisely, the implementation should
    > ultimately be modeled, and given that, who knows how many patches will
    > ultimately be submitted? I know I've built the "7-patch" set
    > (painfully, by the way), and I'm aware there's another 15-patch set
    > out there which I've not yet examined.


    FWIW, the v6 patch series (this thread) applied to both 2.6.28-rc3
    and the current Linus tree after a minor tweak to the first patch, as below.

    --
    Simon Horman
    VA Linux Systems Japan K.K., Sydney, Australia Satellite Office
    H: www.vergenet.net/~horms/ W: www.valinux.co.jp/en

    From: Yu Zhao

    [PATCH 1/16 v6] PCI: remove unnecessary arg of pci_update_resource()

    This cleanup removes unnecessary argument 'struct resource *res' in
    pci_update_resource(), so it takes same arguments as other companion
    functions (pci_assign_resource(), etc.).

    Cc: Alex Chiang
    Cc: Grant Grundler
    Cc: Greg KH
    Cc: Ingo Molnar
    Cc: Jesse Barnes
    Cc: Matthew Wilcox
    Cc: Randy Dunlap
    Cc: Roland Dreier
    Signed-off-by: Yu Zhao
    Upported-by: Simon Horman

    ---
    drivers/pci/pci.c       |    4 ++--
    drivers/pci/setup-res.c |    7 ++++---
    include/linux/pci.h     |    2 +-
    3 files changed, 7 insertions(+), 6 deletions(-)

    * Fri, 07 Nov 2008 09:05:18 +1100, Simon Horman
    - Minor rediff of include/linux/pci.h section to apply to 2.6.28-rc3

    diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
    index 4db261e..ae62f01 100644
    --- a/drivers/pci/pci.c
    +++ b/drivers/pci/pci.c
    @@ -376,8 +376,8 @@ pci_restore_bars(struct pci_dev *dev)
     		return;
     	}
     
    -	for (i = 0; i < numres; i ++)
    -		pci_update_resource(dev, &dev->resource[i], i);
    +	for (i = 0; i < numres; i++)
    +		pci_update_resource(dev, i);
     }
     
     static struct pci_platform_pm_ops *pci_platform_pm;
    diff --git a/drivers/pci/setup-res.c b/drivers/pci/setup-res.c
    index 2dbd96c..b7ca679 100644
    --- a/drivers/pci/setup-res.c
    +++ b/drivers/pci/setup-res.c
    @@ -26,11 +26,12 @@
     #include "pci.h"
     
     
    -void pci_update_resource(struct pci_dev *dev, struct resource *res, int resno)
    +void pci_update_resource(struct pci_dev *dev, int resno)
     {
     	struct pci_bus_region region;
     	u32 new, check, mask;
     	int reg;
    +	struct resource *res = dev->resource + resno;
     
     	/*
     	 * Ignore resources for unimplemented BARs and unused resource slots
    @@ -162,7 +163,7 @@ int pci_assign_resource(struct pci_dev *dev, int resno)
     	} else {
     		res->flags &= ~IORESOURCE_STARTALIGN;
     		if (resno < PCI_BRIDGE_RESOURCES)
    -			pci_update_resource(dev, res, resno);
    +			pci_update_resource(dev, resno);
     	}
     
     	return ret;
    @@ -197,7 +198,7 @@ int pci_assign_resource_fixed(struct pci_dev *dev, int resno)
     		dev_err(&dev->dev, "BAR %d: can't allocate %s resource %pR\n",
     			resno, res->flags & IORESOURCE_IO ? "I/O" : "mem", res);
     	} else if (resno < PCI_BRIDGE_RESOURCES) {
    -		pci_update_resource(dev, res, resno);
    +		pci_update_resource(dev, resno);
     	}
     
     	return ret;
    diff --git a/include/linux/pci.h b/include/linux/pci.h
    index 085187b..43e1fc1 100644
    --- a/include/linux/pci.h
    +++ b/include/linux/pci.h
    @@ -626,7 +626,7 @@ int pcix_get_mmrbc(struct pci_dev *dev);
     int pcie_set_readrq(struct pci_dev *dev, int rq);
     int pci_reset_function(struct pci_dev *dev);
     int pci_execute_reset_function(struct pci_dev *dev);
    -void pci_update_resource(struct pci_dev *dev, struct resource *res, int resno);
    +void pci_update_resource(struct pci_dev *dev, int resno);
     int __must_check pci_assign_resource(struct pci_dev *dev, int i);
     int pci_select_bars(struct pci_dev *dev, unsigned long flags);

    --
    1.5.6.4


  20. Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

    * Greg KH (greg@kroah.com) wrote:
    > On Thu, Nov 06, 2008 at 10:47:41AM -0700, Matthew Wilcox wrote:
    > > On Thu, Nov 06, 2008 at 08:49:19AM -0800, Greg KH wrote:
    > > > On Thu, Nov 06, 2008 at 08:41:53AM -0800, H L wrote:
    > > > > I have not modified any existing drivers, but instead I threw together
    > > > > a bare-bones module enabling me to make a call to pci_iov_register()
    > > > > and then poke at an SR-IOV adapter's /sys entries for which no driver
    > > > > was loaded.
    > > > >
    > > > > It appears from my perusal thus far that drivers using these new
    > > > > SR-IOV patches will require modification; i.e. the driver associated
    > > > > with the Physical Function (PF) will be required to make the
    > > > > pci_iov_register() call along with the requisite notify() function.
    > > > > Essentially this suggests to me a model for the PF driver to perform
    > > > > any "global actions" or setup on behalf of VFs before enabling them
    > > > > after which VF drivers could be associated.
    > > >
    > > > Where would the VF drivers have to be associated? On the "pci_dev"
    > > > level or on a higher one?
    > > >
    > > > Will all drivers that want to bind to a "VF" device need to be
    > > > rewritten?
    > >
    > > The current model being implemented by my colleagues has separate
    > > drivers for the PF (aka native) and VF devices. I don't personally
    > > believe this is the correct path, but I'm reserving judgement until I
    > > see some code.
    >
    > Hm, I would like to see that code before we can properly evaluate this
    > interface. Especially as they are all tightly tied together.
    >
    > > I don't think we really know what the One True Usage model is for VF
    > > devices. Chris Wright has some ideas, I have some ideas and Yu Zhao has
    > > some ideas. I bet there's other people who have other ideas too.
    >
    > I'd love to hear those ideas.


    First there's the question of how to represent the VF on the host.
    Ideally (IMO) this would show up as a normal interface so that normal tools
    can configure the interface. This is not exactly how the first round of
    patches were designed.

    Second there's the question of reserving the BDF on the host such that
    we don't have two drivers (one in the host and one in a guest) trying to
    drive the same device (an issue that shows up for device assignment as
    well as VF assignment).

    Third there's the question of whether the VF can be used in the host at
    all.

    Fourth there's the question of whether the VF and PF drivers are the
    same or separate.

    The typical usecase is assigning the VF to the guest directly, so
    there's only enough functionality in the host side to allocate a VF,
    configure it, and assign it (and propagate AER). This is with separate
    PF and VF driver.

    As Anthony mentioned, we are interested in allowing the host to use the
    VF. This could be useful for containers as well as dedicating a VF (a
    set of device resources) to a guest w/out passing it through.

    thanks,
    -chris
