[PATCH] HP iLO driver - Kernel


Thread: [PATCH] HP iLO driver

  1. [PATCH] HP iLO driver

    A driver for the HP iLO/iLO2 management processor, which allows userspace
    programs to query the management processor. Programs can open a channel
    to the device (/dev/hpilo/dXccbN), and use this to send/receive queries.
    The O_EXCL open flag is used to indicate that a particular channel cannot
    be shared between processes. This driver will replace various packages
    HP has shipped, including hprsm and hp-ilo.
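
    A userspace program drives a channel with ordinary open/write/read calls.
    The snippet below is only an illustrative sketch (not shipped code): the
    d0ccb0 node follows the dXccbN pattern above, and the query payload is a
    placeholder, since the real packet format is defined by HP's management
    software rather than by this driver.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char req[64] = "example-query";  /* placeholder payload */
            char resp[4096];
            ssize_t n;
            int fd;

            /* O_EXCL asks the driver for exclusive use of this channel */
            fd = open("/dev/hpilo/d0ccb0", O_RDWR | O_EXCL);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            if (write(fd, req, sizeof(req)) < 0) {
                    perror("write");
                    close(fd);
                    return 1;
            }

            /* the driver retries briefly, then returns EAGAIN if no reply */
            n = read(fd, resp, sizeof(resp));
            if (n < 0)
                    perror("read");
            else
                    printf("received %zd bytes\n", n);

            close(fd);
            return 0;
    }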

    v1 -> v2
    Changed device path to /dev/hpilo/dXccbN.
    Removed a volatile from fifobar variable.
    Changed ILO_NAME to remove spaces.

    Please CC me on any replies, thanks for your time.

    Signed-off-by: David Altobelli
    ---
    Kconfig | 13 +
    Makefile | 1
    hpilo.c | 695 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    hpilo.h | 218 +++++++++++++++++++
    4 files changed, 927 insertions(+)
    diff -urpN linux-2.6.25.orig/drivers/char/hpilo.c linux-2.6.25/drivers/char/hpilo.c
    --- linux-2.6.25.orig/drivers/char/hpilo.c 1969-12-31 18:00:00.000000000 -0600
    +++ linux-2.6.25/drivers/char/hpilo.c 2008-06-16 08:56:31.000000000 -0500
    @@ -0,0 +1,695 @@
    +/*
    + * Driver for HP iLO/iLO2 management processor.
    + *
    + * Copyright (C) 2008 Hewlett-Packard Development Company, L.P.
    + * David Altobelli
    + *
    + * This program is free software; you can redistribute it and/or modify
    + * it under the terms of the GNU General Public License version 2 as
    + * published by the Free Software Foundation.
    + */
    +#include <linux/kernel.h>
    +#include <linux/types.h>
    +#include <linux/module.h>
    +#include <linux/fs.h>
    +#include <linux/pci.h>
    +#include <linux/ioport.h>
    +#include <linux/device.h>
    +#include <linux/file.h>
    +#include <linux/cdev.h>
    +#include <linux/spinlock.h>
    +#include <linux/delay.h>
    +#include <linux/uaccess.h>
    +#include <linux/io.h>
    +#include "hpilo.h"
    +
    +static struct class *ilo_class;
    +static unsigned int ilo_major;
    +static char ilo_hwdev[MAX_ILO_DEV];
    +
    +/*
    + * FIFO queues, shared with hardware.
    + *
    + * If a queue has empty slots, an entry is added to the queue tail,
    + * and that entry is marked as occupied.
    + * Entries can be dequeued from the head of the list, when the device
    + * has marked the entry as consumed.
    + *
    + * Returns true on successful queue/dequeue, false on failure.
    + */
    +static int fifo_enqueue(struct ilo_hwinfo *hw, char *fifobar, int entry)
    +{
    + struct fifo *Q = FIFOBARTOHANDLE(fifobar);
    + int ret = 0;
    +
    + spin_lock(&hw->fifo_lock);
    + if (!(Q->fifobar[(Q->tail + 1) & Q->imask] & ENTRY_MASK_O)) {
    + Q->fifobar[Q->tail & Q->imask] |=
    + ((entry & ENTRY_MASK_NOSTATE) | Q->merge);
    + Q->tail += 1;
    + ret = 1;
    + }
    + spin_unlock(&hw->fifo_lock);
    +
    + return ret;
    +}
    +
    +static int fifo_dequeue(struct ilo_hwinfo *hw, char *fifobar, int *entry)
    +{
    + struct fifo *Q = FIFOBARTOHANDLE(fifobar);
    + int ret = 0;
    + u64 c;
    +
    + spin_lock(&hw->fifo_lock);
    + c = Q->fifobar[Q->head & Q->imask];
    + if (c & ENTRY_MASK_C) {
    + if (entry)
    + *entry = c & ENTRY_MASK_NOSTATE;
    +
    + Q->fifobar[Q->head & Q->imask] = ((c | ENTRY_MASK) + 1);
    + Q->head += 1;
    + ret = 1;
    + }
    + spin_unlock(&hw->fifo_lock);
    +
    + return ret;
    +}
    +
    +static int ilo_pkt_enqueue(struct ilo_hwinfo *hw, struct ccb *ccb,
    + int dir, int id, int len)
    +{
    + char *fifobar;
    + int entry;
    +
    + if (dir == SENDQ)
    + fifobar = ccb->ccb_u1.send_fifobar;
    + else
    + fifobar = ccb->ccb_u3.recv_fifobar;
    +
    + entry = id << ENTRY_BITPOS_DESCRIPTOR |
    + QWORDS(len) << ENTRY_BITPOS_QWORDS;
    +
    + return fifo_enqueue(hw, fifobar, entry);
    +}
    +
    +static int ilo_pkt_dequeue(struct ilo_hwinfo *hw, struct ccb *ccb,
    + int dir, int *id, int *len, void **pkt)
    +{
    + char *fifobar, *desc;
    + int entry = 0, pkt_id = 0;
    + int ret;
    +
    + if (dir == SENDQ) {
    + fifobar = ccb->ccb_u1.send_fifobar;
    + desc = ccb->ccb_u2.send_desc;
    + } else {
    + fifobar = ccb->ccb_u3.recv_fifobar;
    + desc = ccb->ccb_u4.recv_desc;
    + }
    +
    + ret = fifo_dequeue(hw, fifobar, &entry);
    + if (ret) {
    + pkt_id = GETDESC(entry);
    + if (id)
    + *id = pkt_id;
    + if (len)
    + *len = GETQWORDS(entry) << 3;
    + if (pkt)
    + *pkt = (void *)(desc + DESC_MEM_SZ(pkt_id));
    + }
    +
    + return ret;
    +}
    +
    +static void ctrl_setup(struct ccb *ccb, int nr_desc, int l2desc_sz)
    +{
    + /* for simplicity, use the same parameters for send and recv ctrls */
    + CTRL_SET(ccb->send_ctrl, l2desc_sz, nr_desc-1, nr_desc-1, 0, 1);
    + CTRL_SET(ccb->recv_ctrl, l2desc_sz, nr_desc-1, nr_desc-1, 0, 1);
    +}
    +
    +static void fifo_setup(void *base_addr, int nr_entry)
    +{
    + struct fifo *Q = base_addr;
    + int i;
    +
    + /* set up an empty fifo */
    + Q->head = 0;
    + Q->tail = 0;
    + Q->reset = 0;
    + Q->nrents = nr_entry;
    + Q->imask = nr_entry - 1;
    + Q->merge = ENTRY_MASK_O;
    +
    + for (i = 0; i < nr_entry; i++)
    + Q->fifobar[i] = 0;
    +}
    +
    +static void ilo_ccb_close(struct pci_dev *pdev, struct ccb_data *data)
    +{
    + struct ccb *driver_ccb;
    + struct ccb __iomem *device_ccb;
    + int retries;
    +
    + driver_ccb = &data->driver_ccb;
    + device_ccb = data->mapped_ccb;
    +
    + /* complicated dance to tell the hw we are stopping */
    + DOORBELL_CLR(driver_ccb);
    + iowrite32(ioread32(&device_ccb->send_ctrl) & ~(1 << CTRL_BITPOS_G),
    + &device_ccb->send_ctrl);
    + iowrite32(ioread32(&device_ccb->recv_ctrl) & ~(1 << CTRL_BITPOS_G),
    + &device_ccb->recv_ctrl);
    +
    + /* give iLO some time to process stop request */
    + for (retries = 1000; retries > 0; retries--) {
    + DOORBELL_SET(driver_ccb);
    + udelay(1);
    + if (!(ioread32(&device_ccb->send_ctrl) & (1 << CTRL_BITPOS_A))
    + &&
    + !(ioread32(&device_ccb->recv_ctrl) & (1 << CTRL_BITPOS_A)))
    + break;
    + }
    + if (retries == 0)
    + dev_err(&pdev->dev, "Closing, but controller still active\n");
    +
    + /* clear the hw ccb */
    + memset_io(device_ccb, 0, sizeof(struct ccb));
    +
    + /* free resources used to back send/recv queues */
    + pci_free_consistent(pdev, data->dma_size, data->dma_va, data->dma_pa);
    +}
    +
    +static int ilo_ccb_open(struct ilo_hwinfo *hw, struct ccb_data *data, int slot)
    +{
    + char *dma_va, *dma_pa;
    + int pkt_id, pkt_sz, i, error;
    + struct ccb *driver_ccb, *ilo_ccb;
    + struct pci_dev *pdev;
    +
    + driver_ccb = &data->driver_ccb;
    + ilo_ccb = &data->ilo_ccb;
    + pdev = hw->ilo_dev;
    +
    + data->dma_size = 2 * FIFO_SZ(NR_QENTRY) +
    + 2 * DESC_MEM_SZ(NR_QENTRY) +
    + ILO_START_ALIGN + ILO_CACHE_SZ;
    +
    + error = -ENOMEM;
    + data->dma_va = pci_alloc_consistent(pdev, data->dma_size,
    + &data->dma_pa);
    + if (!data->dma_va)
    + goto out;
    +
    + dma_va = (char *)data->dma_va;
    + dma_pa = (char *)data->dma_pa;
    +
    + memset(dma_va, 0, data->dma_size);
    +
    + dma_va = (char *)roundup((unsigned long)dma_va, ILO_START_ALIGN);
    + dma_pa = (char *)roundup((unsigned long)dma_pa, ILO_START_ALIGN);
    +
    + /*
    + * Create two ccb's, one with virt addrs, one with phys addrs.
    + * Copy the phys addr ccb to device shared mem.
    + */
    + ctrl_setup(driver_ccb, NR_QENTRY, L2_QENTRY_SZ);
    + ctrl_setup(ilo_ccb, NR_QENTRY, L2_QENTRY_SZ);
    +
    + fifo_setup(dma_va, NR_QENTRY);
    + driver_ccb->ccb_u1.send_fifobar = (dma_va + FIFOHANDLESIZE);
    + ilo_ccb->ccb_u1.send_fifobar = (dma_pa + FIFOHANDLESIZE);
    + dma_va += FIFO_SZ(NR_QENTRY);
    + dma_pa += FIFO_SZ(NR_QENTRY);
    +
    + dma_va = (char *)roundup((unsigned long)dma_va, ILO_CACHE_SZ);
    + dma_pa = (char *)roundup((unsigned long)dma_pa, ILO_CACHE_SZ);
    +
    + fifo_setup(dma_va, NR_QENTRY);
    + driver_ccb->ccb_u3.recv_fifobar = (dma_va + FIFOHANDLESIZE);
    + ilo_ccb->ccb_u3.recv_fifobar = (dma_pa + FIFOHANDLESIZE);
    + dma_va += FIFO_SZ(NR_QENTRY);
    + dma_pa += FIFO_SZ(NR_QENTRY);
    +
    + driver_ccb->ccb_u2.send_desc = dma_va;
    + ilo_ccb->ccb_u2.send_desc = dma_pa;
    + dma_pa += DESC_MEM_SZ(NR_QENTRY);
    + dma_va += DESC_MEM_SZ(NR_QENTRY);
    +
    + driver_ccb->ccb_u4.recv_desc = dma_va;
    + ilo_ccb->ccb_u4.recv_desc = dma_pa;
    +
    + driver_ccb->channel = slot;
    + ilo_ccb->channel = slot;
    +
    + driver_ccb->ccb_u5.db_base = hw->db_vaddr + (slot << L2_DB_SIZE);
    + ilo_ccb->ccb_u5.db_base = NULL; /* hw ccb's doorbell is not used */
    +
    + /* copy the ccb with physical addrs to device memory */
    + data->mapped_ccb = (struct ccb __iomem *)
    + (hw->ram_vaddr + (slot * ILOHW_CCB_SZ));
    + memcpy_toio(data->mapped_ccb, ilo_ccb, sizeof(struct ccb));
    +
    + /* put packets on the send and receive queues */
    + pkt_sz = 0;
    + for (pkt_id = 0; pkt_id < NR_QENTRY; pkt_id++) {
    + ilo_pkt_enqueue(hw, driver_ccb, SENDQ, pkt_id, pkt_sz);
    + DOORBELL_SET(driver_ccb);
    + }
    +
    + pkt_sz = DESC_MEM_SZ(1);
    + for (pkt_id = 0; pkt_id < NR_QENTRY; pkt_id++)
    + ilo_pkt_enqueue(hw, driver_ccb, RECVQ, pkt_id, pkt_sz);
    +
    + DOORBELL_CLR(driver_ccb);
    +
    + /* make sure iLO is really handling requests */
    + for (i = 1000; i > 0; i--) {
    + if (ilo_pkt_dequeue(hw, driver_ccb, SENDQ, &pkt_id, NULL, NULL))
    + break;
    + udelay(1);
    + }
    +
    + if (i) {
    + ilo_pkt_enqueue(hw, driver_ccb, SENDQ, pkt_id, 0);
    + DOORBELL_SET(driver_ccb);
    + } else {
    + dev_err(&pdev->dev, "Open could not dequeue a packet\n");
    + error = -EBUSY;
    + goto free;
    + }
    +
    + return 0;
    +free:
    + pci_free_consistent(pdev, data->dma_size, data->dma_va, data->dma_pa);
    +out:
    + return error;
    +}
    +
    +static void ilo_locked_reset(struct ilo_hwinfo *hw)
    +{
    + int slot;
    +
    + /*
    + * Mapped memory is zeroed on ilo reset, so set a per ccb flag
    + * to indicate that this ccb needs to be closed and reopened.
    + */
    + for (slot = 0; slot < MAX_CCB; slot++) {
    + if (!hw->ccb_alloc[slot])
    + continue;
    + SET_CHANNEL_RESET(&hw->ccb_alloc[slot]->driver_ccb);
    + }
    +
    + CLEAR_DEVICE(hw);
    +}
    +
    +static void ilo_reset(struct ilo_hwinfo *hw)
    +{
    + spin_lock(&hw->alloc_lock);
    +
    + /* reset might have been handled after lock was taken */
    + if (IS_DEVICE_RESET(hw))
    + ilo_locked_reset(hw);
    +
    + spin_unlock(&hw->alloc_lock);
    +}
    +
    +static ssize_t ilo_read(struct file *fp, char __user *buf,
    + size_t len, loff_t *off)
    +{
    + int err, found, cnt, pkt_id, pkt_len;
    + struct ccb_data *data;
    + struct ccb *driver_ccb;
    + struct ilo_hwinfo *hw;
    + void *pkt;
    +
    + data = fp->private_data;
    + driver_ccb = &data->driver_ccb;
    + hw = data->ilo_hw;
    +
    + if (IS_DEVICE_RESET(hw) || IS_CHANNEL_RESET(driver_ccb)) {
    + /*
    + * If the device has been reset, applications
    + * need to close and reopen all ccbs.
    + */
    + ilo_reset(hw);
    + return -ENODEV;
    + }
    +
    + /*
    + * This function is to be called when data is expected
    + * in the channel, and will return an error if no packet is found
    + * during the loop below. The sleep/retry logic is to allow
    + * applications to call read() immediately post write(),
    + * and give iLO some time to process the sent packet.
    + */
    + cnt = 20;
    + do {
    + /* look for a received packet */
    + found = ilo_pkt_dequeue(hw, driver_ccb, RECVQ, &pkt_id,
    + &pkt_len, &pkt);
    + if (found)
    + break;
    + cnt--;
    + msleep(100);
    + } while (!found && cnt);
    +
    + if (!found)
    + return -EAGAIN;
    +
    + /* only copy the length of the received packet */
    + if (pkt_len < len)
    + len = pkt_len;
    +
    + err = copy_to_user(buf, pkt, len);
    +
    + /* return the received packet to the queue */
    + ilo_pkt_enqueue(hw, driver_ccb, RECVQ, pkt_id, DESC_MEM_SZ(1));
    +
    + return (err ? -EFAULT : len);
    +}
    +
    +static ssize_t ilo_write(struct file *fp, const char __user *buf,
    + size_t len, loff_t *off)
    +{
    + int err, pkt_id, pkt_len;
    + struct ccb_data *data;
    + struct ccb *driver_ccb;
    + struct ilo_hwinfo *hw;
    + void *pkt;
    +
    + data = fp->private_data;
    + driver_ccb = &data->driver_ccb;
    + hw = data->ilo_hw;
    +
    + if (IS_DEVICE_RESET(hw) || IS_CHANNEL_RESET(driver_ccb)) {
    + /*
    + * If the device has been reset, applications
    + * need to close and reopen all ccbs.
    + */
    + ilo_reset(hw);
    + return -ENODEV;
    + }
    +
    + /* get a packet to send the user command */
    + if (!ilo_pkt_dequeue(hw, driver_ccb, SENDQ, &pkt_id, &pkt_len, &pkt))
    + return -EBUSY;
    +
    + /* limit the length to the length of the packet */
    + if (pkt_len < len)
    + len = pkt_len;
    +
    + /* on failure, set the len to 0 to return empty packet to the device */
    + err = copy_from_user(pkt, buf, len);
    + if (err)
    + len = 0;
    +
    + /* send the packet */
    + ilo_pkt_enqueue(hw, driver_ccb, SENDQ, pkt_id, len);
    + DOORBELL_SET(driver_ccb);
    +
    + return (err ? -EFAULT : len);
    +}
    +
    +static int ilo_close(struct inode *ip, struct file *fp)
    +{
    + int slot;
    + struct ccb_data *data;
    + struct ilo_hwinfo *hw;
    +
    + slot = iminor(ip) % MAX_CCB;
    + hw = container_of(ip->i_cdev, struct ilo_hwinfo, cdev);
    +
    + spin_lock(&hw->alloc_lock);
    +
    + if (IS_DEVICE_RESET(hw))
    + ilo_locked_reset(hw);
    +
    + if (hw->ccb_alloc[slot]->ccb_cnt == 1) {
    +
    + data = fp->private_data;
    +
    + ilo_ccb_close(hw->ilo_dev, data);
    +
    + kfree(data);
    + hw->ccb_alloc[slot] = NULL;
    + } else
    + hw->ccb_alloc[slot]->ccb_cnt--;
    +
    + spin_unlock(&hw->alloc_lock);
    +
    + return 0;
    +}
    +
    +static int ilo_open(struct inode *ip, struct file *fp)
    +{
    + int slot, error;
    + struct ccb_data *data;
    + struct ilo_hwinfo *hw;
    +
    + slot = iminor(ip) % MAX_CCB;
    + hw = container_of(ip->i_cdev, struct ilo_hwinfo, cdev);
    +
    + spin_lock(&hw->alloc_lock);
    +
    + if (IS_DEVICE_RESET(hw))
    + ilo_locked_reset(hw);
    +
    + /* each fd private_data holds sw/hw view of ccb */
    + if (hw->ccb_alloc[slot] == NULL) {
    + /* new ccb allocation */
    + error = -ENOMEM;
    + data = kzalloc(sizeof(struct ccb_data), GFP_KERNEL);
    + if (!data)
    + goto out;
    +
    + /* create a channel control block for this minor */
    + error = ilo_ccb_open(hw, data, slot);
    + if (error)
    + goto free;
    +
    + hw->ccb_alloc[slot] = data;
    + hw->ccb_alloc[slot]->ccb_cnt = 1;
    + hw->ccb_alloc[slot]->ccb_excl = fp->f_flags & O_EXCL;
    + hw->ccb_alloc[slot]->ilo_hw = hw;
    + } else if (fp->f_flags & O_EXCL || hw->ccb_alloc[slot]->ccb_excl) {
    + /* either this open or a previous open wants exclusive access */
    + error = -EBUSY;
    + goto out;
    + } else
    + hw->ccb_alloc[slot]->ccb_cnt++;
    +
    + spin_unlock(&hw->alloc_lock);
    +
    + fp->private_data = hw->ccb_alloc[slot];
    +
    + return 0;
    +free:
    + kfree(data);
    +out:
    + spin_unlock(&hw->alloc_lock);
    + return error;
    +}
    +
    +static const struct file_operations ilo_fops = {
    + THIS_MODULE,
    + .read = ilo_read,
    + .write = ilo_write,
    + .open = ilo_open,
    + .release = ilo_close,
    +};
    +
    +static void ilo_unmap_device(struct pci_dev *pdev, struct ilo_hwinfo *hw)
    +{
    + pci_iounmap(pdev, hw->db_vaddr);
    + pci_iounmap(pdev, hw->ram_vaddr);
    + pci_iounmap(pdev, hw->mmio_vaddr);
    +}
    +
    +static int __devinit ilo_map_device(struct pci_dev *pdev, struct ilo_hwinfo *hw)
    +{
    + int error = -ENOMEM;
    +
    + /* map the memory mapped i/o registers */
    + hw->mmio_vaddr = pci_iomap(pdev, 1, 0);
    + if (hw->mmio_vaddr == NULL) {
    + dev_err(&pdev->dev, "Error mapping mmio\n");
    + goto out;
    + }
    +
    + /* map the adapter shared memory region */
    + hw->ram_vaddr = pci_iomap(pdev, 2, MAX_CCB * ILOHW_CCB_SZ);
    + if (hw->ram_vaddr == NULL) {
    + dev_err(&pdev->dev, "Error mapping shared mem\n");
    + goto mmio_free;
    + }
    +
    + /* map the doorbell aperture */
    + hw->db_vaddr = pci_iomap(pdev, 3, MAX_CCB * ONE_DB_SIZE);
    + if (hw->db_vaddr == NULL) {
    + dev_err(&pdev->dev, "Error mapping doorbell\n");
    + goto ram_free;
    + }
    +
    + return 0;
    +ram_free:
    + pci_iounmap(pdev, hw->ram_vaddr);
    +mmio_free:
    + pci_iounmap(pdev, hw->mmio_vaddr);
    +out:
    + return error;
    +}
    +
    +static void ilo_remove(struct pci_dev *pdev)
    +{
    + int i, minor;
    + struct ilo_hwinfo *ilo_hw = pci_get_drvdata(pdev);
    +
    + CLEAR_DEVICE(ilo_hw);
    +
    + minor = MINOR(ilo_hw->cdev.dev);
    + for (i = minor; i < minor + MAX_CCB; i++)
    + device_destroy(ilo_class, MKDEV(ilo_major, i));
    +
    + cdev_del(&ilo_hw->cdev);
    + ilo_unmap_device(pdev, ilo_hw);
    + pci_release_regions(pdev);
    + pci_disable_device(pdev);
    + kfree(ilo_hw);
    + ilo_hwdev[(minor / MAX_CCB)] = 0;
    +}
    +
    +static int __devinit ilo_probe(struct pci_dev *pdev,
    + const struct pci_device_id *ent)
    +{
    + int devnum, minor, start, error;
    + struct ilo_hwinfo *ilo_hw;
    +
    + /* find a free range for device files */
    + for (devnum = 0; devnum < MAX_ILO_DEV; devnum++) {
    + if (ilo_hwdev[devnum] == 0) {
    + ilo_hwdev[devnum] = 1;
    + break;
    + }
    + }
    +
    + if (devnum == MAX_ILO_DEV) {
    + dev_err(&pdev->dev, "Error finding free device\n");
    + return -ENODEV;
    + }
    +
    + /* track global allocations for this device */
    + error = -ENOMEM;
    + ilo_hw = kzalloc(sizeof(struct ilo_hwinfo), GFP_KERNEL);
    + if (!ilo_hw)
    + goto out;
    +
    + ilo_hw->ilo_dev = pdev;
    + spin_lock_init(&ilo_hw->alloc_lock);
    + spin_lock_init(&ilo_hw->fifo_lock);
    +
    + error = pci_enable_device(pdev);
    + if (error)
    + goto free;
    +
    + pci_set_master(pdev);
    +
    + error = pci_request_regions(pdev, ILO_NAME);
    + if (error)
    + goto disable;
    +
    + error = ilo_map_device(pdev, ilo_hw);
    + if (error)
    + goto free_regions;
    +
    + pci_set_drvdata(pdev, ilo_hw);
    + CLEAR_DEVICE(ilo_hw);
    +
    + cdev_init(&ilo_hw->cdev, &ilo_fops);
    + ilo_hw->cdev.owner = THIS_MODULE;
    + start = devnum * MAX_CCB;
    + error = cdev_add(&ilo_hw->cdev, MKDEV(ilo_major, start), MAX_CCB);
    + if (error) {
    + dev_err(&pdev->dev, "Could not add cdev\n");
    + goto unmap;
    + }
    +
    + for (minor = 0 ; minor < MAX_CCB; minor++) {
    + if (IS_ERR(device_create(ilo_class, &pdev->dev,
    + MKDEV(ilo_major, minor),
    + "hpilo!d%dccb%d", devnum, minor)))
    + dev_err(&pdev->dev, "Could not create files\n");
    + }
    +
    + return 0;
    +unmap:
    + ilo_unmap_device(pdev, ilo_hw);
    +free_regions:
    + pci_release_regions(pdev);
    +disable:
    + pci_disable_device(pdev);
    +free:
    + kfree(ilo_hw);
    +out:
    + ilo_hwdev[devnum] = 0;
    + return error;
    +}
    +
    +static struct pci_device_id ilo_devices[] = {
    + { PCI_DEVICE(PCI_VENDOR_ID_COMPAQ, 0xB204) },
    + { }
    +};
    +MODULE_DEVICE_TABLE(pci, ilo_devices);
    +
    +static struct pci_driver ilo_driver = {
    + .name = ILO_NAME,
    + .id_table = ilo_devices,
    + .probe = ilo_probe,
    + .remove = __devexit_p(ilo_remove),
    +};
    +
    +static int __init ilo_init(void)
    +{
    + int error;
    + dev_t dev;
    +
    + ilo_class = class_create(THIS_MODULE, "iLO");
    + if (IS_ERR(ilo_class)) {
    + error = PTR_ERR(ilo_class);
    + goto out;
    + }
    +
    + error = alloc_chrdev_region(&dev, 0, MAX_OPEN, ILO_NAME);
    + if (error)
    + goto class_destroy;
    +
    + ilo_major = MAJOR(dev);
    +
    + error = pci_register_driver(&ilo_driver);
    + if (error)
    + goto chr_remove;
    +
    + return 0;
    +chr_remove:
    + unregister_chrdev_region(dev, MAX_OPEN);
    +class_destroy:
    + class_destroy(ilo_class);
    +out:
    + return error;
    +}
    +
    +static void __exit ilo_exit(void)
    +{
    + pci_unregister_driver(&ilo_driver);
    + unregister_chrdev_region(MKDEV(ilo_major, 0), MAX_OPEN);
    + class_destroy(ilo_class);
    +}
    +
    +MODULE_VERSION("0.02");
    +MODULE_ALIAS(ILO_NAME);
    +MODULE_DESCRIPTION(ILO_NAME);
    +MODULE_AUTHOR("David Altobelli ");
    +MODULE_LICENSE("GPL v2");
    +
    +module_init(ilo_init);
    +module_exit(ilo_exit);
    diff -urpN linux-2.6.25.orig/drivers/char/hpilo.h linux-2.6.25/drivers/char/hpilo.h
    --- linux-2.6.25.orig/drivers/char/hpilo.h 1969-12-31 18:00:00.000000000 -0600
    +++ linux-2.6.25/drivers/char/hpilo.h 2008-06-16 08:56:31.000000000 -0500
    @@ -0,0 +1,218 @@
    +/*
    + * linux/drivers/char/hpilo.h
    + *
    + * Copyright (C) 2008 Hewlett-Packard Development Company, L.P.
    + * David Altobelli
    + *
    + * This program is free software; you can redistribute it and/or modify
    + * it under the terms of the GNU General Public License version 2 as
    + * published by the Free Software Foundation.
    + */
    +#ifndef __HPILO_H
    +#define __HPILO_H
    +
    +#define ILO_NAME "hpilo"
    +
    +/* max number of open channel control blocks per device, hw limited to 32 */
    +#define MAX_CCB 8
    +/* max number of supported devices */
    +#define MAX_ILO_DEV 1
    +/* max number of files */
    +#define MAX_OPEN MAX_CCB * MAX_ILO_DEV
    +
    +/*
    + * Per device, used to track global memory allocations.
    + */
    +struct ilo_hwinfo {
    + /* mmio registers on device */
    + char __iomem *mmio_vaddr;
    +
    + /* doorbell registers on device */
    + char __iomem *db_vaddr;
    +
    + /* shared memory on device used for channel control blocks */
    + char __iomem *ram_vaddr;
    +
    + /* files corresponding to this device */
    + struct ccb_data *ccb_alloc[MAX_CCB];
    +
    + struct pci_dev *ilo_dev;
    +
    + spinlock_t alloc_lock;
    + spinlock_t fifo_lock;
    +
    + struct cdev cdev;
    +};
    +
    +/* offset from mmio_vaddr */
    +#define DB_OUT 0xD4
    +/* check for global reset condition */
    +#define IS_DEVICE_RESET(hw) (ioread32(&(hw)->mmio_vaddr[DB_OUT]) & (1 << 26))
    +/* clear the device (reset bits, pending channel entries) */
    +#define CLEAR_DEVICE(hw) (iowrite32(-1, &(hw)->mmio_vaddr[DB_OUT]))
    +
    +/*
    + * Channel control block. Used to manage hardware queues.
    + * The format must match hw's version. The hw ccb is 128 bytes,
    + * but the context area shouldn't be touched by the driver.
    + */
    +#define ILOSW_CCB_SZ 64
    +#define ILOHW_CCB_SZ 128
    +struct ccb {
    + union {
    + char *send_fifobar;
    + u64 padding1;
    + } ccb_u1;
    + union {
    + char *send_desc;
    + u64 padding2;
    + } ccb_u2;
    + u64 send_ctrl;
    +
    + union {
    + char *recv_fifobar;
    + u64 padding3;
    + } ccb_u3;
    + union {
    + char *recv_desc;
    + u64 padding4;
    + } ccb_u4;
    + u64 recv_ctrl;
    +
    + union {
    + char __iomem *db_base;
    + u64 padding5;
    + } ccb_u5;
    +
    + u64 channel;
    +
    + /* unused context area (64 bytes) */
    +};
    +
    +/* ccb queue parameters */
    +#define SENDQ 1
    +#define RECVQ 2
    +#define NR_QENTRY 4
    +#define L2_QENTRY_SZ 12
    +#define DESC_MEM_SZ(_descs) ((_descs) << L2_QENTRY_SZ)
    +
    +/* ccb ctrl bitfields */
    +#define CTRL_BITPOS_L2SZ 0
    +#define CTRL_BITPOS_FIFOINDEXMASK 4
    +#define CTRL_BITPOS_DESCLIMIT 18
    +#define CTRL_BITPOS_A 30
    +#define CTRL_BITPOS_G 31
    +
    +#define CTRL_SET(_c, _l, _f, _d, _a, _g) \
    + ((_c) = \
    + (((_l) << CTRL_BITPOS_L2SZ) |\
    + ((_f) << CTRL_BITPOS_FIFOINDEXMASK) |\
    + ((_d) << CTRL_BITPOS_DESCLIMIT) |\
    + ((_a) << CTRL_BITPOS_A) |\
    + ((_g) << CTRL_BITPOS_G)))
    +
    +/* ccb doorbell macros */
    +#define L2_DB_SIZE 14
    +#define ONE_DB_SIZE (1 << L2_DB_SIZE)
    +#define DOORBELL_SET(_ccb) (iowrite8(1, (_ccb)->ccb_u5.db_base))
    +#define DOORBELL_CLR(_ccb) (iowrite8(2, (_ccb)->ccb_u5.db_base))
    +
    +/*
    + * Per fd structure used to track the ccb allocated to that dev file.
    + */
    +struct ccb_data {
    + /* software version of ccb, using virtual addrs */
    + struct ccb driver_ccb;
    +
    + /* hardware version of ccb, using physical addrs */
    + struct ccb ilo_ccb;
    +
    + /* hardware ccb is written to this shared mapped device memory */
    + struct ccb __iomem *mapped_ccb;
    +
    + /* dma'able memory used for send/recv queues */
    + void *dma_va;
    + dma_addr_t dma_pa;
    + size_t dma_size;
    +
    + /* pointer to hardware device info */
    + struct ilo_hwinfo *ilo_hw;
    +
    + /* usage count, to allow for shared ccb's */
    + int ccb_cnt;
    +
    + /* open wanted exclusive access to this ccb */
    + int ccb_excl;
    +};
    +
    +/*
    + * FIFO queue structure, shared with hw.
    + */
    +#define ILO_START_ALIGN 4096
    +#define ILO_CACHE_SZ 128
    +struct fifo {
    + u64 nrents; /* user requested number of fifo entries */
    + u64 imask; /* mask to extract valid fifo index */
    + u64 merge; /* O/C bits to merge in during enqueue operation */
    + u64 reset; /* set to non-zero when the target device resets */
    + u8 pad_0[ILO_CACHE_SZ - (sizeof(u64) * 4)];
    +
    + u64 head;
    + u8 pad_1[ILO_CACHE_SZ - (sizeof(u64))];
    +
    + u64 tail;
    + u8 pad_2[ILO_CACHE_SZ - (sizeof(u64))];
    +
    + u64 fifobar[1];
    +};
    +
    +/* convert between struct fifo, and the fifobar, which is saved in the ccb */
    +#define FIFOHANDLESIZE (sizeof(struct fifo) - sizeof(u64))
    +#define FIFOBARTOHANDLE(_fifo) \
    + ((struct fifo *)(((char *)(_fifo)) - FIFOHANDLESIZE))
    +
    +/* set a flag indicating this channel needs a reset */
    +#define SET_CHANNEL_RESET(_ccb) \
    + (FIFOBARTOHANDLE((_ccb)->ccb_u1.send_fifobar)->reset = 1)
    +
    +/* check for this particular channel needing a reset */
    +#define IS_CHANNEL_RESET(_ccb) \
    + (FIFOBARTOHANDLE((_ccb)->ccb_u1.send_fifobar)->reset)
    +
    +/* overall size of a fifo is determined by the number of entries it contains */
    +#define FIFO_SZ(_num) (((_num)*sizeof(u64)) + FIFOHANDLESIZE)
    +
    +/* the number of qwords to consume from the entry descriptor */
    +#define ENTRY_BITPOS_QWORDS 0
    +/* descriptor index number (within a specified queue) */
    +#define ENTRY_BITPOS_DESCRIPTOR 10
    +/* state bit, fifo entry consumed by consumer */
    +#define ENTRY_BITPOS_C 22
    +/* state bit, fifo entry is occupied */
    +#define ENTRY_BITPOS_O 23
    +
    +#define ENTRY_BITS_QWORDS 10
    +#define ENTRY_BITS_DESCRIPTOR 12
    +#define ENTRY_BITS_C 1
    +#define ENTRY_BITS_O 1
    +#define ENTRY_BITS_TOTAL \
    + (ENTRY_BITS_C + ENTRY_BITS_O + \
    + ENTRY_BITS_QWORDS + ENTRY_BITS_DESCRIPTOR)
    +
    +/* extract various entry fields */
    +#define ENTRY_MASK ((1 << ENTRY_BITS_TOTAL) - 1)
    +#define ENTRY_MASK_C (((1 << ENTRY_BITS_C) - 1) << ENTRY_BITPOS_C)
    +#define ENTRY_MASK_O (((1 << ENTRY_BITS_O) - 1) << ENTRY_BITPOS_O)
    +#define ENTRY_MASK_QWORDS \
    + (((1 << ENTRY_BITS_QWORDS) - 1) << ENTRY_BITPOS_QWORDS)
    +#define ENTRY_MASK_DESCRIPTOR \
    + (((1 << ENTRY_BITS_DESCRIPTOR) - 1) << ENTRY_BITPOS_DESCRIPTOR)
    +
    +#define ENTRY_MASK_NOSTATE (ENTRY_MASK >> (ENTRY_BITS_C + ENTRY_BITS_O))
    +
    +#define GETQWORDS(_e) (((_e) & ENTRY_MASK_QWORDS) >> ENTRY_BITPOS_QWORDS)
    +#define GETDESC(_e) (((_e) & ENTRY_MASK_DESCRIPTOR) >> ENTRY_BITPOS_DESCRIPTOR)
    +
    +#define QWORDS(_B) (((_B) & 7) ? (((_B) >> 3) + 1) : ((_B) >> 3))
    +
    +#endif /* __HPILO_H */
    diff -urpN linux-2.6.25.orig/drivers/char/Kconfig linux-2.6.25/drivers/char/Kconfig
    --- linux-2.6.25.orig/drivers/char/Kconfig 2008-04-30 16:42:55.000000000 -0500
    +++ linux-2.6.25/drivers/char/Kconfig 2008-06-16 08:23:36.000000000 -0500
    @@ -1049,5 +1049,18 @@ config DEVPORT

    source "drivers/s390/char/Kconfig"

    +config HP_ILO
    + tristate "Channel interface driver for HP iLO/iLO2 processor"
    + default n
    + help
    + The channel interface driver allows applications to communicate
    + with iLO/iLO2 management processors present on HP ProLiant
    + servers. Upon loading, the driver creates /dev/hpilo/dXccbN files,
    + which can be used to gather data from the management processor,
    + via read and write system calls.
    +
    + To compile this driver as a module, choose M here: the
    + module will be called hpilo.
    +
    endmenu

    diff -urpN linux-2.6.25.orig/drivers/char/Makefile linux-2.6.25/drivers/char/Makefile
    --- linux-2.6.25.orig/drivers/char/Makefile 2008-04-30 16:42:56.000000000 -0500
    +++ linux-2.6.25/drivers/char/Makefile 2008-05-22 16:22:28.000000000 -0500
    @@ -97,6 +97,7 @@ obj-$(CONFIG_CS5535_GPIO) += cs5535_gpio
    obj-$(CONFIG_GPIO_VR41XX) += vr41xx_giu.o
    obj-$(CONFIG_GPIO_TB0219) += tb0219.o
    obj-$(CONFIG_TELCLOCK) += tlclk.o
    +obj-$(CONFIG_HP_ILO) += hpilo.o

    obj-$(CONFIG_MWAVE) += mwave/
    obj-$(CONFIG_AGP) += agp/

  2. Re: [PATCH] HP iLO driver

    On Mon, 16 Jun 2008 08:07:29 -0600 David Altobelli wrote:

    > A driver for the HP iLO/iLO2 management processor, which allows userspace
    > programs to query the management processor. Programs can open a channel
    > to the device (/dev/hpilo/dXccbN), and use this to send/receive queries.
    > The O_EXCL open flag is used to indicate that a particular channel cannot
    > be shared between processes. This driver will replace various packages
    > HP has shipped, including hprsm and hp-ilo.
    >
    > v1 -> v2
    > Changed device path to /dev/hpilo/dXccbN.
    > Removed a volatile from fifobar variable.
    > Changed ILO_NAME to remove spaces.
    >
    > Please CC me on any replies, thanks for your time.
    >
    >
    > ...
    >
    > +static int ilo_open(struct inode *ip, struct file *fp)
    > +{
    > + int slot, error;
    > + struct ccb_data *data;
    > + struct ilo_hwinfo *hw;
    > +
    > + slot = iminor(ip) % MAX_CCB;
    > + hw = container_of(ip->i_cdev, struct ilo_hwinfo, cdev);
    > +
    > + spin_lock(&hw->alloc_lock);
    > +
    > + if (IS_DEVICE_RESET(hw))
    > + ilo_locked_reset(hw);
    > +
    > + /* each fd private_data holds sw/hw view of ccb */
    > + if (hw->ccb_alloc[slot] == NULL) {
    > + /* new ccb allocation */
    > + error = -ENOMEM;
    > + data = kzalloc(sizeof(struct ccb_data), GFP_KERNEL);


    Doing a GFP_KERNEL allocation inside spin_lock() is a flagrant bug
    which would have been detected had you enabled
    CONFIG_DEBUG_SPINLOCK_SLEEP (which ia64 appears to support (if this is
    an ia64-only driver?)) while testing.

    Please see Documentation/SubmitChecklist

    Using GFP_ATOMIC would be a poor solution for this - it is unreliable.
    Better would be to speculatively perform the allocation outside the
    spinlocked region and then toss it away if you didn't use it:

    p = kmalloc(size, GFP_KERNEL);
    spin_lock();
    if (expr) {
            q = p;
            p = NULL;
    }
    spin_unlock();
    kfree(p);

    > + if (!data)
    > + goto out;
    > +
    > + /* create a channel control block for this minor */
    > + error = ilo_ccb_open(hw, data, slot);


    That's large but _looks_ OK for called-under-spinlock.

    > + if (error)
    > + goto free;
    > +
    > + hw->ccb_alloc[slot] = data;
    > + hw->ccb_alloc[slot]->ccb_cnt = 1;
    > + hw->ccb_alloc[slot]->ccb_excl = fp->f_flags & O_EXCL;
    > + hw->ccb_alloc[slot]->ilo_hw = hw;
    > + } else if (fp->f_flags & O_EXCL || hw->ccb_alloc[slot]->ccb_excl) {
    > + /* either this open or a previous open wants exclusive access */
    > + error = -EBUSY;
    > + goto out;
    > + } else
    > + hw->ccb_alloc[slot]->ccb_cnt++;
    > +
    > + spin_unlock(&hw->alloc_lock);
    > +
    > + fp->private_data = hw->ccb_alloc[slot];
    > +
    > + return 0;
    > +free:
    > + kfree(data);
    > +out:
    > + spin_unlock(&hw->alloc_lock);
    > + return error;
    > +}
    > +
    > +static const struct file_operations ilo_fops = {
    > + THIS_MODULE,


    .owner = THIS_MODULE

    > + .read = ilo_read,
    > + .write = ilo_write,
    > + .open = ilo_open,
    > + .release = ilo_close,
    > +};
    > +
    > +static void ilo_unmap_device(struct pci_dev *pdev, struct ilo_hwinfo *hw)
    > +{
    > + pci_iounmap(pdev, hw->db_vaddr);
    > + pci_iounmap(pdev, hw->ram_vaddr);
    > + pci_iounmap(pdev, hw->mmio_vaddr);
    > +}
    >
    > ...
    >
    > +#define ENTRY_MASK_QWORDS \
    > + (((1 << ENTRY_BITS_QWORDS) - 1) << ENTRY_BITPOS_QWORDS)
    > +#define ENTRY_MASK_DESCRIPTOR \
    > + (((1 << ENTRY_BITS_DESCRIPTOR) - 1) << ENTRY_BITPOS_DESCRIPTOR)
    > +
    > +#define ENTRY_MASK_NOSTATE (ENTRY_MASK >> (ENTRY_BITS_C + ENTRY_BITS_O))


    hm, I suppose these make sense as macros.

    > +#define GETQWORDS(_e) (((_e) & ENTRY_MASK_QWORDS) >> ENTRY_BITPOS_QWORDS)
    > +#define GETDESC(_e) (((_e) & ENTRY_MASK_DESCRIPTOR) >> ENTRY_BITPOS_DESCRIPTOR)


    But these could be lower-case inline functions. Why use pretend
    functions when we can use real ones?

    > +#define QWORDS(_B) (((_B) & 7) ? (((_B) >> 3) + 1) : ((_B) >> 3))


    And this one is buggy - will do weird things if passed an expression
    which has side-effects.

    General rule: only implement code within macros when macros MUST be used.
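
    For instance, the two getters and QWORDS() could become inline functions;
    this is only an illustrative sketch, and the lower-case names are not part
    of the patch:

    static inline int get_qwords(int entry)
    {
            return (entry & ENTRY_MASK_QWORDS) >> ENTRY_BITPOS_QWORDS;
    }

    static inline int get_desc(int entry)
    {
            return (entry & ENTRY_MASK_DESCRIPTOR) >> ENTRY_BITPOS_DESCRIPTOR;
    }

    /* rounds a byte count up to whole qwords; the argument is evaluated once */
    static inline int qwords(int bytes)
    {
            return (bytes + 7) >> 3;
    }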

    > +#endif /* __HPILO_H */
    > diff -urpN linux-2.6.25.orig/drivers/char/Kconfig linux-2.6.25/drivers/char/Kconfig
    > --- linux-2.6.25.orig/drivers/char/Kconfig 2008-04-30 16:42:55.000000000 -0500
    > +++ linux-2.6.25/drivers/char/Kconfig 2008-06-16 08:23:36.000000000 -0500
    > @@ -1049,5 +1049,18 @@ config DEVPORT
    >
    > source "drivers/s390/char/Kconfig"
    >
    > +config HP_ILO
    > + tristate "Channel interface driver for HP iLO/iLO2 processor"
    > + default n
    > + help
    > + The channel interface driver allows applications to communicate
    > + with iLO/iLO2 management processors present on HP ProLiant
    > + servers. Upon loading, the driver creates /dev/hpilo/dXccbN files,
    > + which can be used to gather data from the management processor,
    > + via read and write system calls.
    > +
    > + To compile this driver as a module, choose M here: the
    > + module will be called hpilo.
    > +
    > endmenu


    OK, so this is available on all architectures?

    There are pros and cons. No avr32 user is likely to use this driver,
    but otoh having it compiled on other architectures can expose problems.



  3. RE: [PATCH] HP iLO driver

    Andrew Morton wrote:
    >
    > Doing a GFP_KERNEL allocation inside spin_lock() is a flagrant bug
    > which would have been detected had you enabled
    > CONFIG_DEBUG_SPINLOCK_SLEEP (which ia64 appears to support (if this is
    > an ia64-only driver?)) while testing.
    >
    > Please see Documentation/SubmitChecklist


    Oops, sorry. Right now, this is basically an x86 only driver.

    > Using GFP_ATOMIC would be a poor solution for this - it is unreliable.
    > Better would be to speculatively perform the allocation outside the
    > spinlocked region and then toss it away if you didn't use it:


    Will do.
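
    As an illustration only (an untested sketch, not the code posted above),
    the open path could allocate before taking the lock and discard the buffer
    if it turns out not to be needed:

    static int ilo_open(struct inode *ip, struct file *fp)
    {
            int slot, error = 0;
            struct ccb_data *data;
            struct ilo_hwinfo *hw;

            slot = iminor(ip) % MAX_CCB;
            hw = container_of(ip->i_cdev, struct ilo_hwinfo, cdev);

            /* speculative allocation, done before the lock is taken */
            data = kzalloc(sizeof(struct ccb_data), GFP_KERNEL);
            if (!data)
                    return -ENOMEM;

            spin_lock(&hw->alloc_lock);

            if (IS_DEVICE_RESET(hw))
                    ilo_locked_reset(hw);

            if (hw->ccb_alloc[slot] == NULL) {
                    /* new ccb allocation, ownership of 'data' moves to the slot */
                    error = ilo_ccb_open(hw, data, slot);
                    if (!error) {
                            data->ccb_cnt = 1;
                            data->ccb_excl = fp->f_flags & O_EXCL;
                            data->ilo_hw = hw;
                            hw->ccb_alloc[slot] = data;
                            data = NULL;
                    }
            } else if (fp->f_flags & O_EXCL || hw->ccb_alloc[slot]->ccb_excl) {
                    /* either this open or a previous open wants exclusive access */
                    error = -EBUSY;
            } else {
                    hw->ccb_alloc[slot]->ccb_cnt++;
            }

            if (!error)
                    fp->private_data = hw->ccb_alloc[slot];

            spin_unlock(&hw->alloc_lock);

            /* no-op if the allocation was handed over above */
            kfree(data);

            return error;
    }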

    > OK, so this is available on all architectures?
    >
    > There are pros and cons. No avr32 user is likely to use this driver
    > , but otoh having it compiled on other architectures can expose
    > problems.


    My hope was to write code that would work across architectures,
    but the device is currently shipped only on x86.
