NanoPi NEO Air, Part 15: Driving a USB Camera with V4L2




A first look at USB cameras

  The Linux UVC driver (uvcvideo) supports camera devices that conform to the USB Video Class (UVC) specification; it consists of a V4L2 kernel device driver plus patches for user-space tools. Most mass-storage devices (such as USB flash drives) follow a common USB specification, so a single driver can handle all of them. In the same way, UVC-compliant peripherals only need one generic driver.
  USB cameras fall roughly into two groups: UVC cameras and non-UVC cameras. UVC cameras are the recommended choice: UVC is an open standard with a well-maintained driver that is part of the mainline kernel. Non-UVC cameras generally do not work any better than UVC ones; their drivers do not follow a common protocol, so each camera model needs its own handling. The Linux kernel already ships with drivers for the common cameras, so almost any camera you buy should work for testing.

How to determine the camera type
  The simplest way to determine the camera type is to check the USB camera's hardware IDs, which consist of a VID (vendor ID) and a PID (product ID). On Windows they can be found in Device Manager, as shown below:
[Figure: Device Manager showing the camera's VID and PID]
  On Linux, just run lsusb:
[Figure: lsusb output showing the camera's VID:PID]

  In this example the VID and PID are 1871:0141. You can look these IDs up in the Linux UVC driver's supported-devices list to check whether the camera supports UVC; if it is not listed, search the web or ask the seller.

The V4L2 kernel device driver

What is V4L2

  V4L2 stands for Video for Linux Two API Specification. It is the Linux kernel subsystem for video devices and gives video drivers a unified interface, so applications can operate different video devices through the same API functions, which greatly simplifies developing and maintaining video systems.

V4L2 supports several kinds of devices and exposes the following interfaces:
(1) Video capture interface: for devices such as TV tuners or cameras. This is the use case V4L2 was originally designed for.
(2) Video output interface: drives peripheral video devices and can output video to devices that accept a TV-signal format.
(3) Video overlay interface: passes the signal from a capture device straight to an output device without going through the CPU.
(4) VBI interface (vertical blanking interval): gives applications access to data transmitted during the blanking interval.
(5) Radio interface: handles audio streams received from AM or FM tuner devices.

Using V4L2

Programming a V4L2 device usually involves the following steps:

1. Open the device file

fd = open (dev_name, O_RDWR | O_NONBLOCK, 0); // open the device

2. Query the device capabilities

int ioctl(int fd, int request, struct v4l2_capability *argp);  // request = VIDIOC_QUERYCAP

struct v4l2_capability is defined as follows:
struct v4l2_capability
{
    __u8  driver[16];    // driver name
    __u8  card[32];      // device name
    __u8  bus_info[32];  // location of the device on the system bus
    __u32 version;       // driver version
    __u32 capabilities;  // operations the device supports -- the field we care about most
    __u32 reserved[4];
};
The capability flags are defined as follows:

V4L2_CAP_VIDEO_CAPTURE  0x00000001 //The device supports the Video Capture interface.
V4L2_CAP_VIDEO_OUTPUT   0x00000002  //The device supports the Video Output interface.
V4L2_CAP_VIDEO_OVERLAY  0x00000004  //The device supports the Video Overlay interface. A video overlay device typically stores captured images directly in the video memory of a graphics card, with hardware clipping and scaling.
V4L2_CAP_VBI_CAPTURE    0x00000010  //The device supports the Raw VBI Capture interface, providing Teletext and Closed Caption data.
V4L2_CAP_VBI_OUTPUT 0x00000020  //The device supports the Raw VBI Output interface.
V4L2_CAP_SLICED_VBI_CAPTURE 0x00000040  //The device supports the Sliced VBI Capture interface.
V4L2_CAP_SLICED_VBI_OUTPUT  0x00000080  //The device supports the Sliced VBI Output interface.
V4L2_CAP_RDS_CAPTURE    0x00000100  //The device supports the RDS interface.
V4L2_CAP_VIDEO_OUTPUT_OVERLAY   0x00000200  //The device supports the Video Output Overlay (OSD) interface. Unlike the Video Overlay interface, this is a secondary function of video output devices and overlays an image onto an outgoing video signal. When the driver sets this flag, it must clear the V4L2_CAP_VIDEO_OVERLAY flag and vice versa.[a]
V4L2_CAP_HW_FREQ_SEEK   0x00000400  //The device supports the VIDIOC_S_HW_FREQ_SEEK ioctl for hardware frequency seeking.
V4L2_CAP_TUNER  0x00010000  //The device has some sort of tuner to receive RF-modulated video signals. For more information about tuner programming see Section 1.6, “Tuners and Modulators”.
V4L2_CAP_AUDIO  0x00020000  //The device has audio inputs or outputs. It may or may not support audio recording or playback, in PCM or compressed formats. PCM audio support must be implemented as ALSA or OSS interface. For more information on audio inputs and outputs see Section 1.5, “Audio Inputs and Outputs”.
V4L2_CAP_RADIO  0x00040000  //This is a radio receiver.
V4L2_CAP_MODULATOR  0x00080000  //The device has some sort of modulator to emit RF-modulated video/audio signals. For more information about modulator programming see Section 1.6, “Tuners and Modulators”.
V4L2_CAP_READWRITE  0x01000000  //The device supports the read() and/or write() I/O methods.
V4L2_CAP_ASYNCIO    0x02000000  //The device supports the asynchronous I/O methods.
V4L2_CAP_STREAMING  0x04000000  //The device supports the streaming I/O method. 

The flags we care about most are V4L2_CAP_VIDEO_CAPTURE and V4L2_CAP_STREAMING.
The check looks like this:

struct v4l2_capability cap;

if (-1 == xioctl (fd, VIDIOC_QUERYCAP, &cap)) {
    if (EINVAL == errno) {
        fprintf (stderr, "%s is no V4L2 device/n",dev_name);
        exit (EXIT_FAILURE);
    } else {
        errno_exit ("VIDIOC_QUERYCAP");
    }
}

if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)) {
    fprintf (stderr, "%s is no video capture device\n",dev_name);
    exit (EXIT_FAILURE);
}

if (!(cap.capabilities & V4L2_CAP_STREAMING)) {
    fprintf (stderr, "%s does not support streaming i/o\n",dev_name);
    exit (EXIT_FAILURE);
}

3. Query the format

//query the current data format from the driver
struct v4l2_format format;
memset(&format,0,sizeof(format));
format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
if (-1 == ioctl(fd,VIDIOC_G_FMT,&format))
{
    perror("VIDIOC_G_FMT(VIDEO_CAPTURE)");
    return -1;
}
//struct v4l2_format is defined as follows:
struct v4l2_format
{
    enum v4l2_buf_type type;
    union
    {
        struct v4l2_pix_format pix;
        struct v4l2_window win;
        struct v4l2_vbi_format vbi;
        struct v4l2_sliced_vbi_format sliced;
        __u8 raw_data[200];
    } fmt;
};

//v4l2_buf_type type: filled in by the caller to say which kind of interface is being addressed;
//it corresponds to the buffer types handled in the driver.
enum v4l2_buf_type
{
    V4L2_BUF_TYPE_VIDEO_CAPTURE,
    V4L2_BUF_TYPE_VIDEO_OUTPUT,
    V4L2_BUF_TYPE_VIDEO_OVERLAY,
    V4L2_BUF_TYPE_VBI_CAPTURE,
    V4L2_BUF_TYPE_VBI_OUTPUT,
    V4L2_BUF_TYPE_SLICED_VBI_CAPTURE,
    V4L2_BUF_TYPE_SLICED_VBI_OUTPUT,
    V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY,
    V4L2_BUF_TYPE_PRIVATE,
};
//The most important member is struct v4l2_pix_format pix -- and the place where beginners most often go wrong!
struct v4l2_pix_format
{
    __u32 width;            // frame width
    __u32 height;           // frame height
    __u32 pixelformat;      // pixel format, e.g. V4L2_PIX_FMT_YUYV, V4L2_PIX_FMT_RGB332
    enum v4l2_field field;  // whether the image data is progressive or interlaced
    __u32 bytesperline;     // bytes per line; can be derived from width, height and pixelformat
    __u32 sizeimage;        // bytes per frame; also derivable, but only together with the field information
    enum v4l2_colorspace colorspace;
    __u32 priv;
};

pixelformat: there are well over a dozen pixel formats, and no driver can output all of them. To parse the camera's image data correctly you must first find out which formats your video driver can actually output.
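
For example, the formats a capture driver supports can be listed with the VIDIOC_ENUM_FMT ioctl and struct v4l2_fmtdesc. A minimal sketch, assuming fd is the descriptor opened in step 1:

struct v4l2_fmtdesc fmtdesc;
memset(&fmtdesc, 0, sizeof(fmtdesc));
fmtdesc.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

// keep increasing index until the driver returns EINVAL
while (0 == ioctl(fd, VIDIOC_ENUM_FMT, &fmtdesc)) {
    printf("format %u: %s (fourcc 0x%08x)\n",
           fmtdesc.index, (char *)fmtdesc.description, fmtdesc.pixelformat);
    fmtdesc.index++;
}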

4. Set the format

The format query showed that my device only outputs YUYV 4:2:2, so there is not much to choose on the pixel-format side; mainly the image width and height need to be set.

struct v4l2_format fmt;

CLEAR (fmt);
fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
fmt.fmt.pix.width = 320;
fmt.fmt.pix.height = 240;
fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
fmt.fmt.pix.field = V4L2_FIELD_INTERLACED;
if (-1 == xioctl (fd, VIDIOC_S_FMT, &fmt))
    errno_exit ("VIDIOC_S_FMT");

5. Read the video data stream

V4L2 supports two ways of capturing images: memory mapping (mmap) and direct reads (read). The former is generally used for continuous video capture; the latter is often used to grab still images.

With mmap, the driver maps its internal buffers into the application's address space and both sides exchange data directly through that memory; it is the most efficient and most commonly used method.
With the direct-read method, the application calls read()/write() to move data in and out of the device, usually in combination with select(); a minimal sketch of this path is shown below.
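
A sketch of the read() path, assuming the driver reports V4L2_CAP_READWRITE and the format has already been set as in step 4 (frame and frame_size are illustrative names, not part of the original code):

struct v4l2_format fmt;
CLEAR (fmt);
fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
ioctl (fd, VIDIOC_G_FMT, &fmt);                // ask the driver how many bytes one frame occupies

size_t frame_size = fmt.fmt.pix.sizeimage;
unsigned char *frame = malloc (frame_size);    // single-frame buffer

fd_set fds;
FD_ZERO (&fds);
FD_SET (fd, &fds);
select (fd + 1, &fds, NULL, NULL, NULL);       // wait for a frame (fd was opened with O_NONBLOCK)

ssize_t n = read (fd, frame, frame_size);      // one successful read() returns one complete frame
if (n > 0) {
    /* hand the frame to your processing / format-conversion code */
}
free (frame);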

The rest of this article covers the mmap method; for more background you can also refer to the article 和菜鸟一起学linux之V4L2摄像头应用流程.

The mmap capture flow is:
(1) Request a number of frame buffers for capture and map them from kernel space into user space, so the application can read and process the video data;
(2) Queue the requested buffers on the capture input queue and start capturing;
(3) The driver fills the buffers with video data; the application dequeues a buffer from the output queue, processes it, then puts it back on the input queue, and the cycle repeats to capture continuous video.
The loop looks like this:
[Figure: the queue/dequeue capture loop]

The corresponding code:

struct v4l2_requestbuffers req;
struct buffer * buffers = NULL;

CLEAR (req);
req.count = 4;                    // number of frame buffers
req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
req.memory = V4L2_MEMORY_MMAP;    // use memory-mapped I/O

ioctl (fd, VIDIOC_REQBUFS, &req); // ask the driver to allocate the buffers
if (req.count < 2)
    printf("Insufficient buffer memory\n");

buffers = calloc (req.count, sizeof (*buffers)); // user-space bookkeeping for each buffer

for (n_buffers = 0; n_buffers < req.count; ++n_buffers)
{
    struct v4l2_buffer buf; // one frame buffer inside the driver

    CLEAR (buf);
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = n_buffers;

    if (-1 == ioctl (fd, VIDIOC_QUERYBUF, &buf)) // query the buffer's length and offset
        printf ("VIDIOC_QUERYBUF error\n");

    buffers[n_buffers].length = buf.length;
    buffers[n_buffers].start =
    mmap (NULL /* start anywhere */,             // map the driver buffer into user space
            buf.length,
            PROT_READ | PROT_WRITE /* required */,
            MAP_SHARED /* recommended */,
            fd, buf.m.offset);

    if (MAP_FAILED == buffers[n_buffers].start)
        printf ("mmap failed\n");
}

for (i = 0; i < n_buffers; ++i)
{
    struct v4l2_buffer buf;

    CLEAR (buf);

    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = i;

    if (-1 == ioctl (fd, VIDIOC_QBUF, &buf)) // queue the buffer on the capture input queue
        printf ("VIDIOC_QBUF failed\n");
}

6. Start capturing

enum v4l2_buf_type type;
type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
//start capturing image data
if (-1 == ioctl (fd, VIDIOC_STREAMON, &type)) 
    printf ("VIDIOC_STREAMON failed\n");

7. Read frames and convert the data format

Although video data is now being captured, YUV images cannot be displayed as-is, and LCD panels expect RGB, so the YUV data must be converted to RGB before it can be shown on the screen. For example, converting YUV 4:2:2 to RGB24:

/* convert one YUV 4:2:2 pixel to RGB24 */
int convert_yuv_to_rgb_pixel(int y, int u, int v)
{
    uint pixel32 = 0;
    uchar *pixel = (uchar *)&pixel32;
    int r, g, b;
    r = y + (1.370705 * (v-128));
    g = y - (0.698001 * (v-128)) - (0.337633 * (u-128));
    b = y + (1.732446 * (u-128));
    if(r > 255) r = 255;
    if(g > 255) g = 255;
    if(b > 255) b = 255;
    if(r < 0) r = 0;
    if(g < 0) g = 0;
    if(b < 0) b = 0;
    pixel[0] = r * 220 / 256;
    pixel[1] = g * 220 / 256;
    pixel[2] = b * 220 / 256;

    return pixel32;
}

8. Close the video device
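
The original walkthrough gives no snippet for this step; a minimal sketch, mirroring uninit_device()/close_device() from the full example below:

unsigned int i;

for (i = 0; i < n_buffers; ++i)                 // undo the mmap() done for each driver buffer
    if (-1 == munmap (buffers[i].start, buffers[i].length))
        errno_exit ("munmap");

free (buffers);                                 // release the user-space bookkeeping array

if (-1 == close (fd))                           // finally close the video device node
    errno_exit ("close");
fd = -1;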

To summarize, everything is driven through ioctl():
int ioctl (int __fd,unsigned long int __request,...);

__request is one of the V4L2 ioctl commands; common ones include:

VIDIOC_REQBUFS: request frame buffers from the driver
VIDIOC_QUERYBUF: query the length and offset of a buffer allocated by VIDIOC_REQBUFS so it can be mmap()ed
VIDIOC_QUERYCAP: query the driver's capabilities
VIDIOC_ENUM_FMT: enumerate the image formats the driver supports
VIDIOC_S_FMT: set the current capture format
VIDIOC_G_FMT: get the current capture format
VIDIOC_TRY_FMT: validate a capture format without changing the driver state
VIDIOC_CROPCAP: query the driver's cropping capabilities
VIDIOC_S_CROP: set the cropping rectangle
VIDIOC_G_CROP: get the cropping rectangle
VIDIOC_QBUF: queue an empty buffer on the driver's incoming queue
VIDIOC_DQBUF: dequeue a filled buffer from the driver's outgoing queue
VIDIOC_STREAMON: start streaming (capture)
VIDIOC_STREAMOFF: stop streaming
VIDIOC_QUERYSTD: query which video standards the device supports, e.g. PAL or NTSC
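
As an example of one of these, VIDIOC_TRY_FMT takes the same struct v4l2_format as VIDIOC_S_FMT but only negotiates the format: the driver rewrites the structure with the closest values it can actually deliver, without changing its state. A minimal sketch, reusing fd and the helpers from the code above:

struct v4l2_format try_fmt;

CLEAR (try_fmt);
try_fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
try_fmt.fmt.pix.width = 640;
try_fmt.fmt.pix.height = 480;
try_fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
try_fmt.fmt.pix.field = V4L2_FIELD_INTERLACED;

if (-1 == xioctl (fd, VIDIOC_TRY_FMT, &try_fmt))
    errno_exit ("VIDIOC_TRY_FMT");

// the driver has adjusted the structure to what it can really provide
printf ("driver would deliver %ux%u\n", try_fmt.fmt.pix.width, try_fmt.fmt.pix.height);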

The program below is an open-source V4L2 test example (by default it converts YUYV to RGB24 for display). It requires an fb0 framebuffer device; it can also be tested on Ubuntu by switching to a text console (e.g. Ctrl+Alt+F1) and running the program from there.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <getopt.h>  
#include <fcntl.h>  
#include <unistd.h>
#include <errno.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <asm/types.h>
#include <linux/videodev2.h>
#include <linux/fb.h>

#define uchar unsigned char
#define uint unsigned int
#define CLEAR(x) memset (&(x), 0, sizeof (x))

struct buffer {
    void * start;
    size_t length;
};

static char * dev_name = NULL;
static int fd = -1;
struct buffer * buffers = NULL;
static unsigned int n_buffers = 0;
static int time_in_sec_capture=5;
static int fbfd = -1;
static struct fb_var_screeninfo vinfo;
static struct fb_fix_screeninfo finfo;
static char *fbp=NULL;
static long screensize=0;

static void errno_exit (const char * s)
{
    fprintf (stderr, "%s error %d, %s\n",s, errno, strerror (errno));
    exit (EXIT_FAILURE);
}
/* convert one YUV 4:2:2 pixel to RGB24 */
int convert_yuv_to_rgb_pixel(int y, int u, int v)
{
    uint pixel32 = 0;
    uchar *pixel = (uchar *)&pixel32;
    int r, g, b;
    r = y + (1.370705 * (v-128));
    g = y - (0.698001 * (v-128)) - (0.337633 * (u-128));
    b = y + (1.732446 * (u-128));
    if(r > 255) r = 255;
    if(g > 255) g = 255;
    if(b > 255) b = 255;
    if(r < 0) r = 0;
    if(g < 0) g = 0;
    if(b < 0) b = 0;
    pixel[0] = r * 220 / 256;
    pixel[1] = g * 220 / 256;
    pixel[2] = b * 220 / 256;

    return pixel32;
}

int convert_yuv_to_rgb_buffer(uchar *yuv, uchar *rgb, uint width,uint height)
{
    uint in, out = 0;
    uint pixel_16;
    uchar pixel_24[3];
    uint pixel32;
    int y0, u, y1, v;

    for(in = 0; in < width * height * 2; in += 4) {
        pixel_16 =
        yuv[in + 3] << 24 |
        yuv[in + 2] << 16 |
        yuv[in + 1] <<  8 |
        yuv[in + 0]; // YUYV 4:2:2 packs two pixels into 4 bytes that share one U and one V; RGB24 uses 3 bytes per pixel
        y0 = (pixel_16 & 0x000000ff);
        u  = (pixel_16 & 0x0000ff00) >>  8;
        y1 = (pixel_16 & 0x00ff0000) >> 16;
        v  = (pixel_16 & 0xff000000) >> 24;
        pixel32 = convert_yuv_to_rgb_pixel(y0, u, v);
        pixel_24[0] = (pixel32 & 0x000000ff);
        pixel_24[1] = (pixel32 & 0x0000ff00) >> 8;
        pixel_24[2] = (pixel32 & 0x00ff0000) >> 16;
        rgb[out++] = pixel_24[0];
        rgb[out++] = pixel_24[1];
        rgb[out++] = pixel_24[2]; // first RGB pixel of the pair
        pixel32 = convert_yuv_to_rgb_pixel(y1, u, v);
        pixel_24[0] = (pixel32 & 0x000000ff);
        pixel_24[1] = (pixel32 & 0x0000ff00) >> 8;
        pixel_24[2] = (pixel32 & 0x00ff0000) >> 16;
        rgb[out++] = pixel_24[0];
        rgb[out++] = pixel_24[1];
        rgb[out++] = pixel_24[2];
    }
    return 0;
}
static int xioctl (int fd,int request,void * arg)
{
    int r;
    do r = ioctl (fd, request, arg);
    while (-1 == r && EINTR == errno);
    return r;
}

inline int clip(int value, int min, int max) {
    return (value > max ? max : value < min ? min : value);
  }

static void process_image (const void * p)
{
    /* convert the YUYV frame to RGB24 and write it straight into the mapped
       framebuffer (assumes a 24 bpp framebuffer that is 320 pixels wide) */
    convert_yuv_to_rgb_buffer ((uchar *) p, (uchar *) fbp, 320, 240);
}

static int read_frame (void)
{
    struct v4l2_buffer buf;
    unsigned int i;

    CLEAR (buf);
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;

    if (-1 == xioctl (fd, VIDIOC_DQBUF, &buf)) {  // dequeue a filled buffer from the driver's outgoing queue
        switch (errno) {
            case EAGAIN:
                return 0;
            case EIO:    
            default:
                errno_exit ("VIDIOC_DQBUF");
        }
    }

    assert (buf.index < n_buffers);
    printf("v4l2_pix_format->field(%d)\n", buf.field);

    process_image (buffers[buf.index].start);
    if (-1 == xioctl (fd, VIDIOC_QBUF, &buf))  // requeue the buffer so the driver can refill it
        errno_exit ("VIDIOC_QBUF");

    return 1;
}

static void run (void)
{
    int frames;
    frames = 30 * time_in_sec_capture;  // assume ~30 fps, so total frames = 30 * capture time in seconds

    while (frames-- > 0) {
        for (;;) {
            fd_set fds;
            struct timeval tv;
            int r;
            FD_ZERO (&fds);
            FD_SET (fd, &fds);

            tv.tv_sec = 2;
            tv.tv_usec = 0;

            r = select (fd + 1, &fds, NULL, NULL, &tv);

            if (-1 == r) {
                if (EINTR == errno)
                    continue;
                errno_exit ("select");
            }

            if (0 == r) {
                fprintf (stderr, "select timeout/n");
                exit (EXIT_FAILURE);
            }
            if (read_frame ())  // leave the select loop once one frame has been processed
                break;
        }
    }
}

static void stop_capturing (void)
{
    enum v4l2_buf_type type;

    type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (-1 == xioctl (fd, VIDIOC_STREAMOFF, &type))
        errno_exit ("VIDIOC_STREAMOFF");
}

static void start_capturing (void)
{
    unsigned int i;
    enum v4l2_buf_type type;

    for (i = 0; i < n_buffers; ++i) {
        struct v4l2_buffer buf;
        CLEAR (buf);

        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = i;

        if (-1 == xioctl (fd, VIDIOC_QBUF, &buf))  // queue every buffer on the capture input queue
            errno_exit ("VIDIOC_QBUF");
    }

    type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

    if (-1 == xioctl (fd, VIDIOC_STREAMON, &type))  // start streaming
        errno_exit ("VIDIOC_STREAMON");
}

static void uninit_device (void)
{
    unsigned int i;

    for (i = 0; i < n_buffers; ++i)
        if (-1 == munmap (buffers[i].start, buffers[i].length))
            errno_exit ("munmap");

    if (-1 == munmap(fbp, screensize)) {
        printf("Error: framebuffer device munmap() failed.\n");
        exit (EXIT_FAILURE);
    }
    free (buffers);
}


static void init_mmap (void)
{
    struct v4l2_requestbuffers req;

    //mmap framebuffer
    fbp = (char *)mmap(NULL,screensize,PROT_READ | PROT_WRITE,MAP_SHARED ,fbfd, 0);
    if (fbp == MAP_FAILED) {
        printf("Error: failed to map framebuffer device to memory.\n");
        exit (EXIT_FAILURE) ;
    }
    memset(fbp, 0, screensize);
    CLEAR (req);

    req.count = 4;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;

    if (-1 == xioctl (fd, VIDIOC_REQBUFS, &req)) {
        if (EINVAL == errno) {
            fprintf (stderr, "%s does not support memory mapping\n", dev_name);
            exit (EXIT_FAILURE);
        } else {
            errno_exit ("VIDIOC_REQBUFS");
        }
    }

    if (req.count < 4) {
        fprintf (stderr, "Insufficient buffer memory on %s\n",dev_name);
        exit (EXIT_FAILURE);
    }

    buffers = calloc (req.count, sizeof (*buffers));

    if (!buffers) {
        fprintf (stderr, "Out of memory\n");
        exit (EXIT_FAILURE);
    }

    for (n_buffers = 0; n_buffers < req.count; ++n_buffers) {
        struct v4l2_buffer buf;

        CLEAR (buf);

        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = n_buffers;

        if (-1 == xioctl (fd, VIDIOC_QUERYBUF, &buf))  // query the length and offset of the buffer allocated by VIDIOC_REQBUFS
            errno_exit ("VIDIOC_QUERYBUF");

        buffers[n_buffers].length = buf.length;
        buffers[n_buffers].start =mmap (NULL,buf.length,PROT_READ | PROT_WRITE ,MAP_SHARED,fd, buf.m.offset);

        if (MAP_FAILED == buffers[n_buffers].start)
            errno_exit ("mmap");
    }
}

static void init_device (void)
{
    struct v4l2_capability cap;
    struct v4l2_cropcap cropcap;
    struct v4l2_crop crop;
    struct v4l2_format fmt;
    unsigned int min;


    // Get fixed framebuffer screen information
    if (-1==xioctl(fbfd, FBIOGET_FSCREENINFO, &finfo)) {
        printf("Error reading fixed information.\n");
        exit (EXIT_FAILURE);
    }

    // Get variable framebuffer screen information
    if (-1==xioctl(fbfd, FBIOGET_VSCREENINFO, &vinfo)) {
        printf("Error reading variable information.\n");
        exit (EXIT_FAILURE);
    }
    screensize = 320*240 * vinfo.bits_per_pixel / 8;  // only map enough of the framebuffer for a 320x240 image
    printf("vinfo.xres=%d\n",vinfo.xres);
    printf("vinfo.yres=%d\n",vinfo.yres);
    printf("vinfo.bits_per_pixel=%d\n",vinfo.bits_per_pixel);
    printf("screensize=%d\n",screensize);

    if (-1 == xioctl (fd, VIDIOC_QUERYCAP, &cap)) {   // query driver capabilities: is this a V4L2 device?
        if (EINVAL == errno) {
            fprintf (stderr, "%s is no V4L2 device/n",dev_name);
            exit (EXIT_FAILURE);
        } else {
            errno_exit ("VIDIOC_QUERYCAP");
        }
    }

    if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)) {
        fprintf (stderr, "%s is no video capture device\n",dev_name);
        exit (EXIT_FAILURE);
    }

    if (!(cap.capabilities & V4L2_CAP_STREAMING)) {
        fprintf (stderr, "%s does not support streaming i/o\n",dev_name);
        exit (EXIT_FAILURE);
    }

    CLEAR (cropcap);

    cropcap.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

    if (0 == xioctl (fd, VIDIOC_CROPCAP, &cropcap)) {  // query the driver's cropping capabilities
        crop.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        crop.c = cropcap.defrect;

        if (-1 == xioctl (fd, VIDIOC_S_CROP, &crop)) {  // reset cropping to the default rectangle
            switch (errno) {
            case EINVAL:   // cropping not supported
                break;
            default:       // other errors are ignored
                break;
            }
        }
    }

    CLEAR (fmt);

    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 320;  
    fmt.fmt.pix.height = 240;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;  //V4L2_PIX_FMT_MJPEG;
    fmt.fmt.pix.field = V4L2_FIELD_INTERLACED;

    if (-1 == xioctl (fd, VIDIOC_S_FMT, &fmt))    // set the current capture format
        errno_exit ("VIDIOC_S_FMT");

    init_mmap ();
}

static void close_device (void)
{
    if (-1 == close (fd))
    errno_exit ("close");
    fd = -1;
    close(fbfd);
}

static void open_device (void)
{
    struct stat st;  

    if (-1 == stat (dev_name, &st)) {
        fprintf (stderr, "Cannot identify '%s': %d, %s\n",dev_name, errno, strerror (errno));
        exit (EXIT_FAILURE);
    }

    if (!S_ISCHR (st.st_mode)) {
        fprintf (stderr, "%s is no device\n", dev_name);
        exit (EXIT_FAILURE);
    }

    //open framebuffer
    fbfd = open("/dev/fb0", O_RDWR);
    if (fbfd==-1) {
        printf("Error: cannot open framebuffer device.\n");
        exit (EXIT_FAILURE);
    }

    //open camera
    fd = open (dev_name, O_RDWR| O_NONBLOCK, 0);
    if (-1 == fd) {
        fprintf (stderr, "Cannot open '%s': %d, %s\n",dev_name, errno, strerror (errno));
        exit (EXIT_FAILURE);
    }
}

static void usage (FILE * fp,int argc,char ** argv)
{
    fprintf (fp,
            "Usage: %s [options]\n\n"
            "Options:/n"
            "-d | --device name Video device name [/dev/video]\n"
            "-h | --help Print this message\n"
            "-t | --how long will display in seconds\n"
            "",
            argv[0]);
}

static const char short_options [] = "d:ht:";
static const struct option long_options [] = {
{ "device", required_argument, NULL, 'd' },
{ "help", no_argument, NULL, 'h' },
{ "time", no_argument, NULL, 't' },
{ 0, 0, 0, 0 }
};

int main (int argc,char ** argv)
{
    dev_name = "/dev/video0";  //摄像头设备名

    for (;;)  
    {
        int index;
        int c;

        c = getopt_long (argc, argv,short_options, long_options,&index);
        if (-1 == c)
        break;

        switch (c) {
            case 0:
                break;

            case 'd':                               // -d: set the camera device name
                dev_name = optarg;
                break;

            case 'h':                               // -h: print the help message
                usage (stdout, argc, argv);
                exit (EXIT_SUCCESS);
            
            case 't':  
                time_in_sec_capture = atoi(optarg); // -t: capture duration in seconds
                break;

            default:
                usage (stderr, argc, argv);
                exit (EXIT_FAILURE);
        }
    }

    open_device ();

    init_device ();

    start_capturing ();

    run ();

    stop_capturing ();

    uninit_device ();

    close_device ();

    exit (EXIT_SUCCESS);

    return 0;
}

References:
【原创】IP摄像头技术纵览(一)—linux 内核编译,USB摄像头设备识别

【原创】IP摄像头技术纵览(二)—linux 视频开发接口V4L2概述

Source: https://blog.csdn.net/qlexcel/article/details/122330618
