
Jingjiang College of Jiangsu University (京江学院)

Foreign Literature Translation

Student ID: 3081155033
Student Name: Miao Chengpeng (缪成鹏)
Class: Electronic Information Engineering J0802
Advisor: Li Zhengming (李正明), Professor
June 2012

A System for Remote Video Surveillance and Monitoring

The thrust of CMU research under the DARPA Video Surveillance and Monitoring (VSAM) project is cooperative multi-sensor surveillance to support battlefield awareness. Under our VSAM Integrated Feasibility Demonstration (IFD) contract, we have developed automated video understanding technology that enables a single human operator to monitor activities over a complex area using a distributed network of active video sensors. The goal is to automatically collect and disseminate real-time information from the battlefield to improve the situational awareness of commanders and staff. Other military and federal law enforcement applications include providing perimeter security for troops, monitoring peace treaties or refugee movements from unmanned air vehicles, providing security for embassies or airports, and staking out suspected drug or terrorist hide-outs by collecting time-stamped pictures of everyone entering and exiting the building.

Automated video surveillance is an important research area in the commercial sector as well. Technology has reached a stage where mounting cameras to capture video imagery is cheap, but finding available human resources to sit and watch that imagery is expensive. Surveillance cameras are already prevalent in commercial establishments, with camera output being recorded to tapes that are either rewritten periodically or stored in video archives. After a crime occurs (a store is robbed or a car is stolen), investigators can go back after the fact to see what happened, but of course by then it is too late. What is needed is continuous 24-hour monitoring and analysis of video surveillance data to alert security officers to a burglary in progress, or to a suspicious individual loitering in the parking lot, while options are still open for avoiding the crime.

Keeping track of people, vehicles, and their interactions in an urban or battlefield environment is a difficult task. The role of VSAM video understanding technology in achieving this goal is to automatically "parse" people and vehicles from raw video, determine their geolocations, and insert them into a dynamic scene visualization. We have developed robust routines for detecting and tracking moving objects. Detected objects are classified into semantic categories such as human, human group, car, and truck using shape and color analysis, and these labels are used to improve tracking using temporal consistency constraints. Further classification of human activity, such as walking and running, has also been achieved. Geolocations of labeled entities are determined from their image coordinates using either wide-baseline stereo from two or more overlapping camera views, or intersection of viewing rays with a terrain model from monocular views. These computed locations feed into a higher-level tracking module that tasks multiple sensors with variable pan, tilt, and zoom to cooperatively and continuously track an object through the scene. All resulting object hypotheses from all sensors are transmitted as symbolic data packets back to a central operator control unit, where they are displayed on a graphical user interface to give a broad overview of scene activities. These technologies have been demonstrated through a series of yearly demos, using a testbed system developed on the urban campus of CMU.
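
As a toy illustration of the monocular geolocation step described above, the sketch below intersects a camera's viewing ray with a flat ground plane standing in for a terrain model. The camera model, angle conventions, and function names here are illustrative assumptions, not the VSAM implementation:

```python
import math

def viewing_ray(pan_deg, tilt_deg):
    """Unit viewing ray along the camera's optical axis, given pan (yaw, measured
    from +x toward +y) and tilt (negative = looking down). A real system would
    also fold in the pixel offset and focal length; this sketch geolocates the
    pixel at the image center."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    return (math.cos(tilt) * math.cos(pan),
            math.cos(tilt) * math.sin(pan),
            math.sin(tilt))

def geolocate(camera_pos, pan_deg, tilt_deg):
    """Intersect the viewing ray with the ground plane z = 0 (a flat-terrain
    stand-in for a real terrain model). Returns (x, y) on the ground, or None
    if the ray never reaches it."""
    cx, cy, cz = camera_pos
    dx, dy, dz = viewing_ray(pan_deg, tilt_deg)
    if dz >= 0:          # ray points level or upward: no ground intersection
        return None
    t = -cz / dz         # solve cz + t*dz = 0 for the ray parameter t
    return (cx + t * dx, cy + t * dy)

# Camera 10 m up, looking down 45 degrees along +x: the object lies 10 m ahead.
print(geolocate((0.0, 0.0, 10.0), 0.0, -45.0))
```

With wide-baseline stereo, two such rays from overlapping camera views would instead be intersected with each other; the flat-plane case needs only a single calibrated camera.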

Detection of moving objects in video streams is known to be a significant, and difficult, research problem. Aside from the intrinsic usefulness of being able to segment video streams into moving and background components, detecting moving blobs provides a focus of attention for recognition, classification, and activity analysis, making these later processes more efficient, since only "moving" pixels need be considered.

There are three conventional approaches to moving object detection: temporal differencing, background subtraction, and optical flow. Temporal differencing is very adaptive to dynamic environments, but generally does a poor job of extracting all relevant feature pixels. Background subtraction provides the most complete feature data, but is extremely sensitive to dynamic scene changes due to lighting and extraneous events. Optical flow can be used to detect independently moving objects in the presence of camera motion; however, most optical flow computation methods are computationally complex and cannot be applied to full-frame video streams in real time without specialized hardware.
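
The trade-off between the first two approaches can be seen in a toy sketch. Frames here are flat lists of grayscale values and the threshold is an arbitrary assumption; this is illustrative code, not the VSAM implementation:

```python
THRESHOLD = 25  # arbitrary intensity-difference threshold for this sketch

def temporal_difference(frame, prev_frame):
    """Mark pixels whose intensity changed markedly since the previous frame.
    Adapts instantly to scene changes, but the interior pixels of a uniformly
    colored moving object barely change, so only its edges are extracted."""
    return [abs(a - b) > THRESHOLD for a, b in zip(frame, prev_frame)]

def background_subtract(frame, background):
    """Mark pixels that differ markedly from a background model. Extracts the
    whole object, but any lighting change corrupts the result until the
    background model is updated."""
    return [abs(a - b) > THRESHOLD for a, b in zip(frame, background)]

background = [50, 50, 50, 50, 50]
prev_frame = [50, 50, 200, 200, 50]   # object occupies pixels 2-3
frame      = [50, 200, 200, 50, 50]   # object has moved one pixel left

print(temporal_difference(frame, prev_frame))  # fires only where pixels changed
print(background_subtract(frame, background))  # recovers the full silhouette
```

Note how temporal differencing fires only at the newly covered and newly vacated pixels, while background subtraction recovers the object's full silhouette, exactly the strengths and weaknesses described above.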

Under the VSAM program, CMU has developed and implemented three methods for moving object detection on the VSAM testbed. The first is a combination of adaptive background subtraction and three-frame differencing. This hybrid algorithm is very fast, and surprisingly effective; indeed, it is the primary algorithm used by the majority of the SPUs in the VSAM system. In addition, two new prototype algorithms have been developed to address shortcomings of this standard approach. First, a mechanism for maintaining temporal object layers is developed to allow greater disambiguation of moving objects that stop for a while, are occluded by other objects, and that then resume motion. One limitation that affects both this method and the standard algorithm is that they only work for static cameras, or in a "step-and-stare" mode for pan-tilt cameras. To overcome this limitation, a second extension has been developed to allow background subtraction from a continuously panning and tilting camera. Through clever accumulation of image evidence, this algorithm can be implemented in real time on a conventional PC platform. A fourth approach, to moving object detection from a moving airborne platform, has also been developed under a subcontract to the Sarnoff Corporation. This approach is based on image stabilization using special video processing hardware.
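
A toy version of that hybrid idea combines three-frame differencing (to find pixels moving right now) with a slowly adapted background model (to recover the full silhouette). The constants, the per-pixel update rule, and the simple OR used to fuse the two cues below are illustrative assumptions, not the testbed's actual algorithm or parameters:

```python
THRESHOLD = 25   # illustrative motion threshold
ALPHA = 0.1      # illustrative background learning rate

def three_frame_diff(f0, f1, f2):
    """A pixel is 'moving' only if the newest frame f2 differs from BOTH
    previous frames, which suppresses the ghost response an object leaves
    behind at its old position."""
    return [abs(c - b) > THRESHOLD and abs(c - a) > THRESHOLD
            for a, b, c in zip(f0, f1, f2)]

def update_background(background, frame, moving):
    """Blend the new frame into the background model, but only at pixels not
    currently moving, so foreground objects are not absorbed into it."""
    return [b if m else ALPHA * f + (1 - ALPHA) * b
            for b, f, m in zip(background, frame, moving)]

def detect(background, frame, moving):
    """Fuse the two cues: background subtraction supplies the full silhouette,
    the motion mask supplies evidence of current movement. This sketch simply
    ORs them together."""
    subtracted = [abs(f - b) > THRESHOLD for f, b in zip(frame, background)]
    return [s or m for s, m in zip(subtracted, moving)]

f0, f1, f2 = [50, 50, 50, 50], [50, 200, 50, 50], [50, 50, 200, 50]
moving = three_frame_diff(f0, f1, f2)
print(moving)   # the ghost at pixel 1 (the object's old position) is suppressed
```

Restricting the background update to non-moving pixels is what makes the subtraction "adaptive": gradual lighting changes are folded into the model while a stopped object is not immediately absorbed.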

The current VSAM IFD testbed system and suite of video understanding technologies are the end result of a three-year, evolutionary process. Impetus for this evolution was provided by a series of yearly demonstrations. The following tables provide a succinct synopsis of the progress made during the last three years in the areas of video understanding technology, VSAM testbed architecture, sensor control algorithms, and degree of user interaction. Although the program is over now, the VSAM IFD testbed continues to provide a valuable resource for the development and testing of new video understanding capabilities. Future work will be directed towards achieving the following goals:
1. better understanding of human motion, including segmentation and tracking of articulated body parts;
2. improved data logging and retrieval mechanisms to support 24/7 system operations;
3. bootstrapping functional site models through passive observation of scene activities;
4. better detection and classification of multi-agent events and activities;
5. better camera control to enable smooth object tracking at high zoom; and
6. acquisition and selection of "best views" with the eventual goal of recognizing individuals in the scene.

远程视频监控系统

在美国国防部高级研究计划局(DARPA)视频监控(VSAM)项目下,CMU研究的重点是支持战场态势感知的多传感器协同监控。在VSAM综合可行性示范(IFD)合同下,我们研发出了自动视频理解技术,使单个操作员能够通过主动视频传感器的分布式网络监测一个复杂区域内的各种活动。目标是自动收集和传播实时战场信息,以改善指挥人员及参谋的态势感知。其他军事和联邦执法领域的应用包括:为部队提供边境安防;通过无人机监控和平条约的执行及难民流动;保障使馆和机场的安全;以及通过收集进出建筑物每个人的带时间戳的图片,监视可疑毒品或恐怖分子藏匿场所。

自动视频监控在商业领域同样是一个重要的研究课题。技术发展到今天,安装摄像头捕捉视频图像已经非常廉价,但雇人坐在屏幕前监视图像的成本却非常高昂。监控摄像头在商业机构中已经非常普遍,其输出被录制到磁带上,这些磁带或定期重写,或存入录像档案。犯罪发生之后,比如商店被窃或汽车被盗,调查人员可以事后调看录像,但那时为时已晚。真正需要的是对视频监控数据进行连续24小时的监测和分析,在仍有机会阻止犯罪时,提醒保安人员注意正在发生的盗窃案,或在停车场游荡的可疑人员。

在城市或战场环境中追踪人员、车辆及其相互关系是一项艰巨的任务。VSAM视频理解技术在实现这一目标中的作用,是从原始视频中自动"解析"出人和车辆,确定其地理位置,并将其插入动态场景可视化之中。我们已经开发出用于检测和跟踪运动物体的鲁棒算法。利用形状和颜色分析,检测到的物体被划分为人、人群、汽车、卡车等语义类别,这些标签又结合时间一致性约束用于改善跟踪。对行走、奔跑等人类活动的进一步分类也已实现。被标记实体的地理位置由其图像坐标确定:或者利用两个以上重叠相机视图做宽基线立体视觉,或者在单目视图下求视线与地形模型的交点。这些计算出的位置被送入更高层的跟踪模块,该模块调度具有可变水平转动、俯仰和变焦能力的多个传感器,协同而连续地跟踪场景中的物体。所有传感器产生的全部物体假设以符号数据包的形式传回中央操作员控制单元,显示在图形用户界面上,给出场景活动的总体概览。这些技术已通过一系列年度演示得到验证,演示使用了在CMU城市校园内开发的试验系统。

视频流中的运动物体检测是公认的重要而困难的研究课题。除了将视频流分割为运动部分和背景部分这一本身有用的功能之外,运动区块检测还为识别、分类和活动分析提供了关注焦点,使这些后续处理更加高效,因为只需考虑"运动"的像素。运动物体检测有三种常规方法:时间差分法、背景减除法和光流法。时间差分法对动态环境适应性很强,但通常难以提取出全部相关特征像素。背景减除法能提供最完整的特征数据,但对光照和无关事件引起的动态场景变化极为敏感。光流法可以在相机运动的情况下检测独立运动的物体,但大多数光流计算方法计算复杂,没有专用硬件就无法实时处理全帧视频流。

在VSAM计划下,CMU在VSAM试验平台上开发并实现了三种运动物体检测方法。第一种是自适应背景减除与三帧差分相结合的方法。这种混合算法速度很快,而且效果出奇地好;事实上,它是VSAM系统中大多数传感器处理单元(SPU)采用的主要算法。此外,还开发了两种新的原型算法来弥补这一标准方法的不足。第一,开发了一种维护时域物体层的机制,以便更好地区分那些停留片刻、被其他物体遮挡、随后又恢复运动的物体。该方法与标准算法共同的局限是只适用于静止相机,或云台相机的"步进-凝视"模式。为克服这一局限,开发了第二种扩展,使连续转动和俯仰的相机也能进行背景减除。通过巧妙地积累图像证据,该算法可以在普通PC平台上实时实现。第四种方法用于从空中运动平台上检测运动物体,由Sarnoff公司根据分包合同开发,基于使用专用视频处理硬件的图像稳定技术。

目前的VSAM IFD试验系统和整套视频理解技术是三年渐进发展的最终成果。推动这一演进的是一系列年度演示。下列表格简要总结了过去三年在视频理解技术、VSAM试验平台架构、传感器控制算法以及用户交互程度等方面取得的进展。虽然该计划现已结束,VSAM IFD试验平台仍为开发和测试新的视频理解能力提供着宝贵的资源。今后的工作将致力于实现以下目标:1、更好地理解人体运动,包括对铰接身体部位的分割和跟踪;2、改进数据记录和检索机制,以支持全天候(24/7)系统运行;3、通过被动观察场景活动来引导建立功能性场地模型;4、更好地检测和分类多智能体事件与活动;5、更好的相机控制,以实现高变焦下的平滑目标跟踪;6、获取和选择"最佳视图",最终目标是识别场景中的个人。
