Abstract: Driven by Industry 4.0, image processing has evolved substantially in recent years, with clear improvements in image resolution and quality in fields such as satellite photography and medical microscopy; it is also applied to digital art, dimensional measurement, image reconstruction, and more. Robotics technology has likewise advanced rapidly, accelerating the development of Industry 4.0, and object identification is an indispensable part of a robot. In this thesis, a camera is combined with a gripper mounted on a mobile robot, and a neural-network model is embedded in a microcontroller, making the system more user-friendly; the model helps the gripper identify and grasp target objects and offers an alternative approach to image identification.
Image identification is an important part of robotics; the identification developed in this thesis can be applied to recognizing the position and type of the object to be grasped by a robot gripper. A Raspberry Pi with the visual identification functions of OpenCV performs image preprocessing on the captured object and computes its coordinates and feature values. For identification, the proposed method combines image invariant moments with an adaptive neural network: the simplified invariant-moment values serve as inputs to train an adaptive neural-network model, which is then deployed on the Raspberry Pi to control the robot and gripper. In addition, the mobile base platform uses a TSK fuzzy controller developed on an Arduino, paired with an ultrasonic sensor for distance measurement; the TSK fuzzy controller performs position control and communicates with the Raspberry Pi to determine whether the target position has been reached and grasping can begin.
To verify the effectiveness of the proposed method, the neural network is first simulated in MATLAB, and a gripper is then implemented and tested. The experimental data show that the proposed neural-network model can effectively identify the object to be grasped and then grasp it.
Driven by the trends of Industry 4.0, image processing has evolved significantly in recent years, and image resolution and quality have improved in applications such as satellite photography and medical microscopy. The technology is also used in digital art, size measurement, image reconstruction, and so on. With the rapid development of machine and robot technology, Industry 4.0 is advancing quickly, and image identification is an indispensable part of a robot. In this thesis, a camera module is integrated with the handling gripper, and the gripper is mounted on a mobile robot. The neural-network identification, which makes the system more user-friendly, assists the gripper in grasping its target, and the proposed method provides an alternative solution for image identification.
Image identification is an important part of a robot; it can be used to identify the position and type of the object to be grasped. In this thesis, a Raspberry Pi with the visual identification functions of OpenCV performs image preprocessing and computes the coordinates and feature values of the captured objects. In the identification architecture, the proposed method combines image invariant moments with a heuristic neural network. The neural-network model is trained on a database that pairs invariant-moment values with the corresponding object types, and the trained model is then embedded into the Raspberry Pi to control the handling gripper. In addition, a TSK fuzzy controller, implemented on an Arduino MCU with an ultrasonic sensor, is designed to control the mobile base platform. The Arduino MCU communicates with the Raspberry Pi to decide whether the appropriate location has been reached.
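The invariant-moment features mentioned above are, in the standard formulation, Hu's seven moment invariants, which are unchanged under translation (and, up to discretization error, scale and rotation) of the object silhouette. As a minimal sketch of this feature-extraction step, the pure-NumPy function below computes the seven Hu moments from a binary object mask; the thesis itself uses OpenCV on the Raspberry Pi, so this standalone version is only illustrative:

```python
import numpy as np

def hu_moments(img):
    """Compute the seven Hu invariant moments of a binary image mask."""
    img = np.asarray(img, dtype=np.float64)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()                      # zeroth raw moment (object area)
    xbar = (x * img).sum() / m00         # centroid x
    ybar = (y * img).sum() / m00         # centroid y

    def mu(p, q):
        # Central moment: translation-invariant by construction
        return (((x - xbar) ** p) * ((y - ybar) ** q) * img).sum()

    def eta(p, q):
        # Normalized central moment: adds scale invariance
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)

    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])
```

In a pipeline like the one described, these seven values (or a simplified subset of them) would form the input vector fed to the neural-network classifier; with OpenCV installed, `cv2.HuMoments(cv2.moments(mask))` yields the same quantities.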
To verify the effectiveness of the proposed method, simulation results obtained with MATLAB are provided and a handling gripper prototype is implemented; experimental results are also presented. The results show that the proposed neural-network model can correctly identify the object to be grasped and grasp it.
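The TSK (Takagi-Sugeno-Kang) fuzzy position control described above can be sketched as a zero-order Sugeno controller: fuzzify the ultrasonic distance error, fire a small rule base with crisp consequents, and take the weighted average as the drive command. The rule breakpoints, target distance, and speed values below are hypothetical tuning choices, not the thesis's actual parameters:

```python
def tsk_speed(distance_cm, target_cm=15.0):
    """Zero-order TSK controller mapping distance error to a drive speed
    in [-1, 1] (negative = back up, positive = move forward)."""
    e = distance_cm - target_cm  # positive error: platform is too far away

    def tri(xv, a, b, c):
        # Triangular membership function with support (a, c) and peak at b
        if xv <= a or xv >= c:
            return 0.0
        return (xv - a) / (b - a) if xv < b else (c - xv) / (c - b)

    # Rule base: (firing strength, crisp consequent speed); breakpoints in cm
    rules = [
        (1.0 if e <= -30 else 0.0, -1.0),  # much too close -> fast reverse
        (tri(e, -30, -15, 0),      -0.5),  # too close      -> slow reverse
        (tri(e, -15, 0, 15),        0.0),  # near target    -> stop
        (tri(e, 0, 15, 30),         0.5),  # too far        -> slow forward
        (1.0 if e >= 30 else 0.0,   1.0),  # far away       -> fast forward
    ]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0.0 else 0.0  # weighted-average defuzzification
```

On the actual platform, `distance_cm` would come from the ultrasonic sensor read by the Arduino, and the Raspberry Pi would be signaled once the output settles near zero, indicating the gripper is in position.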