Chinese Culture University Institutional Repository (CCUR): Item 987654321/22061


    Please use this identifier to cite or link to this item: https://irlib.pccu.edu.tw/handle/987654321/22061


    Title: The Effect of Training Pattern Sequences on the Learning and Ability of a Single-Layer Perceptron
    Other Titles: 訓練樣本的順序對單層感知機的學習與能力的影響
    Authors: 翁志祁
    Contributors: College of Engineering
    Keywords: artificial neural network
    single-layer perceptron
    hard-limiter nonlinearity
    piecewise linear activation function
    nonlinear transformation
    Date: 2004-06-01
    Issue Date: 2012-04-25 13:54:04 (UTC+8)
    Abstract: A single-layer perceptron neural network with four input nodes and one output node is used to simulate the learning of a four-bit logical OR gate. The perceptron is trained with two sets of identical patterns presented in opposite orders. After training, the weights between nodes in the two cases differ markedly. This strongly suggests that, although the perceptron produces the correct logical output under either training order, it does not learn exactly the same knowledge. The first training set begins with the all-zero pattern. Here each bit acquires a distinct weight, with higher-order bits weighted more heavily than lower-order ones, and the output is the sum of the weights of the effective bits; this is an accumulative, arithmetic style of learning. The second training set begins with the all-one pattern. In this case the weights connected to the different bits are all equal, and the output is that common weight multiplied by the number of effective bits; this is a logical style of learning. Although both training orders yield correct results, the differing weight distributions imply that the perceptron learns different knowledge under different training orders. This phenomenon offers a possible avenue toward understanding how, and what, a perceptron learns, and would be an interesting direction for further study. Moreover, since a single-layer perceptron can only solve linearly separable problems, whether different training orders produce a similar effect in multilayer perceptrons deserves further investigation.
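
    The following short Python sketch illustrates the experiment described in the abstract. It is a minimal sketch under stated assumptions: the record does not give the paper's learning rate, weight initialization, or number of training epochs, so the standard Rosenblatt perceptron rule with zero initial weights and a unit learning rate is assumed here. It trains the same 16 four-bit OR patterns in ascending order (all-zero pattern first) and in descending order (all-one pattern first) and prints the resulting weights so the two distributions can be compared; the exact values reported in the paper may differ under other settings.

    import itertools

    def hard_limiter(v):
        # Hard-limiter nonlinearity: output 1 when the net input exceeds 0.
        return 1 if v > 0 else 0

    def train(patterns, epochs=20, lr=1.0):
        # Standard perceptron rule (assumed): w <- w + lr * (t - y) * x.
        w = [0.0] * 4   # one weight per input bit
        b = 0.0         # bias term
        for _ in range(epochs):
            for x in patterns:
                t = 1 if any(x) else 0   # four-input OR gate target
                y = hard_limiter(sum(wi * xi for wi, xi in zip(w, x)) + b)
                err = t - y
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    all_patterns = list(itertools.product([0, 1], repeat=4))

    # Same 16 patterns, opposite presentation orders.
    w_up, b_up = train(all_patterns)                      # begins with (0,0,0,0)
    w_down, b_down = train(list(reversed(all_patterns)))  # begins with (1,1,1,1)

    print("ascending order :", w_up, b_up)
    print("descending order:", w_down, b_down)

    Because OR is linearly separable, both runs converge to a correct classifier; comparing the two printed weight vectors is the point of the experiment, since the abstract reports that the resulting weight distributions differ with presentation order.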
    Relation: Hwa Kang Journal of Engineering, No. 18 (2004/06/01), pp. 83-88
    Appears in Collections:[College of Engineering] Chinese Culture University Hwa Kang Journal of Engineering

    Files in This Item:

    File: index.html (HTML, 0 KB, 652 views)


    All items in CCUR are protected by copyright, with all rights reserved.

