Abstract: This study uses a single-layer perceptron neural network model with four input nodes and one output node to simulate a four-bit logical OR gate. After training with two sample sets whose contents are identical but whose presentation orders are reversed, we found that the weights between the perceptron nodes differ greatly between the two trained networks. This strongly suggests that, although the perceptron produces correct results under either training order, it does not learn exactly the same knowledge. The first training set begins with the sample whose four inputs are all 0. We observed that each bit acquires its own weight, that the weights connected to the higher-order bits are larger than those connected to the lower-order bits, and that the output is the sum of the weights of all effective bits; this is evidently a form of cumulative learning. The second training set begins with the sample whose four inputs are all 1. In this case, we found that the weights connected to the different bits are all equal, and the output is the weight multiplied by the number of effective bits, so this is a form of logical learning. Although both training orders yield correct results, the different weight distributions imply that the perceptron learns different knowledge under different training orders. This phenomenon may offer a possible avenue for better understanding how and what a perceptron learns, and would be an interesting research direction. Moreover, since a single-layer perceptron can only handle simple linearly separable problems, whether different training orders produce a similar effect in multi-layer perceptrons deserves further investigation.
A single-layer perceptron neural network with four input nodes and one output node is used to simulate the learning of a four-input OR gate. Two training sets with the same patterns but opposite presentation orders are used to train the perceptron. After training, the weights between nodes are observed to be very different under the two orders. This phenomenon strongly suggests that, while producing the correct logical results in all cases, the perceptron does not learn exactly the same characteristics of the same patterns under different training orders. In one case, where training begins with the patterns containing fewer 1's, the perceptron learns in an arithmetic way: the weights of the higher-order nodes have larger values, and the output is the sum of the weights of the effective bits. In the other case, where training begins with the patterns containing more 1's, the perceptron learns in a logical way: the weights of the different nodes have the same value, and the output is the weight multiplied by the number of effective bits. Although training under either order gives the correct outputs, the different weight distributions imply that the perceptron learns differently when trained with different sequences. This phenomenon may provide a path toward a better understanding of how and what a perceptron learns, and may be worth further research. Furthermore, a single-layer perceptron can only solve simple linearly separable problems; it would be interesting to know whether different training orders have a similar effect on multi-layer perceptrons as they do on the single-layer perceptron.
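To make the experimental setup concrete, the following is a minimal sketch, not the authors' code, of a single-layer perceptron trained as a four-input OR gate under two presentation orders of the same sixteen patterns. The learning rate, number of epochs, zero initial weights, and the exact orderings (ascending vs. descending binary count) are assumptions made for illustration, so the resulting weight values may not reproduce the distributions reported in the paper.

```python
# Sketch of the described experiment: one perceptron, four inputs, one output,
# trained on all 16 patterns of a 4-bit OR gate with the classic perceptron rule.
import itertools

def train_perceptron(patterns, lr=0.1, epochs=50):
    """Train a single-layer perceptron (step activation) on (inputs, target) pairs."""
    w = [0.0] * 4      # one weight per input bit (assumed zero initialization)
    b = 0.0            # bias (threshold) term
    for _ in range(epochs):
        for x, t in patterns:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y
            # Perceptron update: weights change only when the output is wrong.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# All 16 input patterns of a 4-bit OR gate; the target is 1 unless every bit is 0.
all_patterns = [(x, int(any(x))) for x in itertools.product([0, 1], repeat=4)]

# Order A: start from (0,0,0,0), counting upward in binary.
ascending = list(all_patterns)
# Order B: start from (1,1,1,1), counting downward.
descending = list(reversed(all_patterns))

w_asc, b_asc = train_perceptron(ascending)
w_desc, b_desc = train_perceptron(descending)
print("weights (starting from all zeros):", [round(v, 2) for v in w_asc], "bias:", round(b_asc, 2))
print("weights (starting from all ones): ", [round(v, 2) for v in w_desc], "bias:", round(b_desc, 2))
```

Comparing the two printed weight vectors is the kind of observation the abstract describes: both networks classify every OR-gate pattern correctly, yet the weights they arrive at can differ depending on the order in which the patterns were presented.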