What is the time complexity of training a neural network using back-propagation?


17

Suppose that a NN contains $n$ hidden layers, $m$ training examples, $x$ features, and $n_i$ nodes in each layer. What is the time complexity of training this NN using back-propagation?

I have a basic idea of how to find the time complexity of an algorithm, but here there are 4 different factors to consider: iterations, layers, the nodes in each layer, the training examples, and maybe more. I found an answer here, but it was not clear enough.

Are there other factors, apart from those mentioned above, that influence the time complexity of the NN training algorithm?


Answers:


11

I haven't seen an answer from a trusted source, but I'll try to answer this myself with a simple example (and my current knowledge).

In general, training an MLP with back-propagation is implemented with matrices.

Time complexity of matrix multiplication

The time complexity of the matrix multiplication $M_{ij} M_{jk}$ is simply $O(i \cdot j \cdot k)$.

Here we assume the simplest multiplication algorithm; there exist other algorithms with somewhat better time complexity.
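To make the $O(i \cdot j \cdot k)$ count concrete, here is a minimal sketch of the naive (schoolbook) matrix multiplication in Python; the three nested loops are exactly where the cubic cost comes from. The function name and the use of nested lists are just for illustration.

    def matmul_naive(A, B):
        """Multiply an (i x j) matrix A by a (j x k) matrix B, given as nested lists."""
        i, j = len(A), len(A[0])
        j2, k = len(B), len(B[0])
        assert j == j2, "inner dimensions must match"
        # Three nested loops -> i * j * k scalar multiply-adds in total.
        C = [[0.0] * k for _ in range(i)]
        for a in range(i):
            for c in range(k):
                for b in range(j):
                    C[a][c] += A[a][b] * B[b][c]
        return C

    # Example: a (2 x 3) times a (3 x 2) product costs 2 * 3 * 2 = 12 multiply-adds.
    print(matmul_naive([[1, 2, 3], [4, 5, 6]], [[1, 0], [0, 1], [1, 1]]))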

Feedforward pass algorithm

The feedforward propagation algorithm is as follows.

First, to go from layer $i$ to layer $j$, you do

$S_j = W_{ji} Z_i$

Then you apply the activation function

$Z_j = f(S_j)$

If we have $N$ layers (including the input and output layers), this runs $N-1$ times.

As an example, let's compute the time complexity of the forward pass algorithm for an MLP with 4 layers, where $i$ denotes the number of nodes of the input layer, $j$ the number of nodes of the second layer, $k$ the number of nodes of the third layer and $l$ the number of nodes of the output layer.

Since there are 4 layers, you need 3 matrices to represent the weights between them. Let's denote them by $W_{ji}$, $W_{kj}$ and $W_{lk}$, where $W_{ji}$ is a matrix with $j$ rows and $i$ columns ($W_{ji}$ thus contains the weights going from layer $i$ to layer $j$).

Assume you have $t$ training examples. For propagating from layer $i$ to layer $j$, we first have

$S_{jt} = W_{ji} Z_{it}$

and this operation (i.e. matrix multiplication) has $O(j \cdot i \cdot t)$ time complexity. Then we apply the activation function

$Z_{jt} = f(S_{jt})$

and this has $O(j \cdot t)$ time complexity, because it is an element-wise operation.

So, in total, we have

$O(j \cdot i \cdot t + j \cdot t) = O(j \cdot t \cdot (i + 1)) = O(j \cdot i \cdot t)$

Using the same logic, for going $j \to k$, we have $O(k \cdot j \cdot t)$, and, for $k \to l$, we have $O(l \cdot k \cdot t)$.

In total, the time complexity for feedforward propagation will be

$O(j \cdot i \cdot t + k \cdot j \cdot t + l \cdot k \cdot t) = O(t \cdot (ij + jk + kl))$

I'm not sure if this can be simplified further or not. Maybe it's just $O(t \cdot i \cdot j \cdot k \cdot l)$, but I'm not sure.
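As a sanity check on the counting above, here is a minimal NumPy sketch of the forward pass for this 4-layer MLP (all names and sizes are hypothetical, and the sigmoid is just an example activation). Each matrix product corresponds to one of the three terms $jit$, $kjt$ and $lkt$.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Hypothetical layer sizes and a batch of t training examples.
    i, j, k, l, t = 784, 128, 64, 10, 32

    rng = np.random.default_rng(0)
    W_ji = rng.standard_normal((j, i))   # weights from layer i to layer j
    W_kj = rng.standard_normal((k, j))   # weights from layer j to layer k
    W_lk = rng.standard_normal((l, k))   # weights from layer k to layer l

    Z_it = rng.standard_normal((i, t))   # t input columns

    # Forward pass: the matrix product dominates the cost of each step.
    S_jt = W_ji @ Z_it       # O(j*i*t)
    Z_jt = sigmoid(S_jt)     # O(j*t), element-wise
    S_kt = W_kj @ Z_jt       # O(k*j*t)
    Z_kt = sigmoid(S_kt)     # O(k*t)
    S_lt = W_lk @ Z_kt       # O(l*k*t)
    Z_lt = sigmoid(S_lt)     # O(l*t)
    # Total: O(t * (i*j + j*k + k*l))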

Back-propagation algorithm

The back-propagation algorithm proceeds as follows. Starting from the output layer, $l \to k$, we compute the error signal $E_{lt}$, a matrix containing the error signals for the nodes at layer $l$:

$E_{lt} = f'(S_{lt}) \odot (Z_{lt} - O_{lt})$

where $\odot$ means element-wise multiplication. Note that $E_{lt}$ has $l$ rows and $t$ columns: each column is simply the error signal for training example $t$.

We then compute the "delta weights", $D_{lk} \in \mathbb{R}^{l \times k}$ (between layer $l$ and layer $k$)

$D_{lk} = E_{lt} Z_{tk}$

where $Z_{tk}$ is the transpose of $Z_{kt}$.

We then adjust the weights

$W_{lk} = W_{lk} - D_{lk}$

For $l \to k$, we thus have the time complexity $O(lt + lt + ltk + lk) = O(l \cdot t \cdot k)$.

Now, going back from $k \to j$. We first have

$E_{kt} = f'(S_{kt}) \odot (W_{kl} E_{lt})$

Then

$D_{kj} = E_{kt} Z_{tj}$

And then

$W_{kj} = W_{kj} - D_{kj}$

where $W_{kl}$ is the transpose of $W_{lk}$. For $k \to j$, we have the time complexity $O(kt + klt + ktj + kj) = O(k \cdot t \cdot (l + j))$.

And finally, for $j \to i$, we have $O(j \cdot t \cdot (k + i))$. In total, we have

$O(ltk + tk(l + j) + tj(k + i)) = O(t \cdot (lk + kj + ji))$

which is the same as for the feedforward pass algorithm. Since they are the same, the total time complexity for one epoch will be

$O(t \cdot (ij + jk + kl))$.

This time complexity is then multiplied by the number of iterations (epochs). So, we have

$O(n \cdot t \cdot (ij + jk + kl))$,
where $n$ is the number of iterations.
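Continuing the hypothetical NumPy sketch from the forward-pass section (same names and shapes), a minimal batch back-propagation loop could look like the following; the learning rate is an extra detail not spelled out above. Each line is annotated with the cost term it contributes, which makes the per-epoch total and the extra factor $n$ easy to see.

    def sigmoid_prime(s):
        z = sigmoid(s)
        return z * (1.0 - z)

    O_lt = rng.standard_normal((l, t))   # hypothetical targets, one column per example
    lr = 0.01                            # learning rate
    n_epochs = 5                         # the factor n in O(n*t*(ij + jk + kl))

    for epoch in range(n_epochs):
        # Forward pass: O(t*(ij + jk + kl)), as derived above.
        S_jt = W_ji @ Z_it; Z_jt = sigmoid(S_jt)
        S_kt = W_kj @ Z_jt; Z_kt = sigmoid(S_kt)
        S_lt = W_lk @ Z_kt; Z_lt = sigmoid(S_lt)

        # Backward pass.
        E_lt = sigmoid_prime(S_lt) * (Z_lt - O_lt)    # O(l*t), element-wise
        D_lk = E_lt @ Z_kt.T                          # O(l*t*k)
        E_kt = sigmoid_prime(S_kt) * (W_lk.T @ E_lt)  # O(k*l*t) + O(k*t)
        D_kj = E_kt @ Z_jt.T                          # O(k*t*j)
        E_jt = sigmoid_prime(S_jt) * (W_kj.T @ E_kt)  # O(j*k*t) + O(j*t)
        D_ji = E_jt @ Z_it.T                          # O(j*t*i)

        # Weight updates, element-wise: O(lk + kj + ji)
        W_lk -= lr * D_lk
        W_kj -= lr * D_kj
        W_ji -= lr * D_ji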

Notes

Note that these matrix operations can be greatly parallelized by GPUs.

Conclusion

We tried to find the time complexity for training a neural network that has 4 layers with $i$, $j$, $k$ and $l$ nodes, respectively, $t$ training examples and $n$ epochs. The result was $O(n \cdot t \cdot (ij + jk + kl))$.

We assumed the simplest form of matrix multiplication, which has cubic time complexity, and we used the batch gradient descent algorithm. The results for stochastic and mini-batch gradient descent should be the same. (Let me know if you think otherwise: note that batch gradient descent is the general form; with a small modification, it becomes stochastic or mini-batch.)

Also, if you use momentum optimization, you will have the same time complexity, because the extra matrix operations required are all element-wise, so they do not affect the time complexity of the algorithm.
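To see why momentum leaves the asymptotics unchanged, here is a hypothetical momentum update for one of the weight matrices from the sketch above: it only adds element-wise operations on arrays whose shapes already appear in plain gradient descent.

    # Momentum only adds element-wise work on matrices that already exist,
    # so the O(n*t*(ij + jk + kl)) total is unchanged.
    beta = 0.9
    V_lk = np.zeros_like(W_lk)        # velocity for W_lk, shape (l, k)

    V_lk = beta * V_lk + lr * D_lk    # O(l*k), element-wise
    W_lk = W_lk - V_lk                # O(l*k), element-wise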

I'm not sure what the results would be using other optimizers such as RMSprop.

Sources

The following article, http://briandolhansky.com/blog/2014/10/30/artificial-neural-networks-matrix-form-part-5, describes an implementation using matrices. Although that implementation uses a "row major" layout, the time complexity is not affected by this.

If you're not familiar with back-propagation, check this article:

http://briandolhansky.com/blog/2013/9/27/artificial-neural-networks-backpropagation-part-4


Your answer is great. I could not find any ambiguity so far, but you forgot the number-of-iterations part; just add it. And if no one answers in 5 days, I'll surely accept your answer.
DuttaA

@DuttaA I tried to put in everything I knew. It may not be 100% correct, so feel free to leave this unaccepted :) I'm also waiting for other answers to see what other points I missed.
M.kazem Akhgary

4

For the evaluation of a single pattern, you need to process all weights and all neurons. Given that every neuron has at least one weight, we can ignore the neurons and get $O(w)$, where $w$ is the number of weights, i.e., $n \cdot n_i$, assuming full connectivity between your layers.

The back-propagation has the same complexity as the forward evaluation (just look at the formula).

So, the complexity for learning $m$ examples, where each gets repeated $e$ times, is $O(w \cdot m \cdot e)$.

The bad news is that there's no formula telling you what number of epochs $e$ you need.
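As a quick, hypothetical illustration of this counting: in a fully connected network, $w$ is just the sum of products of consecutive layer sizes, and the training cost estimate multiplies it by the number of examples and epochs.

    # Hypothetical fully connected architecture (the i, j, k, l of the other answer).
    layer_sizes = [784, 128, 64, 10]

    # w = ij + jk + kl: one weight per pair of neurons in consecutive layers.
    w = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

    m = 60_000   # training examples
    e = 10       # epochs (no formula tells you this number in advance)

    operations = w * m * e   # the O(w*m*e) estimate, up to a constant factor
    print(f"w = {w}, estimated operations ~ {operations:.2e}")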


From the above answer, don't you think it depends on more factors?
DuttaA

1
@DuttaA No. There's a constant amount of work per weight, which gets repeated $e$ times for each of the $m$ examples. I didn't bother to compute the number of weights; I guess that's the difference.
maaartinus

1
I think the answers are the same. In my answer, I can assume the number of weights $w = ij + jk + kl$, basically the sum of $n \cdot n_i$ between layers, as you noted.
M.kazem Akhgary

1

A potential disadvantage of gradient-based methods is that they head for the nearest minimum, which is usually not the global minimum.

This means that the only difference between these search methods is the speed with which solutions are obtained, and not the nature of those solutions.

An important consideration is time complexity, which is the rate at which the time required to find a solution increases with the number of parameters (weights). In short, the time complexities of a range of different gradient-based methods (including second-order methods) seem to be similar.

Six different error functions exhibit a median run-time order of approximately $O(N^4)$ on the N-2-N encoder in this paper:

Lister, R. and Stone, J. V., "An Empirical Study of the Time Complexity of Various Error Functions with Conjugate Gradient Back Propagation", IEEE International Conference on Artificial Neural Networks (ICNN95), Perth, Australia, Nov 27 - Dec 1, 1995.

Summarised from my book: Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning.


Hi J. Stone. Thanks for trying to contribute to the site. However, please note that this is not a place for advertising yourself. That said, you can surely provide a link to your own books if they are useful for answering the question and you're not just trying to advertise yourself.
nbro

@nbro If James Stone can provide an insightful answer - and it seems so - then I'm fine with him also mentioning some of his work. Having experts on this network is a solid contribution to the quality and level.
javadba

Dear nbro, That is a fair comment. I dislike adverts too. But it is possible for a book and/or paper to be relevant to a question, as I believe it is in this case. regards, Jim Stone
James V Stone