SMO, Random Forest and Bayes Net algorithms: why does Random Forest perform better?

I analyzed a dataset with these 3 different algorithms. As far as I can see, Random Forest performs better in most cases.
My dataset consists of 4000 instances split evenly between two classes (2000 in class A, 2000 in class B). I use 207 metrics (features) to classify the instances, but I also repeat the experiments using only the top 20 or top 10 metrics ranked by Information Gain.
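For reference, here is a minimal sketch of how such a comparison can be run (this assumes the Weka Java API with default classifier parameters; "dataset.arff", the class index, and the number of selected attributes are placeholders for my actual setup):

```java
import java.util.Random;

import weka.attributeSelection.InfoGainAttributeEval;
import weka.attributeSelection.Ranker;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.BayesNet;
import weka.classifiers.functions.SMO;
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.supervised.attribute.AttributeSelection;

public class CompareClassifiers {
    public static void main(String[] args) throws Exception {
        // Load the 4000-instance dataset ("dataset.arff" is a placeholder path).
        Instances data = DataSource.read("dataset.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Keep only the top 20 attributes ranked by Information Gain
        // (use 10 for the smaller feature set, or skip this filter
        // to train on all 207 metrics).
        AttributeSelection select = new AttributeSelection();
        Ranker ranker = new Ranker();
        ranker.setNumToSelect(20);
        select.setEvaluator(new InfoGainAttributeEval());
        select.setSearch(ranker);
        select.setInputFormat(data);
        Instances reduced = Filter.useFilter(data, select);

        // The three classifiers being compared, with default parameters.
        Classifier[] models = { new RandomForest(), new BayesNet(), new SMO() };
        for (Classifier model : models) {
            // 10-fold cross-validation with a fixed seed so the runs are comparable.
            Evaluation eval = new Evaluation(reduced);
            eval.crossValidateModel(model, reduced, 10, new Random(1));
            System.out.printf("%s: %.2f%% correct, AUC = %.3f%n",
                    model.getClass().getSimpleName(),
                    eval.pctCorrect(),
                    eval.areaUnderROC(0));
        }
    }
}
```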
My question is: why does one algorithm sometimes perform better than another (here I am only comparing these 3)?
I have read about them, but I would like a fuller picture of why Random Forest beats Bayes Net in some cases and loses to it in others, and why, in my experiments, SMO is always worse than the other two. Thank you so much!