Artificial neural networks (ANNs) are among the most powerful methods in machine learning, and they are increasingly applied to a wide range of problems. Nevertheless, owing to the high dimensionality of the search space, training the network remains one of the biggest ANN challenges to this day. It is particularly hard to determine the right values for parameters such as connection weights and biases. Poorly chosen parameters can lead to inaccurate results, increased execution time, and prolonged network development. Genetic algorithms (GAs) have proven to be a promising technique for training ANNs. However, the basic GA suffers from slow and premature convergence, becoming trapped in local optima of the search space. To overcome this drawback, in this work we present a hybrid of the GA and the artificial bee colony (ABC) swarm intelligence metaheuristic. By incorporating an exploration procedure from the ABC algorithm, the GA's tendencies toward local optima stagnation and premature convergence are suppressed. The proposed hybrid approach was applied to training feedforward multi-layer perceptrons (MLPs). Simulations were performed on three standard medical datasets. Based on the simulation results and the obtained performance metrics, the proposed method shows robust performance in terms of the classification error rate on the test set.
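The general idea of such a hybridization can be illustrated with a minimal sketch (this is an assumption about the scheme, not the authors' exact algorithm): a standard GA generation loop is followed by an ABC-style "employed bee" exploration step that perturbs each solution toward a random peer and keeps the change only if it improves fitness. The population size, mutation rate, bounds, and the sphere objective standing in for the MLP's training error are all illustrative placeholders.

```python
import random

DIM = 20          # e.g. number of MLP weights and biases (assumption)
POP = 30          # population size (placeholder)
GENS = 100        # number of generations (placeholder)
LOW, HIGH = -1.0, 1.0

def fitness(x):
    # Placeholder objective: the sphere function stands in for the
    # network's classification error; lower is better.
    return sum(v * v for v in x)

def crossover(a, b):
    # One-point crossover between two parent solutions.
    cut = random.randrange(1, DIM)
    return a[:cut] + b[cut:]

def mutate(x, rate=0.05):
    # Gaussian mutation applied gene-wise with a small probability.
    return [v + random.gauss(0, 0.1) if random.random() < rate else v
            for v in x]

def abc_explore(pop, fits):
    # ABC "employed bee" move: perturb one dimension of each solution
    # relative to a random peer; greedy selection keeps improvements.
    for i in range(len(pop)):
        k = random.randrange(len(pop))
        while k == i:
            k = random.randrange(len(pop))
        j = random.randrange(DIM)
        cand = pop[i][:]
        phi = random.uniform(-1, 1)
        cand[j] = min(max(cand[j] + phi * (cand[j] - pop[k][j]), LOW), HIGH)
        f = fitness(cand)
        if f < fits[i]:
            pop[i], fits[i] = cand, f

def run():
    pop = [[random.uniform(LOW, HIGH) for _ in range(DIM)]
           for _ in range(POP)]
    fits = [fitness(x) for x in pop]
    for _ in range(GENS):
        # GA step: binary tournament selection, crossover, mutation.
        new = []
        for _ in range(POP):
            a, b = random.sample(range(POP), 2)
            p1 = pop[a] if fits[a] < fits[b] else pop[b]
            a, b = random.sample(range(POP), 2)
            p2 = pop[a] if fits[a] < fits[b] else pop[b]
            new.append(mutate(crossover(p1, p2)))
        pop = new
        fits = [fitness(x) for x in pop]
        # ABC step: local exploration counters premature convergence.
        abc_explore(pop, fits)
    return min(fits)
```

In an actual MLP-training setting, a solution vector would encode all weights and biases of the network, and `fitness` would evaluate the classification error produced by those parameters on the training data.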