Neural networks for univariate and multivariate regression
PDF (Portuguese)

Keywords

Artificial neural networks
Regression
Genetic algorithm
Simulated annealing
Backpropagation
Optimization

How to cite

1. Pereira GC, Custodio R. Neural networks for univariate and multivariate regression. Rev. Chemkeys [Internet]. 2021 Sep. 9 [cited 2025 Oct. 26];3(00):e021003. Available from: https://econtents.sbu.unicamp.br/inpec/index.php/chemkeys/article/view/15880

Abstract

Artificial neural networks have gained prominence in the approximation of univariate and multivariate functions owing to the high approximation capacity of this class of models. This article describes regression models based on neural networks, together with the algorithms commonly used to optimize them. The performance of this type of model is illustrated by approximating a univariate function that relates the liquid-phase mole fraction of one component of a water-acetone mixture to its vapor-phase mole fraction. The model's performance is also compared with that of models based on classical regression methods applied to the same problem. The PYTHON code for building the neural network model discussed here is presented at the end of the text.
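
The full PYTHON listing referred to in the abstract accompanies the PDF and is not reproduced on this page. As a minimal sketch of the kind of model described, assuming a single hidden layer trained by plain backpropagation, the Python code below fits a small network to a univariate curve; the network size, learning rate, and the synthetic data standing in for the water-acetone equilibrium points are illustrative assumptions, not values taken from the article.

import numpy as np

# Synthetic stand-in for the liquid-phase (x) vs. vapor-phase (y) mole
# fraction data; the article's actual water-acetone measurements are not
# reproduced here.
x = np.linspace(0.0, 1.0, 25).reshape(-1, 1)
y = x ** 0.4 * (1.0 - 0.3 * x)

rng = np.random.default_rng(0)
n_hidden = 8                                  # assumed hidden-layer width
W1 = rng.normal(0.0, 0.5, (1, n_hidden))      # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1))      # hidden -> output weights
b2 = np.zeros(1)
lr = 0.05                                     # assumed learning rate

for _ in range(20000):
    # Forward pass: tanh hidden layer, linear output.
    h = np.tanh(x @ W1 + b1)
    err = (h @ W2 + b2) - y

    # Backpropagation: gradients of the mean squared error.
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    delta = (err @ W2.T) * (1.0 - h ** 2)     # tanh derivative
    grad_W1 = x.T @ delta / len(x)
    grad_b1 = delta.mean(axis=0)

    # Gradient-descent update of all weights and biases.
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1

print("final mean squared error:", float((err ** 2).mean()))

Replacing the gradient-descent update with a derivative-free search over the same weight vector (a genetic algorithm or simulated annealing) gives the alternative optimizers the abstract mentions.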


Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Copyright 2021 Gabriel César Pereira, Rogério Custodio
