Mean-field theory is commonly used to analyze the dynamics of large neural network models. In this approach, the interactions of the original network are replaced by appropriately structured noise that drives uncoupled units in a self-consistent manner. This allows properties of the network dynamics to be predicted and the behavior of the network to be understood as a whole. Results from random matrix theory have been used to relate the connectivity structure of neural networks to their mean-field dynamics. In my talk I will explain the mean-field approach, discuss its relation to random matrix theory, and analyze how the dynamics of neural network models depend on their connectivity structure. I will provide examples of networks that mean-field theory describes accurately, as well as examples, analyzed using random matrix theory, in which small modifications of the connectivity matrix can produce large deviations from the mean-field predictions.
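As a concrete illustration of the random-matrix connection (a minimal sketch, not taken from the talk itself): in a standard random-network model with i.i.d. Gaussian couplings of variance g²/N, Girko's circular law predicts that the eigenvalues of the connectivity matrix fill a disk of radius g in the complex plane as N grows. The gain parameter g and the network size N below are illustrative choices.

```python
import numpy as np

# Illustrative sketch: eigenvalue spectrum of a random connectivity matrix.
# Assumed model: N units, i.i.d. Gaussian weights J_ij ~ N(0, g^2 / N).
rng = np.random.default_rng(0)
N, g = 1000, 1.5

J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
radii = np.abs(np.linalg.eigvals(J))

# The circular law predicts the spectrum fills a disk of radius g,
# so the largest eigenvalue modulus should be close to g for large N.
print(radii.max())  # close to g = 1.5
```

Structured modifications of J (for example, a low-rank perturbation) can push outlier eigenvalues outside this disk, which is one way small changes in connectivity produce large deviations from mean-field predictions.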