In order to gain insight into the role of substructure in complex systems, we investigate attractor neural network models with substructure. Partially subdivided neural networks enable the storage of highly correlated states and may also be useful as a model of brain function. Two possible biological examples are the separation of color, shape, and motion in visual processing, and the use of parts of speech in language. With this motivation, we analyze the ability of partially subdivided attractor networks to store both imprinted neural states and composite states formed from combinations of imprinted subnetwork states. As the strength of the intersubnetwork synapses is reduced, more combinations of imprinted substates are recalled by the network. We perform a mean-field analysis of the stability of composite patterns. Analytic solution of the equations at zero temperature shows how the stability of a particular composite pattern is controlled by the number of subdivisions that represent each imprinted pattern. Numerical minimization of the free energy is used to obtain phase-transition diagrams for networks with 2, 3, or 4 subdivisions.
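The setup described above can be illustrated with a minimal numerical sketch: a Hebbian (Hopfield-type) network whose synapses between different subdivisions are weakened by a factor g < 1, initialized in a composite state that mixes substates of two imprinted patterns. The specific values of N, Q, p, and g below are illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, Q, p = 200, 2, 2          # neurons, subdivisions, imprinted patterns (illustrative)
block = N // Q               # neurons per subdivision
g = 0.2                      # reduced strength of intersubnetwork synapses

# Imprint p random +/-1 patterns with the Hebb rule.
xi = rng.choice([-1, 1], size=(p, N))
J = (xi.T @ xi).astype(float) / N
np.fill_diagonal(J, 0.0)

# Scale couplings between neurons in different subdivisions by g < 1.
sub = np.repeat(np.arange(Q), block)
J *= np.where(sub[:, None] == sub[None, :], 1.0, g)

# Composite initial state: subdivision 0 carries pattern 0's substate,
# subdivision 1 carries pattern 1's substate.
s = np.concatenate([xi[0, :block], xi[1, block:]]).astype(float)

# Zero-temperature asynchronous dynamics (sign of the local field).
for _ in range(20 * N):
    i = rng.integers(N)
    s[i] = 1.0 if J[i] @ s >= 0 else -1.0

# Overlap of each subdivision with the substate it was initialized in;
# overlaps near 1 indicate the composite pattern is a stable attractor.
m0 = s[:block] @ xi[0, :block] / block
m1 = s[block:] @ xi[1, block:] / block
print(m0, m1)
```

Rerunning with g = 1 (uniform couplings) lets one observe the effect claimed above: with strong intersubnetwork synapses the dynamics tend to pull the whole network onto a single imprinted pattern, while weakened ones leave the composite state stable.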