This paper proposes a new approximate dynamic programming algorithm for the infinite-horizon optimal control problem of weakly coupled nonlinear systems. The algorithm is implemented with a three-critic/four-actor approximator structure, in which the critic approximators learn the optimal costs and the actor approximators learn the optimal control policies. Both critic and actor approximators are adapted simultaneously in continuous time, a method commonly known as synchronous policy iteration. The adaptive-control nature of the algorithm requires a persistence-of-excitation condition to be guaranteed a priori, but this requirement can be relaxed by using previously stored data concurrently with current data in the updates of the critic approximators. Appropriate robustifying terms are added to the controllers to eliminate the effects of the residual approximation errors, yielding asymptotic stability of the equilibrium point of the closed-loop system. Simulation results on a sixth-order dynamical example show the effectiveness of the proposed approach.
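To make the abstract's ingredients concrete, the following is a minimal sketch of synchronous actor–critic adaptation with concurrent learning of stored data, for an illustrative scalar linear plant with a quadratic basis (not the paper's weakly coupled nonlinear setting, nor its three-critic/four-actor structure; the plant, gains, and replay rule are all assumptions for illustration). The critic weight is driven by a normalized gradient step on the Bellman residual evaluated both at the current state and at previously recorded states, so excitation need not persist along the decaying trajectory; the actor weight tracks the critic:

```python
import numpy as np

# Illustrative scalar plant x' = a*x + b*u with running cost q*x^2 + r*u^2.
# The optimal value is V*(x) = p*x^2, with p the positive Riccati root of
#   2*a*p - (b^2/r)*p^2 + q = 0.
a, b, q, r = -1.0, 1.0, 1.0, 1.0
p_true = (a + np.sqrt(a**2 + b**2 * q / r)) * r / b**2

dt, T = 1e-3, 10.0
alpha, beta = 5.0, 5.0   # critic / actor adaptation gains (chosen arbitrarily)
x = 1.0                  # initial state
wc, wa = 0.0, 0.0        # critic and actor weights, both estimating p
memory = []              # stored states replayed concurrently with current data

def bellman_terms(w_critic, w_actor, xs):
    """Regressor and Bellman residual at state xs under the current actor."""
    u = -(b / r) * w_actor * xs          # actor policy u = -(b/r)*wa*x
    xdot = a * xs + b * u
    sigma = 2.0 * xs * xdot              # gradient of w*x^2 along the flow
    delta = w_critic * sigma + q * xs**2 + r * u**2   # Bellman residual
    return sigma, delta

for k in range(int(T / dt)):
    u = -(b / r) * wa * x
    xdot = a * x + b * u
    # Critic: normalized gradient descent on the instantaneous residual ...
    sigma, delta = bellman_terms(wc, wa, x)
    dwc = -alpha * sigma * delta / (1.0 + sigma**2) ** 2
    # ... plus concurrent-learning terms from stored states, which keep the
    # update informative even after the trajectory has decayed to zero.
    for xs in memory:
        s, d = bellman_terms(wc, wa, xs)
        dwc += -alpha * s * d / (1.0 + s**2) ** 2
    # Actor: adapted simultaneously, driven toward the critic estimate.
    dwa = -beta * (wa - wc)
    # Euler integration of plant and weight dynamics.
    x += dt * xdot
    wc += dt * dwc
    wa += dt * dwa
    # Record a few early, well-excited states for later replay.
    if k % 500 == 0 and len(memory) < 10:
        memory.append(x)

print("critic weight:", wc, " Riccati solution:", p_true)
```

In this toy setting both weights settle near the Riccati solution `p_true`; the stored early states carry the excitation once the regulated trajectory has converged, which is the role the concurrent data play in relaxing the persistence-of-excitation requirement.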