PCA and SVD are equivalent in a certain sense, so there is no need to worry about their differences. Assume <bblatex>X</bblatex> is the standardized data matrix, with variables in the columns and samples in the rows. The SVD of <bblatex>X</bblatex> is <bblatex>X = UDV^T</bblatex>, and the matrix <bblatex>UD</bblatex> is exactly the matrix of principal component scores that is commonly used: since <bblatex>XV = UDV^TV = UD</bblatex>, projecting the data onto the right singular vectors (the loadings) gives <bblatex>UD</bblatex>, and the first k columns of <bblatex>UD</bblatex> are the first k principal components. In fact, most PCA routines in software compute the result via the SVD.

PCA itself is an eigendecomposition of the covariance matrix. (Because the covariance matrix is symmetric positive semi-definite, this eigendecomposition coincides with its singular value decomposition, but that is not the SVD we usually mean; when we say SVD, we normally refer to the SVD of <bblatex>X</bblatex> itself.) PCA comes with its own interpretation, e.g. the directions of largest variance, so one could argue that PCA focuses more on the covariance matrix while SVD focuses more on the data itself. But there is no fundamental difference between PCA and SVD.

From a matrix viewpoint, the truncated SVD is the best low-rank approximation of a matrix in the Frobenius norm, as well as the best low-dimensional hyperplane approximation to the data cloud, which gives it much wider applicability than PCA. As mentioned by nan.xiao, SVD is also widely used for sparse matrices and for matrix completion with missing values, applications that rest on this approximation interpretation and are not naturally expressed in terms of PCA.
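
Here is a minimal NumPy sketch of the points above (the toy data, variable names, and the choice of k are just for illustration): it checks that the PC scores obtained from the SVD of the standardized <bblatex>X</bblatex> match those from the eigendecomposition of the covariance matrix, and that the Frobenius error of the rank-k truncated SVD is exactly the part of the spectrum that gets discarded.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                 # toy data: 100 samples, 5 variables
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize the columns

# --- SVD route: X = U D V^T, PC scores are U D (equivalently X V) ---
U, d, Vt = np.linalg.svd(X, full_matrices=False)
scores_svd = U * d                            # n x p matrix of principal components

# --- PCA route: eigendecomposition of the covariance matrix ---
S = X.T @ X / (X.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(S)          # eigenvalues come out ascending
order = np.argsort(eigvals)[::-1]             # reorder to descending variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
scores_pca = X @ eigvecs

# Scores agree up to the sign of each component, and the eigenvalues
# are the squared singular values divided by n - 1
print(np.allclose(np.abs(scores_svd), np.abs(scores_pca)))
print(np.allclose(eigvals, d**2 / (X.shape[0] - 1)))

# --- Truncated SVD as a low-rank approximation (Frobenius norm) ---
k = 2
X_k = (U[:, :k] * d[:k]) @ Vt[:k, :]          # keep only the k largest singular values
err = np.linalg.norm(X - X_k, 'fro')
print(np.isclose(err, np.sqrt(np.sum(d[k:]**2))))  # error = discarded singular values
```

The sign flips are expected: each singular vector (and eigenvector) is only defined up to sign, so different routines may return components pointing in opposite directions.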