In this paper, we present a theoretical discussion of uncertainty in deep learning neural networks based on the classical Rademacher complexity and the Shannon entropy. First, we show that the classical Rademacher complexity and the Shannon entropy are closely related quantities by their definitions. Second, based on Shannon's mathematical theory of communication, we derive a criterion that ensures AI correctness and accuracy in classification problems. Last but not least, building on Peter Bartlett's work, we show both a relaxed condition and a stricter condition that guarantee correctness and accuracy in AI classification. By elucidating in this paper a criterion in terms of Shannon entropy based on Shannon's theory, it becomes easier to explore analogous criteria in terms of other complexity measures, such as the Vapnik–Chervonenkis dimension and the Gaussian complexity, by taking advantage of the relations established in other references. A criterion of Shannon entropy close to 0.5 is derived in this paper for the theoretical investigation of AI accuracy and correctness in classification problems.
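To make the two central quantities concrete, the following is a minimal illustrative sketch (not the paper's derivation) of the binary Shannon entropy and a Monte Carlo estimate of the empirical Rademacher complexity of a finite hypothesis class; the function names, the toy hypothesis class, and the sample size are assumptions chosen for illustration only.

```python
import math
import random

def binary_entropy(p):
    """Shannon entropy (in bits) of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def empirical_rademacher(predictions, n_trials=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of a
    finite hypothesis class, given each hypothesis's +1/-1 predictions on
    a fixed sample of size n: E_sigma[ max_h (1/n) sum_i sigma_i h(x_i) ]."""
    rng = random.Random(seed)
    n = len(predictions[0])
    total = 0.0
    for _ in range(n_trials):
        # Draw Rademacher signs sigma_i, uniform on {-1, +1}.
        sigma = [rng.choice((-1, 1)) for _ in range(n)]
        # Supremum over the (finite) hypothesis class of the correlation
        # between the signs and each hypothesis's predictions.
        best = max(
            sum(s * h_i for s, h_i in zip(sigma, h)) / n
            for h in predictions
        )
        total += best
    return total / n_trials

# Binary entropy peaks at 1 bit for p = 0.5 and is symmetric about it.
print(binary_entropy(0.5))   # 1.0
print(binary_entropy(0.9))

# Toy class of two constant classifiers (always +1, always -1) on a
# hypothetical sample of size 8.
H = [[1] * 8, [-1] * 8]
print(empirical_rademacher(H))
```

Both quantities are averages over a probability distribution of a function of the outcomes, which is one way to see the definitional closeness the paper discusses; the entropy is maximized exactly when the class label is most uncertain (p = 0.5).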