The growing dependence on eTextbooks and Massive Open Online Courses (MOOCs) has led to an increase in the amount of student learning data. By carefully analyzing this data, educators can identify difficult exercises and evaluate the quality of the exercises used to teach a particular topic. In this study, we analyzed log data from one semester of OpenDSA eTextbook use to identify the most difficult exercises in a data structures course and to evaluate the quality of the course exercises. Our study is based on analyzing students' responses to the course exercises. To identify the difficult exercises, we applied two different approaches: the first analyzed student responses to exercises using item response theory (IRT) analysis with a latent trait model (LTM) technique, and the second determined which exercises were more difficult based on how students interacted with them. We computed several measures for every exercise, such as difficulty level, trial-and-error rate, and hint ratio. We generated an item characteristic curve, an item information curve, and a test information function for each exercise. To evaluate the quality of the exercises, we applied IRT analysis to the students' responses and computed the difficulty and discrimination index for each exercise, then classified each exercise as good or poor based on these two measures. Our findings show that most of the difficult exercises students struggled with related to algorithm analysis topics, and that six of the 56 exercises were classified as poor and could be rejected or improved. Some of these poor exercises do not differentiate between students of different abilities; the others favor low-ability students over high-ability students when answering correctly.
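The IRT quantities mentioned above can be sketched with the standard two-parameter logistic (2PL) model. This is a minimal illustration, not the paper's actual fitting pipeline; the classification thresholds below are hypothetical cutoffs chosen for the example, and a real study would estimate the parameters from response data (e.g. with an LTM fitting package).

```python
import math

def icc(theta, a, b):
    """2PL item characteristic curve: probability that a student of
    ability theta answers correctly, given discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta; summing this
    over all items gives the test information function."""
    p = icc(theta, a, b)
    return a * a * p * (1.0 - p)

def classify_item(a):
    """Toy quality rule based on the discrimination index (cutoff values
    are illustrative assumptions, not the paper's criteria).

    A negative discrimination means low-ability students are MORE likely
    to answer correctly than high-ability students; a near-zero value
    means the item barely separates ability levels. Both suggest a poor
    item that should be rejected or revised."""
    if a <= 0:
        return "poor (negative discrimination)"
    if a < 0.35:
        return "poor (low discrimination)"
    return "good"
```

For example, an item with difficulty `b = 0` has `icc(0, a, 0) = 0.5` for any positive discrimination, and its information peaks at that same ability level, which is why a test built from such items measures mid-ability students most precisely.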