Retrieving and analyzing transit feeds requires analytical workflows that can handle the massive volume of data streams needed to understand the dynamics of transit networks, which are entirely deterministic in the geographical space in which they take place. In this paper, we consider the fundamental issues in developing a streaming analytical workflow for analyzing the continuous arrival of multiple, unbounded transit data feeds, automatically processing and enriching them with additional information that captures higher-level concepts according to a particular mobility context. This workflow consists of three tasks: (1) stream data retrieval, which creates time windows; (2) data cleaning, which handles missing, overlapping, or redundant data; and (3) data contextualization, which computes actual arrival and departure times as well as the stops and moves during a bus trip, and also performs mobility context computation. The workflow was implemented in a Hadoop cloud ecosystem using data streams from the CODIAC Transit System of the city of Moncton, NB. The Map() function of MapReduce retrieves and bundles the data streams into numerous clusters, which are then processed in parallel by the Reduce() function to execute the data contextualization step. The results validate the need for cloud computing to achieve high performance and scalability; however, given the observed computing and networking delays, it is clear that data cleaning tasks should not be deployed solely in a cloud environment, paving the way to combine cloud with fog computing in the near future.
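The map/bundle/reduce pattern described above can be sketched in miniature. The following is an illustrative sketch only, not the paper's Hadoop implementation: the record fields (`vehicle_id`, `timestamp`, `speed`) and the speed-threshold stop/move rule are assumptions introduced for demonstration.

```python
from collections import defaultdict

def map_phase(records):
    """Map(): emit (vehicle_id, record) pairs, bundling the feed per vehicle.
    The field names here are hypothetical, for illustration only."""
    for rec in records:
        yield rec["vehicle_id"], rec

def shuffle(pairs):
    """Group mapped pairs by key, as the MapReduce framework would
    before handing each group to a reducer."""
    groups = defaultdict(list)
    for key, rec in pairs:
        groups[key].append(rec)
    return groups

def reduce_phase(vehicle_id, recs):
    """Reduce(): contextualize one vehicle's time window by segmenting it
    into 'stop' and 'move' episodes -- a simplified stand-in for the
    paper's stop/move and arrival/departure computation."""
    recs = sorted(recs, key=lambda r: r["timestamp"])
    episodes = []
    for r in recs:
        state = "stop" if r["speed"] < 1.0 else "move"  # assumed threshold
        if episodes and episodes[-1][0] == state:
            # Extend the current episode's end time.
            episodes[-1] = (state, episodes[-1][1], r["timestamp"])
        else:
            episodes.append((state, r["timestamp"], r["timestamp"]))
    return vehicle_id, episodes

# A tiny synthetic window of GPS readings (timestamps in seconds).
feed = [
    {"vehicle_id": "bus_7", "timestamp": 0,  "speed": 8.2},
    {"vehicle_id": "bus_7", "timestamp": 30, "speed": 0.3},
    {"vehicle_id": "bus_7", "timestamp": 60, "speed": 0.1},
    {"vehicle_id": "bus_7", "timestamp": 90, "speed": 6.5},
]

results = dict(
    reduce_phase(vid, recs) for vid, recs in shuffle(map_phase(feed)).items()
)
# results["bus_7"] -> [("move", 0, 0), ("stop", 30, 60), ("move", 90, 90)]
```

Because each reducer sees only one vehicle's bundle, the contextualization step parallelizes naturally across keys, which is the property the workflow exploits in the Hadoop deployment.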