Graph representation learning aims to integrate node content with graph structure to learn node- or graph-level representations. Nevertheless, many existing graph learning methods perform poorly on data with a high level of heterophily, i.e., where a large proportion of edges connect nodes with different class labels. Recent efforts on this problem focus on improving the message passing mechanism. However, it remains unclear whether heterophily truly harms the performance of graph neural networks (GNNs). The key is to unfold the relationship between a node and its immediate neighbors, e.g., whether they are heterophilous or homophilous. From this perspective, we study the role of heterophily in graph representation learning before and after the relationships between connected nodes are disclosed. In particular, we propose an end-to-end framework that both learns the type of each edge (i.e., heterophilous or homophilous) and leverages this edge type information to improve the expressiveness of GNNs. We implement this framework in two different ways. Specifically, to avoid passing messages through heterophilous edges, we can optimize the graph structure toward homophily by dropping heterophilous edges identified by an edge classifier. Alternatively, the information about the presence of heterophilous neighbors can be exploited for feature learning, so a hybrid message passing approach is devised that aggregates homophilous neighbors and diversifies heterophilous neighbors based on the edge classification. Extensive experiments demonstrate that the proposed framework yields remarkable performance improvements for GNNs on multiple datasets across the full spectrum of homophily levels.
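The hybrid message passing variant can be illustrated with a minimal sketch, assuming PyTorch and hypothetical names (HybridMPLayer, p_homo) not taken from the paper; "diversify" is interpreted here as subtracting the aggregated heterophilous messages, which is one plausible reading of the abstract rather than the authors' actual implementation.

```python
# Hypothetical sketch (not the authors' code): a hybrid message passing layer
# that weights messages by per-edge homophily probabilities from an edge classifier.
import torch
import torch.nn as nn


class HybridMPLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)    # transform of the node itself
        self.lin_homo = nn.Linear(in_dim, out_dim)    # transform for homophilous neighbors
        self.lin_hetero = nn.Linear(in_dim, out_dim)  # transform for heterophilous neighbors

    def forward(self, x, edge_index, p_homo):
        # x:          [N, in_dim] node features
        # edge_index: [2, E] source/target indices of directed edges
        # p_homo:     [E] predicted probability that each edge is homophilous
        src, dst = edge_index
        n = x.size(0)

        msg_homo = self.lin_homo(x)[src] * p_homo.unsqueeze(-1)              # weighted homophilous messages
        msg_hetero = self.lin_hetero(x)[src] * (1.0 - p_homo).unsqueeze(-1)  # weighted heterophilous messages

        agg_homo = torch.zeros(n, msg_homo.size(-1)).index_add_(0, dst, msg_homo)
        agg_hetero = torch.zeros(n, msg_hetero.size(-1)).index_add_(0, dst, msg_hetero)

        # Aggregate homophilous neighbors; "diversify" by pushing away from heterophilous ones.
        return self.lin_self(x) + agg_homo - agg_hetero


# Usage on a toy graph with 4 nodes and 3 directed edges.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
p_homo = torch.tensor([0.9, 0.2, 0.7])  # e.g., produced by an edge classifier
out = HybridMPLayer(8, 16)(x, edge_index, p_homo)
print(out.shape)  # torch.Size([4, 16])
```

The edge-dropping variant would instead threshold p_homo and remove columns of edge_index whose probability falls below the threshold before running a standard GNN layer.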