COTS: Collaborative Two-Stream Vision-Language Pre-Training Model for Cross-Modal Retrieval — arXiv