Artificial intelligence (AI) plays a critical role in evolving 5G and emerging 6G networks, running on edge devices and addressing resource management challenges. The burgeoning number of edge devices draws attention to the potential of low-earth orbit (LEO) satellite networks, whose onboard computing capabilities can support edge inference. This paper explores LEO scenarios in which multiple remote sensing edge AI inference tasks concurrently process data from a single source. However, because different AI applications often share functionally identical components, the traditional monolithic edge AI architecture must deploy these components repeatedly and falls short in efficiently harnessing the heterogeneous resources of LEO satellite networks. To solve this problem, we adopt a microservice architecture that decouples each AI application into several independent microservices, allowing shared functions to be reused. However, communication among multiple microservices introduces high latency, so a deployment strategy is needed that fully utilizes available resources to reduce service latency. We present a microservice deployment model that minimizes the total service latency across all AI applications subject to hardware, energy, and memory constraints. This latency optimization problem is reformulated as a Markov decision process (MDP) to effectively handle the time-varying transmission rates caused by satellite mobility. To increase training data utilization in this dynamic environment, we employ a Proximal Policy Optimization (PPO) based reinforcement learning algorithm. Finally, we obtain a sub-optimal solution with minimal accuracy loss and an acceptable solution time.