3D object detection using point clouds is an essential task for autonomous driving. With the development of roadside infrastructure, roadside perception can extend the view range of autonomous vehicles through communication technology. Computation time and power consumption are two main concerns when deploying object detection models, and a lightweight detection model running on an embedded system is a practical solution for both the roadside and the vehicle side. In this study, a 3D Point cLoud Object deTection (PLOT) network is proposed to reduce the heavy computational load and ensure real-time object detection performance on an embedded system. First, a bird’s eye view representation of the point cloud is computed using a pillar-based encoding method. Then, a cross-stage partial network-based backbone and a feature pyramid network-based neck are implemented to generate high-dimensional feature maps. Finally, a multi-output head with a shared convolutional layer is attached to simultaneously predict the classes, bounding boxes, and orientations of the objects. Extensive experiments on the Waymo Open Dataset and our own dataset are conducted to demonstrate the accuracy and efficiency of the proposed method.
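The following is a minimal sketch of how the described pipeline could be assembled, assuming a PyTorch-style implementation; the module names (PillarEncoder output assumed as a BEV pseudo-image, CSPBlock, PLOTSketch), layer counts, channel sizes, and anchor configuration are illustrative assumptions, not the authors' actual code.

```python
# Illustrative sketch of the PLOT-style pipeline described in the abstract.
# Assumptions: PyTorch, a precomputed 64-channel pillar BEV pseudo-image as input,
# simplified layer counts; this is NOT the authors' implementation.
import torch
import torch.nn as nn


class CSPBlock(nn.Module):
    """Cross-stage partial block: split channels, convolve one part, re-merge."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.part_conv = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1), nn.BatchNorm2d(half), nn.ReLU(),
        )
        self.merge = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)
        return self.merge(torch.cat([a, self.part_conv(b)], dim=1))


class PLOTSketch(nn.Module):
    """BEV pseudo-image -> CSP backbone -> FPN-style neck -> shared-conv multi-output head."""
    def __init__(self, bev_channels=64, num_classes=3, num_anchors=2):
        super().__init__()
        # Backbone: two downsampling stages, each followed by a CSP block.
        self.stage1 = nn.Sequential(nn.Conv2d(bev_channels, 128, 3, stride=2, padding=1),
                                    nn.ReLU(), CSPBlock(128))
        self.stage2 = nn.Sequential(nn.Conv2d(128, 256, 3, stride=2, padding=1),
                                    nn.ReLU(), CSPBlock(256))
        # Neck: upsample the deep feature map and fuse it with the shallower one (FPN style).
        self.lateral = nn.Conv2d(128, 256, 1)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        # Shared convolution feeding three parallel output branches.
        self.shared = nn.Sequential(nn.Conv2d(256, 256, 3, padding=1), nn.ReLU())
        self.cls_head = nn.Conv2d(256, num_anchors * num_classes, 1)
        self.box_head = nn.Conv2d(256, num_anchors * 7, 1)   # x, y, z, w, l, h, yaw
        self.dir_head = nn.Conv2d(256, num_anchors * 2, 1)   # orientation bin

    def forward(self, bev):
        c1 = self.stage1(bev)
        c2 = self.stage2(c1)
        fused = self.lateral(c1) + self.up(c2)
        feat = self.shared(fused)
        # Classes, bounding boxes, and orientations are predicted at the same time.
        return self.cls_head(feat), self.box_head(feat), self.dir_head(feat)


# Usage example with a hypothetical 64-channel BEV pseudo-image of size 496 x 432.
if __name__ == "__main__":
    bev = torch.randn(1, 64, 496, 432)
    cls_out, box_out, dir_out = PLOTSketch()(bev)
    print(cls_out.shape, box_out.shape, dir_out.shape)
```

The single shared convolution before the three 1x1 output branches reflects the abstract's multi-output head design: one feature map is computed once and reused for classification, box regression, and orientation, which keeps the head lightweight for embedded deployment.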