1 Introduction

1.1 This article revisits the OpenPose body-skeleton dance effect implemented with Python and OpenCV, refined and written up to answer a question from one of my followers.

1.2 It is based on my first article: "OpenPose: recreating the body-skeleton dance effect that is popular on Douyin (TikTok)".

1.3 The code for the single-person dance skeleton video has been trimmed, optimized, revised, and commented to improve readability and usability.

1.4 The goal is to get comfortable with Python and OpenCV programming and with the cv2 API.

1.5 Environment: Python 3.8, OpenCV 4.4.0, deepin Linux, and Microsoft's VS Code editor.

2 Skeleton dance over the original frames

2.1 Effect (video excerpt)

2.2 My first article contains the original code; the revised version follows.

Step 1: import the modules

```python
import cv2
import time
import numpy as np
```

Step 2: load the model (downloading the model files was covered previously, so it is omitted here)

```python
protoFile = "/home/xgj/Desktop/learnopencv3/OpenPoseOK/pose/coco/pose_deploy_linevec.prototxt"
weightsFile = "/home/xgj/Desktop/learnopencv3/OpenPoseOK/pose/coco/pose_iter_440000.caffemodel"

# Parameter initialization; best left at the defaults: 18 keypoints
nPoints = 18
# Keypoint pair relations, fixed
POSE_PAIRS = [[1, 0], [1, 2], [1, 5], [2, 3], [3, 4], [5, 6], [6, 7],
              [1, 8], [8, 9], [9, 10], [1, 11], [11, 12], [12, 13],
              [0, 14], [0, 15], [14, 16], [15, 17]]

inWidth = 368
inHeight = 368
threshold = 0.1

# Load the model
net = cv2.dnn.readNetFromCaffe(protoFile, weightsFile)
# Run on the CPU, not the GPU
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
print("Using CPU device")
```

Step 3: read the local single-person dance video with cv2

```python
# cap = cv2.VideoCapture(0)  # real-time webcam detection is a bit laggy, so it is skipped
# Read the video file; note it must contain a single person's pose,
# since multiple people currently trigger a bug
cap = cv2.VideoCapture("/home/xgj/Desktop/learnopencv3/OpenPoseOK/sample_video.mp4")
hasFrame, frame = cap.read()

# Write the result video into the same directory
vid_writer = cv2.VideoWriter("/home/xgj/Desktop/learnopencv3/OpenPoseOK/output2.avi",
                             cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), 10,
                             (frame.shape[1], frame.shape[0]))
```

Step 4: the main loop

```python
while cv2.waitKey(1) < 0:
    # Start timing
    t = time.time()
    hasFrame, frame = cap.read()
    frameCopy = np.copy(frame)

    # Exit when the video ends
    if not hasFrame:
        cv2.waitKey()
        break

    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]

    inpBlob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (inWidth, inHeight),
                                    (0, 0, 0), swapRB=False, crop=False)
    net.setInput(inpBlob)
    output = net.forward()
    H = output.shape[2]
    W = output.shape[3]

    # Empty list to store the detected keypoints
    points = []

    for i in range(nPoints):
        # Confidence map of the corresponding body part
        probMap = output[0, i, :, :]

        # Find the global maxima of the probMap
        minVal, prob, minLoc, point = cv2.minMaxLoc(probMap)

        # Scale the point to fit on the original image
        x = (frameWidth * point[0]) / W
        y = (frameHeight * point[1]) / H

        if prob > threshold:
            cv2.circle(frameCopy, (int(x), int(y)), 8, (0, 255, 255),
                       thickness=-1, lineType=cv2.FILLED)
            cv2.putText(frameCopy, "{}".format(i), (int(x), int(y)),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2,
                        lineType=cv2.LINE_AA)
```
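As an aside, the heatmap-to-pixel scaling used in the keypoint loop can be sanity-checked in isolation. The sketch below uses a synthetic confidence map and NumPy's argmax in place of cv2.minMaxLoc; the map size, frame size, and peak location are made-up illustration values, not outputs of the real network.

```python
import numpy as np

# Synthetic confidence map; the sizes here are illustrative only
H, W = 46, 46
probMap = np.zeros((H, W), dtype=np.float32)
probMap[10, 20] = 0.9  # pretend the network fired at row 10, column 20

# Equivalent of cv2.minMaxLoc's maximum: note row/col versus x/y ordering
row, col = np.unravel_index(np.argmax(probMap), probMap.shape)
prob = float(probMap[row, col])

# Scale grid coordinates back to original-frame pixels, as in the loop
frameWidth, frameHeight = 640, 480
x = frameWidth * col / W
y = frameHeight * row / H
```

A peak at grid cell (row 10, col 20) of a 46x46 map lands at roughly pixel (278, 104) on a 640x480 frame, which is exactly what cv2.circle then draws.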
The loop body continues:

```python
            # Add the point to the list if the probability is greater than the threshold
            points.append((int(x), int(y)))
        else:
            points.append(None)

    # Draw the skeleton
    for pair in POSE_PAIRS:
        partA = pair[0]
        partB = pair[1]

        if points[partA] and points[partB]:
            # Draw the limb lines and joint dots; the colors can be customized
            cv2.line(frame, points[partA], points[partB], (0, 255, 255), 3,
                     lineType=cv2.LINE_AA)
            cv2.circle(frame, points[partA], 8, (0, 0, 255), thickness=-1,
                       lineType=cv2.FILLED)
            cv2.circle(frame, points[partB], 8, (0, 0, 255), thickness=-1,
                       lineType=cv2.FILLED)

    # Show the elapsed time on the frame
    cv2.putText(frame, "time taken = {:.2f} sec".format(time.time() - t),
                (50, 50), cv2.FONT_HERSHEY_COMPLEX, .8, (255, 50, 0), 2,
                lineType=cv2.LINE_AA)

    # Live display
    cv2.imshow('Output-Skeleton', frame)
    # Write the frame into the output video
    vid_writer.write(frame)

vid_writer.release()
```

3 Skeleton-only dance

3.1 Effect:

3.2 The steps are simple; just mind the file paths. With the CPU approach, this video takes roughly 30 minutes to process.

3.3 Code. Steps 1 and 2 (imports and model loading) and step 3 (opening the video) are identical to section 2, except that the output file becomes output3.avi:

```python
vid_writer = cv2.VideoWriter("/home/xgj/Desktop/learnopencv3/OpenPoseOK/output3.avi",
                             cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), 10,
                             (frame.shape[1], frame.shape[0]))
```

Step 4: the main loop, with every modification marked by a "Changed" comment:

```python
while cv2.waitKey(1) < 0:
    t = time.time()
    hasFrame, frame = cap.read()
    frameCopy = np.copy(frame)

    # Changed: add an all-black output image of the same size, on which
    # the skeleton and keypoint numbers are drawn
    out = np.zeros(frame.shape, np.uint8)

    # Unchanged from here
    if not hasFrame:
        cv2.waitKey()
        break

    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]

    inpBlob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (inWidth, inHeight),
                                    (0, 0, 0), swapRB=False, crop=False)
    net.setInput(inpBlob)
    output = net.forward()
    H = output.shape[2]
    W = output.shape[3]

    # Empty list to store the detected keypoints
    points = []

    for i in range(nPoints):
        # Confidence map of the corresponding body part
        probMap = output[0, i, :, :]

        # Find the global maxima of the probMap
        minVal, prob, minLoc, point = cv2.minMaxLoc(probMap)

        # Scale the point to fit on the original image
        x = (frameWidth * point[0]) / W
        y = (frameHeight * point[1]) / H

        if prob > threshold:
            # Changed: draw on out instead of the frame
            cv2.circle(out, (int(x), int(y)), 8, (0, 255, 255),
                       thickness=-1, lineType=cv2.FILLED)
            cv2.putText(out, "{}".format(i), (int(x), int(y)),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2,
                        lineType=cv2.LINE_AA)

            # Add the point to the list if the probability is greater than the threshold
            points.append((int(x), int(y)))
        else:
            points.append(None)

    # Draw the skeleton
    for pair in POSE_PAIRS:
        partA = pair[0]
        partB = pair[1]

        if points[partA] and points[partB]:
            # Changed: draw on out instead of the frame
            cv2.line(out, points[partA], points[partB], (0, 255, 255), 3,
                     lineType=cv2.LINE_AA)
            cv2.circle(out, points[partA], 8, (0, 0, 255), thickness=-1,
                       lineType=cv2.FILLED)
            cv2.circle(out, points[partB], 8, (0, 0, 255), thickness=-1,
                       lineType=cv2.FILLED)

    # Changed: show the time on out, display out, and write out instead of the frame
    cv2.putText(out, "time taken = {:.2f} sec".format(time.time() - t),
                (50, 50), cv2.FONT_HERSHEY_COMPLEX, .8, (255, 50, 0), 2,
                lineType=cv2.LINE_AA)
    cv2.imshow('Output-Skeleton', out)
    vid_writer.write(out)

vid_writer.release()
```

4 Summary

4.1 The walkthrough is very detailed and the operation is very simple: the changes are made directly in the source code, no terminal commands are needed, and you can simply press the Run button in VS Code.

4.2 Capturing a live personal video from the webcam may be a bit laggy, but the generated video file should play back normally; feel free to try it. I have not tried it myself.

I hope you enjoy it.
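As a closing aside, the None-guard in the skeleton-drawing loop (a limb is drawn only when both of its endpoints cleared the confidence threshold) can be checked without OpenCV at all. The keypoint coordinates below are made up for illustration.

```python
# COCO 18-keypoint pair list, as used in the scripts above
POSE_PAIRS = [[1, 0], [1, 2], [1, 5], [2, 3], [3, 4], [5, 6], [6, 7],
              [1, 8], [8, 9], [9, 10], [1, 11], [11, 12], [12, 13],
              [0, 14], [0, 15], [14, 16], [15, 17]]

# Hypothetical detections: keypoints 3 and 17 fell below the threshold,
# so the loop stored None for them
points = [(10 * i, 10 * i) for i in range(18)]
points[3] = None
points[17] = None

# A limb line is drawn only when BOTH endpoints were detected
drawable = [(a, b) for a, b in POSE_PAIRS if points[a] and points[b]]
```

With two missing keypoints, the three pairs touching them ([2, 3], [3, 4], and [15, 17]) are skipped and the remaining 14 limbs are drawn, which is why a low threshold can still produce a partial but never a broken skeleton.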