I've been there and done that on both sides of the 'peer review' process. Many in the public square like to point to something having been 'peer reviewed' as the gold standard of scientific righteousness. Here's my experience (and note that I did my publications and peer reviews in a discipline that is not terribly politicized; in others I suspect it is far worse):
1) Generally nobody does even a back-of-the-napkin recomputation during a peer review; a gut check is the norm at most. Most assuredly nobody will rerun the code, simulations, or experiments in question. What they will do is
2) Look to place the work in the context of existing scholarship, especially with an eye to making certain that their own work is cited if it is even vaguely related. This is because most departments have tenure metrics based on 'academic impact', which is to say how many publications you've written and had accepted to a journal of at least Nth tier, and how many times those publications have been cited by other publications. Think we don't game the system?
I've also got quite a bit of experience in industry, where the equivalent is called a technical review group (often abbreviated TRG). Such groups generally DO examine the math, the assumptions, the code, and the results for correctness. Their incentive is to make certain that whatever is being reviewed actually works, because they have at least a small amount of skin in the game. My impression is that a lot of people speaking of peer-reviewed science actually have something akin to a TRG in mind, and that is far from the truth. Only in a very few fields where there is strong hostility towards certain types of research, but insufficient hostility to prevent publication and scientific discourse (psychometrics comes to mind), do you occasionally see something like an open-field TRG in the peer review process.
Faster, Pussycat — Kill! Kill!