Nature shots have been cheating for a while, because most errors are still plausible.
The major tell is how screen-space everything is. In real life, there are very few angles where the top of a close thing stops exactly at the bottom of a far thing… but neural networks aren’t modeling depth. Probably. So things are tangent or coincident all the dang time. Even in the patterns of grass and brush and whatnot: the network produces T-junction patterns, like brickwork or cracked pottery, when it should be closer to woven or thatched.