As a co-processor. Or maybe like PhysX. RayX.
Maybe, but the most important part of PhysX is, by far, the software. This is a bit different, since performance is trash without the hardware, and since it's already being used to promote GPUs it's probably going to stay that way.
I don’t know what I’m talking about, but is the hardware for Raytracing the same as what is needed for conventional graphics?
The BSDF part of raytracing is probably very similar to the pixel-shader hardware in rasterization. The main issue, as Redocbew points out, is the BVH tree traversal, which classical GPU designs handle suboptimally (though still better than CPUs do). NVIDIA added special units for extra-fast BVH traversal.
BVH is the "quick and dirty" estimate for where rays may intersect the geometry of the scene. If a ray starts at location (0,0,0) and is launched in direction (0,0,1), which triangle will it hit (does any triangle even exist in that direction)? If there are multiple triangles in that direction, which one is closest? The pixel-shading question is "what angle is the triangle pointing at" (the normal vector), which leads to the question "where should the ray 'bounce' towards?" And once the ray bounces, it's a new BVH traversal to find the new triangle that the ray points at.
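To make the traversal concrete, here's a toy sketch of the "which boxes does this ray cross" part of a BVH walk. Everything here (the `Node` layout, two children per inner node, recursive descent) is my own simplification for illustration; real GPU BVHs are wider, iterative, and would intersect the actual triangles at the leaves to keep only the closest hit:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Node:
    """One BVH node: an axis-aligned bounding box (AABB) plus either
    two children (inner node) or a list of triangle indices (leaf)."""
    bbox_min: Tuple[float, float, float]
    bbox_max: Tuple[float, float, float]
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    triangles: Optional[List[int]] = None  # leaf payload

def ray_hits_aabb(origin, direction, bmin, bmax) -> bool:
    """Slab test: does the ray enter the box at all?"""
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, bmin, bmax):
        if abs(d) < 1e-12:                 # ray parallel to this slab
            if o < lo or o > hi:
                return False
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def traverse(node, origin, direction, hits):
    """Collect candidate triangles whose boxes the ray crosses.
    A real tracer would then ray/triangle-test each candidate and
    keep the nearest intersection."""
    if node is None or not ray_hits_aabb(origin, direction,
                                         node.bbox_min, node.bbox_max):
        return  # whole subtree pruned in one test
    if node.triangles is not None:
        hits.extend(node.triangles)
    else:
        traverse(node.left, origin, direction, hits)
        traverse(node.right, origin, direction, hits)

# Tiny demo: two leaf boxes; a ray along +z from the origin only
# enters the left leaf's box, so the right subtree is skipped.
root = Node((-1, -1, 1), (6, 1, 2),
            left=Node((-1, -1, 1), (1, 1, 2), triangles=[0]),
            right=Node((5, -1, 1), (6, 1, 2), triangles=[1]))
candidates: List[int] = []
traverse(root, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), candidates)
print(candidates)  # → [0]
```

The pruning is the whole point: one box test can discard thousands of triangles. The pain for GPUs is that each ray takes a different path through the tree, which is exactly the divergent, pointer-chasing workload wide SIMD hardware dislikes.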
The BSDF part of raytracing would probably replace pixel shaders... but vertex and geometry shaders would run "before" the raytracer (even if you completely got rid of rasterization, there are too many effects that video game programmers rely on in the vertex and geometry stages of the pipeline). So that stuff would probably remain the same.
Actually, CF/SLI might make a big comeback if the gaming industry takes a serious turn towards pathtracing/ray-tracing. The rendering pathways are comically parallel, so scalability isn't nearly the problem it is with rasterization. The economic realities of monolithic chips are catching up. It will begin to make more sense to build specialized chiplets if the industry wants to continue down the pathtracing/ray-tracing route.
AMD RTG didn't throw Infinity Fabric into the Navi architecture for a quick laugh. I wouldn't be surprised if they end up making a dedicated ray-tracing chiplet plus GPU hybrid solution as their full answer to the Turing dynasty.
Infinity Fabric is only ~50 GB/s per link, right? GDDR6 is going to be like 400+ GB/s. No matter how you look at it, chip-to-chip communication will be grossly slower than communicating with graphics RAM. The GPU programmers will have to handle it.