Yes, on Red Hat systems this is typically done with numactl.
While it cannot guarantee that a process binds to a specific CPU (it will permit binding to another CPU should the desired CPU be unavailable), it configures the job launch to make a best effort to bind, and (if the process sleeps) rebind, to the desired CPUs.
Note that in this context, a CPU means "a thing that can run a program" rather than a physical item. For this conversation, a real core is one CPU, its hyperthread sibling is another, and a physical package often contains many CPUs. To get a listing of these CPUs:
cat /proc/cpuinfo
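If you only need a quick summary rather than the full per-CPU dump, something like the following (standard on Red Hat systems; output varies by machine) counts the logical CPUs and shows how they map onto cores and sockets:

grep -c ^processor /proc/cpuinfo
lscpu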
numactl --hardware shows the hardware layout. Each node is a "memory boundary": a region of RAM that some CPUs can reach more quickly than others, typically because it is directly attached to one set of CPUs while the other CPUs must cross the boundary to reach it. This matters because you can also direct numactl to use only certain memory boundaries, and if you are specifying specific CPUs it is a good idea to also specify the memory boundary local to those CPUs.
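As a rough illustration (the node count, CPU lists, and sizes here are made up; yours will differ), the output on a two-socket box might look like:

numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 8 9 10 11
node 0 size: 32143 MB
node 0 free: 29563 MB
node 1 cpus: 4 5 6 7 12 13 14 15
node 1 size: 32254 MB
node 1 free: 30782 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10

The distance table is a relative cost estimate: on this hypothetical layout, reaching memory on the other node is reported as roughly twice the cost of local access, which is why pairing CPU binding with a local memory binding pays off.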
numactl --physcpubind=0-7 <command> will launch whatever you would normally run with <command> on cores 0 through 7.
numactl --physcpubind=0,7 <command> will launch <command> on either core 0 or 7.
Of course, both of these can "core miss," which is when the OS decides the core won't be available and launches the program on a non-specified core rather than delay the launch. The numactl option --localalloc will attempt to use memory local to the core, while --membind=... permits more explicit memory location binding. A combined example follows below.
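Putting the two together, and assuming the hypothetical layout sketched above where CPUs 0 through 3 live on node 0, a launch that keeps both the threads and their memory on node 0 might look like:

numactl --physcpubind=0-3 --membind=0 <command>
numactl --cpunodebind=0 --localalloc <command>

The first form pins to explicit CPUs and an explicit memory node; the second says "any CPU on node 0, allocating memory on whichever node the process is running on," which is usually the more forgiving choice.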
numastat shows the per-node numa_hit and numa_miss counters in statistic form, so you can see in aggregate how often allocations landed on the intended node. To see whether a specific process hit or missed, you need to read the details from the /proc filesystem before the process terminates.
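For a per-process view, two places are worth knowing about (the <pid> here is a placeholder for the process you launched):

numastat -p <pid>            # per-node memory breakdown for one process, if your numastat version supports -p
cat /proc/<pid>/numa_maps    # per-mapping node placement, readable only while the process is alive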