When electric track circuits for occupancy detection were first developed, they were simple DC circuits: a battery fed one end of the rails and a relay sat across the other. When a train's wheels shunted the rails, the relay dropped out and the block showed occupied. However, depending on the quality of your rail bonds (those doubled wires between pins driven into holes at the ends of rails at joints) and the resistance of your ballast (not helped by salty slush at grade crossings), block length had to be limited to perhaps 2 miles under the best conditions. Talk to an EE or a signal maintainer to find out exactly why.
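To see why ballast leakage caps the block length, here's a minimal lumped-circuit sketch. All component values are illustrative assumptions, not from any real installation: the ballast is modeled as one lumped leakage resistor whose value shrinks as the block gets longer (more parallel leakage paths), while rail-and-bond resistance grows in series with the relay coil.

```python
def relay_voltage(block_miles,
                  batt_v=2.0,          # assumed battery voltage
                  feed_r=0.5,          # assumed series limiting resistor, ohms
                  rail_r_per_mile=0.1, # assumed rail + bond loop resistance, ohms/mile
                  ballast_r_mile=5.0,  # assumed ballast resistance of ONE mile, ohms
                  relay_r=4.0):        # assumed relay coil resistance, ohms
    """Voltage across the relay coil for an UNOCCUPIED block (lumped model)."""
    # Longer block -> more leakage paths in parallel -> lower ballast resistance.
    ballast_r = ballast_r_mile / block_miles
    # Longer block -> more rail and more bonds in series with the relay.
    rail_r = rail_r_per_mile * block_miles
    branch = rail_r + relay_r                       # relay branch resistance
    load = (ballast_r * branch) / (ballast_r + branch)  # ballast || relay branch
    v_rails = batt_v * load / (feed_r + load)       # divider with the feed resistor
    return v_rails * relay_r / branch               # share across the coil itself

# A 1-mile block keeps the relay well fed; stretch it to 3 miles and the
# coil voltage sags, and wet ballast (lower ballast_r_mile) sags it further.
# If the coil voltage falls below the relay's pickup point, the block shows
# occupied with no train on it -- so you keep blocks short instead.
```

With these assumed numbers, dropping the per-mile ballast resistance from 5 ohms to 1 ohm (think soaked, dirty ballast) roughly halves the coil voltage on a 3-mile block, which is the failure mode the hand-waving in the post is pointing at.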
Though all those short blocks and extra signals cost more, they did allow trailing sections of scheduled passenger trains to run fairly close together at 50-70 MPH (you need a minimum of two blocks between trains at all times). They also made it easier to "fleet" freight trains (an operating plan where several trains in the same direction are scheduled for more or less the same time, in order to get more capacity out of a single-track line with a limited number of sidings).
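The "two clear blocks" rule above translates directly into minimum headway between following trains. A back-of-envelope sketch, with all numbers (train length, the worst-case extra block of slack) being my own illustrative assumptions:

```python
def min_headway_minutes(block_miles, speed_mph, clear_blocks=2, train_miles=0.25):
    """Rough minimum time spacing between two following trains at steady speed."""
    # The follower's head end must stay at least `clear_blocks` full blocks
    # behind the leader's rear end.  In the worst case the leader has only
    # just entered its current block, so allow one extra block of slack.
    gap_miles = (clear_blocks + 1) * block_miles + train_miles
    return gap_miles / speed_mph * 60.0
```

With 1-mile blocks at 60 MPH this works out to a little over 3 minutes between trains; with 2-mile blocks it roughly doubles, which is why short blocks buy you capacity.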
As CTC came into wide use, the vendors developed "coded track circuit" schemes. I've never seen details of how these work, but I know they allowed much longer blocks, so a side-effect of CTC work on a number of RRs was a big increase in signal spacing on lightly-used lines. The most visible case I recall from circa 1970 was the B&M from North Beverly to Newburyport: the searchlights in service were placed 2-3 mi. apart, but you could see the old semaphore foundations every mile or so.