We start with the default. We know that many organisms do not have free will. Bacteria and worms have behavior that is completely described by their chemical processes, whether in their internal fluids or in the surrounding environment.
The onus is on anyone who claims that some animals are different, that some animals have what we describe as free will, to make that case. Cats and humans are also built from cells, with chemistry within and without, so as a first premise it is certainly plausible that we too are described by our chemistry (albeit extremely complex chemistry).
So, the question at hand is: though they may be quantitatively more complex than bacteria, what compelling argument is there that cats and humans are qualitatively different from worms and bacteria? Until you can show that a person deciding to get up from the couch to go to the fridge for a drink, or to pick up a book and read it, is a fundamentally different process from a cat or a worm searching for food, you cannot be sure that humans have free will.
Something different starts to happen in the lower animals as they develop ganglia. It looks like a combinatorial network (from which some sequential paths are formed). The process appears no different from that in yet simpler forms; there is just now a new organ to work out more complex decisions and to infiltrate all the other organs and tissues, adding detail to sensory and motor functions.
The human brain and nervous system are just many layers of complexity above the ganglia of, say, planarians. The processes are still chemical (albeit electrochemical), but a virtual aspect has emerged, in which a particular result may be influenced by a prior state (as in a synapse).
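To make that concrete, here is a minimal sketch, purely illustrative and not a model of any real neuron, of what I mean by a result being influenced by a prior state: a toy "synapse" whose weight is nudged every time it is used, so the same stimulus produces a different response depending on history. The numbers are invented for illustration.

```python
class ToySynapse:
    """Illustrative only: a 'synapse' whose response depends on its history."""

    def __init__(self, weight=0.5, learning_rate=0.1):
        self.weight = weight              # the "prior state" carried between stimuli
        self.learning_rate = learning_rate

    def stimulate(self, signal):
        response = signal * self.weight
        # Use strengthens the connection, so the next identical signal
        # produces a different response: history shapes the outcome.
        self.weight += self.learning_rate * signal
        return response

s = ToySynapse()
print([round(s.stimulate(1.0), 2) for _ in range(5)])  # same input, drifting output
```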
In a way the lower animals already had some virtual states, in the sense that we might think of osmotic pressure as a state that decides which way water will flow across a membrane. It would be interesting to see a treatise on the inner workings of the pseudopods and flagella/cilia of the lower zoa, developed from this standpoint and the "need" to move.
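The osmotic point can be stated almost trivially in code. This is only a toy rule of thumb, ignoring pressure, charge, and everything else a real membrane cares about, but it shows the sense in which a single quantity acts as a state that decides the outcome.

```python
def net_water_flow(solute_inside, solute_outside):
    """Toy rule of thumb: water moves toward the side with more solute.

    Concentrations are in arbitrary units; real membrane physics is far richer.
    """
    if solute_inside > solute_outside:
        return "water flows in"
    if solute_inside < solute_outside:
        return "water flows out"
    return "no net flow"

print(net_water_flow(0.3, 0.1))  # hypertonic interior: water flows in
```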
So with the advent of nervous tissue, stimulus-response is transcribed into some God-awful algorithm that manages to solve problems like locomotion in a worm.
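Here is a hedged caricature of that algorithm, a chemotaxis-style stimulus-response loop whose rule is invented for illustration rather than taken from any real worm: keep heading if the food signal is improving, tumble to a random new heading if it is not.

```python
import random

def chemotaxis_step(previous_signal, current_signal, heading):
    """One stimulus-response step of a toy forager.

    If the food signal got stronger, keep going; otherwise pick a new
    random heading. No memory beyond the last reading, no planning.
    """
    if current_signal > previous_signal:
        return heading                 # things are improving: stay the course
    return random.uniform(0, 360)      # things got worse: tumble to a new heading

heading, prev = 0.0, 0.0
for signal in [0.1, 0.2, 0.15, 0.3]:   # made-up gradient readings
    heading = chemotaxis_step(prev, signal, heading)
    prev = signal
```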
Since we can live without any will whatsoever (as when asleep, drunk, drugged, or comatose), it seems that the virtual machine that hosts free will is sufficiently independent of other life-sustaining functions, but heavily dependent on the activity of the conscious mind. In other words, consciousness is something like an awareness springing out of the box that hosts it. It has a sense of its choices, but ultimately it is confined to the same laws of chemistry as an amoeba. The "freedom" beyond simpler constructs is found in the "choices" neurons have to grow axons and dendrites and form synaptic junctions, which ultimately define the architecture of the machine that hosts the ideation we call free will.
At this level of analysis, I have a jillion jumbled up connections that somehow host an idea. When the urge to act arises, other signals are launched, and other chemicals, such as hormones, are released, and a bunch of other insanely intricate things happen, and suddenly my fingers are flying across the keyboard, or whatever.
That's probably about as far as I can reasonably run with that ball, because we don't imagine that there's anything like a one-to-one correspondence between a particular synapse being formed and a particular impulse arising, which we might call an instance of free will.
All I can say with certainty is that free will arises out of layer upon layer of complexity, with chemistry at its base, and that it is far too complex to model in full, but it seems plausible to attribute it to a virtual state machine of some kind.
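And if "virtual state machine" sounds too abstract, here is the barest sketch of what I mean by the term, with the states and transitions made up purely for illustration: the next move depends only on the current state plus the input, yet from the outside the behavior looks like deciding.

```python
# A bare-bones finite state machine; the states and rules are invented
# purely to illustrate what "virtual state machine" means here.
TRANSITIONS = {
    ("resting", "thirst"): "getting_up",
    ("getting_up", "standing"): "walking_to_fridge",
    ("walking_to_fridge", "arrived"): "drinking",
    ("drinking", "done"): "resting",
}

def step(state, stimulus):
    # Each "decision" is just a table lookup: state plus input gives the next state.
    return TRANSITIONS.get((state, stimulus), state)

state = "resting"
for stimulus in ["thirst", "standing", "arrived", "done"]:
    state = step(state, stimulus)
    print(state)
```

Stack enough layers of machinery like this on top of one another, with the transition table itself being rewritten by experience, and you get closer to what I am gesturing at.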