Not usually. In most software it is a couple of logical levels below. The AI folks are trying to jump level, and may be succeeding.
Nothing but handwaving.
You have yet to explain, in any of your posts, how the logical levels have any bearing on the matter in hand.
How they turn something that has no possibility of selecting anything other than the one option it does select (i.e. not free) into something that does.
And the field of physics is devoted to answering that question - in this case, in the affirmative.
Yet you've provided nothing but a hand-wave towards some examples that conclude simply from the appearance.
To wit:
The human mind does make decisions according to criteria, according to multiple different investigative approaches into the underlying reality.
They conclude a process is followed, yes.
They make no claim as to whether that process is free with regard to being able to do otherwise at the moment of choice.
It's not the same level - even between a brick and a computer there are significant logical levels. Bricks don't process information, and waving your hands at them won't give them that ability.
You're the one waving your hands at a process and expecting it not merely to appear free but to actually be what it appears to be: free.
You have offered nothing in that regard.
Aren't your hands and arms sore from all the waving?
Trying to restrict the word "actual" to the supernatural is strange, and unmotivated. There is nothing counterfactual about the abilities of a driver approaching a light to stop and to go, both.
There is everything counterfactual about it.
If a train sees two paths ahead and the switch is already set (i.e. predetermined), how is it genuinely/actually able to go down the other path at that time?
To say "if the switch was in the other position it could have gone down the other path" is a counterfactual example.
And your entire notion of what it means to be free is hollow because it relies on such counterfactuals.
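To spell the train case out, here is a minimal sketch (the function name and values are mine, purely for illustration, not anything you've written):

```python
# Sketch of the train-and-switch case: the path taken is fixed entirely
# by the switch position that already holds at the moment of "choice".
# Names and values are illustrative only.

def path_taken(switch_position: str) -> str:
    """Return the track the train actually goes down."""
    return "left track" if switch_position == "left" else "right track"

# The actual run: the switch is already set to "left".
actual = path_taken("left")            # -> "left track"

# The counterfactual: *if* the switch had been set the other way.
counterfactual = path_taken("right")   # -> "right track"

# The counterfactual run changes the input; it does not give the actual
# run, with its input already fixed, any ability to go the other way.
assert actual == "left track" and counterfactual == "right track"
```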
In fact, it is a logical necessity of their eventual behavior being determined by the color of the light - if it were not the case, their eventual behavior would be independent of the color of the light, and we would be unable to observe a decision being made (in our brain scans, etc).
???
If their behaviour is determined by the colour of the light, as you suggest (same colour light, same response), then it is an even simpler case to examine: their behaviour is no more free than an autonomous vehicle that is told how to act.
It has no choice but to do what its programming says.
No will of its own (unless the program is called "WILL" or some such).
Can it choose to do anything different?
No, because, as your own example has it, the colour of the light determines its response.
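If it helps, here is a minimal sketch of what "the colour of the light determines its response" amounts to (the function name and mapping are made up for illustration):

```python
# A deterministic "driver": the colour of the light fully determines the
# response, so the same input always yields the same output.

def respond_to_light(colour: str) -> str:
    """Return the action taken for a given light colour (illustrative mapping)."""
    actions = {"red": "stop", "amber": "slow", "green": "go"}
    return actions[colour]

# Run it as many times as you like with the same input and you get the
# same output every time - no ability to do otherwise at that moment.
assert respond_to_light("red") == "stop"
assert respond_to_light("green") == "go"
```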
We observe the ability to make decisions - fact.
We certainly do, in the same way that a thermostat does.
Only we call our process "decision making".
Otherwise it is fundamentally the same.
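The thermostat's "decision", sketched (illustrative names only, not any real device's firmware):

```python
# A thermostat "decides" whether to switch the heating on, purely as a
# function of its inputs.

def thermostat_decision(current_temp: float, set_point: float) -> bool:
    """Return True to turn the heating on, False to leave it off."""
    return current_temp < set_point

# Given the same inputs, the same "decision" follows every time.
assert thermostat_decision(17.0, 20.0) is True
assert thermostat_decision(22.0, 20.0) is False
```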
It has never been disputed that the process is carried out, so do try to move on from merely repeating that the process is carried out.
So? The process of choice nevertheless exists, and involves selection among what the chooser has the ability to do.
Yes, the process exist.
Again, not disputed.
The selection, however, is not free.
It has no ability to select, at the moment it makes its selection, anything other than the one it does, even if prior to that it can imagine many other counterfactual examples of what it has the ability to do.
Having the theoretical ability to do both X and Y at different times based on different inputs is not the same as having freedom to do other than it does at any given moment.
Do you agree that when a computer plays chess it has the ability to move many possible pieces?
I mean, on its opening move it could move any of the pawns either one or two spaces forward, so that's 16 possible moves right there.
It selects one.
Now, assuming there is no inherent randomness in the programming, as we're talking about a deterministic system, are you saying that the computer was "free" to select any of the others at that time?
It certainly looks ahead and assesses possible combinations of moves - so there you have your direct analogy to what humans do when they make a choice.
It then makes its move.
It has made a choice.
Are you honestly saying that it had the ability to do other than the move it made?
In the absence of randomness you could run that simulation at any time and it would play out exactly the same way, precisely because it is not free.
You, though, see the 16 or more moves it could make on its opening move and claim that it has the ability to do otherwise, as if that somehow means that the move it took was chosen freely.
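For concreteness, here is a toy sketch of the kind of deterministic selection I'm describing (not a real engine; the move list and scoring are invented purely for illustration):

```python
# A toy deterministic "chess engine": it enumerates the options it has
# the ability to play, scores them with a fixed evaluation, and selects
# one. The evaluation is made up; the point is only that it is fixed.

CANDIDATE_MOVES = [f"{file}{rank}" for file in "abcdefgh" for rank in ("3", "4")]

def evaluate(move: str) -> float:
    """A fixed, deterministic evaluation of an opening pawn move (illustrative)."""
    centre_bonus = 1.0 if move[0] in "de" else 0.0
    advance_bonus = 0.5 if move[1] == "4" else 0.0
    return centre_bonus + advance_bonus

def choose_opening() -> str:
    """Assess all 16 pawn moves, then select one - always the same one."""
    return max(CANDIDATE_MOVES, key=evaluate)

# Run the "simulation" as often as you like: with no randomness, the
# selection never differs, even though 16 options were "available".
assert choose_opening() == choose_opening()
```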
That is the supernatural assumption. It is not necessary. If you can find some way to drop it, the discussion can proceed.
It's not an assumption, and I try not to appeal to consequence.
If the conclusion is that our will isn't free, I don't then look for definitions of "free" that give me that warm fuzzy feeling that my free will can be considered intact.
If you're content with your hollow notion, a sense of "free" that, as relevant to the process of choice, is available to any process that assesses what it considers to be possible futures before making its predetermined move, then I will leave you to that, and laugh, every time I play a computer game, at how "free" you think it is.