@RichardRahl said in #10:
> @dboing my code is public under github.com/Philipp-Sc/learning and almost everything is allowed, except commercial use is prohibited. This app is very early and unfurnished, it is indeed only intended for my own need. I think that is the best way to make progress. Once my chess improves using it, I might promote it.
I have the same attitude about freedom of usage on GitHub. Maybe not enough stamina or time to do anything myself, though. But I can certainly comment and share my thoughts...
Thanks for sharing the link. I hope I can interact with you there if I see something worth both our focus.
But I would repeat: don't be in a hurry to add complexity. Do the maximal characterization first, and allow all your feature blocks to interact functionally in different ways as building blocks.
Count your parameters. Do simple experiments (even if they are not the best predictors of SF on test data; limit yourself to that point for now).
If you have a simple good one, try swapping instead of piling up. SF has piled up a lot on the claim that its heuristic is imbued with human knowledge, and is now forced to tune with the only objective being engine-vs-engine tournaments in the stratosphere, which I hear has not changed much at the constraint-definition level; some cultural inertia is involved, is my impression. The point is that the human-knowledge claim has been moot for a while now.
One could probably just integrate and contrast the resulting "mass" of the SF classical eval design (a real-valued function implemented by the SF heuristic static evaluation) over many diverse positions: not just those with near material imbalance, but an equal number of positions where people would commonly agree that positional thinking dominates (conscious or not, worded or not), with no near material imbalance computable by strict alternation. Standardize it. Then do the same over a partition with all high-material-imbalance positions on one side and their opposite on the other (NNue territory, by the way). One would find it is very hard for any combination of positional features to amount to a pawn of difference.
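A rough sketch of that comparison, with a toy material counter standing in for SF's classical static eval (a real study would call Stockfish itself on each position; the FEN parsing and the 1-pawn partition threshold here are my own illustrative assumptions):

```python
# Sketch: contrast the "mass" of a static eval over two partitions of
# positions -- near material balance vs. clear material imbalance.
# static_eval() is a placeholder for SF's real heuristic evaluation.

from statistics import mean

PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_balance(fen: str) -> float:
    """White-minus-black material in pawn units, from the FEN board field."""
    board = fen.split()[0]
    score = 0.0
    for ch in board:
        low = ch.lower()
        if low in PIECE_VALUES:
            score += PIECE_VALUES[low] if ch.isupper() else -PIECE_VALUES[low]
    return score

def static_eval(fen: str) -> float:
    """Placeholder: a real experiment would query SF's classical eval here."""
    return material_balance(fen)  # positional terms would be added on top

def contrast(fens, threshold=1.0):
    """Mean |eval| mass over balanced vs. imbalanced partitions."""
    balanced = [abs(static_eval(f)) for f in fens
                if abs(material_balance(f)) < threshold]
    imbalanced = [abs(static_eval(f)) for f in fens
                  if abs(material_balance(f)) >= threshold]
    return mean(balanced), mean(imbalanced)
```

With SF's actual eval plugged in, comparing the two means (after standardizing) would quantify how much of the evaluation's mass the positional terms can ever contribute relative to a pawn of material.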
So piling up without traceability and proper input-output characterization of each elemental building block seems like an already-tried dead end. Global optimization is great at the input-output level, but not if you want null-hypothesis-type answers about isolated features (that requires statistical confidence in any one parameter, while all you really get is confidence about the accuracy of the input-output quantity). It is true for NNs by design, but also for any piling up... ask around, and tell me I'm wrong. I would like somebody to tell me I'm wrong (not opinions only; at least an elementary argument).
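Here is one elementary argument in that direction, as a minimal sketch: when two features overlap (here, perfectly collinear toy features, my own illustrative setup), infinitely many weight vectors produce identical input-output behaviour, so a good global fit tells you nothing about any single parameter.

```python
# Sketch: good input-output fit does not pin down individual weights.
# Two perfectly collinear features (f2 = 2*f1), like two overlapping
# positional heuristics, make distinct weight vectors predict identically.

def predict(weights, features):
    """Linear evaluation: weighted sum of feature values."""
    return sum(w * f for w, f in zip(weights, features))

# Toy dataset: every sample has features (f1, f2) with f2 = 2*f1.
samples = [(1.0, 2.0), (3.0, 6.0), (-2.0, -4.0)]

# Two very different weight vectors with the same effective sum w1 + 2*w2:
w_a = (5.0, 0.0)   # all mass on feature 1
w_b = (1.0, 2.0)   # mass shared with feature 2

for f in samples:
    assert predict(w_a, f) == predict(w_b, f)  # identical outputs everywhere
```

Any tuner judging only the output (test-data fit, tournament Elo) cannot distinguish `w_a` from `w_b`, which is exactly why per-feature, null-hypothesis-style conclusions need isolated characterization rather than global optimization.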