Well, That Won’t Work…

I was poking around at my code today, and I realized that after resolving the internal inconsistency in the design model for my neural net, I can’t get away with the somewhat cheaty method for resolving solutions (https://github.com/silasray/aiexplore/blob/7c92466c5d8c128418d439cc3715e3ac8f5a1ceb/net.py#L125). The math doesn’t math now that there’s an upper bound on the activation levels for neurons, no matter how many times a resolve is run. I had been thinking I could just brute force my way to a solution by rerunning resolve() until some output finally fully activated, but that is impossible now.
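To convince myself of the "math doesn't math" part, here's a minimal toy sketch (none of these names are from net.py; the squashing function and constants are made up for illustration). If each resolve pass squashes the accumulated input so it stays below a cap, the activation asymptotes toward the cap and never actually hits the "fully activated" threshold, no matter how many passes run:

```python
# Hypothetical sketch: a capped activation approaches its bound asymptotically,
# so looping resolve passes can never produce a "fully activated" output.
import math

CAP = 1.0   # upper bound on neuron activation (assumption, not from net.py)
FULL = 1.0  # threshold for "fully activated"

def resolve_step(activation, incoming=0.5):
    # Accumulate incoming signal, then squash so the result stays below CAP.
    return CAP * math.tanh(activation + incoming)

activation = 0.0
for _ in range(1000):
    activation = resolve_step(activation)

# Even after 1000 passes, activation sits strictly below the threshold.
print(activation < FULL)
```

Since tanh is strictly less than 1 for any finite input, the loop converges to a fixed point below the cap rather than reaching it, which is exactly why brute-forcing resolve() can't work anymore.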

So, what needs to be done now?

First, I need to switch over from waiting for full activation of an output, or at least move away from that being the only resolution mechanism, to returning a bunch of scored outputs. Surprise surprise, there’s a reason real systems work this way.
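The scored-output idea could look something like this sketch (all names hypothetical, not the actual net.py API): instead of waiting for one output to hit full activation, take whatever activation levels the resolve pass produced, normalize them into scores, and hand back every output ranked:

```python
# Hypothetical sketch: return all outputs with normalized scores,
# strongest first, instead of waiting for a single full activation.
def score_outputs(activations):
    """activations: dict mapping output name -> raw activation level."""
    total = sum(activations.values())
    if total == 0:
        return []  # nothing activated at all
    scored = [(name, level / total) for name, level in activations.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Usage: scores sum to 1 and the strongest output comes first.
print(score_outputs({"yes": 0.7, "no": 0.2, "maybe": 0.1}))
```

Simple proportional normalization is just one option here; a softmax or raw activation levels would work too, depending on how the scores end up being consumed.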

Second, since I’m now going to have more than one output per activation, the reinforce pathway (https://github.com/silasray/aiexplore/blob/7c92466c5d8c128418d439cc3715e3ac8f5a1ceb/net.py#L136) will need to be adjusted. I’m not sure yet if I want to make the reinforce fully centered on an output rather than on an activation, but I’m leaning in that direction, because it just feels like it makes sense that way.
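For the output-centered version, a rough sketch of the shape I have in mind (again, every name and parameter here is made up for illustration, not the actual reinforce pathway in net.py): only the edges feeding the chosen output get nudged, rather than every edge that fired during the activation:

```python
# Hypothetical sketch: output-centered reinforcement nudges only the
# weights on the path to the chosen output, scaled by a reward signal.
def reinforce_output(weights, path_to_output, reward, rate=0.1):
    """weights: dict of (src, dst) edge -> weight.
    path_to_output: edges that feed the chosen output.
    reward: +1 to strengthen the pathway, -1 to weaken it."""
    for edge in path_to_output:
        weights[edge] += rate * reward * weights[edge]
    return weights

weights = {("a", "out1"): 0.5, ("b", "out1"): 0.3, ("b", "out2"): 0.4}
# Reinforce out1: only its incoming edges change; out2's edge is untouched.
reinforce_output(weights, [("a", "out1"), ("b", "out1")], reward=1)
```

The activation-centered alternative would instead walk every edge that carried signal during the resolve, which is more bookkeeping and muddier credit assignment, so I can see why the output-centered version feels right.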

I did add a test or two today, but nothing worth committing. Looks like the docket is set for tomorrow though.
