[BlindMath] Strange math / logic question
kperry at blinksoft.com
Sat Aug 17 00:46:21 UTC 2024
I have a strange math / logic problem. I think I could teach a neural net
to figure this out, but I am trying to produce a formula that gives me
a number that is bigger the more columns are used. For example, say I
have a 50-row by 20-column matrix of things. Each row of the matrix is
filled starting from the first column and running rightward, up to at
most the 20th column. If the matrix fills all the columns on all the
rows, I want a confidence level close to 100%.
The fewer columns are used, or the more ragged the right side is (i.e.,
if one or more random rows are missing 10, 20, 30, etc.), the lower the
confidence should be, in fact somewhere around 30 to 50. If a whole group
of rows is missing a lot of columns, then again it should be an
exceptionally low number.
If only the last row is short, though, I want the confidence level to
stay high. I could chop that row off before calculating and then decide
on the last row separately.
Does anyone have an easy method for what I guess I would call a
"fullness factor" or right-justified raggedness calculation function?
Note that it doesn't matter what I stick in the cells; I just want a
count of things starting from the left in each row, and a confidence
factor that drops the more ragged or less full the matrix is.
Note that this is not a school assignment; it is a software engineering
question for something I am working on.
I am writing this in Python, so if you have a suggestion, a library, or
a formula, let me know.
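On the library side, NumPy would keep the counting and the spread statistics concise. A minimal, self-contained sketch, assuming empty cells are marked with None, that the 6-column example matrix is made up for illustration, and that the 2.0 raggedness weight is just a tuning value:

```python
import numpy as np

# Hypothetical example: rows padded with None past the last filled cell.
matrix = [["x"] * n + [None] * (6 - n) for n in (6, 6, 4, 6, 2)]

# Count filled cells from the left in each row; None marks an empty cell.
lengths = np.array([sum(cell is not None for cell in row) for row in matrix])

body = lengths[:-1]              # drop the last row before scoring
fill = body.mean() / 6           # mean fill ratio across kept rows
ragged = body.std() / 6          # population std of the right edge
confidence = max(0.0, fill - 2.0 * ragged)
```

`lengths` here comes out as [6, 6, 4, 6, 2]; dropping the short last row and penalizing the one ragged row gives a confidence around 0.63.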
More information about the BlindMath mailing list