In this context “weight” is a mathematical term. Have you ever heard the term “weighted average”? It means an average where some elements count more (are more influential) than others; the number that indicates how much an element counts is called its weight.
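A quick sketch of the idea (the numbers are made up):

```python
# Weighted average: multiply each value by its weight,
# then divide the total by the sum of the weights.
values = [80, 90, 70]   # e.g. three exam scores
weights = [1, 2, 1]     # the middle exam counts double

weighted_avg = sum(v * w for v, w in zip(values, weights)) / sum(weights)
print(weighted_avg)  # 82.5 (the plain average would be 80)
```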
One oversimplification of how any neural network works could be this:
- The NN receives some input values
- The NN calculates many weighted averages from those values. Each average uses a different list of weights.
- The NN applies a simple special operation to each average. It’s not important what the operation actually is, but it must be there, and it can be anything except sums and multiplications. Without it, all the layers would collapse into a single layer.
- The modified averages are the input values for the next layer.
- Each layer has different lists of weights.
- In reality this is all done using some mathematical and computational tricks, but the basic idea is the same.
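The steps above can be sketched in a few lines. The weights here are made up, and I’m using weighted sums rather than true averages (that’s what real NNs do; the idea is the same), with `tanh` as one possible choice for the “special operation”:

```python
import math

def layer(inputs, weight_lists):
    """One layer: several weighted sums, each passed through
    a non-linear operation (here tanh, one of many options)."""
    outputs = []
    for weights in weight_lists:
        weighted_sum = sum(x * w for x, w in zip(inputs, weights))
        outputs.append(math.tanh(weighted_sum))  # the "special operation"
    return outputs

# Two layers with made-up weights: 2 inputs -> 3 hidden values -> 1 output.
# The outputs of the first layer are the inputs of the second.
x = [0.5, -1.0]
hidden = layer(x, [[0.1, 0.4], [-0.3, 0.2], [0.7, -0.5]])
output = layer(hidden, [[0.6, -0.1, 0.3]])
print(output)
```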
Training an AI means finding the weights that give the best results, and thus, for an AI to be open-source, we need both the weights and the training code that generated them.
Personally, I feel that we should also have the original training data itself to call it open source, not just weights and code.
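The “finding the weights that give the best results” part can be sketched with a toy example: one weight, adjusted a little at a time to reduce the error. Real training does this for millions of weights at once, but the idea is the same:

```python
# Toy "training": find the weight w so that w * x best matches the targets.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs and targets (y = 2x)

w = 0.0               # start with a bad guess
learning_rate = 0.05
for _ in range(200):
    for x, target in data:
        error = w * x - target
        w -= learning_rate * error * x  # nudge w to shrink the squared error
print(round(w, 3))  # close to 2.0
```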
My sister is a urologist. So for me this is basically what happens when pictures get opened in the siblings’ group chat