FAQ Neural Networks
Q: What can I do with Neural Networks?
A: Predict properties or cluster/segment data. Neural Networks can inspect large amounts of data and find patterns for you. Moreover, they can apply what they’ve found to even more data. This gives predictions of properties, or classification into segments or classes - for entire cubes, lines, horizons, etc. There are always two phases, ‘Training’ and ‘Apply’:
- During training, you offer examples of your problem to the NN. If you also provide what you want to see (the ‘Target’), the training is called ‘Supervised’. If you want the NN to find patterns by itself (i.e. cluster the data), the training is called ‘Unsupervised’.
- During Apply, you offer the same type of input data to the network, but this time you do not know the outcome (as you do in supervised training). The network applies the trained relation, and the output is a prediction of your property or, in the unsupervised case, a classification of the data.
Neural Networks are non-linear statistical tools that conveniently find and apply relations between many input variables and the quantity you want to know. For example, the input can be various attributes from various seismic or AI cubes, and the output can be porosity. The training set is then collected at well locations - this lets the network see both input and output and establish a relation. Applying that relation to the entire cubes then yields a porosity cube that is also defined where you have no well control.
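The supervised train/apply workflow can be sketched in code. This is only an illustration - a tiny one-hidden-layer network in NumPy with made-up attribute values and a made-up porosity relation standing in for real well data; it is not OpendTect's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 'well location' training set: 2 input attributes per sample,
# target porosity from a made-up relation (purely illustrative numbers).
X = rng.uniform(-1.0, 1.0, size=(300, 2))
y = 0.15 + 0.1 * np.tanh(2.0 * X[:, 0]) + 0.05 * X[:, 1]

# One hidden layer with tanh activation, trained by plain gradient descent.
n_hidden = 8
W1 = rng.normal(scale=0.5, size=(2, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=n_hidden)
b2 = 0.0

lr = 0.05
for _ in range(2000):                      # 'Training' phase
    h = np.tanh(X @ W1 + b1)               # hidden activations
    pred = h @ W2 + b2                     # network output
    err = pred - y                         # prediction error
    # Backpropagation for a mean-squared-error loss:
    gW2 = h.T @ err / len(X)
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1.0 - h**2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# 'Apply' phase: push attribute vectors from the whole cube (here: more
# random samples) through the trained network to predict porosity everywhere.
X_cube = rng.uniform(-1.0, 1.0, size=(1000, 2))
porosity_cube = np.tanh(X_cube @ W1 + b1) @ W2 + b2
```

The key point the sketch shows: the relation is learned only at the (well) samples, but once trained it can be applied to every sample in the cube.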
Q: I’m getting negative (or much too high) porosities! Is that a bug?
A: No. Neural Networks are non-linear estimators. You train them on data that should be representative of all the input supplied during apply. If the training set does not cover all possible input combinations, the NN has to interpolate - or worse, extrapolate. This is done in a non-linear fashion, and because the NN has no notion of what porosity is, it will not be ashamed to produce values below zero.
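The effect is not specific to neural networks - any non-linear fit extrapolates this way. Here a simple quadratic fit stands in for the network, with hypothetical porosity-like numbers:

```python
import numpy as np

# Training range: inputs between 0 and 1, all 'porosities' positive
# (between 0.20 and 0.45). Values are made up for illustration.
x_train = np.linspace(0.0, 1.0, 20)
por_train = 0.2 + x_train - x_train**2

coeffs = np.polyfit(x_train, por_train, deg=2)   # 'training'

# 'Apply' on an input far outside the training range:
por_outside = np.polyval(coeffs, 2.0)            # about -1.8: negative porosity
```

Inside the training range the fit is fine; outside it, the curve bends wherever the mathematics takes it, with no regard for physical plausibility.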
Porosities below zero are simply caused by input data that is not comparable to what the network was trained on. If you do not want such values on your displays, it is easy to define an attribute that post-processes the data. The new ‘Squeeze’ option in the Scaling attribute is ideal for this purpose, but something as simple as the Mathematics attribute with the formula ‘por < 0 ? 0 : por’ also works.
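For readers who post-process outside the application, the same two fixes look like this in NumPy (the porosity values and the 0.4 upper bound are hypothetical):

```python
import numpy as np

por = np.array([0.25, -0.03, 0.18, -0.50, 0.31])  # hypothetical NN output

# Equivalent of the Mathematics formula 'por < 0 ? 0 : por':
clipped = np.where(por < 0, 0.0, por)

# Or squeeze into a physically plausible range, in the spirit of the
# Scaling attribute's 'Squeeze' option (bounds chosen for illustration):
squeezed = np.clip(por, 0.0, 0.4)
```

Note that both approaches only hide the symptom; the cure is a training set that covers the input combinations seen during apply.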