I treat this page as a pasture for free writing. The ideas presented are in the incubation stage, yet written sanguinely for inspiration. Commentaries, critiques, and discussions are generously welcomed.
Tighter Connection: High-speed Bridge of Information from Sensors to Humans
Some factors affecting human perception, and therefore performance, may have no effect on sensors: mood, stress level, fatigue, hormonal fluctuations, familiarity with environments… These are, among many others, cases where human calibration is needed to return to baseline performance (assuming the baseline is desirable and well-calibrated).
The main question can be summarized as: What is needed to calibrate human perception with minimal conscious effort on the human's part? Of course, the main question is coupled with yet-to-be-tested assumptions, some of which may include: Is there utility in supplying veridical measurements to humans? How much does calibration of perception translate into improved performance? Is the calibration method easy to learn? What are the biological costs of such calibration?
Putting the thoughts in context, let's frame the question in terms of calibration of distance for a war-fighter. The aforementioned questions in this context (in sequence) would be: What is needed for humans to estimate distance without checking a sensor? Would war-fighters, when receiving corrections on their perception of distances, be adversely affected, for example losing adroitness because misalignment with reality has higher utility? How much improvement in performance can be measured if we calibrate the sense of distance? Will it be possible to have the calibration done in the background, subconsciously? How much reduction in cognitive resources is associated with background calibration?
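To make "calibration" concrete, here is a minimal sketch of one possible framing: treat a person's reported distance estimates against sensor-measured ground truth, fit a simple bias/scale correction, and use it to correct new estimates. The data, the linear model, and the function names are my own assumptions for illustration, not a proposed implementation.

```python
# Toy sketch of perceptual calibration: fit a linear bias/scale model mapping
# a person's reported distances to sensor-measured ground truth, then use it
# to correct (or decide when to nudge) new estimates. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic trials: true distances (m) from a rangefinder, and the human's
# estimates, assumed here to be compressed at long range and noisy.
true_m = rng.uniform(10, 300, size=50)
reported_m = 0.8 * true_m + 5.0 + rng.normal(0, 8, size=50)

# Ordinary least squares: true ≈ a * reported + b.
a, b = np.polyfit(reported_m, true_m, deg=1)

def calibrated(estimate_m: float) -> float:
    """Map a raw human estimate to a corrected distance."""
    return a * estimate_m + b

# Example: a war-fighter calls out "150 m"; the corrected value could be fed
# back as a cue, ideally one unobtrusive enough to work in the background.
print(f"raw estimate: 150 m -> calibrated: {calibrated(150.0):.1f} m")
```

Whether such a correction should be delivered consciously (a displayed number) or subconsciously (a subtle cue) is exactly the open question above.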
On-device AI: Extracting Information from Randomness
Sensors tend to be viewed as reliable machines producing veridical outputs, whereas humans are viewed as messy biological systems generating unpredictable data as we deliberate in a world of randomness. Yet fluctuations in human-generated data do not occur without reason. Are we there yet to extract knowledge from this noise, so that it informs us more about the state of the world? What is needed to integrate humans into a system of sensors, in the sense of exploiting directly measurable human-generated physiological data? Do we need to understand the brain fully before we can exploit the encoded knowledge? Are we there yet to implement AI at the edge, on the sensors themselves?
I have not arrived at fully formed ideas for these questions, yet my intuition suggests starting with miniaturizing sensors for easy collection of human-generated data. Maybe progress would be made by first bombarding ourselves with enough data to play with. For example, can we miniaturize in-ear EEG to be packaged together with Bluetooth earbuds? Recent studies have shown high mutual information between traditional scalp EEG and unobtrusive in-ear EEG. Can we use these signals for early detection of diseases such as Alzheimer's? Or can we use signals collected from VR-mounted EEG to align visual stimuli with human expectations so as to reduce nausea?
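As a back-of-the-envelope sketch of the kind of check behind that mutual-information claim: estimate the mutual information between a scalp channel and an in-ear channel from a 2D histogram. The signals below are synthetic stand-ins (a shared 10 Hz rhythm plus independent noise), and the coarse histogram estimator is my own choice, not the method used in those studies.

```python
# Crude histogram-based mutual information between two synthetic EEG-like
# signals: a "scalp" channel and an "in-ear" channel sharing an alpha rhythm.
import numpy as np

rng = np.random.default_rng(1)
fs, seconds = 256, 10                      # sampling rate (Hz), duration (s)
t = np.arange(fs * seconds) / fs

shared = np.sin(2 * np.pi * 10 * t)        # common 10 Hz (alpha-band) component
scalp = shared + 0.5 * rng.normal(size=t.size)
in_ear = 0.7 * shared + 0.8 * rng.normal(size=t.size)

def mutual_information(x, y, bins=32):
    """Histogram-based MI estimate in bits (coarse, for illustration only)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

print(f"MI(scalp, in-ear) ≈ {mutual_information(scalp, in_ear):.2f} bits")
```

If miniaturized in-ear hardware preserves enough of this shared information, the downstream questions (disease screening, VR comfort) become questions of models rather than of access to signal.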
Some of these thoughts may be quixotic, but I am curious about the results as we encourage a symbiotic relationship between sensors and humans.