(no subject)
Sep. 26th, 2017 08:52 pm

I have an emergency project at work. We have a chip that automatically self-calibrates as it's starting up, and we (well, I) may have found a perfectly reasonable setup, one that users would actually choose, that interrupts the self-calibration just long enough to make it miscalibrate.
It is temperature-dependent.
The result of the failed calibration is complicated, and while not dire, it is not what the datasheet says and is definitely not what we want.
The way I can tell is by measuring an output parameter, the output current the chip is regulating: depending on how it calibrates, it supplies a slightly different output current on subsequent power-cycles.
Here's where things get messy. Ideally the output current would be a very sharp Gaussian distribution. (That's the whole point of the auto-calibration: to decrease the distribution width.) What I actually get is a bimodal distribution, with two very sharp peaks, each Gaussian.
(By 'sharp' I mean peak value +/- 2% will include 99.95% of the measurements, and the two peaks are about 30% away from each other, so there is zero overlap for reasonable sample sizes.)
But as the chip's temperature changes via external forcing, both peaks drift upward by about 20% while staying the same distance apart; and as the chip self-heats during operation, the two peaks spread apart by about 10%, from roughly 20% separation at startup to their final 30%.
I'm trying to write some sort of binning algorithm that can dynamically assign and fill two bins.
So far, the best I've come up with is this: take the first reading and assign it to an arbitrary bin, then keep a running average of subsequent readings and use that average as the bin's center point. Whenever a value lands more than 15% above or below that center, assign it to the other bin and start a running average there too. Over time that gives me a bin2-bin1 difference I can use as my bin-dividing criterion.
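Roughly what I mean, sketched in Python. This is just a sketch of the running-average scheme, not anything I've run against the hardware; the class and method names are placeholders, and the only real number in it is the 15% threshold from above.

```python
# Sketch of the two-bin running-average classifier described above.
# Assumes readings arrive one at a time as positive floats.

class TwoBinSorter:
    def __init__(self, threshold=0.15):
        self.threshold = threshold    # fractional distance that flips a reading to the other bin
        self.centers = [None, None]   # running-average center of each bin
        self.counts = [0, 0]          # readings seen per bin

    def assign(self, reading):
        """Return the bin index (0 or 1) for this reading and update that bin's running average."""
        if self.centers[0] is None:
            bin_index = 0             # first reading seeds bin 0 arbitrarily
        elif abs(reading - self.centers[0]) / self.centers[0] <= self.threshold:
            bin_index = 0             # within 15% of bin 0's center
        else:
            bin_index = 1             # more than 15% away: must be the other population
        self._update(bin_index, reading)
        return bin_index

    def _update(self, bin_index, reading):
        # Incremental running average, so no need to store every reading.
        self.counts[bin_index] += 1
        if self.centers[bin_index] is None:
            self.centers[bin_index] = reading
        else:
            self.centers[bin_index] += (reading - self.centers[bin_index]) / self.counts[bin_index]

    def separation(self):
        """bin2 - bin1 center difference, once both bins have data."""
        if None in self.centers:
            return None
        return self.centers[1] - self.centers[0]
```

One obvious refinement, once both bins have data, would be to assign each new reading to whichever running center it's closer to, instead of always measuring against the first bin's center.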
I've read about kernel density estimation, and the math there seems overwhelming, frankly. I'm wondering if people have any ideas about how my algorithm would fail or how it could be improved.
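If I ever do try the KDE route, SciPy's gaussian_kde would apparently do the heavy math for me. Something like this sketch (assuming a batch of readings has already been collected and both populations are present in it), where the dividing line is just the valley between the two peaks:

```python
import numpy as np
from scipy.stats import gaussian_kde

def find_threshold(readings, grid_points=512):
    """Hypothetical helper: locate the valley between the two current peaks."""
    readings = np.asarray(readings, dtype=float)
    kde = gaussian_kde(readings)          # bandwidth chosen automatically (Scott's rule)
    grid = np.linspace(readings.min(), readings.max(), grid_points)
    density = kde(grid)
    # Local maxima of the smoothed density; keep the two tallest as "the peaks".
    maxima = [i for i in range(1, grid_points - 1)
              if density[i] > density[i - 1] and density[i] >= density[i + 1]]
    if len(maxima) < 2:
        raise ValueError("expected two peaks, found fewer; not enough data yet?")
    left, right = sorted(sorted(maxima, key=lambda i: density[i])[-2:])
    # The dividing line is the lowest-density point between the two peaks.
    valley = left + int(np.argmin(density[left:right + 1]))
    return grid[valley]
```

New readings would then go into one bin or the other depending on which side of that threshold they fall, and the threshold could be recomputed periodically as temperature drifts the peaks around.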
In other news I went to my mom's house, tore out an old cracked toilet, and installed a new one tonight. The intent was to also pressure-drain the sprinkler system, but the vacuum breakers are misbehaving and I can't keep them closed for long enough to get the whole system emptied. I've never had to deal with this before. Water pressure will keep them closed, but air pressure won't unless it's so high I'm worried about it damaging the sprinkler system. I'd hate to burst lines trying to blow out the water so they don't burst from freezing. (How much alcohol would I have to inject to prevent them freezing?)