• ben_dover@lemmy.world

As someone who has played around with offline speech recognition before: there is a reason why AI assistants only use it for the wake word and process everything else in the cloud. It sucks. It's quite unreliable; you'd have to pronounce things exactly as expected, so you'd need to "train" it on different accents and pronunciations to capture anything properly. The info they could siphon this way is imho limited to a vocabulary of a couple thousand words. That's considerable already, and would allow for proper profiling, but it couldn't capture your interest in something more specific like a Mazda 323F.
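
To illustrate the vocabulary-limit point, here's a rough sketch of what a constrained offline recognizer looks like in practice. It assumes the open-source Vosk library and a pre-downloaded model; the word list, model path and audio file name are made-up examples, not anything a real assistant ships with.

```python
# Minimal sketch of limited-vocabulary offline recognition with Vosk.
# Assumptions: a Vosk model has been downloaded into ./model and
# sample.wav is 16 kHz mono PCM. Words outside the grammar list are
# simply not recognized, which is why on-device vocabularies stay small.
import json
import wave

from vosk import Model, KaldiRecognizer

# The recognizer is constrained to this tiny, hypothetical command set.
VOCABULARY = ["lights", "on", "off", "music", "stop", "volume", "up", "down"]

model = Model("model")  # path to the downloaded Vosk model directory
rec = KaldiRecognizer(model, 16000, json.dumps(VOCABULARY))

with wave.open("sample.wav", "rb") as wf:
    while True:
        data = wf.readframes(4000)
        if len(data) == 0:
            break
        if rec.AcceptWaveform(data):
            # Print each finalized chunk of recognized text.
            print(json.loads(rec.Result()).get("text", ""))

# Flush whatever is left in the decoder.
print(json.loads(rec.FinalResult()).get("text", ""))
```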

But offline speech recognition also requires a fair amount of compute power; on our phones at least, running it continuously would inevitably drain the battery.