Have you ever wondered whether screen sharing could pose a threat to your privacy? Is it truly safe to keep screen sharing active while typing a password, even if the password is masked on-screen? Think about it: during video meetings, we frequently share our screens, giving our audience a real-time view of the characters and symbols as we type them. Some of us don't even bother to stop sharing while typing passwords, believing that since the password is masked (hidden) on the screen, there is no threat to our privacy. While this may not matter to a human audience, a computer vision model observing the screen-sharing session can extract a great deal of information: the precise moment each character is typed, how often we make typing mistakes, and even the delay between one keystroke and the next. These metrics are unique to each of us and can be used to characterize our typing behavior. With them, an adversary can impersonate a victim's typing behavior without installing any additional software or hardware such as a keylogger.

In this presentation, we'll unveil the exploitation algorithms that extract an individual's typing behavior from a recorded screen-sharing video. We'll also demonstrate that an attacker has a staggering 67% chance of mimicking a victim's typing behavior and deceiving a keystroke biometric authentication system to steal the victim's access or identity, using nothing more than a recorded screen-sharing video. Furthermore, we'll show how an attacker could recover a typed password using the mimicked typing pattern. Finally, we'll offer recommendations on how to prevent our keystrokes from being mimicked and stolen, although we believe there is not yet a silver-bullet approach that completely eliminates the risks.
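To make the timing metrics described above concrete, here is a minimal illustrative sketch, not the tooling presented in the talk, of how keystroke timestamps recovered from a screen recording could be turned into a simple typing profile. The `KeyEvent` structure, field names, and statistics chosen are assumptions for illustration only; the actual extraction pipeline from video frames is not shown.

```python
# Illustrative sketch: turning keystroke timestamps recovered from a screen
# recording into a simple typing-behaviour profile. Names are hypothetical.
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class KeyEvent:
    char: str         # character (or mask symbol) observed appearing on screen
    timestamp: float  # video time, in seconds, when it appeared


def inter_key_delays(events: list[KeyEvent]) -> list[float]:
    """Flight times: delays between consecutive observed keystrokes."""
    return [b.timestamp - a.timestamp for a, b in zip(events, events[1:])]


def typing_profile(events: list[KeyEvent]) -> dict[str, float]:
    """Summarise timing statistics that a keystroke biometric system might use."""
    delays = inter_key_delays(events)
    backspaces = sum(1 for e in events if e.char == "\b")  # typing mistakes
    return {
        "mean_delay": mean(delays),
        "delay_stdev": stdev(delays) if len(delays) > 1 else 0.0,
        "error_rate": backspaces / len(events),
    }


if __name__ == "__main__":
    # Timestamps as they might be recovered frame by frame from a recording.
    demo = [KeyEvent("h", 0.00), KeyEvent("e", 0.18), KeyEvent("l", 0.31),
            KeyEvent("l", 0.42), KeyEvent("o", 0.61)]
    print(typing_profile(demo))
```

A profile of this kind, built purely from what appears on a shared screen, is the sort of per-user fingerprint an attacker could attempt to replay against a keystroke biometric system.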