
A Conversation With Alisa Esage, a Russian Hacker Who Had Her Company Sanctioned After the 2016 Election

Editor's Note: In December 2016, then-President Barack Obama signed an executive order that announced sanctions on Russian individuals and organizations in response to election interference efforts.

The list included several notorious hackers, as well as Russia's Federal Security Service (FSB) and Main Intelligence Directorate (GRU). Also on the list was a lesser-known organization that left many puzzled: ZOR Security. The company was founded by Alisa Shevchenko, who worked as a virus analyst at Kaspersky Lab for several years and was passionate about building a co-working space for hackers and computer geeks. She had also been credited by the U.S. Department of Homeland Security for assisting Schneider Electric in finding vulnerabilities in its software.

According to the announcement, Shevchenko's company was a supplier to the GRU, the group said to be behind the hack on the Democratic National Committee and other political organizations. Shevchenko, who also goes by Alisa Esage, said U.S. authorities were mistaken, and that she had already closed her company. She currently works as an independent researcher, and is the founder of the Zero-Day Engineering project, which shares technical knowledge and training on software vulnerability research.

Shevchenko talked to Recorded Future cyber threat intelligence expert Dmitry Smilyanets last week about her experience in 2016, her favorite vulnerabilities, and what it's like to be a hacker in Russia. The conversation below, conducted in English, has been lightly edited for clarity:

Dmitry Smilyanets: What was your reaction when you found out the U.S. government imposed sanctions on your company, ZOR Security?

Alisa Esage: I tried to keep calm and prevent the major press from portraying me as a kind of dangerous evil bitch that masterminded the hack of the century. Meanwhile, I kept working on my projects.


DS: How did it change your life?

AE: I guess it made me a specialist in thriving under U.S. sanctions.

DS: You said authorities were mistaken—in what way?

AE: I don't care about this anymore. People have the right to make mistakes. And the U.S. government likes to sanction everyone to assert power where they don't have it, so let them.

DS: Do you still feel the pressure of sanctions?

AE: I definitely feel that something is happening, but I wouldn't call it pressure. If somebody can't work with me or benefit from my products because they fear the U.S. government, it's their own problem.

DS: How did you get into cybersecurity and hacking?

AE: It all started with my father, Andrey. He was a talented electronics engineer, one of the first people in Russia to assemble personal computers as a hobby when they were not generally available, using spare parts and articles published in foreign technical magazines. He taught me to solder when I was 5 years old, I think. So I started reading books about computers and programming in early school and taught myself to code in C++ and x86 assembly language as soon as I got a PC at age 15. While still in school I learned reverse-engineering, solved 'crackme' challenges, hacked PC games and coded 'keygens' for fun, and participated in the Russian and international underground hacking scene, mostly on the side of low-level software cracking. That was the beginning of it.

DS: Do you think it's getting easier or more difficult for people to follow in your path?

AE: It's a bit of both. When I was getting started, there was very little information available on computer security specifically. The internet was slow, and only a handful of publications could help you, aside from some generic books on computer programming. I recall there was a website named... something about ethical hacking, presumably run by a woman; the articles published on it inspired me and helped me get started. Nowadays it's totally different. The information is abundant. There are thousands of public sources on all topics in computer security, from tutorials for beginners to professional deep technical trainings on advanced topics (like the ones that I teach, offered by my company Zero Day Engineering). This is the easy side.

On the other hand, computer technologies are getting more and more complex, technological developments are accelerating, learning curves are steeper, and as a result it's getting harder to achieve high levels of expertise in this field. It takes more time—years and decades. Remember the old assumption, "hacker equals teenager," and that a hacking career ought to wrap up before one's 30th birthday? So untrue today. The majority of the best hackers I know are well into their 30s, and they are just getting started. This is what it takes to gain true mastery in advanced hacking of modern computer systems: a lifetime of dedication.

"For women [hackers] specifically: don't listen to anyone, and keep doing what you love."

— Alisa Esage

Either way, I think that reaching truly advanced levels on this path is more of a destiny than a choice. It takes a very peculiar mindset that seems to be based on genetic data, among other factors, plus special circumstances in life to keep doing the impossible every day. There are plenty of easier ways to thrive and succeed in modern human society than pitching your brain against yet another hardened system. You know that you're in it when you can't live without it. And when it happens—congratulations, there is no turning back.

DS: How would you advise other young women to get into a career like yours? What are the pros and cons of being a hacker?

AE: Find something that excites you, pick a specific task, and solve it until completion. Reflect and repeat. Solving technical tasks and completing them is essential to progress on this path, while passive consumption of information would mostly litter your brain and disrupt your technical creativity. Dedicate yourself to practice, read mostly primary sources (such as technical specifications, classical books, and high-quality blogs from researchers whose work you admire). It's the same advice for both women and men.

For women specifically: don't listen to anyone, and keep doing what you love. Especially if you already know that deep technical work is your passion, don't let anyone sidetrack you into management, lecturing, report-writing, marketing, or other supportive roles in the cyber industry. Men and women definitely face different challenges on this career path, though it does get better over time (much more slowly, and to a lesser extent, than is commonly suggested).

Pros and cons? It's risky and fun, obviously.

DS: What do you like most about your line of work? What about your least favorite parts?

AE: I like the uncertainty, the adversarial environment, hunting for bug bounties, and solving hard challenges. I like that my work is valued extremely high, so that I can earn a high-end annual salary in three days with my brain alone, while sitting on the beach in pajamas. I dislike that I have to compete with many smart guys, who comprise a dominant majority in this space. Men and women were not made to compete with each other.

DS: What is the coolest vulnerability you ever found?

AE: A DLL hijacking remote code execution bug in Outlook Express on Windows XP. It was a long time ago, when Windows XP was still in use on every personal computer and not just on ATMs and POS terminals like nowadays. Early in my career in vuln research, I developed a proprietary methodology to discover zero-day bugs of this class and found many issues, including the one in OE. For many years after I found it, I used to check whether it was still unpatched, and it was. Nowadays it's probably a "forever-0day," since XP is EoL. Long-lived and reliable exploits like that are always fascinating.

Nowadays, DLL hijacking issues (also known as insecure library loading) are still very prominent. For instance, a bug of this type was recently patched in Zoom Client for Windows. It's trivial to find and trivial to exploit. The fact that world-dominating software developers still allow such trivial programming mistakes indicates a severe lack of vulnerability awareness in the global software industry. It's one of the reasons why my company is developing specialized vulnerability intelligence feeds that (aside from our trainings) should help bridge this knowledge gap and make the Internet more secure for everyone.
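[Editor's note: The interview does not include code, but for readers unfamiliar with the bug class Esage describes, the sketch below illustrates the general insecure library loading pattern on Windows in C. It is a minimal, hypothetical example, not code from her research: the "helper.dll" name is invented, and the hardened variant relies on the standard LOAD_LIBRARY_SEARCH_SYSTEM32 flag available on updated Windows versions.]

    /* Hypothetical sketch of insecure library loading (DLL hijacking).
     * Not from the interview; "helper.dll" is an invented name. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Vulnerable pattern: loading a DLL by bare name lets the loader
         * search attacker-influenced directories (e.g., the current working
         * directory in older configurations) before trusted locations, so a
         * planted "helper.dll" can be loaded instead of the intended one. */
        HMODULE bad = LoadLibraryA("helper.dll");

        /* Hardened pattern: restrict the search to the system directory
         * (requires a Windows version/update that supports this flag). */
        HMODULE good = LoadLibraryExA("helper.dll", NULL,
                                      LOAD_LIBRARY_SEARCH_SYSTEM32);

        printf("bare-name load: %p, restricted load: %p\n",
               (void *)bad, (void *)good);
        return 0;
    }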

DS: In your opinion, what's the easiest way to compromise an organization in 2021?


AE: There is no such way, generally. Organizations are not uniform in their levels of security. What's truly important to realize is that there exists a spectrum of offensive cyber technologies that may be leveraged to compromise an organization, from trivial to advanced. Attackers will typically go for the weakest link in the system, the one that can be broken with the easiest tech in the spectrum. By contrast, my professional choice is to specialize in the most advanced end of that spectrum—zero-day vulnerabilities and exploits, especially in hardened systems—because once you can solve a hard problem, everything else in the spectrum is a piece of cake. Also, because it's kind of unstoppable. You can train your employees to avoid clicking on phishing that brings ransomware, and you can establish a proper corporate policy to block insiders who deliver supply chain implants and leak your business secrets, but you can't avoid technology altogether. And technological systems have bugs that may be leveraged for arbitrary code execution. And then it's game over.

"I like that my work is valued extremely high, so that I can earn a high-end annual salary in three days with my brain alone, while sitting on the beach in pajamas."

Another key point to realize is that, regardless of the compromise scenario, the root cause is always a vulnerability: either a human vulnerability or a technical one. I have noticed that many analytical publications about cybersecurity compromises tend to obscure this fact, and fail to point out the culprit vulnerability (or vulnerabilities) while explaining in detail various peripheral, non-essential, and last-stage techniques involved in the process. The general trend nowadays is to eliminate the human factor from security equations, so purely technical bugs will gain more and more importance in the long run.

DS: Please tell us about your new adventure, Zero Day Engineering.

AE: Zero-day exploit engineering is my favorite e-sport. I can't live without it, as it gives my brain all the food that it needs, which I struggled to find in other lines of work. I had no other choice than to create a public venture based on it. It's been just one month since I officially announced it, and it's already my favorite pet project. It turns out that it effectively wraps up all the knowledge and experience that I gained in two decades of work, both in deep technical practice and in entrepreneurship.

Businesswise, the general idea is to offer single-source cyber threat intelligence with a very specific focus, low-level vulnerabilities in computer systems, to a variety of target audiences: individual technical specialists, cybersecurity firms, software vendors, and governments. All our research intelligence offerings will be based primarily on original, in-lab deep technical research rather than collected from external sources. While I'm still figuring out the specific commercial and open-source offerings that the industry is ready to embrace at this stage, the general idea has already proved to be highly profitable with zero capital investment (via specialized deep technical trainings for individuals), and I see a lot of potential for future development.

For more details, I'll just quote the website: "Regardless of the fashionable cyber buzzword of the moment, there exist only two possible root causes of any security compromise: technical systems vulnerabilities, and human vulnerabilities. As global trends in technology move to eliminate human factors while accelerating technical ones, deep expertise on technical systems vulnerabilities is becoming a critical point in all technological developments. The answer to this is knowledge. Instead of leveraging our expertise to build yet another defensive system that we know first hand will be attacked and bypassed, we develop raw intelligence products that systematically bridge the primordial knowledge gap which enables the existence of vulnerabilities and exploits in the first place."

DS: I love that you mention "different code, same bug patterns" in your blog post. How much overlap do you generally see in hypervisor-specific bugs (e.g. one hypervisor to the next) and how much of this do you think is due to the relative obscurity of the hypervisor to most researchers, versus the slower patch cycle as compared with more "popular" systems like Windows?

"I always laugh at occasional media propaganda that tries to link software exploits to killings; it's just ridiculous. No computer exploit would ever inflict more damage than old-school physical weapons that they tend to replace."

AE: There is a lot of vulnerability overlap in different systems of the same class, in general. This is not specific to hypervisors. For instance, OS kernels and JavaScript engines display this trend as well. The root cause of this phenomenon is that systems of the same class are based on the same abstract models, as dictated by the system's functions, use cases, and deployment scenarios, even when they are developed independently and in different time periods. In turn, the same abstract models suggest the same wrong assumptions made by system designers and coders, unless they were specifically trained in modern code security. This is where a partial vulnerability overlap may be observed.

With respect to software security, this is super important, if not obvious. It essentially means that any information about security issues in a given software system is directly relevant to all other systems in the same class, and even to many unrelated systems (though to a lesser extent). For instance, if you are a software developer, then learning about security bugs in competing software products is an instantly actionable and trivial way to harden your own code base, or at least to avoid public shame when somebody finds a bug in your code that was first demonstrated in a competing product 10-20 years ago (no kidding, my training is full of specific case studies of this sad situation). In offensive security workflows, learning about bugs in one system (usually an open-source one) instantly maps out insecurity patterns in another, proprietary one, for which direct threat modeling is hard. Generally speaking, a great many technical security challenges can be solved or optimized by leveraging this not-so-obvious overlap, if you are able to observe it systematically.
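[Editor's note: To make the "same abstract models, same wrong assumptions" idea concrete, here is a small, invented C sketch of one bug pattern that recurs across independently written systems of the same class, such as virtual device emulation code that trusts a guest-supplied length. The structure and function names are hypothetical and are not drawn from any specific hypervisor or from Esage's work.]

    /* Invented example of a recurring bug pattern: trusting an
     * externally controlled length field before a copy. */
    #include <stdint.h>
    #include <string.h>

    #define RING_BUF_SIZE 256

    struct vdev {
        uint8_t ring[RING_BUF_SIZE];   /* host-side buffer for the device */
    };

    /* Vulnerable: assumes the guest-reported length fits the host buffer,
     * so a malicious guest can overflow d->ring. */
    void vdev_dma_write_vuln(struct vdev *d, const uint8_t *data, uint32_t len)
    {
        memcpy(d->ring, data, len);        /* no bounds check */
    }

    /* Hardened: validate guest-controlled sizes before copying. */
    int vdev_dma_write_safe(struct vdev *d, const uint8_t *data, uint32_t len)
    {
        if (len > RING_BUF_SIZE)
            return -1;                     /* reject oversized request */
        memcpy(d->ring, data, len);
        return 0;
    }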

DS: You mention both the systems that rarely change in a code base and the rapid addition of new code as areas of opportunity for exploitation. Can you talk about the relative balance of both of these as it pertains to creating exploits that have longevity versus the potential to inflict damage?

AE: The blog post you mention makes a slightly different point: the importance of studying deep core system internals, as they are key to constructing reusable advanced exploit primitives due to their relative stability, since many peripheral subsystems depend on them. Also, an exploit's shelf life is not necessarily in opposition to its impact potential.

With respect to an exploit's longevity, only one metric is key: how hard it is to find the bug. Since the vulnerability discovery industry is an adversarial environment, the main impact on an exploit's longevity comes from the actions of the competition. The less competition, the longer a bug's shelf life.

This metric can be further expanded into a system of variables—such as information availability, the scope of knowledge required to solve a task, specialized skills required, specialized equipment required, affordability of toolsets, various properties of the target software vendor's security response, and so on—which pertains separately to each stage of the offensive workflow. With respect to serious strategic thinking on vulnerability R&D, it can get incredibly complex. For most common purposes, however, all that complexity can be reduced into a simple rule: get as deep as you can in the most obscure system that makes sense. This is the point that partially overlaps with deep system internals, but is not restricted to them specifically.

As opposed to deep system internals, newly added superficial code tends to contain shallow critical bugs that are easy to find and exploit. Due to the adversarial nature of the vulnerability research industry, those bugs would be found and patched relatively quickly (at least in popular and critical systems), which makes exploits based on them short-lived. Which is not necessarily a bad thing.

Finally, the potential of an exploit to inflict damage has nothing to do with any of the above technical metrics. I always laugh at occasional media propaganda that tries to link software exploits to killings; it's just ridiculous. No computer exploit will ever inflict more damage than the old-school physical weapons it tends to replace. Abstracting away from ethical and emotional connotations, any exploit does have a certain "power potential," which is initially neutral and depends on many factors. The key factor is whose hands an exploit lands in, rather than its technical properties.

"One thing that I noticed is that none of my friends from Russia are rushing to escape to foreign lands these days, as it was a general trend in the 2000s. My sister even returned back to Russia after some time living in Europe. This is the only indicator that I need to know that my motherland is doing all right."

DS: What's it like working with government agencies in Russia? Are they professional? Skillful? Bureaucratic?

AE: I hear that they are competent and well-paying, as long as you are willing to sacrifice an international career.

DS: What do you think of Putin? How do Russian hackers in general feel about his policies and administration?

AE: I haven't been introduced to Putin yet. I don't think about him, and I am not concerned about what others think. One thing I have noticed is that none of my friends from Russia are rushing to escape to foreign lands these days, as was the general trend in the 2000s. My sister even returned to Russia after some time living in Europe. This is the only indicator I need to know that my motherland is doing all right. Based on that, I guess one could assume that Putin works for Russia.

DS: Tell me a secret: what happened to the @badd1e username that everyone liked so much?

AE: It's not a secret. It wasn't good for my Twitter purposes, so I changed it to @alisaesage. I kept it on my GitHub page, though—for history's sake.


Dmitry Smilyanets

Mission-driven, Russian-speaking intelligence analyst with a type A personality. Dmitry has twenty years of experience and expertise in cybercrime activity, including time as a member of an elite Russia-based hacking organization.