Today’s Top Public Cloud Security Threats …And How to Thwart Them Fri, 21 Jun 2019 11:02:00 -0500 Many enterprises today have inadvertently exposed proprietary information by failing to properly secure data stored in public cloud environments like AWS, Azure, and GCP. And while cloud computing has streamlined many business processes, it can also create a security nightmare when mismanaged. A simple misconfiguration or human error can compromise the security of your organization's entire cloud environment.

Whether your whole business or small portions operate in the cloud, it’s imperative to understand the cloud-specific threats facing your organization in order to find creative and impactful solutions for remediation and protection. Let’s start by walking through the top security challenges in the cloud today to gain a better understanding of this complicated and ever-evolving landscape.

Top Security Challenges in the Cloud

Top threat: Phishing

Phishing is very popular in the cloud today. It is often deployed using PDF decoys hosted in the public cloud that arrive as email attachments and claim to contain legitimate content, such as an invoice or employee directory. Because the malicious pages are hosted in the public cloud, they fool users into thinking they are dealing with a legitimate entity, such as Microsoft, AWS, or Google. Once received, such content is often saved to cloud storage services like Google Drive, and as those attachments are shared, the malware propagates within the organization, leading to cloud phishing fan-out. In a matter of minutes, a legitimate user's account can be compromised and used as part of a phishing campaign, which is far harder to detect and mitigate.

Top threat: Cryptojacking

Cryptojacking occurs when a nefarious actor uses your public cloud compute resources without your authorization. Such attacks are indifferent to device type, service, or OS, making them especially dangerous. What’s more, because such attacks usually appear to be coming from legitimate users, they often go undetected for quite some time, allowing the actors to execute a number of attacks under the radar.
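Because cryptojacked workloads usually phone home to a mining pool, one crude network-level heuristic is to watch outbound DNS lookups for known pool domains. The sketch below is illustrative only: the suffix list is a tiny hypothetical sample, and a real deployment would rely on a maintained threat-intelligence feed.

```python
# Illustrative heuristic for spotting cryptojacking by its network behavior:
# flag hostnames that match cryptocurrency mining-pool domains.
# The suffix list is a small hypothetical sample, not a real blocklist.
MINING_POOL_SUFFIXES = (
    "minexmr.com",
    "nanopool.org",
    "supportxmr.com",
)

def flag_mining_connections(hostnames):
    """Return the hostnames that match a mining-pool domain suffix."""
    return [
        h for h in hostnames
        if any(h == s or h.endswith("." + s) for s in MINING_POOL_SUFFIXES)
    ]
```

Feeding such a function the hostnames resolved by your cloud instances is one cheap way to surface miners that otherwise blend in with legitimate compute load.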

A deeper understanding of these threats is critical, but it doesn’t solve the problem. So, where do we go from here? Below are my recommendations on steps for combating the above risks (and others) in the cloud.

Recommendations for Better Cloud Security

Assess Your Risk Exposure

Organizations must deploy a real-time visibility and control solution for sanctioned and unsanctioned accounts to continuously assess the security posture of those accounts and provide visibility into what is going on in your IaaS accounts. You must also track admin activity using logging services like Amazon CloudTrail and Azure Operational Insights to gather logs about everything happening in an environment. Additionally, consider deploying an IaaS-ready DLP solution to prevent sensitive data loss in web-facing storage services, like AWS S3 and Azure Blob. And lastly, get real-time threat and malware detection and remediation for IaaS, SaaS, and web. It is imperative to continuously monitor and audit your IaaS security configuration to ensure compliance with standards and best practices, and to make sure the bad guys do not slip in and fly under the radar.
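To make the continuous-assessment idea concrete, here is a minimal sketch of the kind of ACL check an S3 posture audit might perform. The grant dictionaries mirror the shape of boto3's get_bucket_acl() response, but the function runs against sample data rather than a live account, so treat it as an assumption-laden illustration rather than a complete auditing tool.

```python
# Sketch of the ACL check an IaaS posture-assessment tool might run.
# The grant dictionaries mirror the shape of boto3's get_bucket_acl()
# response, but the data fed in here is illustrative sample data.
PUBLIC_GRANTEE_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def is_public_grant(grant):
    """True if an ACL grant exposes the bucket beyond the owning account."""
    grantee = grant.get("Grantee", {})
    return grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GRANTEE_URIS

def public_buckets(acls_by_bucket):
    """Given {bucket_name: [grants]}, return the publicly exposed buckets."""
    return [
        name for name, grants in acls_by_bucket.items()
        if any(is_public_grant(g) for g in grants)
    ]
```

Run on every account on a schedule, a check like this turns "assess your risk exposure" from an aspiration into an alert.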

Protect Sensitive Data from Insider Threats

While it sounds like common sense, many of today’s breaches occur when a user either intentionally or inadvertently shares sensitive information that compromises the security of an organization. To combat this, it’s important to educate all employees about the risks associated with doing business in the cloud. Warn users against opening untrusted attachments and executing files. Teach employees to verify the domains of links and identify common object store domains. Deploy real-time visibility and control solutions, as well as threat and malware detection solutions to monitor, detect, and remediate nefarious activity. And lastly, scan for sensitive content and apply cloud DLP policies to prevent unauthorized activity, especially from unsanctioned cloud apps. People are often the weakest link, and proper training and education should be a priority for your business.
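Recognizing common object-store domains, whether by a trained user or an automated mail filter, can be as simple as a suffix check. The suffix list below is a small, deliberately incomplete sample used purely for illustration.

```python
from urllib.parse import urlparse

# A small, deliberately incomplete sample of public object-store domains;
# a production filter would maintain a much longer, curated list.
OBJECT_STORE_SUFFIXES = (
    "s3.amazonaws.com",
    "blob.core.windows.net",
    "storage.googleapis.com",
)

def hosted_in_object_store(url):
    """True if the link is served from a common public object-store domain."""
    host = urlparse(url).hostname or ""
    return any(host == s or host.endswith("." + s) for s in OBJECT_STORE_SUFFIXES)
```

A mail gateway could use a check like this to flag "invoice" links hosted in anonymous cloud storage for extra scrutiny before they reach users.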

Follow Best Practices

Businesses should leverage compliance standards, such as NIST, CIS, and PCI, to easily benchmark risk and security. A lot of these tools will provide insights and recommendations for how to remediate various violations, but you should still understand that customization is key.

In order to thwart exposure, companies must have the capability to look at all cloud environments and perform assessments of how such resources are secured. And remember, every organization is different, and there is no one-size-fits-all approach to proper protection in the cloud. That said, by better understanding the threat landscape (whether within or outside your organization) and putting the proper tools in place, comprehensive cloud security is, indeed, possible.

About the author: Michael Koyfman is a Principal Global Solution Architect with Netskope. In his role, he advises Netskope customers on best practices for Netskope deployments and on integrating Netskope solutions within customer environments by leveraging the customer's technology ecosystem.

Copyright 2010 Respective Author at Infosec Island
Influence Operation Uses Old News for New Purposes Tue, 18 Jun 2019 10:11:54 -0500 A recently uncovered influence campaign presents old terror news stories as if they were new, likely in an attempt to spread fear and uncertainty, Recorded Future reports.

Dubbed Fishwrap, the operation uses 215 social media accounts that leverage a special family of URL shorteners to track click-through from the posts. At least 10 shortener services are used, all of which run the same code and are hosted on the same commercial infrastructure.

The campaign was identified using a Recorded Future-designed “Snowball” algorithm that allows for the detection of “seed accounts” and the discovery and analysis of additional accounts engaged in an operation.

Fishwrap was initially detected through the automatic tracking of terror events only reported by social media, which led to the identification of around a dozen accounts engaged in spreading old terror news as if it were new. 

Recorded Future’s security researchers then applied the Snowball algorithm to the small set of identified posts, which led them to suspicious activity involving more than a thousand profiles.

To narrow down the activity, the researchers then looked at similarities related to temporal behavior, the domain of the URLs referred to in the accounts’ posts, and account status.

This revealed three different activity periods, with clusters of accounts active between May 2018 and October 2018, between November 2018 and April 2019, and active during the entire time period (May 2018 to April 2019). 

These patterns revealed the launch of a series of accounts in May 2018, many of which were shut down in October 2018, which resulted in new accounts being created only a few weeks later. 
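The temporal bucketing described above can be sketched as a simple classifier over each account's first and last post dates. The window boundaries below mirror the periods in the report, though the exact cut-off dates are assumptions for illustration.

```python
from datetime import date

# Window boundaries assumed from the activity periods described above.
EARLY_END = date(2018, 10, 31)
LATE_START = date(2018, 11, 1)

def classify_activity(first_post, last_post):
    """Bucket an account into one of the three observed activity windows."""
    if last_post <= EARLY_END:
        return "May-Oct 2018"
    if first_post >= LATE_START:
        return "Nov 2018-Apr 2019"
    return "entire period"
```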

Some of the accounts were found to post largely identical URLs. Overall, the researchers identified 215 accounts that posted only links created using 10 domains hosting URL shortener services. Some of the accounts use multiple shorteners, but each of the domains has a fairly large number of accounts referencing it.

Analysis of the HTML code for the 10 URL shorteners, all of which are anonymously registered, reveals that they all appear to track the agents that follow the links, suggesting the actors are measuring the effectiveness of the operation or profiling its “captured audience.”
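To see why a shortener makes such a convenient tracking tool, consider this toy in-memory shortener that records the user agent of every click. It is purely illustrative and is not modeled on the actual Fishwrap services.

```python
import hashlib

class TrackingShortener:
    """Toy in-memory URL shortener that records every resolving agent,
    illustrating how click-through tracking works. Purely illustrative,
    not modeled on any real service."""

    def __init__(self):
        self.urls = {}
        self.clicks = []

    def shorten(self, url):
        # Derive a stable 7-character code from the URL.
        code = hashlib.sha256(url.encode()).hexdigest()[:7]
        self.urls[code] = url
        return code

    def resolve(self, code, user_agent):
        # A real service would also log IP address, referrer, and timestamp.
        self.clicks.append({"code": code, "agent": user_agent})
        return self.urls.get(code)
```

Every redirect passes through the operator's server, so the click log accumulates an audience profile as a free by-product of simply serving the links.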

While a fair percentage of the accounts have been suspended, there has been no general suspension of accounts related to these URL shorteners, likely because they were posting links related to old, but real, terror events. 

Related: Iran-based Social Media Scheme Impersonated Press

Related: How China Exploits Social Media to Influence American Public

Related: Facebook Blocks More Accounts Over Influence Campaigns

Spring Cleaning: Why Companies Must Spring Clean Out Their Social Media Accounts This Season Fri, 14 Jun 2019 12:03:00 -0500 Every year around this time, we collectively decide to open the windows, brush off the dust, and kick the spring season off on a clean foot. But as you are checking off your cleaning to-dos, be sure to add your social media profiles to that list. It’s obvious that social media profiles hold sensitive personal data, but letting that information and unknown followers pile up can put your company, customers and employees at risk.

We live in a world where data privacy is top of mind; in fact, this spring season marks the one-year anniversary of GDPR. Since the law went into effect, we have seen numerous high-profile data breaches make headlines. Now more than ever, businesses have an obligation not only to comply with data privacy laws but to go above and beyond to secure proprietary, sensitive, and consumer data.

So, what can you do to protect your business, customers, and employees from data breaches and information leakage? Here are three tips for cleaning and securing your online data this spring.

#1: Clean what’s yours

You wouldn’t just clean your bedroom and leave the bathroom a mess, would you? Of course not. So, when managing your data, you first need to understand what online assets you own. Whether corporate or personal, start by taking stock of your owned social media accounts, domains, e-commerce sites, and any other digital channels where you or your company has a presence. Not only should you identify what accounts you own, but it’s necessary to review the privacy settings on those accounts. What are you sharing? Who can see your posts? Your locations? Your contact information?

One of the most overlooked ways of protecting your owned accounts is through strong passwords. You should have a unique password for each of your social media accounts, and indeed for every account. Passwords should mix upper- and lowercase letters, numbers, and symbols, and be hard to guess. Be sure to avoid names, soccer players, musicians and fictional characters – according to the U.K. government’s National Cyber Security Centre, these are some of the worst, most hackable passwords.
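If you generate passwords programmatically rather than inventing them, a few lines of Python using the standard secrets module can satisfy those criteria. This is one simple approach among many; a password manager's generator does the same job.

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password mixing cases, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep drawing until every character class is represented.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```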

#2: Clean on behalf of your customers

For corporate channels, keeping owned accounts secure protects your brand’s reputation against impersonators, offensive content and spam. What’s more, it also protects your followers – which includes customers – from being exposed to that malicious content. As customers are more frequently using social media channels to engage with brands before making a purchase or obtaining a service, companies must prioritize retaining trust and loyalty among their customers.

To do so, your organization needs to, let’s say, “polish the windows” and be fully transparent about how the company will use customers’ personal data. And with more state laws replicating the precedent set by GDPR, this transparency will be not only a best practice but a legal requirement.

In addition, you should invest in the identification and remediation of targeted attacks and scams on your customers. This will not only help you gain their trust, but also provide them with ample protection. Finding and removing customer scams – e.g., malware links posted by social accounts impersonating your customer support team – will keep you and your valued customers safe online.

#3: Empower your employees to clean

Easy-to-use tools like Amplify by Hootsuite have turned employees into companies’ greatest brand ambassadors, particularly on social media. This type of promotion is invaluable to marketing teams, but whether on corporate or personal channels, employee use of social media must be addressed by security and marketing teams alike.

This spring, empower your employees to own their own social media cleanliness. Establishing comprehensive education and training programs empowers employees to learn the latest corporate online policies as well as social media security best practices. Traditionally, we find that companies have invested in trainings focused on email or insider threat risks but have neglected social and digital channels.

Don’t wait until next spring to clean again

Although it is best to incorporate social media security best practices into your everyday routine, this spring season make it a point to do a deep dive into your personal and professional social media profiles. Your brand, employees and customers will thank you, and your profiles will have a fresh glow after a long winter.

About the author: David Stuart is a senior director at ZeroFOX with over 12 years of security experience.

Building Modern Security Awareness with Experiences Fri, 14 Jun 2019 11:02:47 -0500 Experiences and events, the way that I define them, are segments of time in which a learner is more actively engaging with an element of your program. At their best, “experiences” should be, well, experiential, requiring active participation rather than passively watching or paging through a Computer-Based Training module.

But that level of immersion isn’t necessary for something to be considered an experience. I generally consider anything like a meeting, a webinar, a lunch-and-learn, a team activity, or even an everyday interaction with a piece of technology to be an event-based experience. The key is that these are situations that people step into and out of. And each of these can be leveraged to create a learning opportunity.

How do we apply this? Let’s look at some examples:

Meetings, Presentations, and Lunch-and-Learns
The best thing about each of these is that they are personal. There is generally not a screen separating the presenter from the participants. The formats are more open and interactive, allowing a greater sense of emotion and shared empathy to exist within the event room.

Yes, you can share great content, but you also have the benefit of directly interacting with your audience. This can help foster a bond of trust between your organization’s employees and the security team. These are great forums for storytelling, “ask me anything” sessions, sharing about seasonal/topical issues, and more.

Special meetings with compelling speakers are always good, but not always necessary. An executive from your organization can also share how security is critical to the organization’s success. You can conduct briefings about security incidents that succeeded or were thwarted. The most important thing is to engage your people. Don’t set up these meetings to talk at them. Talk with them.

You can (and should) also find ways to integrate security messaging and values into regularly occurring meetings throughout the organization that you may not actually be able to participate in. For instance, there is great benefit in sending security talking points to all managers to cover in their team meetings. One benefit of doing this is that the employees hear security messaging from their primary point of motivation (their manager).

Tabletop Exercises
I’m a big fan of tabletop exercises (TTXs). What I like about them is that they are extremely flexible. You can easily create tabletop exercises that last anywhere from a couple minutes (so you can slip them into a team meeting) up to a full day or more. In essence, these are thought exercises structured around a “what if” scenario.

One of the best benefits of a TTX is that it allows your people to mentally rehearse their reactions to scenarios at a time when the stakes aren’t high. Their reactions and answers can be studied, and you can decide how best to augment your training, messaging, and playbooks based on what you are seeing and hearing.

With just a few minutes on Google, you’ll see that there are a lot of good resources out there on how to create tabletop exercises. And what you’ll notice is that many of them come from the emergency preparedness field, because that field is always developing plans and processes for the next big “what if”: everything from hurricanes to pandemics to bombings and more. You can use these resources as a model for creating your own cybersecurity and physical security scenarios.

Rituals
Since rituals exist to embody and sustain an organization’s culture, it can be beneficial to see if you can incorporate some of your security-related messaging or activities into preestablished company rituals. If you have an “all-hands” meeting each morning, see if you can incorporate security updates. Rituals also serve to codify organizational values, such as service. Can you incorporate security messaging into service rituals that already exist? Or perhaps even create new rituals modeled after popular rituals within your organization?

Games
Security-themed games are good for helping your people consider security topics through a different lens. The fun, challenge, and variable rewards associated with games make them effective Trojan horses for embedding messages and habits. Games can be computer-based or physical, like Jeopardy-style quizzes, puzzle solving, card decks with scenarios, carnival-type games, and so on. Above all else, make your games fun, out of the ordinary, and rewarding.

These are only a handful of examples of how to leverage experiences as a way to influence your security culture. Think about your own organization’s culture and then find ways to create immersive, engaging experiences that will resonate with your people. 

About the author: Perry Carpenter is the Chief Evangelist and Strategy Officer for KnowBe4, the provider of the world’s most popular integrated new school security awareness training and simulated phishing platform.

The Promise and Perils of Artificial Intelligence Fri, 14 Jun 2019 10:59:48 -0500 Many companies use artificial intelligence (AI) solutions to combat cyber-attacks. But, how effective are these solutions in this day and age? As of 2019, AI isn’t the magic solution that will remove all cyber threats—as many believe it to be.

Companies working to implement AI algorithms to automate threat detection are on the right track; however, it’s important to also understand that AI and automation are two entirely separate things.

Automation is a rule-based concept: the software does exactly what it has been instructed to do. It is often conflated with machine learning, but the two are not the same. AI, on the other hand, involves software that is trained to learn and adapt based on the data it receives. The fact that software is capable of adapting to changes, especially in a rapidly evolving cyber threat landscape, is very promising. It’s also important to note, however, that AI is still at a very immature stage of its development.

The promise of AI bringing cognition to the realm of software has excited tech enthusiasts for years. The fact remains, however, that it is still software. And we should all know by now that software (particularly web-based software) is vulnerable.

As AI matures over the next few years, we can expect to see a great deal of AI-enabled automation, especially for routine day-to-day provisioning tasks and particularly around SOC operations.

We must not forget that AI technologies are also a double-edged sword, as defenders are not the only ones with access to such capabilities. Attackers who possess the same skills can tip the balance. Thus, with the commoditization of AI, we can expect to see more incidents like the infamous case in which Google’s own speech recognition API was used to bypass its reCAPTCHA mechanism.

Examples such as this lead us to remember that software is only as good as the developers who designed and wrote it. After all, data science is bound by the data that is fed to the algorithms. For critical applications such as those used for medical, law enforcement, and border control purposes, we need to be aware of such pitfalls and actively filter human bias from these systems.

As IT leaders and CIOs build out their AI strategies, software security is a key consideration. Software security is always an important part of any product, whether it is in the development stage or in production, or whether it’s purchased from a vendor—AI is no exception.

When considering the possible applications of AI (health, automotive, robotics, etc.), software security is critically important to the development of AI applications and should remain a concern throughout the application’s lifecycle. And as with all products brought in from third parties, security must be thoroughly vetted before implementation.

Imagine if someone were able to take control of your AI device or software and feed it false answers. Or picture this: an attacker who is able to control the input that your AI needs to process, the very input the AI will act on. For example, an attacker able to manipulate a car’s sensor readings of its surroundings. Feeding wrong information as input would lead to wrong decisions, which can potentially endanger lives. For this reason, the development and usage of AI must be absolutely secure.

Technologies such as interactive application security testing (IAST) allow software developers (including those developing web-based AI applications) to perform security testing during functional testing. IAST solutions help organizations to identify and manage security risks associated with vulnerabilities discovered in running web applications using dynamic testing methods. Through software instrumentation, IAST monitors applications to detect vulnerabilities. This technology is highly compatible with the future of AI.
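IAST itself relies on vendor-specific agents, but the underlying idea of instrumenting a running application can be sketched with a simple decorator that watches a data sink for suspicious input. This toy example only loosely mimics real instrumentation; the token list and the sink function are hypothetical.

```python
import functools

# Tokens a toy "agent" treats as suspicious in a query sink; purely
# illustrative and far simpler than real IAST detection rules.
SUSPICIOUS_TOKENS = ("'", ";", "--")

def monitor_sink(func):
    """Wrap a data sink and record input that looks like injection,
    loosely mimicking how an IAST agent observes a running app."""
    findings = []

    @functools.wraps(func)
    def wrapper(query):
        if any(tok in query for tok in SUSPICIOUS_TOKENS):
            findings.append(query)
        return func(query)

    wrapper.findings = findings
    return wrapper

@monitor_sink
def run_query(query):
    # Stand-in for a real database call.
    return "executed: " + query

run_query("SELECT id FROM users WHERE id = 1")
run_query("SELECT id FROM users WHERE name = '' OR '1'='1'")
```

The point is that detection happens inside the running application during functional testing, rather than by probing it from outside.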

As with all technology, the question comes down to how we apply it in practice. It’s a positive attribute that the industry is concerned about how AI can impact our lives. This should push developers and security teams to be more cautious and to find and implement mechanisms that will help us to avoid catastrophes relating to AI’s decisions and actions. In the end, AI will help us to improve our lives. We, in turn, must ensure that the software doing so is secure.

About the author: Boris Cipot is a senior security engineer at Synopsys. He helps companies of all shapes and sizes to create secure software. Boris joined Synopsys when Black Duck Software was acquired in 2017. He specializes in open source software security, robotics, and artificial intelligence.

Utilising the Benefits of Industrial Robots Securely Wed, 05 Jun 2019 01:36:45 -0500 Jalal Bouhdada, Founder & CEO at Applied Risk, discusses the rise of industrial robotics and how we can increase the cyber resilience of production environments in the future.

It is increasingly likely that a factory worker today will find themselves employed as part of a diverse workforce, one which includes industrial robots. Industrial robotics is rapidly gaining ground, so much so that 4 million commercial robots are expected to be installed in over 50,000 warehouses by 2025; just 4,000 of those warehouses had deployed robots as of 2018. As time goes on, there’s every likelihood that more workplace colleagues will be of the robotic kind.

This is of course having a positive impact in industrial environments. Robots are becoming an integral part of Industry 4.0 and the Industrial Internet of Things (IIoT), helping to boost productivity, streamline operations and improve physical safety. Falling costs and common programming platforms are also helping to accelerate the proliferation of robots in all sectors. But have manufacturers placed enough emphasis on the cybersecurity of their new workforce?

The impact of industrial robotics
The use of robots in industrial environments isn’t actually new. For almost 50 years they’ve been improving the way that we manufacture products and deal with risk in hazardous environments. But we have now reached an important inflection point, and their increased usage comes with some important considerations.

Up until now, much of the attention has been on the physical safety of robots in the workplace, especially when they share space with human co-workers. For example, a new standard is set to be published this year governing when robots should shut down (if approached by a human, for example) and when they are allowed to restart their process. Unfortunately, cyber risk has not received the same level of attention and, although awareness is growing, there is still much work to be done.

The increased risk that autonomous production brings
It may be a relief to learn that, to date, no cyberattack on industrial robots has hit the headlines. But part of the reason is simply that robots haven’t been an attractive target for hackers: only small numbers have been in operation, and it is expensive to get hold of examples on which to develop attacks, so it hasn’t been worth an attacker’s effort.

But as costs decrease and the number of robots in use continues to rise, they are becoming a more tempting target. Researchers have repeatedly shown proof of concept (POC) attacks in which they have been able to take over well-known robots and infect them with ransomware. The potential for physical harm, or at the very least significant business disruption, is troubling.

How to ensure cybersecurity
Robots have proven to be incredibly effective in industrial environments, so security concerns shouldn’t slow the market’s growth. However, as with any other connected technology, there are well-known and proven processes that can improve the state of cybersecurity. Effective planning is one of the most important threat mitigation tools: the principles of “secure by design” mean ensuring security is addressed from the early stages of the design phase and continues as a key consideration at every stage of the development process, yielding a cyber-resilient end product.

Potential purchasers of industrial robotics should also define clear security requirements during the procurement process and conduct a thorough risk assessment of any new robots that they look to deploy. There are experts in the field that can conduct independent tests to ensure that robots and systems are appropriately hardened against attack before they are integrated, and that staff are appropriately trained to understand the risks that could be introduced into the environment through their behaviour.

Vendors, meanwhile, should adopt the “secure development lifecycle” best practices, and ensure they are providing end users with cyber resilient products to be implemented in their business-critical production environments. Cybersecurity must be a priority when designing and building robots, and clear roadmaps for managing upgrades and patches should be well documented and regularly updated.

Industrial robots do promise to improve manufacturing productivity, streamline operations and reduce risk for many organisations. But those benefits won’t be achieved for long if they are not deployed with cybersecurity at their core.

About the author: Jalal Bouhdada is the Founder and Principal ICS Security Consultant at Applied Risk.

On the Horizon: Parasitic Malware Will Feast on Critical Infrastructure Tue, 04 Jun 2019 07:42:18 -0500 Parasitic malware, which seeks to steal processing power, has traditionally targeted computers and mobile devices. In the coming years, this type of malware will evolve to target more powerful, industrial sources of processing power such as Industrial Control Systems (ICS), cloud infrastructures, critical national infrastructure (CNI) and the IoT. The malware’s primary goal will be to feast on processing power, remaining undetected for as long as possible. Services will be significantly disrupted, becoming entirely unresponsive as they have the life sucked out of them.

At the Information Security Forum, we anticipate that unprepared organizations will have a wide, and often unmonitored, attack surface that can be targeted by parasitic malware. They will see infected devices constantly running at full capacity, raising electricity costs and compromising functionality. Systems will degrade, in some cases leading to unexpected failure that halts critical services.

Every organization will be susceptible to parasitic malware. However, environments with high power consumption (such as power stations, water and waste treatment plants and data centers) and those reliant on industrial IoT (such as computerized warehouses, automated factories and smart cities) will become enticing targets for malicious attackers as high-power consumption tends to mask the energy usage of parasitic malware.

What is the Justification for This Threat?

ICS, combined with the increased adoption of IoT devices with greater processing power, will provide new and irresistible targets for parasitic malware. Additionally, smart cities have a high degree of digital adoption and, according to ISACA’s 2018 Smart City survey, are particularly susceptible to malware.

‘Cryptojacking’ is a particularly popular strain of parasitic malware. It is installed on devices and steals processing power in order to illegally mine cryptocurrency. There has been spectacular growth in cases of cryptojacking on computers and mobile devices, and this form of malware is taking over from ransomware as the most prevalent type of malware. Botnets, which also feast on processing power, continue to grow in scale and have already proved to have detrimental impacts on infected devices.

Parasitic malware infections on computers and other devices have already generated significant costs for businesses. Their consumption of computational resources can cause business-critical systems to slow down or stop functioning entirely, with compromised machines even infecting other network-connected devices. Parasitic malware can also exploit often-overlooked security holes in a company’s network. Organizations infected with parasitic malware are also likely to be vulnerable to other exploits and attacks, such as ransomware.
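One crude host-level signal for parasitic load is CPU utilization that stays pinned near its ceiling far longer than normal workloads would sustain. The threshold and window below are assumptions for illustration, not tuned values; any real deployment would calibrate them per environment.

```python
def sustained_high_load(cpu_samples, threshold=90.0, min_run=12):
    """True if utilization stays at or above threshold for min_run
    consecutive samples -- a crude signal that something is quietly
    feasting on the CPU. Threshold and window are illustrative."""
    run = 0
    for sample in cpu_samples:
        run = run + 1 if sample >= threshold else 0
        if run >= min_run:
            return True
    return False
```

Fed with periodic utilization readings from a monitoring agent, a check like this catches the "always at full capacity" symptom described above even when power consumption alone masks it.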

Given the significant power consumption of ICS and its relatively weak security, lack of monitoring and poor patching regimes, it will become the next frontier for parasitic malware. ICS environments often rely on older hardware and low-bandwidth networks. Consequently, even a slight increase in load could leave them unresponsive. Early 2018 saw the first documented cryptojacking malware attack on an ICS network, targeting a water utility in Europe. The attack was detected by chance before the network was compromised. However, it is just a matter of time before there is a successful attack and CNI is impacted by a serious infection.

Cloud infrastructure will also be a target for parasitic malware because it offers an attack surface with large amounts of processing power in an environment where computer resource consumption is difficult to monitor. In February 2018, Tesla found a strain of parasitic malware mining Monero on its AWS cloud servers. Although there was no major impact in this particular case, it indicates the potential for such malware to affect cloud environments.

How Can Your Organization Prepare?

Organizations should start implementing suitable controls to protect against parasitic malware holistically across the business, including areas that have ICS, IoT and cloud deployments.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Copyright 2010 Respective Author at Infosec Island]]>
Thoughts on DoS Attack on US Electric Utility Tue, 04 Jun 2019 07:37:54 -0500 The recent DoS incident affecting power grid control systems in Utah, Wyoming and California was interesting for several reasons.

First, the threat actors did not directly attack the systems that control power generation and distribution for the electrical grid, but rather they disrupted the ability of utility operators to monitor the current status of those systems. The utility industry refers to this type of incident as "loss of view." If an attacker wanted to shut down parts of the grid, one of their first steps might be precisely this step, because it would leave utility operators "blind" to subsequent disruptive actions the attackers would take, such as switching relays off to halt the flow of electricity. In the case of Stuxnet, one of the first known cyberattacks on industrial control systems (ICS), the attackers performed a similar action whereby they fooled the operators into thinking all was fine with their nuclear centrifuges when in fact they were being spun at very high rates in order to damage them.

The second interesting aspect is that the threat actors compromised a networking appliance to cause loss of visibility. We've seen attackers go after network devices in the past, such as in the VPNFilter attacks of 2018, which have been widely-attributed to Russian threat actors. In these attacks, threat actors similarly exploited unpatched vulnerabilities in network devices so they could spy on network traffic, steal credentials, and inject malicious code into the traffic in order to compromise endpoints. These appliances are relatively easy to attack because they are typically directly exposed to the Internet, are difficult to patch, and have no built-in anti-malware capabilities.

The third interesting aspect is that the electric industry is currently the only critical infrastructure vertical in the US to have regulations (called NERC CIP) around minimum cybersecurity standards. (Other verticals, such as oil & gas, chemicals, pharmaceuticals, manufacturing and transportation, do not currently have any cyber regulations in place.) So this incident will likely result in additional scrutiny from regulators, who recently doled out a record $10M fine to a major US utility for multiple incidents of cyber negligence indicating an "ad hoc, informal, inconsistent, chaotic" approach to addressing the regulations, such as neglecting to revoke administrative passwords for employees that had been fired.

What efforts have been made so far to secure the grid?

The NERC CIP regulations were an important first step but have not been updated to include modern security controls such as continuous monitoring to detect suspicious or unauthorized activities in utility networks. Plus they rely on utilities to self-report incidents, which likely leads to under-reporting, since reporting incidents can potentially lead to fines and shareholder lawsuits.

Some people mistakenly believe that the Department of Defense or the FBI are responsible for defending the electrical grid from nation-state attacks. However, 85 percent of the nation's critical infrastructure is owned by the private sector -- and the DoD and DHS/FBI have neither the resources nor the legal standing to defend civilian assets before they’re attacked.

What is the likelihood that a real attack would take down the entire power grid?

It is highly unlikely that attackers could take down the entire US power grid because it has been specifically designed to eliminate any single points of failure. Nevertheless, it is easy to imagine how determined nation-state attackers could target specific population regions to cause major disruption and chaos, as Russian threat actors did with the Ukrainian grid attacks of 2015 and 2016. For example, disrupting power to the Wall Street area or Washington DC, in the middle of winter, would have a major economic and psychological impact on the population, with the potential of causing loss of human lives as well. 

This is not completely theoretical. In March 2018, the US FBI/DHS concluded that since at least March 2016, Russian government cyber actors had targeted and compromised "government entities and multiple U.S. critical infrastructure sectors, including the energy, nuclear, commercial facilities, water, aviation, and critical manufacturing sectors."

About the author: Phil Neray is VP of Industrial Cybersecurity at CyberX, the IoT and ICS security company. Prior to CyberX, Phil held executive roles at IBM Security/Q1 Labs, Symantec, Veracode, and Guardium. Phil began his career as a Schlumberger engineer on oil rigs in South America and as an engineer with Hydro-Quebec.

Copyright 2010 Respective Author at Infosec Island]]>
Network of Fake Social Accounts Serves Iranian Interests Wed, 29 May 2019 07:53:06 -0500 FireEye security researchers have uncovered a network of fake social media accounts that engage in inauthentic behavior and misrepresentation, likely in support of Iranian political interests.

Comprised of fake American personas and accounts impersonating real American individuals, including candidates who ran for House of Representatives seats in 2018, the network might be related to accounts exposed last year.

Most of the accounts were created between April 2018 and March 2019 and used profile pictures taken from various online sources, including photos of real individuals on social media. Most of the accounts in this network appear to have been suspended on or around the evening of May 9, 2019, FireEye says.

Some of the personas posed as activists, correspondents, or freelance journalists, and some of these so-called journalists claimed to belong to specific news organizations, yet the researchers could not identify any individuals with those names working at those organizations.

The accounts promoted anti-Saudi, anti-Israeli, and pro-Palestinian themes. They expressed support for the Iran nuclear deal, opposition to the Trump administration’s designation of Iran’s Islamic Revolutionary Guard Corps (IRGC) as a Foreign Terrorist Organization, and condemnation of U.S. President Trump’s veto of a resolution passed by Congress to end U.S. involvement in the Yemen conflict.

The security researchers also found messages on these accounts that seemingly contradicted their otherwise pro-Iran stances. One account posted tweets almost entirely in line with Iranian political interests, but also messages directed at U.S. President Trump calling for attacks on Iran, which other accounts in the network echoed.

“It is possible that these accounts were seeking to build an audience with views antipathetic to Iran that could then later be targeted with pro-Iranian messaging,” the security researchers note.

FireEye also found “several limited indicators that the network was operated by Iranian actors.” These include older tweets in Persian and of a personal nature (which could suggest that the account was compromised from another individual or repurposed by the same actor), along with the use of Persian as the interface language for one account (most of the accounts had their languages set to English).

Some of the observed Twitter accounts impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. They appropriated the candidates’ photos and, in some cases, even plagiarized tweets from the real individuals’ accounts, but their general activity was similar to that of other accounts in the network.

Some of the personas would also submit letters, guest columns, and blog posts promoting Iranian interests to legitimate print and online media outlets in the U.S. and Israel. Most of the materials were published in small, local U.S. news outlets, but some also appeared in several larger outlets, the security researchers note.

Personas involved in this behavior include John Turner (published on The Times of Israel and U.S.-based site Natural News Blogs), Ed Sullivan (Galveston County, Texas-based The Daily News, the New York Daily News, and the Los Angeles Times), Mathew Obrien (Galveston County’s The Daily News and the Athens, Texas-based Athens Daily Review), Jeremy Watte (The Baytown Sun and the Seattle Times), and Isabelle Kingsly (The Baytown Sun and the Newport News Virginia local paper The Daily Press).

“Personas in the network also engaged in other media-related activity, including criticism and solicitation of mainstream media coverage, and conducting remote video and audio interviews with real U.S. and UK-based individuals while presenting themselves as journalists. One of those latter personas presented as working for a mainstream news outlet,” FireEye reports.

Accounts in the network posted tweets either calling on mainstream media outlets to cover topics aligned with Iranian interests or criticizing them for insufficient coverage of those topics.

“If [the network] is of Iranian origin or supported by Iranian state actors, it would demonstrate that Iranian influence tactics extend well beyond the use of inauthentic news sites and fake social media personas, to also include the impersonation of real individuals on social media and the leveraging of legitimate Western news outlets to disseminate favorable messaging,” FireEye, which continues the investigation into these accounts, concludes.

Related: Iran-Linked Cyberspy Group APT33 Continues Attacks on Saudi Arabia, U.S.

Related: Facebook Takes Down Vast Iran-led Manipulation Campaign

Copyright 2010 Respective Author at Infosec Island]]>
Researchers Analyze the Linux Variant of Winnti Malware Tue, 28 May 2019 10:47:15 -0500 Chronicle, the cybersecurity arm of Google’s parent Alphabet, has identified and analyzed samples of the Winnti malware that have been designed specifically for the Linux platform.

Believed to be operating out of China, the Winnti group was initially discovered in 2012, but is believed to have been operating since at least 2009, targeting software companies, particularly those in the gaming sector, for industrial cyber-espionage purposes.

Recent reports suggested that various Chinese actors might be sharing tools, and the Winnti malware family too might have been used by multiple groups. The threat has been used in numerous attacks, with the most recent ones observed in April 2019.

The Linux version of Winnti, Chronicle’s security researchers reveal, consists of the main backdoor (libxselinux) and a library designed to hide the malicious activity on the infected system.

Like other variants of the malware, the Linux iteration was designed to handle communication with the command and control (C&C) server, as well as the deployment of modules. Commonly deployed plugins support remote command execution, file exfiltration, and SOCKS5 proxying on the host.

The library, a modified copy of the open-source userland rootkit Azazel, registers symbols for multiple commonly used functions and modifies their return values to hide the malware’s operations.

The Winnti-modified version of the rootkit keeps a list of process identifiers and network connections associated with the malware’s activity.

When executed, the Winnti Linux variant’s main backdoor decodes an embedded configuration that is similar in structure to the variant used in the Winnti 2.0 version of the Windows malware (detailed more than five years ago).

The analyzed sample’s configuration included three command-and-control server addresses and two additional strings that Chronicle’s security researchers believe to be campaign designators.

The identified Winnti Linux samples fall under three distinct campaign designators, which ranged from target names and geographic areas to industries and profanity.

The malware uses multiple protocols for outbound communications, including ICMP, HTTP, and custom TCP and UDP protocols, a feature already documented in previous reports.

A function that hasn’t received enough attention, however, allows the operators to initiate a connection directly to an infected host, without requiring a connection to a control server. This ensures that communication is still possible even when access to the hard-coded control servers is disrupted.

“Additionally, the operators could leverage this feature when infecting internet-facing devices in a targeted organization to allow them to reenter a network if evicted from internal hosts. This passive implant approach to network persistence has been previously observed with threat actors like Project Sauron and the Lamberts,” the researchers explain.
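The passive-implant pattern described above can be sketched in miniature: instead of beaconing out to hard-coded control servers, the implant binds a port and waits for the operator to connect in. Everything below (the function name, the callback signature) is an illustrative assumption for explanation, not Winnti’s actual code.

```python
import socket

def passive_listener(bind_addr, port, handle, max_conns=1):
    """Wait for inbound operator connections instead of beaconing out
    to hard-coded C&C servers -- the 'passive implant' pattern.
    `handle(conn, peer)` is a hypothetical per-session callback."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((bind_addr, port))
        srv.listen(1)
        for _ in range(max_conns):
            # Nothing leaves the host until the operator connects,
            # which is why outbound-beacon detection misses this.
            conn, peer = srv.accept()
            with conn:
                handle(conn, peer)
```

Because no outbound traffic is generated until the operator connects, defenses that only watch for beaconing to known C&C addresses never see this channel.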

Winnti-related activity, Chronicle notes, has been intensively analyzed by the security community, which attributed it to different codenamed threat actors that have already demonstrated their expertise in compromising Windows-based environments.

“An expansion into Linux tooling indicates iteration outside of their traditional comfort zone. This may indicate the OS requirements of their intended targets, but it may also be an attempt to take advantage of a security telemetry blind spot in many enterprises, as with Penquin Turla and APT28’s Linux XAgent variant,” Chronicle concludes.

Related: Researchers Link Several State-Sponsored Chinese Spy Groups

Related: Winnti Group Uses GitHub for C&C Communications

Copyright 2010 Respective Author at Infosec Island]]>
BlackWater Campaign Linked to MuddyWater Cyberspies Tue, 21 May 2019 14:48:00 -0500 A recently discovered campaign shows that the cyber-espionage group MuddyWater has updated tactics, techniques and procedures (TTPs) to evade detection, Talos’ security researchers report. 

MuddyWater was first detailed in 2017 and has been highly active throughout 2018. The cyber-spies have been focused mainly on governmental and telco targets in the Middle East (Iraq, Saudi Arabia, Bahrain, Jordan, Turkey and Lebanon) and nearby regions (Azerbaijan, Pakistan and Afghanistan).

The recently observed campaign, which Talos calls BlackWater, aims to install a PowerShell-based backdoor onto the victim’s machine, for remote access. Analyzed samples show that, while the actor made changes to bypass security controls, the underlying code was unchanged. 

Observed modifications include the use of an obfuscated VBA script to establish persistence as a registry key and trigger a PowerShell stager. The stager would connect to the attacker’s server to obtain a component of the open-source FruityC2 agent script to further enumerate the host machine. 

The gathered data is then sent to a different command and control (C&C) server, in the URL field, in another attempt to make host-based detection more difficult. Moreover, recent samples show that the actor aimed to replace some variable strings, likely in an attempt to avoid signature-based detection. 

MuddyWater-associated samples observed in the February - March timeframe revealed that, after achieving persistence, the actor used PowerShell commands for reconnaissance. The samples also contained the IP address of the C&C server. 

These components were found in a Trojanized attachment sent to the victim, which allowed security researchers to easily analyze the attacks by obtaining a copy of the document. 

Activity observed in April, however, “would require a multi-step investigative approach,” Talos noted. A malicious document used last month and believed to be associated with MuddyWater contained a password-protected and obfuscated macro titled "BlackWater.bas". 

The macro contains a PowerShell script to persist in the "Run" registry key, and call the file “SysTextEnc.ini” every 300 seconds. The clear text version of the file, the security researchers say, appears to be a lightweight stager.

The stager would connect to a C&C server at hxxp://38[.]132[.]99[.]167/crf.txt. The clear text version of the crf.txt, Talos says, closely resembles a PowerShell agent previously used by the group. It only shows small changes, likely made to avoid detection. 

PowerShell commands derived from FruityC2 were then used to call Windows Management Instrumentation (WMI) and gather system information such as operating system name, OS architecture, operating system’s caption, domain and username, and the machine’s public IP address. 

The only command that did not call WMI would attempt to obtain the security system’s MD5 hash, which was likely used to uniquely identify the machine in case multiple workstations were compromised within the same network. 
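For illustration, the categories of host data described above (OS name and architecture, username) can be gathered with a few portable standard-library calls. This is only a sketch of the kind of information collected, not the actor’s actual WMI-based PowerShell; the public-IP lookup is omitted since it requires an outbound network request.

```python
import getpass
import platform

def host_profile():
    """Collect the same categories of host data the report describes
    (OS name, version, architecture, current user) using portable
    stdlib calls; the real agent issued WMI queries via PowerShell."""
    return {
        "os_name": platform.system(),       # e.g. 'Windows', 'Linux'
        "os_version": platform.version(),   # OS build/kernel string
        "arch": platform.machine(),         # e.g. 'x86_64', 'AMD64'
        "user": getpass.getuser(),          # logged-in account name
    }
```

Reconnaissance output like this is exactly what defenders should look for leaving the network embedded in otherwise innocuous HTTP requests.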

“Despite last month's report on aspects of the MuddyWater campaign, the group is undeterred and continues to perform operations. Based on these observations, as well as MuddyWater's history of targeting Turkey-based entities, we assess with moderate confidence that this campaign is associated with the MuddyWater threat actor group,” Talos concludes. 

Related: Kaspersky Analyzes Hacking Group's Homegrown Attack Tools

Related: Highly Active MuddyWater Hackers Hit 30 Organizations in 2 Months


Copyright 2010 Respective Author at Infosec Island]]>
Privilege Escalation Flaws Impact Wacom Update Helper Fri, 17 May 2019 09:57:37 -0500 Talos’ security researchers have discovered two security flaws in the Wacom update helper that could be exploited to elevate privileges on a vulnerable system.

The update helper tool is being installed alongside the macOS application for Wacom tablets. Designed for interaction with the tablet, the application can be managed by the user.

What the security researchers have discovered is that an attacker with local access could exploit these vulnerabilities to leverage their privileges to root.

Tracked as CVE-2019-5012 and featuring a CVSS score of 7.8, the first bug was found in the startProcess command of the Wacom update helper service (driver version 6.3.32-3).

The command, Talos explains, takes a user-supplied script argument and executes it under root context. This could allow a user with local access to raise their privileges to root.

The second security flaw is tracked as CVE-2019-5013 and features a CVSS score of 7.1. It was found in the Wacom update helper service in the start/stopLaunchDProcess command.

“The command takes a user-supplied string argument and executes launchctl under root context. A user with local access can use this vulnerability to load arbitrary launchD agents,” Talos reveals.

Attackers looking to target these vulnerabilities would need local access to a vulnerable machine for successful exploitation.

According to the security researchers, multiple versions of the Wacom driver on macOS are affected by these vulnerabilities.

Wacom has already released version 6.3.34, which addresses these bugs.

Related: Cisco Finds Serious Flaws in Sierra Wireless AirLink Devices

Related: Hard-Coded Credentials Found in Alpine Linux Docker Images

Related: Multiple Vulnerabilities Fixed in CUJO Smart Firewall

Copyright 2010 Respective Author at Infosec Island]]>
Answering Tough Questions About Network Metadata and Zeek Wed, 08 May 2019 14:53:49 -0500 We often receive questions about our decision to anchor network visibility to network metadata as well as how we choose and design the algorithmic models to further enrich it for data lakes and even security information and event management (SIEMs).

The story of Goldilocks and the Three Bears offers a pretty good analogy as she stumbles across a cabin in the woods in search of creature comforts that strike her as being just right.

As security operations teams search for the best threat data to analyze in their data lakes, network metadata often lands in the category of being just right.

Here’s what I mean: NetFlow offers incomplete data and was originally conceived to manage network performance. PCAPs are performance-intensive and expensive to store in a way that ensures fidelity in post-forensics investigations. The tradeoffs between NetFlow and PCAPs leave security practitioners in an untenable state.

NetFlow: Too little

As the former Chief of Analysis for US-CERT has recommended: “Many organizations feed a steady stream of Layer 3 or Layer 4 data to their security teams. But what does this data, with its limited context, really tell us about modern attacks? Unfortunately, not much.”

That’s NetFlow.

Originally designed for network performance management and repurposed for security, NetFlow fails when used in forensics scenarios. What’s missing are attributes like port, application, and host context that are foundational to threat hunting and incident investigations.

What if you need to go deep into the connections themselves? How do you know if there are SMBv1 connection attempts, the main infection vector for WannaCry ransomware? You might know if a connection on Port 445 exists between hosts, but how do you see into the connection without protocol-level details?

You can’t. And that’s the problem with NetFlow.

PCAPs: Too much

Used in post-forensic investigations, PCAPs are handy for payload analysis and to reconstruct files to determine the scale and scope of an attack and identify malicious activity.

However, an analysis of full PCAPs in Security Intelligence explains how the simplest networks would require hundreds of terabytes, if not petabytes, of storage for PCAPs.

Because of that – not to mention the exorbitant cost – organizations that rely on PCAPs rarely store more than a week’s worth of data, which is useless when you have a large data lake. A week’s worth of data is also insufficient when you consider that security operations teams often don’t know for weeks or months that they’ve been breached.

Add to that the huge performance degradation – I mean frustratingly slow – when conducting post-forensic investigations across large data sets. Why would anyone pay to store PCAPs in return for lackluster performance?

Network metadata: Just right

The collection and storage of network metadata strikes a balance that is just right for data lakes and SIEMs.

Zeek-formatted metadata gives you the proper balance between network telemetry and price/performance. You get rich, organized and easily searchable data with traffic attributes relevant to security detections and investigation use-cases (e.g. the connection ID attribute).

Metadata also enables security operations teams to craft queries that interrogate the data and lead to deeper investigations. From there, progressively targeted queries can be constructed as more and more attack context is extracted.

And it does so without the performance and big-data limitations common with PCAPs. Network metadata reduces storage requirements by over 99%, compared to PCAPs. And you can selectively store the right PCAPs, requiring them only after metadata-based forensics have pinpointed payload data that is relevant.
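To make the SMBv1 scenario above concrete, here is a minimal sketch of the kind of query Zeek-formatted metadata enables: scanning a conn.log (TSV) for connection attempts to port 445 as a first hunting step. The sample field layout is resolved from the `#fields` header Zeek writes; the log path and any downstream triage are left to the reader.

```python
import io

def smb_candidates(conn_log_text):
    """Scan Zeek conn.log (TSV) text for connections to port 445.

    Returns (ts, orig_h, resp_h, service) tuples. Column names are
    resolved from the '#fields' header line rather than hard-coded
    positions, matching how Zeek logs are structured."""
    fields = []
    hits = []
    for line in io.StringIO(conn_log_text):
        line = line.rstrip("\n")
        if line.startswith("#fields"):
            fields = line.split("\t")[1:]  # drop the '#fields' token
            continue
        if not line or line.startswith("#"):
            continue  # other comment/metadata lines
        row = dict(zip(fields, line.split("\t")))
        if row.get("id.resp_p") == "445":
            hits.append((row["ts"], row["id.orig_h"],
                         row["id.resp_h"], row.get("service", "-")))
    return hits
```

From a hit list like this, progressively targeted queries (by host, by service, by time window) can be constructed as more attack context is extracted.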

The perils of managing your own Bro/Zeek deployment

Another question customers often ask us is whether they should manage their own Bro/Zeek deployments. The answer is best explained through the experience of one of our government customers, which chose to deploy and manage it themselves.

At the time, the rationale was reasonable: Use in-house resources for a one-time, small-scale deployment, and incrementally maintain it with the rest of the infrastructure while providing significant value to their security team.

But over time, it became increasingly untenable:

  • It was difficult to keep it tuned. Each patch or newly released version required the administrator to recompile a binary and redeploy. 
  • It became difficult to scale. While this is partially an architectural decision, sensors can rarely scale by default – especially those that push much of the analytics and processing to the sensor. We don’t see many deployments that can even operate at 3 Gbps per sensor. Over time, the sensors began to drop packets, and the customer suddenly had to architect clusters to support the required processing.
  • It was painfully difficult to manage legions of distributed sensors across multiple geographic locations, especially when sensor configurations were heterogeneous. When administrators who were familiar with the system left, a critical part of their security infrastructure was left unmanaged.

This no-win tradeoff drives many customers to ask us how their security teams can better spend their time. Should they manually administer tools in a self-managed deployment (a.k.a. barely keeping afloat) or focus on being security experts and threat hunters?

In addition to the deployment challenges for those who opt for the self-managed approach, day-to-day operational requirements like system monitoring, system logging and even front-end authentication pose a heavy burden.

Most make the choice to find a partner that can simplify the complexity of such a deployment: Accelerate time to deployment, enable automatic updates that eliminate the need to regularly patch and maintain, and perform continuous system monitoring.

These are default capabilities that free you to focus on the original charter of your security team.

About the author: Kevin Sheu leads product marketing at Vectra. During the past 15 years, he has held executive leadership roles in product marketing and management consulting experience, where he has demonstrated a passion for product innovations and how they are adopted by customers. Kevin previously led growth initiatives at Okta, FireEye and Barracuda Networks.

Copyright 2010 Respective Author at Infosec Island]]>
Qakbot Trojan Updates Persistence, Evasion Mechanism Mon, 06 May 2019 12:11:02 -0500 The Qakbot banking Trojan has updated its persistence mechanism in recent attacks and also received changes that potentially allow it to evade detection, Talos’ security researchers say. 

Also known as Qbot and Quakbot, the Trojan has been around for nearly a decade, and has received a variety of changes over time to remain a persistent threat, although its functionality remained largely unaltered. 

Known for the targeting of businesses to steal login credentials and eventually drain their bank accounts, the malware has received updates to the scheduled task it uses to achieve persistence on the infected systems, which also allows it to evade detection. 

The Trojan typically uses a dropper to compromise a victim’s machine. During the infection process, a scheduled task is created on the victim machine to execute a JavaScript downloader that makes a request to one of several hijacked domains.

A spike in requests to these hijacked domains observed on April 2, 2019 (which follows DNS changes made to them on March 19) suggests that the threat actor has made updates to the persistence mechanism only recently, in preparation for a new campaign. 

The downloader requests the URI "/datacollectionservice[.]php3" from the hijacked domains, whose names are XOR-encrypted at the beginning of the JavaScript. The response is also obfuscated, with the transmitted data saved as (randalpha)_1.zzz and (randalpha)_2.zzz and decrypted using code contained in the JavaScript downloader.

At the same time, a scheduled task is created to execute a batch file. The code reassembles the Qakbot executable from the two .zzz files, using the type command, after which the two .zzz files are deleted. 
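Conceptually, the batch file’s job is simple concatenation: the `type` command joins the two .zzz halves back into one executable and the parts are deleted. A sketch of the equivalent operation (file names are placeholders following the (randalpha)_1.zzz / _2.zzz pattern described above):

```python
from pathlib import Path

def reassemble(part1, part2, out_path):
    """Concatenate two payload halves into one file, then delete the
    parts -- the same effect as `type part1 part2 > out` followed by
    `del` in the batch file described above."""
    data = Path(part1).read_bytes() + Path(part2).read_bytes()
    Path(out_path).write_bytes(data)
    Path(part1).unlink()
    Path(part2).unlink()
    return out_path
```

Because neither half is a complete PE file on its own, scanners that inspect individual downloads in isolation see nothing executable until this final, local step.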

The changes in the infection chain make it more difficult for traditional anti-virus software to detect attacks, and the malware may slip more easily onto the target machine, given that it is now obfuscated and saved in two separate files. 

“Detection that is focused on seeing the full transfer of the malicious executable would likely miss this updated version of Qakbot. Because of this update to persistence mechanisms, the transfer of the malicious Qbot binary will be obfuscated to the point that some security products could miss it,” Talos concludes. 

Related: Qakbot, Emotet Increasingly Targeting Business Users: Microsoft

Related: Qbot Infects Thousands in New Campaign

Copyright 2010 Respective Author at Infosec Island]]>
Flaws in D-Link Cloud Camera Expose Video Streams Mon, 06 May 2019 12:09:02 -0500 Vulnerabilities in the D-Link DCS-2132L cloud camera can be exploited by attackers to tap into video or audio streams, but could also potentially provide full access to the device. 

The main issue with the camera is the fact that no encryption is used when transmitting the video stream. Specifically, both the connection between the camera and the cloud and that between the cloud and the viewing application are unencrypted, thus potentially exposed to man-in-the-middle (MitM) attacks.

The viewer app and the camera communicate through a proxy server on port 2048, using a TCP tunnel based on a custom D-Link tunneling protocol, but only parts of the traffic are encrypted, ESET’s security researchers have discovered. 

In fact, sensitive details such as the requests for camera IP and MAC addresses, version information, video and audio streams, and extensive camera info are left exposed to attackers. The vulnerability resides in the request.c file, which handles HTTP requests to the camera. 

“All HTTP requests are elevated to the admin level, granting a potential attacker full access to the device,” ESET notes.

An attacker able to intercept the network traffic between the viewer app and the cloud or between the cloud and the camera can see the HTTP requests for the video and audio packets. This allows the attacker to reconstruct and replay the stream at any time, or obtain the current audio or video stream. 
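Since the tunnel carries the HTTP requests in plaintext, recovering them requires no cryptanalysis at all, just string parsing of the captured bytes. A toy illustration (the request path here is an invented placeholder, not D-Link’s actual camera API):

```python
def parse_request_line(raw):
    """Extract (method, path) from the first line of a captured
    plaintext HTTP request -- all an eavesdropper needs in order to
    identify which audio/video stream is being fetched."""
    first_line = raw.split(b"\r\n", 1)[0].decode("ascii", "replace")
    method, path, _version = first_line.split(" ", 2)
    return method, path

# Hypothetical captured bytes from the unencrypted tunnel:
captured = b"GET /video.cgi HTTP/1.1\r\nHost: camera\r\n\r\n"
```

With the requests identified, the attacker can index the corresponding response payloads and reconstruct or replay the stream, which is exactly what ESET demonstrated.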

ESET’s security researchers say they were able to obtain the streamed video content in two raw formats. 

Another major issue was found in the “mydlink services” web browser plug-in, which allows users to view video streams. The plug-in manages the creation of the TCP tunnel and the video playback, but is also responsible for forwarding requests for the video and audio data streams through a tunnel. 

The tunnel is available for the entire operating system, meaning that any application or user on the computer can access the camera’s web interface by a simple request (only during the live video streaming).

“No authorization is needed since the HTTP requests to the camera’s webserver are automatically elevated to admin level when accessing it from a localhost IP (viewer app’s localhost is tunneled to camera localhost),” the researchers explain. 

While D-Link has addressed issues with the plug-in, there are still a series of vulnerabilities in the custom D-Link tunneling protocol that provide an attacker with the possibility to replace the legitimate firmware on the device with a maliciously modified one. For that, they would need to replace the video stream GET request with a specific POST request to fetch a bogus firmware update.

The attack, ESET notes, is not trivial to perform and requires dividing the firmware file into blocks with specific headers and of a certain maximum length. However, because the authenticity of the firmware binary is not verified, an attacker could upload one containing cryptocurrency miners, backdoors, spying software, botnets or other Trojans, or they could deliberately “brick” the device.

Other issues the researchers discovered include the fact that D-Link DCS-2132L can set port forwarding to itself on a home router, via the Universal Plug and Play (UPnP) protocol. Thus, it exposes its HTTP interface on port 80 to the Internet without the user even knowing about it. The issue can be mitigated by disabling UPnP. 
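Owners can check for this kind of exposure with a simple TCP reachability test against the camera’s address, ideally from outside the home network. A minimal sketch (the host to test is up to the reader; this only confirms that something answers on the port, not what it is):

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds -- a
    quick check for whether a web interface (e.g. the camera's port
    80) is reachable from this vantage point."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False
```

A True result from an external network for port 80 would mean the UPnP-created forwarding rule is live and the interface is Internet-exposed.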

“Why the camera uses such a hazardous setting is unclear. Currently close to 1,600 D-Link DCS-2132L cameras with exposed port 80 can be found via Shodan, most of them in the United States, Russia and Australia,” the researchers say. 

ESET says it reported the issues to D-Link in August 2018, including vulnerable unencrypted cloud communication, insufficient cloud message authentication and unencrypted LAN communication, but that only some of the flaws have been mitigated, such as the “mydlink services” plug-in, which is now properly secured. The most recent firmware available for the device is dated November 2016. 

“The D-Link DCS-2132L camera is still available on the market. Current owners of the device are advised to check that port 80 isn’t exposed to the public internet and reconsider the use of remote access if the camera is monitoring highly sensitive areas of their household or company,” ESET concludes. 
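Checking whether port 80 is reachable can be done with a few lines of Python. The sketch below is illustrative, not an official ESET or D-Link tool; the IP address shown is a placeholder for your network's public address, and the check must be run from outside your LAN (for example, from a cloud host or a phone on mobile data), since a check from inside the network will not reveal UPnP port forwarding on the router.

```python
import socket

def port_open(host: str, port: int = 80, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # "203.0.113.10" is a placeholder; substitute your network's public IP.
    if port_open("203.0.113.10", 80):
        print("Port 80 is reachable - the camera's HTTP interface may be exposed")
    else:
        print("Port 80 appears closed from this vantage point")
```

An open port 80 does not prove the camera is exposed (another device may answer), but a closed port is a quick confirmation that the UPnP forwarding described above is not active.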

Related: Critical Vulnerabilities Allow Takeover of D-Link Routers

Related: D-Link Patches Code Execution, XSS Flaws in Management Tool

Copyright 2010 Respective Author at Infosec Island
SOAR: Doing More with Less Fri, 26 Apr 2019 04:29:01 -0500 The security orchestration, automation and response model has many benefits, including some that are unintended

Security teams in every industry and vertical are facing a common set of challenges: defending against an endless stream of cyberattacks, managing too many security tools, coping with overwhelming workloads, and contending with a shortage of skilled security analysts. Most enterprises try to solve these challenges the old-fashioned way, by adding more tools and hoping they deliver on their promises.

Progressive enterprises are adopting a new approach, called Security Orchestration, Automation and Response (SOAR), that focuses on making existing technologies work together to align and automate processes. SOAR also frees security teams to focus on mitigating active threats instead of wasting time investigating false positives and performing routine tasks manually.

What is SOAR?

SOAR enables security operations centers (SOCs), computer security incident response teams (CSIRTs) and managed security service providers (MSSPs) to work faster and more efficiently.

Security Orchestration connects disparate security systems as well as complex workflows into a single entity, for enhanced visibility and to automate response actions. Orchestration is accomplished by integrating security tools through their APIs, coordinating alert data streams into workflows.
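As a concrete illustration, the pattern of pulling alerts from one tool, enriching them with context from another, and routing them into a workflow can be sketched as follows. This is a minimal, hypothetical example: the alert fields, the threat-intel set, and the workflow step names are stand-ins for real API integrations, not a specific SOAR product's interface.

```python
# Alerts as they might arrive from a detection tool's API.
SAMPLE_ALERTS = [
    {"id": 1, "src_ip": "198.51.100.7", "type": "malware"},
    {"id": 2, "src_ip": "192.0.2.44", "type": "login_failure"},
]

# In a real deployment this would be fetched from a threat-intel API.
KNOWN_BAD_IPS = {"198.51.100.7"}

def enrich(alert):
    """Add context from a second tool (here, a static threat-intel set)."""
    alert["known_bad"] = alert["src_ip"] in KNOWN_BAD_IPS
    return alert

def route(alert):
    """Decide the next workflow step based on the enriched alert."""
    if alert["known_bad"]:
        return "open_incident"   # e.g. POST to the ticketing system's API
    return "log_and_close"       # low-risk alerts are closed automatically

def orchestrate(alerts):
    """Run every alert through the enrich-then-route workflow."""
    return [(a["id"], route(enrich(dict(a)))) for a in alerts]

print(orchestrate(SAMPLE_ALERTS))
# → [(1, 'open_incident'), (2, 'log_and_close')]
```

The value of orchestration is exactly this glue: each function body would be replaced by an API call to an existing tool, with the workflow logic living in one place.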

Automation, meanwhile, executes multiple processes or workflows without the need for human intervention. It can drastically reduce the time it takes to execute operational workflows, and enables the creation of repeatable processes and tasks.

Instead of performing repetitive, low level manual actions, security analysts can concentrate on investigating verified threats that require human analysis.

Some SOAR approaches even use machine learning to recommend actions based on the responses used in previous incidents.

Three elements make up a successful SOAR implementation:

Collaboration - essential for creating efficient communication flows and knowledge transfer across security teams.

Incident Management - ideally, a single platform will process all inputs from security tools, providing decision-makers with full visibility into the incident management process.

Dashboards and Reporting - provide a comprehensive view of an enterprise’s security infrastructure as well as detailed information for any incident, event, or case.

Implementing SOAR

One of the primary benefits of SOAR is its flexibility. It can be used to unify operations across an enterprise’s entire security ecosystem, or as a vertical solution integrated within an existing product.

For example, one of the most popular product categories for this kind of vertical implementation is Security Information and Event Management (SIEM), primarily because SOAR within a SIEM has broad applicability across a wide range of processes. In contrast, when SOAR is implemented within other product areas, such as Threat Intelligence, it tends to have a more limited scope.

Initially, SOAR was designed for use by SOCs. However, as the approach matured and proved its benefits, other groups adopted it, including MSSPs and CSIRTs. More recently, financial fraud and physical security teams have also turned to SOAR.

Top Five SOAR Benefits

Arguably, the most powerful benefit of SOAR is its ability to integrate with just about any security process or tool already in use, and to enhance the performance and usefulness of each. Tight integration improves security teams’ efficiency in detecting and remediating threats and attacks. It provides a single ‘pane of glass’ into asset databases, helpdesk systems, configuration management systems, and other IT management tools.

SOAR arms security teams with the ability and intelligence to react faster and more decisively to a threat or attack by unifying information from multiple tools and creating a single version of the truth.

Security teams waste an inordinate amount of time and energy dealing with false positives, since so many are generated each day. SOAR automates the triage and assessment of low-level alerts, freeing staff to focus their attention where it is really needed.
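Automated triage typically works by scoring each alert and auto-closing those below a threshold, so analysts only see alerts worth their time. The sketch below is a hedged illustration of that idea; the scoring rules, field names, and threshold are hypothetical, and a production playbook would draw them from the organization's own tuning.

```python
def triage_score(alert):
    """Score an alert with simple, illustrative heuristics."""
    score = 0
    if alert.get("severity") == "high":
        score += 50
    if alert.get("asset_critical"):
        score += 30   # alert touches a business-critical asset
    if alert.get("seen_before"):
        score -= 20   # a repeated, previously benign pattern lowers the score
    return score

def triage(alerts, threshold=40):
    """Split alerts into those escalated to a human and those auto-closed."""
    escalate, auto_close = [], []
    for alert in alerts:
        if triage_score(alert) >= threshold:
            escalate.append(alert)
        else:
            auto_close.append(alert)
    return escalate, auto_close
```

Even a rule set this simple can cut the queue analysts see each morning; the point is that the low-scoring bulk is dispositioned automatically and consistently.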

Security staff spend way too much time on menial tasks such as updating firewall rules, adding new users to the network, and removing those who have left the company. SOAR virtually eliminates such time-consuming, repetitive functions.

Although cutting costs is rarely a driving factor for adopting SOAR, it often delivers this additional benefit by improving efficiencies and staff productivity.

Making existing security tools work together, rather than in silos, delivers greater visibility into threats. Implementing a SOAR model provides the glue to make this security intelligence actionable, using repeatable processes for faster incident response without adding more resources.

About the Author: Michele Zambelli has more than 15 years of experience in security auditing, forensics investigations and incident response. He is CTO at DFLabs, where he is responsible for the long-term technology vision of its security orchestration, automation and response platform, managing R&D and coordinating worldwide teams.

Copyright 2010 Respective Author at Infosec Island