Prelert Takes Home a Silver Stevie Award

Last Friday marked the twelfth annual American Business Awards and Prelert was honored with a Silver Stevie Award in the New Product or Service of the Year - Software - Big Data Solution category. The announcement was made at the organization’s first ever New Product & Tech Awards banquet at the Palace Hotel in (where else but the tech mecca) San Francisco.

The Stevies are the nation’s premier business awards program, and any organization operating in the U.S. is eligible – big, small, for-profit, non-profit, you name it. This year, more than 3,300 nominations were submitted, representing organizations of all sizes and virtually every industry, across a wide range of categories. Winners were selected by more than 240 executives nationwide who participated in the judging process.

Read More

It's Time to Democratize Data Science!

The biggest trend we’ve seen in the analytics industry is both the increase in understanding of data’s value and the desire of executives – who aren’t data scientists – to gain insights from it. This leads us to believe that it’s truly time to democratize data science.

In the early 1990s, HTML was invented, the government opened the internet to private enterprise, and the first internet mail and shopping experiences came online. In that transition from government-funded research network to ‘open to the public,’ the internet was democratized.

It took the intentional development of tools like the Mosaic web browser, Intel’s Pentium processor and Sun Microsystems’ Java to package that technology for widespread consumption. But that started the ball rolling, and today we couldn’t imagine a world that did not have the internet.

Read More

Why What You Don't Know May Hurt You, & How Security Analytics Can Help

As originally published by Infosec Island.

Managing security in today’s enterprise is far different than it was ten to fifteen years ago. In the past, companies were able to set up proxy agents, firewalls and strong virus protection software and feel pretty secure that their company’s information was safe.

However, in today’s world, things have changed. We are no longer dealing with teenage hackers or disgruntled young adults with a political or social ax to grind. The real threat to your security comes from advanced cybercriminal organizations. They are well versed in your typical defenses and spend all their time figuring out ways to bypass them. These are professionals with the skills, knowledge, talent, creativity and motivation to succeed.

If you consider your organization to be a likely target, then it’s a safe bet that your defenses have already been infiltrated – and that it’s only a matter of time until the real theft begins. This means your organization needs to immediately focus on detecting nefarious activities inside of your perimeter.

Read More

How Security Analytics Help Identify and Manage Breaches

As originally published by Help Net Security.

In this interview, Steve Dodson, CTO at Prelert, illustrates the importance of security analytics in today's complex security architectures, talks about the most significant challenges involved in getting usable information from massive data sets, and much more.

How important are security analytics in today's complex security architectures? What are the benefits?

It has become a near 'mission impossible' to totally prevent breaches because of the increasingly large and complex environments security professionals are tasked with protecting. We’re even at the point where many organizations already assume they have been successfully breached by advanced persistent attacks, and in this difficult state of affairs, security analytics are extremely important to help us learn everything we can about our environments and the threats they face.

Read More

Occupy Your Data. Anomaly Detection Stops the Top 1% from Ruling IT.

How much of your data do you actually pay attention to? Would you be surprised to realize it is probably far less than 1%? How about 1% of 1%?

This is the case in the vast majority of IT operations, performance management and security shops of any size anywhere in the world. Do you agree? If not, consider the data that actually describes the performance of any app or service your organization provides. A typical web app involves hundreds if not thousands of components including software, networks, middleware, app servers, databases, etc. Out of all the thousands of things we could look at to understand performance, we typically look at a handful to a few dozen key performance indicators (KPIs). That math works out to around 1% or less.
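The back-of-the-envelope math goes something like this (the component and metric counts below are hypothetical, chosen just to make the arithmetic concrete):

```python
# Hypothetical numbers for a typical web app's monitoring footprint.
components = 2000          # servers, network links, middleware, databases, ...
metrics_per_component = 5  # e.g. CPU, memory, latency, error rate, throughput
kpis_watched = 30          # the "handful to a few dozen" KPIs on the dashboard

total_metrics = components * metrics_per_component
fraction = kpis_watched / total_metrics
print(f"{kpis_watched} of {total_metrics} metrics = {fraction:.2%}")
# 30 of 10000 metrics = 0.30%
```

Even with generous assumptions, the KPIs we actually watch amount to a fraction of a percent of the telemetry available.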

Read More

Data Exfiltration Detection via Behavioral Analysis

There are many possible ways to detect “data exfiltration” (data theft), but in many cases this involves either manual raw data inspection or the application of rules or signatures for specific behavioral violations. An alternative approach is to detect data exfiltration through automated behavioral anomaly detection, using data that you’re probably already collecting and storing, and without the use of a DLP-specific security tool.

The key thing to note when using behavioral anomaly detection to expose data exfiltration is that you are looking for deviations in behavior among users or systems - that is, you’re assuming that users or systems that exfiltrate data act differently than the typical user or system. This would either be a deviation with respect to that item’s own history (temporal deviation) or with respect to others within a population (peer analysis).
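A minimal sketch of those two checks, using simple z-scores over invented per-user byte counts (this is illustrative only, not Prelert's actual model; a real system learns richer baselines than a mean and standard deviation):

```python
import statistics

def zscore(value, baseline):
    """How many standard deviations `value` sits from `baseline`."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0   # guard against zero spread
    return abs(value - mean) / stdev

# Temporal deviation: compare today's byte count to the user's own history.
alice_history = [120, 135, 110, 140]   # MB sent on previous days
alice_today = 125
print("alice temporal z:", round(zscore(alice_today, alice_history), 1))

# Peer analysis: compare each user to the rest of the population.
# Leave-one-out, so an outlier doesn't inflate its own baseline.
today = {"alice": 125, "bob": 205, "carol": 140, "dave": 150, "eve": 5000}
for user, value in today.items():
    peers = [v for u, v in today.items() if u != user]
    if zscore(value, peers) > 3.0:
        print(f"{user}: anomalous vs peers")
```

Here `eve` is flagged by peer analysis even though no rule or signature describes her behavior - exactly the property that makes the approach attractive for exfiltration, where the attacker's specific actions aren't known in advance.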

Read More

The Secret to Fixing Problems Before Users Find Them (part 2)

Review

In part 1 of this post, we talked about the failed paradigm of using thresholds and rules or 'eyeballs on timecharts' to monitor a critical app or service.

Thresholds are notorious for generating 'noise.' It is tremendously difficult to create a sufficiently accurate combination of thresholds and rules to monitor anything but the most egregious indicators of system failure. Some KPIs (key performance indicators), like response time for a standard query, may seem straightforward. One might suspect that this should never be more than, say, 1,000 milliseconds. But you can be pretty much guaranteed that the actual response time will vary widely depending on physical distances, other server workloads and network congestion. As a result, we would generate a large percentage of false positives with such alert conditions. Given the difficulty of defining accurate alert conditions for any KPI, the number chosen to be monitored this way is often exceedingly low.
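A quick simulation makes the point. The response times below are synthetic (a normal distribution with realistic spread), and the comparison is between a fixed 1,000 ms threshold and a baseline learned from the data itself:

```python
import random
import statistics

random.seed(7)

# Synthetic response times (ms) for an ordinary day: mean 600 ms with wide
# natural variation from distance, server load and network congestion.
normal_day = [random.gauss(600, 250) for _ in range(500)]

# A fixed threshold fires on routine variation...
fixed_alerts = sum(1 for t in normal_day if t > 1000)

# ...while a baseline derived from the data only fires on genuine outliers.
mean = statistics.mean(normal_day)
stdev = statistics.stdev(normal_day)
adaptive_alerts = sum(1 for t in normal_day if t > mean + 3 * stdev)

print(f"fixed 1000ms threshold: {fixed_alerts} alerts")
print(f"learned mean+3*stdev baseline: {adaptive_alerts} alerts")
```

On a perfectly normal day the fixed threshold generates dozens of false positives, while the learned baseline stays quiet - which is why hand-set thresholds end up applied to so few KPIs.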

Read More

Choosing bucketSpan Wisely

In a previous blog post about optimizing the performance of the Engine API, I mentioned that choosing the proper bucketSpan can improve performance, and I alluded to bucketSpan also affecting the timeliness and quality of your results. In effect, there is a 3-way balance between performance, timeliness of the results, and quality of the results that I’d like to dig into further here.

  • Quality of results - The choice of bucketSpan provides different “views” into the data. In general, if you want to maximize your detection and minimize your false alarm rate, then choose a bucketSpan roughly equal to the typical duration of an anomaly that you would want to know about. At first, that sounds like generic advice, but let’s look at it within the context of an example: analyzing a log looking for the occurrence rate (count) of events by some error code. Let’s imagine there were 2,880 errors suddenly seen in the span of 1 minute, and then they stopped. This would be highly anomalous and interesting to know about.
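The burst above can be aggregated at a few candidate bucketSpans to see why the choice matters (synthetic timestamps; the Engine API does this aggregation internally):

```python
from collections import Counter

# 2,880 errors arriving within a single minute of an otherwise quiet day.
burst_start = 10 * 3600                               # 10:00, seconds since midnight
events = [burst_start + i % 60 for i in range(2880)]  # all within 1 minute

for span in (60, 3600, 86400):                        # bucketSpan in seconds
    counts = Counter(t // span for t in events)
    peak = max(counts.values())
    print(f"bucketSpan={span:>5}s  peak bucket={peak}  "
          f"avg rate={peak / (span / 60):.1f} events/min")
```

With a 60-second bucketSpan the burst shows up as 2,880 events per minute - wildly anomalous. Smeared across a 1-day bucket, the same events average out to 2 per minute, which may sit well inside normal daily variation, so the anomaly is diluted away.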
Read More

Static code analysis for C++

Static code analysis has long been touted as a must-have for high-quality software. Unfortunately, my experience with it in previous jobs didn't live up to the hype. Within the last few years the majority of compilers have added a built-in static code analysis capability, so I thought it would be interesting to see how good they are.

The two static code analysis tools for C++ that I've integrated into the Prelert build system are those provided with clang and Visual Studio 2013.

Read More

Machine Learning, Anomaly Detection, and the Smart City

Burdened by heavy traffic, a major city worked with us toward its goal of becoming a “smart city.” The city knew they needed to collect metrics and data points related to travel time for cars and buses, accidents, construction zones, and congestion in general. Once this massive amount of data was collected, however, how were they to prioritize which projects to work on first to have the greatest impact in clearing congestion? How could they identify significant increases in journey times, or identify which roads were most significant so they could be sure to clear any accidents there first?

Since the city already calculated incident data and average journey times for a large number of the roads (which they divided into segments for data collection purposes), Prelert was able to easily analyze that data and correlate the journey times with the incident data. This correlated data was then charted such that the incidents were prioritized by impact on journey times, and then displayed in real-time on a map. At a glance, it was clear which traffic incidents and accidents caused the worst congestion at a given time.
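The prioritization step can be sketched as follows. The segment names, baselines and incident records are invented for illustration; the idea is simply to rank each incident by how far it pushes its segment's journey time above that segment's baseline:

```python
# Baseline journey time (minutes) per road segment - hypothetical values.
baseline_minutes = {"A1-north": 12.0, "B2-bridge": 8.0, "C7-tunnel": 15.0}

# Incidents joined with the journey time observed on the affected segment.
incidents = [
    {"id": "acc-101", "segment": "A1-north",  "journey_minutes": 31.0},
    {"id": "con-205", "segment": "B2-bridge", "journey_minutes": 9.5},
    {"id": "acc-318", "segment": "C7-tunnel", "journey_minutes": 44.0},
]

# Impact = observed journey time minus the segment's baseline.
for inc in incidents:
    inc["delay"] = inc["journey_minutes"] - baseline_minutes[inc["segment"]]

# Worst-impact incidents first - the order a dispatcher would clear them in.
for inc in sorted(incidents, key=lambda i: i["delay"], reverse=True):
    print(f'{inc["id"]} on {inc["segment"]}: +{inc["delay"]:.1f} min')
```

Feeding the ranked list onto a real-time map is then a display problem: the correlation has already told you which incidents hurt the most right now.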

Read More

Subscribe to updates