From $16B to $160B: The 100X data future beyond Salesforce/Tableau and Google/Looker

Posted by Graphistry Staff on June 10, 2019

It feels like eye-popping times for those deep into building the future of visual data experiences. With Looker exiting (-> Google for $3B), Tableau exiting (-> Salesforce for $16B), and, less publicly, Periscope and ZoomData exiting, the Graphistry team has been celebrating and reflecting. One key reflection: the $16B exits are just a prelude to the next $160B in opportunities.

Read More

The Future of GPU Analytics Using NVIDIA RAPIDS and Graphistry

Posted by Leo Meyerovich on October 22, 2018

When everything runs on GPUs, we can fundamentally shift the way we experience data analysis much like video moving to HD or shifting from black-and-white to color. What if you could load your full dataset, ask whole-table questions like what are the patterns, and get the answers… immediately? What if you could do that visually, replacing writing queries with simple infinite zoom and direct manipulations down to the level of individual data points? Core analytics areas like security, fraud, operations, and customer 360 are entering this sci-fi-level world of rapid hypothesis iteration.

Running analytics end-to-end on GPUs, all the way from the data warehouse to what’s on screen in your browser, is not easy. Graphistry first brought that experience to investigating event and graph data. Starting from before the RAPIDS team was even officially formed, we have been collaborating with them on how to get these techniques into the hands of all analysts. With the official project announcement of RAPIDS, we thought it would help to share our promising early experiences.

Enter Apache Arrow & GoAi

RAPIDS is one of NVIDIA’s biggest contributions to the GPU Open Analytics Initiative (GoAi), and is poised to become its computational backbone. (We previously overviewed GoAi for the web and visual analytics.) Big data framework developers are shifting to fast data — handling more data at millisecond latencies. Similar to how many SQL analytics tasks moved to distributed Hadoop, and then Hadoop moved to in-memory Spark, we are seeing the rise of in-GPU GoAi. Contributors already include most GPU database developers (OmniSci, BlazingDB, FastData, …), visual analytics developers (Graphistry), and broader data ecosystem OSS companies like Conda.

To make the set of tools work together, GoAi members rallied around Apache Arrow. It is a file format and set of protocols that support in-memory typed dataframes with zero-copy data transfers between tasks and libraries. Clouds let you rent instances with multiple GPUs that have 16GB GPU RAM each, and NVIDIA DGX nodes already store 512GB+ in-GPU RAM. This unlocks running most tasks entirely in the GPU, and as streaming frameworks emerge, nearly everything is fair game.

For a taste of what happens when you switch to streaming of Arrow files between GPUs, the following videos show a before/after of the Graphistry 2.0 engine. The first video shows our original hand-written visual analytics engine: GPUs in the browser, GPUs in the data center, and optimized networking. This year, we rewrote our interop code into Arrow (forming the core of Apache Arrow[JS]): the result is our new visual analytics engine — which runs in any browser — takes much less code, handles about 5X more data, and runs visibly faster:

Graphistry 1.0 Engine

Graphistry 2.0 Engine


Apache Arrow unlocks and speeds up interoperability between analytics tools, and RAPIDS provides convenient GPU IO and compute layers. This can help all the way across the data pipeline: sending data from CPU Spark to GPU frameworks, converting untyped CSVs to typed Arrow, performing tabular operations like filtering, and, under the same family, supporting additional analytics areas like ML and graph. Enterprise-grade GPU analytics tools like Graphistry (visual analytics) and BlazingDB (warehouse interop) are incorporating it as part of a common core that is better than CPU alternatives but not fundamentally differentiating between specific analytic tool categories: RAPIDS is part of the GoAi rising tide.

RAPIDS is still early, but the numbers already look great. As a few examples of core data tasks, on a Titan V single GPU with 12GB GPU RAM and 32GB CPU RAM, similar to a cloud device, we see significant speedups on loading data and a simple cross-filtering task (filtering followed by histogramming). The result is 100M-1B row datasets become interactive!

Test setup:

  • Titan V single GPU, 12GB GPU RAM, 32GB CPU RAM machine
  • Representative of a $1.0/hr AWS P3.2 preemptible
  • IO: Load 100M rows (x 6 floats) or 1.5B rows (x 1 float) of data, as CSV and Arrow
  • Compute: Cross-filter (filter + histogram)
  • Compare CPU (Pandas) with GPU (PyGDF for filtering and Numba for histograms)
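
The cross-filter step itself is simple to sketch. The following is a CPU baseline with Pandas on a small synthetic table (the columns and row count are placeholders, not the benchmark data); the commented lines show how cuDF, which grew out of PyGDF, makes the GPU version a near drop-in swap:

```python
import numpy as np
import pandas as pd

# Small synthetic stand-in for the benchmark table (scale rows up
# toward 100M+ to reproduce the timings; sizes here are placeholders)
n = 100_000
rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.random(n), "y": rng.random(n)})

# Cross-filter: a predicate filter followed by a histogram
filtered = df[df["x"] > 0.5]
counts, _ = np.histogram(filtered["y"], bins=64)

# The GPU version is nearly a drop-in swap:
#   import cudf
#   gdf = cudf.from_pandas(df)
#   filtered = gdf[gdf["x"] > 0.5]
```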


The early results are spectacular — 20-30s computations become subsecond, 100M-1B row datasets become easy… and that is when bursting on just one GPU.

Graphistry + NVIDIA RAPIDS

Think of Graphistry as a UI for accessing RAPIDS tech without coding. Graphistry is the only full-stack GPU visual analytics platform, meaning we use GPUs all the way from your browser to the data center. Over the last year, we have been architecting the platform to use Arrow end-to-end in the pipeline and helping bring similar Arrow-based workflows to the web, with RAPIDS as a big motivator. As new RAPIDS functionality becomes available, it becomes a drop-in replacement along our pipeline. The result is visual analytics users get to leverage RAPIDS — and broader GoAi frameworks — without writing code.
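As a rough sketch of that experience, PyGraphistry can plot a plain dataframe, for example one brought back from a GPU pipeline, as an interactive graph. The edge table below is invented for illustration, and the plotting step itself requires a registered Graphistry server:

```python
import pandas as pd

# Hypothetical edge list, e.g. a cuDF result brought over via .to_pandas()
edges = pd.DataFrame({
    "src": ["10.0.0.1", "10.0.0.2", "10.0.0.2"],
    "dst": ["10.0.0.2", "10.0.0.3", "10.0.0.1"],
})

# With the pygraphistry package installed and a server registered,
# two more lines render the table as an interactive GPU visualization:
#   import graphistry
#   graphistry.bind(source="src", destination="dst").edges(edges).plot()
```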

Our results around GoAi have been raising eyebrows all the way from operational analysts to bank executives. NVIDIA RAPIDS has been a key investment for us, especially in marching to a multi-node, multi-GPU future (a topic for another post). Hard tech startups have to be targeted in the bets they make, and Graphistry is excited to welcome RAPIDS into the GoAi community!

Read More

Graphistry + Bro Logs for Faster IR and Threat Hunting

Posted by Leo Meyerovich on September 20, 2018

Incident responders and threat hunters often face an analytical catch-22. They typically have access to more and higher-fidelity data sources than ever before, yet the volume and complexity of the data can make it hard to see the points that matter.

Analyzing Bro logs is a good case in point. Bro can bring a ton of context and potential paths to pivot through an investigation, but this same wealth of data can quickly become impractical to use in a real investigation. Seeing through this complexity and pivoting to bring in the right context is something graphs excel at in general and Graphistry specializes in for investigations. The video and walkthrough below show how Graphistry can quickly accelerate a common investigation.

Getting Started

Let’s look at a common investigation. Below we are looking at some Bro logs in Splunk, where we see some suspicious downloads that appear to be GIF files but are actually executables. From here we can jump right into the investigation in Graphistry using a deep link from within Splunk. This drops us into a pre-built Graphistry investigation template that can automatically query additional context and data sources.
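The detection behind that starting point can be sketched in a few lines: compare each file's claimed extension against the MIME type Bro derived from its content. The sample rows below are hypothetical, but `application/x-dosexec` is the MIME type Bro/Zeek reports for Windows executables:

```python
import csv
import io

# Hypothetical rows shaped like Bro/Zeek files.log fields
files_log = io.StringIO(
    "fuid\tfilename\tmime_type\n"
    "F1\tbanner.gif\timage/gif\n"
    "F2\tupdate.gif\tapplication/x-dosexec\n"
)

# Flag files whose name claims GIF but whose content-derived
# MIME type says Windows executable
suspicious = [
    row["fuid"]
    for row in csv.DictReader(files_log, delimiter="\t")
    if row["filename"].endswith(".gif")
    and row["mime_type"] == "application/x-dosexec"
]
print(suspicious)  # → ['F2']
```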


Viewing Basic Connections

Once we are in the Graphistry template, we have pre-built pivots that bring in additional context. We can just Run All Pivots and then use the UI to filter the pivot results down to what we want to see.

By looking at the first pivots, we can quickly see all the IP addresses and domains that are associated with our suspect files. In the diagram, each ring shows a type of data (e.g., file hash, IP address, domain), and the key in the bottom right shows what each ring represents.


Enrich and Expand

Next we can start to enrich our info from Bro. In the next few pivots we can pull in data from VirusTotal to see if there are any hits on the suspect files and IP addresses. Below we can see a non-trivial number of hits on our files as well as on the IP addresses associated with them. This gives us a quick and easy way to verify that we are looking at a real incident.


Expand and Hunt

Now that we know that we are looking at a real incident, we might get curious to see if other devices have been communicating with these bad IP addresses. We can enable our final two pivots and focus just on the results of those two data sources to see if we picked up any new hits.


And from here we can quickly see a new IP address with VirusTotal hits that we hadn’t seen before. Now we can continue to pull this thread to find other hosts on the network that may be affected by this same threat and see the full scope of the incident.


Of course, we can continue to pull this thread to expand our search. Hopefully, though, this gives a feel for how we can take a relatively dense set of data, visually surface the relationships we care about, and progressively expand to follow the natural flow of an investigation.

Read More

Using Graphistry and AnChain to Uncover a Massive Ethereum Heist

Posted by Leo Meyerovich on September 5, 2018

Graph visualization has proven to be powerful for investigating almost any type of data, and most recently the team at Graphistry was able to help uncover a massive Ethereum heist on two of the world’s most popular DApps (distributed applications). AnChain and Graphistry recently partnered to investigate the world’s first publicly identified BAPT (Blockchain Advanced Persistent Threat). The investigation identified the BAPT-F3D hacker group, which was responsible for stealing 12,948 ETH (~$4 million) between July and August 2018 from various vulnerable smart contract DApps. As of today, BAPT-F3D is still actively attacking.

Fomo3D and the Airdrop Vulnerability

AnChain, which specializes in security for the blockchain ecosystem, analyzed the wildly popular game “Fomo3D” (the #1 DApp in July 2018) and its copycat “Last Winner” (the #5 DApp in August 2018). These games are DApps based on Ethereum Solidity smart contracts and operate quite openly as Ponzi schemes or exit scams. At a high level, the game works as a lottery, with players buying keys that reset the timer for a round. Keys continue to get more expensive over time, and when the time runs out, the player who bought the last key wins the entire pot.

Additionally, the game included a side-betting opportunity: each time a player buys keys, they have a percentage chance to win an “airdrop” and instantly collect ETH from a growing side pot. The more a player gambles on their chance, the more they stand to win. And this airdrop function is where things got interesting. It contained a vulnerability that allowed coordinated attackers to steal the equivalent of more than $4 million USD across both games in just a few days.

Finding the Industry’s First Blockchain APT

Combining Graphistry’s industry-leading GPU-powered investigation platform with AnChain’s Situational Awareness Platform (SAP), the teams gained a holistic view of the millions of events and over 30,000 addresses related to the games. As a result, the AnChain team was able to identify the first known Blockchain Advanced Persistent Threat (BAPT) in blockchain history, dubbed BAPT-F3D. Further bytecode artifact similarity analysis by SECBIT Labs confirmed that this BAPT group of 5+ addresses is strongly correlated, as likewise seen in the visualization.

Figure: Center white node – main contract; intermediate money sinks seen on the path to APT accounts, identified by anomalous high-volume behavior. Paths with many edges (transactions) are either killchain or benign use, visually separated by their operational behavior.

The SAP was able to identify the following traits related to BAPT-F3D:

  • Advanced: Leverages massive scale of sophisticated attack contracts to exploit a vulnerability in the “airdrop” feature; anti-forensics capability that self-destructs the blockchain artifacts. Coordinated crime.

  • Persistent: Well planned and operating continuously for weeks; constantly upgrading attack contracts from V1 to V3; moving from target to target.

  • Threat: Financially motivated threat targeting specific smart contract DApps with similar vulnerabilities, stealing $4 million worth of ETH and counting.

Impacts and Conclusions

Using knowledge graphs, AnChain was able to document a new type of threat facing DApp owners, exchanges, and the growing blockchain ecosystem. For Graphistry, the analysis proved to be very similar to our work in anti-fraud and money-laundering investigations, although with a very new and interesting twist. Most importantly, it shows the power of knowledge graphs and GPU-powered graph investigations to quickly expose the important connections and relationships across millions of pieces of data.

We think of this as the user interface for a world increasingly dependent on data, machine-learning, and AI. Analysts have similar needs whether investigating malware or phishing incidents, tracking the flow of illicit funds, fraud within a healthcare system, or hundreds of other data driven projects. Humans need to be able to see and understand what is in their data. They need AI and ML models to not be impenetrable black boxes. By bringing an interactive and investigative front end to these technologies, we hope to make them more accessible, usable, and ultimately deliver far more impactful analysis and applications.

Read More

Building for the Human Half of Security Orchestration & AI

Posted by Leo Meyerovich on June 29, 2018

Learning to Whitebox the SOC-in-a-Box

Even as organizations automate their security operations with orchestration and AI, some of the most important parts of security investigations continue to depend on human analysis and talent. These critical moments in the investigation remain frustratingly slow, and need categorically different technologies that are optimized for human-in-the-loop analysis.

A balanced security strategy requires us to augment and extend human skills and abilities for the many daily tasks that we cannot trust to bots. This is one of the key goals at Graphistry, and we described the fuzzy-data aspect of the problem in our previous article, “Security in the Age of Maybe”. Orchestration and AI are important parts of modern security strategies, but we have to remember that analysts need to deal with them. This article digs into our experiences with the challenges and opportunities that arise when orchestration and AI meet the critical human-in-the-loop phases of an investigation.

Hurry Up and Wait

Security investigation workloads have outpaced the ability of organizations to hire analysts, so it is no surprise that teams are replacing people with programs for low-level and low-risk tasks. The interesting part, as in most things, is where automation stops short.

Security-critical workflows still often end in or depend on human-in-the-loop (HITL) analysis, and for good reason. Distinguishing real threats from false positives, understanding the true scope of an infection or intrusion, or pulling the thread to expose a hidden attacker are just a few examples where human analysis remains essential. The outcome of these investigations determines the real security of an organization, so tickets and projects remain a daily reality.

Unfortunately, these investigations often remain slow and laborious, and are where efficiency and insight can go to die. As soon as tools make the handoff to the human analyst, the process regresses by 15 to 20 years: we go from an automated process to an analyst squinting at dashboards and writing command-line-style search queries. To make security operations run faster, we need to bring the same ethos of automation, orchestration, and intelligence to the messier, more complicated, iterative work of human-in-the-loop analysis. If we don’t, much of the anticipated benefit of investing in those tools could be lost in a case of “hurry up and wait”: the speed, visibility, and reliability we gained through automation could be lost at the moment it matters the most!

Augmenting Human Analysis

If we want to improve a human outcome, it makes sense to design for and extend natural human skills. That is why Graphistry has made unprecedented investments in building best-in-class visual technology. Unlike programs, people understand information visually. Humans deal with enormous amounts of data and complexity every day when it is shown visually, and this is why we convert virtually any data into visual graphs. Using graphs, we literally see the connections and relationships between our events, entities, and metadata. That could be seeing the progression of an attack along the kill chain, or it could be seeing the layers of obfuscation within a money laundering scheme. In either case, a picture instantly reveals what would be relatively impenetrable if analyzed in a table of data.

Analysts are also wrestling with new types of data that may not always be intuitive. Machine learning and AI have become central to all types of analysis. The problem for many analysts is that the algorithms driving these models are often a black box that the analyst simply has to take on faith. Graph visualization has the power to provide analysts with the human UI into machine learning insights. Instead of looking at a generic alert reporting anomalous behavior, an analyst can actually see clusters, outliers, and complex relationships in the data. Likewise, the graph provides a direct visual interface for easily driving these systems, such as steering machine learning towards different parts of the dataset, and triggering actions on identified regions.

Leveraging Scale Without Letting It Get in the Way

The team at Graphistry has created a variety of core GPU technologies that unlock the flexibility needed to visually interact with large amounts of data. That includes simply seeing and understanding 100X+ more of our data in context. But since the final answer we are looking for is often small, we also need to easily remove the noise and drill down or pivot to follow the intuitive flow of the investigation.

The goal is that we never want to limit the scope of an investigation because we can’t see all of the important data, but at the same time we need to make sure the data doesn’t get in the way of seeing what’s really important. This is frankly where most people see the difference between having a pretty picture and having a truly interactive investigation. Analysts need the ability to pivot across data sources on the fly, view events in the context of a timeline, or view data in the context of the network. Being able to do this without changing screens or writing new queries is critical for making sure analysts can investigate intuitively and creatively, and actually leverage the skills that make human analysts so valuable.

Automating the Human Workflow

In the previous topic, we focused on improving our analysts’ vision: enabling them to see more information, see deeper into relationships, and adapt on the fly. To close the loop, we need to focus on the speed of the workflow and how we accelerate those insights. Just because a workflow involves a human doesn’t mean we can’t speed it up by orders of magnitude. This is why Graphistry has pioneered the use of investigation templates and visual playbooks as a highly interactive investigation environment rather than rigid and hard-to-edit software.

First, a template allows an investigation to automatically begin with all the data that an analyst will need. With a trigger as simple as a single SIEM alert, Graphistry can automatically connect to and query any and all data sources to pull in the relevant context. This could be logs from other tools in the SIEM, NetFlow stored in a Spark cluster, and a variety of metadata from Bro logs in Elasticsearch. Without writing a single query, the analyst can right click on an incident, and all the necessary data is queried and prepared for analysis.

Crucially, that data is delivered through a highly interactive and visual workflow. Each step or pivot can have its own visualization settings tied to the needs of the analyst. Instead of being rigidly predefined, the analyst can tweak settings, such as widening the time range or drilling into a specific entity of interest, so the investigation remains fully interactive and explorable.

Organizations face a similar challenge when bringing orchestration into human-in-the-loop scenarios. Scripts should not be a blackbox that only other scripts can use. The visual graph and templates solve the human side of orchestration: analysts can simply click-and-fire!

This is just the beginning of what Graphistry does, but it hopefully serves to illustrate the path forward for security organizations. Analysts are some of the most critical assets in the enterprise, and it doesn’t make sense to simply automate around them. They need to be in the process. This is what we call turning the blackbox into a whitebox. To do so, we need to give analysts tools that augment their skills and close the loop with automated workflows around data lakes, AI, and orchestration. At Graphistry, that is our mission.

Read More

Security in the Age of Maybe

Posted by Leo Meyerovich on May 14, 2018

Security is in the midst of a transformation that is putting extreme pressure on security analysts and hunt teams. One shift causing teams a lot of pain in their daily work is that as threats have gotten more sophisticated, security products have gotten much less sure of themselves. Security products increasingly detect the “anomalous” and report threats on a sliding scale of confidence. Not only must staff deal with advanced threats, but they must spend an increasing amount of time navigating the grey areas and ambiguities of modern threat detections to determine and deliver the right actions.

Welcome to the Age of Maybe, where it is critical that we arm analysts for dealing with the indicators that are diverse, widespread…and uncertain.


It wasn’t so long ago that most of our security solutions were signature-based, network intrusions were relatively rare, and incident response was reserved for the few truly exceptional events.

But today, persistent attacks are the norm, not the exception. That means that IR has likewise become the norm, and many organizations proactively hunt for threats based on the statistically valid assumption that they are already compromised.

The problem is that while threats have gotten smarter and more common, security products have gotten less certain. Data science, machine learning, and AI have enabled security to see threats that would avoid traditional signatures, but the results are rarely cut and dry. Modern security products are increasingly powered by black-box algorithms that generate uncertain results. Is this anomalous behavior a threat or just an anomaly?

It falls to IR teams and hunters to turn this ambiguity into action. Security products report “likely” or “suspected” infections, give hints at a symptom of a greater incident, and report confidence in terms of percentages: these are too fuzzy to rely solely on automated actions. Despite all the progress in data-driven algorithms for finding hidden threats, almost no organization is willing to block and walk away without an analyst reviewing the incident and making the call. The net result is that every day, more and more of the enterprise security stack assumes its fuzzy alerts will go into the SIEM and someone will successfully pick them up and connect them to other activity: a human in the loop. As threats get ever more complex and security products follow suit, this is a problem that will keep getting worse long before it gets better.

As a result, most IR teams are chronically overwhelmed with incidents, and most organizations have realized they can’t hire enough staff to keep pace. Teams have naturally sought out ways to make IR more efficient, often by automating and orchestrating IR processes. This makes intuitive sense – if you are facing a manual bottleneck, then figure out how to automate it.

The challenge, however, is that IR and threat hunting aren’t just a robotic process of connecting logs and analytics to firewalls for enforcement. The critical step is still human understanding and smart decision-making. Whether it is the team writing and maintaining the automations or the responders dealing with what gets flagged, automation loops still involve an analyst loop.

It’s this human-in-the-loop part of the investigation where the magic happens, and it remains the most valuable in terms of stopping initial intrusions from turning into headline news, and the most time-consuming part of the IR process. It is also where innovation is needed the most. This is where Graphistry comes in. Instead of trying to turn analysts into bots, we arm analysts to get to better answers in a fraction of the time of a normal investigation. We add tooling to the human-in-the-loop flow to restore right balance between analyst and machine.

Getting a Grip on Fuzzy Data

The idea behind Graphistry is to provide analysts with a visual environment that brings together all of your security investments in a unified and streamlined investigation. Graphistry is on a mission to knock out the data bottlenecks in the human-in-the-loop analyst flow, one by one. Analysts can bring in as much or as little data as they need, see it all automatically correlated and mapped out, follow connections and pivot to new data sources on the fly, and drill down into event details when they need them. Using the power of graph visualizations, analysts get one-click visibility into event progression, correlations, and outliers in your data. Data is interactively visualized in analyst-friendly terms, such as in the context of a kill chain, timelines, network boundaries, and other perspectives that go beyond low-fidelity search and dashboard views.

Our platform automatically handles the backend querying so that analysts can see connections across all of their security products, logs, SIEMs, threat feeds, and data sources without the need for complex manual queries.

Once we have the right answers, Graphistry turns to automating the process. Investigations can be saved as repeatable best practices through the use of visual playbooks. These playbooks can act as a sort of interactive map to guide an analyst through a logical investigative flow. With each step, an analyst can bring in new data sources and correlate or pivot using customized views of the data. Or, instead of going step by step, analysts can run the entire investigation at once and render it all as a single interactive visual flow. For investigations this often means vastly accelerating the “a-ha” moment. Over time, more and more of a team’s investigations become fast, comprehensive, and reliable by covering them with Graphistry fastpaths.

This really only scratches the surface of what we do at Graphistry, and we haven’t yet talked about the technology that makes it all work. We’ll save that for another blog, but suffice it to say that when you raise the visualization bar by 100x and deliver it all through commodity browsers, there is some interesting stuff going on in the backend. But ultimately the point of all that technology is to make life easier for the analyst. The role of the analyst is growing in organizations for a reason. Let’s focus on making analysts better instead of turning them into bots.

Read More

Graphs as the User Interface for AI

Posted by Leo Meyerovich on March 6, 2018

O’Reilly’s Data Show recently had our CEO, Leo Meyerovich, on to talk about why and how enterprises and data teams are adopting graph technology. You can check it out here where we dive into how we are using graphs as an interface to AI tools & data.

Meanwhile, our team is on the move. Let us know if you’ll be near one of our upcoming talks and events – we love catching up with current & new users!

  • San Jose: NVIDIA GTC, March 26th-29th
  • San Francisco: Security analytics meetup with Databricks (Spark) and Trail of Bits (osquery), April 4th
  • Nashville: BSides Nashville, April 14th
  • San Francisco: RSA, Week of April 16th
  • Seattle: Microsoft’s annual Security Data Science Colloquium, June 2018
  • DC/NYC: In scheduling

Read More

Playbook Coverage as a Reliability KPI: A note on our NYC InfoSec talk

Posted by Leo Meyerovich on January 10, 2018

Ron Gula’s (ex-Tenable CEO) fireside chat at the NYC Infosec Meetup got serious when he questioned whether to optimize security team efficacy vs. efficiency. This dovetailed beautifully with our tech talk right before. When we explain visual playbooks, people quickly see how they cut MTTR, which in turn gets at both efficacy and efficiency. This has led us to think about what KPIs to focus on, so I ended up presenting a different take: focus on reliability… and an actionable KPI around that, playbook coverage.

Image: Leo sharing visual playbook best practices

A key property of a visual playbook is that, for investigations in the category the playbook was defined for, every investigation starts with a computer-assisted run-through of best practices. Think tasks like data gathering, correlation, and inspection. Analogous to code coverage for software, we’ve started thinking about playbook coverage for incidents: what percent of investigations were covered by visual playbooks, or by some complementary technique like orchestration? Playbook coverage measures how prepared IR is in practice. The KPI is also actionable: it provides a clear target for what to cover by the next report. In contrast, MTTR requires more thinking and interpretation.
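The KPI itself is trivially computable from ticket data. A minimal sketch, with hypothetical incident records where `playbook` is None when the analyst worked the incident ad hoc:

```python
# Hypothetical incident records from a ticketing system
incidents = [
    {"id": 1, "playbook": "phishing-triage"},
    {"id": 2, "playbook": None},
    {"id": 3, "playbook": "malware-scope"},
    {"id": 4, "playbook": "phishing-triage"},
]

# Playbook coverage: share of incidents that began under a playbook
covered = sum(1 for i in incidents if i["playbook"] is not None)
coverage = covered / len(incidents)
print(f"playbook coverage: {coverage:.0%}")  # → playbook coverage: 75%
```

The gap (here, incident 2) is the direct to-do list for the next reporting period.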

To see more on this, go to the final slides @ .

– Leo

Read More

On Amazon’s Growing Graph Capabilities with Neptune’s Launch & Sqrrl Acquisition

Posted by Leo Meyerovich on December 21, 2017

Amazon is investing heavily in graph technologies, which is worth paying attention to. Between launching Neptune and the likely acquisition of Sqrrl (on top of other security acquisitions!), they’ve been busy. For our users and those interested in the broader space, we thought it’d help to share our perspective. Graphistry’s mission is to power the next generation of investigation and visualization technologies, so we’ve been quite active on adjacent problems… including with Amazon.

Neptune Launches

At re:Invent, Amazon launched its first Graph-Database-as-a-Service, Neptune. This is an especially big deal because Neptune is also the first managed graph database from a top-3 cloud provider. Graph databases help power a variety of technologies, and the ones Graphistry cares about are investigative: think cybersecurity, anti-fraud, market analysis, netops, devops, etc. The Amazon Neptune team invited Graphistry to join them on stage at re:Invent, where we were delighted to share what we have been seeing and doing in this space:


Graphistry + Neptune teams demoing
graph-powered investigations at Amazon re:Invent

Over the coming year, we expect to see many teams start leveraging Neptune. For security, especially alongside existing traditional SIEM tools — think Splunk, ElasticSearch, Hadoop systems, etc. The fraud story is similar and just as compelling. We have already been seeing several top uses:

  • 360 maps around key events and entities, like incidents, accounts, and devices. Graphistry has been turning best practices here into visual software that is smart, fast, and comprehensive, so stay tuned for our coming posts introducing visual analytics playbooks!
  • Decrease daily alert whack-a-mole through incident grouping & prioritization. See the video segment on the emerging trend of Enterprise Correlation Services. Matt Swann, on The Microsoft Office 365 Security blog, wrote up a great example of their first steps here.
  • Power smarter automated response. Graph DBs can accelerate queries like 360 neighborhoods, triangle counting, and shortest-path that feed into automated decision systems. Initially, we expect to see headless use much more in fraud, where it is already a growing norm.
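
As a sketch of the kind of query involved, a 360 neighborhood is just a bounded breadth-first expansion from a seed entity. The toy adjacency list below is invented; in Neptune the same question would be a Gremlin traversal along the lines of `g.V('account-42').repeat(both().simplePath()).times(2).dedup()`:

```python
from collections import deque

# Toy adjacency list standing in for a fraud graph (hypothetical data)
graph = {
    "account-42": ["device-7", "card-9"],
    "device-7": ["account-42", "account-99"],
    "card-9": ["account-42"],
    "account-99": ["device-7"],
}

def neighborhood(start, hops):
    """Breadth-first expansion out to `hops` edges from `start`."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen

print(sorted(neighborhood("account-42", 2)))
```

A graph database accelerates exactly this expansion when the adjacency data no longer fits in one process.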

As teams roll out graph data infrastructure, we’ll be excited to help with the problem of getting graph capabilities into the hands of more of their analysts.

Farewell to Sqrrl; Long Live Sqrrl!

We’ve watched Sqrrl, a suite of tools for analysts performing advanced threat hunting — including security analytics, a Hadoop cluster, and a graph-based active hunting UI — grow up from its roots as an NSA spinout. We’re already missing how David Bianco’s think pieces would easily trigger internal Slack discussions on what our easy visual playbook reinterpretation would look like, or whether we could enable seeing more through our GPU visualizations. Sqrrl’s founders and employees merit a true tip of the hat for beating the drum on active hunt methodology!

For teams now needing to address a holiday surprise around the resulting platform risk in their visual tooling, Graphistry may be a shortcut: it can plug directly into wherever your data and compute already live, whether in the cloud or on-premise, and whether it is Hadoop, Splunk, ELK, or anything else with an API. We would be happy to help get you up and running quickly. Our tech solves investigation visibility and workflow problems all the way down to your Tier 1, not just hunt, so at least there will be a silver lining.

To all the graphistas at Amazon, old and new, congrats from the Graphistry team, good luck with your future endeavors, and we look forward to the next time we’re in Seattle!

Read More
