Handbook: add customer ops data workflows page #2952

Merged 6 commits on Apr 21, 2021
handbook/ops/bizops/analytics.md (26 changes: 14 additions & 12 deletions)

We collect data from the following:
- HubSpot: Marketing automation
- Salesforce: Customer Relationship Management system (CRM)
- MixMax: Email marketing automation (Apollo is not used in production, but still retains data)
- ZoomInfo: Data enrichment of account and contact information
- Sourcegraph.com Site-admin pages: customer subscriptions and license keys
- [Pings](https://docs.sourcegraph.com/admin/pings) from self-hosted Sourcegraph instances containing anonymous and aggregated information. There are [specific guidelines](https://docs.sourcegraph.com/dev/background-information/adding_ping_data) that must be followed for teams to add ping data.
- [Event logger: custom tool to track events](https://sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/client/web/src/tracking/eventLogger.ts). On Sourcegraph.com, this sends events directly to BigQuery. On customer instances, this sends events to the `EventLogs` database, which is then used to populate pings.
- [Prometheus dashboards](https://sourcegraph.com/-/debug/grafana/?orgId=1) show high-level insight into the health of a Sourcegraph instance to admins. Sourcegraph teammates can see the health of Sourcegraph.com.
- [Customer Environment Questions](customer_environment_questions.md)

We have [written policies about how we handle customer information](./customer_data_policy.md).

## Data tools

- [Looker](https://sourcegraph.looker.com/projects/sourcegraph_events/files/1_home.md): Business intelligence/data visualization tool
- Google Cloud Platform: BigQuery is our data warehouse and the database Looker runs on top of
- Google Sheets: There are a [number of spreadsheets](https://drive.google.com/drive/folders/1LIfVyhjhh_mpc0SNOFvpNfN2h4CmGQmI) that Looker queries (by way of BigQuery).
- BizOps builds ad-hoc tools to analyze data for various reasons. The projects are in the [Google Drive Analytics folder](https://drive.google.com/drive/folders/1mtrHKsB2Kv0IGQ829zbcRGDSYHQpzkfd) and the source code is available in the [analytics repo](https://github.com/sourcegraph/analytics).
- For further explanation of how we use these tools, see the [customer ops data workflows](customer_ops_data.md) page.

### Data pipelines

Every underlying data source (not chart!) is assumed to always be up-to-date unless noted otherwise.

#### Google BigQuery

Most "data pipelines" are SQL queries that turn raw ping data into clean datasets for analysis.
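As an illustration, a pipeline step of this kind typically aggregates raw event rows into a clean per-instance, per-day summary table. The sketch below shows the equivalent transformation in plain Python; the table and field names (`site_id`, `user_id`, `timestamp`) are hypothetical and do not reflect the actual ping schema:

```python
from collections import defaultdict
from datetime import date

def daily_active_users(raw_events):
    """Aggregate raw event rows into a clean per-instance, per-day dataset.

    Each raw event is assumed to be a dict like:
      {"site_id": "...", "user_id": "...", "timestamp": date(...)}
    Field names are illustrative -- the real ping schema differs.
    """
    users = defaultdict(set)  # (site_id, day) -> distinct users seen that day
    for e in raw_events:
        users[(e["site_id"], e["timestamp"])].add(e["user_id"])
    # Emit one clean row per (instance, day), sorted for stable output.
    return [
        {"site_id": site, "day": day, "unique_users": len(u)}
        for (site, day), u in sorted(users.items())
    ]
```

In production this logic lives in SQL executed by BigQuery rather than in Python, but the shape of the transformation (raw events in, deduplicated summary rows out) is the same.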

#### HubSpot

[The HubSpot data pipeline](https://github.com/sourcegraph/analytics/tree/master/HubSpot%20ETL) is updated once per day (in the afternoon PST). If you need the latest data at any time, post in the #analytics channel in Slack and the BizOps team can run the pipeline manually.

## Using Looker

[Looker](https://sourcegraph.looker.com/) is a self-service tool with many pre-built reports and visualizations. The [onboarding doc](https://sourcegraph.looker.com/projects/sourcegraph_events/files/1_home.md) is located in Looker. Reach out in the #analytics Slack channel if you have any questions, we're happy to help!

### Quick links

- [Sales](https://sourcegraph.looker.com/browse/boards/2)
- [Customer Engineering](https://sourcegraph.looker.com/browse/boards/8)
- [Product/engineering board](https://sourcegraph.looker.com/browse/boards/5)

### Things to know about using Looker

- By clicking `Explore from here` or changing a filter on a dashboard, you _will not_ change the underlying dashboard. Unless you explicitly click `Edit`, you are on your own temporary branch and will not change anything (even for yourself the next time you open the dashboard).
- When creating and editing dashboards, save individual tables and charts as [looks](https://docs.looker.com/exploring-data/saving-and-editing-looks) instead of saving tiles directly to the dashboard. Looks can be added to multiple dashboards while tiles cannot, and when a look is edited, the changes apply to every dashboard that look appears on.

### Downsides of Looker (and our plans to address them)

- **Discoverability of data**: Bookmarking or favoriting the sales/customer engineering board, product/engineering board, and server instances overview look (or some combination of them) in your Looker instance is the best solution right now. These are all kept up to date with the most relevant data for all teams.
- **Speed**: Looker's UI makes it easy to analyze data, but the result is a very complex SQL query that takes a while to run (especially on dashboards that are compiled from many separate queries). Fixing the performance issues is not currently a priority, but is something we'll get to as we grow the team.
- **Naming conventions**: We're slowly making the naming conventions of dashboards, graphs, data points, etc. more obvious. If you come across anything that isn't clear, let us know!
handbook/ops/bizops/customer_ops_data.md (38 changes: 38 additions & 0 deletions)
# CustomerOps Data Workflows

> **Review comment (Contributor):** Suggested change:
>
> ```diff
> -# CustomerOps Data Workflows
> +# Data Workflows
> ```
>
> I think we should remove the customer ops framing of this since we use these for lots of different reasons.
>
> (And I won't go through and suggest on all the customer ops references but I think the same thing for all of them 😄)
This document outlines the processes we have in place to pull data from our various third-party customer ops tools and get it into BigQuery and Looker for analysis.

We extract data from our customer ops tools using several methods, described in more detail below. The methods we currently use include reading data into Google Sheets using data connectors (add-ons), writing Python scripts that query the tools' APIs, and creating workflows in Zapier.

## Google Sheets Add-Ons

We use Google Sheets as a data store after pulling data from some of our customer ops tools. Once the sheets are created with the data we want, we connect them to BigQuery to create database tables.

The add-ons we use depend on the data source. Currently, we use:

- The [Google Analytics Spreadsheet Add-on](https://developers.google.com/analytics/solutions/google-analytics-spreadsheet-add-on) to pull Google Analytics data.
- [Data Connector for Salesforce](https://workspace.google.com/marketplace/app/data_connector_for_salesforce/857627895310) to pull Salesforce data. Create a report in Salesforce containing the data you want, and then use this add-on to pull data directly from the report.

Both of these add-ons allow us to define the data we pull from our GA and Salesforce instances, and automatically refresh the data periodically.

### Connecting the sheet to BigQuery

Once the data we want is populated into a Google sheet, we can use it to create a new table in BigQuery. In BigQuery, select the dataset you want to add the table to. For example, we have a dataset called `google_analytics` that contains our GA data.

Then, click _Create Table_. In the configuration section, in the dropdown next to _Create table from_, select _Drive_, and choose _CSV_ as the file format. Then, copy and paste the URL of the Google Sheet containing your data into the URL field. BigQuery may be able to autodetect the schema, but if not, you will have to add each column name manually. You'll also need to specify the number of header rows to skip so that your headers don't get included as data. Then, create the table.
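The same external-table definition can be expressed programmatically. The sketch below builds the JSON body that BigQuery's external data configuration expects (it does not call the API itself); the sheet URL and schema passed in are placeholders, not our real tables:

```python
def sheet_table_config(sheet_url, schema, header_rows=1):
    """Build a BigQuery external-table definition backed by a Google Sheet.

    Mirrors the console steps described above: source format GOOGLE_SHEETS,
    an explicit schema, and skipped header rows. The returned dict is the
    shape of BigQuery's externalDataConfiguration JSON; submitting it to the
    API is left out of this sketch.
    """
    return {
        "sourceFormat": "GOOGLE_SHEETS",
        "sourceUris": [sheet_url],          # the shared Google Sheet URL
        "schema": {"fields": schema},       # manual schema, if autodetect fails
        "googleSheetsOptions": {"skipLeadingRows": header_rows},
    }
```

A dataset or tooling wrapper (the console, `bq`, or a client library) would consume this definition; the point is that every field in it corresponds to one of the manual steps above.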

Once the table is created in BigQuery, you can create a view in Looker using the table as normal.

_Note_: In order for BigQuery to query your sheet, you'll need to share the sheet with a Google service account associated with the GCP project that the database is in.

## Write a script

We have Python scripts that fetch data via APIs and send that data into BigQuery. These scripts are run periodically via a CI pipeline.

### HubSpot

To pull data from HubSpot, we have Python scripts that pull in data using the HubSpot API and create corresponding tables in BigQuery. For example, we have a script that pulls in all of our [Contacts data](https://github.com/sourcegraph/analytics/blob/master/HubSpot%20ETL/get_contacts.py) and creates a corresponding table in BigQuery. These scripts are run via the [ETL script](https://github.com/sourcegraph/analytics/blob/master/HubSpot%20ETL/hubspot_etl.py), which runs every 24 hours via a [CI pipeline](https://buildkite.com/sourcegraph/analytics).
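A minimal sketch of the fetch-and-flatten step such a script performs is shown below. It uses HubSpot's public CRM v3 contacts endpoint; the property names in the example page are illustrative, and the real scripts in the analytics repo may differ in structure:

```python
import json
from urllib.request import Request, urlopen

HUBSPOT_CONTACTS_URL = "https://api.hubapi.com/crm/v3/objects/contacts"

def fetch_contacts_page(token, after=None):
    """Fetch one page of contacts from the HubSpot CRM v3 API (paginated)."""
    url = HUBSPOT_CONTACTS_URL + (f"?after={after}" if after else "")
    req = Request(url, headers={"Authorization": f"Bearer {token}"})
    with urlopen(req) as resp:
        return json.load(resp)

def flatten(page):
    """Turn a HubSpot API response page into flat rows for a BigQuery load job.

    The v3 API returns {"results": [{"id": ..., "properties": {...}}, ...]};
    we hoist each contact's properties up next to its id.
    """
    return [
        {"id": c["id"], **c.get("properties", {})}
        for c in page.get("results", [])
    ]
```

The real ETL additionally follows the `paging.next.after` cursor across pages and streams the flattened rows into BigQuery; those parts are omitted here.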

## Zapier

Sometimes it may be easier to create an if-this-then-that workflow (a "Zap") in Zapier than to write a script. Zaps can run code snippets and update data in various places in response to certain events. For example, we have a [zap that runs whenever a new HubSpot form submission comes in](https://zapier.com/app/editor/113508746) and updates a Google Sheet with the new submission. The Google Sheet is then used to create a table in BigQuery as above.
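Zapier's "Code by Zapier" steps hand your snippet an `input_data` dict and read back an `output` value. A sketch of the kind of snippet a Zap like the one above might run, reshaping a form submission into the row appended to the sheet (the field names are hypothetical, not the actual Zap's configuration):

```python
def to_sheet_row(input_data):
    """Reshape a HubSpot form submission into the row appended to the sheet.

    `input_data` is the dict Zapier passes into a Code by Zapier step;
    the keys used here are illustrative placeholders.
    """
    return {
        # Normalize the email so duplicate submissions match in the sheet.
        "email": input_data.get("email", "").strip().lower(),
        "form": input_data.get("form_name", ""),
        "submitted_at": input_data.get("submitted_at", ""),
    }

# Inside a Code by Zapier step, Zapier supplies `input_data` and reads `output`:
# output = to_sheet_row(input_data)
```

Keeping the snippet a pure function like this makes the Zap's behavior easy to reason about and test outside Zapier.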