Converts logging README format to be compatible with new docs system #91958

Merged
merged 6 commits on Feb 22, 2021
Changes from 4 commits
34 changes: 17 additions & 17 deletions dev_docs/kibana_platform_plugin_intro.mdx
@@ -7,25 +7,25 @@ date: 2021-01-06
tags: ['kibana','onboarding', 'dev', 'architecture']
---

From an end user perspective, Kibana is a tool for interacting with Elasticsearch, providing an easy way
to visualize and analyze data.

From a developer perspective, Kibana is a platform that provides a set of tools to build not only the UI you see in Kibana today, but
a wide variety of applications that can be used to explore, visualize, and act upon data in Elasticsearch. The platform provides developers the ability
to build applications, or inject extra functionality into
already existing applications. Did you know that almost everything you see in the
Kibana UI is built inside a plugin? If you removed all plugins from Kibana, you'd be left with an empty navigation menu, and a set of
developer tools. The Kibana platform is a blank canvas, just waiting for a developer to come along and create something!

![Kibana personas](assets/kibana_platform_plugin_end_user.png)

## Platform services

Plugins have access to three kinds of public services:

- Platform services provided by `core` (<DocLink id="kibPlatformIntro" section="core-services" text="Core services"/>)
- Platform services provided by plugins (<DocLink id="kibPlatformIntro" section="platform-plugins" text="Platform plugins"/>)
- Shared services provided by plugins that are relevant for only a few specific plugins (e.g. "presentation utils").

The first two items are what make up "Platform services".

@@ -37,9 +37,9 @@ clear, and we haven't done a great job of sticking to it. For example, notificat
Today it looks something like this.

![Core vs platform plugins vs plugins](assets/platform_plugins_core.png)

<DocAccordion buttonContent="A bit of history">
When the Kibana platform and plugin infrastructure was built, we thought of two types of code: core services, and other plugin services. We planned to keep the most stable and fundamental
code needed to build plugins inside core.

In reality, we ended up with many platform-like services living outside of core, with no (short term) intention of moving them. We highly encourage plugin developers to use
@@ -54,7 +54,7 @@ In reality, our plugin model ended up being used like micro-services. Plugins ar
they desire, without the need to build a plugin.

Another side effect of having many small plugins is that common code often ends up extracted into another plugin. Use case specific utilities are exported,
that are not meant to be used in a general manner. This makes our definition of "platform code" a bit trickier to define. We'd like to say "The platform is made up of
every publicly exposed service", but in today's world, that wouldn't be a very accurate picture.

We recognize the need to better clarify the relationship between core functionality, platform-like plugin functionality, and functionality exposed by other plugins.
@@ -69,19 +69,19 @@ We will continue to focus on adding clarity around these types of services and w
### Core services

Sometimes referred to just as Core, Core services provide the most basic and fundamental tools necessary for building a plugin, like creating saved objects,
routing, application registration, and notifications. The Core platform is not a plugin itself, although
routing, application registration, notifications, and <DocLink id="kibCoreLogging" text="logging"/>. The Core platform is not a plugin itself, although
@TinaHeiligers (Contributor, Author) commented on Feb 22, 2021:

AFAIK, this doc is only used in the new Docs system, so linking to the kibCoreLogging section should be ok but please correct me if I'm wrong.

The other changes are all white-space changes for some reason.

there are some plugins that provide platform functionality. We call these <DocLink id="kibPlatformIntro" section="platform-plugins" text="Platform plugins"/>.

### Platform plugins

Plugins that provide fundamental services and functionality to extend and customize Kibana, for example, the
<DocLink id="kibDataPlugin" text="data"/> plugin. There is no official way to tell if a plugin is a platform plugin or not.
Platform plugins are _usually_ plugins that are managed by the Platform Group, but we are starting to see some exceptions.

## Plugins

Plugins are code that is written to extend and customize Kibana. Plugins don't have to be part of the Kibana repo, though the Kibana
repo does contain many plugins! Plugins add customizations by
using <DocLink id="kibPlatformIntro" section="extension-points" text="extension points"/> provided by <DocLink id="kibPlatformIntro" section="platform-services" text="platform services"/>.
Sometimes people confuse the term "plugin" and "application". While often there is a 1:1 relationship between a plugin and an application, it is not always the case.
A plugin may register many applications, or none.
@@ -97,7 +97,7 @@ adding it to core's application <DocLink id="kibPlatformIntro" section="registry

### Public plugin API

A plugin's public API consists of everything exported from a plugin's <DocLink id="kibPlatformIntro" section="plugin-lifecycle" text="start or setup lifecycle methods"/>,
as well as from the top level `index.ts` files that exist in the three "scope" folders:

- common/index.ts
@@ -113,18 +113,18 @@ Core, and plugins, expose different features at different parts of their lifecyc
specifically-named functions on the service definition.

Kibana has three lifecycles: setup, start, and stop. Each plugin’s setup function is called sequentially while Kibana is setting up
on the server or when it is being loaded in the browser. The start functions are called sequentially after setup has been completed for all plugins.
The stop functions are called sequentially while Kibana is gracefully shutting down the server or when the browser tab or window is being closed.

The table below explains how each lifecycle relates to the state of Kibana.

| lifecycle | purpose | server | browser |
| ---------- | ------ | ------- | ----- |
| setup | perform "registration" work to set up the environment for runtime | configure REST API endpoint, register saved object types, etc. | configure application routes in SPA, register custom UI elements in extension points, etc. |
| start | bootstrap runtime logic | respond to an incoming request, request Elasticsearch server, etc. | start polling Kibana server, update DOM tree in response to user interactions, etc.|
| stop | cleanup runtime | dispose of active handles before the server shutdown. | store session data in the LocalStorage when the user navigates away from Kibana, etc. |

Different service interfaces can and will be passed to setup, start, and stop because certain functionality makes sense in the context of a running plugin while other types
of functionality may have restrictions or may only make sense in the context of a plugin that is stopping.

## Extension points
@@ -141,4 +141,4 @@ plugins to customize the Kibana experience. Examples of extension points are:

## Follow up material

Learn how to build your own plugin by following <DocLink id="kibDevTutorialBuildAPlugin" />
2 changes: 1 addition & 1 deletion src/core/README.md
@@ -29,7 +29,7 @@ rules tailored to our needs (e.g. `byteSize`, `duration` etc.). That means that
by the "legacy" Kibana may be rejected by the `core` now.

### Logging
`core` has its own [logging system](./server/logging/README.md) and will output log records directly (e.g. to file or terminal) when configured. When no
`core` has its own [logging system](./server/logging/README.mdx) and will output log records directly (e.g. to file or terminal) when configured. When no
TinaHeiligers marked this conversation as resolved.
specific configuration is provided, logs are forwarded to the "legacy" Kibana so that they look the same as the rest of the
log records throughout Kibana.

@@ -1,3 +1,13 @@
---
id: kibCoreLogging
slug: /kibana-dev-docs/services/logging
title: Logging system
image: https://source.unsplash.com/400x175/?Logging
A reviewer (Member) commented:
I assume the unsplash images are just a placeholder thing? I noticed these in a few other docs from the new system but don't see those images surfaced anywhere.

@TinaHeiligers (Contributor, Author) replied:
They're a placeholder and are optional at this point. I added it simply to follow what I also saw was being done in other docs.

summary: Core logging contains the system and service for Kibana logs.
date: 2020-12-02
tags: ['kibana','dev', 'contributor', 'api docs']
---

# Logging
- [Loggers, Appenders and Layouts](#loggers-appenders-and-layouts)
- [Logger hierarchy](#logger-hierarchy)
@@ -15,7 +25,7 @@
- [Log record format changes](#log-record-format-changes)

The way logging works in Kibana is inspired by the `log4j 2` logging framework used by [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html#logging).
The main idea is to have consistent logging behaviour (configuration, log format etc.) across the entire Elastic Stack
where possible.

## Loggers, Appenders and Layouts
@@ -33,25 +43,28 @@ __Layouts__ define how log messages are formatted and what type of information t

## Logger hierarchy

Every logger has a unique context name that follows a hierarchical naming rule. A logger is considered to be an
ancestor of another logger if its name followed by a `.` is a prefix of the descendant logger's name. For example, a logger
with the `a.b` context name is an ancestor of a logger with the `a.b.c` context name. All top-level loggers are descendants of a special
logger with the `root` context name that resides at the top of the logger hierarchy. This logger always exists and is
fully configured.

Developers can configure the _log level_ and _appenders_ that should be used within a particular context name. If the logger configuration
specifies only a _log level_, then the _appenders_ configuration is inherited from the ancestor logger.

__Note:__
In the current implementation, log messages are only forwarded to the appenders configured for a particular logger
context name, or to the appenders of the closest ancestor if the current logger doesn't have any appenders configured. That means that
we __don't support__ so-called _appender additivity_, where log messages are forwarded to _every_ distinct appender within the
ancestor chain, including `root`.
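
To make the inheritance rules concrete, here is a minimal sketch (the plugin context names are made up, and the pre-existing `console` appender is assumed) where a child context overrides only the level and therefore inherits its ancestor's appenders:

```yaml
logging:
  loggers:
    # `plugins` is configured with both an appender and a level.
    - name: plugins
      appenders: [console]
      level: warn
    # `plugins.myPlugin` overrides only the level, so its appenders are
    # inherited from the closest configured ancestor (`plugins` -> console).
    - name: plugins.myPlugin
      level: debug
```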

## Log level

Currently we support the following log levels: _all_, _fatal_, _error_, _warn_, _info_, _debug_, _trace_, _off_.

Levels are ordered, so _all_ > _fatal_ > _error_ > _warn_ > _info_ > _debug_ > _trace_ > _off_.

A log record is logged by the logger if its level is higher than or equal to the level of its logger. Otherwise,
the log record is ignored.

The _all_ and _off_ levels can be used only in configuration and are just handy shortcuts that allow a developer to log every
@@ -60,15 +73,15 @@ log record or disable logging entirely for the specific context name.
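
As a rough illustration (the plugin context names are invented, and `off` is quoted only as a YAML precaution so it isn't parsed as a boolean), levels might be tuned per context name like this:

```yaml
logging:
  root:
    level: info                # default for everything that falls back on `root`
  loggers:
    - name: plugins.chattyPlugin
      level: "off"             # disable logging entirely for this context name
    - name: plugins.myPlugin
      level: all               # log every record for this context name
```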
## Layouts

Every appender should know exactly how to format log messages before they are written to the console or file on the disk.
This behaviour is controlled by the layouts and configured through the `appender.layout` configuration property for every
custom appender (see examples in [Configuration](#configuration)). Currently we don't define any default layout for the
custom appenders, so one should always make the choice explicitly.

There are two types of layout supported at the moment: `pattern` and `json`.

### Pattern layout
With the `pattern` layout it's possible to define a string pattern with special placeholders `%conversion_pattern` (see the table below) that
will be replaced with data from the actual log message. By default, the following pattern is used:
`[%date][%level][%logger]%meta %message`. Also, the `highlight` option can be enabled for the `pattern` layout so that
some parts of the log message are highlighted with different colors that may be quite handy if log messages are forwarded
to the terminal with color support.
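
Below is a hedged sketch of a console appender that uses the `pattern` layout; the appender and logger names are arbitrary, and the exact `pattern` key name is an assumption that should be checked against the layout reference (`highlight` is the option mentioned above):

```yaml
logging:
  appenders:
    custom-console:
      type: console
      layout:
        type: pattern
        # A slimmed-down variant of the default `[%date][%level][%logger]%meta %message` pattern.
        pattern: "[%date][%level] %logger %message"
        highlight: true
  loggers:
    - name: plugins.myPlugin
      appenders: [custom-console]
```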
@@ -110,7 +123,7 @@ Example of `%meta` output:

##### date
Outputs the date of the logging event. The date conversion specifier may be followed by a set of braces containing the name of a predefined date format and a canonical timezone name.
The timezone name is expected to be one of the [TZ database names](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).
Timezone defaults to the host timezone when not explicitly specified.
Example of `%date` output:

@@ -129,7 +142,7 @@ Example of `%date` output:
Outputs the process ID.

### JSON layout
With the `json` layout, log messages will be formatted as JSON strings that include the timestamp, log level, context name, message
text and any other metadata that may be associated with the log message itself.
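
A comparable sketch with the `json` layout (appender and logger names are again illustrative) needs no pattern at all, since every record is serialized with its timestamp, level, context name, message, and metadata:

```yaml
logging:
  appenders:
    json-console:
      type: console
      layout:
        type: json
  loggers:
    - name: plugins.myPlugin
      appenders: [json-console]
```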

## Appenders
@@ -159,7 +172,7 @@ logging:
type: size-limit
size: 50mb
strategy:
//...
layout:
type: pattern
```
@@ -187,7 +200,7 @@ logging:
interval: 10s
modulate: true
strategy:
//...
layout:
type: pattern
```
@@ -201,10 +214,10 @@ How often a rollover should occur.
The default value is `24h`

- `modulate`

Whether the interval should be adjusted to cause the next rollover to occur on the interval boundary.

For example, when true, if the interval is `4h` and the current hour is 3 am, then the first rollover will occur at 4 am
and the next ones will occur at 8 am, noon, 4 pm, etc.

The default value is `true`.
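
Putting the policy options together, a hedged sketch of a complete time-interval configuration might look as follows; the `rolling-file` type, `fileName`, and `policy` key names are assumptions based on the appender reference, and the `strategy` options are left out here just like in the snippets above:

```yaml
logging:
  appenders:
    rolling-file:
      type: rolling-file            # assumed appender type name
      fileName: /var/log/kibana.log # assumed key and path, adjust for your setup
      policy:
        type: time-interval
        interval: 24h
        modulate: true
      strategy:
        # strategy options omitted, see the strategy section of the appender docs
      layout:
        type: pattern
  loggers:
    - name: plugins.myPlugin
      appenders: [rolling-file]
```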
@@ -331,8 +344,8 @@ Here is what we get with the config above:
| metrics.ops | console | debug |


The `root` logger has a dedicated configuration node since this context name is special and should always exist. By
default, `root` is configured with the `info` level and the `default` appender that is also always available. This is the
configuration that all custom loggers will use unless they're re-configured explicitly.

For example, to see _all_ log messages that fall back on the `root` logger configuration, just add one line to the configuration:
@@ -391,8 +404,8 @@ The message contains some high-level information, and the corresponding log meta

## Usage

Usage is very straightforward: get a logger for a specific context name and use it to log messages with
different log levels.

```typescript
const logger = kibana.logger.get('server');
@@ -435,7 +448,7 @@ All log messages handled by `root` context are forwarded to the legacy logging s
root appenders, make sure that it contains the `default` appender to provide backward compatibility.
**Note**: If you define an appender for a context name, the log messages aren't handled by the
`root` context anymore and are not forwarded to the legacy logging service.
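
For example, a minimal sketch of re-defined `root` appenders that keeps the `default` appender (and with it the legacy forwarding), assuming the pre-existing `console` appender, could look like this:

```yaml
logging:
  root:
    # Keep `default` in the list so records are still forwarded to the
    # legacy logging service; `console` is added alongside it.
    appenders: [default, console]
    level: info
```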

#### logging.dest
By default, Kibana logs to *stdout*. With the new Kibana logging config you can use the pre-existing `console` appender or
define a custom one.
@@ -445,7 +458,7 @@ logging:
- name: plugins.myPlugin
appenders: [console]
```
Logs to a *file* if a file path is given. You should define a custom appender with `type: file`
```yaml

logging:
@@ -458,13 +471,13 @@ logging:
loggers:
- name: plugins.myPlugin
appenders: [file]
```
#### logging.json
Defines the format of log output. Logs in JSON if `true`. With the new logging config you can adjust
the output format with [layouts](#layouts).
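
As a rough equivalent of the old `logging.json: true`, mirroring the layouts example above (the appender name is illustrative):

```yaml
logging:
  appenders:
    json-console:
      type: console
      layout:
        type: json
  root:
    appenders: [default, json-console]
```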

#### logging.quiet
Suppresses all logging output other than error messages. With the new logging config, this can be achieved
by adjusting the minimum required [logging level](#log-level).
```yaml
loggers: