
[server] Expose HTTP service to plugins #12464

Closed
11 of 15 tasks
kimjoar opened this issue Jun 22, 2017 · 6 comments
Assignees
Labels
enhancement New value added to drive a business result Feature:Legacy Removal Issues related to removing legacy Kibana Feature:New Platform Team:Core Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc

Comments


kimjoar commented Jun 22, 2017

API Availability

7.4

7.6-7.9

Questionable

  • Add response interceptor
  • Add pre-handler interceptor

Summary

The current http server in the new platform is purely exploratory at this stage (e.g. only get is implemented so far). In addition to how we handle get, post, put and delete, there are some things we need to think about when designing the http server.

First of all, this is not a "generic" http server — we wrap an existing http server to build something specific for Kibana. One of the reasons we wrap an existing http server is that we want to be able to upgrade it at any time, which means we need to control the apis. This also means we can expose a subset of functionality that fits Kibana. We can also make Kibana-specific adjustments that simplify (and clarify) the normal patterns we see in plugins.

Below I've described some of the things I think we need to think about (and explore) in the new http server.

Request-scoped services

In current Kibana we use callWithRequest often, which means we pass the request around to many places. To alleviate this we have started to "request-scope" certain services, e.g. getUiSettingsService, which depends on getSavedObjectsClient, which depends on the request to scope callWithRequest. This simplifies getting a savedObjectsClient, but it also introduces some problems:

  • Everything we put on request becomes available everywhere. This is precisely what we're trying to avoid with the new plugin system design, where we are very specific about what you receive when you depend on another plugin. I think it's essential that transitive dependencies don't provide any values to a plugin, only direct dependencies can expose values.
  • Putting things on the request makes it more difficult to type.
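To make the typing problem concrete, here is a hedged sketch of the two approaches (all names are hypothetical and illustrative, not actual Kibana apis):

```typescript
// Today: services are hung off the request object. Anything attached
// here is visible to every handler (including plugins that only depend
// on it transitively), and typing the request means widening it
// everywhere it travels.
interface DecoratedRequest {
  getUiSettingsService?: () => unknown;
  getSavedObjectsClient?: () => unknown;
}

// Alternative: scope the service once, at the boundary, and pass the
// narrow, typed value down instead of the whole request.
interface SavedObjectsClient {
  find(options: { type: string }): Promise<unknown[]>;
}

function scopedSavedObjectsClient(
  requestHeaders: Record<string, string>
): SavedObjectsClient {
  return {
    find: async ({ type }) => {
      // callWithRequest-style scoping would happen here, once, using
      // the request's credentials (e.g. requestHeaders.authorization).
      return [];
    },
  };
}
```

The handler then receives a `SavedObjectsClient`, not a request it can attach things to, so direct dependencies stay explicit and typeable.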

I think we need to explore ideas that don't involve adding functions to the request like this. It feels like a leaky abstraction.

I don't have a clear api in mind here, so I think we need to explore ideas. Currently in the new-platform branch there is an onRequest method that can be specified on a plugin's router. It's "pure exploration" code at the current time, so don't treat it as more than that. I'd be happy to see that go away for a better/different abstraction.

I think the best way to solve this problem is to find out how we want to handle #12442, as it seems like callWithRequest is the only reason we scope services today.

An example of how to avoid using request-scoped services is in a temporary branch, here made into a PR only for ease of reviewing. #14482

Validate fields

In the current PoC code we've implemented validation of url params, query params and the request body, which looks like this:

router.get(
  {
    path: '/:type',
    validate: {
      params: object({
        type: string()
      }),
      query: object({
        page: maybe(number({ min: 1 })),
        per_page: maybe(number({ min: 1 }))
      })
    }
  },
  async (req, res) => {
    const { params, query } = req;

    const savedObjects = await savedObjectsService.find({
      type: params.type,
      page: query.page,
      perPage: query.per_page
    });

    return res.ok(savedObjects);
  }
);

When adding validation, all fields must validate (i.e. the entire object must validate, not just the specified keys). There are a couple of nice benefits from doing this validation:

  • The expected format is clearly documented (and guaranteed to stay up-to-date).
  • We can (automatically) create TS types for the data, so params and query above will be typed.
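To illustrate the second point, here is a hedged sketch of a tiny validator library that derives TS types from the schema; the real library's api may differ, and all names here are illustrative:

```typescript
// Minimal validators, only to show how type inference could work.
interface Validator<T> {
  validate(input: unknown): T;
}

// Maps a shape of validators to the plain object type they produce.
type Shape<S> = { [K in keyof S]: S[K] extends Validator<infer T> ? T : never };

const string = (): Validator<string> => ({
  validate(input) {
    if (typeof input !== "string") throw new Error("expected string");
    return input;
  },
});

const number = (opts: { min?: number } = {}): Validator<number> => ({
  validate(input) {
    if (typeof input !== "number") throw new Error("expected number");
    if (opts.min !== undefined && input < opts.min) {
      throw new Error(`expected a number >= ${opts.min}`);
    }
    return input;
  },
});

const object = <S extends Record<string, Validator<any>>>(
  shape: S
): Validator<Shape<S>> => ({
  validate(input) {
    if (typeof input !== "object" || input === null) {
      throw new Error("expected object");
    }
    const out: any = {};
    for (const key of Object.keys(shape)) {
      out[key] = shape[key].validate((input as any)[key]);
    }
    return out as Shape<S>;
  },
});

// `params` is inferred as { type: string } — no hand-written type needed.
const params = object({ type: string() }).validate({ type: "dashboard" });
```

So the route's `validate` config documents the expected format and produces the static types for `req.params` and `req.query` at the same time.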

In the current implementation we do not inject schema, but rely on it being injected further up. I think we should change this to inject schema directly into validate instead, so it could look like this:

router.get(
  {
    path: '/:type',
    validate: ({ object, string, maybe, number }) => ({
      params: object({
        type: string()
      }),
      query: object({
        page: maybe(number({ min: 1 })),
        per_page: maybe(number({ min: 1 }))
      })
    })
  },
  async (req, res) => {
    const { params, query } = req;

    const savedObjects = await savedObjectsService.find({
      type: params.type,
      page: query.page,
      perPage: query.per_page
    });

    return res.ok(savedObjects);
  }
);

Currently the new platform can perform validation on query params, url params and the body. Should we explore doing this for headers too? We can't validate all headers, of course, but maybe we can do it just for the specific headers you need?
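A header allow-list could be sketched like this (hypothetical helper, mirroring the params/query approach above):

```typescript
// Hypothetical: the route declares the headers it needs, and the
// handler only ever sees those; everything else is withheld.
function pickHeaders(
  headers: Record<string, string>,
  allowed: string[]
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const name of allowed) {
    const key = name.toLowerCase();
    if (key in headers) {
      out[key] = headers[key];
    }
  }
  return out;
}

const visible = pickHeaders(
  { "kbn-version": "7.0.0", cookie: "secret", accept: "application/json" },
  ["kbn-version", "accept"]
);
// `visible` contains only kbn-version and accept; cookie is withheld.
```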

Are there use-cases where this wouldn't work?

Not provide the full request

In current Kibana the handler for each request receives the full hapi request and response objects. I think we should explore not giving the handler the full request object. Instead of giving request handlers the full request, I think we should ONLY give it the validated fields (described above).

Why go this route? It ensures we're strict about input, treats failures on input the same way across Kibana endpoints, makes us depend less on these objects and all their features (e.g. mutating them and passing them down the stack), and makes it easier for us to perform major changes to the underlying request and response objects (e.g. major upgrades) without impacting the plugin apis.

In theory we could give access to the underlying request object through an "escape hatch", e.g.

router.get(
  {
    path: '/:type',
    validate: ({ object, string }) => ({
      params: object({
        type: string()
      })
    })
  },
  async (req, res) => {
    const rawRequest = req.unstable_getRawRequest();
    // ...
  }
);

Escape hatch

Let's explore this "escape hatch" idea. With the new platform we want to create a more "stable plugin contract" and ideally "not expose internals or dependencies to plugins". However, we sometimes have plugins that need very specific access to apis, which doesn't really fit "most plugins". Two examples are Security and Reporting.

Instead of building stable apis for all use-cases, I think we should consider having an "escape hatch" that gives you access to the underlying library/framework objects. A good example of this is the request example from above:

const rawRequest = req.unstable_getRawRequest();

Using unstable_ in the name indicates that this is not a part of the "stable Kibana api", and that it can change at any time. We could even do a major upgrade in a patch release if we wanted to. It's up to plugin authors to keep up-to-date if they rely on these apis.

But the nice thing is that it allows us a way of not solving for all use-cases immediately.

I don't think this is something that should be used often, but it definitely feels like there are use-cases for this kind of approach. You can basically say we handle 95% of use-cases with the stable apis, but for special cases (e.g. Security) you might sometimes need to get the raw object instead.

Middleware, request filter, response filter

For the new platform we need to decide how we can run things before and after a request. In the current platform we rely on both the hapi lifecycle for this, and being able to specify pre on a route.

These are all the things that rely on the hapi lifecycle in Kibana today:

We also rely on pre to get the IndexPatternsService and the SavedObjectsClient, in addition to validating proxy routes for Console and for the basepath proxy.

If we find a solution for request-scoped services as described above, there are so few other things that depend on this right now, that I think we can rely on an "escape hatch" for this stuff. That way we can postpone the api design of this until we have more relevant use-cases.

This escape hatch needs to be exposed, so Security can rely on it. (TODO: Explore ideas of what this could look like)

(I'm definitely open to exploring ideas here too, but it feels like it's not the highest priority task we have for the http server.)

Format on error responses

(TODO: describe)

Is this defined in Kibana today? If so, can/should we use the same format in the new platform?

It feels like we should explore just keeping Boom to signal errors in the app, but we might not want to make that dep available to plugins? Hm. Maybe we need to create our own KibanaError class with static helpers like notFound, forbidden, badRequest, etc.
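A KibanaError along those lines might look like the sketch below (hypothetical, not an actual api): plugins would throw these, and the http service would map them to a consistent error response format without exposing Boom directly.

```typescript
// Hypothetical KibanaError sketch; status codes follow standard HTTP.
class KibanaError extends Error {
  constructor(public readonly statusCode: number, message: string) {
    super(message);
    this.name = "KibanaError";
  }

  static notFound(message = "Not Found") {
    return new KibanaError(404, message);
  }

  static forbidden(message = "Forbidden") {
    return new KibanaError(403, message);
  }

  static badRequest(message = "Bad Request") {
    return new KibanaError(400, message);
  }
}
```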

How to fail requests

(TODO: describe. Code fails somewhere far down, what does the flow look like?)

Serve static files

(TODO: describe)

Plugins must be able to expose static files. This should likely just be some predefined folder, with no option to configure the file path. (TODO: What does the current plugin system do here?)

(TODO: How does this work with webpack et al when plugins are built on their own? Do they need to know which path they are being served from?)

@kimjoar kimjoar added the Team:Core Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc label Jun 22, 2017

rhoboat commented Oct 18, 2017

An example of how to avoid using request-scoped services is in a temporary branch, here made into a PR only for ease of reviewing. #14482 (also updated in main description)


rhoboat commented Dec 20, 2017

All the must-haves are checked off.

@epixa epixa changed the title Http server in the new platform HTTP service in the new platform May 6, 2018
@epixa epixa added the enhancement New value added to drive a business result label May 6, 2018
@epixa epixa assigned azasypkin and unassigned rhoboat May 7, 2018

epixa commented May 7, 2018

@azasypkin Archana sorted out the must-haves on this, but I've assigned this to you so you can draw a line somewhere on the initial HTTP service that you're shipping with the new platform in #12660. Rather than keep around some generic mega issue, if/when you're satisfied with the initial implementation, create individual issues for any followup stuff you think we should do.

@epixa epixa changed the title HTTP service in the new platform [server] HTTP service May 7, 2018

rhoboat commented Jul 31, 2018

We'd like a way to limit when systems can access es client and saved objects client. One way to do this is via the http service.

elasticsearch module could do this, just writing from memory...

start(core, plugins) {
  const { http } = core;
  http.registerCapability('elasticsearch', {
    getScopedDataClient: (request, ...args) => {
      // some logic that returns the right scoped data client
    },

    // maybe this can't happen:
    getAdminClient: (...args) => {
    },
    // more things??
  })
}

savedobjects module could do a similar thing

then some dependency later can do this

// registering an endpoint
router.get/post(
  {validation object},
  handler(req, res, coreCapabilities) {
    // whoa, check it! es service already scoped to the request
    const { elasticsearch } = coreCapabilities;
    const esClient = elasticsearch.getScopedDataClient();
    // maybe getAdminClient, since it's not scoped, is available outside the handler
  },
)

We can work on solving exactly how admin/data clients are exposed, but basically I'm thinking the http service would let es and saved objects register stuff that is only available in a handler function. They themselves would define how the binding to request happens, I think.

azasypkin commented:

Thanks for sharing your ideas @archanid! One thought though: registering the elasticsearch client through a generic registerCapability can make it difficult to "type" it, but let's see how it goes.


rhoboat commented Aug 10, 2018

@azasypkin Hmm.. Actually this is a good thing. Maybe we have to be explicit about what capabilities can be registered. We already know what they are going to be, among our core services. I'm open to possibilities here. If we want to honor types, we could have something more like registerElasticsearchCap (registerXXXCap, etc.) and avoid passing in a string as the first argument to act as the name, which is a bit fragile.

Hm. The more I think about it, I don't see a clear reason to keep it generic.
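The explicit, typed variant of that idea could be sketched like this (all names are illustrative, not real Kibana apis):

```typescript
// Hypothetical: one registration method per known core capability,
// instead of a string-keyed generic registerCapability, so the
// handler-visible shape stays typed end to end.
interface ScopedDataClient {
  callAsCurrentUser(endpoint: string, params?: object): Promise<unknown>;
}

interface ElasticsearchCapability {
  getScopedDataClient(): ScopedDataClient;
}

class HttpCapabilities {
  private elasticsearchFactory?: (request: unknown) => ElasticsearchCapability;

  // Explicit per-capability method: no fragile string name.
  registerElasticsearchCap(
    factory: (request: unknown) => ElasticsearchCapability
  ): void {
    this.elasticsearchFactory = factory;
  }

  // Called once per request, so anything handed to the handler is
  // already scoped to that request.
  forRequest(request: unknown): { elasticsearch: ElasticsearchCapability } {
    if (!this.elasticsearchFactory) {
      throw new Error("elasticsearch capability was never registered");
    }
    return { elasticsearch: this.elasticsearchFactory(request) };
  }
}
```

The elasticsearch module would call `registerElasticsearchCap` at startup, and the router would call `forRequest` when invoking a handler, which is how the "es service already scoped to the request" behavior in the earlier sketch could come about.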

@joshdover joshdover added the Feature:Legacy Removal Issues related to removing legacy Kibana label Apr 26, 2020
@mshustov mshustov closed this as completed May 5, 2021