The Data Flow Server deploys Streams by delegating to Spring Cloud Skipper and deploys Tasks directly. Both Skipper and the Data Flow server can deploy to the local machine, Cloud Foundry, and Kubernetes.
Note: The local version of the Data Flow server should only be used for Stream development, but it can be used in a production environment for Task deployment as a replacement for the Spring Cloud Batch Admin Server.
This section shows how to start the Data Flow Server and Shell and create the time | log stream.
- Start Kafka locally (for example, kafka-server-start.sh /usr/local/etc/kafka/server.properties).
- Build from the spring-cloud-dataflow root directory:
./mvnw clean install
- Start the Data Flow Server application:
java -jar spring-cloud-dataflow-server/target/spring-cloud-dataflow-server-<version>.jar
- Start the shell:
$ java -jar spring-cloud-dataflow-shell/target/spring-cloud-dataflow-shell-<version>.jar
- Create the 'ticktock' stream:
dataflow:>stream create --name ticktock --definition "time | log"
Created new stream 'ticktock'
This is equivalent to the following HTTP POST request:
$ curl -X POST -d "name=ticktock&definition=time | log" http://localhost:9393/streams/definitions?deploy=false
- List all streams available in the repository:
dataflow:>stream list
╔═══════════╤═════════════════╤══════════╗
║Stream Name│Stream Definition│  Status  ║
╠═══════════╪═════════════════╪══════════╣
║ticktock   │time | log       │undeployed║
╚═══════════╧═════════════════╧══════════╝
This is equivalent to the following HTTP GET request:
$ curl http://localhost:9393/streams/definitions
- Deploy the 'ticktock' stream:
dataflow:>stream deploy --name ticktock
Deployed stream 'ticktock'
This is equivalent to the following HTTP POST request:
$ curl -X POST http://localhost:9393/streams/deployments/ticktock
If successful, you should see output similar to the following in the Data Flow Server console:
...o.s.c.d.spi.local.LocalAppDeployer : deploying app ticktock.log instance 0
   Logs will be in /some/path/ticktock.log
If you tail the stdout_0.log file from the directory mentioned, you should see output similar to the following:
2016-04-26 15:10:18.320 INFO 59890 --- [pool-1-thread-1] log.sink : 04/26/16 15:10:18
2016-04-26 15:10:19.322 INFO 59890 --- [pool-1-thread-1] log.sink : 04/26/16 15:10:19
2016-04-26 15:10:20.322 INFO 59890 --- [pool-1-thread-1] log.sink : 04/26/16 15:10:20
To configure the Data Flow Server, you can follow the configuration setup guidelines specified in the Spring Boot documentation found here.
Note: The dataflow-server.yml file containing the defaults can be found here.
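Since the Data Flow Server is a Spring Boot application, any of those defaults can be overridden with standard Boot mechanisms. As a simple sketch (the alternative port below is purely illustrative), a property can be passed on the command line when starting the server:
java -jar spring-cloud-dataflow-server/target/spring-cloud-dataflow-server-<version>.jar --server.port=9494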
The Spring Cloud Data Flow Server offers the ability to configure properties via spring-cloud-config. All configuration retrieved from the config server takes precedence over Boot's defaults enumerated above. The Spring Cloud Data Flow Server looks for the config server at localhost:8888, but this can be overridden by setting the spring.cloud.config.uri property to the desired URL.
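For example, assuming your config server runs on a different host (the host name below is hypothetical), the property can be supplied when starting the server:
java -jar spring-cloud-dataflow-server/target/spring-cloud-dataflow-server-<version>.jar --spring.cloud.config.uri=http://my-config-server:8888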
To specify a repository in the cloud config server's configuration.yml for the Data Flow Server, set up a repo profile with the pattern spring-cloud-dataflow-server. For example:
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/myrepo/configurations
          repos:
            spring-cloud-dataflow-server:
              pattern: spring-cloud-dataflow-server
              uri: https://github.com/myrepo/configurations
              searchPaths: dataFlowServer
In some cases, it may be desirable to fail startup of a service if it cannot connect to the Config Server. If this is the desired behavior, set the bootstrap configuration property spring.cloud.config.failFast=true and the client will halt with an Exception.
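A minimal sketch of such a bootstrap configuration (placing these properties in bootstrap.yml follows the standard Spring Cloud convention and is an assumption here):
spring:
  cloud:
    config:
      uri: http://localhost:8888
      failFast: true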
If the Data Flow Server cannot connect to the cloud config server, the following warning message will be logged:
`WARN 42924 --- [main] c.c.c.ConfigServicePropertySourceLocator : Could not locate PropertySource: I/O error on GET request for "http://localhost:8888/spring-cloud-dataflow-server/default":Connection refused; nested exception is java.net.ConnectException: Connection refused`
To disable the cloud config server, set the spring.cloud.config.enabled property to false.
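For example, the config client can be switched off on the command line when starting the server:
java -jar spring-cloud-dataflow-server/target/spring-cloud-dataflow-server-<version>.jar --spring.cloud.config.enabled=false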
To run the Docker Compose integration tests, enable the -Pfailsafe Maven profile.
- You can run just the integration tests like this:
./mvnw clean test-compile failsafe:integration-test -pl spring-cloud-dataflow-server -Pfailsafe
With the help of the TestProperties, one can change the docker-compose files used, the exact versions of the SCDF and Skipper servers installed, and the versions of the Stream and Task apps:
./mvnw clean test-compile failsafe:integration-test -pl spring-cloud-dataflow-server -Pfailsafe \
  -Dtest.docker.compose.paths=docker-compose.yml,docker-compose-influxdb.yml,docker-compose-postgres.yml,docker-compose-rabbitmq.yml \
  -Dtest.docker.compose.stream.apps.uri=https://dataflow.spring.io/rabbitmq-maven-latest \
  -Dtest.docker.compose.dataflow.version=2.7.0-SNAPSHOT \
  -Dtest.docker.compose.skipper.version=2.6.0-SNAPSHOT
The test.docker.compose.paths property accepts a comma-separated list of docker-compose file names and supports the file:, classpath:, http:, and https: URI schemes. If no scheme prefix is explicitly set, the file is treated as a local one.
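For instance, local and remote compose files could be mixed by spelling out the schemes (the remote URL below is purely illustrative):
-Dtest.docker.compose.paths=file:./docker-compose.yml,https://example.com/docker-compose-monitoring.yml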
./mvnw clean test-compile failsafe:integration-test -pl spring-cloud-dataflow-server -Pfailsafe \
  -Dtest.docker.compose.paths=docker-compose.yml,docker-compose-dood.yml,docker-compose-prometheus.yml \
  -Dtest.docker.compose.stream.apps.uri=https://dataflow.spring.io/kafka-docker-latest \
  -Dtest.docker.compose.task.apps.uri=https://dataflow.spring.io/task-docker-latest \
  -Dtest.docker.compose.dataflow.version=2.7.0-SNAPSHOT \
  -Dtest.docker.compose.skipper.version=2.6.0-SNAPSHOT \
  -Dtest.docker.compose.apps.port.range=80 \
  -Dtest.docker.compose.pullOnStartup=false
./mvnw clean test-compile failsafe:integration-test -pl spring-cloud-dataflow-server -Pfailsafe \
  -Dtest.docker.compose.dood=true \
  -Dtest.docker.compose.pullOnStartup=false