diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index ce0968d4..473c6063 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -85,18 +85,18 @@ We use [Pytest](https://docs.pytest.org/en/7.1.x/) as our test runner. Invoke it
 Unit tests do not require a Databricks account.

 ```bash
-poetry run python -m pytest tests/unit
+poetry run python -m pytest databricks_sql_connector_core/tests/unit
 ```

 #### Only a specific test file

 ```bash
-poetry run python -m pytest tests/unit/tests.py
+poetry run python -m pytest databricks_sql_connector_core/tests/unit/tests.py
 ```

 #### Only a specific method

 ```bash
-poetry run python -m pytest tests/unit/tests.py::ClientTestSuite::test_closing_connection_closes_commands
+poetry run python -m pytest databricks_sql_connector_core/tests/unit/tests.py::ClientTestSuite::test_closing_connection_closes_commands
 ```
 #### e2e Tests
@@ -133,7 +133,7 @@ There are several e2e test suites available:
 To execute the core test suite:

 ```bash
-poetry run python -m pytest tests/e2e/driver_tests.py::PySQLCoreTestSuite
+poetry run python -m pytest databricks_sql_connector_core/tests/e2e/driver_tests.py::PySQLCoreTestSuite
 ```

 The `PySQLCoreTestSuite` namespace contains tests for all of the connector's basic features and behaviours. This is the default namespace where tests should be written unless they require specially configured clusters or take an especially long time to execute by design.
diff --git a/README.md b/README.md
index 54d4b178..db37bf51 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
 [![PyPI](https://img.shields.io/pypi/v/databricks-sql-connector?style=flat-square)](https://pypi.org/project/databricks-sql-connector/)
 [![Downloads](https://pepy.tech/badge/databricks-sql-connector)](https://pepy.tech/project/databricks-sql-connector)

-The Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses. It is a Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the [Python DB API 2.0 specification](https://www.python.org/dev/peps/pep-0249/) and exposes a [SQLAlchemy](https://www.sqlalchemy.org/) dialect for use with tools like `pandas` and `alembic` which use SQLAlchemy to execute DDL. Use `pip install databricks-sql-connector[sqlalchemy]` to install with SQLAlchemy's dependencies. `pip install databricks-sql-connector[alembic]` will install alembic's dependencies.
+The Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses. It is a Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the [Python DB API 2.0 specification](https://www.python.org/dev/peps/pep-0249/) and exposes a [SQLAlchemy](https://www.sqlalchemy.org/) dialect for use with tools like `pandas` and `alembic` which use SQLAlchemy to execute DDL. Use `pip install databricks-sql-connector[databricks-sqlalchemy]` to install with SQLAlchemy's dependencies. `pip install databricks-sql-connector[alembic]` will install alembic's dependencies.

 This connector uses Arrow as the data-exchange format, and supports APIs to directly fetch Arrow tables. Arrow tables are wrapped in the `ArrowQueue` class to provide a natural API to get several rows at a time.

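
Not part of the patch above, but as context for the README paragraph it touches: a minimal sketch of the DB API 2.0 usage the connector provides. The `server_hostname`, `http_path`, and `access_token` values are placeholders to be replaced with your own workspace details.

```python
from databricks import sql

# Placeholder connection parameters -- substitute values from your own workspace.
with sql.connect(
    server_hostname="<workspace-hostname>",
    http_path="<warehouse-http-path>",
    access_token="<personal-access-token>",
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1 AS value")
        print(cursor.fetchall())  # standard DB API 2.0 row access
        # The cursor also exposes fetchall_arrow(), which returns a PyArrow Table
        # rather than a list of Row objects, per the Arrow fetch APIs the README mentions.
```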