
[FLINK-17118][python] Add Cython support for primitive data types #11718

Merged: 1 commit into apache:master on Apr 16, 2020

Conversation

HuangXingBo
Contributor

What is the purpose of the change

This pull request adds Cython support for primitive DataTypes.

Brief change log

  • Adds the files fast_coder_impl.pyx and fast_coder_impl.pxd, which contain the Cython implementation of the coders
  • Adds a Cython test environment in tox

Verifying this change

  • Adds corresponding unit tests in test_coders_common.py
  • Adds a Cython test environment in tox

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
  • The serializers: (no)
  • The runtime per-record code paths (performance sensitive): (no)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
  • The S3 file system connector: (no)

Documentation

  • Does this pull request introduce a new feature? (no)
  • If yes, how is the feature documented? (not applicable)

@flinkbot
Collaborator

flinkbot commented Apr 13, 2020

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit f08d03b (Wed Apr 15 11:40:15 UTC 2020)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!
  • This pull request references an unassigned Jira ticket. According to the code contribution guide, tickets need to be assigned before starting with the implementation work.

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot
Collaborator

flinkbot commented Apr 13, 2020

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

        self._value_coder = value_coder

    cpdef encode_to_stream(self, value, OutputStream out_stream, bint nested):
        self._value_coder.encode_to_stream(value, out_stream, False)
Contributor

False -> nested


cdef class FlattenRowCoderImpl(StreamCoderImpl):
    def __cinit__(self, field_coders):
        self._output_field_coders = field_coders
Contributor

What about removing the prefix _output?

        self._init_attribute()

    cpdef decode_from_stream(self, InputStream in_stream, bint nested):
        cdef WrapperInputElement wrapper_input_element
Contributor

Rename to InputStreamWrapper?


import datetime

cdef class WrapperFuncInputElement:
Contributor

Rename to InputStreamAndFunctionWrapper?


import datetime

cdef class WrapperFuncInputElement:
Contributor

Add some description about this class.

    cdef encode_row_result(self, WrapperFuncInputElement wrapper_func_input_element,
                           OutputStream out_stream):
        cdef list result
        self._before_encode(wrapper_func_input_element, out_stream)
Contributor

_before_encode -> _prepare_encode

        while self._input_buffer_size > self._input_pos:
            self._load_row()
            result = self.func(self.row)
            self._write_data(result)
Contributor

_write_data -> _encode_one_row

            return self._load_bytes().decode("utf-8")
        elif field_type == DATE:
            # Date
            return datetime.date.fromordinal(self._load_int() + 719163)
Contributor

Could you add some explanation of how 719163 is computed?
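
For reference, 719163 is the proleptic Gregorian ordinal of the Unix epoch (1970-01-01), so adding it converts a days-since-epoch value into the ordinal that datetime.date.fromordinal expects. A small illustrative Python sketch (decode_date is a hypothetical helper, not the PR's code):

import datetime

# 719163 is the ordinal of 1970-01-01 in the proleptic Gregorian calendar
# used by datetime.date (where 0001-01-01 has ordinal 1).
EPOCH_ORDINAL = datetime.date(1970, 1, 1).toordinal()
assert EPOCH_ORDINAL == 719163

def decode_date(days_since_epoch):
    # The serialized DATE value counts days relative to 1970-01-01,
    # so shifting by the epoch ordinal recovers the calendar date.
    return datetime.date.fromordinal(days_since_epoch + EPOCH_ORDINAL)

print(decode_date(0))      # 1970-01-01
print(decode_date(18365))  # 2020-04-13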

            self._output_field_type[i] = self._output_field_coders[i].type_name()
            self._output_coder_type[i] = self._output_field_coders[i].coder_type()

    cdef void _consume_input_data(self, WrapperInputElement wrapper_input_element, size_t size):
Contributor

Rename to _wrapInputStream?

            self._output_row_data = <char*> libc.stdlib.realloc(self._output_row_data,
                                                                 self._output_row_buffer_size)
        self._output_row_data[self._output_row_pos] = <unsigned char> (v >> 8)
        self._output_row_data[self._output_row_pos + 1] = <unsigned char> (v)
Contributor

Remove the parentheses

@HuangXingBo
Contributor Author

Thanks a lot for @dianfu's review. I have addressed the comments in the latest commit.

        out_stream.buffer_size = self._output_buffer_size

    cdef void _encode_byte(self, unsigned char val):
        if self._output_row_buffer_size < self._output_row_pos + 1:
Contributor

Could you refactor this a bit and make it reusable for all the _encode_xxx functions?
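
One possible shape for such a helper, sketched here in plain Python rather than Cython for brevity (the name grow_if_needed and the bytearray-based buffer are illustrative assumptions, not the PR's code):

def grow_if_needed(buf, pos, needed):
    """Grow buf (doubling its size) until `needed` more bytes fit at `pos`."""
    size = len(buf)
    if size >= pos + needed:
        return buf
    while size < pos + needed:
        size *= 2
    new_buf = bytearray(size)
    new_buf[:pos] = buf[:pos]  # preserve the bytes already written
    return new_buf

# Each _encode_xxx variant could then start with one call instead of
# repeating the size check and reallocation inline, e.g.:
#   buf = grow_if_needed(buf, pos, 1)  # _encode_byte
#   buf = grow_if_needed(buf, pos, 8)  # _encode_double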

        cdef libc.stdint.int32_t length = strlen(b)
        self._encode_int(length)
        if self._output_row_buffer_size < self._output_row_pos + length:
            self._output_row_buffer_size *= 2
Contributor

There is a possibility that the doubled buffer size (_output_row_buffer_size *= 2) isn't large enough.
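
A concrete illustration of the concern, with made-up numbers: doubling once does not guarantee the buffer fits a long value, whereas growing in a loop does.

buffer_size, pos, length = 1024, 1000, 5000

doubled_once = buffer_size * 2      # 2048 < pos + length == 6000, still too small
assert doubled_once < pos + length

while buffer_size < pos + length:   # grow until the write fits
    buffer_size *= 2
print(buffer_size)                  # 8192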

        self._encode_int(milliseconds)

    # write 0x00 as end message of udtf
    cdef void _encode_end_message(self):
Contributor

Move this method to TableFunctionRowCoderImpl?

        self._output_remaining_bits_num = self._output_field_count % 8
        self._output_row_buffer_size = 1024
        self._output_row_pos = 0
        self._output_row_data = <char*> libc.stdlib.malloc(self._output_row_buffer_size)
Contributor

Rename it to something like _tmp_output_buffer to make it more explicit that this is a temporary buffer?

        self._output_data[self._output_pos + 1] = 0x00
        self._output_pos += 2

    cdef void _copy_row_buffer_to_output_buffer(self):
Contributor

Rename to _copy_to_output_buffer?

        out_stream.flush()
        self._output_pos = 0

    cdef void _map_output_data_to_output_stream(self, OutputStream out_stream):
Contributor

Add some comments describing why we need to establish this map.

@@ -36,65 +36,230 @@ def check_coder(self, coder, *values):
        else:
            self.assertEqual(v, coder.decode(coder.encode(v)))

    def check_cython_coder(self, python_field_coders, cython_field_coders, data):
Contributor

Rename test_coders_common to test_coders.py and move the Cython coders to test_fast_coders.py?

    # decide whether two floats are approximately equal
    @staticmethod
    def float_equal(a, b, rel_tol=1e-09, abs_tol=0.0):
        return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

    def skip_python_test(self):
Contributor

Use unittest.skipIf
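
A minimal sketch of what that could look like; the import of fast_coder_impl is a placeholder for wherever the compiled Cython module actually lives, and the test body is illustrative only:

import unittest

# Skip the Cython coder tests when the compiled extension is unavailable
# (e.g. in a pure-Python tox environment).
try:
    import fast_coder_impl  # noqa: F401  (hypothetical import path)
    HAVE_CYTHON_CODERS = True
except ImportError:
    HAVE_CYTHON_CODERS = False


class FastCoderTests(unittest.TestCase):

    @unittest.skipIf(not HAVE_CYTHON_CODERS,
                     "Cython coders are not compiled in this environment")
    def test_fast_coder_roundtrip(self):
        # Illustrative placeholder; the real test would round-trip values
        # through the Cython coders.
        self.assertTrue(HAVE_CYTHON_CODERS)


if __name__ == "__main__":
    unittest.main()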

@dianfu
Contributor

dianfu commented Apr 16, 2020

@HuangXingBo Thanks a lot for the update. The test time increases a lot; could you take a look?

@dianfu
Contributor

dianfu commented Apr 16, 2020

It seems that this is because the UDF-related tests are also run in the Cython cases.

@dianfu (Contributor) left a comment

LGTM.

@dianfu dianfu changed the title [FLINK-17118][python] Support Primitive DataTypes in Cython [FLINK-17118][python] Support primitive data types in cython Apr 16, 2020
@dianfu dianfu changed the title [FLINK-17118][python] Support primitive data types in cython [FLINK-17118][python] Add cython support for primitive data types Apr 16, 2020
@dianfu dianfu changed the title [FLINK-17118][python] Add cython support for primitive data types [FLINK-17118][python] Add Cython support for primitive data types Apr 16, 2020
@dianfu dianfu merged commit 1a5b35b into apache:master Apr 16, 2020
pnowojski added a commit that referenced this pull request Apr 17, 2020