status (string) | repo_name (string) | repo_url (string) | issue_id (int64) | updated_files (string) | title (string) | body (string, nullable) | issue_url (string) | pull_url (string) | before_fix_sha (string) | after_fix_sha (string) | report_datetime (timestamp[ns, tz=UTC]) | language (string) | commit_datetime (timestamp[us, tz=UTC]) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,940 | ["utils/db-generator/CMakeLists.txt", "utils/db-generator/query_db_generator.cpp"] | db_generator: substandard usability. | ```
milovidov@milovidov-desktop:~/work/ClickHouse$ build/utils/db-generator/query_db_generator
^C
milovidov@milovidov-desktop:~/work/ClickHouse$ build/utils/db-generator/query_db_generator --help
^C
milovidov@milovidov-desktop:~/work/ClickHouse$ build/utils/db-generator/query_db_generator < query.sql
milovidov@milovidov-desktop:~/work/ClickHouse$ build/utils/db-generator/query_db_generator query.sql
^C
```
| https://github.com/ClickHouse/ClickHouse/issues/15940 | https://github.com/ClickHouse/ClickHouse/pull/15973 | cb15e72229de85cdfe975199cfbef28c2ec362dd | 57575a5a12721682f07ff813460c6edb4b85b2c7 | 2020-10-13T19:46:39Z | c++ | 2020-10-14T17:24:02Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,854 | ["src/Storages/AlterCommands.cpp", "tests/queries/0_stateless/01522_validate_alter_default.reference", "tests/queries/0_stateless/01522_validate_alter_default.sql"] | ClickHouse does not validate DEFAULT value on MODIFY COLUMN for type compatibility | **Describe the bug**
Here's my table:
```sql
CREATE TABLE table2
(
EventDate Date,
Id Int32,
Value Int32
)
Engine = MergeTree()
PARTITION BY toYYYYMM(EventDate)
ORDER BY Id;
```
Modify default value (intentionally wrong):
```
>>> alter table table2 modify column Value default 'some_string';
ALTER TABLE table2
MODIFY COLUMN `Value` DEFAULT 'some_string'
Ok.
0 rows in set. Elapsed: 0.007 sec.
```
Now you obviously cannot insert anything into this table, because:
```
>>> insert into table2 (EventDate, Id) values (1234567890, 2)
INSERT INTO table2 (EventDate, Id) VALUES
Received exception from server (version 20.9.3):
Code: 6. DB::Exception: Received from localhost:9000. DB::Exception: Cannot parse string 'some_string' as Int32: syntax error at begin of string. Note: there are toInt32OrZero and toInt32OrNull functions, which returns zero/NULL instead of throwing exception..
1 rows in set. Elapsed: 0.002 sec.
```
**How to reproduce**
* Which ClickHouse server version to use: 20.9.3
* Which interface to use, if matters: I used standard CLI
**Expected behavior**
Currently ClickHouse doesn't check type compatibility when setting a DEFAULT value for a column. Maybe ClickHouse should validate DEFAULT values for type compatibility?
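For contrast, a default expression whose type matches the column (or is explicitly converted) behaves as expected — a small illustration of mine, not taken from the original report:
```sql
ALTER TABLE table2 MODIFY COLUMN Value DEFAULT toInt32(0);
-- inserting without Value now falls back to the (valid) default
INSERT INTO table2 (EventDate, Id) VALUES ('2020-10-12', 2);
SELECT Id, Value FROM table2 WHERE Id = 2; -- Value = 0
```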
| https://github.com/ClickHouse/ClickHouse/issues/15854 | https://github.com/ClickHouse/ClickHouse/pull/15858 | 6c1d59cd49dcc1b1174fefd5bbb645f8c31f7d95 | 835c4800563db5de885211c9c747bad9ee8c9817 | 2020-10-12T08:37:11Z | c++ | 2020-10-13T09:47:18Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,800 | ["src/Storages/MergeTree/MergeTreeReverseSelectProcessor.cpp", "tests/queries/0_stateless/01521_alter_enum_and_reverse_read.reference", "tests/queries/0_stateless/01521_alter_enum_and_reverse_read.sql"] | Alter enum -> Block structure mismatch in Pipe::unitePipes stream | 20.10.1
```
cat x.sql
drop table if exists enum_test;
create table enum_test(timestamp DateTime, host String, e Enum8('IU' = 1, 'WS' = 2))
Engine = MergeTree PARTITION BY toDate(timestamp) ORDER BY (timestamp, host);
insert into enum_test select '2020-10-09 00:00:00', 'h1', 'WS' from numbers(1000);
alter table enum_test modify column e Enum8('IU' = 1, 'WS' = 2, 'PS' = 3);
insert into enum_test select '2020-10-09 00:00:00', 'h1', 'PS' from numbers(10);
SELECT * FROM enum_test ORDER BY timestamp DESC LIMIT 1;
Received exception from server (version 20.10.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Block structure mismatch in Pipe::unitePipes stream: different types:
timestamp DateTime UInt32(size = 0), host String String(size = 0), e Enum8('IU' = 1, 'WS' = 2, 'PS' = 3) Int8(size = 0)
timestamp DateTime UInt32(size = 0), host String String(size = 0), e Enum8('IU' = 1, 'WS' = 2) Int8(size = 0).
optimize table enum_test final;
SELECT timestamp, host FROM enum_test ORDER BY timestamp DESC LIMIT 1;
┌───────────timestamp─┬─host─┐
│ 2020-10-09 00:00:00 │ h1 │
└─────────────────────┴──────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/15800 | https://github.com/ClickHouse/ClickHouse/pull/15852 | 4798234002f84368566811f475755b972c85f781 | 321a7ae6bfc6a5cd243e2b363a9bb1d8d0a09a37 | 2020-10-09T20:31:15Z | c++ | 2020-10-13T06:49:26Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,792 | ["src/Interpreters/ExpressionAnalyzer.cpp", "src/Interpreters/ExpressionAnalyzer.h", "src/Interpreters/InterpreterSelectQuery.cpp", "tests/queries/0_stateless/01521_global_in_prewhere_15792.reference", "tests/queries/0_stateless/01521_global_in_prewhere_15792.sql"] | Wrong result in case of GLOBAL IN and PREWHERE | 20.10.1.4853
```sql
drop table if exists xp;
drop table if exists xp_d;
create table xp(A Date, B Int64, S String) Engine=MergeTree partition by toYYYYMM(A) order by B;
insert into xp select '2020-01-01', number , '' from numbers(100000);
insert into xp select '2020-10-01', number , '' from numbers(100000);
create table xp_d as xp Engine=Distributed(test_shard_localhost, currentDatabase(), xp);
select count() from xp_d prewhere toYYYYMM(A) global in (select toYYYYMM(min(A)) from xp_d) where B > -1;
┌─count()─┐
│ 0 │ <------ expected 100000
└─────────┘
-- global in & both predicates in prewhere
select count() from xp_d prewhere toYYYYMM(A) global in (select toYYYYMM(min(A)) from xp_d) and B > -1;
┌─count()─┐
│ 100000 │
└─────────┘
-- global in & both predicates in where
select count() from xp_d where toYYYYMM(A) global in (select toYYYYMM(min(A)) from xp_d) and B > -1;
┌─count()─┐
│ 100000 │
└─────────┘
-- not global
select count() from xp_d prewhere toYYYYMM(A) in (select toYYYYMM(min(A)) from xp_d) where B > -1;
┌─count()─┐
│ 100000 │
└─────────┘
select count() from xp_d prewhere toYYYYMM(A) global in (select 202001) where B > -1;
┌─count()─┐
│ 0 │
└─────────┘
```
hmm, it worked in 19.13.7.57 | https://github.com/ClickHouse/ClickHouse/issues/15792 | https://github.com/ClickHouse/ClickHouse/pull/15933 | 22381685fdf1a4e6cfeada5ab9cdeb841e85fb7a | be7776608d7871b4617b55082d93a2f68f4d5b6f | 2020-10-09T16:08:38Z | c++ | 2020-10-19T18:21:57Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,784 | ["tests/queries/0_stateless/01768_extended_range.reference", "tests/queries/0_stateless/01768_extended_range.sql", "tests/queries/0_stateless/01769_extended_range_2.reference", "tests/queries/0_stateless/01769_extended_range_2.sql", "tests/queries/0_stateless/01770_extended_range_3.reference", "tests/queries/0_stateless/01770_extended_range_3.sql", "tests/queries/0_stateless/01771_datetime64_no_time_part.reference", "tests/queries/0_stateless/01771_datetime64_no_time_part.sql"] | Date and time functions fail when DateTime64 is out of normal range | To PR #9404
@Enmk
**The following functions return incorrect results when passed a date-time that's outside the normal range:**
`toYear()`, `toMonth()`, `toQuarter()`, `toTime()`, `toRelative[....]Num()`, `toStartOf[...]()`, `toMonday()`, `toWeek()`
***For example:***
`SELECT toYear(toDateTime64('1968-12-12 11:22:33', 0, 'UTC'))` returns `2105`
`SELECT toTime(toDateTime64('1800-11-30 18:00:00', 0, 'UTC'))` returns `1970-01-02 06:56:32`: the time must have remained the same
`SELECT toRelativeWeekNum(toDateTime64('1960-11-30 18:00:11.999', 3, 'UTC'))` returns 6627
**`toStartOfYear()`, `toStartOfMonth()`, `toStartOfQuarter()` return a zero timestamp even when in the normal range:**
`SELECT toStartOfQuarter(toDateTime64('1990-01-04 12:14:12', 0, 'UTC'))` returns `0000-00-00`
**`toUnixTimestamp()` crashes when out of normal range:**
```
SELECT toUnixTimestamp(toDateTime64('1900-12-12 11:22:33', 0, 'UTC'))
Received exception from server (version 20.5.1):
Code: 407. DB::Exception: Received from localhost:9000. DB::Exception: DateTime64 convert overflow.
0 rows in set. Elapsed: 0.011 sec.
``` | https://github.com/ClickHouse/ClickHouse/issues/15784 | https://github.com/ClickHouse/ClickHouse/pull/22002 | 7c0dba1b0ce30ff216f82fb1fa87b1d11b88b5d6 | 072d4bf199eae03be516b2de79e5775c8c6dd61a | 2020-10-09T13:34:07Z | c++ | 2021-03-23T17:04:44Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,780 | ["src/Functions/array/arrayIndex.h", "tests/queries/0_stateless/01441_low_cardinality_array_index.reference", "tests/queries/0_stateless/01441_low_cardinality_array_index.sql"] | 20.8 Types of array and 2nd argument of function "indexOf" must be identical up to ... | 20.7
```
SELECT indexOf(['a', 'b', 'c'], toLowCardinality('a'))
┌─indexOf(['a', 'b', 'c'], toLowCardinality('a'))─┐
│ 1 │
└─────────────────────────────────────────────────┘
```
20.8
```
SELECT indexOf(['a', 'b', 'c'], toLowCardinality('a'))
Received exception from server (version 20.8.3):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Types of array and 2nd argument of function "indexOf" must be identical up to nullability, cardinality, numeric types, or Enum and numeric type. Passed: Array(String) and LowCardinality(String)..
```
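Until this is fixed, one possible workaround (my suggestion, not verified in the report) is to cast the LowCardinality value back to a plain String before the lookup:
```sql
-- assumed workaround: make the 2nd argument a plain String again
SELECT indexOf(['a', 'b', 'c'], CAST(toLowCardinality('a') AS String));
```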
https://github.com/ClickHouse/ClickHouse/pull/12550
/cc @myrrc
| https://github.com/ClickHouse/ClickHouse/issues/15780 | https://github.com/ClickHouse/ClickHouse/pull/16038 | 0bb4480fee6fb17094b98bec1c694015dec7b1db | e89a3b5d098f63327a2fb521370d27294844ed6a | 2020-10-09T09:47:13Z | c++ | 2020-10-16T09:08:44Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,764 | ["programs/server/Server.cpp", "src/Server/CertificateReloader.cpp", "src/Server/CertificateReloader.h", "src/Server/ya.make"] | Dynamically reload TLS certificates | **Describe the issue**
We should be able to reload TLS certificates without restarting clickhouse. (will send WIP PR shortly)
relates to #14106 | https://github.com/ClickHouse/ClickHouse/issues/15764 | https://github.com/ClickHouse/ClickHouse/pull/15765 | 1c72421e53b9a1553f6c8eea070069a7212c8aac | d8510583d0f127037c2ccc80e6d42d3f29b5ed1b | 2020-10-08T16:20:50Z | c++ | 2021-11-05T14:07:29Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,732 | ["src/DataStreams/PushingToViewsBlockOutputStream.cpp", "src/Storages/StorageMaterializedView.cpp", "tests/queries/0_stateless/01527_materialized_view_stack_overflow.reference", "tests/queries/0_stateless/01527_materialized_view_stack_overflow.sql"] | Crash when using the same name for materialized view and TO table | ClickHouse crashes when you specify the same name for a materialized view and its TO table.
**How to reproduce**
* version 20.9.2.20 (official build)
CREATE TABLE table1( Col1 String )
ENGINE = MergeTree()
ORDER BY Col1;
CREATE MATERIALIZED VIEW view1 to view1
AS SELECT DISTINCT Col1 FROM table1;
select name, engine, total_rows, data_paths from system.tables;
[osboxes] 2020.10.07 15:57:21.185817 [ 64414 ] {ad0193e0-ae82-40f3-b773-3091737e3d16} <Debug> executeQuery: (from 192.168.1.178:64108) select name, engine, total_rows, data_paths from system.tables;
Error on processing query: select name, engine, total_rows, data_paths from system.tables;
Code: 32, e.displayText() = DB::Exception: Attempt to read after eof: while receiving packet from 192.168.1.144:9000, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18e02790 in /usr/bin/clickhouse
1. DB::throwReadAfterEOF() @ 0xe734291 in /usr/bin/clickhouse
2. DB::readVarUInt(unsigned long&, DB::ReadBuffer&) @ 0xe931034 in /usr/bin/clickhouse
3. DB::Connection::receivePacket() @ 0x16432957 in /usr/bin/clickhouse
4. DB::Client::receiveAndProcessPacket(bool) @ 0xe8319c8 in /usr/bin/clickhouse
5. DB::Client::receiveResult() @ 0xe833255 in /usr/bin/clickhouse
6. DB::Client::processParsedSingleQuery() @ 0xe833b1d in /usr/bin/clickhouse
7. DB::Client::processMultiQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xe83636c in /usr/bin/clickhouse
8. DB::Client::mainImpl() @ 0xe838b18 in /usr/bin/clickhouse
9. DB::Client::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xe83a7da in /usr/bin/clickhouse
10. Poco::Util::Application::run() @ 0x18d31827 in /usr/bin/clickhouse
11. mainEntryClickHouseClient(int, char**) @ 0xe8028c9 in /usr/bin/clickhouse
12. main @ 0xe71a891 in /usr/bin/clickhouse
13. __libc_start_main @ 0x21b97 in /lib/x86_64-linux-gnu/libc-2.27.so
14. _start @ 0xe71a02e in /usr/bin/clickhouse
(version 20.9.2.20 (official build))
**Expected behavior**
I think the better solution is to forbid using the same names for materialized view and TO table
| https://github.com/ClickHouse/ClickHouse/issues/15732 | https://github.com/ClickHouse/ClickHouse/pull/16048 | 11f56186db286c771fa7492f95a7229d3ac4cbd2 | 88de1b052c80eaa73dd40f4cd1bbaef46f5e3af9 | 2020-10-07T16:19:02Z | c++ | 2020-11-16T09:02:27Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,684 | ["src/Common/CurrentMetrics.cpp", "src/Core/BackgroundSchedulePool.cpp", "src/Core/BackgroundSchedulePool.h", "src/Interpreters/Context.cpp", "src/Storages/MergeTree/BackgroundProcessingPool.cpp", "src/Storages/MergeTree/BackgroundProcessingPool.h", "src/Storages/MergeTree/MergeList.cpp"] | Potential metrics overflow | When I queried `system.metrics` table, I found some negative values which hint to some potential internal overflows.
```
SELECT *
FROM system.metrics
FORMAT Vertical
...
Row 31:
───────
metric: MemoryTrackingInBackgroundProcessingPool
value: -580863171
description: Total amount of memory (bytes) allocated in background processing pool (that is dedicated for backround merges, mutations and fetches). Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks.
Row 32:
───────
metric: MemoryTrackingInBackgroundMoveProcessingPool
value: 0
description: Total amount of memory (bytes) allocated in background processing pool (that is dedicated for backround moves). Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks.
Row 33:
───────
metric: MemoryTrackingInBackgroundSchedulePool
value: -4461308831
description: Total amount of memory (bytes) allocated in background schedule pool (that is dedicated for bookkeeping tasks of Replicated tables).
...
56 rows in set. Elapsed: 0.003 sec.
```
Server version: 20.8.2.3. | https://github.com/ClickHouse/ClickHouse/issues/15684 | https://github.com/ClickHouse/ClickHouse/pull/15813 | 717c48cbf3c67fa1c956a9a07c6e56a0d970ee6b | 1187903b447d9ec3e5f331f28fdaaf9c013cf171 | 2020-10-06T15:40:05Z | c++ | 2020-10-10T22:11:17Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,641 | ["tests/queries/0_stateless/01635_nullable_fuzz.reference", "tests/queries/0_stateless/01635_nullable_fuzz.sql", "tests/queries/0_stateless/01636_nullable_fuzz2.reference", "tests/queries/0_stateless/01636_nullable_fuzz2.sql", "tests/queries/0_stateless/01637_nullable_fuzz3.reference", "tests/queries/0_stateless/01637_nullable_fuzz3.sql", "tests/queries/0_stateless/arcadia_skip_list.txt"] | Logical error: 'Bad cast from type DB::ColumnConst to DB::ColumnNullable'. Assertion `false' | Found by AST Fuzzer at https://clickhouse-test-reports.s3.yandex.net/0/ce3d18e8c52144bac814cae6284295625411ef74/fuzzer/report.html#fail1
Query seems to be
```sql
SELECT
'Nul\0able\0String)Nul\0\0ble(String)Nul\0able(String)Nul\0able(String)',
NULL AND 2,
'',
number,
NULL AS k
FROM
(
SELECT
materialize(NULL) OR materialize(-9223372036854775808),
number
FROM system.numbers
LIMIT 1000000
)
ORDER BY
k ASC,
number ASC,
k ASC
LIMIT 1023, 1023
SETTINGS max_bytes_before_external_sort = 1000000
```
* v20.4.9.110 is ok
* Since at least v20.5.3.27 | https://github.com/ClickHouse/ClickHouse/issues/15641 | https://github.com/ClickHouse/ClickHouse/pull/18753 | 7007f53152cf3bbfd6837451b3a8ddb49690daa8 | 2d1afa5dad0d98a0c4f82a510e0164573e9d1244 | 2020-10-05T21:31:12Z | c++ | 2021-01-05T16:58:57Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,637 | ["src/Parsers/ExpressionElementParsers.cpp", "src/Parsers/ExpressionListParsers.cpp", "src/Parsers/ExpressionListParsers.h", "tests/queries/0_stateless/01523_interval_operator_support_string_literal.reference", "tests/queries/0_stateless/01523_interval_operator_support_string_literal.sql"] | INTERVAL operator should be applicable to string literal. | `INTERVAL '1 hour'` should be equivalent to `INTERVAL 1 HOUR`
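A sketch of the requested equivalence (the first statement is the existing syntax; the second is the proposed string-literal form, which does not parse at the time of this issue):
```sql
SELECT now() + INTERVAL 1 HOUR;   -- existing syntax
SELECT now() + INTERVAL '1 hour'; -- requested Postgres-style equivalent
```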
**Use case**
People are used to Postgres. | https://github.com/ClickHouse/ClickHouse/issues/15637 | https://github.com/ClickHouse/ClickHouse/pull/15978 | 4476117ac60b5e3432aff6c4190755fccbf4da13 | 28c9e66dc13039e4ab5178399f30f5cd53f9cd16 | 2020-10-05T16:44:40Z | c++ | 2020-10-23T11:44:05Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,610 | ["src/Processors/Pipe.h", "src/Processors/QueryPipeline.h", "src/Processors/QueryPlan/ReadFromStorageStep.cpp", "src/Processors/QueryPlan/ReadFromStorageStep.h"] | 20.8.3.18 DB::RWLockImpl::unlock segfault | _Originally posted by @aubweb9 in https://github.com/ClickHouse/ClickHouse/issues/11940#issuecomment-699988601_
Hello,
I have kind of the same issue in version 20.8.3.18
```
2020.09.27 21:07:16.306908 [ 23103 ] {} <Fatal> BaseDaemon: ########################################
2020.09.27 21:07:16.316298 [ 23103 ] {} <Fatal> BaseDaemon: (version 20.8.3.18 (official build), build id: E6AA632BFA7BC9A5) (from thread 10426) (query_id: 25afeec8-49ba-4c38-acfd-9740b30240eb) Received signal Segmentation fault (11)
2020.09.27 21:07:16.317599 [ 23103 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
2020.09.27 21:07:16.317693 [ 23103 ] {} <Fatal> BaseDaemon: Stack trace: 0x16f1dece 0x1791e787 0x1791f909 0x181e830e 0x181e859d 0x181ddc6f 0x1763dd33 0x1774b63a 0x1774b6a6 0x1751c465 0x170286bd 0x180dcc3d 0x17f20460 0x17f250ad 0x17f2597$
2020.09.27 21:07:16.318812 [ 23103 ] {} <Fatal> BaseDaemon: 3. std::__1::__hash_iterator<std::__1::__hash_node<std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, unsigned l$
2020.09.27 21:07:16.318939 [ 23103 ] {} <Fatal> BaseDaemon: 4. DB::RWLockImpl::unlock(std::__1::__list_iterator<DB::RWLockImpl::Group, void*>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)$
2020.09.27 21:07:16.318976 [ 23103 ] {} <Fatal> BaseDaemon: 5. std::__1::__shared_ptr_emplace<DB::RWLockImpl::LockHolderImpl, std::__1::allocator<DB::RWLockImpl::LockHolderImpl> >::__on_zero_shared() @ 0x1791f909 in /usr/bin/clickhouse
2020.09.27 21:07:16.320705 [ 23103 ] {} <Fatal> BaseDaemon: 6. DB::ReadFromStorageStep::~ReadFromStorageStep() @ 0x181e830e in /usr/bin/clickhouse
2020.09.27 21:07:16.320754 [ 23103 ] {} <Fatal> BaseDaemon: 7. DB::ReadFromStorageStep::~ReadFromStorageStep() @ 0x181e859d in /usr/bin/clickhouse
2020.09.27 21:07:16.320792 [ 23103 ] {} <Fatal> BaseDaemon: 8. DB::QueryPlan::~QueryPlan() @ 0x181ddc6f in /usr/bin/clickhouse
2020.09.27 21:07:16.320848 [ 23103 ] {} <Fatal> BaseDaemon: 9. ? @ 0x1763dd33 in /usr/bin/clickhouse
2020.09.27 21:07:16.320888 [ 23103 ] {} <Fatal> BaseDaemon: 10. ? @ 0x1774b63a in /usr/bin/clickhouse
2020.09.27 21:07:16.320912 [ 23103 ] {} <Fatal> BaseDaemon: 11. ? @ 0x1774b6a6 in /usr/bin/clickhouse
2020.09.27 21:07:16.324712 [ 23103 ] {} <Fatal> BaseDaemon: 12. DB::LazyBlockInputStream::readImpl() @ 0x1751c465 in /usr/bin/clickhouse
2020.09.27 21:07:16.325591 [ 23103 ] {} <Fatal> BaseDaemon: 13. DB::IBlockInputStream::read() @ 0x170286bd in /usr/bin/clickhouse
2020.09.27 21:07:16.325642 [ 23103 ] {} <Fatal> BaseDaemon: 14. DB::CreatingSetsTransform::work() @ 0x180dcc3d in /usr/bin/clickhouse
2020.09.27 21:07:16.325684 [ 23103 ] {} <Fatal> BaseDaemon: 15. ? @ 0x17f20460 in /usr/bin/clickhouse
2020.09.27 21:07:16.325710 [ 23103 ] {} <Fatal> BaseDaemon: 16. ? @ 0x17f250ad in /usr/bin/clickhouse
2020.09.27 21:07:16.325735 [ 23103 ] {} <Fatal> BaseDaemon: 17. ? @ 0x17f25976 in /usr/bin/clickhouse
2020.09.27 21:07:16.325828 [ 23103 ] {} <Fatal> BaseDaemon: 18. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xffce517 in /usr/bin/clickhouse
2020.09.27 21:07:16.325854 [ 23103 ] {} <Fatal> BaseDaemon: 19. ? @ 0xffccb53 in /usr/bin/clickhouse
2020.09.27 21:07:16.325946 [ 23103 ] {} <Fatal> BaseDaemon: 20. start_thread @ 0x7fa3 in /usr/lib/x86_64-linux-gnu/libpthread-2.28.so
2020.09.27 21:07:16.326031 [ 23103 ] {} <Fatal> BaseDaemon: 21. clone @ 0xf94cf in /usr/lib/x86_64-linux-gnu/libc-2.28.so
``` | https://github.com/ClickHouse/ClickHouse/issues/15610 | https://github.com/ClickHouse/ClickHouse/pull/15645 | 9594e463b4de8745c084de65e3c01e4623c98492 | 1f2e867ce669323500b030629c7e6d1a77d71b9f | 2020-10-05T08:34:43Z | c++ | 2020-10-06T16:17:38Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,591 | ["src/Storages/MergeTree/registerStorageMergeTree.cpp", "tests/queries/0_stateless/01532_primary_key_without_order_by_zookeeper.reference", "tests/queries/0_stateless/01532_primary_key_without_order_by_zookeeper.sql"] | Allow to specify PRIMARY KEY near the list of columns | `CREATE TABLE test (x Int32, y Int32, s String, PRIMARY KEY (x, y)) ENGINE = MergeTree;`
should be equivalent to
`CREATE TABLE test (x Int32, y Int32, s String) ENGINE = MergeTree PRIMARY KEY (x, y);` | https://github.com/ClickHouse/ClickHouse/issues/15591 | https://github.com/ClickHouse/ClickHouse/pull/16284 | d350d864e6be6355b3366e7fba37407476645e84 | 6037982343b898071064bcda70ec3674ace54afc | 2020-10-04T00:25:23Z | c++ | 2020-10-24T03:05:04Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,587 | ["src/Parsers/ParserDataType.cpp", "tests/queries/0_stateless/01307_bloom_filter_index_string_multi_granulas.sql", "tests/queries/0_stateless/01532_tuple_with_name_type.reference", "tests/queries/0_stateless/01532_tuple_with_name_type.sql"] | Named tuple inside array data type specification can't be parsed | ```
$ clickhouse client
ClickHouse client version 20.10.1.1-arcadia.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.10.1 revision 54441.
max42-dev.sas.yp-c.yandex.net :) create table T(f Array(Tuple(key String, value UInt8))) engine = Log()
Syntax error: failed at position 34 ('String'):
create table T(f Array(Tuple(key String, value UInt8))) engine = Log()
Expected one of: LIKE, GLOBAL NOT IN, AS, IS, OR, QuestionMark, BETWEEN, NOT LIKE, AND, Comma, alias, IN, ILIKE, Dot, NOT ILIKE, NOT, Arrow, NOT IN, token, GLOBAL IN
max42-dev.sas.yp-c.yandex.net :) create table T(f Array(Tuple(String, UInt8))) engine = Log()
CREATE TABLE T
(
`f` Array(Tuple(String, UInt8))
)
ENGINE = Log()
Ok.
0 rows in set. Elapsed: 0.008 sec.
max42-dev.sas.yp-c.yandex.net :) create table T2(f Tuple(key String, value UInt8)) engine = Log()
CREATE TABLE T2
(
`f` Tuple( key String, value UInt8)
)
ENGINE = Log()
Ok.
0 rows in set. Elapsed: 0.005 sec.
```
If I understand correctly, the DataType parser allows nested-like data type specifications (i.e. Typename(f1 T1, f2 T2, ...)) only at the top level. This makes it impossible to enclose a named tuple in any other structure, such as Array or another Tuple.
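One partial workaround at the time (my suggestion, not from the original report) is the `Nested` type, which behaves as parallel arrays of named columns; the table name below is hypothetical, and it only approximates an array of named tuples:
```sql
CREATE TABLE T3 (f Nested(key String, value UInt8)) ENGINE = Log();
-- queried as the parallel array columns f.key and f.value
```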
Such option is needed for proper interoperability with third-party applications which impose such data types. | https://github.com/ClickHouse/ClickHouse/issues/15587 | https://github.com/ClickHouse/ClickHouse/pull/16262 | 973c1d798394e8232874c178d7951d1819766123 | 64bd63ca4999297e041fe853c6e4b6a2fa5bd5f0 | 2020-10-03T15:54:46Z | c++ | 2020-11-04T00:08:55Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,541 | ["src/Common/filesystemHelpers.cpp", "src/Common/filesystemHelpers.h"] | Interrupted system call from DB::DiskLocal::getAvailableSpace() | `Could not calculate available disk space (statvfs), errno: 4, strerror: Interrupted system call`
> EINTR must be handled in a loop; see ReadBufferFromFileDescriptor.cpp for an example.
> The signal is most likely USR1 or USR2 from the profiler. | https://github.com/ClickHouse/ClickHouse/issues/15541 | https://github.com/ClickHouse/ClickHouse/pull/15557 | aaafdfe22c7f4de69f5f041d6782a9d62c347114 | 46fa5ff53e459248f37b2fc64ca269a51423bcf8 | 2020-10-02T17:22:07Z | c++ | 2020-10-03T12:57:56Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,540 | ["tests/queries/0_stateless/01538_fuzz_aggregate.reference", "tests/queries/0_stateless/01538_fuzz_aggregate.sql"] | Crash with nullable, subquery and array join | ```
SELECT
count(),
sum(ns)
FROM
(
SELECT intDiv(number, NULL) AS k
FROM system.numbers_mt
GROUP BY k
)
ARRAY JOIN ns
2020.10.02 19:24:50.750387 [ 47305 ] {} <Trace> BaseDaemon: Received signal 11
2020.10.02 19:24:50.751783 [ 50232 ] {} <Fatal> BaseDaemon: ########################################
2020.10.02 19:24:50.752166 [ 50232 ] {} <Fatal> BaseDaemon: (version 20.10.1.1, build id: 59AA3CC2CD1B0A6E) (from thread 47603) (query_id: 4084af45-f022-466c-980e-15c5dd8225e2) Received signal Segmentation fault (11)
2020.10.02 19:24:50.752565 [ 50232 ] {} <Fatal> BaseDaemon: Stack trace: 0x114c5bc5 0x114c2bc2 0x114c206b 0x112f79f6 0x112dde5a 0x112d9bbf 0x112d8147 0x11509102 0x11244beb 0x11243923 0x116a794c 0x116a6a1f 0x11e279b1 0x11e34d08 0x14c6d903 0x14c6e157 0x14dd58a7 0x14dd3fe0 0x14dd2858 0x7f968ad 0x7f4ea8b3c6db 0x7f4ea845988f
2020.10.02 19:24:50.773973 [ 50232 ] {} <Fatal> BaseDaemon: 5. /home/alesap/code/cpp/ClickHouse/src/Interpreters/ExpressionAnalyzer.cpp:428: DB::ExpressionAnalyzer::makeAggregateDescriptions(std::__1::shared_ptr<DB::ActionsDAG>&) @ 0x114c5bc5 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:50.791784 [ 50232 ] {} <Fatal> BaseDaemon: 6. /home/alesap/code/cpp/ClickHouse/src/Interpreters/ExpressionAnalyzer.cpp:0: DB::ExpressionAnalyzer::analyzeAggregation() @ 0x114c2bc2 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:50.806558 [ 50232 ] {} <Fatal> BaseDaemon: 7. /home/alesap/code/cpp/ClickHouse/src/Interpreters/ExpressionAnalyzer.cpp:137: DB::ExpressionAnalyzer::ExpressionAnalyzer(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::TreeRewriterResult const> const&, DB::Context const&, unsigned long, bool) @ 0x114c206b in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:50.826783 [ 50232 ] {} <Fatal> BaseDaemon: 8. /home/alesap/code/cpp/ClickHouse/src/Interpreters/ExpressionAnalyzer.h:249: DB::SelectQueryExpressionAnalyzer::SelectQueryExpressionAnalyzer(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::TreeRewriterResult const> const&, DB::Context const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::unordered_set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, bool, DB::SelectQueryOptions const&) @ 0x112f79f6 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:50.842787 [ 50232 ] {} <Fatal> BaseDaemon: 9. /home/alesap/code/cpp/ClickHouse/contrib/libcxx/include/memory:3028: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&)::$_2::operator()(bool) const @ 0x112dde5a in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:50.859398 [ 50232 ] {} <Fatal> BaseDaemon: 10. /home/alesap/code/cpp/ClickHouse/src/Interpreters/InterpreterSelectQuery.cpp:405: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&) @ 0x112d9bbf in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:50.875940 [ 50232 ] {} <Fatal> BaseDaemon: 11. /home/alesap/code/cpp/ClickHouse/src/Interpreters/InterpreterSelectQuery.cpp:145: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x112d8147 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:50.891624 [ 50232 ] {} <Fatal> BaseDaemon: 12. /home/alesap/code/cpp/ClickHouse/contrib/libcxx/include/memory:0: DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x11509102 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:50.906602 [ 50232 ] {} <Fatal> BaseDaemon: 13. /home/alesap/code/cpp/ClickHouse/contrib/libcxx/include/memory:0: std::__1::__unique_if<DB::InterpreterSelectWithUnionQuery>::__unique_single std::__1::make_unique<DB::InterpreterSelectWithUnionQuery, std::__1::shared_ptr<DB::IAST>&, DB::Context&, DB::SelectQueryOptions>(std::__1::shared_ptr<DB::IAST>&, DB::Context&, DB::SelectQueryOptions&&) @ 0x11244beb in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:50.921000 [ 50232 ] {} <Fatal> BaseDaemon: 14. /home/alesap/code/cpp/ClickHouse/src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, DB::Context&, DB::QueryProcessingStage::Enum) @ 0x11243923 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:50.937131 [ 50232 ] {} <Fatal> BaseDaemon: 15. /home/alesap/code/cpp/ClickHouse/src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, DB::ReadBuffer*) @ 0x116a794c in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:50.954001 [ 50232 ] {} <Fatal> BaseDaemon: 16. /home/alesap/code/cpp/ClickHouse/src/Interpreters/executeQuery.cpp:708: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0x116a6a1f in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:50.972645 [ 50232 ] {} <Fatal> BaseDaemon: 17. /home/alesap/code/cpp/ClickHouse/src/Server/TCPHandler.cpp:0: DB::TCPHandler::runImpl() @ 0x11e279b1 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:50.993298 [ 50232 ] {} <Fatal> BaseDaemon: 18. /home/alesap/code/cpp/ClickHouse/src/Server/TCPHandler.cpp:0: DB::TCPHandler::run() @ 0x11e34d08 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:51.014305 [ 50232 ] {} <Fatal> BaseDaemon: 19. /home/alesap/code/cpp/ClickHouse/contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x14c6d903 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:51.035142 [ 50232 ] {} <Fatal> BaseDaemon: 20. /home/alesap/code/cpp/ClickHouse/contrib/poco/Net/src/TCPServerDispatcher.cpp:0: Poco::Net::TCPServerDispatcher::run() @ 0x14c6e157 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:51.056594 [ 50232 ] {} <Fatal> BaseDaemon: 21. /home/alesap/code/cpp/ClickHouse/contrib/poco/Foundation/src/ThreadPool.cpp:213: Poco::PooledThread::run() @ 0x14dd58a7 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:51.077906 [ 50232 ] {} <Fatal> BaseDaemon: 22. /home/alesap/code/cpp/ClickHouse/contrib/poco/Foundation/src/Thread.cpp:56: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x14dd3fe0 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:51.098759 [ 50232 ] {} <Fatal> BaseDaemon: 23. /home/alesap/code/cpp/ClickHouse/contrib/poco/Foundation/include/Poco/SharedPtr.h:277: Poco::ThreadImpl::runnableEntry(void*) @ 0x14dd2858 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:51.121458 [ 50232 ] {} <Fatal> BaseDaemon: 24. __tsan_thread_start_func @ 0x7f968ad in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2020.10.02 19:24:51.121771 [ 50232 ] {} <Fatal> BaseDaemon: 25. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
2020.10.02 19:24:51.122638 [ 50232 ] {} <Fatal> BaseDaemon: 26. /build/glibc-OTsEL5/glibc-2.27/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:97: clone @ 0x12188f in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.27.so
```
| https://github.com/ClickHouse/ClickHouse/issues/15540 | https://github.com/ClickHouse/ClickHouse/pull/16454 | 512dddb2b8163abc118b41493d4daa0f2b8396f9 | 6667261b023296054c1811d5d27a5c416e17a299 | 2020-10-02T16:26:00Z | c++ | 2020-10-28T06:25:44Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,529 | ["src/Storages/StorageBuffer.cpp", "tests/queries/0_stateless/01514_empty_buffer_different_types.reference", "tests/queries/0_stateless/01514_empty_buffer_different_types.sql"] | After upgrade from 20.3 to 20.8 Engine=Buffer raise Code: 49. DB::Exception: Cannot add simple transform to empty Pipe | **Describe the issue**
After a successful upgrade of ClickHouse from 20.3 to 20.8,
a SELECT query which combines `FROM buffer_table` and `WHERE dictGet` with a `filter by date` range that doesn't contain data in the original MergeTree table fails with
Code: 49. DB::Exception: Cannot add simple transform to empty Pipe
**How to reproduce**
```
git clone [email protected]:8a8bb43c43b6257a013753d1ac51c978.git
docker-compose up -d
docker-compose exec clickhouse bash -c "clickhouse-client -mn --echo < /var/lib/clickhouse/user_files/success_query.sql"
docker-compose exec clickhouse bash -c "clickhouse-client -mn --echo < /var/lib/clickhouse/user_files/failed_query.sql"
diff -u failed_query.sql success_query.sql
less clickhouse-server.log
```
* Which ClickHouse server versions are incompatible
the failure appears in 20.8+
* `CREATE TABLE` statements for all tables involved
https://gist.github.com/Slach/8a8bb43c43b6257a013753d1ac51c978#file-init_schema-sql
* Queries to run that lead to unexpected result
https://gist.github.com/Slach/8a8bb43c43b6257a013753d1ac51c978#file-failed_query-sql
**Error message and/or stacktrace**
#### 20.8 stacktrace
```
2020.10.02 10:08:31.985110 [ 190 ] {1f7e7c9d-86ed-4e5f-a55f-20c045b6c044} <Error> executeQuery: Code: 49, e.displayText() = DB::Exception: Cannot add simple transform to empty Pipe. (version 20.8.3.18 (official build)) (from 127.0.0.1:52718) (in query: SELECT toDate(i.event_date) AS day, coalesce(sum(i.cost), 0) AS pcost FROM wister.rtb_and_mb_left_join_raw_data_buffer i WHERE dictGet('wister.dict_prod_partner_affiliate_links','partner_id',tuple(i.code_affilie))>0 AND i.event_date >= now() - INTERVAL 30 DAY AND i.event_date < now() - INTERVAL 7 DAY GROUP BY day; ), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1a80cd70 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xff9e75d in /usr/bin/clickhouse
2. ? @ 0x17efbaf8 in /usr/bin/clickhouse
3. DB::Pipe::addSimpleTransform(std::__1::function<std::__1::shared_ptr<DB::IProcessor> (DB::Block const&)> const&) @ 0x17ef55c8 in /usr/bin/clickhouse
4. DB::StorageBuffer::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo const&, DB::Context const&, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0x17ae992f in /usr/bin/clickhouse
5. DB::ReadFromStorageStep::ReadFromStorageStep(std::__1::shared_ptr<DB::RWLockImpl::LockHolderImpl>, std::__1::shared_ptr<DB::StorageInMemoryMetadata const>&, DB::SelectQueryOptions, std::__1::shared_ptr<DB::IStorage>, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::Context>, DB::QueryProcessingStage::Enum, unsigned long, unsigned long) @ 0x181e870a in /usr/bin/clickhouse
6. DB::InterpreterSelectQuery::executeFetchColumns(DB::QueryProcessingStage::Enum, DB::QueryPlan&, std::__1::shared_ptr<DB::PrewhereInfo> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x174b9321 in /usr/bin/clickhouse
7. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>) @ 0x174bd1b2 in /usr/bin/clickhouse
8. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x174be964 in /usr/bin/clickhouse
9. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x1763b478 in /usr/bin/clickhouse
10. DB::InterpreterSelectWithUnionQuery::execute() @ 0x1763b64a in /usr/bin/clickhouse
11. ? @ 0x177d9262 in /usr/bin/clickhouse
12. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0x177dabc2 in /usr/bin/clickhouse
13. DB::TCPHandler::runImpl() @ 0x17ea1a25 in /usr/bin/clickhouse
14. DB::TCPHandler::run() @ 0x17ea2790 in /usr/bin/clickhouse
15. Poco::Net::TCPServerConnection::start() @ 0x1a72abdb in /usr/bin/clickhouse
16. Poco::Net::TCPServerDispatcher::run() @ 0x1a72b06b in /usr/bin/clickhouse
17. Poco::PooledThread::run() @ 0x1a8a9b46 in /usr/bin/clickhouse
18. Poco::ThreadImpl::runnableEntry(void*) @ 0x1a8a4f40 in /usr/bin/clickhouse
19. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
20. clone @ 0x122103 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
```
#### 20.9 stacktrace
```
2020.10.02 10:15:35.950777 [ 190 ] {791d4417-a240-4a2a-b10f-a16e4af49804} <Error> executeQuery: Code: 49, e.displayText() = DB::Exception: Cannot add simple transform to empty Pipe. (version 20.9.2.20 (official build)) (from 127.0.0.1:52732) (in query: SELECT toDate(i.event_date) AS day, coalesce(sum(i.cost), 0) AS pcost FROM wister.rtb_and_mb_left_join_raw_data_buffer i WHERE dictGet('wister.dict_prod_partner_affiliate_links','partner_id',tuple(i.code_affilie))>0 AND i.event_date >= now() - INTERVAL 30 DAY AND i.event_date < now() - INTERVAL 7 DAY GROUP BY day; ), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18e02790 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xe72fdad in /usr/bin/clickhouse
2. ? @ 0x1651b840 in /usr/bin/clickhouse
3. DB::Pipe::addSimpleTransform(std::__1::function<std::__1::shared_ptr<DB::IProcessor> (DB::Block const&)> const&) @ 0x16514578 in /usr/bin/clickhouse
4. DB::StorageBuffer::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo const&, DB::Context const&, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0x1611967f in /usr/bin/clickhouse
5. DB::ReadFromStorageStep::ReadFromStorageStep(std::__1::shared_ptr<DB::RWLockImpl::LockHolderImpl>, std::__1::shared_ptr<DB::StorageInMemoryMetadata const>&, DB::SelectQueryOptions, std::__1::shared_ptr<DB::IStorage>, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::Context>, DB::QueryProcessingStage::Enum, unsigned long, unsigned long) @ 0x167ed13a in /usr/bin/clickhouse
6. DB::InterpreterSelectQuery::executeFetchColumns(DB::QueryProcessingStage::Enum, DB::QueryPlan&, std::__1::shared_ptr<DB::PrewhereInfo> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x15b1dff1 in /usr/bin/clickhouse
7. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>) @ 0x15b21ec3 in /usr/bin/clickhouse
8. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x15b239b4 in /usr/bin/clickhouse
9. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x15c9d6f8 in /usr/bin/clickhouse
10. DB::InterpreterSelectWithUnionQuery::execute() @ 0x15c9d8ca in /usr/bin/clickhouse
11. ? @ 0x15e1ac62 in /usr/bin/clickhouse
12. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0x15e1c5c2 in /usr/bin/clickhouse
13. DB::TCPHandler::runImpl() @ 0x164c3585 in /usr/bin/clickhouse
14. DB::TCPHandler::run() @ 0x164c42f0 in /usr/bin/clickhouse
15. Poco::Net::TCPServerConnection::start() @ 0x18d205fb in /usr/bin/clickhouse
16. Poco::Net::TCPServerDispatcher::run() @ 0x18d20a8b in /usr/bin/clickhouse
17. Poco::PooledThread::run() @ 0x18e9f566 in /usr/bin/clickhouse
18. Poco::ThreadImpl::runnableEntry(void*) @ 0x18e9a960 in /usr/bin/clickhouse
19. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
20. __clone @ 0x122103 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
```
**Additional context**
Add any other context about the problem here.
| https://github.com/ClickHouse/ClickHouse/issues/15529 | https://github.com/ClickHouse/ClickHouse/pull/15662 | 38c7132c0f580547a72e3cc1fa18a091abf46221 | 2a62a91af421efb412e82dced949fbaea3f64761 | 2020-10-02T10:16:43Z | c++ | 2020-10-12T11:12:11Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,449 | ["src/Storages/IndicesDescription.cpp", "src/Storages/ReplaceAliasByExpressionVisitor.cpp", "src/Storages/ReplaceAliasByExpressionVisitor.h", "tests/queries/0_stateless/02911_support_alias_column_in_indices.reference", "tests/queries/0_stateless/02911_support_alias_column_in_indices.sql"] | ALIAS column in secondary index usage. | **Use case**
We have some complex function that we would like to use in a secondary index, and we want to create an ALIAS column for that function in order to have a simple and memorable name for it, which we can use in the WHERE clause.
**Describe the solution you'd like**
Clickhouse server version 20.9.2.20
```
CREATE TABLE test_index
(
`key_string` String,
`key_uint32` ALIAS toUInt32(key_string),
INDEX idx key_uint32 TYPE set(0) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY tuple()
PRIMARY KEY tuple()
ORDER BY key_string
Received exception from server (version 20.9.2):
Code: 47. DB::Exception: Received from localhost:9000. DB::Exception: Missing columns: 'key_uint32' while processing query: 'key_uint32', required columns: 'key_uint32', source columns: 'key_string'.
CREATE TABLE test_index
(
`key_string` String,
`key_uint32` ALIAS toUInt32(key_string),
INDEX idx toUInt32(key_string) TYPE set(0) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY tuple()
PRIMARY KEY tuple()
ORDER BY key_string;
INSERT INTO test_index SELECT * FROM numbers(1000000);
SELECT *
FROM test_index
WHERE key_uint32 = 1
┌─key_string─┐
│ 1 │
└────────────┘
1 rows in set. Elapsed: 0.010 sec. Processed 1.00 million rows, 14.89 MB (99.93 million rows/s., 1.49 GB/s.)
SELECT *
FROM test_index
WHERE toUInt32(key_string) = 1
┌─key_string─┐
│ 1 │
└────────────┘
1 rows in set. Elapsed: 0.008 sec. Processed 8.19 thousand rows, 121.96 KB (1.02 million rows/s., 15.20 MB/s.)
```
**Describe alternatives you've considered**
Write those conditions out in full form, but that makes less sense when we have the ability to use ALIAS columns.
| https://github.com/ClickHouse/ClickHouse/issues/15449 | https://github.com/ClickHouse/ClickHouse/pull/57220 | d405560f1341523cb47e9ef83b1f7f91869c64bf | 5f5e8633c7878957a2de1fda3aacced9a6ba2a4e | 2020-09-29T14:51:02Z | c++ | 2023-11-28T20:31:34Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,365 | ["src/Core/MultiEnum.h", "src/Databases/DatabaseFactory.cpp", "src/Databases/MySQL/ConnectionMySQLSettings.cpp", "src/Databases/MySQL/ConnectionMySQLSettings.h", "src/Databases/MySQL/DatabaseConnectionMySQL.cpp", "src/Databases/MySQL/DatabaseConnectionMySQL.h", "src/Databases/ya.make", "tests/integration/test_mysql_database_engine/test.py"] | Cannot create MySQL database, because... There is no query. | ```
Caught exception while loading metadata: Code: 501, e.displayText() = DB::Exception: Cannot create MySQL database, because Code: 393, e.displayText()
= DB::Exception: There is no query, Stack trace (when copying this message, always include the lines below):
0. /home/milovidov/ClickHouse/build_gcc9/../contrib/poco/Foundation/src/Exception.cpp:27: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&,
int) @ 0x1074687c in /usr/bin/clickhouse
1. /home/milovidov/ClickHouse/build_gcc9/../src/Common/Exception.cpp:37: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x7867c29 in
/usr/bin/clickhouse
2. /home/milovidov/ClickHouse/build_gcc9/../contrib/libcxx/include/string:2134: DB::Context::getQueryContext() const (.cold) @ 0xd202416 in /usr/bin/clickhouse
3. /home/milovidov/ClickHouse/build_gcc9/../src/Interpreters/Context.h:457: DB::DatabaseConnectionMySQL::DatabaseConnectionMySQL(DB::Context const&, std::__1::basic_string<char, std::__1::char_traits<char>, std:
:__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::ASTStorage const*, std::__1::basic_string<char, std::__1::char_traits<char>, std:
:__1::allocator<char> > const&, mysqlxx::Pool&&) @ 0xd408d7c in /usr/bin/clickhouse
```
```
ATTACH DATABASE conv_main
ENGINE = MySQL('hostname:3306', 'db', 'metrika', 'password')
``` | https://github.com/ClickHouse/ClickHouse/issues/15365 | https://github.com/ClickHouse/ClickHouse/pull/15384 | a4b4895b26fc03825efc1ed1fb6ac4bca1c44a25 | c9eeb149fd11b314fa0a21ca302503df6623740d | 2020-09-28T00:32:10Z | c++ | 2020-09-30T08:27:15Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,358 | ["src/Dictionaries/SSDComplexKeyCacheDictionary.cpp", "tests/queries/0_stateless/01280_ssd_complex_key_dictionary.sql"] | SSDComplexKeyCacheDictionary: Assertion `key_columns.size() == dict_struct.key->size()' failed | https://clickhouse-test-reports.s3.yandex.net/15348/34addcf61f653e50e493ff8d94fab015db91fa59/fuzzer/report.html#fail1 | https://github.com/ClickHouse/ClickHouse/issues/15358 | https://github.com/ClickHouse/ClickHouse/pull/16550 | 49d3b65cc276aa743d6964bd420d1eba6bc6cfb0 | 94ae5aed739966798245c03c19d6972150498a11 | 2020-09-27T11:09:12Z | c++ | 2020-10-30T19:45:48Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,349 | ["src/Formats/FormatFactory.cpp", "src/Processors/Formats/Impl/JSONAsStringRowInputFormat.cpp", "src/Processors/Formats/Impl/RawBLOBRowInputFormat.cpp", "src/Processors/Formats/Impl/RawBLOBRowInputFormat.h", "src/Processors/Formats/Impl/RawBLOBRowOutputFormat.cpp", "src/Processors/Formats/Impl/RawBLOBRowOutputFormat.h", "src/Processors/ya.make", "tests/queries/0_stateless/01509_format_raw_blob.reference", "tests/queries/0_stateless/01509_format_raw_blob.sh"] | Input format RawBLOB | **Use case**
Import binary data as a single row single column of String or similar type.
It will just read all input as a single value.
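A minimal usage sketch, assuming the format ends up being called `RawBLOB` as proposed (the table and file names here are hypothetical):
```sql
CREATE TABLE blobs (data String) ENGINE = Memory;
-- shell: clickhouse-client --query "INSERT INTO blobs FORMAT RawBLOB" < some_file.bin
SELECT length(data) FROM blobs; -- a single row holding the entire input
```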
Loading large blobs, as well as loading data row by row, are typical antipatterns in ClickHouse.
But sometimes it is needed. | https://github.com/ClickHouse/ClickHouse/issues/15349 | https://github.com/ClickHouse/ClickHouse/pull/15364 | 3e99ca797b0e5a737a631e1dd95f65ea5f94d881 | 9f944424cf6d0c5d231ff25d6e877000e3154e26 | 2020-09-26T23:13:28Z | c++ | 2020-09-29T22:14:40Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,313 | ["src/Functions/FunctionDateOrDateTimeToSomething.h", "tests/queries/0_stateless/00921_datetime64_compatibility.reference", "tests/queries/0_stateless/01472_toStartOfInterval_disallow_empty_tz_field.sql"] | toStartOfDay(yesterday()) Function toStartOfDay supports a 2nd argument (optional) that must be non-empty | ```
SELECT toStartOfDay(yesterday())
Received exception from server (version 20.10.1):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Function toStartOfDay supports a 2nd argument (optional) that must be non-empty and be a valid time zone.
SELECT toStartOfDay(today())
Received exception from server (version 20.10.1):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Function toStartOfDay supports a 2nd argument (optional) that must be non-empty and be a valid time zone.
```
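A possible interim workaround (my assumption based on the error text, not verified in the report): pass a DateTime instead of a Date, which might avoid the mis-fired time zone check:
```sql
SELECT toStartOfDay(now());                   -- DateTime argument
SELECT toStartOfDay(toDateTime(yesterday())); -- convert the Date first
```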
because of https://github.com/ClickHouse/ClickHouse/pull/14509 | https://github.com/ClickHouse/ClickHouse/issues/15313 | https://github.com/ClickHouse/ClickHouse/pull/15319 | fd24654ca450eab6ec925676d1e309efda2585de | 6f3aab8f09576194161c3bd57a0baa06356f74a5 | 2020-09-25T15:56:52Z | c++ | 2020-10-03T14:02:57Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,310 | ["src/Interpreters/NullableUtils.cpp", "src/Interpreters/NullableUtils.h", "src/Interpreters/Set.cpp", "src/Interpreters/Set.h", "tests/queries/0_stateless/01558_transform_null_in.reference", "tests/queries/0_stateless/01558_transform_null_in.sql"] | Wrong results of query with setting transform_null_in=1. | **Describe the bug**
Setting `transform_null_in` works incorrectly with `IN` operator over tuples.
**How to reproduce**
1.
```
CREATE TABLE null_in_1 (u UInt32, n Nullable(UInt32)) ENGINE = Memory;
INSERT INTO null_in_1 VALUES (1, NULL), (2, 2), (3, NULL), (4, 4), (5, NULL);
```
```
SET transform_null_in = 1;
SELECT count() FROM null_in_1 WHERE (u, n) IN ((42, NULL));
┌─count()─┐
│ 3 │
└─────────┘
```
It treats all rows where `n` is `NULL` as matched, regardless of the value of `u`.
2.
```
CREATE TABLE null_in_1 (a Nullable(UInt32), b Nullable(UInt32)) ENGINE = Memory;
INSERT INTO null_in_1 VALUES (1, NULL) (0, NULL) (NULL, NULL) (NULL, 1) (NULL, 0) (0, 0) (1, 1);
```
```
SET transform_null_in = 1;
SELECT count() FROM null_in_1 WHERE (a, b) IN (0, NULL);
┌─count()─┐
│ 3 │
└─────────┘
SELECT count() FROM null_in_1 WHERE (a, b) IN (NULL, 0);
┌─count()─┐
│ 3 │
└─────────┘
SELECT count() FROM null_in_1 WHERE (a, b) IN (0, 0);
┌─count()─┐
│ 3 │
└─────────┘
```
It doesn't distinguish between the default value and a NULL value.
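Spelled out manually, the semantics the reporter appears to expect would be (illustrative):
```sql
SELECT count() FROM null_in_1 WHERE a = 0 AND b IS NULL; -- expected 1, not 3
SELECT count() FROM null_in_1 WHERE a IS NULL AND b = 0; -- expected 1, not 3
SELECT count() FROM null_in_1 WHERE a = 0 AND b = 0;     -- expected 1, not 3
```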
| https://github.com/ClickHouse/ClickHouse/issues/15310 | https://github.com/ClickHouse/ClickHouse/pull/16722 | 0da112744065125da732f3f22afdce627001c22d | dd83a358423bda14e2671fd0a819965d742052dd | 2020-09-25T14:55:22Z | c++ | 2020-11-06T09:39:57Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,268 | ["programs/client/Suggest.cpp", "programs/format/CMakeLists.txt", "programs/format/Format.cpp", "src/Common/BitHelpers.h", "src/Common/StringUtils/StringUtils.h", "src/Parsers/obfuscateQueries.cpp", "src/Parsers/obfuscateQueries.h", "src/Parsers/ya.make", "tests/queries/0_stateless/01508_query_obfuscator.reference", "tests/queries/0_stateless/01508_query_obfuscator.sh"] | Query obfuscator | **Use case**
A friend wants to show me a set of queries, but is afraid to reveal the names of columns (domain area) and the values of constants.
**Describe the solution you'd like**
A simple ad-hoc solution that will be mostly acceptable.
Use Lexer to tokenize the query and replace tokens. Use a list of English nouns or similar words for replacements. The user will specify a salt on the command line to make the replacement deterministic, pseudorandom and unpredictable.
Split identifiers into words according to their style (snake_case, CamelCase, ALL_CAPS). If an identifier is too complex, treat it as a single word. These words will be replaced with English nouns selected by the value of a SipHash of the word and the salt. Save all replacements in a map to track collisions. In case of a collision, add 1 to the salt and repeat to select another noun. Keep the style of replaced identifiers for convenience. Keep some words like `id`, `num`, `value` unreplaced.
For literals in a query: for numbers, dates and datetimes, apply a model similar to clickhouse-obfuscator. For string literals, keep punctuation characters as is to preserve the meaning of LIKE and regexps. Replace alphanumeric parts with gibberish of similar length (maybe keeping the split between alphabetic and numeric characters). Replace valid UTF-8 with other valid UTF-8 while keeping the number of leading zeros in bytes.
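A purely illustrative before/after pair (made up here, not actual tool output) of what such obfuscation could look like:
```sql
-- original (hypothetical input)
SELECT user_id, count() FROM visits WHERE referrer LIKE '%shop%' AND EventDate >= '2020-09-01' GROUP BY user_id;
-- possible obfuscated form: identifiers become unrelated English nouns (style and `_id` kept),
-- literals are perturbed but keep punctuation, keywords stay intact
SELECT cattle_id, count() FROM meadows WHERE pasture LIKE '%bmqt%' AND MarbleDate >= '2021-02-17' GROUP BY cattle_id;
```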
Keep the keywords in the query as is. (Lexer does not know about keywords, so we have to list them explicitly). | https://github.com/ClickHouse/ClickHouse/issues/15268 | https://github.com/ClickHouse/ClickHouse/pull/15321 | df6221f7bf96368059cea20a2a8a23cb9c7f7598 | bbbe51033dfd5b8c3d54e168475ca707ac7ec0b4 | 2020-09-25T03:31:35Z | c++ | 2020-09-26T16:43:36Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,235 | ["programs/copier/ClusterCopier.cpp", "tests/integration/test_cluster_copier/task_non_partitioned_table.xml", "tests/integration/test_cluster_copier/test.py"] | 20.6.6.7 clickhouse-copier segfault | ```
2020.09.24 13:54:06.581506 [ 17189 ] {} <Debug> StorageDistributed (.read_shard_0.destination_cluster.dwh._dim_customer_local): Auto-increment is 0
2020.09.24 13:54:06.581537 [ 17192 ] {} <Debug> StorageDistributed (.read_shard_3.destination_cluster.dwh._dim_customer_local): Auto-increment is 0
2020.09.24 13:54:06.581681 [ 17189 ] {} <Debug> ClusterCopier: Computing destination partition set, executing query: SELECT DISTINCT 'all' AS partition FROM _local.`.read_shard_0.destination_cluster.dwh._dim_customer_local` ORDER BY partition DESC
2020.09.24 13:54:06.581804 [ 17192 ] {} <Debug> ClusterCopier: Computing destination partition set, executing query: SELECT DISTINCT 'all' AS partition FROM _local.`.read_shard_3.destination_cluster.dwh._dim_customer_local` ORDER BY partition DESC
2020.09.24 13:54:06.581929 [ 17191 ] {} <Debug> StorageDistributed (.read_shard_4.destination_cluster.dwh._dim_customer_local): Auto-increment is 0
2020.09.24 13:54:06.582132 [ 17191 ] {} <Debug> ClusterCopier: Computing destination partition set, executing query: SELECT DISTINCT 'all' AS partition FROM _local.`.read_shard_4.destination_cluster.dwh._dim_customer_local` ORDER BY partition DESC
2020.09.24 13:54:06.582538 [ 17189 ] {} <Trace> InterpreterSelectQuery: Complete -> Complete
2020.09.24 13:54:06.582997 [ 17191 ] {} <Trace> InterpreterSelectQuery: Complete -> Complete
2020.09.24 13:54:06.583306 [ 17190 ] {} <Debug> StorageDistributed (.read_shard_1.destination_cluster.dwh._dim_customer_local): Auto-increment is 0
2020.09.24 13:54:06.583470 [ 17190 ] {} <Debug> ClusterCopier: Computing destination partition set, executing query: SELECT DISTINCT 'all' AS partition FROM _local.`.read_shard_1.destination_cluster.dwh._dim_customer_local` ORDER BY partition DESC
2020.09.24 13:54:06.583989 [ 17192 ] {} <Trace> InterpreterSelectQuery: Complete -> Complete
2020.09.24 13:54:06.584162 [ 17190 ] {} <Trace> InterpreterSelectQuery: Complete -> Complete
2020.09.24 13:54:06.585082 [ 17188 ] {} <Debug> StorageDistributed (.read_shard_2.destination_cluster.dwh._dim_customer_local): Auto-increment is 0
2020.09.24 13:54:06.585215 [ 17188 ] {} <Debug> ClusterCopier: Computing destination partition set, executing query: SELECT DISTINCT 'all' AS partition FROM _local.`.read_shard_2.destination_cluster.dwh._dim_customer_local` ORDER BY partition DESC
2020.09.24 13:54:06.585858 [ 17188 ] {} <Trace> InterpreterSelectQuery: Complete -> Complete
2020.09.24 13:54:06.587341 [ 17171 ] {} <Trace> BaseDaemon: Received signal 11
2020.09.24 13:54:06.587515 [ 17193 ] {} <Fatal> BaseDaemon: ########################################
2020.09.24 13:54:06.587579 [ 17193 ] {} <Fatal> BaseDaemon: (version 20.6.6.7, no build id) (from thread 17191) (no query) Received signal Segmentation fault (11)
2020.09.24 13:54:06.587693 [ 17193 ] {} <Fatal> BaseDaemon: Address: 0x7f5b55deff4f Access: read. Address not mapped to object.
2020.09.24 13:54:06.587710 [ 17193 ] {} <Fatal> BaseDaemon: Stack trace: 0xd2380e1 0x5d048ff 0xa2a53ef 0x5dfc737 0x5dfd2c7 0x5d0d06d 0x5d0d753 0x5d0c60d 0x5d0acdf 0x7f5b07ba5e65 0x7f5b074c288d
2020.09.24 13:54:06.587767 [ 17193 ] {} <Fatal> BaseDaemon: 3. memcpy @ 0xd2380e1 in /usr/bin/clickhouse
2020.09.24 13:54:06.587836 [ 17193 ] {} <Fatal> BaseDaemon: 4. void DB::writeAnyEscapedString<(char)39, false>(char const*, char const*, DB::WriteBuffer&) @ 0x5d048ff in /usr/bin/clickhouse
2020.09.24 13:54:06.587870 [ 17193 ] {} <Fatal> BaseDaemon: 5. DB::DataTypeString::serializeTextQuoted(DB::IColumn const&, unsigned long, DB::WriteBuffer&, DB::FormatSettings const&) const @ 0xa2a53ef in /usr/bin/clickhouse
2020.09.24 13:54:06.587890 [ 17193 ] {} <Fatal> BaseDaemon: 6. DB::ClusterCopier::getShardPartitions(DB::ConnectionTimeouts const&, DB::TaskShard&) @ 0x5dfc737 in /usr/bin/clickhouse
2020.09.24 13:54:06.587933 [ 17193 ] {} <Fatal> BaseDaemon: 7. DB::ClusterCopier::discoverShardPartitions(DB::ConnectionTimeouts const&, std::__1::shared_ptr<DB::TaskShard> const&) @ 0x5dfd2c7 in /usr/bin/clickhouse
2020.09.24 13:54:06.588043 [ 17193 ] {} <Fatal> BaseDaemon: 8. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x5d0d06d in /usr/bin/clickhouse
2020.09.24 13:54:06.588105 [ 17193 ] {} <Fatal> BaseDaemon: 9. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x5d0d753 in /usr/bin/clickhouse
2020.09.24 13:54:06.588134 [ 17193 ] {} <Fatal> BaseDaemon: 10. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x5d0c60d in /usr/bin/clickhouse
2020.09.24 13:54:06.588149 [ 17193 ] {} <Fatal> BaseDaemon: 11. ? @ 0x5d0acdf in /usr/bin/clickhouse
2020.09.24 13:54:06.588176 [ 17193 ] {} <Fatal> BaseDaemon: 12. start_thread @ 0x7e65 in /usr/lib64/libpthread-2.17.so
2020.09.24 13:54:06.588195 [ 17193 ] {} <Fatal> BaseDaemon: 13. __clone @ 0xfe88d in /usr/lib64/libc-2.17.so
2020.09.24 13:54:06.587723 [ 17194 ] {} <Fatal> BaseDaemon: ########################################
2020.09.24 13:54:06.587773 [ 17194 ] {} <Fatal> BaseDaemon: (version 20.6.6.7, no build id) (from thread 17192) (no query) Received signal Segmentation fault (11)
2020.09.24 13:54:06.587786 [ 17194 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code.
2020.09.24 13:54:06.587808 [ 17194 ] {} <Fatal> BaseDaemon: Stack trace: 0xd2380e1 0x5d048ff 0xa2a53ef 0x5dfc737 0x5dfd2c7 0x5d0d06d 0x5d0d753 0x5d0c60d 0x5d0acdf 0x7f5b07ba5e65 0x7f5b074c288d
2020.09.24 13:54:06.587877 [ 17194 ] {} <Fatal> BaseDaemon: 3. memcpy @ 0xd2380e1 in /usr/bin/clickhouse
2020.09.24 13:54:06.587930 [ 17194 ] {} <Fatal> BaseDaemon: 4. void DB::writeAnyEscapedString<(char)39, false>(char const*, char const*, DB::WriteBuffer&) @ 0x5d048ff in /usr/bin/clickhouse
2020.09.24 13:54:06.587962 [ 17194 ] {} <Fatal> BaseDaemon: 5. DB::DataTypeString::serializeTextQuoted(DB::IColumn const&, unsigned long, DB::WriteBuffer&, DB::FormatSettings const&) const @ 0xa2a53ef in /usr/bin/clickhouse
2020.09.24 13:54:06.587984 [ 17194 ] {} <Fatal> BaseDaemon: 6. DB::ClusterCopier::getShardPartitions(DB::ConnectionTimeouts const&, DB::TaskShard&) @ 0x5dfc737 in /usr/bin/clickhouse
2020.09.24 13:54:06.587995 [ 17194 ] {} <Fatal> BaseDaemon: 7. DB::ClusterCopier::discoverShardPartitions(DB::ConnectionTimeouts const&, std::__1::shared_ptr<DB::TaskShard> const&) @ 0x5dfd2c7 in /usr/bin/clickhouse
2020.09.24 13:54:06.588008 [ 17194 ] {} <Fatal> BaseDaemon: 8. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x5d0d06d in /usr/bin/clickhouse
2020.09.24 13:54:06.588036 [ 17194 ] {} <Fatal> BaseDaemon: 9. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x5d0d753 in /usr/bin/clickhouse
2020.09.24 13:54:06.588064 [ 17194 ] {} <Fatal> BaseDaemon: 10. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x5d0c60d in /usr/bin/clickhouse
2020.09.24 13:54:06.588080 [ 17194 ] {} <Fatal> BaseDaemon: 11. ? @ 0x5d0acdf in /usr/bin/clickhouse
2020.09.24 13:54:06.588118 [ 17194 ] {} <Fatal> BaseDaemon: 12. start_thread @ 0x7e65 in /usr/lib64/libpthread-2.17.so
2020.09.24 13:54:06.588142 [ 17194 ] {} <Fatal> BaseDaemon: 13. __clone @ 0xfe88d in /usr/lib64/libc-2.17.so
```
The destination cluster consists of single shard / single replica. | https://github.com/ClickHouse/ClickHouse/issues/15235 | https://github.com/ClickHouse/ClickHouse/pull/17248 | edce1e636e1b73488a620a27a3b0c2a8a14d9945 | 27acf6462f1a087e2ccc274a86a1489698be72a6 | 2020-09-24T14:50:49Z | c++ | 2020-11-25T11:31:59Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,228 | ["src/Storages/StorageMerge.cpp", "tests/queries/0_stateless/01483_merge_table_join_and_group_by.reference", "tests/queries/0_stateless/01483_merge_table_join_and_group_by.sql"] | Merge table 'Unknown identifier' in GROUP BY for joined column | **How to reproduce**
Clickhouse server version 20.3.17
```
CREATE TABLE table_2(key UInt32, ID UInt32) ENGINE = MergeTree PARTITION BY tuple() ORDER BY key;
CREATE TABLE merge_table (key UInt32) ENGINE=Merge(default,'^table');
CREATE TABLE table (key UInt32) ENGINE=MergeTree PARTITION BY tuple() ORDER BY key;
SELECT ID, key FROM merge_table INNER JOIN table_2 USING(key) GROUP BY ID, key;
[a5342ff16411] 2020.09.24 10:45:40.732748 [ 72 ] {42a90654-6796-4584-a7f0-5113d0b3d721} <Debug> executeQuery: (from 127.0.0.1:40086) SELECT ID, key FROM merge_table INNER JOIN table_2 USING (key) GROUP BY ID, key
[a5342ff16411] 2020.09.24 10:45:40.733291 [ 72 ] {42a90654-6796-4584-a7f0-5113d0b3d721} <Trace> AccessRightsContext (default): Access granted: SELECT(key, ID) ON default.table_2
[a5342ff16411] 2020.09.24 10:45:40.733356 [ 72 ] {42a90654-6796-4584-a7f0-5113d0b3d721} <Debug> Join: setSampleBlock: table_2.key UInt32 UInt32(size = 0), ID UInt32 UInt32(size = 0)
[a5342ff16411] 2020.09.24 10:45:40.733928 [ 72 ] {42a90654-6796-4584-a7f0-5113d0b3d721} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
[a5342ff16411] 2020.09.24 10:45:40.734072 [ 72 ] {42a90654-6796-4584-a7f0-5113d0b3d721} <Error> executeQuery: Code: 47, e.displayText() = DB::Exception: Unknown identifier (in GROUP BY): ID (version 20.3.17.173 (official build)) (from 127.0.0.1:40086) (in query: SELECT ID, key FROM merge_table INNER JOIN table_2 USING (key) GROUP BY ID, key), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1059b460 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9972d in /usr/bin/clickhouse
2. ? @ 0xd34aca0 in /usr/bin/clickhouse
3. DB::ExpressionAnalyzer::ExpressionAnalyzer(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::SyntaxAnalyzerResult const> const&, DB::Context const&, unsigned long, bool) @ 0xd346e65 in /usr/bin/clickhouse
4. DB::KeyCondition::getBlockWithConstants(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::SyntaxAnalyzerResult const> const&, DB::Context const&) @ 0xd910f68 in /usr/bin/clickhouse
5. DB::KeyCondition::KeyCondition(DB::SelectQueryInfo const&, DB::Context const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::ExpressionActions> const&) @ 0xd9178a1 in /usr/bin/clickhouse
6. DB::MergeTreeDataSelectExecutor::readFromParts(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const> > >, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, DB::SelectQueryInfo const&, DB::Context const&, unsigned long, unsigned int, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, long, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, long> > > const*) const @ 0xda02303 in /usr/bin/clickhouse
7. DB::MergeTreeDataSelectExecutor::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, DB::SelectQueryInfo const&, DB::Context const&, unsigned long, unsigned int, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, long, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, long> > > const*) const @ 0xda08103 in /usr/bin/clickhouse
8. DB::StorageMergeTree::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, DB::SelectQueryInfo const&, DB::Context const&, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0xd765f45 in /usr/bin/clickhouse
9. DB::StorageMerge::createSources(DB::SelectQueryInfo const&, DB::QueryProcessingStage::Enum const&, unsigned long, DB::Block const&, std::__1::tuple<std::__1::shared_ptr<DB::IStorage>, DB::TableStructureReadLockHolder, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&, std::__1::shared_ptr<DB::Context> const&, unsigned long, bool, bool) @ 0xd75a276 in /usr/bin/clickhouse
10. DB::StorageMerge::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, DB::SelectQueryInfo const&, DB::Context const&, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0xd75b95f in /usr/bin/clickhouse
11. void DB::InterpreterSelectQuery::executeFetchColumns<DB::QueryPipeline>(DB::QueryProcessingStage::Enum, DB::QueryPipeline&, std::__1::shared_ptr<DB::PrewhereInfo> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, DB::QueryPipeline&) @ 0xd1ba65d in /usr/bin/clickhouse
12. void DB::InterpreterSelectQuery::executeImpl<DB::QueryPipeline>(DB::QueryPipeline&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, DB::QueryPipeline&) @ 0xd1bddc7 in /usr/bin/clickhouse
13. DB::InterpreterSelectQuery::executeWithProcessors() @ 0xd180155 in /usr/bin/clickhouse
14. DB::InterpreterSelectWithUnionQuery::executeWithProcessors() @ 0xd38c8f8 in /usr/bin/clickhouse
15. ? @ 0xd59ff3a in /usr/bin/clickhouse
16. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, bool) @ 0xd5a0a01 in /usr/bin/clickhouse
17. DB::TCPHandler::runImpl() @ 0x907b5c9 in /usr/bin/clickhouse
18. DB::TCPHandler::run() @ 0x907c5b0 in /usr/bin/clickhouse
19. Poco::Net::TCPServerConnection::start() @ 0xe402b9b in /usr/bin/clickhouse
20. Poco::Net::TCPServerDispatcher::run() @ 0xe40301d in /usr/bin/clickhouse
21. Poco::PooledThread::run() @ 0x106295c7 in /usr/bin/clickhouse
22. Poco::ThreadImpl::runnableEntry(void*) @ 0x106253cc in /usr/bin/clickhouse
23. ? @ 0x10626d6d in /usr/bin/clickhouse
24. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
25. __clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
Received exception from server (version 20.3.17):
Code: 47. DB::Exception: Received from localhost:9000. DB::Exception: Unknown identifier (in GROUP BY): ID.
0 rows in set. Elapsed: 0.004 sec.
```
**Expected behavior**
Clickhouse server version 20.3.15
```
SELECT
ID,
key
FROM merge_table
INNER JOIN table_2 USING (key)
[4b6e47203a6d] 2020.09.24 10:47:41.508163 [ 248 ] {2c1bd414-7a2e-4025-ba12-aefbbb84b46c} <Debug> executeQuery: (from 172.17.0.1:52396) SELECT ID, key FROM merge_table INNER JOIN table_2 USING(key);
[4b6e47203a6d] 2020.09.24 10:47:41.508710 [ 248 ] {2c1bd414-7a2e-4025-ba12-aefbbb84b46c} <Trace> AccessRightsContext (default): Access granted: SELECT(key, ID) ON default.table_2
[4b6e47203a6d] 2020.09.24 10:47:41.508765 [ 248 ] {2c1bd414-7a2e-4025-ba12-aefbbb84b46c} <Debug> Join: setSampleBlock: table_2.key UInt32 UInt32(size = 0), ID UInt32 UInt32(size = 0)
[4b6e47203a6d] 2020.09.24 10:47:41.508943 [ 248 ] {2c1bd414-7a2e-4025-ba12-aefbbb84b46c} <Debug> default.table (SelectExecutor): Key condition: unknown
[4b6e47203a6d] 2020.09.24 10:47:41.508999 [ 248 ] {2c1bd414-7a2e-4025-ba12-aefbbb84b46c} <Debug> default.table (SelectExecutor): Selected 0 parts by date, 0 parts by key, 0 marks to read from 0 ranges
[4b6e47203a6d] 2020.09.24 10:47:41.509077 [ 248 ] {2c1bd414-7a2e-4025-ba12-aefbbb84b46c} <Debug> default.table_2 (SelectExecutor): Key condition: unknown
[4b6e47203a6d] 2020.09.24 10:47:41.509115 [ 248 ] {2c1bd414-7a2e-4025-ba12-aefbbb84b46c} <Debug> default.table_2 (SelectExecutor): Selected 0 parts by date, 0 parts by key, 0 marks to read from 0 ranges
[4b6e47203a6d] 2020.09.24 10:47:41.509166 [ 248 ] {2c1bd414-7a2e-4025-ba12-aefbbb84b46c} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
[4b6e47203a6d] 2020.09.24 10:47:41.509563 [ 203 ] {2c1bd414-7a2e-4025-ba12-aefbbb84b46c} <Trace> CreatingSetsBlockInputStream: Creating join.
[4b6e47203a6d] 2020.09.24 10:47:41.509643 [ 203 ] {2c1bd414-7a2e-4025-ba12-aefbbb84b46c} <Debug> default.table_2 (SelectExecutor): Key condition: unknown
[4b6e47203a6d] 2020.09.24 10:47:41.509673 [ 203 ] {2c1bd414-7a2e-4025-ba12-aefbbb84b46c} <Debug> default.table_2 (SelectExecutor): Selected 0 parts by date, 0 parts by key, 0 marks to read from 0 ranges
[4b6e47203a6d] 2020.09.24 10:47:41.509713 [ 203 ] {2c1bd414-7a2e-4025-ba12-aefbbb84b46c} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
[4b6e47203a6d] 2020.09.24 10:47:41.509791 [ 203 ] {2c1bd414-7a2e-4025-ba12-aefbbb84b46c} <Debug> CreatingSetsBlockInputStream: Subquery has empty result.
[4b6e47203a6d] 2020.09.24 10:47:41.510345 [ 248 ] {2c1bd414-7a2e-4025-ba12-aefbbb84b46c} <Information> executeQuery: Read 1 rows, 4.00 B in 0.002 sec., 478 rows/sec., 1.87 KiB/sec.
[4b6e47203a6d] 2020.09.24 10:47:41.510439 [ 248 ] {2c1bd414-7a2e-4025-ba12-aefbbb84b46c} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
Ok.
0 rows in set. Elapsed: 0.004 sec.
```
**Additional context**
Clickhouse server version 20.3.16 works correctly.
Clickhouse server version 20.3.18 doesn't work.
Clickhouse server version 20.9.2 doesn't work.
| https://github.com/ClickHouse/ClickHouse/issues/15228 | https://github.com/ClickHouse/ClickHouse/pull/15242 | aa43e3d5db0675f38da5c023981771b5177f177a | 0ac18a382ffb00358e1da53132c8772c80ba0bf9 | 2020-09-24T10:56:05Z | c++ | 2020-09-30T20:11:27Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,206 | ["src/Core/Settings.h", "src/Storages/AlterCommands.cpp", "src/Storages/StorageMaterializedView.cpp", "tests/integration/test_replicated_database/test.py", "tests/queries/0_stateless/02888_obsolete_settings.reference", "tests/queries/0_stateless/02931_alter_materialized_view_query_inconsistent.reference", "tests/queries/0_stateless/02931_alter_materialized_view_query_inconsistent.sql"] | ALTER MV MODIFY QUERY changes MV query but not columns list. | ```
CREATE TABLE src(v UInt64) ENGINE = Null;
CREATE TABLE dest(v UInt64) Engine = MergeTree() ORDER BY v;
CREATE MATERIALIZED VIEW pipe TO dest AS SELECT v FROM src;
ALTER TABLE dest ADD COLUMN v2 UInt64;
SET allow_experimental_alter_materialized_view_structure = 1;
ALTER TABLE pipe add column v2 Int64, MODIFY QUERY SELECT v * 2 as v, 1 as v2 FROM src;
DESCRIBE TABLE pipe
┌─name─┬─type───┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ v │ UInt64 │ │ │ │ │ │
└──────┴────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
expected 2 columns `v`, `v2`
SHOW CREATE TABLE pipe
┌─statement─────────────────────────────────────────────────────────────────────────────────────────────────┐
│ CREATE MATERIALIZED VIEW dw.pipe TO dw.dest
(
`v` UInt64
) AS
SELECT
v * 2 AS v,
1 AS v2
FROM dw.src │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/15206 | https://github.com/ClickHouse/ClickHouse/pull/57311 | e664e66a9a5532bee7ff0533348b041fc2be556f | fb98b212c5ba614fc8f3d8308f4cade183bc7c26 | 2020-09-23T20:12:03Z | c++ | 2023-12-01T11:18:43Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,187 | ["src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp", "src/Interpreters/MySQL/tests/gtest_create_rewritten.cpp"] | MaterializeMySQL Code: 46. DB::Exception: Received from localhost:9000. DB::Exception: Unknown function task_id | **Describe the bug**
version: 20.10.1.4699
I got this issue when accessing a table after the MaterializeMySQL database was created, and found that some tables were not synchronized to ClickHouse from MySQL.
**How to reproduce**
```
CREATE DATABASE credit_ga ENGINE = MaterializeMySQL('192.168.1.123:3306', 'credit_ga', 'root', '123456');
test-1-118.raipeng.com :) use credit_ga
test-1-118.raipeng.com :) SELECT *FROM aaa
SELECT *
FROM aaa
Received exception from server (version 20.10.1):
Code: 46. DB::Exception: Received from localhost:9000. DB::Exception: Unknown function task_id.
0 rows in set. Elapsed: 0.003 sec.
```
error log as below:
```
2020.09.23 17:27:57.140067 [ 22323 ] {} <Error> MaterializeMySQLSyncThread: Code: 46, e.displayText() = DB::Exception: Unknown function task_id, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x114fb850 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8367afd in /usr/bin/clickhouse
2. ? @ 0xdd5051a in /usr/bin/clickhouse
3. DB::FunctionFactory::get(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context const&) const @ 0xdd4fdd5 in /usr/bin/clickhouse
4. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xe44e00c in /usr/bin/clickhouse
5. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xe44e169 in /usr/bin/clickhouse
6. ? @ 0xe428873 in /usr/bin/clickhouse
7. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xe42e147 in /usr/bin/clickhouse
8. ? @ 0xe704160 in /usr/bin/clickhouse
9. ? @ 0xe705add in /usr/bin/clickhouse
10. DB::MySQLInterpreter::InterpreterCreateImpl::getRewrittenQueries(DB::MySQLParser::ASTCreateQuery const&, DB::Context const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xe708b8b in /usr/bin/clickhouse
11. DB::InterpreterExternalDDLQuery::execute() @ 0xe376835 in /usr/bin/clickhouse
12. ? @ 0xe6e22bc in /usr/bin/clickhouse
13. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0xe6e41a1 in /usr/bin/clickhouse
14. ? @ 0xe2d2664 in /usr/bin/clickhouse
15. ? @ 0xe2ddb63 in /usr/bin/clickhouse
16. ? @ 0xe2df273 in /usr/bin/clickhouse
17. DB::commitMetadata(std::__1::function<void ()> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xe317429 in /usr/bin/clickhouse
18. DB::MaterializeMetadata::transaction(DB::MySQLReplication::Position const&, std::__1::function<void ()> const&) @ 0xe31983e in /usr/bin/clickhouse
19. DB::MaterializeMySQLSyncThread::prepareSynchronized(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xe2d3d72 in /usr/bin/clickhouse
20. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xe2dc17b in /usr/bin/clickhouse
21. ? @ 0xe2dc6c6 in /usr/bin/clickhouse
22. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8395fcf in /usr/bin/clickhouse
23. ? @ 0x83948f3 in /usr/bin/clickhouse
24. start_thread @ 0x7ea5 in /usr/lib64/libpthread-2.17.so
25. __clone @ 0xfe8dd in /usr/lib64/libc-2.17.so
```
(version 20.10.1.4699 (official build)) | https://github.com/ClickHouse/ClickHouse/issues/15187 | https://github.com/ClickHouse/ClickHouse/pull/17944 | 5f78280aec51a375827eebfcc7c8a2a91efeb004 | 60aef3d5291bed69947bba7c82e12f5fcd85e7b4 | 2020-09-23T09:38:51Z | c++ | 2020-12-10T16:42:38Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,168 | ["src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp"] | confusing message "Selected n parts by date", | `Selected 13 parts by date, 13 parts by key, 26 marks by primary key .... `
This message is confusing for new users.
I suggest replacing it with
`Selected 13 parts by partition key, 13 parts by primary key, 26 marks by primary key...` | https://github.com/ClickHouse/ClickHouse/issues/15168 | https://github.com/ClickHouse/ClickHouse/pull/15169 | 88dc3126ad9f47b3f2406085994d4fa1547d4256 | 88f01954d9b5c3f5836f711ad22881f6b51e0848 | 2020-09-22T19:27:17Z | c++ | 2020-09-22T23:03:05Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,117 | ["docs/en/engines/table-engines/special/buffer.md", "docs/ru/engines/table-engines/special/buffer.md"] | ALTER TABLE ADD COLUMN on Buffer table breaks block structure | **Describe the bug**
INSERT fails after adding a column to a buffer table.
FYI SELECT works.
**How to reproduce**
* ClickHouse server version 20.8.3.18 Debian official build. Error message showed version 20.8.2.
* `CREATE TABLE` statement
```
CREATE TABLE t_local (timestamp DateTime)
ENGINE = MergeTree PARTITION BY toYYYYMMDD(timestamp)
ORDER BY ( timestamp )
;
CREATE TABLE t_buffer (timestamp DateTime)
Engine = Buffer(default, t_local, 16, 3, 20, 2000000, 20000000, 100000000, 300000000);
```
* Queries to run that lead to unexpected result
```
INSERT INTO t_buffer (timestamp) VALUES (now());
ALTER TABLE t_local ADD COLUMN s String;
ALTER TABLE t_buffer ADD COLUMN s String;
-- SELECT works. It successfully returns two columns
SELECT * FROM t_buffer;
-- INSERT fails
INSERT INTO t_buffer (timestamp, s) VALUES (now(), 'hello');
```
**Expected behavior**
INSERT should be successful.
**Error message**
```
Received exception from server (version 20.8.2):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Block structure mismatch in Buffer stream: different number of columns:
s String String(size = 1), timestamp DateTime UInt32(size = 1)
timestamp DateTime UInt32(size = 0).
```
**Stack trace**
```
2020.09.22 05:14:31.973903 [ 2044016 ] {5dede1ca-e5c5-4f57-bac1-ba11a89fdbdc} <Error> executeQuery: Code: 49, e.displayText() = DB::Exception: Block structure mismatch in Buffer stream: different number of columns:
s String String(size = 1), timestamp DateTime UInt32(size = 1)
timestamp DateTime UInt32(size = 0) (version 20.8.2.3 (official build)) (from [::1]:44036) (in query: INSERT into t_buffer (timestamp, s) values ), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1a80ae30 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xff9e75d in /usr/bin/clickhouse
2. ? @ 0x16f1b264 in /usr/bin/clickhouse
3. ? @ 0x16f16906 in /usr/bin/clickhouse
4. DB::BufferBlockOutputStream::insertIntoBuffer(DB::Block const&, DB::StorageBuffer::Buffer&) @ 0x17aea10a in /usr/bin/clickhouse
5. DB::BufferBlockOutputStream::write(DB::Block const&) @ 0x17aeae41 in /usr/bin/clickhouse
6. DB::PushingToViewsBlockOutputStream::write(DB::Block const&) @ 0x17453176 in /usr/bin/clickhouse
7. DB::AddingDefaultBlockOutputStream::write(DB::Block const&) @ 0x17459329 in /usr/bin/clickhouse
8. DB::SquashingBlockOutputStream::finalize() @ 0x17458b77 in /usr/bin/clickhouse
9. DB::SquashingBlockOutputStream::writeSuffix() @ 0x17458c5d in /usr/bin/clickhouse
10. DB::TCPHandler::processInsertQuery(DB::Settings const&) @ 0x17e9c4aa in /usr/bin/clickhouse
11. DB::TCPHandler::runImpl() @ 0x17e9d6ab in /usr/bin/clickhouse
12. DB::TCPHandler::run() @ 0x17e9dd50 in /usr/bin/clickhouse
13. Poco::Net::TCPServerConnection::start() @ 0x1a728c9b in /usr/bin/clickhouse
14. Poco::Net::TCPServerDispatcher::run() @ 0x1a72912b in /usr/bin/clickhouse
15. Poco::PooledThread::run() @ 0x1a8a7c06 in /usr/bin/clickhouse
16. Poco::ThreadImpl::runnableEntry(void*) @ 0x1a8a3000 in /usr/bin/clickhouse
17. start_thread @ 0x9609 in /lib/x86_64-linux-gnu/libpthread-2.31.so
18. /build/glibc-YYA7BZ/glibc-2.31/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:97: __GI___clone @ 0x122103 in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.31.so
```
**Additional context**
To work around the problem, DROP and re-CREATE the t_buffer table with the `s String` column. Make sure the content of the buffer table has been flushed to the local table before dropping it.
| https://github.com/ClickHouse/ClickHouse/issues/15117 | https://github.com/ClickHouse/ClickHouse/pull/27786 | 53d7842877e6f5a77820540545aa0e7ebfbf3ba9 | 29ff255601e06af614ba988fadc289969709182d | 2020-09-22T05:20:40Z | c++ | 2021-08-18T11:41:09Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,894 | ["src/Databases/MySQL/MaterializeMetadata.cpp"] | MaterializeMySQL run error | I have created MaterializeMySQL with:
CREATE DATABASE dmp_idm ENGINE = MaterializeMySQL('xxx:3307', 'dmp_idm', 'root', 'xxx');
my.cnf:
```
gtid_mode=on
enforce_gtid_consistency=true
binlog_format=ROW
log-bin=mysql-bin
```
see the log:
```
tail -100f clickhouse-server.err.log
2020.09.17 10:37:30.979768 [ 13674 ] {} <Fatal> BaseDaemon: ########################################
2020.09.17 10:37:30.979849 [ 13674 ] {} <Fatal> BaseDaemon: (version 20.9.1.4571 (official build), no build id) (from thread 13578) (no query) Received signal Segmentation fault (11)
2020.09.17 10:37:30.979889 [ 13674 ] {} <Fatal> BaseDaemon: Address: 0x2d85ef80 Access: write. Address not mapped to object.
2020.09.17 10:37:30.979914 [ 13674 ] {} <Fatal> BaseDaemon: Stack trace: 0x15c225a8 0x15c188f8 0x15c1bf25 0x15889272 0x15bf6aa2 0x15bf8402 0x157e7064 0x157f10d5 0x157f2803 0x1582ccb9 0x1582f0ee 0x157eaa91 0x157ef71b 0x157efc36 0xe641547 0xe63fb83 0x7fd0189fcdc5 0x7fd01831e21d
2020.09.17 10:37:30.980005 [ 13674 ] {} <Fatal> BaseDaemon: 3. std::__1::enable_if<(std::__1::__is_cpp17_forward_iterator<std::__1::__wrap_iter<std::__1::shared_ptr<DB::IAST>*> >::value) && (std::__1::is_constructible<std::__1::shared_ptr<DB::IAST>, std::__1::iterator_traits<std::__1::iterator_traits>::reference>::value), std::__1::__wrap_iter<std::__1::shared_ptr<DB::IAST>*> >::type std::__1::vector<std::__1::shared_ptr<DB::IAST>, std::__1::allocator<std::__1::shared_ptr<DB::IAST> > >::insert<std::__1::__wrap_iter<std::__1::shared_ptr<DB::IAST>*> >(std::__1::__wrap_iter<std::__1::shared_ptr<DB::IAST> const*>, std::__1::iterator_traits, std::__1::iterator_traits) @ 0x15c225a8 in /usr/bin/clickhouse
2020.09.17 10:37:30.980034 [ 13674 ] {} <Fatal> BaseDaemon: 4. ? @ 0x15c188f8 in /usr/bin/clickhouse
2020.09.17 10:37:30.980056 [ 13674 ] {} <Fatal> BaseDaemon: 5. DB::MySQLInterpreter::InterpreterCreateImpl::getRewrittenQueries(DB::MySQLParser::ASTCreateQuery const&, DB::Context const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x15c1bf25 in /usr/bin/clickhouse
2020.09.17 10:37:30.980081 [ 13674 ] {} <Fatal> BaseDaemon: 6. DB::InterpreterExternalDDLQuery::execute() @ 0x15889272 in /usr/bin/clickhouse
2020.09.17 10:37:30.980095 [ 13674 ] {} <Fatal> BaseDaemon: 7. ? @ 0x15bf6aa2 in /usr/bin/clickhouse
2020.09.17 10:37:30.980113 [ 13674 ] {} <Fatal> BaseDaemon: 8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0x15bf8402 in /usr/bin/clickhouse
2020.09.17 10:37:30.980126 [ 13674 ] {} <Fatal> BaseDaemon: 9. ? @ 0x157e7064 in /usr/bin/clickhouse
2020.09.17 10:37:30.980138 [ 13674 ] {} <Fatal> BaseDaemon: 10. ? @ 0x157f10d5 in /usr/bin/clickhouse
2020.09.17 10:37:30.980149 [ 13674 ] {} <Fatal> BaseDaemon: 11. ? @ 0x157f2803 in /usr/bin/clickhouse
2020.09.17 10:37:30.980174 [ 13674 ] {} <Fatal> BaseDaemon: 12. DB::commitMetadata(std::__1::function<void ()> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x1582ccb9 in /usr/bin/clickhouse
2020.09.17 10:37:30.980195 [ 13674 ] {} <Fatal> BaseDaemon: 13. DB::MaterializeMetadata::transaction(DB::MySQLReplication::Position const&, std::__1::function<void ()> const&) @ 0x1582f0ee in /usr/bin/clickhouse
2020.09.17 10:37:30.980226 [ 13674 ] {} <Fatal> BaseDaemon: 14. DB::MaterializeMySQLSyncThread::prepareSynchronized(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x157eaa91 in /usr/bin/clickhouse
2020.09.17 10:37:30.980249 [ 13674 ] {} <Fatal> BaseDaemon: 15. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x157ef71b in /usr/bin/clickhouse
2020.09.17 10:37:30.980261 [ 13674 ] {} <Fatal> BaseDaemon: 16. ? @ 0x157efc36 in /usr/bin/clickhouse
2020.09.17 10:37:30.980279 [ 13674 ] {} <Fatal> BaseDaemon: 17. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xe641547 in /usr/bin/clickhouse
2020.09.17 10:37:30.980291 [ 13674 ] {} <Fatal> BaseDaemon: 18. ? @ 0xe63fb83 in /usr/bin/clickhouse
2020.09.17 10:37:30.980329 [ 13674 ] {} <Fatal> BaseDaemon: 19. start_thread @ 0x7dc5 in /usr/lib64/libpthread-2.17.so
2020.09.17 10:37:30.980349 [ 13674 ] {} <Fatal> BaseDaemon: 20. clone @ 0xf621d in /usr/lib64/libc-2.17.so
2020.09.17 10:38:11.697416 [ 13738 ] {} <Error> MaterializeMySQLSyncThread: Code: 76, e.displayText() = DB::ErrnoException: Cannot open file /data/clickhouse/metadata/dmp_idm//.metadata.tmp, errno: 17, strerror: File exists, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18bcb170 in /usr/bin/clickhouse
1. DB::ErrnoException::ErrnoException(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) @ 0xe617f8b in /usr/bin/clickhouse
2. DB::throwFromErrnoWithPath(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int) @ 0xe615d8a in /usr/bin/clickhouse
3. DB::WriteBufferFromFile::WriteBufferFromFile(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, int, unsigned int, char*, unsigned long) @ 0xe71f608 in /usr/bin/clickhouse
4. DB::MaterializeMetadata::transaction(DB::MySQLReplication::Position const&, std::__1::function<void ()> const&) @ 0x1582e78c in /usr/bin/clickhouse
5. DB::MaterializeMySQLSyncThread::prepareSynchronized(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x157eaa91 in /usr/bin/clickhouse
6. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x157ef71b in /usr/bin/clickhouse
7. ? @ 0x157efc36 in /usr/bin/clickhouse
8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xe641547 in /usr/bin/clickhouse
9. ? @ 0xe63fb83 in /usr/bin/clickhouse
10. start_thread @ 0x7dc5 in /usr/lib64/libpthread-2.17.so
11. clone @ 0xf621d in /usr/lib64/libc-2.17.so
(version 20.9.1.4571 (official build))
2020.09.17 10:38:11.698228 [ 13738 ] {} <Error> MaterializeMySQLSyncThread: Code: 76, e.displayText() = DB::ErrnoException: Cannot open file /data/clickhouse/metadata/dmp_idm//.metadata.tmp, errno: 17, strerror: File exists, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18bcb170 in /usr/bin/clickhouse
1. DB::ErrnoException::ErrnoException(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) @ 0xe617f8b in /usr/bin/clickhouse
2. DB::throwFromErrnoWithPath(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int) @ 0xe615d8a in /usr/bin/clickhouse
3. DB::WriteBufferFromFile::WriteBufferFromFile(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, int, unsigned int, char*, unsigned long) @ 0xe71f608 in /usr/bin/clickhouse
4. DB::MaterializeMetadata::transaction(DB::MySQLReplication::Position const&, std::__1::function<void ()> const&) @ 0x1582e78c in /usr/bin/clickhouse
5. DB::MaterializeMySQLSyncThread::prepareSynchronized(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x157eaa91 in /usr/bin/clickhouse
6. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x157ef71b in /usr/bin/clickhouse
7. ? @ 0x157efc36 in /usr/bin/clickhouse
8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xe641547 in /usr/bin/clickhouse
9. ? @ 0xe63fb83 in /usr/bin/clickhouse
10. start_thread @ 0x7dc5 in /usr/lib64/libpthread-2.17.so
11. clone @ 0xf621d in /usr/lib64/libc-2.17.so
(version 20.9.1.4571 (official build))
2020.09.17 10:38:26.307285 [ 13793 ] {fc34c804-2465-413b-97f6-cb7aca0526c8} <Error> executeQuery: Code: 76, e.displayText() = DB::Exception: Cannot open file /data/clickhouse/metadata/dmp_idm//.metadata.tmp, errno: 17, strerror: File exists (version 20.9.1.4571 (official build)) (from 172.30.108.6:51390) (in query: SELECT * FROM dmp_idm.invitation FORMAT TabSeparatedWithNamesAndTypes;), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18bcb170 in /usr/bin/clickhouse
1. DB::ErrnoException::ErrnoException(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) @ 0xe617f8b in /usr/bin/clickhouse
2. DB::throwFromErrnoWithPath(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int) @ 0xe615d8a in /usr/bin/clickhouse
3. DB::WriteBufferFromFile::WriteBufferFromFile(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, int, unsigned int, char*, unsigned long) @ 0xe71f608 in /usr/bin/clickhouse
4. DB::MaterializeMetadata::transaction(DB::MySQLReplication::Position const&, std::__1::function<void ()> const&) @ 0x1582e78c in /usr/bin/clickhouse
5. DB::MaterializeMySQLSyncThread::prepareSynchronized(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x157eaa91 in /usr/bin/clickhouse
6. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x157ef71b in /usr/bin/clickhouse
7. ? @ 0x157efc36 in /usr/bin/clickhouse
8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xe641547 in /usr/bin/clickhouse
9. ? @ 0xe63fb83 in /usr/bin/clickhouse
10. start_thread @ 0x7dc5 in /usr/lib64/libpthread-2.17.so
11. clone @ 0xf621d in /usr/lib64/libc-2.17.so
2020.09.17 10:38:26.307798 [ 13793 ] {fc34c804-2465-413b-97f6-cb7aca0526c8} <Error> DynamicQueryHandler: Code: 76, e.displayText() = DB::Exception: Cannot open file /data/clickhouse/metadata/dmp_idm//.metadata.tmp, errno: 17, strerror: File exists, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18bcb170 in /usr/bin/clickhouse
1. DB::ErrnoException::ErrnoException(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) @ 0xe617f8b in /usr/bin/clickhouse
2. DB::throwFromErrnoWithPath(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int) @ 0xe615d8a in /usr/bin/clickhouse
3. DB::WriteBufferFromFile::WriteBufferFromFile(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, int, unsigned int, char*, unsigned long) @ 0xe71f608 in /usr/bin/clickhouse
4. DB::MaterializeMetadata::transaction(DB::MySQLReplication::Position const&, std::__1::function<void ()> const&) @ 0x1582e78c in /usr/bin/clickhouse
5. DB::MaterializeMySQLSyncThread::prepareSynchronized(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x157eaa91 in /usr/bin/clickhouse
6. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x157ef71b in /usr/bin/clickhouse
7. ? @ 0x157efc36 in /usr/bin/clickhouse
8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xe641547 in /usr/bin/clickhouse
9. ? @ 0xe63fb83 in /usr/bin/clickhouse
10. start_thread @ 0x7dc5 in /usr/lib64/libpthread-2.17.so
11. clone @ 0xf621d in /usr/lib64/libc-2.17.so
(version 20.9.1.4571 (official build))
```
help me,please! | https://github.com/ClickHouse/ClickHouse/issues/14894 | https://github.com/ClickHouse/ClickHouse/pull/14898 | 5a890d73777ed1be4da46dc30df4f2c03b53a9be | a76d8fb96b99b35b07658114f5f813f647515d96 | 2020-09-17T02:47:07Z | c++ | 2020-09-17T16:18:10Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,865 | ["src/Common/renameat2.cpp", "src/Common/renameat2.h", "src/Databases/DatabaseAtomic.cpp"] | v20.8 cannot create system.query_log when use docker volumes in /var/lib/clcikhouse | **Describe the bug**
`system.query_log` doesn't exist.
It is not because of the `log_queries` setting.
I found that the problem happens during query_log table initialization.
My ClickHouse server version is **20.8.2.3**
**How to reproduce**
> First start a docker server instance (/var/data/clickhouse is an empty local directory)
```
docker run -d --name clickhouse-server -p 8123:8123 -p 9000:9000 -p 9009:9009 -v /var/data/clickhouse:/var/lib/clickhouse yandex/clickhouse-server
```
> And then connect it
```
docker run -it --rm --link clickhouse-server:clickhouse-server yandex/clickhouse-client --host clickhouse-server
```
> Finally you can find error msg in clickhouse-server.err.log below:
**Error message and/or stacktrace**
```
{} <Error> void DB::SystemLog<LogElement>::flushImpl(const std::__1::vector<T>&, uint64_t) [with LogElement = DB::MetricLogElement; uint64_t = long unsigned int]: Code: 425, e.displayText() = DB::ErrnoException: Cannot rename /var/lib/clickhouse/store/363/3633f3aa-0d9f-4a49-989b-6d888dee71d4/metric_log.sql.tmp to /var/lib/clickhouse/store/363/3633f3aa-0d9f-4a49-989b-6d888dee71d4/metric_log.sql, errno: 22, strerror: Invalid argument, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1a80ae30 in /usr/bin/clickhouse
1. DB::ErrnoException::ErrnoException(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) @ 0xffa293b in /usr/bin/clickhouse
2. DB::throwFromErrnoWithPath(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int) @ 0xffa073a in /usr/bin/clickhouse
3. ? @ 0x1724d3f1 in /usr/bin/clickhouse
4. DB::renameNoReplace(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x1724d80a in /usr/bin/clickhouse
5. DB::DatabaseAtomic::commitCreateTable(DB::ASTCreateQuery const&, std::__1::shared_ptr<DB::IStorage> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x172430af in /usr/bin/clickhouse
6. DB::DatabaseOnDisk::createTable(DB::Context const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::IStorage> const&, std::__1::shared_ptr<DB::IAST> const&) @ 0x172621db in /usr/bin/clickhouse
7. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&) @ 0x17380c89 in /usr/bin/clickhouse
8. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x1738363d in /usr/bin/clickhouse
9. DB::InterpreterCreateQuery::execute() @ 0x173861d9 in /usr/bin/clickhouse
10. DB::SystemLog<DB::MetricLogElement>::prepareTable() @ 0x1002b15b in /usr/bin/clickhouse
11. DB::SystemLog<DB::MetricLogElement>::flushImpl(std::__1::vector<DB::MetricLogElement, std::__1::allocator<DB::MetricLogElement> > const&, unsigned long) @ 0x10035614 in /usr/bin/clickhouse
12. DB::SystemLog<DB::MetricLogElement>::savingThreadFunction() @ 0x100391a4 in /usr/bin/clickhouse
13. ThreadFromGlobalPool::ThreadFromGlobalPool<DB::SystemLog<DB::MetricLogElement>::startup()::'lambda'()>(DB::MetricLogElement&&, DB::SystemLog<DB::MetricLogElement>::startup()::'lambda'()&&...)::'lambda'()::operator()() const @ 0x100398a6 in /usr/bin/clickhouse
14. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xffce517 in /usr/bin/clickhouse
15. ? @ 0xffccb53 in /usr/bin/clickhouse
16. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
17. clone @ 0x122103 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 20.8.2.3 (official build))
```
| https://github.com/ClickHouse/ClickHouse/issues/14865 | https://github.com/ClickHouse/ClickHouse/pull/15024 | 5e56f14d5f854066b2fb8de0431c1271e5a9a096 | e29c4c3cc47ab2a6c4516486c1b77d57e7d42643 | 2020-09-16T05:28:33Z | c++ | 2020-09-21T18:08:17Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,840 | ["programs/server/Server.cpp", "src/Common/ThreadStatus.cpp", "src/Functions/trap.cpp"] | Try capturing stack overflow errors? | https://rethinkdb.com/blog/handling-stack-overflow-on-custom-stacks/
https://barrgroup.com/embedded-systems/how-to/prevent-detect-stack-overflow
etc. | https://github.com/ClickHouse/ClickHouse/issues/14840 | https://github.com/ClickHouse/ClickHouse/pull/16346 | dd78c9dc75ad684b4cad458a5e87f3e28d3ba775 | e9d97160f4431e5b460ca77c1851146d24981791 | 2020-09-15T09:44:30Z | c++ | 2020-12-17T19:28:22Z |
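A minimal, self-contained sketch of the technique those links describe — handling SIGSEGV on an alternate signal stack so the handler can still run when the thread's own stack is exhausted. Everything here (buffer size, message, exit code) is an arbitrary illustration, not ClickHouse code:

```cpp
#include <csignal>
#include <unistd.h>

/// Dedicated stack for the signal handler: when SIGSEGV is caused by stack
/// exhaustion, the handler cannot run on the overflowed stack itself.
static char alt_stack[64 * 1024];

static void faultHandler(int /*sig*/, siginfo_t * /*info*/, void * /*context*/)
{
    /// Only async-signal-safe calls here. A real handler would compare
    /// info->si_addr with the thread's stack bounds to tell a stack overflow
    /// apart from an ordinary bad pointer before reporting it.
    static const char msg[] = "SIGSEGV received, possibly a stack overflow\n";
    ssize_t ignored = write(STDERR_FILENO, msg, sizeof(msg) - 1);
    (void)ignored;
    _exit(128 + SIGSEGV);
}

int main()
{
    stack_t ss = {};
    ss.ss_sp = alt_stack;
    ss.ss_size = sizeof(alt_stack);
    sigaltstack(&ss, nullptr);

    struct sigaction sa = {};
    sa.sa_sigaction = faultHandler;
    sa.sa_flags = SA_SIGINFO | SA_ONSTACK;  /// SA_ONSTACK routes the handler onto the alternate stack
    sigaction(SIGSEGV, &sa, nullptr);

    /// ... run the server; runaway recursion now produces a report instead of a silent crash.
    return 0;
}
```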
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,835 | ["src/Core/MySQL/MySQLReplication.cpp", "src/Core/MySQL/MySQLReplication.h", "src/IO/MySQLBinlogEventReadBuffer.cpp", "src/IO/MySQLBinlogEventReadBuffer.h", "src/IO/tests/gtest_mysql_binlog_event_read_buffer.cpp", "src/IO/ya.make"] | MaterializedMySQL Packet payload is not fully read. Stopped after 7372 bytes, while 693 bytes are in buffer | MySQL INSERT INTO :
table: cleanout_asin_info_jp
table count: `select count(1) from cleanout_asin_info_jp` returns 10308
execution sql:
```
insert into cleanout_asin_info_jp
(asin
,product_title
,provide
,provide_id
,product_path
,img1
,img2
,img3
,img4
,img5
,currency
,list_price
,sale_price
,price
,deal_price
,price_mc
,price_savings
,price_savings_percent
,bullet_point1
,bullet_point2
,bullet_point3
,bullet_point4
,bullet_point5
,customer_reviews_count
,ask
,best_seller
,best_seller_path
,stars
,five_star
,four_star
,three_star
,two_star
,one_star
,issue_date
,top_bad_reviews
,offer_listing
,item_model_number
,soldby
,seller_id
,fba
,bsr1
,bsr1path
,bsr2
,bsr2path
,bsr3
,bsr3path
,bsr4
,bsr4path
,bsr5
,bsr5path
,AddOn
,add_date_time
,local_date_time)
select
asin
,product_title
,provide
,provide_id
,product_path
,img1
,img2
,img3
,img4
,img5
,currency
,list_price
,sale_price
,price
,deal_price
,price_mc
,price_savings
,price_savings_percent
,bullet_point1
,bullet_point2
,bullet_point3
,bullet_point4
,bullet_point5
,customer_reviews_count
,ask
,best_seller
,best_seller_path
,stars
,five_star
,four_star
,three_star
,two_star
,one_star
,issue_date
,top_bad_reviews
,offer_listing
,item_model_number
,soldby
,seller_id
,fba
,bsr1
,bsr1path
,bsr2
,bsr2path
,bsr3
,bsr3path
,bsr4
,bsr4path
,bsr5
,bsr5path
,AddOn
,add_date_time
,local_date_time
from cleanout_asin_info_jp;
```
ClickHouse exception, code: 99, host: 172.16.6.207, port: 8123; Code: 99, e.displayText() = DB::Exception: Packet payload is not fully read. Stopped after 7372 bytes, while 693 bytes are in buffer. (version 20.8.2.3 (official build))
err log:
```
2020.09.15 16:32:49.288707 [ 1552 ] {cc5108a6-0250-4d05-91b2-aea49f5ea3ba} <Error> DynamicQueryHandler: Code: 99, e.displayText() = DB::Exception: Packet payload is not fully read. Stopped after 7372 bytes, while 693 bytes are in buffer., Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1a80ae30 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xff9e75d in /usr/bin/clickhouse
2. ? @ 0x173bf410 in /usr/bin/clickhouse
3. DB::MySQLProtocol::PacketEndpoint::tryReceivePacket(DB::MySQLProtocol::IMySQLReadPacket&, unsigned long) @ 0x173cbe27 in /usr/bin/clickhouse
4. DB::MySQLClient::readOneBinlogEvent(unsigned long) @ 0x173c9491 in /usr/bin/clickhouse
5. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x173a10b0 in /usr/bin/clickhouse
6. ? @ 0x173a14b6 in /usr/bin/clickhouse
7. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xffce517 in /usr/bin/clickhouse
8. ? @ 0xffccb53 in /usr/bin/clickhouse
9. start_thread @ 0x7e65 in /usr/lib64/libpthread-2.17.so
10. clone @ 0xfe88d in /usr/lib64/libc-2.17.so
(version 20.8.2.3 (official build))
```
clickhouse-server.log:
```
2020.09.15 16:32:49.287267 [ 1552 ] {} <Trace> DynamicQueryHandler: Request URI: /?max_result_rows=200&compress=1&user=default&password=clickhouse123&extremes=0&query_id=cc5108a6-0250-4d05-91b2-aea49f5ea3ba&result_overflow_mode=break&database=ptx_db
2020.09.15 16:32:49.287574 [ 1552 ] {cc5108a6-0250-4d05-91b2-aea49f5ea3ba} <Debug> executeQuery: (from 172.16.7.163:1231) SELECT count(1) FROM test_binlog.cleanout_asin_info_jp FORMAT TabSeparatedWithNamesAndTypes;
2020.09.15 16:32:49.288400 [ 1552 ] {cc5108a6-0250-4d05-91b2-aea49f5ea3ba} <Error> executeQuery: Code: 99, e.displayText() = DB::Exception: Packet payload is not fully read. Stopped after 7372 bytes, while 693 bytes are in buffer. (version 20.8.2.3 (official build)) (from 172.16.7.163:1231) (in query: SELECT count(1) FROM test_binlog.cleanout_asin_info_jp FORMAT TabSeparatedWithNamesAndTypes;), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1a80ae30 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xff9e75d in /usr/bin/clickhouse
2. ? @ 0x173bf410 in /usr/bin/clickhouse
3. DB::MySQLProtocol::PacketEndpoint::tryReceivePacket(DB::MySQLProtocol::IMySQLReadPacket&, unsigned long) @ 0x173cbe27 in /usr/bin/clickhouse
4. DB::MySQLClient::readOneBinlogEvent(unsigned long) @ 0x173c9491 in /usr/bin/clickhouse
5. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x173a10b0 in /usr/bin/clickhouse
6. ? @ 0x173a14b6 in /usr/bin/clickhouse
7. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xffce517 in /usr/bin/clickhouse
8. ? @ 0xffccb53 in /usr/bin/clickhouse
9. start_thread @ 0x7e65 in /usr/lib64/libpthread-2.17.so
10. clone @ 0xfe88d in /usr/lib64/libc-2.17.so
2020.09.15 16:32:49.288707 [ 1552 ] {cc5108a6-0250-4d05-91b2-aea49f5ea3ba} <Error> DynamicQueryHandler: Code: 99, e.displayText() = DB::Exception: Packet payload is not fully read. Stopped after 7372 bytes, while 693 bytes are in buffer., Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1a80ae30 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xff9e75d in /usr/bin/clickhouse
2. ? @ 0x173bf410 in /usr/bin/clickhouse
3. DB::MySQLProtocol::PacketEndpoint::tryReceivePacket(DB::MySQLProtocol::IMySQLReadPacket&, unsigned long) @ 0x173cbe27 in /usr/bin/clickhouse
4. DB::MySQLClient::readOneBinlogEvent(unsigned long) @ 0x173c9491 in /usr/bin/clickhouse
5. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x173a10b0 in /usr/bin/clickhouse
6. ? @ 0x173a14b6 in /usr/bin/clickhouse
7. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xffce517 in /usr/bin/clickhouse
8. ? @ 0xffccb53 in /usr/bin/clickhouse
9. start_thread @ 0x7e65 in /usr/lib64/libpthread-2.17.so
10. clone @ 0xfe88d in /usr/lib64/libc-2.17.so
```
| https://github.com/ClickHouse/ClickHouse/issues/14835 | https://github.com/ClickHouse/ClickHouse/pull/14852 | 652163c07c62bdd8724e109160b03016551b6c1a | 5a890d73777ed1be4da46dc30df4f2c03b53a9be | 2020-09-15T08:45:09Z | c++ | 2020-09-17T16:17:30Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,829 | ["src/Core/MySQL/MySQLReplication.cpp", "src/Core/MySQL/MySQLReplication.h", "src/IO/MySQLBinlogEventReadBuffer.cpp", "src/IO/MySQLBinlogEventReadBuffer.h", "src/IO/tests/gtest_mysql_binlog_event_read_buffer.cpp", "src/IO/ya.make"] | MaterializeMySQL count() result is incorrect | mysql have table t(a Int, b Int), 2 records: (1,1) and (2, 2), then in ck count
`select count(1) from ckdb.t` -- returns 2, which is correct.
Then delete the row (2, 2) in MySQL and run the count in ClickHouse again.
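A minimal sketch of the delete step (not from the original report; the MySQL database name `db` is a placeholder):
```sql
-- run on the MySQL side; the change should be replicated into ckdb.t
DELETE FROM db.t WHERE a = 2 AND b = 2;
```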
`select count(1) from ckdb.t ` -- still 2, that's not correct . | https://github.com/ClickHouse/ClickHouse/issues/14829 | https://github.com/ClickHouse/ClickHouse/pull/14852 | 652163c07c62bdd8724e109160b03016551b6c1a | 5a890d73777ed1be4da46dc30df4f2c03b53a9be | 2020-09-15T02:31:49Z | c++ | 2020-09-17T16:17:30Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,814 | ["programs/server/Server.cpp", "src/Common/ThreadPool.cpp"] | Clickhouse gets stuck talking to Zookeeper at startup | We are seeing an issue where Clickhouse gets stuck at start up, seemingly while talking to Zookeeper. It can start perfectly fine in a single node mode. Trace logs at startup:
```
Processing configuration file '/etc/clickhouse-server/config.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/config.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/fragments.xml'.
Including configuration file '/etc/clickhouse-server/conf.d/fragments.xml'.
Include not found: clickhouse_compression
Logging trace to console
2020.09.14 16:16:06.519248 [ 6 ] {} <Information> SentryWriter: Sending crash reports is disabled
2020.09.14 16:16:06.522189 [ 6 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.09.14 16:16:06.687157 [ 6 ] {} <Information> : Starting ClickHouse 20.9.1.4585 with revision 54439, no build id, PID 6
2020.09.14 16:16:06.687319 [ 6 ] {} <Information> Application: starting up
Processing configuration file '/etc/clickhouse-server/config.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/config.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/fragments.xml'.
Including configuration file '/etc/clickhouse-server/conf.d/fragments.xml'.
2020.09.14 16:16:06.701294 [ 6 ] {} <Trace> ZooKeeper: Initialized, hosts: clickhouse-jaeger3.zookeeper.foo.bar:2181,clickhouse-jaeger1.zookeeper.foo.bar:2181,clickhouse-jaeger2.zookeeper.foo.bar:2181
Include not found: clickhouse_compression
Include not found: networks
Include not found: networks
Include not found: networks
Include not found: networks
Include not found: networks
Include not found: networks
Include not found: networks
Include not found: networks
Include not found: networks
Include not found: networks
Saved preprocessed configuration to '/var/lib/clickhouse/preprocessed_configs/config.xml'.
2020.09.14 16:16:06.715072 [ 6 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.09.14 16:16:06.715156 [ 6 ] {} <Information> StatusFile: Status file /var/lib/clickhouse/status already exists - unclean restart. Contents:
PID: 6
Started at: 2020-09-14 16:14:19
Revision: 54439
2020.09.14 16:16:06.715273 [ 6 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.09.14 16:16:06.715295 [ 6 ] {} <Debug> Application: Initializing DateLUT.
2020.09.14 16:16:06.715327 [ 6 ] {} <Trace> Application: Initialized DateLUT with time zone 'Etc/UTC'.
```
At this point everything is stuck. Attaching GDB shows this:
```
(gdb) thread apply all bt
Thread 6 (LWP 40716):
#0 0x00007fa2b1b767ef in epoll_wait () from target:/lib/x86_64-linux-gnu/libc.so.6
#1 0x00000000130afe47 in Poco::Net::SocketImpl::pollImpl(Poco::Timespan&, int) ()
#2 0x00000000130b00a9 in Poco::Net::SocketImpl::poll(Poco::Timespan const&, int) ()
#3 0x0000000010ee8003 in DB::ReadBufferFromPocoSocket::poll(unsigned long) ()
#4 0x0000000010cc0a4a in Coordination::ZooKeeper::receiveThread() ()
#5 0x0000000010cc0c45 in ?? ()
#6 0x000000000920ca54 in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) ()
#7 0x000000000920b042 in ?? ()
#8 0x00007fa2b1c54fa3 in start_thread () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#9 0x00007fa2b1b764cf in clone () from target:/lib/x86_64-linux-gnu/libc.so.6
Thread 5 (LWP 40715):
#0 0x00007fa2b1c5b3f9 in pthread_cond_timedwait@@GLIBC_2.3.2 () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00000000131f3327 in Poco::SemaphoreImpl::waitImpl(long) ()
#2 0x0000000010cc2fee in Coordination::ZooKeeper::sendThread() ()
#3 0x0000000010cc3e75 in ?? ()
#4 0x000000000920ca54 in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) ()
#5 0x000000000920b042 in ?? ()
#6 0x00007fa2b1c54fa3 in start_thread () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#7 0x00007fa2b1b764cf in clone () from target:/lib/x86_64-linux-gnu/libc.so.6
Thread 4 (LWP 40708):
#0 0x00007fa2b1c5e544 in read () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00000000091fd4b2 in DB::ReadBufferFromFileDescriptor::nextImpl() ()
#2 0x000000000fa5240a in SignalListener::run() ()
#3 0x0000000013200aba in Poco::ThreadImpl::runnableEntry(void*) ()
#4 0x00007fa2b1c54fa3 in start_thread () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#5 0x00007fa2b1b764cf in clone () from target:/lib/x86_64-linux-gnu/libc.so.6
Thread 3 (LWP 40707):
#0 0x00007fa2b1c5b00c in pthread_cond_wait@@GLIBC_2.3.2 () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#1 0x000000001317d1eb in Poco::EventImpl::waitImpl() ()
#2 0x000000001320436f in Poco::PooledThread::run() ()
#3 0x0000000013200aba in Poco::ThreadImpl::runnableEntry(void*) ()
#4 0x00007fa2b1c54fa3 in start_thread () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#5 0x00007fa2b1b764cf in clone () from target:/lib/x86_64-linux-gnu/libc.so.6
Thread 2 (LWP 40706):
#0 0x00007fa2b1c5b00c in pthread_cond_wait@@GLIBC_2.3.2 () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#1 0x000000001317d1eb in Poco::EventImpl::waitImpl() ()
#2 0x000000001320436f in Poco::PooledThread::run() ()
#3 0x0000000013200aba in Poco::ThreadImpl::runnableEntry(void*) ()
#4 0x00007fa2b1c54fa3 in start_thread () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#5 0x00007fa2b1b764cf in clone () from target:/lib/x86_64-linux-gnu/libc.so.6
Thread 1 (LWP 40705):
#0 0x00007fa2b1c56495 in __pthread_timedjoin_ex () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#1 0x000000001407f1bb in std::__1::thread::join() ()
#2 0x000000000920b8c1 in ThreadPoolImpl<std::__1::thread>::finalize() ()
#3 0x000000000920b943 in ThreadPoolImpl<std::__1::thread>::~ThreadPoolImpl() ()
#4 0x000000000920af2b in GlobalThreadPool::initialize(unsigned long) ()
#5 0x000000000916c9b8 in DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) ()
#6 0x00000000130c55f8 in Poco::Util::Application::run() ()
#7 0x000000000923c12b in DB::Server::run() ()
#8 0x000000000923343d in mainEntryClickHouseServer(int, char**) ()
#9 0x000000000916a845 in main ()
```
If I wait for 1 minute while GDB is attached, on detach Clickhouse is able to continue:
```
2020.09.14 16:17:49.984072 [ 11 ] {} <Error> void Coordination::ZooKeeper::receiveThread(): Code: 33, e.displayText() = DB::Exception: Cannot read all data. Bytes read: 0. Bytes expected: 4., Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x131849c5 in /usr/lib/debug/usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x9203b21 in /usr/lib/debug/usr/bin/clickhouse
2. DB::ReadBuffer::readStrict(char*, unsigned long) @ 0x9220150 in /usr/lib/debug/usr/bin/clickhouse
3. Coordination::ZooKeeper::receiveEvent() @ 0x10cbf5fe in /usr/lib/debug/usr/bin/clickhouse
4. Coordination::ZooKeeper::receiveThread() @ 0x10cc0b86 in /usr/lib/debug/usr/bin/clickhouse
5. ThreadFromGlobalPool::ThreadFromGlobalPool<Coordination::ZooKeeper::ZooKeeper(std::__1::vector<Coordination::ZooKeeper::Node, std::__1::allocator<Coordination::ZooKeeper::Node> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Poco::Timespan, Poco::Timespan, Poco::Timespan)::'lambda0'()>(Coordination::ZooKeeper::ZooKeeper(std::__1::vector<Coordination::ZooKeeper::Node, std::__1::allocator<Coordination::ZooKeeper::Node> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Poco::Timespan, Poco::Timespan, Poco::Timespan)::'lambda0'()&&)::'lambda'()::operator()() const @ 0x10cc0c45 in /usr/lib/debug/usr/bin/clickhouse
6. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x920ca54 in /usr/lib/debug/usr/bin/clickhouse
7. void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()> >(void*) @ 0x920b042 in /usr/lib/debug/usr/bin/clickhouse
8. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
9. __clone @ 0xf94cf in /lib/x86_64-linux-gnu/libc-2.28.so
(version 20.9.1.4585)
2020.09.14 16:17:49.984409 [ 11 ] {} <Error> void Coordination::ZooKeeper::finalize(bool, bool): Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):
0. Poco::IOException::IOException(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1318d525 in /usr/lib/debug/usr/bin/clickhouse
1. Poco::Net::NetException::NetException(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x13098001 in /usr/lib/debug/usr/bin/clickhouse
2. Poco::Net::SocketImpl::error(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) (.cold) @ 0x9098d77 in /usr/lib/debug/usr/bin/clickhouse
3. Poco::Net::SocketImpl::shutdown() @ 0x130ad720 in /usr/lib/debug/usr/bin/clickhouse
4. Coordination::ZooKeeper::finalize(bool, bool) @ 0x10cc0020 in /usr/lib/debug/usr/bin/clickhouse
5. Coordination::ZooKeeper::receiveThread() (.cold) @ 0x8f309a3 in /usr/lib/debug/usr/bin/clickhouse
6. ThreadFromGlobalPool::ThreadFromGlobalPool<Coordination::ZooKeeper::ZooKeeper(std::__1::vector<Coordination::ZooKeeper::Node, std::__1::allocator<Coordination::ZooKeeper::Node> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Poco::Timespan, Poco::Timespan, Poco::Timespan)::'lambda0'()>(Coordination::ZooKeeper::ZooKeeper(std::__1::vector<Coordination::ZooKeeper::Node, std::__1::allocator<Coordination::ZooKeeper::Node> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Poco::Timespan, Poco::Timespan, Poco::Timespan)::'lambda0'()&&)::'lambda'()::operator()() const @ 0x10cc0c45 in /usr/lib/debug/usr/bin/clickhouse
7. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x920ca54 in /usr/lib/debug/usr/bin/clickhouse
8. void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()> >(void*) @ 0x920b042 in /usr/lib/debug/usr/bin/clickhouse
9. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
10. __clone @ 0xf94cf in /lib/x86_64-linux-gnu/libc-2.28.so
(version 20.9.1.4585)
2020.09.14 16:17:49.984711 [ 6 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.09.14 16:17:49.985238 [ 6 ] {} <Error> Access(user directories): The <users>, <profiles> and <quotas> elements should be located in users config file: /etc/clickhouse-server/users.xml not in main config /etc/clickhouse-server/config.xml. Also note that you should place configuration changes to the appropriate *.d directory like 'users.d'.
2020.09.14 16:17:49.986050 [ 6 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
Processing configuration file '/etc/clickhouse-server/users.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/config.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/fragments.xml'.
Including configuration file '/etc/clickhouse-server/conf.d/fragments.xml'.
Processing configuration file '/etc/clickhouse-server/users.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/config.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/fragments.xml'.
Including configuration file '/etc/clickhouse-server/conf.d/fragments.xml'.
2020.09.14 16:17:50.006720 [ 6 ] {} <Trace> ZooKeeper: Initialized, hosts: clickhouse-jaeger3.zookeeper.foo.bar:2181,clickhouse-jaeger1.zookeeper.foo.bar:2181,clickhouse-jaeger2.zookeeper.foo.bar:2181
Include not found: networks
Include not found: networks
Include not found: networks
Include not found: networks
Include not found: networks
Include not found: networks
Include not found: networks
Include not found: networks
Include not found: networks
Include not found: networks
Saved preprocessed configuration to '/var/lib/clickhouse/preprocessed_configs/users.xml'.
2020.09.14 16:17:50.015128 [ 6 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2020.09.14 16:17:50.016655 [ 6 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2020.09.14 16:17:50.019022 [ 6 ] {} <Information> Application: Setting max_server_memory_usage was set to 168.81 GiB (187.57 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2020.09.14 16:17:50.019041 [ 6 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
```
The error here is similar to #8871, but the circumstances are different.
We are seeing this on both `v20.9.1.4585-prestable` and `v20.7:20.7.1.4189-testing`, v20.3 cluster looks ok.
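For completeness, a quick way to confirm that the ZooKeeper ensemble is reachable from the ClickHouse host (not part of the original report; the host name is taken from the log above and `nc` is assumed to be available):
```bash
# ZooKeeper should answer "imok" to the four-letter-word "ruok" command
echo ruok | nc clickhouse-jaeger1.zookeeper.foo.bar 2181
```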
Our Clickhouse runs in Docker, Zookeeper is v3.4.13. | https://github.com/ClickHouse/ClickHouse/issues/14814 | https://github.com/ClickHouse/ClickHouse/pull/14843 | e97c9b16a7105841a31aaf31fa6b5398c7cf6443 | 478c7309d45abe861d3d0b7d2c4f88006843c776 | 2020-09-14T17:43:07Z | c++ | 2020-09-22T10:44:39Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,795 | ["src/Core/MySQL/MySQLReplication.cpp", "src/Core/MySQL/MySQLReplication.h", "src/IO/MySQLBinlogEventReadBuffer.cpp", "src/IO/MySQLBinlogEventReadBuffer.h", "src/IO/tests/gtest_mysql_binlog_event_read_buffer.cpp", "src/IO/ya.make"] | done MaterializeMySQL,however clickhouse server abort,The error log is as follows | 2020.09.14 15:51:41.314157 [ 4218 ] {} <Fatal> BaseDaemon: ########################################
2020.09.14 15:51:41.314215 [ 4218 ] {} <Fatal> BaseDaemon: (version 20.9.1.4571 (official build), no build id) (from thread 9653) (no query) Received signal Segmentation fault (11)
2020.09.14 15:51:41.314247 [ 4218 ] {} <Fatal> BaseDaemon: Address: 0x100 Access: read. Address not mapped to object.
2020.09.14 15:51:41.314267 [ 4218 ] {} <Fatal> BaseDaemon: Stack trace: 0x165ce35f 0xe5ff0ad 0x157ffcbf 0x15801242 0x157fa426 0x1580bb91 0x15819267 0x15816cc1 0x157ef830 0x157efc36 0xe641547 0xe63fb83 0x7f948bfaee25 0x7f948b8ccbad
2020.09.14 15:51:41.314316 [ 4218 ] {} <Fatal> BaseDaemon: 3. sallocx @ 0x165ce35f in /usr/bin/clickhouse
2020.09.14 15:51:41.314360 [ 4218 ] {} <Fatal> BaseDaemon: 4. operator delete(void*, unsigned long) @ 0xe5ff0ad in /usr/bin/clickhouse
2020.09.14 15:51:41.314395 [ 4218 ] {} <Fatal> BaseDaemon: 5. std::__1::__shared_ptr_emplace<DB::MySQLReplication::FormatDescriptionEvent, std::__1::allocator<DB::MySQLReplication::FormatDescriptionEvent> >::__on_zero_shared() @ 0x157ffcbf in /usr/bin/clickhouse
2020.09.14 15:51:41.314423 [ 4218 ] {} <Fatal> BaseDaemon: 6. std::__1::shared_ptr<DB::MySQLReplication::EventBase>::~shared_ptr() @ 0x15801242 in /usr/bin/clickhouse
2020.09.14 15:51:41.314444 [ 4218 ] {} <Fatal> BaseDaemon: 7. DB::MySQLReplication::MySQLFlavor::readPayloadImpl(DB::ReadBuffer&) @ 0x157fa426 in /usr/bin/clickhouse
2020.09.14 15:51:41.314463 [ 4218 ] {} <Fatal> BaseDaemon: 8. DB::MySQLProtocol::IMySQLReadPacket::readPayload(DB::ReadBuffer&, unsigned char&) @ 0x1580bb91 in /usr/bin/clickhouse
2020.09.14 15:51:41.314483 [ 4218 ] {} <Fatal> BaseDaemon: 9. DB::MySQLProtocol::PacketEndpoint::tryReceivePacket(DB::MySQLProtocol::IMySQLReadPacket&, unsigned long) @ 0x15819267 in /usr/bin/clickhouse
2020.09.14 15:51:41.314501 [ 4218 ] {} <Fatal> BaseDaemon: 10. DB::MySQLClient::readOneBinlogEvent(unsigned long) @ 0x15816cc1 in /usr/bin/clickhouse
2020.09.14 15:51:41.314524 [ 4218 ] {} <Fatal> BaseDaemon: 11. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x157ef830 in /usr/bin/clickhouse
2020.09.14 15:51:41.314552 [ 4218 ] {} <Fatal> BaseDaemon: 12. ? @ 0x157efc36 in /usr/bin/clickhouse
2020.09.14 15:51:41.314574 [ 4218 ] {} <Fatal> BaseDaemon: 13. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xe641547 in /usr/bin/clickhouse
2020.09.14 15:51:41.314591 [ 4218 ] {} <Fatal> BaseDaemon: 14. ? @ 0xe63fb83 in /usr/bin/clickhouse
2020.09.14 15:51:41.314615 [ 4218 ] {} <Fatal> BaseDaemon: 15. start_thread @ 0x7e25 in /usr/lib64/libpthread-2.17.so
2020.09.14 15:51:41.314672 [ 4218 ] {} <Fatal> BaseDaemon: 16. clone @ 0xfebad in /usr/lib64/libc-2.17.so
2020.09.14 15:58:37.068724 [ 4635 ] {} <Warning> Application: Listen [::1]:8123 failed: Poco::Exception. Code: 1000, e.code() = 99, e.displayText() = Net Exception: Cannot assign requested address: [::1]:8123 (version 20.9.1.4571 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.09.14 15:58:37.069343 [ 4635 ] {} <Warning> Application: Listen [::1]:9000 failed: Poco::Exception. Code: 1000, e.code() = 99, e.displayText() = Net Exception: Cannot assign requested address: [::1]:9000 (version 20.9.1.4571 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.09.14 15:58:37.069890 [ 4635 ] {} <Warning> Application: Listen [::1]:9009 failed: Poco::Exception. Code: 1000, e.code() = 99, e.displayText() = Net Exception: Cannot assign requested address: [::1]:9009 (version 20.9.1.4571 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.09.14 15:58:37.070418 [ 4635 ] {} <Warning> Application: Listen [::1]:9004 failed: Poco::Exception. Code: 1000, e.code() = 99, e.displayText() = Net Exception: Cannot assign requested address: [::1]:9004 (version 20.9.1.4571 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.09.14 15:58:37.078140 [ 4719 ] {} <Fatal> BaseDaemon: ########################################
2020.09.14 15:58:37.078288 [ 4719 ] {} <Fatal> BaseDaemon: (version 20.9.1.4571 (official build), no build id) (from thread 4675) (no query) Received signal Segmentation fault (11)
2020.09.14 15:58:37.078359 [ 4719 ] {} <Fatal> BaseDaemon: Address: 0x100 Access: read. Address not mapped to object.
2020.09.14 15:58:37.078424 [ 4719 ] {} <Fatal> BaseDaemon: Stack trace: 0x165ce35f 0xe5ff0ad 0x157ffcbf 0x15801242 0x157fa426 0x1580bb91 0x15819267 0x15816cc1 0x157ef830 0x157efc36 0xe641547 0xe63fb83 0x7f43bada6e25 0x7f43ba6c4bad
2020.09.14 15:58:37.078541 [ 4719 ] {} <Fatal> BaseDaemon: 3. sallocx @ 0x165ce35f in /usr/bin/clickhouse
2020.09.14 15:58:37.078668 [ 4719 ] {} <Fatal> BaseDaemon: 4. operator delete(void*, unsigned long) @ 0xe5ff0ad in /usr/bin/clickhouse
2020.09.14 15:58:37.078783 [ 4719 ] {} <Fatal> BaseDaemon: 5. std::__1::__shared_ptr_emplace<DB::MySQLReplication::FormatDescriptionEvent, std::__1::allocator<DB::MySQLReplication::FormatDescriptionEvent> >::__on_zero_shared() @ 0x157ffcbf in /usr/bin/clickhouse
2020.09.14 15:58:37.078862 [ 4719 ] {} <Fatal> BaseDaemon: 6. std::__1::shared_ptr<DB::MySQLReplication::EventBase>::~shared_ptr() @ 0x15801242 in /usr/bin/clickhouse
2020.09.14 15:58:37.078951 [ 4719 ] {} <Fatal> BaseDaemon: 7. DB::MySQLReplication::MySQLFlavor::readPayloadImpl(DB::ReadBuffer&) @ 0x157fa426 in /usr/bin/clickhouse
2020.09.14 15:58:37.079019 [ 4719 ] {} <Fatal> BaseDaemon: 8. DB::MySQLProtocol::IMySQLReadPacket::readPayload(DB::ReadBuffer&, unsigned char&) @ 0x1580bb91 in /usr/bin/clickhouse
2020.09.14 15:58:37.079086 [ 4719 ] {} <Fatal> BaseDaemon: 9. DB::MySQLProtocol::PacketEndpoint::tryReceivePacket(DB::MySQLProtocol::IMySQLReadPacket&, unsigned long) @ 0x15819267 in /usr/bin/clickhouse
2020.09.14 15:58:37.079158 [ 4719 ] {} <Fatal> BaseDaemon: 10. DB::MySQLClient::readOneBinlogEvent(unsigned long) @ 0x15816cc1 in /usr/bin/clickhouse
2020.09.14 15:58:37.079226 [ 4719 ] {} <Fatal> BaseDaemon: 11. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x157ef830 in /usr/bin/clickhouse
2020.09.14 15:58:37.079287 [ 4719 ] {} <Fatal> BaseDaemon: 12. ? @ 0x157efc36 in /usr/bin/clickhouse
2020.09.14 15:58:37.079354 [ 4719 ] {} <Fatal> BaseDaemon: 13. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xe641547 in /usr/bin/clickhouse
2020.09.14 15:58:37.079411 [ 4719 ] {} <Fatal> BaseDaemon: 14. ? @ 0xe63fb83 in /usr/bin/clickhouse
2020.09.14 15:58:37.079480 [ 4719 ] {} <Fatal> BaseDaemon: 15. start_thread @ 0x7e25 in /usr/lib64/libpthread-2.17.so
2020.09.14 15:58:37.079556 [ 4719 ] {} <Fatal> BaseDaemon: 16. clone @ 0xfebad in /usr/lib64/libc-2.17.so
2020.09.14 16:00:21.810977 [ 4839 ] {} <Warning> Application: Listen [::1]:8123 failed: Poco::Exception. Code: 1000, e.code() = 99, e.displayText() = Net Exception: Cannot assign requested address: [::1]:8123 (version 20.9.1.4571 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.09.14 16:00:21.811611 [ 4839 ] {} <Warning> Application: Listen [::1]:9000 failed: Poco::Exception. Code: 1000, e.code() = 99, e.displayText() = Net Exception: Cannot assign requested address: [::1]:9000 (version 20.9.1.4571 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.09.14 16:00:21.812155 [ 4839 ] {} <Warning> Application: Listen [::1]:9009 failed: Poco::Exception. Code: 1000, e.code() = 99, e.displayText() = Net Exception: Cannot assign requested address: [::1]:9009 (version 20.9.1.4571 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.09.14 16:00:21.812696 [ 4839 ] {} <Warning> Application: Listen [::1]:9004 failed: Poco::Exception. Code: 1000, e.code() = 99, e.displayText() = Net Exception: Cannot assign requested address: [::1]:9004 (version 20.9.1.4571 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.09.14 16:00:21.819912 [ 4918 ] {} <Fatal> BaseDaemon: ########################################
2020.09.14 16:00:21.820107 [ 4918 ] {} <Fatal> BaseDaemon: (version 20.9.1.4571 (official build), no build id) (from thread 4847) (no query) Received signal Segmentation fault (11)
2020.09.14 16:00:21.820211 [ 4918 ] {} <Fatal> BaseDaemon: Address: 0x100 Access: read. Address not mapped to object.
2020.09.14 16:00:21.820282 [ 4918 ] {} <Fatal> BaseDaemon: Stack trace: 0x165ce35f 0xe5ff0ad 0x157ffcbf 0x15801242 0x157fa426 0x1580bb91 0x15819267 0x15816cc1 0x157ef830 0x157efc36 0xe641547 0xe63fb83 0x7f73130b3e25 0x7f73129d1bad
2020.09.14 16:00:21.820411 [ 4918 ] {} <Fatal> BaseDaemon: 3. sallocx @ 0x165ce35f in /usr/bin/clickhouse
2020.09.14 16:00:21.820508 [ 4918 ] {} <Fatal> BaseDaemon: 4. operator delete(void*, unsigned long) @ 0xe5ff0ad in /usr/bin/clickhouse
2020.09.14 16:00:21.820591 [ 4918 ] {} <Fatal> BaseDaemon: 5. std::__1::__shared_ptr_emplace<DB::MySQLReplication::FormatDescriptionEvent, std::__1::allocator<DB::MySQLReplication::FormatDescriptionEvent> >::__on_zero_shared() @ 0x157ffcbf in /usr/bin/clickhouse
2020.09.14 16:00:21.820678 [ 4918 ] {} <Fatal> BaseDaemon: 6. std::__1::shared_ptr<DB::MySQLReplication::EventBase>::~shared_ptr() @ 0x15801242 in /usr/bin/clickhouse
2020.09.14 16:00:21.820756 [ 4918 ] {} <Fatal> BaseDaemon: 7. DB::MySQLReplication::MySQLFlavor::readPayloadImpl(DB::ReadBuffer&) @ 0x157fa426 in /usr/bin/clickhouse
2020.09.14 16:00:21.820835 [ 4918 ] {} <Fatal> BaseDaemon: 8. DB::MySQLProtocol::IMySQLReadPacket::readPayload(DB::ReadBuffer&, unsigned char&) @ 0x1580bb91 in /usr/bin/clickhouse
2020.09.14 16:00:21.820904 [ 4918 ] {} <Fatal> BaseDaemon: 9. DB::MySQLProtocol::PacketEndpoint::tryReceivePacket(DB::MySQLProtocol::IMySQLReadPacket&, unsigned long) @ 0x15819267 in /usr/bin/clickhouse
2020.09.14 16:00:21.820970 [ 4918 ] {} <Fatal> BaseDaemon: 10. DB::MySQLClient::readOneBinlogEvent(unsigned long) @ 0x15816cc1 in /usr/bin/clickhouse
2020.09.14 16:00:21.821045 [ 4918 ] {} <Fatal> BaseDaemon: 11. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x157ef830 in /usr/bin/clickhouse
2020.09.14 16:00:21.821122 [ 4918 ] {} <Fatal> BaseDaemon: 12. ? @ 0x157efc36 in /usr/bin/clickhouse
2020.09.14 16:00:21.821189 [ 4918 ] {} <Fatal> BaseDaemon: 13. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xe641547 in /usr/bin/clickhouse
2020.09.14 16:00:21.821252 [ 4918 ] {} <Fatal> BaseDaemon: 14. ? @ 0xe63fb83 in /usr/bin/clickhouse
2020.09.14 16:00:21.821328 [ 4918 ] {} <Fatal> BaseDaemon: 15. start_thread @ 0x7e25 in /usr/lib64/libpthread-2.17.so
2020.09.14 16:00:21.821402 [ 4918 ] {} <Fatal> BaseDaemon: 16. clone @ 0xfebad in /usr/lib64/libc-2.17.so
| https://github.com/ClickHouse/ClickHouse/issues/14795 | https://github.com/ClickHouse/ClickHouse/pull/14852 | 652163c07c62bdd8724e109160b03016551b6c1a | 5a890d73777ed1be4da46dc30df4f2c03b53a9be | 2020-09-14T10:11:07Z | c++ | 2020-09-17T16:17:30Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,786 | ["src/Core/MySQL/MySQLReplication.cpp", "src/Core/MySQL/MySQLReplication.h", "src/IO/MySQLBinlogEventReadBuffer.cpp", "src/IO/MySQLBinlogEventReadBuffer.h", "src/IO/tests/gtest_mysql_binlog_event_read_buffer.cpp", "src/IO/ya.make"] | MaterializeMySQLSyncThread: Code: 32, e.displayText() = DB::Exception: Attempt to read after eof | **Describe the bug**
The bug may not be reproduced on each platform.
**How to reproduce**
* Set up a replication relationship between MySQL and ClickHouse (a setup sketch is given below)
* When the data copy has completed, wait a while and then stop the ClickHouse service
* Wait about half an hour, then start the ClickHouse service again; you will get an odd error, which may differ from run to run
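A minimal setup sketch for the first step (assumptions: 20.x-era syntax with the experimental `MaterializeMySQL` database engine; host, database and credential names are placeholders):
```sql
SET allow_experimental_database_materialize_mysql = 1;
CREATE DATABASE mysql_replica
ENGINE = MaterializeMySQL('mysql-host:3306', 'source_db', 'repl_user', 'repl_password');
```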
**Additional comment**
I strongly suspect that the MySQL replication protocol sends an abnormal message to ClickHouse, which the ClickHouse-side code is then unable to parse.
* Is it possible to print the abnormal message?
* Is there a mechanism to let MySQL resend the required binlog message if the fetched one is abnormal?
**Error message and/or stacktrace**
```
2020.09.14 13:14:28.493063 [ 4737 ] {} <Error> MaterializeMySQLSyncThread: Code: 1002, e.displayText()
= DB::Exception: ParseRow: Unhandled MySQL field type:0, Stack trace (when copying this message,
always include the lines below):
```
```
2020.09.14 13:14:04.557507 [ 4721 ] {} <Error> MaterializeMySQLSyncThread: Code: 32, e.displayText()
= DB::Exception: Attempt to read after eof, Stack trace (when copying this message, always include
the lines below):
```
```
2020.09.14 12:51:54.084817 [ 21290 ] {} <Error> MaterializeMySQLSyncThread: std::exception. Code: 10
01, type: std::length_error, e.what() = basic_string, Stack trace (when copying this message, always
include the lines below):
```
```
2020.09.14 12:26:44.455515 [ 20389 ] {} <Error> MaterializeMySQLSyncThread: Code: 62, e.displayText(
) = DB::Exception: Syntax error: failed at position 121 ('BE'): BE. Expected one of: RENAME DATABASE
, TRUNCATE, RENAME TABLE, DROP query, RENAME DICTIONARY, DETACH, RENAME query, DROP, create query, C
REATE, EXCHANGE TABLES, ALTER TABLE, alter query, Stack trace (when copying this message, always inc
lude the lines below):
```
```
2020.09.14 12:14:25.200499 [ 4794 ] {} <Error> MaterializeMySQLSyncThread: Code: 62, e.displayText()
= DB::Exception: Syntax error: failed at position 121 ('BEG'): BEG. Expected one of: RENAME DATABAS
E, TRUNCATE, RENAME TABLE, DROP query, RENAME DICTIONARY, DETACH, RENAME query, DROP, create query,
CREATE, EXCHANGE TABLES, ALTER TABLE, alter query, Stack trace (when copying this message, always in
clude the lines below):
``` | https://github.com/ClickHouse/ClickHouse/issues/14786 | https://github.com/ClickHouse/ClickHouse/pull/14852 | 652163c07c62bdd8724e109160b03016551b6c1a | 5a890d73777ed1be4da46dc30df4f2c03b53a9be | 2020-09-14T07:02:28Z | c++ | 2020-09-17T16:17:30Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,781 | ["src/Core/MySQL/MySQLReplication.cpp", "src/Core/MySQL/MySQLReplication.h", "src/IO/MySQLBinlogEventReadBuffer.cpp", "src/IO/MySQLBinlogEventReadBuffer.h", "src/IO/tests/gtest_mysql_binlog_event_read_buffer.cpp", "src/IO/ya.make"] | MaterializeMySQL Data inconsistency between MySQL and Clickhouse | **Describe the bug**
mysql:5.7.31
clickhouse:20.10.1.4635
I found a data inconsistency between MySQL and ClickHouse after executing an UPDATE in MySQL in a single transaction touching 439129 rows.
**How to reproduce**
Step 1:
I check that the row counts are consistent between MySQL and ClickHouse:
```
mysql> select count(1) from mobile_belong
-> ;
+----------+
| count(1) |
+----------+
| 439129 |
+----------+
test-1-118.raipeng.com :) select count(1) from mobile_belong;
SELECT count(1)
FROM mobile_belong
┌─count(1)─┐
│ 439129 │
└──────────┘
1 rows in set. Elapsed: 0.043 sec. Processed 439.13 thousand rows, 10.98 MB (10.30 million rows/s., 257.46 MB/s.)
```
Step 2:
I executed the following UPDATE in MySQL:
```
mysql> update mobile_belong set province='北京',city='北京';
Query OK, 419371 rows affected (1.98 sec)
Rows matched: 439129 Changed: 419371 Warnings: 0
```
Step 3:
Checking the data in MySQL and ClickHouse, I found that in MySQL all 439129 rows now match the new values, but in ClickHouse only 438459 rows do; the remaining 670 rows (439129 - 438459) were not updated.
```
mysql> select count(1) from mobile_belong where province='北京' and city='北京';
+----------+
| count(1) |
+----------+
| 439129 |
+----------+
1 row in set (0.16 sec)
test-1-118.raipeng.com :) select count(1) from mobile_belong where province='北京' and city='北京';
SELECT count(1)
FROM mobile_belong
WHERE (province = '北京') AND (city = '北京')
┌─count(1)─┐
│ 438459 │
└──────────┘
1 rows in set. Elapsed: 0.068 sec. Processed 523.68 thousand rows, 29.88 MB (7.66 million rows/s., 436.86 MB/s.)
```
| https://github.com/ClickHouse/ClickHouse/issues/14781 | https://github.com/ClickHouse/ClickHouse/pull/14852 | 652163c07c62bdd8724e109160b03016551b6c1a | 5a890d73777ed1be4da46dc30df4f2c03b53a9be | 2020-09-14T01:59:35Z | c++ | 2020-09-17T16:17:30Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,778 | ["src/Functions/array/arrayCompact.cpp", "tests/queries/0_stateless/01020_function_array_compact.sql", "tests/queries/0_stateless/01025_array_compact_generic.reference", "tests/queries/0_stateless/01025_array_compact_generic.sql"] | arrayCompact for array with tuples | Currently arrayCompact doesn't work with tuples like other array functions arraySort or arrayFilter.
Please consider adding such functionality to arrayCompact.
We need something like this: `SELECT arrayCompact((x, y) -> y, arr1, arr2) AS v`
Current behavior:
```sql
SELECT
    [('a', 1), ('a', 2), ('c', 3), ('a', 2)] AS arr,
    arrayCompact(x -> (x.1), arr) AS v,
    [('a', 1), ('c', 3), ('a', 2)] AS expected_v
```

| https://github.com/ClickHouse/ClickHouse/issues/14778 | https://github.com/ClickHouse/ClickHouse/pull/34795 | aea7bfb59aa23432b7eb6f69c4ce158c40f65c11 | 7d01516202152c8d60d4fed6b72dad67357d337f | 2020-09-13T21:20:15Z | c++ | 2022-03-03T18:25:43Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,677 | ["base/common/throwError.h", "base/common/types.h", "base/common/wide_integer.h", "base/common/wide_integer_impl.h", "base/common/wide_integer_to_string.h", "src/IO/WriteHelpers.h", "tests/queries/0_stateless/01475_fix_bigint_shift.reference", "tests/queries/0_stateless/01475_fix_bigint_shift.sql"] | (unreleased) debug assertion in some arithmetic operations on UInt256 | `SELECT bitShiftLeft(toInt256(-2), number)`
`shift left for negative lhsbers is underfined!`
See https://clickhouse-test-reports.s3.yandex.net/14525/44726c37c3003ec8112dbf32efa350bd595163b3/fuzzer/report.html#fail1 | https://github.com/ClickHouse/ClickHouse/issues/14677 | https://github.com/ClickHouse/ClickHouse/pull/14697 | 3113aa6cfefbf5eee6de6541ffd1f20596f8f8d2 | d274125c74c4784b461e938a92c0afd2cb2e9b41 | 2020-09-10T03:06:54Z | c++ | 2020-09-14T11:56:43Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,648 | ["src/Functions/FunctionUnixTimestamp64.h", "tests/queries/0_stateless/01277_fromUnixTimestamp64.reference", "tests/queries/0_stateless/01277_fromUnixTimestamp64.sql"] | fromUnixTimestamp64 functions expect Int64 and not UInt64 | SELECT toYYYYMMDD(fromUnixTimestamp64Micro(1599672041963889))
Received exception from server (version 20.9.1):
Code: 43. DB::Exception: Received from localhost:9000.
DB::Exception: Illegal type of argument #0 'value' of function fromUnixTimestamp64Micro, expected Int64, got UInt64. | https://github.com/ClickHouse/ClickHouse/issues/14648 | https://github.com/ClickHouse/ClickHouse/pull/33505 | 8c9266b24c0bb434fd144ca4dad9744109b542b4 | 5a4ad04ae6c182e8bc6e8bacf12409b167567652 | 2020-09-09T17:25:30Z | c++ | 2022-01-27T19:54:53Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,614 | ["src/Storages/tests/gtest_transform_query_for_external_database.cpp", "src/Storages/transformQueryForExternalDatabase.cpp", "tests/integration/test_mysql_database_engine/test.py"] | Joining MySQL table and MergeTree table causes 'Missing columns' | **Describe the bug**
If you query a MySQL engine table joined with a MergeTree engine table, you cannot use any columns of the latter (for example, in the WHERE clause).
**How to reproduce**
* ClickHouse server 20.8.2
* ClickHouse client version 20.8.2.3 (official build).
```sql
CREATE TABLE mysql_table
(
`id` UInt64,
`name` Nullable(String)
)
ENGINE = MySQL('test:3306', 'test', 'test', 'test', 'test');
```
```sql
CREATE TABLE ch_table
(
`id` UInt64,
`mysql_item_id` UInt64,
`d` DateTime
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(d)
ORDER BY d;
```
Query
```sql
SELECT *
FROM mysql_table AS t_mysql_table
LEFT JOIN ch_table AS t_ch_table
ON t_ch_table.mysql_item_id = t_mysql_table.id
WHERE t_ch_table.id > 100
```
**Expected behavior**
Returns empty result without errors.
**Error message and/or stacktrace**
```
Code: 47. DB::Exception: Received from test:9000. DB::Exception: Missing columns: 't_ch_table.id' while processing query: 't_ch_table.id > 100', required columns: 't_ch_table.id', source columns: 'id', 'name'.
```
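A possible workaround sketch (not from the original report): hiding the external table behind a subquery keeps the WHERE clause from being pushed down to MySQL, which appears to avoid the error:
```sql
SELECT *
FROM (SELECT * FROM mysql_table) AS t_mysql_table
LEFT JOIN ch_table AS t_ch_table
    ON t_ch_table.mysql_item_id = t_mysql_table.id
WHERE t_ch_table.id > 100
```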
| https://github.com/ClickHouse/ClickHouse/issues/14614 | https://github.com/ClickHouse/ClickHouse/pull/21640 | aa3e8fc4f01a80ef93996d19fa17469a825f8c37 | 0171ab95fa922e79511f2be2b05ec6af662fbb37 | 2020-09-09T09:07:01Z | c++ | 2021-03-29T13:14:10Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,613 | ["docs/en/sql-reference/functions/ext-dict-functions.md"] | dictGet gives error External dictionary not found | Select from `system.dictionaries` shows my external odbc-sourced dictionary with status `LOADED`.
I can SELECT from that dictionary successfully.
However, running `dictGet()` on that dictionary says it was not found.
> Code: 36, e.displayText() = DB::Exception: external dictionary 'my_d' not found (version 20.6.3.28 (official build))
```sql
SELECT dictGet('my_d', 'name', tuple(toUUID('f724ff3c-3302-4abd-b403-b50a4d00e66a')));
```
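A note that is not part of the original report: since the dictionary (defined below) is created with a DDL query, `dictGet` generally has to be given the database-qualified name; assuming the dictionary lives in the `default` database, something like:
```sql
SELECT dictGet('default.my_d', 'name', tuple(toUUID('f724ff3c-3302-4abd-b403-b50a4d00e66a')));
```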
```sql
CREATE DICTIONARY IF NOT EXISTS my_d ON CLUSTER '{cluster}' (
id UUID,
name String
)
PRIMARY KEY id
SOURCE(ODBC(connection_string 'DSN=pg_conn' TABLE 't1'))
LAYOUT(complex_key_hashed())
LIFETIME(MIN 1800 MAX 3600);
``` | https://github.com/ClickHouse/ClickHouse/issues/14613 | https://github.com/ClickHouse/ClickHouse/pull/14620 | cdb72add3534f33912bf5a77612aae7e1d9073be | 9acb8fe196b1b04a20d1f001213e5cc8ad66b487 | 2020-09-09T08:25:15Z | c++ | 2020-09-09T16:59:58Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,589 | ["src/Common/MemoryTracker.cpp", "src/Common/ProfileEvents.cpp"] | New "Query Memory Limit Exceeded" event in system.events | Would love to have an event that tracks the number of times this happens so that we can adjust resources accordingly.
Example log:
```
Memory limit (for query) exceeded: would use 3.47 GiB (attempt to allocate chunk of 1073758016 bytes), maximum: 2.79 GiB
``` | https://github.com/ClickHouse/ClickHouse/issues/14589 | https://github.com/ClickHouse/ClickHouse/pull/14647 | 92b746116a9661523a7a8e0281e77149b70d0bae | e65e29d5376923deaac0d0b00643d33fd0c21dfb | 2020-09-08T15:58:19Z | c++ | 2020-09-10T03:12:39Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,571 | ["src/Storages/Distributed/DistributedBlockOutputStream.cpp", "src/Storages/StorageDistributed.cpp", "tests/queries/0_stateless/01639_distributed_sync_insert_zero_rows.reference", "tests/queries/0_stateless/01639_distributed_sync_insert_zero_rows.sql", "tests/queries/0_stateless/arcadia_skip_list.txt"] | INSERT to Distributed sent to a shard with zero weight | No rows are written, though. Still, this may be problematic when this insert fails due to the `too many parts` error, which leads to the parent insert also failing. I haven't reproduced this failure exactly (complicated to do on localhost), but seen it happen for a ClickHouse user in Yandex.
Apparently this only happens with `insert_distributed_sync = 1`.
Cluster config:
```
<weighted>
<shard>
<weight>100</weight>
<replica>
<host>127.0.0.2</host>
<port>9000</port>
</replica>
</shard>
<shard>
<weight>0</weight>
<replica>
<host>127.0.0.3</host>
<port>9000</port>
</replica>
</shard>
<shard>
<weight>100</weight>
<replica>
<host>127.0.0.4</host>
<port>9000</port>
</replica>
</shard>
</weighted>
```
Queries:
```
create table t_local (a int) engine MergeTree order by (a);
create table t_dist (a int) engine Distributed(weighted, 'default', 't_local', cityHash64(a));
set insert_distributed_sync = 1;
select written_rows, query_id from system.query_log where initial_query_id = '4c93852e-bdcb-436a-9ad3-4953ccedfcdc' and type != 1;
```
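The INSERT statement itself is not shown above; judging by the `written_rows` totals below, it was presumably something along the lines of the following (an assumption, not part of the original report):
```sql
insert into t_dist select number from numbers(1000000);
```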
In the query log we see that the query is sent to all three shards, including the one with zero weight, but no rows are sent there.
```
SELECT
written_rows,
query_id
FROM system.query_log
WHERE (initial_query_id = '4c93852e-bdcb-436a-9ad3-4953ccedfcdc') AND (type != 1)
┌─written_rows─┬─query_id─────────────────────────────┐
│ 0 │ 177294a7-5612-45ef-bb57-2b36b73d709f │
│ 499720 │ 02d724ae-5840-4805-9e5f-747a85332545 │
│ 500280 │ a4692444-5409-4aec-a821-d9095e912ca1 │
│ 1000000 │ 4c93852e-bdcb-436a-9ad3-4953ccedfcdc │
└──────────────┴──────────────────────────────────────┘
```
For more fun, try the `Log` engine for `t_local` -- the insert deadlocks: https://github.com/ClickHouse/ClickHouse/issues/14570 | https://github.com/ClickHouse/ClickHouse/issues/14571 | https://github.com/ClickHouse/ClickHouse/pull/18775 | fdff71a36fd4d2f051b303659c80914c78e9f74a | 417e685830d424d99edb5f87d677d69c1cbc991a | 2020-09-08T05:10:52Z | c++ | 2021-01-06T17:06:35Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,560 | ["src/Common/MemoryTracker.cpp", "src/Common/MemoryTracker.h", "src/Interpreters/SystemLog.h", "src/Interpreters/executeQuery.cpp", "src/Storages/MergeTree/IMergeTreeDataPart.cpp", "src/Storages/MergeTree/MergeTreeDataPartWriterOnDisk.cpp", "src/Storages/MergeTree/MergeTreeMarksLoader.cpp", "src/Storages/StorageBuffer.cpp", "tests/queries/0_stateless/01529_bad_memory_tracking.reference", "tests/queries/0_stateless/01529_bad_memory_tracking.sh"] | A query from `generateRandom` function may lead to OOM | **Describe the bug**
`SELECT i FROM generateRandom('i Array(Int8)', 1048575, 10, 1048577) LIMIT 1048575`
Memory limit should be respected. | https://github.com/ClickHouse/ClickHouse/issues/14560 | https://github.com/ClickHouse/ClickHouse/pull/16206 | c53f59decebc38c5003d66167ffd12a9236746f2 | 0b7430dda1409c0d9d71eceab2d9ce7abf6693e0 | 2020-09-07T22:20:23Z | c++ | 2020-10-21T11:34:22Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,531 | ["src/Storages/MergeTree/MergeTreeBlockReadUtils.cpp", "tests/queries/0_stateless/01084_defaults_on_aliases.reference", "tests/queries/0_stateless/01084_defaults_on_aliases.sql", "tests/queries/0_stateless/01497_alias_on_default_array.reference", "tests/queries/0_stateless/01497_alias_on_default_array.sql"] | 'Missing columns' when column DEFAULT value refers to ALIAS columns | ```sql
CREATE TABLE test_new_col
(
`_csv` String,
`csv_as_array` Array(String) ALIAS splitByChar(';',_csv),
`csv_col1` String DEFAULT csv_as_array[1],
`csv_col2` String DEFAULT csv_as_array[2]
)
ENGINE = MergeTree
ORDER BY tuple();
INSERT INTO test_new_col (_csv) values
('a1;b1;c1;d1'),
('a2;b2;c2;d2'),
('a3;b3;c3;d3');
SELECT csv_col1, csv_col2 FROM test_new_col;
┌─csv_col1─┬─csv_col2─┐
│ a1 │ b1 │
│ a2 │ b2 │
│ a3 │ b3 │
└──────────┴──────────┘
ALTER TABLE test_new_col ADD COLUMN `csv_col3` String DEFAULT csv_as_array[3];
SELECT csv_col3 FROM test_new_col
→ Progress: 0.00 rows, 0.00 B (0.00 rows/s., 0.00 B/s.)
Received exception from server (version 20.3.17):
Code: 47. DB::Exception: Received from localhost:9000. DB::Exception: Missing columns: '_csv' while processing query: 'splitByChar(';', _csv) AS csv_as_array, CAST(csv_as_array[3], 'String') AS csv_col3', required columns: '_csv', source columns: 'csv_col1': (while reading from part /var/lib/clickhouse/data/default/test_new_col/all_1_1_0/): While executing MergeTreeThread. Stack trace:
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1059b460 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9972d in /usr/bin/clickhouse
2. ? @ 0xd536aa8 in /usr/bin/clickhouse
3. DB::SyntaxAnalyzer::analyze(std::__1::shared_ptr<DB::IAST>&, DB::NamesAndTypesList const&, std::__1::shared_ptr<DB::IStorage>) const @ 0xd5316c3 in /usr/bin/clickhouse
4. ? @ 0xd5acb8b in /usr/bin/clickhouse
5. DB::evaluateMissingDefaults(DB::Block&, DB::NamesAndTypesList const&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, DB::ColumnDefault, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, DB::ColumnDefault> > > const&, DB::Context const&, bool) @ 0xd5ad69d in /usr/bin/clickhouse
6. DB::IMergeTreeReader::evaluateMissingDefaults(DB::Block, std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0xda7e62e in /usr/bin/clickhouse
7. DB::MergeTreeRangeReader::read(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0xdaaf969 in /usr/bin/clickhouse
8. DB::MergeTreeBaseSelectProcessor::readFromPartImpl() @ 0xdaa6e8d in /usr/bin/clickhouse
9. DB::MergeTreeBaseSelectProcessor::generate() @ 0xdaa79c7 in /usr/bin/clickhouse
10. DB::ISource::work() @ 0xdbea94b in /usr/bin/clickhouse
11. DB::SourceWithProgress::work() @ 0xdf43827 in /usr/bin/clickhouse
12. ? @ 0xdc25a21 in /usr/bin/clickhouse
13. DB::PipelineExecutor::executeSingleThread(unsigned long, unsigned long) @ 0xdc29bad in /usr/bin/clickhouse
14. DB::PipelineExecutor::executeImpl(unsigned long) @ 0xdc2bc58 in /usr/bin/clickhouse
15. DB::PipelineExecutor::execute(unsigned long) @ 0xdc2be25 in /usr/bin/clickhouse
16. ? @ 0x9072357 in /usr/bin/clickhouse
17. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbde77 in /usr/bin/clickhouse
18. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbe4f8 in /usr/bin/clickhouse
19. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbd387 in /usr/bin/clickhouse
20. ? @ 0x8fbb7d3 in /usr/bin/clickhouse
21. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
22. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
0 rows in set. Elapsed: 0.106 sec.
```
But during merges, the default is calculated properly, and after the column is materialized it works properly:
```sql
OPTIMIZE TABLE test_new_col FINAL;
SELECT csv_col3 FROM test_new_col
┌─csv_col3─┐
│ c1 │
│ c2 │
│ c3 │
└──────────┘
```
But it doesn't work at all when one alias column refers to another:
```sql
ALTER TABLE test_new_col ADD COLUMN `csv_col4` String ALIAS csv_as_array[4];
SELECT csv_col4 FROM test_new_col;
```
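A workaround sketch (not part of the original report): defining the DEFAULT directly in terms of the physical column, instead of going through the alias, appears to avoid the error, e.g.:
```sql
ALTER TABLE test_new_col ADD COLUMN `csv_col3_direct` String DEFAULT splitByChar(';', _csv)[3];
SELECT csv_col3_direct FROM test_new_col;
```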
/cc @alesapin | https://github.com/ClickHouse/ClickHouse/issues/14531 | https://github.com/ClickHouse/ClickHouse/pull/14845 | a67986791ffa9f8b1942f7178445de880f3581ac | 73544a37819567b3fb868d7e071da82d51c3bb06 | 2020-09-07T08:04:53Z | c++ | 2020-09-17T07:02:39Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,489 | ["programs/odbc-bridge/ODBCBridge.cpp", "src/Common/XDBCBridgeHelper.h", "src/Dictionaries/ExternalQueryBuilder.h", "src/IO/ReadWriteBufferFromHTTP.h"] | "clickhouse-odbc-bridge is not responding" during dictionary initialization: program startup sequence problems | docker version: 1.13.1
clickhouse version: 20.3.11.97
system: centos7
The **/docker-entrypoint-initdb.d/** path contains a SQL file, and that SQL creates a dictionary with the **ODBC** source; when the **_entrypoint.sh_** script is executed, there is a certain probability that the dictionary creation fails: `Code: 410. DB::Exception: Received from 127.0.0.1:9000. DB::Exception: ODBCBridgeHelper: clickhouse-odbc-bridge is not responding.`
Only after the clickhouse-odbc-bridge process has completely started can the ODBC dictionary be created, so the creation currently has to be re-run manually.
The [ODBC documentation](https://clickhouse.tech/docs/en/engines/table-engines/integrations/odbc/) says `ClickHouse automatically starts clickhouse-odbc-bridge when it is required`, but what is this trigger mechanism? Can the above problem be solved in entrypoint.sh?
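One possible mitigation sketch (not from the original report): a wrapper script run from the init directory that simply retries the schema file until the bridge-backed dictionary can be created; the schema path and retry count are placeholders:
```bash
#!/bin/bash
# keep re-running the schema until the odbc-bridge-backed dictionary can actually be created
for attempt in $(seq 1 30); do
    if clickhouse-client --multiquery < /docker-entrypoint-initdb.d/schema.dnsmon.sql; then
        break
    fi
    echo "clickhouse-odbc-bridge not ready yet, retrying (attempt $attempt)..."
    sleep 2
done
```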
**entrypoint.sh log:**
```
Recreating clickhouse ... done
Attaching to clickhouse
clickhouse | Processing configuration file '/etc/clickhouse-server/config.xml'.
clickhouse | Merging configuration file '/etc/clickhouse-server/config.d/config.xml'.
clickhouse | Include not found: macros
clickhouse | Include not found: clickhouse_compression
clickhouse | Logging error to /var/log/clickhouse-server/clickhouse-server.log
clickhouse | Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
clickhouse |
clickhouse | /entrypoint.sh: running /docker-entrypoint-initdb.d/schema.dnsmon.sql
clickhouse | Received exception from server (version 20.3.12):
clickhouse | Code: 410. DB::Exception: Received from localhost:9000. DB::Exception: ODBCBridgeHelper: clickhouse-odbc-bridge is not responding.
clickhouse |
clickhouse |
clickhouse | Processing configuration file '/etc/clickhouse-server/config.xml'.
clickhouse | Merging configuration file '/etc/clickhouse-server/config.d/config.xml'.
clickhouse | Include not found: macros
clickhouse | Include not found: clickhouse_compression
clickhouse | Logging error to /var/log/clickhouse-server/clickhouse-server.log
clickhouse | Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
```
**clickhouse-odbc-bridge.log:**
```
2020.09.04 17:40:30.025145 [ 121 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.09.04 17:40:30.037783 [ 121 ] {} <Debug> ApplicationStartup: Initializing subsystem: Logging Subsystem
2020.09.04 17:40:30.037858 [ 121 ] {} <Information> ApplicationStartup: Starting up
2020.09.04 17:40:30.039492 [ 121 ] {} <Information> ApplicationStartup: Listening http://[::1]:9018
2020.09.04 17:40:31.234435 [ 122 ] {} <Information> Application: Received termination signal (Terminated)
2020.09.04 17:40:31.234514 [ 121 ] {} <Debug> ApplicationStartup: Received termination signal.
2020.09.04 17:40:31.234534 [ 121 ] {} <Debug> ApplicationStartup: Waiting for current connections to close.
2020.09.04 17:40:31.292217 [ 121 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.09.04 17:40:31.292406 [ 122 ] {} <Information> BaseDaemon: Stop SignalListener thread
```
| https://github.com/ClickHouse/ClickHouse/issues/14489 | https://github.com/ClickHouse/ClickHouse/pull/18278 | 850d584d3f0afd50353f2612bbc649a08dbc0979 | fa68af02d7863eaf62635feb51ebaae4433994c0 | 2020-09-04T09:12:54Z | c++ | 2020-12-21T07:18:04Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,476 | ["src/DataTypes/IDataType.h", "src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/01837_cast_to_array_from_empty_array.reference", "tests/queries/0_stateless/01837_cast_to_array_from_empty_array.sql"] | Can't create VIEW from select query | **Describe the unexpected behaviour**
Expected to be able to create a VIEW from any SELECT query, but it doesn't work,
even though the SELECT query itself is correct and runs fine on its own.
Version: 20.4.5.36
**How to reproduce**
```sql
CREATE VIEW tempView AS
WITH(SELECT groupArray(a) FROM (SELECT [1, 2] AS a)) AS aaa
SELECT aaa
```
**Error message and/or stacktrace**
```
Code: 53, e.displayText() = DB::Exception: CAST AS Array can only be performed between same-dimensional array types or from String
```
| https://github.com/ClickHouse/ClickHouse/issues/14476 | https://github.com/ClickHouse/ClickHouse/pull/23456 | a2170b7818013691691bb3814f3e54f79a296b61 | 8134c270a2f468deaa6534be8765588cd61b1076 | 2020-09-04T07:51:32Z | c++ | 2021-04-22T05:41:35Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,447 | ["src/Common/Macros.cpp", "src/Storages/MergeTree/registerStorageMergeTree.cpp"] | Misleading exception message on creating a table with double-quoted arguments | **Describe the bug**
A clear and concise description of what works not as it is supposed to.
```
CREATE TABLE table_repl
(
number UInt32
)
ENGINE = ReplicatedMergeTree("/clickhouse/{cluster}/tables/{shard}/table_repl", "{replica}")
PARTITION BY intDiv(number, 1000)
ORDER BY number
```
yields an Exception:
```
DB::Exception: No macro 'uuid' in config while processing substitutions in '/clickhouse/tables/{uuid}/{shard}' at '20' or macro is not supported here.
```
**How to reproduce**
* Which ClickHouse server version to use: 20.9.1
**Expected**
The error is that the query should contain only single-quoted strings, but the message is quite misleading.
So the exception should be like "You're using double-quoted strings, which is prohibited".
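For reference (not part of the original report), the same statement is accepted once the engine arguments use single-quoted string literals, assuming the `{cluster}`, `{shard}` and `{replica}` macros are defined in the server configuration:
```sql
CREATE TABLE table_repl
(
    number UInt32
)
ENGINE = ReplicatedMergeTree('/clickhouse/{cluster}/tables/{shard}/table_repl', '{replica}')
PARTITION BY intDiv(number, 1000)
ORDER BY number
```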
| https://github.com/ClickHouse/ClickHouse/issues/14447 | https://github.com/ClickHouse/ClickHouse/pull/14704 | e73ca17ad3f689d2ae5ee1ed8a7b11f7b7e113ac | 1f47b1ff6bb1f6f266aa3482466420a4c355d7f6 | 2020-09-03T16:55:11Z | c++ | 2020-09-11T09:45:32Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,441 | ["src/Databases/DatabaseFactory.cpp", "src/Databases/MySQL/DatabaseMySQL.cpp", "src/Databases/MySQL/DatabaseMySQL.h", "tests/integration/test_mysql_database_engine/test.py"] | ClickHouse fails to start with 'Cannot create MySQL database' when MySQL is not available | I had one mysql engine connection in clickhouse.
I shut down the MySQL AWS RDS instance.
I updated ClickHouse using apt-get.
Then I restarted ClickHouse. ClickHouse won't start.
`ClickHouse client version 20.7.2.30 (official build).`
```
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 9.13 (stretch)
Release: 9.13
Codename: stretch
```
ClickHouse server log:
```
2020.09.03 12:31:01.937116 [ 947 ] {} <Debug> Application: Shut down storages.
2020.09.03 12:31:01.937566 [ 947 ] {} <Debug> Application: Destroyed global context.
2020.09.03 12:31:01.937693 [ 947 ] {} <Error> Application: DB::Exception: Cannot create MySQL database, because Poco::Exception. Code: 1000, e.code() = 2002, e.displayText() = mysqlxx::ConnectionFailed: Can't connect to MySQL server on '.....rds.amazonaws.com' (110) ((nullptr):0), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1271b990 in /usr/bin/clickhouse
1. ? @ 0x12596556 in /usr/bin/clickhouse
2. mysqlxx::Pool::allocConnection(bool) @ 0x1259fb31 in /usr/bin/clickhouse
3. mysqlxx::Pool::initialize() @ 0x1259fedd in /usr/bin/clickhouse
4. mysqlxx::Pool::get() @ 0x125a00e8 in /usr/bin/clickhouse
5. DB::DatabaseMySQL::fetchTablesWithModificationTime() const @ 0xf46df4a in /usr/bin/clickhouse
6. DB::DatabaseMySQL::fetchTablesIntoLocalCache() const @ 0xf470b57 in /usr/bin/clickhouse
7. DB::DatabaseMySQL::empty() const @ 0xf470c15 in /usr/bin/clickhouse
8. DB::DatabaseFactory::getImpl(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&) @ 0xf462839 in /usr/bin/clickhouse
9. DB::DatabaseFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&) @ 0xf463052 in /usr/bin/clickhouse
10. DB::InterpreterCreateQuery::createDatabase(DB::ASTCreateQuery&) @ 0xf455fc0 in /usr/bin/clickhouse
11. DB::InterpreterCreateQuery::execute() @ 0xf456f5e in /usr/bin/clickhouse
12. ? @ 0xf8338fa in /usr/bin/clickhouse
13. DB::loadMetadata(DB::Context&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xf83441b in /usr/bin/clickhouse
14. DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xa495dd7 in /usr/bin/clickhouse
15. Poco::Util::Application::run() @ 0x1264a9e7 in /usr/bin/clickhouse
16. DB::Server::run() @ 0xa4558d9 in /usr/bin/clickhouse
17. mainEntryClickHouseServer(int, char**) @ 0xa44a993 in /usr/bin/clickhouse
18. main @ 0xa3dd919 in /usr/bin/clickhouse
19. __libc_start_main @ 0x202e1 in /lib/x86_64-linux-gnu/libc-2.24.so
20. _start @ 0xa3dd02e in /usr/bin/clickhouse
(version 20.7.2.30 (official build))
2020.09.03 12:31:01.937740 [ 947 ] {} <Informa
```
| https://github.com/ClickHouse/ClickHouse/issues/14441 | https://github.com/ClickHouse/ClickHouse/pull/32802 | e929175dae5cdab3536826abeb45ec35827d5043 | db01e94f6608ca0f42fc8245a57559dd0e37a32d | 2020-09-03T12:38:35Z | c++ | 2021-12-17T05:57:15Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,314 | ["src/Dictionaries/ClickHouseDictionarySource.cpp", "src/Dictionaries/ClickHouseDictionarySource.h", "tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.reference", "tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.sql", "tests/queries/skip_list.json"] | It is possible to create a dictionary that looks at itself. Query from this dictionary will hang. | ```
milovidov-desktop :) CREATE DICTIONARY dict
:-] (
:-] k1 UInt64,
:-] k2 UInt8,
:-] value String
:-] )
:-] PRIMARY KEY k1, k2
:-] SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' TABLE 'dict'))
:-] LIFETIME(1000)
:-] LAYOUT(COMPLEX_KEY_HASHED());
CREATE DICTIONARY dict
(
`k1` UInt64,
`k2` UInt8,
`value` String
)
PRIMARY KEY k1, k2
SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' TABLE 'dict'))
LIFETIME(MIN 0 MAX 1000)
LAYOUT(COMPLEX_KEY_HASHED())
Ok.
0 rows in set. Elapsed: 0.001 sec.
milovidov-desktop :) SELECT dictGetString('default.dict', 'third_column', (1, 2, 3, 4, 5, 6, 7, 8, 9, 10));
SELECT dictGetString('default.dict', 'third_column', (1, 2, 3, 4, 5, 6, 7, 8, 9, 10))
``` | https://github.com/ClickHouse/ClickHouse/issues/14314 | https://github.com/ClickHouse/ClickHouse/pull/22479 | 751222743191c243f7f59cce08e6e9a8914f579b | 0bfb429c42bc84fa20c9706f99981e44cde07473 | 2020-08-31T23:01:34Z | c++ | 2021-04-04T10:29:17Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,298 | ["debian/clickhouse-server.init"] | Centos/8: sysV vs systemd mess. Service status is 'stopped' although it is running | **Description**
Whenever I run `sudo service clickhouse-server status` I get `clickhouse-server service is stopped`, although the server is actually running.
I can verify that it is running with the following:
```
$ ps -ef | grep clickhouse
clickho+ 2646 1 0 20:00 ? 00:00:01 clickhouse-server --daemon --pid-file=/var/run/clickhouse-server/clickhouse-server.pid --config-file=/etc/clickhouse-server/config.xml
```
or
```
$ sudo lsof -i:9000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
clickhous 2646 clickhouse 54u IPv6 60885 0t0 TCP *:cslistener (LISTEN)
```
or simply, I can open `clickhouse-client` with no problems.
It is not just about the displayed status: I cannot stop the server with `sudo service clickhouse-server stop` and must kill the process manually, and I cannot restart it with `sudo service clickhouse-server restart` because the already-running process is still holding the resources. A quick way to check which init mechanism actually tracks the service is sketched below.
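Here is my own diagnostic sketch (not part of the original report) for checking which init mechanism thinks it owns the service; the PID file path comes from the `ps` output above:
```
$ sudo systemctl status clickhouse-server                      # does systemd track a unit for it?
$ sudo cat /var/run/clickhouse-server/clickhouse-server.pid    # PID recorded by the SysV init script
$ sudo kill -0 "$(cat /var/run/clickhouse-server/clickhouse-server.pid)" && echo "PID file points at a live process"
```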
**Environment**
I am using a fresh image of CentOS 8.2 with ClickHouse 20.7.2.30 installed.
Here is the configuration that I am using:
```
<yandex>
<path>/mnt/clickhouse/</path>
<tmp_path>/mnt/clickhouse/tmp/</tmp_path>
<user_files_path>/mnt/clickhouse/user_files/</user_files_path>
<access_control_path>/mnt/clickhouse/access/</access_control_path>
<format_schema_path>/mnt/clickhouse/format_schemas/</format_schema_path>
<listen_host>::</listen_host>
<compression>
<case>
<min_part_size>10000000000</min_part_size>
<min_part_size_ratio>0.01</min_part_size_ratio>
<method>lz4</method>
</case>
</compression>
<remote_servers>
<clickhouse_cluster>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>clickhouse-01</host>
<port>9000</port>
</replica>
</shard>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>clickhouse-02</host>
<port>9000</port>
</replica>
</shard>
</clickhouse_cluster>
</remote_servers>
<zookeeper>
<node index="1">
<host>zookeeper-01</host>
<port>2181</port>
</node>
</zookeeper>
</yandex>
``` | https://github.com/ClickHouse/ClickHouse/issues/14298 | https://github.com/ClickHouse/ClickHouse/pull/25921 | edce77803f07a29efd32190bc7c3053325894113 | 38d1ce310d9ff824fc38143ab362460b2b83ab7d | 2020-08-31T20:20:20Z | c++ | 2021-07-03T11:50:05Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,275 | ["src/Functions/ignore.cpp", "tests/queries/0_stateless/01652_ignore_and_low_cardinality.reference", "tests/queries/0_stateless/01652_ignore_and_low_cardinality.sql"] | Logical error: 'Expected single dictionary argument for function.' | ```
set allow_suspicious_low_cardinality_types = 1;
CREATE TABLE lc_null_int8_defnull (val LowCardinality(Nullable(Int8)) DEFAULT NULL) ENGINE = MergeTree order by tuple();
SELECT ignore(10, ignore(*), ignore(ignore(-2, 1025, *)), NULL, *), * FROM lc_null_int8_defnull AS values;
2020.08.31 16:54:42.486431 [ 177540 ] {40cb44da-e053-41a6-a0f7-d5a7a80f3a41} <Error> : Logical error: 'Expected single dictionary argument for function.'.
clickhouse-server: /home/akuzm/ch2/ch/src/Common/Exception.cpp:45: DB::Exception::Exception(const std::string &, int): Assertion `false' failed.
2020.08.31 16:54:42.487002 [ 177458 ] {} <Trace> BaseDaemon: Received signal 6
2020.08.31 16:54:42.487353 [ 177700 ] {} <Fatal> BaseDaemon: ########################################
2020.08.31 16:54:42.487897 [ 177700 ] {} <Fatal> BaseDaemon: (version 20.8.1.1, build id: 7DFA2F634F1D770E) (from thread 177540) (query_id: 40cb44da-e053-41a6-a0f7-d5a7a80f3a41) Received signal Aborted (6)
2020.08.31 16:54:42.488007 [ 177700 ] {} <Fatal> BaseDaemon:
2020.08.31 16:54:42.488132 [ 177700 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f533922e18b 0x7f533920d859 0x7f533920d729 0x7f533921ef36 0x7f533d17bc95 0x7f533027f0ba 0x7f533027e402 0x7f5326b85133 0x7f5326b87bc3 0x7f532211386f 0x7f53219cf75e 0x7f532701670c 0x7f532700c544 0x7f5327005d56 0x7f5327003e56 0x7f532707a548 0x7f532707a816 0x7f5327431933 0x7f532743092a 0x7f53233535a4 0x7f532335aad8 0x7f533a3f6d7c 0x7f533a3f758a 0x7f5339f08173 0x7f5339f0503d 0x7f5339f03eba 0x7f5339074609 0x7f533930a103
2020.08.31 16:54:42.489394 [ 177700 ] {} <Fatal> BaseDaemon: 4. /build/glibc-YYA7BZ/glibc-2.31/signal/../sysdeps/unix/sysv/linux/raise.c:51: __GI_raise @ 0x4618b in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.31.so
2020.08.31 16:54:42.489904 [ 177700 ] {} <Fatal> BaseDaemon: 5. /build/glibc-YYA7BZ/glibc-2.31/stdlib/abort.c:81: abort @ 0x25859 in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.31.so
2020.08.31 16:54:42.490486 [ 177700 ] {} <Fatal> BaseDaemon: 6. /build/glibc-YYA7BZ/glibc-2.31/intl/loadmsgcat.c:509: _nl_load_domain.cold @ 0x25729 in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.31.so
2020.08.31 16:54:42.490784 [ 177700 ] {} <Fatal> BaseDaemon: 7. ? @ 0x36f36 in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.31.so
2020.08.31 16:54:42.491038 [ 177700 ] {} <Fatal> BaseDaemon: 8. /home/akuzm/ch2/ch/src/Common/Exception.cpp:48: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x2d6c95 in /home/akuzm/ch2/build-clang10/src/libclickhouse_common_iod.so
2020.08.31 16:54:42.495831 [ 177700 ] {} <Fatal> BaseDaemon: 9. /home/akuzm/ch2/ch/src/Functions/IFunction.cpp:338: DB::findLowCardinalityArgument(DB::Block const&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&) @ 0x497f0ba in /home/akuzm/ch2/build-clang10/src/Functions/libclickhouse_functionsd.so
2020.08.31 16:54:42.497643 [ 177700 ] {} <Fatal> BaseDaemon: 10. /home/akuzm/ch2/ch/src/Functions/IFunction.cpp:430: DB::ExecutableFunctionAdaptor::execute(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0x497e402 in /home/akuzm/ch2/build-clang10/src/Functions/libclickhouse_functionsd.so
2020.08.31 16:54:42.499693 [ 177700 ] {} <Fatal> BaseDaemon: 11. /home/akuzm/ch2/ch/src/Interpreters/ExpressionActions.cpp:378: DB::ExpressionAction::execute(DB::Block&, bool) const @ 0x13dc133 in /home/akuzm/ch2/build-clang10/src/libclickhouse_interpretersd.so
2020.08.31 16:54:42.500711 [ 177700 ] {} <Fatal> BaseDaemon: 12. /home/akuzm/ch2/ch/src/Interpreters/ExpressionActions.cpp:659: DB::ExpressionActions::execute(DB::Block&, bool) const @ 0x13debc3 in /home/akuzm/ch2/build-clang10/src/libclickhouse_interpretersd.so
2020.08.31 16:54:42.503109 [ 177700 ] {} <Fatal> BaseDaemon: 13. /home/akuzm/ch2/ch/src/Processors/Transforms/ExpressionTransform.cpp:10: DB::ExpressionTransform::transformHeader(DB::Block, std::__1::shared_ptr<DB::ExpressionActions> const&) @ 0x27986f in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_transformsd.so
2020.08.31 16:54:42.505710 [ 177700 ] {} <Fatal> BaseDaemon: 14. /home/akuzm/ch2/ch/src/Processors/QueryPlan/ExpressionStep.cpp:31: DB::ExpressionStep::ExpressionStep(DB::DataStream const&, std::__1::shared_ptr<DB::ExpressionActions>) @ 0x23175e in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_querypland.so
2020.08.31 16:54:42.509246 [ 177700 ] {} <Fatal> BaseDaemon: 15. /home/akuzm/ch2/ch/contrib/libcxx/include/memory:3028: std::__1::__unique_if<DB::ExpressionStep>::__unique_single std::__1::make_unique<DB::ExpressionStep, DB::DataStream const&, std::__1::shared_ptr<DB::ExpressionActions> const&>(DB::DataStream const&, std::__1::shared_ptr<DB::ExpressionActions> const&) @ 0x186d70c in /home/akuzm/ch2/build-clang10/src/libclickhouse_interpretersd.so
2020.08.31 16:54:42.511916 [ 177700 ] {} <Fatal> BaseDaemon: 16. /home/akuzm/ch2/ch/src/Interpreters/InterpreterSelectQuery.cpp:1568: DB::InterpreterSelectQuery::executeExpression(DB::QueryPlan&, std::__1::shared_ptr<DB::ExpressionActions> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x1863544 in /home/akuzm/ch2/build-clang10/src/libclickhouse_interpretersd.so
2020.08.31 16:54:42.514110 [ 177700 ] {} <Fatal> BaseDaemon: 17. /home/akuzm/ch2/ch/src/Interpreters/InterpreterSelectQuery.cpp:931: DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>) @ 0x185cd56 in /home/akuzm/ch2/build-clang10/src/libclickhouse_interpretersd.so
2020.08.31 16:54:42.516152 [ 177700 ] {} <Fatal> BaseDaemon: 18. /home/akuzm/ch2/ch/src/Interpreters/InterpreterSelectQuery.cpp:473: DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x185ae56 in /home/akuzm/ch2/build-clang10/src/libclickhouse_interpretersd.so
2020.08.31 16:54:42.518558 [ 177700 ] {} <Fatal> BaseDaemon: 19. /home/akuzm/ch2/ch/src/Interpreters/InterpreterSelectWithUnionQuery.cpp:183: DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x18d1548 in /home/akuzm/ch2/build-clang10/src/libclickhouse_interpretersd.so
2020.08.31 16:54:42.520820 [ 177700 ] {} <Fatal> BaseDaemon: 20. /home/akuzm/ch2/ch/src/Interpreters/InterpreterSelectWithUnionQuery.cpp:206: DB::InterpreterSelectWithUnionQuery::execute() @ 0x18d1816 in /home/akuzm/ch2/build-clang10/src/libclickhouse_interpretersd.so
2020.08.31 16:54:42.524811 [ 177700 ] {} <Fatal> BaseDaemon: 21. /home/akuzm/ch2/ch/src/Interpreters/executeQuery.cpp:425: DB::executeQueryImpl(char const*, char const*, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, DB::ReadBuffer*) @ 0x1c88933 in /home/akuzm/ch2/build-clang10/src/libclickhouse_interpretersd.so
2020.08.31 16:54:42.528737 [ 177700 ] {} <Fatal> BaseDaemon: 22. /home/akuzm/ch2/ch/src/Interpreters/executeQuery.cpp:740: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0x1c8792a in /home/akuzm/ch2/build-clang10/src/libclickhouse_interpretersd.so
2020.08.31 16:54:42.529289 [ 177700 ] {} <Fatal> BaseDaemon: 23. /home/akuzm/ch2/ch/src/Server/TCPHandler.cpp:253: DB::TCPHandler::runImpl() @ 0x3755a4 in /home/akuzm/ch2/build-clang10/src/libclickhouse_serverd.so
2020.08.31 16:54:42.529933 [ 177700 ] {} <Fatal> BaseDaemon: 24. /home/akuzm/ch2/ch/src/Server/TCPHandler.cpp:1217: DB::TCPHandler::run() @ 0x37cad8 in /home/akuzm/ch2/build-clang10/src/libclickhouse_serverd.so
2020.08.31 16:54:42.530313 [ 177700 ] {} <Fatal> BaseDaemon: 25. /home/akuzm/ch2/ch/contrib/poco/Net/src/TCPServerConnection.cpp:43: Poco::Net::TCPServerConnection::start() @ 0x1b9d7c in /home/akuzm/ch2/build-clang10/contrib/poco-cmake/Net/lib_poco_netd.so
2020.08.31 16:54:42.530680 [ 177700 ] {} <Fatal> BaseDaemon: 26. /home/akuzm/ch2/ch/contrib/poco/Net/src/TCPServerDispatcher.cpp:114: Poco::Net::TCPServerDispatcher::run() @ 0x1ba58a in /home/akuzm/ch2/build-clang10/contrib/poco-cmake/Net/lib_poco_netd.so
2020.08.31 16:54:42.531108 [ 177700 ] {} <Fatal> BaseDaemon: 27. /home/akuzm/ch2/ch/contrib/poco/Foundation/src/ThreadPool.cpp:199: Poco::PooledThread::run() @ 0x276173 in /home/akuzm/ch2/build-clang10/contrib/poco-cmake/Foundation/lib_poco_foundationd.so
2020.08.31 16:54:42.531524 [ 177700 ] {} <Fatal> BaseDaemon: 28. /home/akuzm/ch2/ch/contrib/poco/Foundation/src/Thread.cpp:56: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x27303d in /home/akuzm/ch2/build-clang10/contrib/poco-cmake/Foundation/lib_poco_foundationd.so
2020.08.31 16:54:42.531912 [ 177700 ] {} <Fatal> BaseDaemon: 29. /home/akuzm/ch2/ch/contrib/poco/Foundation/src/Thread_POSIX.cpp:345: Poco::ThreadImpl::runnableEntry(void*) @ 0x271eba in /home/akuzm/ch2/build-clang10/contrib/poco-cmake/Foundation/lib_poco_foundationd.so
2020.08.31 16:54:42.532030 [ 177700 ] {} <Fatal> BaseDaemon: 30. start_thread @ 0x9609 in /lib/x86_64-linux-gnu/libpthread-2.31.so
2020.08.31 16:54:42.532257 [ 177700 ] {} <Fatal> BaseDaemon: 31. /build/glibc-YYA7BZ/glibc-2.31/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:97: __clone @ 0x122103 in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.31.so
``` | https://github.com/ClickHouse/ClickHouse/issues/14275 | https://github.com/ClickHouse/ClickHouse/pull/19016 | ac1c17ac76df709b57a5e379b3c47a8956885ea8 | d8b9278193405512bc96bd74d3f30d281c481bb9 | 2020-08-31T13:56:59Z | c++ | 2021-01-14T13:50:54Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,251 | ["src/Core/MySQL/MySQLReplication.cpp", "src/Core/MySQL/MySQLReplication.h", "src/IO/MySQLBinlogEventReadBuffer.cpp", "src/IO/MySQLBinlogEventReadBuffer.h", "src/IO/tests/gtest_mysql_binlog_event_read_buffer.cpp", "src/IO/ya.make"] | MaterializeMySQLSyncThread DB::Exception: Packet payload is not fully read | **Describe the bug**
I got the error message shown in the log below while data replication was in progress; after that, replication stopped and the database became inaccessible (detach and attach did not help). I'm wondering what this error message means.
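For context, my replication setup is roughly the following (a sketch — the MySQL host and credentials here are placeholders, not the real ones); the full error log follows:
```sql
-- Sketch of the setup (hypothetical host/credentials): the rs_monitor database is
-- replicated from MySQL via the experimental MaterializeMySQL database engine.
SET allow_experimental_database_materialize_mysql = 1;
CREATE DATABASE rs_monitor
ENGINE = MaterializeMySQL('mysql-host:3306', 'rs_monitor', 'user', 'password');
```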
```
2020.08.30 11:43:28.441220 [ 105102 ] {} <Error> MaterializeMySQLSyncThread: Code: 99, e.displayText() = DB::Exception: Packet payload is not fully read. Stopped after 38 bytes, while 89 bytes are in buffer., Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1a7d7400 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xff8f78d in /usr/bin/clickhouse
2. ? @ 0x1738ea80 in /usr/bin/clickhouse
3. DB::MySQLProtocol::PacketEndpoint::tryReceivePacket(DB::MySQLProtocol::IMySQLReadPacket&, unsigned long) @ 0x1739b497 in /usr/bin/clickhouse
4. DB::MySQLClient::readOneBinlogEvent(unsigned long) @ 0x17398b01 in /usr/bin/clickhouse
5. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x173707a0 in /usr/bin/clickhouse
6. ? @ 0x17370ba6 in /usr/bin/clickhouse
7. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xffbf517 in /usr/bin/clickhouse
8. ? @ 0xffbdb53 in /usr/bin/clickhouse
9. start_thread @ 0x7dd5 in /usr/lib64/libpthread-2.17.so
10. clone @ 0xfdead in /usr/lib64/libc-2.17.so
(version 20.8.1.4474 (official build))
2020.08.30 11:43:30.346747 [ 104909 ] {cc04190d-a6a4-44bc-b9f2-b04e14129ee9} <Error> executeQuery: Code: 99, e.displayText() = DB::Exception: Packet payload is not fully read. Stopped after 38 bytes, while 89 bytes are in buffer. (version 20.8.1.4474 (official build)) (from 10.99.4.75:44904) (in query: SELECT COUNT(*) FROM rs_monitor.done_jobs FORMAT TabSeparatedWithNamesAndTypes;), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1a7d7400 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xff8f78d in /usr/bin/clickhouse
2. ? @ 0x1738ea80 in /usr/bin/clickhouse
3. DB::MySQLProtocol::PacketEndpoint::tryReceivePacket(DB::MySQLProtocol::IMySQLReadPacket&, unsigned long) @ 0x1739b497 in /usr/bin/clickhouse
4. DB::MySQLClient::readOneBinlogEvent(unsigned long) @ 0x17398b01 in /usr/bin/clickhouse
5. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x173707a0 in /usr/bin/clickhouse
6. ? @ 0x17370ba6 in /usr/bin/clickhouse
7. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xffbf517 in /usr/bin/clickhouse
8. ? @ 0xffbdb53 in /usr/bin/clickhouse
9. start_thread @ 0x7dd5 in /usr/lib64/libpthread-2.17.so
10. clone @ 0xfdead in /usr/lib64/libc-2.17.so
2020.08.30 11:43:30.347066 [ 104909 ] {cc04190d-a6a4-44bc-b9f2-b04e14129ee9} <Error> DynamicQueryHandler: Code: 99, e.displayText() = DB::Exception: Packet payload is not fully read. Stopped after 38 bytes, while 89 bytes are in buffer., Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1a7d7400 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xff8f78d in /usr/bin/clickhouse
2. ? @ 0x1738ea80 in /usr/bin/clickhouse
3. DB::MySQLProtocol::PacketEndpoint::tryReceivePacket(DB::MySQLProtocol::IMySQLReadPacket&, unsigned long) @ 0x1739b497 in /usr/bin/clickhouse
4. DB::MySQLClient::readOneBinlogEvent(unsigned long) @ 0x17398b01 in /usr/bin/clickhouse
5. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x173707a0 in /usr/bin/clickhouse
6. ? @ 0x17370ba6 in /usr/bin/clickhouse
7. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xffbf517 in /usr/bin/clickhouse
8. ? @ 0xffbdb53 in /usr/bin/clickhouse
9. start_thread @ 0x7dd5 in /usr/lib64/libpthread-2.17.so
10. clone @ 0xfdead in /usr/lib64/libc-2.17.so
(version 20.8.1.4474 (official build))
2020.08.30 11:43:38.036083 [ 104908 ] {c2ba4822-a889-496b-a167-9171b896ebb1} <Error> executeQuery: Code: 99, e.displayText() = DB::Exception: Packet payload is not fully read. Stopped after 38 bytes, while 89 bytes are in buffer. (version 20.8.1.4474 (official build)) (from 10.99.4.75:44906) (in query: SELECT * FROM rs_monitor.done_jobs ORDER BY id DESC FORMAT TabSeparatedWithNamesAndTypes;), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1a7d7400 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xff8f78d in /usr/bin/clickhouse
2. ? @ 0x1738ea80 in /usr/bin/clickhouse
3. DB::MySQLProtocol::PacketEndpoint::tryReceivePacket(DB::MySQLProtocol::IMySQLReadPacket&, unsigned long) @ 0x1739b497 in /usr/bin/clickhouse
4. DB::MySQLClient::readOneBinlogEvent(unsigned long) @ 0x17398b01 in /usr/bin/clickhouse
5. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x173707a0 in /usr/bin/clickhouse
6. ? @ 0x17370ba6 in /usr/bin/clickhouse
7. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xffbf517 in /usr/bin/clickhouse
8. ? @ 0xffbdb53 in /usr/bin/clickhouse
9. start_thread @ 0x7dd5 in /usr/lib64/libpthread-2.17.so
10. clone @ 0xfdead in /usr/lib64/libc-2.17.so
2020.08.30 11:43:38.036391 [ 104908 ] {c2ba4822-a889-496b-a167-9171b896ebb1} <Error> DynamicQueryHandler: Code: 99, e.displayText() = DB::Exception: Packet payload is not fully read. Stopped after 38 bytes, while 89 bytes are in buffer., Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1a7d7400 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xff8f78d in /usr/bin/clickhouse
2. ? @ 0x1738ea80 in /usr/bin/clickhouse
3. DB::MySQLProtocol::PacketEndpoint::tryReceivePacket(DB::MySQLProtocol::IMySQLReadPacket&, unsigned long) @ 0x1739b497 in /usr/bin/clickhouse
4. DB::MySQLClient::readOneBinlogEvent(unsigned long) @ 0x17398b01 in /usr/bin/clickhouse
5. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x173707a0 in /usr/bin/clickhouse
6. ? @ 0x17370ba6 in /usr/bin/clickhouse
7. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xffbf517 in /usr/bin/clickhouse
8. ? @ 0xffbdb53 in /usr/bin/clickhouse
9. start_thread @ 0x7dd5 in /usr/lib64/libpthread-2.17.so
10. clone @ 0xfdead in /usr/lib64/libc-2.17.so
(version 20.8.1.4474 (official build))
2020.08.30 11:45:11.352646 [ 104907 ] {f91d2659-1d7f-4869-941b-01216b4238b1} <Error> executeQuery: Code: 99, e.displayText() = DB::Exception: Packet payload is not fully read. Stopped after 38 bytes, while 89 bytes are in buffer. (version 20.8.1.4474 (official build)) (from 10.99.4.75:59228) (in query: select * from job_summary;), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1a7d7400 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xff8f78d in /usr/bin/clickhouse
2. ? @ 0x1738ea80 in /usr/bin/clickhouse
3. DB::MySQLProtocol::PacketEndpoint::tryReceivePacket(DB::MySQLProtocol::IMySQLReadPacket&, unsigned long) @ 0x1739b497 in /usr/bin/clickhouse
4. DB::MySQLClient::readOneBinlogEvent(unsigned long) @ 0x17398b01 in /usr/bin/clickhouse
5. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x173707a0 in /usr/bin/clickhouse
6. ? @ 0x17370ba6 in /usr/bin/clickhouse
7. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xffbf517 in /usr/bin/clickhouse
8. ? @ 0xffbdb53 in /usr/bin/clickhouse
9. start_thread @ 0x7dd5 in /usr/lib64/libpthread-2.17.so
10. clone @ 0xfdead in /usr/lib64/libc-2.17.so
2020.08.30 11:45:11.353144 [ 104907 ] {} <Error> ServerErrorHandler: Code: 99, e.displayText() = DB::Exception: Packet payload is not fully read. Stopped after 38 bytes, while 89 bytes are in buffer., Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1a7d7400 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xff8f78d in /usr/bin/clickhouse
2. ? @ 0x1738ea80 in /usr/bin/clickhouse
3. DB::MySQLProtocol::PacketEndpoint::tryReceivePacket(DB::MySQLProtocol::IMySQLReadPacket&, unsigned long) @ 0x1739b497 in /usr/bin/clickhouse
4. DB::MySQLClient::readOneBinlogEvent(unsigned long) @ 0x17398b01 in /usr/bin/clickhouse
5. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x173707a0 in /usr/bin/clickhouse
6. ? @ 0x17370ba6 in /usr/bin/clickhouse
7. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xffbf517 in /usr/bin/clickhouse
8. ? @ 0xffbdb53 in /usr/bin/clickhouse
9. start_thread @ 0x7dd5 in /usr/lib64/libpthread-2.17.so
10. clone @ 0xfdead in /usr/lib64/libc-2.17.so
(version 20.8.1.4474 (official build))
```
| https://github.com/ClickHouse/ClickHouse/issues/14251 | https://github.com/ClickHouse/ClickHouse/pull/14852 | 652163c07c62bdd8724e109160b03016551b6c1a | 5a890d73777ed1be4da46dc30df4f2c03b53a9be | 2020-08-30T03:55:09Z | c++ | 2020-09-17T16:17:30Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,235 | ["src/Core/MySQL/MySQLReplication.cpp", "src/Core/MySQL/MySQLReplication.h"] | MaterializeMySQL unable to handle 'COMMIT' log may cause synced DB inaccessible |
It seems MaterializeMySQL is not able to handle the binlog event below, which causes an exception. The problem is that frequent occurrences of this exception may crash MaterializeMySQLSyncThread (I guess; the observed symptom is that the synced DB in ClickHouse becomes inaccessible).
My suggestion is: if such a log event is harmless, we should just skip it.
> #200829 16:15:00 server id 936 end_log_pos 710243420 CRC32 0x0fe88b7e Xid = 13469366
> COMMIT/*!*/;
Exception:
> 2020.08.29 16:41:02.696328 [ 104835 ] {} <Error> MaterializeMySQLSyncThread(): Query EXTERNAL DDL FROM MySQL(host_monitor, host_monitor) COMMIT wasn't finished successfully: Code: 62, e.displayText() = DB::Exception: Syntax error: failed at position 115 ('COMMIT'): COMMIT. Expected one of: RENAME DATABASE, TRUNCATE, RENAME TABLE, DROP query, RENAME DICTIONARY, DETACH, RENAME query, DROP, create query, CREATE, EXCHANGE TABLES, ALTER TABLE, alter query, Stack trace (when copying this message, always include the lines below):
>
> 0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1a7d7400 in /usr/bin/clickhouse
> 1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xff8f78d in /usr/bin/clickhouse
> 2. ? @ 0x183f6741 in /usr/bin/clickhouse
> 3. ? @ 0x177a3d6e in /usr/bin/clickhouse
> 4. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0x177a5e7f in /usr/bin/clickhouse
> 5. ? @ 0x17367e84 in /usr/bin/clickhouse
> 6. DB::MaterializeMySQLSyncThread::onEvent(DB::MaterializeMySQLSyncThread::Buffers&, std::__1::shared_ptr<DB::MySQLReplication::EventBase> const&, DB::MaterializeMetadata&) @ 0x1736ee86 in /usr/bin/clickhouse
> 7. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x173707bf in /usr/bin/clickhouse
> 8. ? @ 0x17370ba6 in /usr/bin/clickhouse
> 9. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xffbf517 in /usr/bin/clickhouse
> 10. ? @ 0xffbdb53 in /usr/bin/clickhouse
> 11. start_thread @ 0x7dd5 in /usr/lib64/libpthread-2.17.so
> 12. clone @ 0xfdead in /usr/lib64/libc-2.17.so
> (version 20.8.1.4474 (official build))
> 2020.08.29 16:41:02.696498 [ 104835 ] {} <Error> MaterializeMySQLSyncThread: Code: 62, e.displayText() = DB::Exception: Syntax error: failed at position 115 ('COMMIT'): COMMIT. Expected one of: RENAME DATABASE, TRUNCATE, RENAME TABLE, DROP query, RENAME DICTIONARY, DETACH, RENAME query, DROP, create query, CREATE, EXCHANGE TABLES, ALTER TABLE, alter query, Stack trace (when copying this message, always include the lines below):
>
> 0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1a7d7400 in /usr/bin/clickhouse
> 1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xff8f78d in /usr/bin/clickhouse
> 2. ? @ 0x183f6741 in /usr/bin/clickhouse
> 3. ? @ 0x177a3d6e in /usr/bin/clickhouse
> 4. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0x177a5e7f in /usr/bin/clickhouse
> 5. ? @ 0x17367e84 in /usr/bin/clickhouse
> 6. DB::MaterializeMySQLSyncThread::onEvent(DB::MaterializeMySQLSyncThread::Buffers&, std::__1::shared_ptr<DB::MySQLReplication::EventBase> const&, DB::MaterializeMetadata&) @ 0x1736ee86 in /usr/bin/clickhouse
> 7. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x173707bf in /usr/bin/clickhouse
> 8. ? @ 0x17370ba6 in /usr/bin/clickhouse
> 9. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xffbf517 in /usr/bin/clickhouse
> 10. ? @ 0xffbdb53 in /usr/bin/clickhouse
> 11. start_thread @ 0x7dd5 in /usr/lib64/libpthread-2.17.so
> 12. clone @ 0xfdead in /usr/lib64/libc-2.17.so
> (version 20.8.1.4474 (official build))
**Additional context**
I'm using a MySQL slave (replica) as the source for ClickHouse.
| https://github.com/ClickHouse/ClickHouse/issues/14235 | https://github.com/ClickHouse/ClickHouse/pull/14253 | ebbdaf41aa44b8c9a70bc738c1be56dd9a5e0304 | 0586f0d555f7481b394afc55bbb29738cd573a1c | 2020-08-29T08:51:55Z | c++ | 2020-08-31T12:08:34Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,231 | ["src/Databases/MySQL/MaterializeMySQLSyncThread.cpp", "src/Parsers/Lexer.cpp", "tests/queries/0_stateless/01460_allow_dollar_and_number_in_identifier.reference", "tests/queries/0_stateless/01460_allow_dollar_and_number_in_identifier.sql"] | MaterializeMySQL not support number started column name | **Describe the bug**
Table schema:
> mysql> show create table name\G
>
> *************************** 1. row ***************************
> Table: name
> Create Table: CREATE TABLE `name` (
> `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
> `username` varchar(40) DEFAULT NULL,
> `1m_10m` smallint(6) DEFAULT NULL,
> PRIMARY KEY (`id`)
> ) ENGINE=InnoDB AUTO_INCREMENT=178 DEFAULT CHARSET=latin1
When creating the database:
> CREATE DATABASE clickhouse_test ENGINE = MaterializeMySQL('192.168.153.132:3306', 'clickhouse_test', 'root', 'password');
I got this error message:
> Code: 62. DB::Exception: Received from 192.168.153.132:9000. DB::Exception: Syntax error: failed at position 80 ('1m_10m'): 1m_10m) VALUES. Wrong number.
| https://github.com/ClickHouse/ClickHouse/issues/14231 | https://github.com/ClickHouse/ClickHouse/pull/14232 | bc8765d5ada07f658d302110d6d762a21c10cf53 | 75ca3d217fe0dd6dce7411abdbcb43069a8138d0 | 2020-08-29T01:07:39Z | c++ | 2020-08-31T16:03:04Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,224 | ["src/Interpreters/InterpreterAlterQuery.cpp", "src/Storages/LiveView/StorageLiveView.cpp", "src/Storages/LiveView/StorageLiveView.h", "tests/queries/0_stateless/01463_test_alter_live_view_refresh.reference", "tests/queries/0_stateless/01463_test_alter_live_view_refresh.sql"] | ALTER LIVE VIEW REFRESH throws an exception | **Describe the bug**
Using `ALTER LIVE VIEW view REFRESH` results in an exception.
**How to reproduce**
* Connected to ClickHouse server version 20.8.1 revision 54438
* `CREATE TABLE` statements for all tables involved
```
clicktest :) SHOW CREATE table0
SHOW CREATE TABLE table0
┌─statement─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ CREATE TABLE default.table0
(
`d` Date,
`a` String,
`b` UInt8,
`x` String,
`y` Int8
)
ENGINE = MergeTree()
ORDER BY d
SETTINGS index_granularity = 8192 │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
1 rows in set. Elapsed: 0.006 sec.
```
* Queries to run that lead to unexpected result
```
ClickHouse client version 20.8.1.4447 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.8.1 revision 54438.
clicktest :) SET allow_experimental_live_view=1
SET allow_experimental_live_view = 1
Ok.
0 rows in set. Elapsed: 0.003 sec.
clicktest :) CREATE LIVE VIEW live1 AS SELECT * FROM table0
CREATE LIVE VIEW live1 AS
SELECT *
FROM table0
Ok.
0 rows in set. Elapsed: 0.008 sec.
clicktest :) ALTER LIVE VIEW live1 REFRESH
ALTER LIVE VIEW live1
REFRESH
Received exception from server (version 20.8.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: RWLockImpl::getLock(): RWLock is already locked in exclusive mode.
0 rows in set. Elapsed: 0.005 sec.
```
**Expected behavior**
The `ALTER LIVE VIEW ... REFRESH` statement should complete successfully and refresh the live view instead of throwing the RWLock exception.
| https://github.com/ClickHouse/ClickHouse/issues/14224 | https://github.com/ClickHouse/ClickHouse/pull/14320 | 951bce59118538c33192713bc334d7e34b3fe5d7 | 08ed74732e944e7cb6e9fb8ff27709ef9c6ee520 | 2020-08-28T15:24:25Z | c++ | 2020-09-02T01:52:35Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,157 | ["src/Core/ExternalResultDescription.cpp", "src/Core/ExternalResultDescription.h", "src/Core/MySQL/MySQLReplication.cpp", "src/Core/MySQL/MySQLReplication.h", "src/Databases/MySQL/MaterializeMySQLSyncThread.cpp", "src/Formats/MySQLBlockInputStream.cpp", "tests/integration/test_materialize_mysql_database/materialize_with_ddl.py", "tests/integration/test_materialize_mysql_database/test.py"] | DB::Exception: Unsupported type Decimal(13, 4) when using MaterializeMySQL | It seems MaterializeMySQL doesn't support Decimal data type when doing replication. | https://github.com/ClickHouse/ClickHouse/issues/14157 | https://github.com/ClickHouse/ClickHouse/pull/14535 | c1402d62db751f502e118b56f06a9730798652a1 | 6e0bdaf46da2c939c45d360a442ce739db63ba94 | 2020-08-27T07:08:18Z | c++ | 2020-09-19T11:05:32Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,156 | ["docker/client/Dockerfile", "docker/server/Dockerfile", "docker/server/Dockerfile.alpine"] | orc requires tzdata: 'orc::getLocalTimezone(): Can't open /etc/localtime." | when i use "SELECT *
FROM hdfs()" with version 20.6 .but it works well when use version 19.14.
Code: 1001. DB::Exception: Received from hadoop:9000. DB::Exception: N3orc13TimezoneErrorE: Can't open /etc/localtime.
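Since the original query was truncated above, here is a hypothetical reconstruction of the kind of call that hits this error (the URI, format and column structure are placeholders I added for illustration):
```sql
-- Hypothetical example: reading an ORC file via the hdfs() table function.
SELECT *
FROM hdfs('hdfs://hadoop-namenode:9000/path/to/file.orc', 'ORC', 'id Int32, name String')
```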
I have been searching the net for a long time with no luck. Please help or give me some ideas on how to fix this.
| https://github.com/ClickHouse/ClickHouse/issues/14156 | https://github.com/ClickHouse/ClickHouse/pull/22000 | 087be05dfdc044f71d02aabebc5e04690b25e5fd | b6a0f2f4ad8b1dd6734929a2370f9f6fab128c30 | 2020-08-27T05:42:28Z | c++ | 2021-03-24T20:10:54Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,130 | ["src/Storages/Distributed/DirectoryMonitor.cpp", "tests/integration/test_insert_distributed_async_extra_dirs/__init__.py", "tests/integration/test_insert_distributed_async_extra_dirs/configs/remote_servers.xml", "tests/integration/test_insert_distributed_async_extra_dirs/test.py"] | Clickhouse unable to start with unclear error if Distributed table have unexpected folders | ```
create table default.DFA(A Int64) Engine=MergeTree order by tuple();
create table default.DFAD as default.DFA Engine=Distributed(test_shard_localhost, default, DFA)
mkdir /var/lib/clickhouse/data/default/DFAD/shard1_replica1,shard1_replica2
chown -R clickhouse.clickhouse /var/lib/clickhouse/data/default/DFAD/shard1_replica1,shard1_replica2/
/etc/init.d/clickhouse-server restart
<Fatal> BaseDaemon: ########################################
<Fatal> BaseDaemon: (version 20.8.1.4436, build id: 30B7E2A4815769B9) (from thread 21305) (no query) Received signal Segmentation fault (11)
<Fatal> BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code.
<Fatal> BaseDaemon: Stack trace: 0x17a585f5 0x1783c5de 0x17841b4e 0x17841ea0 0x1715a672 0xff3df97 0xff3e70a 0xff3d4a7 0xff3bae3 0x7f411ecd9fa3 0x7f411ebfb4
<Fatal> BaseDaemon: 3. DB::StorageDistributedDirectoryMonitor::createPool(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<cha
t&) @ 0x17a585f5 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 4. DB::StorageDistributed::requireDirectoryMonitor(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>
td::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x1783c5de in /usr/bin/clickhouse
<Fatal> BaseDaemon: 5. DB::StorageDistributed::createDirectoryMonitors(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>
house
<Fatal> BaseDaemon: 6. DB::StorageDistributed::startup() @ 0x17841ea0 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 7. ? @ 0x1715a672 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 8. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xff3df97 in /usr/bin/clickho
<Fatal> BaseDaemon: 9. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>
``` | https://github.com/ClickHouse/ClickHouse/issues/14130 | https://github.com/ClickHouse/ClickHouse/pull/20498 | 2c7c5c9604e6f71cecc6d6d534a5431c6409cd16 | 6363d444887ff3e66caeac04c2c0fd3d4d5a39eb | 2020-08-26T19:00:57Z | c++ | 2021-02-17T06:43:11Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,118 | ["src/Functions/FunctionBinaryArithmetic.h", "src/Functions/FunctionUnaryArithmetic.h", "src/Functions/FunctionsComparison.h", "src/Functions/FunctionsLogical.h", "tests/queries/0_stateless/01457_compile_expressions_fuzzer.reference", "tests/queries/0_stateless/01457_compile_expressions_fuzzer.sql"] | Debug assertion with function GREATEST if compile_expressions = 1. | ```
SET compile_expressions = 1
SELECT GREATEST(2,0)
```
| https://github.com/ClickHouse/ClickHouse/issues/14118 | https://github.com/ClickHouse/ClickHouse/pull/14122 | 0f706c01caad8ac83308413ddc019d7c4ee0373d | 67f16d5ae8e5ecfdb989c689134c279f8a9c2fa4 | 2020-08-26T16:32:08Z | c++ | 2020-08-26T23:15:40Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,114 | ["src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp", "src/Interpreters/MySQL/tests/gtest_create_rewritten.cpp", "src/Parsers/MySQL/ASTDeclareColumn.cpp"] | experimental MaterializeMySQL in 20.8 cannot synchronize "create table" commands | i use
the experimental MaterializeMySQL engine in ClickHouse 20.8. My steps:
1. create a new database ckdb on mysql, then create table t1(a int, primary key(a)); and insert some rows
2. SET allow_experimental_database_materialize_mysql=1; at clickhouse,
3. CREATE DATABASE ckdb ENGINE = MaterializeMySQL('127.0.0.1:3306', 'ckdb', 'root', 'A123b_456');
4. use ckdb and select * from t1 — works fine
5. create table dzm as select * from others in MySQL ----> dzm is not visible in ClickHouse, and selecting from t1 now reports
```
Code: 48. DB::Exception: Received from localhost:9000. DB::Exception: The ckdb.dzm cannot be materialized, because there is no primary keys..
```
6. create table t2(a int, b int, primary key(a)); ----> t2 is not visible in ClickHouse, and selecting from t1 reports
```
Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: Sorting key cannot contain nullable columns.
```
<del>7. if i drop ckdb and recreate it, all tables can be selected </del>
7. If there are some 'bad' tables in a database, none of the tables in that database can be selected.
| https://github.com/ClickHouse/ClickHouse/issues/14114 | https://github.com/ClickHouse/ClickHouse/pull/14397 | 8e2fba5be1ec3b5a0c7f8bf34b72913547b19b20 | 3a7181cfcb47b9f54cde1dcb80340009909c20d4 | 2020-08-26T14:47:44Z | c++ | 2020-09-02T20:12:33Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,045 | ["src/Columns/ColumnVector.h", "src/Functions/if.cpp", "tests/queries/0_stateless/01475_mutation_with_if.reference", "tests/queries/0_stateless/01475_mutation_with_if.sql"] | Segmentation fault after upgrade from v20.5.5.74 to v20.6.4.44 | ```
2020.08.25 12:14:14.924784 [ 10386 ] {} <Fatal> BaseDaemon: ########################################
2020.08.25 12:14:14.924879 [ 10386 ] {} <Fatal> BaseDaemon: (version 20.6.4.44 (official build), no build id) (from thread 10231) (no query) Received signal Segmentation fault (11)
2020.08.25 12:14:14.924933 [ 10386 ] {} <Fatal> BaseDaemon: Address: 0x7fa55fcb0000 Access: read. Attempted access has violated the permissions assigned to the memory area.
2020.08.25 12:14:14.924971 [ 10386 ] {} <Fatal> BaseDaemon: Stack trace: 0xf57c528 0xc510217 0xc635179 0xc510cb3 0xc634c24 0xae6ee25 0xf1a2bb2 0xf1a51ed 0xf3f63cf 0xede92cd 0xf3f63af 0xede92cd 0xf3f63af 0xede92cd 0xf3f63af 0xede92cd 0xf3f63af 0xede92cd 0xf3f63af 0xede92cd 0xf3f63af 0xede92cd 0xf3f63af 0xede92cd 0xf3f63af 0xede92cd 0xf3f63af 0xede92cd 0xf3f63af
2020.08.25 12:14:14.925045 [ 10386 ] {} <Fatal> BaseDaemon: 3. DB::ColumnString::insertFrom(DB::IColumn const&, unsigned long) @ 0xf57c528 in /usr/bin/clickhouse
2020.08.25 12:14:14.925101 [ 10386 ] {} <Fatal> BaseDaemon: 4. DB::FunctionIf::executeGeneric(DB::ColumnVector<char8_t> const*, DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) @ 0xc510217 in /usr/bin/clickhouse
2020.08.25 12:14:14.925133 [ 10386 ] {} <Fatal> BaseDaemon: 5. DB::FunctionIf::executeImpl(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) @ 0xc635179 in /usr/bin/clickhouse
2020.08.25 12:14:14.925154 [ 10386 ] {} <Fatal> BaseDaemon: 6. DB::FunctionIf::executeForNullableThenElse(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) @ 0xc510cb3 in /usr/bin/clickhouse
2020.08.25 12:14:14.925182 [ 10386 ] {} <Fatal> BaseDaemon: 7. DB::FunctionIf::executeImpl(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) @ 0xc634c24 in /usr/bin/clickhouse
2020.08.25 12:14:14.925216 [ 10386 ] {} <Fatal> BaseDaemon: 8. DB::ExecutableFunctionAdaptor::execute(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xae6ee25 in /usr/bin/clickhouse
2020.08.25 12:14:14.925280 [ 10386 ] {} <Fatal> BaseDaemon: 9. DB::ExpressionAction::execute(DB::Block&, bool) const @ 0xf1a2bb2 in /usr/bin/clickhouse
2020.08.25 12:14:14.925315 [ 10386 ] {} <Fatal> BaseDaemon: 10. DB::ExpressionActions::execute(DB::Block&, bool) const @ 0xf1a51ed in /usr/bin/clickhouse
2020.08.25 12:14:14.925341 [ 10386 ] {} <Fatal> BaseDaemon: 11. DB::ExpressionBlockInputStream::readImpl() @ 0xf3f63cf in /usr/bin/clickhouse
2020.08.25 12:14:14.925448 [ 10386 ] {} <Fatal> BaseDaemon: 12. DB::IBlockInputStream::read() @ 0xede92cd in /usr/bin/clickhouse
2020.08.25 12:14:14.925481 [ 10386 ] {} <Fatal> BaseDaemon: 13. DB::ExpressionBlockInputStream::readImpl() @ 0xf3f63af in /usr/bin/clickhouse
2020.08.25 12:14:14.925531 [ 10386 ] {} <Fatal> BaseDaemon: 14. DB::IBlockInputStream::read() @ 0xede92cd in /usr/bin/clickhouse
2020.08.25 12:14:14.925564 [ 10386 ] {} <Fatal> BaseDaemon: 15. DB::ExpressionBlockInputStream::readImpl() @ 0xf3f63af in /usr/bin/clickhouse
2020.08.25 12:14:14.925582 [ 10386 ] {} <Fatal> BaseDaemon: 16. DB::IBlockInputStream::read() @ 0xede92cd in /usr/bin/clickhouse
2020.08.25 12:14:14.925595 [ 10386 ] {} <Fatal> BaseDaemon: 17. DB::ExpressionBlockInputStream::readImpl() @ 0xf3f63af in /usr/bin/clickhouse
2020.08.25 12:14:14.925608 [ 10386 ] {} <Fatal> BaseDaemon: 18. DB::IBlockInputStream::read() @ 0xede92cd in /usr/bin/clickhouse
2020.08.25 12:14:14.925622 [ 10386 ] {} <Fatal> BaseDaemon: 19. DB::ExpressionBlockInputStream::readImpl() @ 0xf3f63af in /usr/bin/clickhouse
2020.08.25 12:14:14.925639 [ 10386 ] {} <Fatal> BaseDaemon: 20. DB::IBlockInputStream::read() @ 0xede92cd in /usr/bin/clickhouse
2020.08.25 12:14:14.925652 [ 10386 ] {} <Fatal> BaseDaemon: 21. DB::ExpressionBlockInputStream::readImpl() @ 0xf3f63af in /usr/bin/clickhouse
2020.08.25 12:14:14.925710 [ 10386 ] {} <Fatal> BaseDaemon: 22. DB::IBlockInputStream::read() @ 0xede92cd in /usr/bin/clickhouse
2020.08.25 12:14:14.925729 [ 10386 ] {} <Fatal> BaseDaemon: 23. DB::ExpressionBlockInputStream::readImpl() @ 0xf3f63af in /usr/bin/clickhouse
2020.08.25 12:14:14.925744 [ 10386 ] {} <Fatal> BaseDaemon: 24. DB::IBlockInputStream::read() @ 0xede92cd in /usr/bin/clickhouse
2020.08.25 12:14:14.925761 [ 10386 ] {} <Fatal> BaseDaemon: 25. DB::ExpressionBlockInputStream::readImpl() @ 0xf3f63af in /usr/bin/clickhouse
2020.08.25 12:14:14.925818 [ 10386 ] {} <Fatal> BaseDaemon: 26. DB::IBlockInputStream::read() @ 0xede92cd in /usr/bin/clickhouse
2020.08.25 12:14:14.925844 [ 10386 ] {} <Fatal> BaseDaemon: 27. DB::ExpressionBlockInputStream::readImpl() @ 0xf3f63af in /usr/bin/clickhouse
2020.08.25 12:14:14.925859 [ 10386 ] {} <Fatal> BaseDaemon: 28. DB::IBlockInputStream::read() @ 0xede92cd in /usr/bin/clickhouse
2020.08.25 12:14:14.925876 [ 10386 ] {} <Fatal> BaseDaemon: 29. DB::ExpressionBlockInputStream::readImpl() @ 0xf3f63af in /usr/bin/clickhouse
2020.08.25 12:14:14.925901 [ 10386 ] {} <Fatal> BaseDaemon: 30. DB::IBlockInputStream::read() @ 0xede92cd in /usr/bin/clickhouse
2020.08.25 12:14:14.925929 [ 10386 ] {} <Fatal> BaseDaemon: 31. DB::ExpressionBlockInputStream::readImpl() @ 0xf3f63af in /usr/bin/clickhouse
``` | https://github.com/ClickHouse/ClickHouse/issues/14045 | https://github.com/ClickHouse/ClickHouse/pull/14646 | 3e00d64ebf38218a3210b8be42f73858bbb804c4 | 336430d3c27bfcfd7b45d314fea62dbaf66c3852 | 2020-08-25T12:26:58Z | c++ | 2020-09-14T06:51:40Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 14,006 | ["src/Functions/FunctionBase64Conversion.h"] | MemorySanitizer: use-of-uninitialized-value | https://clickhouse-test-reports.s3.yandex.net/13925/0f492917338ba5b20728563e38abc35be67d606c/stress_test_(memory)/stderr.log
MemorySanitizer: use-of-uninitialized-value in DB::FunctionToStringCutToZero::tryExecuteString.
Uninitialized value was created ... in DB::FunctionBase64ConversionDB::Base64Decode::executeImpl
(from ExecuteScalarSubqueriesVisitor -> ... -> InterpreterSelectQuery::getSampleBlockImpl() -> ... -> IFunction::executeImplDryRun())
There's only one query with `toStringCutToZero(base64Decode)` in the server log:
```
SELECT JSONExtractKeysAndValues(-0, (CAST((( SELECT ['"O'], (CAST((( SELECT toStringCutToZero((CAST((base64Decode('UE')) AS String))) ) AS op, ( SELECT positionCaseInsensitiveUTF8() ) AS uxxgbn) AS String)), (CAST((-44955.4) AS Date)), regionToDistrict(NULL) ) AS q) AS DateTime)))
``` | https://github.com/ClickHouse/ClickHouse/issues/14006 | https://github.com/ClickHouse/ClickHouse/pull/15030 | bf81dcbd2ff02945527f19f13a97f8aadbde26b7 | 068e9f372e35a9c0cad84cfe7230de61f5ed2023 | 2020-08-24T14:08:30Z | c++ | 2020-09-20T16:03:28Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,992 | ["src/Interpreters/OptimizeIfWithConstantConditionVisitor.cpp", "tests/queries/0_stateless/01503_if_const_optimization.reference", "tests/queries/0_stateless/01503_if_const_optimization.sql"] | Syntax check fails for function CAST() (found by fuzzer) | **Describe the bug**
ClickHouse is unable to detect a syntax error in the expression `SELECT if(CAST(NULL), '2.55', NULL) AS x`, which leads to a failure.
**How to reproduce**
* Reproduces in ClickHouse versions 20.7 and 20.8
* Query to run: the `SELECT if(CAST(NULL), '2.55', NULL) AS x` expression shown above
**Expected behavior**
A clear message, indicating there is a syntax error in the above expression (CAST(NULL) is not a valid logical expression).
**Error message and/or stacktrace**
The original AST-fuzzer report - https://clickhouse-test-reports.s3.yandex.net/12550/48333b29f619490fd1df50230301596e44e2af45/fuzzer/fuzzer.log#fail1
Server error log - https://clickhouse-test-reports.s3.yandex.net/12550/48333b29f619490fd1df50230301596e44e2af45/fuzzer/server.log | https://github.com/ClickHouse/ClickHouse/issues/13992 | https://github.com/ClickHouse/ClickHouse/pull/15029 | 068e9f372e35a9c0cad84cfe7230de61f5ed2023 | 3f5d7843f6a2f2a4bd284db3bc79fefe15b7ce8f | 2020-08-24T09:52:21Z | c++ | 2020-09-20T16:03:47Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,990 | ["tests/queries/0_stateless/01584_distributed_buffer_cannot_find_column.reference", "tests/queries/0_stateless/01584_distributed_buffer_cannot_find_column.sql", "tests/queries/0_stateless/arcadia_skip_list.txt"] | Code: 8. DB::Exception: Cannot find column xx in source stream | **Describe the bug**
I created a table with the Buffer engine that flushes data into a Distributed table on the shard. Everything works, from creating the tables to inserting data, and basic SELECT queries also work, but the query fails when I use aggregates such as sum or count, with the following error:
```
SELECT sum(amount)
FROM realtime.realtimebuff
Received exception from server (version 20.6.3):
Code: 8. DB::Exception: Received from 10.60.37.37:9000. DB::Exception: Cannot find column amount in source stream.
0 rows in set. Elapsed: 0.005 sec. "
```
**Basic queries like the one below work fine:**
```
SELECT *
FROM realtime.realtimebuff
┌─amount─┬─transID─────────┬─userID──────────┬─appID─┬─appName─┬─transType─┬─orderSource─┬─nau─┬─fau─┬─transactionType─┬─supplier─┬─fMerchant─┬─bankConnCode─┬─────────────reqDate─┐
│ 100 │ 200312000295032 │ 200223000028708 │ 14 │ Data │ 1 │ 20 │ 1 │ 0 │ 123 │ abc │ 1234a │ ZPVBIDV │ 2020-08-24 15:09:43 │
└────────┴─────────────────┴─────────────────┴───────┴─────────┴───────────┴─────────────┴─────┴─────┴─────────────────┴──────────┴───────────┴──────────────┴─────────────────────┘
1 rows in set. Elapsed: 0.018 sec.
```
**How to reproduce**
```sql
/** 1. Create tables **/
/* a. Create replicated table */
CREATE TABLE realtimedrep(amount Int64,transID String,userID String,appID String,appName String,transType String,orderSource String,nau String,fau String,transactionType String,supplier String,fMerchant String,bankConnCode String,reqDate DateTime) ENGINE = ReplicatedMergeTree('/data/clickhouse/3/realtimedrep','2') PARTITION BY toDate(reqDate) ORDER BY transID SETTINGS index_granularity = 8192;
/* b. Create Distributed table */
CREATE TABLE realtimedistributed(amount Int64,transID String,userID String,appID String,appName String,transType String,orderSource String,nau String,fau String,transactionType String,supplier String,fMerchant String,bankConnCode String,reqDate DateTime) ENGINE = Distributed(cluster_two_shards, realtime, realtimedrep, rand());
/* c. Create Buffer */
CREATE TABLE realtimebuff(amount Int64,transID String,userID String,appID String,appName String,transType String,orderSource String,nau String,fau String,transactionType String,supplier String,fMerchant String,bankConnCode String,reqDate DateTime) ENGINE = Buffer('realtime', 'realtimedistributed', 16, 3600, 100, 10000, 1000000, 10000000, 100000000);
/** 2. Insert data **/
insert into realtimebuff (amount,transID,userID,appID,appName,transType,orderSource,nau,fau,transactionType,supplier,fMerchant,bankConnCode,reqDate) values (100, '200312000295032','200223000028708','14', 'Data','1', '20','1', '0','123','abc', '1234a','ZPVBIDV', 1598256583);
```
**3. Run queries**
```sql
select sum(amount) from realtime.realtimebuff;
```
**Which ClickHouse server version to use**
Version 20.6.3
**Expected behavior**
It's a basic query and it should work.
**Additional context**
Even though the query above does not work on the Buffer table, it works on the Distributed table. These are the results:
```
SELECT sum(amount)
FROM realtimedistributed
┌─sum(amount)─┐
│ 100 │
└─────────────┘
1 rows in set. Elapsed: 0.008 sec
```
| https://github.com/ClickHouse/ClickHouse/issues/13990 | https://github.com/ClickHouse/ClickHouse/pull/17298 | 00da5148a105f9306b6d15492090453e96988d39 | 650d20fdeb9db045f96f5ee5abfa046b785a17cd | 2020-08-24T09:05:41Z | c++ | 2020-11-29T14:54:24Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,981 | ["src/Dictionaries/ExternalQueryBuilder.cpp", "src/Dictionaries/SSDComplexKeyCacheDictionary.cpp", "src/Dictionaries/SSDComplexKeyCacheDictionary.h"] | An error in SSD Cache dictionary (found by fuzzer) | https://clickhouse-test-reports.s3.yandex.net/8367/333ec2e496f4fe4addd22d5fa7c5c2a499099c57/fuzzer/fuzzer.log#fail1 | https://github.com/ClickHouse/ClickHouse/issues/13981 | https://github.com/ClickHouse/ClickHouse/pull/14313 | 76aa1d15adcfc9745f730b1294d9378358f18de7 | 7bd31fb3d3b5a2b0d960f0da52e00b829825eb1f | 2020-08-23T21:32:32Z | c++ | 2020-09-02T21:50:24Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,962 | ["src/Functions/FunctionsJSON.h", "tests/queries/0_stateless/01685_json_extract_double_as_float.reference", "tests/queries/0_stateless/01685_json_extract_double_as_float.sql"] | JSONExtract returns 0 for numbers with floating-point for type float / Float32 | **Describe the bug**
JSONExtract returns 0 for floating-point numbers when the requested type is *float / Float32*.
**How to reproduce**
```sql
WITH '{ "v":1.1}' AS raw
SELECT
JSONExtract(raw, 'v', 'float') AS float32_1,
JSONExtract(raw, 'v', 'Float32') AS float32_2,
JSONExtractFloat(raw, 'v') AS float64_1,
JSONExtract(raw, 'v', 'double') AS float64_2
/*
┌─float32_1─┬─float32_2─┬─float64_1─┬─float64_2─┐
│ 0 │ 0 │ 1.1 │ 1.1 │
└───────────┴───────────┴───────────┴───────────┘
*/
```
```sql
WITH '{ "v":1E-2}' AS raw
SELECT
JSONExtract(raw, 'v', 'float') AS float32_1,
JSONExtract(raw, 'v', 'Float32') AS float32_2,
JSONExtractFloat(raw, 'v') AS float64_1,
JSONExtract(raw, 'v', 'double') AS float64_2
/*
┌─float32_1─┬─float32_2─┬─float64_1─┬─float64_2─┐
│ 0 │ 0 │ 0.01 │ 0.01 │
└───────────┴───────────┴───────────┴───────────┘
*/
```
**Expected behavior**
I think ClickHouse should either add support for the *float / Float32* type here or throw an exception pointing out that *double / Float64* must be used instead. A possible workaround is sketched below.
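As a possible interim workaround (my own sketch, not verified beyond the behaviour shown above): extract the value as *double / Float64*, which works, and then cast it down:
```sql
-- Workaround sketch: extract as double (works), then narrow to Float32.
WITH '{ "v":1.1}' AS raw
SELECT toFloat32(JSONExtract(raw, 'v', 'double')) AS v_float32
```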
**Additional context**
CH v. 20.6.4.44
| https://github.com/ClickHouse/ClickHouse/issues/13962 | https://github.com/ClickHouse/ClickHouse/pull/19960 | fb02d5653494e8b45560e3dc5f6f0be18c1dff9c | 695e28079dff59d4eb1798da98899a3b2c3ffa44 | 2020-08-22T08:01:33Z | c++ | 2021-02-02T12:15:37Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,958 | ["src/Columns/ColumnLowCardinality.cpp", "tests/queries/0_stateless/01456_low_cardinality_sorting_bugfix.reference", "tests/queries/0_stateless/01456_low_cardinality_sorting_bugfix.sql"] | Multiple ORDER BY expressions not returning correct order | **Describe the bug**
When using ORDER BY with multiple expressions on a MergeTree table, the ordering is not correct.
**How to reproduce**
`yandex/clickhouse-server:20.5.2.7`
Create the table:
```sql
create table order_test
(
timestamp DateTime64(3),
color LowCardinality(Nullable(String))
) engine = MergeTree()
PARTITION BY toYYYYMM(timestamp)
ORDER BY timestamp
SETTINGS index_granularity = 8192;
```
Fill the table (run it a few times to get a range of timestamps):
```sql
insert into order_test
select now64(),
arrayElement(['red', 'green', 'blue', null], modulo(number, 4) + 1) as color
from (select number from system.numbers limit 1000000);
```
Query:
```sql
SELECT count(),
color,
toStartOfSecond(timestamp) AS `second`
FROM order_test AS i
GROUP BY color, `second`
ORDER BY color, `second` desc
LIMIT 500;
```
See out of order results:
```
500000,blue,2020-08-21 18:30:06.000
500000,blue,2020-08-21 18:30:09.000
250000,blue,2020-08-21 18:30:03.000
250000,blue,2020-08-21 18:30:08.000
250000,blue,2020-08-21 18:30:07.000
```
**Expected behavior**
See results ordered by descending `second`:
```
500000,blue,2020-08-21 18:30:09.000
250000,blue,2020-08-21 18:30:08.000
250000,blue,2020-08-21 18:30:07.000
500000,blue,2020-08-21 18:30:06.000
250000,blue,2020-08-21 18:30:03.000
```
Notes:
It appears that when I remove `LowCardinality` from the color column, the ordering works as expected.
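A possible workaround sketch until this is fixed (untested; it assumes the issue is specific to sorting the LowCardinality values themselves) is to sort on a plain Nullable(String) copy of the column:
```sql
-- Untested sketch: sort on a CAST copy so the comparison is done on plain Nullable(String) values.
SELECT count(),
       color,
       toStartOfSecond(timestamp) AS `second`
FROM order_test
GROUP BY color, `second`
ORDER BY CAST(color AS Nullable(String)), `second` DESC
LIMIT 500;
```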
| https://github.com/ClickHouse/ClickHouse/issues/13958 | https://github.com/ClickHouse/ClickHouse/pull/14223 | 2d33a4029bc71a3ab5127330e8f8f74db1f3a24f | 2a514eae219839940a28d3b7678b9097fc421eff | 2020-08-21T18:35:58Z | c++ | 2020-08-29T15:25:26Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,926 | ["src/Common/UnicodeBar.cpp", "src/Common/UnicodeBar.h", "src/Common/ya.make", "tests/queries/0_stateless/01502_bar_overflow.reference", "tests/queries/0_stateless/01502_bar_overflow.sql"] | It's possible that function `bar` will try to allocate too much memory. | Not a bug (the user will get exception) but triggers assertion in debug build. See:
https://clickhouse-test-reports.s3.yandex.net/13922/dd4b8b96635beecaaa40cd635f56818242b3f2a3/fuzzer/fuzzer.log#fail1 | https://github.com/ClickHouse/ClickHouse/issues/13926 | https://github.com/ClickHouse/ClickHouse/pull/15028 | 3f5d7843f6a2f2a4bd284db3bc79fefe15b7ce8f | 7b35a6a59a52be8ff6d0d3e85869ca7270455f68 | 2020-08-20T17:45:04Z | c++ | 2020-09-20T16:04:15Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,911 | ["docs/en/sql-reference/statements/create/table.md", "src/Interpreters/InterpreterCreateQuery.cpp", "src/Storages/AlterCommands.cpp", "tests/queries/0_stateless/01462_test_codec_on_alias.reference", "tests/queries/0_stateless/01462_test_codec_on_alias.sql"] | CODEC is allowed for ALIAS column | **Describe the bug**
`CODEC` is allowed for `ALIAS` column
**How to reproduce**
``` sql
CREATE TABLE t102
(
`c0` ALIAS c1 CODEC(ZSTD),
`c1` String
)
ENGINE = Memory()
Ok.
DESCRIBE TABLE t102
┌─name─┬─type───┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c0 │ String │ ALIAS │ c1 │ │ ZSTD(1) │ │
│ c1 │ String │ │ │ │ │ │
└──────┴────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
**Expected behavior**
Error | https://github.com/ClickHouse/ClickHouse/issues/13911 | https://github.com/ClickHouse/ClickHouse/pull/14263 | 4d96510a294cd498d649051ed85187b9eb7f9484 | e28b477f79a38e8e2b9726f7a94dbd94ff1cce9b | 2020-08-20T10:37:15Z | c++ | 2020-09-01T06:42:53Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,907 | ["tests/queries/0_stateless/02210_toColumnTypeName_toLowCardinality_const.reference", "tests/queries/0_stateless/02210_toColumnTypeName_toLowCardinality_const.sql"] | toColumnTypeName(toLowCardinality(...)) does not work | **Describe the unexpected behaviour**
ClickHouse reports an error on `toColumnTypeName(toLowCardinality(...))`.
**How to reproduce**
```sql
SELECT toColumnTypeName(toLowCardinality(1))
Received exception from server (version 20.7.1):
Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: Cannot convert column `toColumnTypeName(toLowCardinality(1))` because it is non constant in source stream but must be constant in result.
```
**Additional context**
I am not sure whether this behavior is expected. In comparison, `select toColumnTypeName(toNullable(1))` works without any problem.
| https://github.com/ClickHouse/ClickHouse/issues/13907 | https://github.com/ClickHouse/ClickHouse/pull/34471 | 3a0e71ca14751a0f43ef99b7a6faa7f4bb1974a7 | 9a757c0ed1704301407c0d587efcc7e852c2bf60 | 2020-08-20T06:11:54Z | c++ | 2022-02-10T10:30:10Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,893 | ["tests/queries/0_stateless/01537_fuzz_count_equal.reference", "tests/queries/0_stateless/01537_fuzz_count_equal.sql"] | Crash with Null and countEqual | ```
SELECT NULL = countEqual(materialize([arrayJoin([NULL, NULL, NULL]), NULL AS x, arrayJoin([255, 1025, NULL, NULL]), arrayJoin([2, 1048576, NULL, NULL])]), materialize(x))
```
```
020.08.19 19:27:31.685853 [ 30223 ] {} <Trace> BaseDaemon: Received signal 11
2020.08.19 19:27:31.686133 [ 16976 ] {} <Fatal> BaseDaemon: ########################################
2020.08.19 19:27:31.686305 [ 16976 ] {} <Fatal> BaseDaemon: (version 20.8.1.1, build id: 27DCF58DA33035A9) (from thread 30226) (query_id: e051d487-5d00-481d-a724-a59ab0f9e8f0) Received signal Segmentation fault (11)
2020.08.19 19:27:31.686414 [ 16976 ] {} <Fatal> BaseDaemon: Address: 0x30 Access: read. Address not mapped to object.
2020.08.19 19:27:31.686486 [ 16976 ] {} <Fatal> BaseDaemon: Stack trace: 0xa96ba8b 0xe09f23f 0xe01cff7 0xe01b206 0xe01a4cf 0xb6eaa26 0xf61bf46 0xf61fe7d 0xfe79b4d 0xfa3c820 0xfd850b2 0xfdb4a8c 0xfdb3a60 0xfdb2683 0xfdb233d 0xfdba841 0xa859256 0xa85b37b 0x7ff6f13af6db 0x7ff6f0ccca3f
2020.08.19 19:27:31.687447 [ 16976 ] {} <Fatal> BaseDaemon: 3. /home/nik-kochetov/dev/ClickHouse/build-clang/../src/Columns/ColumnVector.h:33: DB::ColumnVector<unsigned int>::compareAt(unsigned long, unsigned long, DB::IColumn const&, int) const @ 0xa96ba8b in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.689404 [ 16976 ] {} <Fatal> BaseDaemon: 4. DB::ArrayIndexGenericImpl<DB::IndexCount, false>::vectorCase4(DB::IColumn const&, DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 15ul, 16ul> const&, DB::IColumn const&, DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 15ul, 16ul>&, DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 15ul, 16ul> const&, DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 15ul, 16ul> const&) @ 0xe09f23f in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.690925 [ 16976 ] {} <Fatal> BaseDaemon: 5. DB::FunctionArrayIndex<DB::IndexCount, DB::NameCountEqual>::executeGeneric(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long) const @ 0xe01cff7 in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.692386 [ 16976 ] {} <Fatal> BaseDaemon: 6. DB::FunctionArrayIndex<DB::IndexCount, DB::NameCountEqual>::perform(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long) const @ 0xe01b206 in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.693795 [ 16976 ] {} <Fatal> BaseDaemon: 7. DB::FunctionArrayIndex<DB::IndexCount, DB::NameCountEqual>::executeImpl(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) const @ 0xe01a4cf in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.695311 [ 16976 ] {} <Fatal> BaseDaemon: 8. DB::ExecutableFunctionAdaptor::execute(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xb6eaa26 in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.696226 [ 16976 ] {} <Fatal> BaseDaemon: 9. /home/nik-kochetov/dev/ClickHouse/build-clang/../contrib/libcxx/include/vector:461: DB::ExpressionAction::execute(DB::Block&, bool) const @ 0xf61bf46 in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.697187 [ 16976 ] {} <Fatal> BaseDaemon: 10. /home/nik-kochetov/dev/ClickHouse/build-clang/../src/Interpreters/ExpressionActions.cpp:684: DB::ExpressionActions::execute(DB::Block&, bool) const @ 0xf61fe7d in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.698256 [ 16976 ] {} <Fatal> BaseDaemon: 11. /home/nik-kochetov/dev/ClickHouse/build-clang/../src/Processors/Transforms/ExpressionTransform.cpp:0: DB::ExpressionTransform::transform(DB::Chunk&) @ 0xfe79b4d in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.700113 [ 16976 ] {} <Fatal> BaseDaemon: 12. /home/nik-kochetov/dev/ClickHouse/build-clang/../contrib/libcxx/include/type_traits:3695: DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) @ 0xfa3c820 in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.701221 [ 16976 ] {} <Fatal> BaseDaemon: 13. /home/nik-kochetov/dev/ClickHouse/build-clang/../src/Processors/ISimpleTransform.cpp:99: DB::ISimpleTransform::work() @ 0xfd850b2 in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.702426 [ 16976 ] {} <Fatal> BaseDaemon: 14. /home/nik-kochetov/dev/ClickHouse/build-clang/../src/Processors/Executors/PipelineExecutor.cpp:98: std::__1::__function::__func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, std::__1::allocator<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0>, void ()>::operator()() @ 0xfdb4a8c in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.703634 [ 16976 ] {} <Fatal> BaseDaemon: 15. /home/nik-kochetov/dev/ClickHouse/build-clang/../src/Processors/Executors/PipelineExecutor.cpp:563: DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0xfdb3a60 in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.704779 [ 16976 ] {} <Fatal> BaseDaemon: 16. /home/nik-kochetov/dev/ClickHouse/build-clang/../src/Processors/Executors/PipelineExecutor.cpp:0: DB::PipelineExecutor::executeImpl(unsigned long) @ 0xfdb2683 in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.705839 [ 16976 ] {} <Fatal> BaseDaemon: 17. /home/nik-kochetov/dev/ClickHouse/build-clang/../contrib/libcxx/include/memory:2587: DB::PipelineExecutor::execute(unsigned long) @ 0xfdb233d in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.706957 [ 16976 ] {} <Fatal> BaseDaemon: 18. /home/nik-kochetov/dev/ClickHouse/build-clang/../src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:0: std::__1::__function::__func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), std::__1::allocator<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()>, void ()>::operator()() @ 0xfdba841 in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.707149 [ 16976 ] {} <Fatal> BaseDaemon: 19. /home/nik-kochetov/dev/ClickHouse/build-clang/../contrib/libcxx/include/atomic:1036: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa859256 in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.707370 [ 16976 ] {} <Fatal> BaseDaemon: 20. /home/nik-kochetov/dev/ClickHouse/build-clang/../contrib/libcxx/include/memory:2615: void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()> >(void*) @ 0xa85b37b in /home/nik-kochetov/dev/ClickHouse/build-clang/programs/clickhouse
2020.08.19 19:27:31.707460 [ 16976 ] {} <Fatal> BaseDaemon: 21. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
2020.08.19 19:27:31.707647 [ 16976 ] {} <Fatal> BaseDaemon: 22. /build/glibc-2ORdQG/glibc-2.27/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:97: __clone @ 0x121a3f in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.27.so
```
From https://clickhouse-test-reports.s3.yandex.net/13887/25dd0fa4d67daaf0db8d0f16965a6bf1204b3666/fuzzer/fuzzer.log#fail1 | https://github.com/ClickHouse/ClickHouse/issues/13893 | https://github.com/ClickHouse/ClickHouse/pull/16453 | 6667261b023296054c1811d5d27a5c416e17a299 | 7c4b0e559d123b5e71c1c91c0dc0a24e90ceb9db | 2020-08-19T16:30:34Z | c++ | 2020-10-28T06:26:03Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,871 | ["src/IO/WriteBufferFromS3.cpp", "src/IO/WriteBufferFromS3.h", "src/Storages/StorageS3.cpp", "src/Storages/StorageS3.h", "tests/integration/test_storage_s3/test.py"] | S3 table engine compression doesn't work. | **Describe the bug**
Clickhouse expect last parameter of S3 table engine to be data format, not compression type.
**How to reproduce**
Clickhouse version 20.5.4.40
```
CREATE TABLE test_s3(S_SUPPKEY UInt32, S_NAME String, S_ADDRESS String, S_CITY LowCardinality(String), S_NATION LowCardinality(String), S_REGION LowCardinality(String), S_PHONE String) ENGINE=S3('https://s3.amazonaws.com/{some_bucket_path}.csv.gz','CSV','gzip');
SELECT * FROM test_s3;
Received exception from server (version 20.5.4):
Code: 73. DB::Exception: Received from localhost:9000. DB::Exception: Unknown format gzip.
0 rows in set. Elapsed: 0.001 sec.
```
**Error message and/or stacktrace**
```
Code: 73, e.displayText() = DB::Exception: Unknown format gzip (version 20.5.4.40 (official build)) (from 127.0.0.1:56140) (in query: select * from test_s3;), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x11b9acc0 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x9f3e2cd in /usr/bin/clickhouse
2. ? @ 0xf36cd04 in /usr/bin/clickhouse
3. DB::FormatFactory::getInput(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::ReadBuffer&, DB::Block const&, DB::Context const&, unsigned long, std::__1::function<void ()>) const @ 0xf36b0c9 in /usr/bin/clickhouse
4. DB::StorageS3::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, DB::SelectQueryInfo const&, DB::Context const&, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0xefb7520 in /usr/bin/clickhouse
5. DB::ReadFromStorageStep::ReadFromStorageStep(DB::TableStructureReadLockHolder, DB::SelectQueryOptions, std::__1::shared_ptr<DB::IStorage>, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::Context>, DB::QueryProcessingStage::Enum, unsigned long, unsigned long) @ 0xf68f3fd in /usr/bin/clickhouse
6. DB::InterpreterSelectQuery::executeFetchColumns(DB::QueryProcessingStage::Enum, DB::QueryPlan&, std::__1::shared_ptr<DB::PrewhereInfo> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xea608c1 in /usr/bin/clickhouse
7. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>) @ 0xea647a2 in /usr/bin/clickhouse
8. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0xea65d54 in /usr/bin/clickhouse
9. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0xebce0f4 in /usr/bin/clickhouse
10. DB::InterpreterSelectWithUnionQuery::execute() @ 0xebce44c in /usr/bin/clickhouse
11. ? @ 0xed3c7ed in /usr/bin/clickhouse
12. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0xed3fe2a in /usr/bin/clickhouse
13. DB::TCPHandler::runImpl() @ 0xf36443c in /usr/bin/clickhouse
14. DB::TCPHandler::run() @ 0xf365190 in /usr/bin/clickhouse
15. Poco::Net::TCPServerConnection::start() @ 0x11ab8aeb in /usr/bin/clickhouse
16. Poco::Net::TCPServerDispatcher::run() @ 0x11ab8f7b in /usr/bin/clickhouse
17. Poco::PooledThread::run() @ 0x11c37aa6 in /usr/bin/clickhouse
18. Poco::ThreadImpl::runnableEntry(void*) @ 0x11c32ea0 in /usr/bin/clickhouse
19. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
20. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
```
**Additional context**
Auto-detection of the compression type from the file extension doesn't work either.
```
CREATE TABLE test_s3(S_SUPPKEY UInt32, S_NAME String, S_ADDRESS String, S_CITY LowCardinality(String), S_NATION LowCardinality(String), S_REGION LowCardinality(String), S_PHONE String) ENGINE=S3('https://s3.amazonaws.com/{some_bucket_path}.csv.gz','CSV');
Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected ',' before: '�\b\bvY!_\0supplier.csv\0��G��J����*��\\���R9笝r�Y��5���R��=U�;�3fh6�\b\0u��~������q��i|�l��d}4=.�מ[(F��WS�V��~�����R�\\"���}���8�M}�s�����T�?��Ҽ�}��"}�': (at row 1)
Row 1:
Column 0, name: S_SUPPKEY, type: UInt32, ERROR: text "<0x1F>�<BACKSPACE><BACKSPACE>vY!_<ASCII NUL><0x03>" is not like UInt32
: While executing S3 (version 20.5.4.40 (official build)) (from 127.0.0.1:56140) (in query: select * from test_s3;), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x11b9acc0 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x9f3e2cd in /usr/bin/clickhouse
2. ? @ 0x9f808fd in /usr/bin/clickhouse
3. ? @ 0xf45bd5d in /usr/bin/clickhouse
4. DB::CSVRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xf45d179 in /usr/bin/clickhouse
5. DB::IRowInputFormat::generate() @ 0xf42cab1 in /usr/bin/clickhouse
6. DB::ISource::work() @ 0xf3aa2bb in /usr/bin/clickhouse
7. DB::InputStreamFromInputFormat::readImpl() @ 0xf36fe8d in /usr/bin/clickhouse
8. DB::IBlockInputStream::read() @ 0xe65fb3d in /usr/bin/clickhouse
9. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xf374738 in /usr/bin/clickhouse
10. ? @ 0xf375494 in /usr/bin/clickhouse
11. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x9f6ca37 in /usr/bin/clickhouse
12. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x9f6d1aa in /usr/bin/clickhouse
13. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x9f6bf47 in /usr/bin/clickhouse
14. ? @ 0x9f6a433 in /usr/bin/clickhouse
15. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
16. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
Received exception from server (version 20.5.4):
Code: 27. DB::Exception: Received from localhost:9000. DB::Exception: Cannot parse input: expected ',' before: '�\b\bvY!_\0supplier.csv\0��G��J����*��\\���R9笝r�Y��5���R��=U�;�3fh6�\b\0u��~������q��i|�l��d}4=.�מ[(F��WS�V��~�����R�\\"���}���8�M}�s�����T�?��Ҽ�}��"}�': (at row 1)
Row 1:
Column 0, name: S_SUPPKEY, type: UInt32, ERROR: text "<0x1F>�<BACKSPACE><BACKSPACE>vY!_<ASCII NUL><0x03>" is not like UInt32
: While executing S3.
0 rows in set. Elapsed: 0.702 sec.
```
If we replace S3 with URL in both CREATE TABLE queries, the SELECT queries work fine.
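For reference, a sketch of that URL-based variant (untested here; it assumes the same columns and the same gzipped object, with only the engine changed):
```
-- Untested sketch of the URL-based variant mentioned above; only the engine changes.
CREATE TABLE test_url(S_SUPPKEY UInt32, S_NAME String, S_ADDRESS String, S_CITY LowCardinality(String), S_NATION LowCardinality(String), S_REGION LowCardinality(String), S_PHONE String) ENGINE=URL('https://s3.amazonaws.com/{some_bucket_path}.csv.gz','CSV');
SELECT * FROM test_url;
```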
| https://github.com/ClickHouse/ClickHouse/issues/13871 | https://github.com/ClickHouse/ClickHouse/pull/15376 | 2f95609f98a1c690f81b198b8ec1fe238dd0c6f9 | 9808d0be818e1c764ba11eccf19b3e8d3929ba83 | 2020-08-18T22:18:40Z | c++ | 2020-10-01T01:30:32Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,864 | ["src/Storages/MergeTree/MergeTreeReaderStream.cpp"] | Performance degradation between 20.1 and 20.3 for large number of very short queries with LowCardinality fields | Schema
```
ATTACH TABLE default.test (`col1` UUID, `col2` LowCardinality(String)) ENGINE = MergeTree ORDER BY (col1, col2) SETTINGS index_granularity = 128;
```
Data sample: https://www.dropbox.com/s/untu96ad69a2thl/test_data.tar.gz?dl=0
Query:
```
echo "SELECT col1, col2 FROM test PREWHERE col1 = '042fecec-ea68-1b26-00a4-a4f3b3987574'" | clickhouse-benchmark -c 1 -i 2000 -d 0
```
Results - look on QPS metric:
```
### 20.1.16.120
Loaded 1 queries.
Queries executed: 2000.
localhost:9000, queries 2000, QPS: 597.941, RPS: 76536.485, MiB/s: 1.168, result RPS: 0.000, result MiB/s: 0.000.
0.000% 0.001 sec.
10.000% 0.001 sec.
20.000% 0.002 sec.
30.000% 0.002 sec.
40.000% 0.002 sec.
50.000% 0.002 sec.
60.000% 0.002 sec.
70.000% 0.002 sec.
80.000% 0.002 sec.
90.000% 0.002 sec.
95.000% 0.002 sec.
99.000% 0.002 sec.
99.900% 0.003 sec.
99.990% 0.015 sec.
### 20.3.2.1
Loaded 1 queries.
Queries executed: 2000.
localhost:9000, queries 2000, QPS: 322.157, RPS: 41236.084, MiB/s: 0.629, result RPS: 0.000, result MiB/s: 0.000.
0.000% 0.001 sec.
10.000% 0.003 sec.
20.000% 0.003 sec.
30.000% 0.003 sec.
40.000% 0.003 sec.
50.000% 0.003 sec.
60.000% 0.003 sec.
70.000% 0.003 sec.
80.000% 0.003 sec.
90.000% 0.003 sec.
95.000% 0.004 sec.
99.000% 0.004 sec.
99.900% 0.016 sec.
99.990% 0.017 sec.
## 20.6.3.28
Loaded 1 queries.
Queries executed: 2000.
localhost:9000, queries 2000, QPS: 308.133, RPS: 39441.007, MiB/s: 0.602, result RPS: 0.000, result MiB/s: 0.000.
0.000% 0.001 sec.
10.000% 0.002 sec.
20.000% 0.003 sec.
30.000% 0.003 sec.
40.000% 0.003 sec.
50.000% 0.003 sec.
60.000% 0.003 sec.
70.000% 0.003 sec.
80.000% 0.004 sec.
90.000% 0.004 sec.
95.000% 0.004 sec.
99.000% 0.004 sec.
99.900% 0.016 sec.
99.990% 0.017 sec.
```
Bisecting on releases:
- v20.2.1.2442-testing (21-Feb-2020 18:15) - last good
- v20.3.1.2495-testing (26-Feb-2020 18:27) - first bad
P.S. The `experimental_use_processors` setting does not change the situation.
| https://github.com/ClickHouse/ClickHouse/issues/13864 | https://github.com/ClickHouse/ClickHouse/pull/14129 | 65a69ff6ad674df1a2d436c07819b7bd90198089 | 9baa0fbf816ba498c870672b82c13caa53d3f573 | 2020-08-18T16:27:47Z | c++ | 2020-08-27T12:07:19Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,857 | ["src/Functions/array/arrayCompact.cpp", "tests/queries/0_stateless/01025_array_compact_generic.reference", "tests/queries/0_stateless/01025_array_compact_generic.sql"] | arrayCompact UB for nan, nan + Null | in some cases nan === nan
```
SELECT arrayCompact([1, 1, nan, nan, 2, 3, 3, 3]) AS x
┌─x───────────────┐
│ [1,nan,nan,2,3] │
└─────────────────┘
SELECT arrayCompact([1, 1, nan, nan, 2, 3, 3, inf, inf, 3]) AS x
┌─x─────────────────────┐
│ [1,nan,nan,2,3,inf,3] │
└───────────────────────┘
SELECT arrayCompact([1, 1, nan, nan, 2, 3, 3, inf, inf, 3, NULL, NULL]) AS x
┌─x──────────────────────┐
│ [1,nan,2,3,inf,3,NULL] │ ???
└────────────────────────┘
SELECT arrayCompact([1, 1, inf, nan, NULL, nan, nan]) AS x
┌─x────────────────────┐
│ [1,inf,nan,NULL,nan] │ ???
└──────────────────────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/13857 | https://github.com/ClickHouse/ClickHouse/pull/13868 | 663ff938c208bdb97183d357f95d42891df00c0b | 6d72777b6e9f9dc83b0a9cd2b26ba5fabecb6b64 | 2020-08-18T13:07:58Z | c++ | 2020-08-19T08:53:58Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,845 | ["tests/queries/0_stateless/01765_tehran_dst.reference", "tests/queries/0_stateless/01765_tehran_dst.sql"] | data not from oracle in date 2020-03-21 (Daylight Saving Time) | when i try to import data from oracle in 2020-03-21 , 30 % of records import and others ignored.
my clickhouse DateTime timezone is Asia-Tehran | https://github.com/ClickHouse/ClickHouse/issues/13845 | https://github.com/ClickHouse/ClickHouse/pull/21995 | 7a48ce6b794b1259898d919b24ae0abbc25cac06 | 7c0dba1b0ce30ff216f82fb1fa87b1d11b88b5d6 | 2020-08-18T01:45:46Z | c++ | 2021-03-23T17:04:13Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,832 | ["src/Interpreters/ExternalLoader.cpp", "tests/integration/test_catboost_model_first_evaluate/__init__.py", "tests/integration/test_catboost_model_first_evaluate/config/models_config.xml", "tests/integration/test_catboost_model_first_evaluate/model/libcatboostmodel.so", "tests/integration/test_catboost_model_first_evaluate/model/model.bin", "tests/integration/test_catboost_model_first_evaluate/model/model_config.xml", "tests/integration/test_catboost_model_first_evaluate/test.py"] | First modelEvaluate() call on a newly created CatBoost model deadlocks forever | Hello,
so I'm working to integrate CatBoost model with our ClickHouse cluster. It appears that, whenever we create a new XML model file in the configured path, the **first** call to the new model never returns (this is true both of the HTTP interface as well as via clickhouse-client).
Independently from whether the locked statement gets cancelled, subsequent calls succeed immediately.
```
16e8bdb5bbb9 :) select version();
SELECT version()
┌─version()─┐
│ 20.6.3.28 │
└───────────┘
1 rows in set. Elapsed: 0.006 sec.
```
Is this known behavior? If it helps, my models look like this:
```
<models>
<model>
<type>catboost</type>
<name>ctr_predictor</name>
<path>/catboost/my_predictor</path>
<lifetime>0</lifetime>
</model>
</models>
``` | https://github.com/ClickHouse/ClickHouse/issues/13832 | https://github.com/ClickHouse/ClickHouse/pull/21844 | 36898bdc4a2df3ec38e0e6befa8589564a3cef9a | 090e558da4a93741dbe8de5415781fe1951d82c1 | 2020-08-17T10:20:37Z | c++ | 2021-03-23T12:01:04Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,804 | ["src/Functions/FunctionsComparison.h", "tests/queries/0_stateless/01560_DateTime_and_DateTime64_comparision.reference", "tests/queries/0_stateless/01560_DateTime_and_DateTime64_comparision.sql", "tests/queries/0_stateless/01561_Date_and_DateTime64_comparision.reference", "tests/queries/0_stateless/01561_Date_and_DateTime64_comparision.sql"] | Comparison DateTime64 to DateTime / Date | **Use case**
Comparison between DateTime64 and DateTime / Date types is allowed but the behavior is confusing (it looks like used numerical comparison):
* from one side for zero-precision it works as expected (values compared as datetime-based ones):
```sql
SELECT toTypeName(dt64) AS dt64_typename, dt64 = dt
FROM
(
SELECT
toDateTime64(toStartOfInterval(now(), toIntervalSecond(1), 'UTC'), 0, 'UTC') AS dt64,
toStartOfInterval(now(), toIntervalSecond(1), 'UTC') AS dt
)
/*
┌─dt64_typename────────┬─equals(dt64, dt)─┐
│ DateTime64(0, 'UTC') │ 1 │
└──────────────────────┴──────────────────┘
*/
```
* from another side for non-zero-precision its behavior is unexpected if not used explicit type conversion:
```sql
SELECT
toTypeName(dt64) AS dt64_typename,
dt64 = dt,
toDateTime(dt64) = dt,
dt64 = toDateTime64(dt, 1, 'UTC')
FROM
(
SELECT
toDateTime64(toStartOfInterval(now(), toIntervalSecond(1), 'UTC'), 1, 'UTC') AS dt64,
toStartOfInterval(now(), toIntervalSecond(1), 'UTC') AS dt
)
/*
┌─dt64_typename────────┬─equals(dt64, dt)─┬─equals(toDateTime(dt64), dt)─┬─equals(dt64, toDateTime64(dt, 1, 'UTC'))─┐
│ DateTime64(1, 'UTC') │ 0 │ 1 │ 1 │
└──────────────────────┴──────────────────┴──────────────────────────────┴──────────────────────────────────────────┘
*/
```
**Describe the solution you'd like**
I think this comparison either:
- should be not allowed and throw an exception to ask for user uses explicit conversion
- or make valid implicit conversion relying on datetime/date essence of values
| https://github.com/ClickHouse/ClickHouse/issues/13804 | https://github.com/ClickHouse/ClickHouse/pull/18050 | b93d0f3922369d4cf0b51e41ffb15f742d014ed4 | 7985ed9778ff9f4bebc22be63dac420482a9f1bc | 2020-08-16T08:47:39Z | c++ | 2020-12-22T02:12:14Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,750 | ["docker/test/fuzzer/query-fuzzer-tweaks-users.xml"] | JIT, least/greatest, vector range check | ```
will now parse 'SELECT toTypeName(least(-9223372036854775808, 18446744073709551615)), toTypeName(greatest(-9223372036854775808, 18446744073709551615));
'
fuzzing step 0 out of 1000 for query at pos 0
[linux-ubuntu-14-04-trusty] 2020.08.14 20:44:02.223877 [ 246 ] <Fatal> BaseDaemon: ########################################
[linux-ubuntu-14-04-trusty] 2020.08.14 20:44:02.224446 [ 246 ] <Fatal> BaseDaemon: (version 20.8.1.4372, build id: 090B158C1AEE3ED0) (from thread 97) (query_id: a6618b95-f80d-468c-9789-66df037b407d) Received signal Aborted (6)
[linux-ubuntu-14-04-trusty] 2020.08.14 20:44:02.224703 [ 246 ] <Fatal> BaseDaemon:
[linux-ubuntu-14-04-trusty] 2020.08.14 20:44:02.224921 [ 246 ] <Fatal> BaseDaemon: Stack trace: 0x7fd18ad72f47 0x7fd18ad748b1 0x25e41d5c 0x18276892 0x1a6b5eb0 0x196542b9 0x195e99fc 0x19666d17 0x1fc3f7ce 0x1fc3f530 0x1fc3e5aa 0x1fc270f8 0x1fc2aad5 0x1ff5d80c 0x1ff5d579 0x1fca0d6b 0x1fc9e154 0x1fc9ad93 0x1fc99c7c 0x1ff9c4c1 0x1ff9b07d 0x1fbc9c55 0x1fbc8b7f 0x2015a745 0x2015996a 0x20a680c6 0x20a6f558 0x245f279c
[linux-ubuntu-14-04-trusty] 2020.08.14 20:44:02.225310 [ 246 ] <Fatal> BaseDaemon: 4. /build/glibc-2ORdQG/glibc-2.27/signal/../sysdeps/unix/sysv/linux/raise.c:51: __GI_raise @ 0x3ef47 in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.27.so
[linux-ubuntu-14-04-trusty] 2020.08.14 20:44:02.225584 [ 246 ] <Fatal> BaseDaemon: 5. /build/glibc-2ORdQG/glibc-2.27/stdlib/abort.c:81: __GI_abort @ 0x408b1 in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.27.so
[linux-ubuntu-14-04-trusty] 2020.08.14 20:44:02.291501 [ 246 ] <Fatal> BaseDaemon: 6. /build/obj-x86_64-linux-gnu/../contrib/libcxx/src/debug.cpp:36: std::__1::__libcpp_abort_debug_function(std::__1::__libcpp_debug_info const&) @ 0x25e41d5c in /workspace/clickhouse
[linux-ubuntu-14-04-trusty] 2020.08.14 20:44:02.293166 [ 246 ] <Fatal> BaseDaemon: 7. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:1557: std::__1::vector<std::__1::shared_ptr<DB::IDataType const>, std::__1::allocator<std::__1::shared_ptr<DB::IDataType const> > >::operator[](unsigned long) const @ 0x18276892 in /workspace/clickhouse
[linux-ubuntu-14-04-trusty] 2020.08.14 20:44:02.304407 [ 246 ] <Fatal> BaseDaemon: 8. /build/obj-x86_64-linux-gnu/../src/Functions/FunctionBinaryArithmetic.h:1134: DB::FunctionBinaryArithmetic<DB::GreatestImpl, DB::NameGreatest, true>::isCompilableImpl(std::__1::vector<std::__1::shared_ptr<DB::IDataType const>, std::__1::allocator<std::__1::shared_ptr<DB::IDataType const> > > const&) const @ 0x1a6b5eb0 in /workspace/clickhouse
[linux-ubuntu-14-04-trusty] 2020.08.14 20:44:02.310583 [ 246 ] <Fatal> BaseDaemon: 9. /build/obj-x86_64-linux-gnu/../src/Functions/IFunction.cpp:553: DB::IFunction::isCompilable(std::__1::vector<std::__1::shared_ptr<DB::IDataType const>, std::__1::allocator<std::__1::shared_ptr<DB::IDataType const> > > const&) const @ 0x196542b9 in /workspace/clickhouse
[linux-ubuntu-14-04-trusty] 2020.08.14 20:44:02.316982 [ 246 ] <Fatal> BaseDaemon: 10. /build/obj-x86_64-linux-gnu/../src/Functions/IFunctionAdaptors.h:181: DB::DefaultFunction::isCompilable() const @ 0x195e99fc in /workspace/clickhouse
[linux-ubuntu-14-04-trusty] 2020.08.14 20:44:02.324300 [ 246 ] <Fatal> BaseDaemon: 11. /build/obj-x86_64-linux-gnu/../src/Functions/IFunctionAdaptors.h:54: DB::FunctionBaseAdaptor::isCompilable() const @ 0x19666d17 in /workspace/clickhouse
[linux-ubuntu-14-04-trusty] 2020.08.14 20:44:02.368541 [ 246 ] <Fatal> BaseDaemon: 12. /build/obj-x86_64-linux-gnu/../src/Interpreters/ExpressionJIT.cpp:565: DB::isCompilable(DB::IFunctionBase const&) @ 0x1fc3f7ce in /workspace/clickhouse
```
| https://github.com/ClickHouse/ClickHouse/issues/13750 | https://github.com/ClickHouse/ClickHouse/pull/18777 | 6430fd3b2ab60b5b68d8a97252b3864eac1cc80b | cb8b6ff3fadda5a9c37cab8e4d63085ef0cc83e8 | 2020-08-15T08:03:31Z | c++ | 2021-01-06T04:18:36Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,742 | ["docs/en/sql-reference/aggregate-functions/reference/index.md", "docs/en/sql-reference/aggregate-functions/reference/quantileexact.md", "src/AggregateFunctions/AggregateFunctionQuantile.cpp", "src/AggregateFunctions/AggregateFunctionQuantile.h", "src/AggregateFunctions/QuantileExact.h", "tests/queries/0_stateless/00700_decimal_aggregates.reference", "tests/queries/0_stateless/00700_decimal_aggregates.sql"] | quantileExactLow, quantileExactHight | Users want to get rounding mode in `quantileExact` equivalent to "median_low" from Python:
https://docs.python.org/3/library/statistics.html#statistics.median_low
while current implementation works equivalent to "median_high" (?) | https://github.com/ClickHouse/ClickHouse/issues/13742 | https://github.com/ClickHouse/ClickHouse/pull/13818 | e3c919ec19157f24b1ab2e121ba0693203d3bebe | e4fc48254a563874157f28f0535649ee684475dd | 2020-08-14T23:28:51Z | c++ | 2020-08-24T19:51:30Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,705 | ["src/AggregateFunctions/IAggregateFunction.h", "src/Interpreters/Aggregator.cpp", "tests/queries/0_stateless/01441_array_combinator.reference", "tests/queries/0_stateless/01441_array_combinator.sql"] | A bug in GROUP BY in version 20.7 (unreleased) | **Describe the bug**
```
SELECT number % 100 AS k, sumArray(emptyArrayUInt8()) AS v FROM numbers(10) GROUP BY k
```
Fails under ASan.
| https://github.com/ClickHouse/ClickHouse/issues/13705 | https://github.com/ClickHouse/ClickHouse/pull/13709 | 9a640ff210541cf52fef44bcccf0a6f0a2c16287 | 9e33f41dece5be2fe925414da6a2c44547d5b0c6 | 2020-08-14T05:12:01Z | c++ | 2020-08-14T22:10:34Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,666 | ["src/Interpreters/DuplicateDistinctVisitor.h", "src/Interpreters/TreeOptimizer.cpp", "tests/queries/0_stateless/01305_duplicate_order_by_and_distinct.reference", "tests/queries/0_stateless/01306_disable_duplicate_order_by_and_distinct_optimize_for_distributed_table.reference", "tests/queries/0_stateless/01306_disable_duplicate_order_by_and_distinct_optimize_for_distributed_table.sql", "tests/queries/0_stateless/01455_duplicate_distinct_optimization.reference", "tests/queries/0_stateless/01455_duplicate_distinct_optimization.sql"] | DISTINCT works incorrectly if was already applied to one of the subqueries | Hello, there. Run into the problem with DISTINCT expression. It works incorrectly on the main query if it was already applied to one of the subqueries bond with UNION ALL.
**How to reproduce**
Server version: 20.5.4.40. 19.17.6.36 was free of the bug.
Using Datagrip for running queries.
Non-default settings:
```
input_format_defaults_for_omitted_fields: 1
decimal_check_overflow: 0
joined_subquery_requires_alias: 0
```
Consider following query
```
select distinct number
from ((select number from numbers(10)) union all (select number from numbers(20)));
```
It works fine, returning 20 rows as expected.
But if we add distinct to subqueries (it doesn't matter to the first, the second or both)
```
select distinct number
from ((select distinct number from numbers(10)) union all (select distinct number from numbers(20)));
```
Then in 20.5.4.40 we'll see 30 rows with numbers from 0 to 9 duplicated.
However, `count(distinct number)` returns the correct number of rows.
| https://github.com/ClickHouse/ClickHouse/issues/13666 | https://github.com/ClickHouse/ClickHouse/pull/13925 | 6a164634d7a5ea5ddcafc39a8455ace136f211a2 | 2e6ff0c5ec0b5d39781f8602769358c57af39a66 | 2020-08-13T06:14:01Z | c++ | 2020-08-24T19:33:22Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,654 | ["base/common/LineReader.cpp", "tests/queries/0_stateless/01599_multiline_input_and_singleline_comments.reference", "tests/queries/0_stateless/01599_multiline_input_and_singleline_comments.sh"] | client interactive mode line comments (-- ) does not work anymore. | 20.7.1.4189
```
clickhouse-client -m
:) select 1
:-] ---abbbb
:-] , 2 b
:-] ;
SELECT 1
┌─1─┐
│ 1 │
└───┘
```
19.13.7.57
```
clickhouse-client -m
select 1
:-] --xxxx
:-] ,2 b
:-] ;
SELECT
1,
2 AS b
┌─1─┬─b─┐
│ 1 │ 2 │
└───┴───┘
``` | https://github.com/ClickHouse/ClickHouse/issues/13654 | https://github.com/ClickHouse/ClickHouse/pull/17565 | df90cbd7d34059d22e16c6ab4775f43f16e6faba | 46e685e1b8e3e1150f978e885670dd058318924e | 2020-08-12T17:22:54Z | c++ | 2020-11-30T06:38:41Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,653 | ["tests/queries/0_stateless/02468_has_any_tuple.reference", "tests/queries/0_stateless/02468_has_any_tuple.sql"] | Unexpected type inequality | select [(toUInt8(3), toUInt8(3))] = [(toInt16(3), toInt16(3))] -- true
select hasAny([(toInt16(3), toInt16(3))],[(toInt16(3), toInt16(3))]) -- true
select arrayFilter(x -> x = (toInt16(3), toInt16(3)), arrayZip([toUInt8(3)], [toUInt8(3)])) -- [(3,3)]
select hasAny([(toUInt8(3), toUInt8(3))],[(toInt16(3), toInt16(3))]) -- false, odd man out
I would have expected the last one to return true (1). I am being explicit with types to show the issue. This actually came up when doing something more like:
-- x.a and x.b are columns of type array(Int16)
-- clickhouse makes the constant literal 3 have type UInt8
select hasAny(arrayZip(x.a, x.b), (3,3)) from table
The query above never returns true (1). I can make it work with explicit casting.
select hasAny(arrayZip(x.a, x.b), (toInt16(3),toInt16(3))) from table
But, that feels a bit heavy and inconsistent.
Is this maybe related to covariant typing for clickhouse functions? Is it expected behavior? Is there a better workaround than the explicit casting. | https://github.com/ClickHouse/ClickHouse/issues/13653 | https://github.com/ClickHouse/ClickHouse/pull/42512 | bd80e6a10b0b7e2f539e79419b160cef9a775141 | 8c5e6a8d6c4e043ea4a56e3ebce516d22b026d93 | 2020-08-12T16:04:15Z | c++ | 2022-10-20T22:03:36Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,634 | ["src/Columns/ColumnVector.h", "src/Functions/if.cpp", "tests/queries/0_stateless/01475_mutation_with_if.reference", "tests/queries/0_stateless/01475_mutation_with_if.sql"] | Updating a Nullable column to a value would sets another value if `where` is used | Updating a Nullable column value by using alter .... where ... causes to set another value!
ClickHouse version: 20.6.3.28
Engine: MergeTree
OS: Ubuntu 18.04.3 LTS
Steps to reproduce:
```sql
CREATE TABLE table1(
id int,
price Nullable(Int32)
)
ENGINE = MergeTree()
PARTITION BY id
ORDER BY (id);
INSERT INTO table1 (id, price) VALUES (1, 100);
ALTER TABLE table1 update price = 150 where id=1;
SELECT * FROM table1;
```
We expect the price to be `150`, but it's `93148480`:
```
┌─id─┬────price─┐
│ 1 │ 93148480 │
└────┴──────────┘
```
Using `Decimal(9,2)` field instead of `Int32` would set the value `932342.88` always. It seems there's a fixed value for each data type.
If we don't use such a `where` clause, it works correctly:
```sql
ALTER TABLE db1.table1 update price = 150 where 1=1;
┌─id─┬─price─┐
│ 1 │ 150 │
└────┴───────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/13634 | https://github.com/ClickHouse/ClickHouse/pull/14646 | 3e00d64ebf38218a3210b8be42f73858bbb804c4 | 336430d3c27bfcfd7b45d314fea62dbaf66c3852 | 2020-08-12T04:38:06Z | c++ | 2020-09-14T06:51:40Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,630 | ["src/Formats/FormatFactory.cpp", "src/Processors/Formats/Impl/LineAsStringRowInputFormat.cpp", "src/Processors/Formats/Impl/LineAsStringRowInputFormat.h", "src/Processors/ya.make", "tests/queries/0_stateless/01460_line_as_string_format.reference", "tests/queries/0_stateless/01460_line_as_string_format.sh"] | LineAsString format. | Treat every line of input as a single String field.
Similar to `JSONAsString`. | https://github.com/ClickHouse/ClickHouse/issues/13630 | https://github.com/ClickHouse/ClickHouse/pull/13846 | e9be2f14ea8ac45f11c7c65b6c36646b64a5b390 | a39ba57e8c4e72781e22c7b45a8f30efab1d3e75 | 2020-08-12T00:00:54Z | c++ | 2020-09-10T14:10:47Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,629 | ["src/Formats/FormatFactory.cpp", "src/Formats/FormatFactory.h", "src/Processors/Formats/Impl/ODBCDriverBlockOutputFormat.cpp", "src/Processors/Formats/Impl/ODBCDriverBlockOutputFormat.h", "src/Processors/ya.make"] | Deprecate ODBCDriver format. | Leave only ODBCDriver2. | https://github.com/ClickHouse/ClickHouse/issues/13629 | https://github.com/ClickHouse/ClickHouse/pull/13847 | 2e6ff0c5ec0b5d39781f8602769358c57af39a66 | e3c919ec19157f24b1ab2e121ba0693203d3bebe | 2020-08-11T23:57:00Z | c++ | 2020-08-24T19:34:52Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,584 | ["programs/client/Client.cpp", "tests/queries/0_stateless/00964_live_view_watch_events_heartbeat.py", "tests/queries/0_stateless/00965_live_view_watch_heartbeat.py"] | SELECT ... FORMAT JSON is not a valid JSON. | **How to reproduce**
```
:) CREATE TABLE `test` (`data` Int64) ENGINE=MergeTree() ORDER BY(`data`);
CREATE TABLE test
(
`data` Int64
)
ENGINE = MergeTree()
ORDER BY data
Ok.
0 rows in set. Elapsed: 0.007 sec.
```
```
:) INSERT INTO `test` VALUES(100);
INSERT INTO test VALUES
Ok.
1 rows in set. Elapsed: 0.002 sec.
```
```
:) SELECT * FROM `test`;
SELECT *
FROM test
┌─data─┐
│ 100 │
└──────┘
1 rows in set. Elapsed: 0.002 sec.
```
```
:) SELECT * FROM `test` FORMAT JSON;
SELECT *
FROM test
FORMAT JSON
{
"meta":
[
{
"name": "data",
"type": "Int64"
}
],
"data":
[
{
"data": "100"
],
"rows": 1,
"statistics":
{
"elapsed": 0.001765624,
"rows_read": 1,
"bytes_read": 8
}
}
1 rows in set. Elapsed: 0.005 sec.
```
---
**Which ClickHouse server version to use**
I am using the clickhouse-server docker image. Image ID: `3aa412f816ec`
```
# clickhouse-client
ClickHouse client version 20.6.3.28 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.6.3 revision 54436.
```
---
**Expected behavior**
In the `FORMAT JSON` output, I expected the `data` field to be a valid JSON. But it is not. The closing curly bracket for the object is missing.
**Error message and/or stacktrace**

| https://github.com/ClickHouse/ClickHouse/issues/13584 | https://github.com/ClickHouse/ClickHouse/pull/13691 | c157b7a6858fa2de229721a6667213274388e561 | 318f14b95e0bbc02d1a5f07241d8f2c1c3d4d281 | 2020-08-10T15:05:45Z | c++ | 2020-08-26T10:25:25Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,477 | ["tests/queries/0_stateless/01633_limit_fuzz.reference", "tests/queries/0_stateless/01633_limit_fuzz.sql"] | limit + offset: PODArray.h:342 assertion failed | ```
SELECT number, 1 AS k FROM numbers(100000) ORDER BY k, number LIMIT 1025, 1023
clickhouse-server: /home/akuzm/ch2/ch/src/Common/PODArray.h:342: const T &DB::PODArray<char8_t, 4096, Allocator<false, false>, 15, 16>::operator[](ssize_t) const [T = char8_t, initial_bytes = 4096, TAllocator = Allocator<false, false>, pad_right_ = 15, pad_left_ = 16]: Assertion `(n >= (static_cast<ssize_t>(pad_left_) ? -1 : 0)) && (n <= static_cast<ssize_t>(this->size()))' failed.
2020.08.07 16:47:01.722895 [ 596571 ] {} <Trace> BaseDaemon: Received signal 6
2020.08.07 16:47:01.723330 [ 596627 ] {} <Fatal> BaseDaemon: ########################################
2020.08.07 16:47:01.724115 [ 596627 ] {} <Fatal> BaseDaemon: (version 20.7.1.1, build id: D3FC167BA205D4A1) (from thread 596624) (query_id: e98718aa-3f2f-4de9-bd8a-8bf0acd78181) Received signal Aborted (6)
2020.08.07 16:47:01.724289 [ 596627 ] {} <Fatal> BaseDaemon:
2020.08.07 16:47:01.724521 [ 596627 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f13c1e7418b 0x7f13c1e53859 0x7f13c1e53729 0x7f13c1e64f36 0x7f13c0aae23d 0x7f13bbb5f52f 0x7f13bbb60c50 0x7f13bbb6015c 0x7f13bbb5f5b1 0x7f13b36ed2c9 0x7f13b0646a29 0x7f13b0646cac 0x7f13b3287c45 0x7f13b10ceeea 0x7f13b0f1132c 0x7f13b0f1128f 0x7f13b0f1124d 0x7f13b0f111fd 0x7f13b0f111cd 0x7f13b0f1031e 0x7f13c65de8c5 0x7f13c65de865 0x7f13b0f0ec35 0x7f13b0f0f429 0x7f13b0f0d970 0x7f13b0f0d046 0x7f13b0f3003d 0x7f13b0f2ffa2
2020.08.07 16:47:01.724891 [ 596627 ] {} <Fatal> BaseDaemon: 4. /build/glibc-YYA7BZ/glibc-2.31/signal/../sysdeps/unix/sysv/linux/raise.c:51: gsignal @ 0x4618b in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.31.so
2020.08.07 16:47:01.725087 [ 596627 ] {} <Fatal> BaseDaemon: 5. /build/glibc-YYA7BZ/glibc-2.31/stdlib/abort.c:81: abort @ 0x25859 in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.31.so
2020.08.07 16:47:01.725345 [ 596627 ] {} <Fatal> BaseDaemon: 6. /build/glibc-YYA7BZ/glibc-2.31/intl/loadmsgcat.c:509: _nl_load_domain.cold @ 0x25729 in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.31.so
2020.08.07 16:47:01.725666 [ 596627 ] {} <Fatal> BaseDaemon: 7. ? @ 0x36f36 in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.31.so
2020.08.07 16:47:01.727287 [ 596627 ] {} <Fatal> BaseDaemon: 8. /home/akuzm/ch2/ch/src/Common/PODArray.h:0: DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 15ul, 16ul>::operator[](long) const @ 0x25bd23d in /home/akuzm/ch2/build-clang10/src/AggregateFunctions/libclickhouse_aggregate_functionsd.so
2020.08.07 16:47:01.730960 [ 596627 ] {} <Fatal> BaseDaemon: 9. /home/akuzm/ch2/ch/src/Columns/ColumnVector.h:190: DB::ColumnVector<char8_t>::compareAt(unsigned long, unsigned long, DB::IColumn const&, int) const @ 0x292b52f in /home/akuzm/ch2/build-clang10/src/Functions/libclickhouse_functionsd.so
2020.08.07 16:47:01.734565 [ 596627 ] {} <Fatal> BaseDaemon: 10. /home/akuzm/ch2/ch/src/Columns/IColumnImpl.h:82: void DB::IColumn::compareImpl<DB::ColumnVector<char8_t>, false, true>(DB::ColumnVector<char8_t> const&, unsigned long, DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 15ul, 16ul>*, DB::PODArray<signed char, 4096ul, Allocator<false, false>, 15ul, 16ul>&, int) const @ 0x292cc50 in /home/akuzm/ch2/build-clang10/src/Functions/libclickhouse_functionsd.so
2020.08.07 16:47:01.738309 [ 596627 ] {} <Fatal> BaseDaemon: 11. /home/akuzm/ch2/ch/src/Columns/IColumnImpl.h:125: void DB::IColumn::doCompareColumn<DB::ColumnVector<char8_t> >(DB::ColumnVector<char8_t> const&, unsigned long, DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 15ul, 16ul>*, DB::PODArray<signed char, 4096ul, Allocator<false, false>, 15ul, 16ul>&, int, int) const @ 0x292c15c in /home/akuzm/ch2/build-clang10/src/Functions/libclickhouse_functionsd.so
2020.08.07 16:47:01.742117 [ 596627 ] {} <Fatal> BaseDaemon: 12. /home/akuzm/ch2/ch/src/Columns/ColumnVector.h:197: DB::ColumnVector<char8_t>::compareColumn(DB::IColumn const&, unsigned long, DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 15ul, 16ul>*, DB::PODArray<signed char, 4096ul, Allocator<false, false>, 15ul, 16ul>&, int, int) const @ 0x292b5b1 in /home/akuzm/ch2/build-clang10/src/Functions/libclickhouse_functionsd.so
2020.08.07 16:47:01.742550 [ 596627 ] {} <Fatal> BaseDaemon: 13. /home/akuzm/ch2/ch/src/Columns/ColumnConst.h:204: DB::ColumnConst::compareColumn(DB::IColumn const&, unsigned long, DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 15ul, 16ul>*, DB::PODArray<signed char, 4096ul, Allocator<false, false>, 15ul, 16ul>&, int, int) const @ 0x2752c9 in /home/akuzm/ch2/build-clang10/src/libclickhouse_columnsd.so
2020.08.07 16:47:01.743372 [ 596627 ] {} <Fatal> BaseDaemon: 14. /home/akuzm/ch2/ch/src/Processors/Transforms/PartialSortingTransform.cpp:73: DB::getFilterMask(std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> > const&, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> > const&, unsigned long, std::__1::vector<DB::SortColumnDescription, std::__1::allocator<DB::SortColumnDescription> > const&, unsigned long, DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 15ul, 16ul>&, DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 15ul, 16ul>&, DB::PODArray<signed char, 4096ul, Allocator<false, false>, 15ul, 16ul>&) @ 0x29fa29 in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_transformsd.so
2020.08.07 16:47:01.744116 [ 596627 ] {} <Fatal> BaseDaemon: 15. /home/akuzm/ch2/ch/src/Processors/Transforms/PartialSortingTransform.cpp:107: DB::PartialSortingTransform::transform(DB::Chunk&) @ 0x29fcac in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_transformsd.so
2020.08.07 16:47:01.748047 [ 596627 ] {} <Fatal> BaseDaemon: 16. /home/akuzm/ch2/ch/src/Processors/ISimpleTransform.h:43: DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) @ 0xa97c45 in /home/akuzm/ch2/build-clang10/src/libclickhouse_storagesd.so
2020.08.07 16:47:01.748464 [ 596627 ] {} <Fatal> BaseDaemon: 17. /home/akuzm/ch2/ch/src/Processors/ISimpleTransform.cpp:89: DB::ISimpleTransform::work() @ 0x18eeea in /home/akuzm/ch2/build-clang10/src/libclickhouse_processorsd.so
2020.08.07 16:47:01.748859 [ 596627 ] {} <Fatal> BaseDaemon: 18. /home/akuzm/ch2/ch/src/Processors/Executors/PipelineExecutor.cpp:78: DB::executeJob(DB::IProcessor*) @ 0x13132c in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_executorsd.so
2020.08.07 16:47:01.749220 [ 596627 ] {} <Fatal> BaseDaemon: 19. /home/akuzm/ch2/ch/src/Processors/Executors/PipelineExecutor.cpp:95: DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0::operator()() const @ 0x13128f in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_executorsd.so
2020.08.07 16:47:01.749641 [ 596627 ] {} <Fatal> BaseDaemon: 20. /home/akuzm/ch2/ch/contrib/libcxx/include/type_traits:3519: decltype(std::__1::forward<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(fp)()) std::__1::__invoke<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&) @ 0x13124d in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_executorsd.so
2020.08.07 16:47:01.750000 [ 596627 ] {} <Fatal> BaseDaemon: 21. /home/akuzm/ch2/ch/contrib/libcxx/include/__functional_base:349: void std::__1::__invoke_void_return_wrapper<void>::__call<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&) @ 0x1311fd in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_executorsd.so
2020.08.07 16:47:01.750364 [ 596627 ] {} <Fatal> BaseDaemon: 22. /home/akuzm/ch2/ch/contrib/libcxx/include/functional:1540: std::__1::__function::__alloc_func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, std::__1::allocator<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0>, void ()>::operator()() @ 0x1311cd in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_executorsd.so
2020.08.07 16:47:01.750783 [ 596627 ] {} <Fatal> BaseDaemon: 23. /home/akuzm/ch2/ch/contrib/libcxx/include/functional:1714: std::__1::__function::__func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, std::__1::allocator<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0>, void ()>::operator()() @ 0x13031e in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_executorsd.so
2020.08.07 16:47:01.752654 [ 596627 ] {} <Fatal> BaseDaemon: 24. /home/akuzm/ch2/ch/contrib/libcxx/include/functional:1867: std::__1::__function::__value_func<void ()>::operator()() const @ 0x1d68c5 in /home/akuzm/ch2/build-clang10/programs/server/libclickhouse-server-libd.so
2020.08.07 16:47:01.754506 [ 596627 ] {} <Fatal> BaseDaemon: 25. /home/akuzm/ch2/ch/contrib/libcxx/include/functional:2473: std::__1::function<void ()>::operator()() const @ 0x1d6865 in /home/akuzm/ch2/build-clang10/programs/server/libclickhouse-server-libd.so
2020.08.07 16:47:01.754868 [ 596627 ] {} <Fatal> BaseDaemon: 26. /home/akuzm/ch2/ch/src/Processors/Executors/PipelineExecutor.cpp:559: DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0x12ec35 in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_executorsd.so
2020.08.07 16:47:01.755184 [ 596627 ] {} <Fatal> BaseDaemon: 27. /home/akuzm/ch2/ch/src/Processors/Executors/PipelineExecutor.cpp:472: DB::PipelineExecutor::executeSingleThread(unsigned long, unsigned long) @ 0x12f429 in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_executorsd.so
2020.08.07 16:47:01.755450 [ 596627 ] {} <Fatal> BaseDaemon: 28. /home/akuzm/ch2/ch/src/Processors/Executors/PipelineExecutor.cpp:738: DB::PipelineExecutor::executeImpl(unsigned long) @ 0x12d970 in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_executorsd.so
2020.08.07 16:47:01.755697 [ 596627 ] {} <Fatal> BaseDaemon: 29. /home/akuzm/ch2/ch/src/Processors/Executors/PipelineExecutor.cpp:399: DB::PipelineExecutor::execute(unsigned long) @ 0x12d046 in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_executorsd.so
2020.08.07 16:47:01.756022 [ 596627 ] {} <Fatal> BaseDaemon: 30. /home/akuzm/ch2/ch/src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:79: DB::threadFunction(DB::PullingAsyncPipelineExecutor::Data&, std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0x15003d in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_executorsd.so
2020.08.07 16:47:01.756344 [ 596627 ] {} <Fatal> BaseDaemon: 31. /home/akuzm/ch2/ch/src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:101: DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0::operator()() const @ 0x14ffa2 in /home/akuzm/ch2/build-clang10/src/libclickhouse_processors_executorsd.so
``` | https://github.com/ClickHouse/ClickHouse/issues/13477 | https://github.com/ClickHouse/ClickHouse/pull/18708 | c0a1d6b9b09d8c7c777a87e0573884bc542b6ec0 | 750a39171689ceacb3c67d73098503c18f7acc2d | 2020-08-07T13:49:58Z | c++ | 2021-01-04T13:45:36Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,424 | ["src/Interpreters/JoinedTables.cpp", "src/Interpreters/PredicateExpressionsOptimizer.cpp", "src/Storages/StorageMerge.cpp", "src/Storages/VirtualColumnUtils.h", "tests/queries/0_stateless/01436_storage_merge_with_join_push_down.reference", "tests/queries/0_stateless/01436_storage_merge_with_join_push_down.sql"] | Server crash on Merge-table join & filter condition | In (Merge -> Distributed -> MergeTree) tables structure server **crashes** on SELECT query when:
1. Merge table is queried
2. There is a filtration condition
3. There is JOIN on some other table (no matter what)
**How to reproduce**
Prerequisites: CH with 'clstr1' cluster created (I tested on 2 nodes cluster in docker)
```
CREATE TABLE default.test1 ON CLUSTER clstr1
( id Int64, name String )
ENGINE MergeTree PARTITION BY (id) ORDER BY (id);
CREATE TABLE default.test1_distributed ON CLUSTER clstr1
AS default.test1
ENGINE = Distributed(clstr1, default, test1);
CREATE TABLE default.test_merge ON CLUSTER clstr1
AS default.test1
ENGINE = Merge('default', 'test1_distributed');
```
So with this structure following SELECT query crashes CH server (node which received query):
```
select count()
from default.test_merge
join ( select 'anystring' as name) as n USING name
WHERE id = 1
```
**Note** that if you remove either filter condition or join clause query works fine:
```
select count()
from default.test_merge
join ( select 'anystring' as name) as n USING name;
select count()
from default.test_merge
WHERE id = 1;
select count()
from default.test_merge
WHERE id = 1 and name in ( select 'anystring' as name);
```
The issue happens only when both the filter and the join are applied.
**Note 2:** Sometimes the issue shows up only on the second execution of the query.
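Since the stack trace below points straight at `PredicateExpressionsOptimizer`, one possible workaround sketch (an assumption on my side, not verified on the affected versions) is to disable predicate push-down for the session and re-run the query:
```
-- Hypothetical workaround: skip the predicate push-down code path that crashes.
SET enable_optimize_predicate_expression = 0;

select count()
from default.test_merge
join ( select 'anystring' as name) as n USING name
WHERE id = 1;
```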
**Logs before crash**
```
2020.08.06 18:03:42.923308 [ 66 ] {a0e71bd3-67ec-46d2-9a2f-a72bf53e109c} <Debug> executeQuery: (from 192.168.160.1:46103) SELECT count() FROM default.test_merge INNER JOIN (SELECT 'anystring' AS name) AS n USING (name) WHERE id = 1
2020.08.06 18:03:42.923565 [ 66 ] {a0e71bd3-67ec-46d2-9a2f-a72bf53e109c} <Trace> AccessRightsContext (default): Access granted: SELECT(dummy) ON system.one
2020.08.06 18:03:42.923823 [ 66 ] {a0e71bd3-67ec-46d2-9a2f-a72bf53e109c} <Trace> AccessRightsContext (default): Access granted: SELECT(dummy) ON system.one
2020.08.06 18:03:42.923857 [ 66 ] {a0e71bd3-67ec-46d2-9a2f-a72bf53e109c} <Debug> Join: setSampleBlock: n.name String Const(size = 0, String(size = 1))
2020.08.06 18:03:42.924451 [ 101 ] {} <Fatal> BaseDaemon: ########################################
2020.08.06 18:03:42.924489 [ 101 ] {} <Fatal> BaseDaemon: (version 20.3.12.112 (official build)) (from thread 66) (query_id: a0e71bd3-67ec-46d2-9a2f-a72bf53e109c) Received signal Segmentation fault (11).
2020.08.06 18:03:42.924516 [ 101 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
2020.08.06 18:03:42.924534 [ 101 ] {} <Fatal> BaseDaemon: Stack trace: 0xd4fd8c1 0xd500a1d 0xd4faa20 0xd4faa69 0xd4faa69 0xd4f87a1 0xd4fa09e 0xd4cdb14 0xd123ea5 0xd12521e 0xd1268bd 0xd6f2c65 0xd6f48fb 0xd15ce70 0xd160647 0xd122975 0xd3276b8 0xd539eba 0xd53a991 0x90327b9 0x90337a0 0xe390dfb 0xe39127d 0x105a7d47 0x105a3b4c 0x105a54ed 0x7f109626a6db 0x7f1096b4988f
2020.08.06 18:03:42.963483 [ 101 ] {} <Fatal> BaseDaemon: 3. DB::PredicateRewriteVisitorData::rewriteSubquery(DB::ASTSelectQuery&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd4fd8c1 in /usr/bin/clickhouse
2020.08.06 18:03:42.963513 [ 101 ] {} <Fatal> BaseDaemon: 4. DB::PredicateRewriteVisitorData::visit(DB::ASTSelectWithUnionQuery&, std::__1::shared_ptr<DB::IAST>&) @ 0xd500a1d in /usr/bin/clickhouse
2020.08.06 18:03:42.963535 [ 101 ] {} <Fatal> BaseDaemon: 5. DB::InDepthNodeVisitor<DB::OneTypeMatcher<DB::PredicateRewriteVisitorData, false, std::__1::shared_ptr<DB::IAST> >, true, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xd4faa20 in /usr/bin/clickhouse
2020.08.06 18:03:42.963552 [ 101 ] {} <Fatal> BaseDaemon: 6. DB::InDepthNodeVisitor<DB::OneTypeMatcher<DB::PredicateRewriteVisitorData, false, std::__1::shared_ptr<DB::IAST> >, true, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xd4faa69 in /usr/bin/clickhouse
2020.08.06 18:03:42.963570 [ 101 ] {} <Fatal> BaseDaemon: 7. DB::InDepthNodeVisitor<DB::OneTypeMatcher<DB::PredicateRewriteVisitorData, false, std::__1::shared_ptr<DB::IAST> >, true, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xd4faa69 in /usr/bin/clickhouse
2020.08.06 18:03:42.963594 [ 101 ] {} <Fatal> BaseDaemon: 8. DB::PredicateExpressionsOptimizer::tryRewritePredicatesToTables(std::__1::vector<std::__1::shared_ptr<DB::IAST>, std::__1::allocator<std::__1::shared_ptr<DB::IAST> > >&, std::__1::vector<std::__1::vector<std::__1::shared_ptr<DB::IAST>, std::__1::allocator<std::__1::shared_ptr<DB::IAST> > >, std::__1::allocator<std::__1::vector<std::__1::shared_ptr<DB::IAST>, std::__1::allocator<std::__1::shared_ptr<DB::IAST> > > > > const&) @ 0xd4f87a1 in /usr/bin/clickhouse
2020.08.06 18:03:42.963605 [ 101 ] {} <Fatal> BaseDaemon: 9. DB::PredicateExpressionsOptimizer::optimize(DB::ASTSelectQuery&) @ 0xd4fa09e in /usr/bin/clickhouse
2020.08.06 18:03:42.963627 [ 101 ] {} <Fatal> BaseDaemon: 10. DB::SyntaxAnalyzer::analyzeSelect(std::__1::shared_ptr<DB::IAST>&, DB::SyntaxAnalyzerResult&&, DB::SelectQueryOptions const&, std::__1::vector<DB::TableWithColumnNamesAndTypes, std::__1::allocator<DB::TableWithColumnNamesAndTypes> > const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) const @ 0xd4cdb14 in /usr/bin/clickhouse
2020.08.06 18:03:42.963641 [ 101 ] {} <Fatal> BaseDaemon: 11. ? @ 0xd123ea5 in /usr/bin/clickhouse
2020.08.06 18:03:42.963656 [ 101 ] {} <Fatal> BaseDaemon: 12. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd12521e in /usr/bin/clickhouse
2020.08.06 18:03:42.963666 [ 101 ] {} <Fatal> BaseDaemon: 13. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, std::__1::shared_ptr<DB::IBlockInputStream> const&, DB::SelectQueryOptions const&) @ 0xd1268bd in /usr/bin/clickhouse
2020.08.06 18:03:42.963687 [ 101 ] {} <Fatal> BaseDaemon: 14. DB::StorageMerge::getQueryHeader(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, DB::SelectQueryInfo const&, DB::Context const&, DB::QueryProcessingStage::Enum) @ 0xd6f2c65 in /usr/bin/clickhouse
2020.08.06 18:03:42.963706 [ 101 ] {} <Fatal> BaseDaemon: 15. DB::StorageMerge::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, DB::SelectQueryInfo const&, DB::Context const&, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0xd6f48fb in /usr/bin/clickhouse
2020.08.06 18:03:42.963719 [ 101 ] {} <Fatal> BaseDaemon: 16. void DB::InterpreterSelectQuery::executeFetchColumns<DB::QueryPipeline>(DB::QueryProcessingStage::Enum, DB::QueryPipeline&, std::__1::shared_ptr<DB::PrewhereInfo> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, DB::QueryPipeline&) @ 0xd15ce70 in /usr/bin/clickhouse
2020.08.06 18:03:42.963737 [ 101 ] {} <Fatal> BaseDaemon: 17. void DB::InterpreterSelectQuery::executeImpl<DB::QueryPipeline>(DB::QueryPipeline&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, DB::QueryPipeline&) @ 0xd160647 in /usr/bin/clickhouse
2020.08.06 18:03:42.963746 [ 101 ] {} <Fatal> BaseDaemon: 18. DB::InterpreterSelectQuery::executeWithProcessors() @ 0xd122975 in /usr/bin/clickhouse
2020.08.06 18:03:42.963761 [ 101 ] {} <Fatal> BaseDaemon: 19. DB::InterpreterSelectWithUnionQuery::executeWithProcessors() @ 0xd3276b8 in /usr/bin/clickhouse
2020.08.06 18:03:42.963777 [ 101 ] {} <Fatal> BaseDaemon: 20. ? @ 0xd539eba in /usr/bin/clickhouse
2020.08.06 18:03:42.963790 [ 101 ] {} <Fatal> BaseDaemon: 21. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, bool) @ 0xd53a991 in /usr/bin/clickhouse
2020.08.06 18:03:42.963804 [ 101 ] {} <Fatal> BaseDaemon: 22. DB::TCPHandler::runImpl() @ 0x90327b9 in /usr/bin/clickhouse
2020.08.06 18:03:42.963813 [ 101 ] {} <Fatal> BaseDaemon: 23. DB::TCPHandler::run() @ 0x90337a0 in /usr/bin/clickhouse
2020.08.06 18:03:42.963823 [ 101 ] {} <Fatal> BaseDaemon: 24. Poco::Net::TCPServerConnection::start() @ 0xe390dfb in /usr/bin/clickhouse
2020.08.06 18:03:42.963840 [ 101 ] {} <Fatal> BaseDaemon: 25. Poco::Net::TCPServerDispatcher::run() @ 0xe39127d in /usr/bin/clickhouse
2020.08.06 18:03:42.963864 [ 101 ] {} <Fatal> BaseDaemon: 26. Poco::PooledThread::run() @ 0x105a7d47 in /usr/bin/clickhouse
2020.08.06 18:03:42.963880 [ 101 ] {} <Fatal> BaseDaemon: 27. Poco::ThreadImpl::runnableEntry(void*) @ 0x105a3b4c in /usr/bin/clickhouse
2020.08.06 18:03:42.963894 [ 101 ] {} <Fatal> BaseDaemon: 28. ? @ 0x105a54ed in /usr/bin/clickhouse
2020.08.06 18:03:42.963905 [ 101 ] {} <Fatal> BaseDaemon: 29. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
2020.08.06 18:03:42.963916 [ 101 ] {} <Fatal> BaseDaemon: 30. clone @ 0x12188f in /lib/x86_64-linux-gnu/libc-2.27.so
```
------
**Reproduced on**:
- 20.5.4.40-stable
- 20.3.12.112-lts
- 20.3.11.97-lts
Version 19.17.10.1 works fine.
Will be happy to provide additional details if necessary!
| https://github.com/ClickHouse/ClickHouse/issues/13424 | https://github.com/ClickHouse/ClickHouse/pull/13679 | 09a72d0c64ef1efeff54dc47b9943ea61008fb62 | 89e967333662c8ec4c46dfa1ba12bd10781538b9 | 2020-08-06T16:01:01Z | c++ | 2020-08-14T09:38:18Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,383 | ["src/Functions/extractAllGroups.h", "tests/queries/0_stateless/01497_extract_all_groups_empty_match.reference", "tests/queries/0_stateless/01497_extract_all_groups_empty_match.sql"] | extactAllGroups[Horizonta|Vertical] stuck and high memory usage with specific regexp expression | **How to reproduce**
ClickHouse version 20.5.2.7
```
SELECT extractAllGroupsHorizontal(' a=10e-2,b=c,c="d",d=2,f=1.3, g="1,3", s=, k=3', '\W(\w+)=("[\w.,-]*")|([\w.-]*)');
SELECT extractAllGroupsVertical(' a=10e-2,b=c,c="d",d=2,f=1.3, g="1,3", s=, k=3', '\W(\w+)=("[\w.,-]*")|([\w.-]*)');
SELECT extractAllGroupsVertical(' a=10e-2,b=c,c="d",d=2,f=1.3, g="1,3", s=, k=3', '\W(\w+)="[\w.,-]*"|[\w.-]*');
```
**Expected behavior**
The query returns some result (even an empty one)
**Error message and/or stacktrace**
```
2020.08.05 15:25:42.644160 [ 3745 ] {96193b12-98c2-4d74-a600-ebcf597b770f} <Error> executeQuery: Code: 241, e.displayText() = DB::Exception: Memory limit (for query) exceeded: would use 11.00 GiB (attempt to allocate chunk of 4295000960 bytes), maximum: 9.31 GiB (version 20.5.2.7 (official build)) (from 127.0.0.1:43996) (in query: SELECT extractAllGroupsVertical(' a=10e-2,b=c,c="d",d=2,f=1.3, g="1,3", s=, k=3', '\W(\w+)=("[\w.,-]*")|([\w.-]*)');), Stack trace (when copying this message, always include the lines below):
0. /build/obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Exception.cpp:27: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x10ed0da0 in /usr/lib/debug/usr/bin/clickhouse
1. /build/obj-x86_64-linux-gnu/../src/Common/Exception.cpp:37: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x95c923d in /usr/lib/debug/usr/bin/clickhouse
2. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/string:2134: MemoryTracker::alloc(long) (.cold) @ 0x95c8228 in /usr/lib/debug/usr/bin/clickhouse
3. /build/obj-x86_64-linux-gnu/../src/Common/MemoryTracker.cpp:134: MemoryTracker::alloc(long) @ 0x95c6724 in /usr/lib/debug/usr/bin/clickhouse
4. DB::FunctionExtractAllGroups<(anonymous namespace)::VerticalImpl>::executeImpl(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) @ 0xbaa49ab in /usr/lib/debug/usr/bin/clickhouse
5. DB::ExecutableFunctionAdaptor::defaultImplementationForConstantArguments(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xa1c5806 in /usr/lib/debug/usr/bin/clickhouse
6. DB::ExecutableFunctionAdaptor::executeWithoutLowCardinalityColumns(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xa1c5a22 in /usr/lib/debug/usr/bin/clickhouse
7. DB::ExecutableFunctionAdaptor::execute(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xa1c67a1 in /usr/lib/debug/usr/bin/clickhouse
8. /build/obj-x86_64-linux-gnu/../src/Interpreters/ExpressionActions.cpp:209: DB::ExpressionAction::prepare(DB::Block&, DB::Settings const&, std::__1::unordered_set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xdd4d675 in /usr/lib/debug/usr/bin/clickhouse
9. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:1635: DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xdd4e235 in /usr/lib/debug/usr/bin/clickhouse
10. /build/obj-x86_64-linux-gnu/../src/Interpreters/ExpressionActions.cpp:577: DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xdd4e89d in /usr/lib/debug/usr/bin/clickhouse
11. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:1549: DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xddf31fd in /usr/lib/debug/usr/bin/clickhouse
12. /build/obj-x86_64-linux-gnu/../src/Interpreters/ActionsVisitor.cpp:593: DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xddfb476 in /usr/lib/debug/usr/bin/clickhouse
13. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:3826: DB::InDepthNodeVisitor<DB::ActionsMatcher, true, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) @ 0xdde8beb in /usr/lib/debug/usr/bin/clickhouse
14. /build/obj-x86_64-linux-gnu/../src/Interpreters/InDepthNodeVisitor.h:45: DB::ExpressionAnalyzer::getRootActions(std::__1::shared_ptr<DB::IAST> const&, bool, std::__1::shared_ptr<DB::ExpressionActions>&, bool) (.constprop.0) @ 0xddda423 in /usr/lib/debug/usr/bin/clickhouse
15. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:656: DB::SelectQueryExpressionAnalyzer::appendSelect(DB::ExpressionActionsChain&, bool) @ 0xdddc1ed in /usr/lib/debug/usr/bin/clickhouse
16. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:662: DB::ExpressionAnalysisResult::ExpressionAnalysisResult(DB::SelectQueryExpressionAnalyzer&, bool, bool, bool, std::__1::shared_ptr<DB::FilterInfo> const&, DB::Block const&) @ 0xdde595d in /usr/lib/debug/usr/bin/clickhouse
17. /build/obj-x86_64-linux-gnu/../src/Interpreters/ExpressionAnalyzer.h:165: DB::InterpreterSelectQuery::getSampleBlockImpl() @ 0xdd94368 in /usr/lib/debug/usr/bin/clickhouse
18. /build/obj-x86_64-linux-gnu/../src/Interpreters/InterpreterSelectQuery.cpp:307: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&)::'lambda'(bool)::operator()(bool) const @ 0xdd9aa81 in /usr/lib/debug/usr/bin/clickhouse
19. /build/obj-x86_64-linux-gnu/../src/Interpreters/InterpreterSelectQuery.cpp:401: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xdda29a5 in /usr/lib/debug/usr/bin/clickhouse
20. /build/obj-x86_64-linux-gnu/../src/Interpreters/InterpreterSelectQuery.cpp:145: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xdda4339 in /usr/lib/debug/usr/bin/clickhouse
21. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:1681: DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xdf050e7 in /usr/lib/debug/usr/bin/clickhouse
22. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:461: DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, DB::Context&, DB::QueryProcessingStage::Enum) @ 0xdd11d74 in /usr/lib/debug/usr/bin/clickhouse
23. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2587: DB::executeQueryImpl(char const*, char const*, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, DB::ReadBuffer*) @ 0xe0748a4 in /usr/lib/debug/usr/bin/clickhouse
24. /build/obj-x86_64-linux-gnu/../src/Interpreters/executeQuery.cpp:643: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0xe07811a in /usr/lib/debug/usr/bin/clickhouse
25. /build/obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:251: DB::TCPHandler::runImpl() @ 0xe698946 in /usr/lib/debug/usr/bin/clickhouse
26. /build/obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:1197: DB::TCPHandler::run() @ 0xe699660 in /usr/lib/debug/usr/bin/clickhouse
27. /build/obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x10deebcb in /usr/lib/debug/usr/bin/clickhouse
28. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/atomic:856: Poco::Net::TCPServerDispatcher::run() @ 0x10def05b in /usr/lib/debug/usr/bin/clickhouse
29. /build/obj-x86_64-linux-gnu/../contrib/poco/Foundation/include/Poco/Mutex_POSIX.h:59: Poco::PooledThread::run() @ 0x10f6db86 in /usr/lib/debug/usr/bin/clickhouse
30. /build/obj-x86_64-linux-gnu/../contrib/poco/Foundation/include/Poco/AutoPtr.h:223: Poco::ThreadImpl::runnableEntry(void*) @ 0x10f68f80 in /usr/lib/debug/usr/bin/clickhouse
31. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
Received exception from server (version 20.5.2):
Code: 241. DB::Exception: Received from localhost:9000. DB::Exception: Memory limit (for query) exceeded: would use 11.00 GiB (attempt to allocate chunk of 4295000960 bytes), maximum: 9.31 GiB.
Query was cancelled.
0 rows in set. Elapsed: 92.109 sec.
```
**Additional context**
ExtractAll works fine.
```
SELECT extractAllGroupsVertical(' a=10e-2,b=c,c="d",d=2,f=1.3, g="1,3", s=, k=3', '\\W(\\w+)=("[\\w.,-]*"|[\\w.-]*)') AS pairs
┌─pairs────────────────────────────────────────────────────────────────────────────────────────┐
│ [['a','10e-2'],['b','c'],['c','"d"'],['d','2'],['f','1.3'],['g','"1,3"'],['s',''],['k','3']] │
└──────────────────────────────────────────────────────────────────────────────────────────────┘
```
It looks like the problem happens when the `|` operator is not enclosed in a group (brackets).
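A minimal way to check that hypothesis (a sketch, not verified here; the second query mirrors the top-level alternation with a possibly-empty branch from the failing patterns, so it is expected to reproduce the problem — keep a memory cap on):
```
SET max_memory_usage = 1000000000; -- safety cap in case the bad pattern still blows up
SELECT extractAllGroupsVertical('a=1,b=x', '(\\w+)=("[\\w]*"|[\\w]*)');   -- alternation inside a group: expected to return rows
SELECT extractAllGroupsVertical('a=1,b=x', '(\\w+)=("[\\w]*")|([\\w]*)'); -- top-level alternation with an empty-matching branch: expected to hang / eat memory
```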
| https://github.com/ClickHouse/ClickHouse/issues/13383 | https://github.com/ClickHouse/ClickHouse/pull/14889 | 000cd42d5ea64199b95029dfbde60def67ffccef | 2413caa7d5b62db0b0ea2a5e7d3ab9406a3e6c75 | 2020-08-05T12:37:16Z | c++ | 2020-09-17T13:02:30Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,380 | ["docs/en/sql-reference/aggregate-functions/reference/index.md"] | Incorrect link in documentation | On the page https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/, the link topK actually leads to topKWeighted.
| https://github.com/ClickHouse/ClickHouse/issues/13380 | https://github.com/ClickHouse/ClickHouse/pull/13381 | 0eaab3d0954683aa6adf3d9d31e16c267ba5ebc0 | 4e08b60b5e77e6ff2e2c8a40e7d2357cc8de014d | 2020-08-05T11:29:01Z | c++ | 2020-08-05T13:06:31Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,369 | ["programs/client/Client.cpp", "tests/queries/0_stateless/00964_live_view_watch_events_heartbeat.py", "tests/queries/0_stateless/00965_live_view_watch_heartbeat.py"] | Invalid json output with FORMAT JSON | Easy to reproduce:
### 20.3.13
```
SELECT 1
FORMAT JSON
{
"meta":
[
{
"name": "1",
"type": "UInt8"
}
],
"data":
[
{
"1": 1
}
],
"rows": 1,
"statistics":
{
"elapsed": 0.000203731,
"rows_read": 1,
"bytes_read": 1
}
}
```
### 20.4.5
```
:) select 1 format JSON
SELECT 1
FORMAT JSON
{
"meta":
[
{
"name": "1",
"type": "UInt8"
}
],
"data":
[
{
"1": 1
>>>>>>>> no closing bracket <<<<<<<<<<
],
"rows": 1,
"statistics":
{
"elapsed": 0.000771732,
"rows_read": 1,
"bytes_read": 1
}
}
```
### master
```
SELECT 1
FORMAT JSON
{
"meta":
[
{
"name": "1",
"type": "UInt8"
}
],
"data":
[
{
"1": 1
>>>>>>>> no closing bracket <<<<<<<<<<
],
"rows": 1,
"statistics":
{
"elapsed": 0.00103364,
"rows_read": 1,
"bytes_read": 1
}
}
```
| https://github.com/ClickHouse/ClickHouse/issues/13369 | https://github.com/ClickHouse/ClickHouse/pull/13691 | c157b7a6858fa2de229721a6667213274388e561 | 318f14b95e0bbc02d1a5f07241d8f2c1c3d4d281 | 2020-08-05T10:24:16Z | c++ | 2020-08-26T10:25:25Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,368 | ["src/Functions/lcm.cpp", "tests/queries/0_stateless/01435_lcm_overflow.reference", "tests/queries/0_stateless/01435_lcm_overflow.sql"] | Assert in lcm function. | ```
SELECT lcm(256, 9223372036854775807)
```
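For context, a short sketch of why this particular pair overflows (reasoning only, not the fix):
```
-- 9223372036854775807 = 2^63 - 1 is odd, so it shares no factor with 256:
SELECT gcd(256, 9223372036854775807);  -- 1
-- so lcm would be 256 * (2^63 - 1) ≈ 2.36e21, far beyond the Int64 maximum (≈ 9.22e18),
-- which is where the assertion is hit instead of a graceful overflow error:
SELECT 256. * 9223372036854775807;     -- ≈ 2.36e21 (computed in Float64 just to show the magnitude)
```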
| https://github.com/ClickHouse/ClickHouse/issues/13368 | https://github.com/ClickHouse/ClickHouse/pull/13510 | ec89e921709370d2e931d4354c197836bca51291 | 3af1eba808da93018777821b8260c85ebede8fde | 2020-08-05T10:11:55Z | c++ | 2020-08-08T14:02:56Z |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,362 | ["src/Common/BitHelpers.h", "src/IO/parseDateTimeBestEffort.cpp", "tests/queries/0_stateless/01432_parse_date_time_best_effort_timestamp.reference", "tests/queries/0_stateless/01432_parse_date_time_best_effort_timestamp.sql"] | parseDateTime64BestEffort can not parse timestamp correct | (you don't have to strictly follow this form)
I am using the parseDateTime64BestEffort function to parse strings to DateTime.
I have run into a strange situation.
When I use:
```
select parseDateTime64BestEffortOrNull('1596538462') from test_file_1_tmp limit 1;
```
result:
```
2020-08-04 18:55:26.000
```
When I use:
```
select parseDateTime64BestEffortOrNull(time) , time from test_file_1_tmp ;
```
result:
```
│ 2020-07-18 10:00:00.000 │ 2020-07-18 10:00 │
│ 2020-01-01 00:00:00.000 │ 2020 │
│ 2020-08-01 00:00:00.000 │ 202008 │
│ 2020-09-01 00:00:00.000 │ 2020-09 │
│ 2020-10-01 00:00:00.000 │ 2020/10 │
│ 2020-11-01 00:00:00.000 │ 20201101 │
│ 2020-12-01 00:00:00.000 │ 2020-12-01 │
│ 2020-02-01 00:00:00.000 │ 2020/02/01 │
│ ᴺᵁᴸᴸ │ 202003012200 │
│ 2020-07-28 22:00:00.000 │ 2020/07/28 22:00 │
│ 2020-04-01 22:00:55.000 │ 20200401220055 │
│ 2020-07-28 22:00:22.000 │ 2020/07/28 22:00:22 │
│ 2020-07-18 10:00:22.000 │ 2020-07-18 10:00:22 │
│ 2071-02-19 20:54:44.000 │ 1596538462          │   <-- unexpected result
```
The function `parseDateTime64BestEffortOrNull` shows different results in the two queries.
how can i get the correct result of `parseDateTime64BestEffortOrNull('1596538462')` ? | https://github.com/ClickHouse/ClickHouse/issues/13362 | https://github.com/ClickHouse/ClickHouse/pull/13441 | 55088d53cdfc90f0dcf3521ceec29394224053d3 | b5667e3b0f9df563ff7bf515b2f3d8aba21dda3d | 2020-08-05T09:10:15Z | c++ | 2020-08-07T11:02:59Z |
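For the question above, one possible explicit workaround (a sketch only; it assumes `time` is a String column and treats exactly-10-digit strings as Unix timestamps):
```
SELECT
    time,
    if(match(time, '^\\d{10}$'),
       toDateTime64(toUInt32(time), 3),            -- interpret as a Unix timestamp
       parseDateTime64BestEffortOrNull(time)) AS parsed
FROM test_file_1_tmp;
```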
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,361 | ["src/Interpreters/DDLWorker.cpp", "tests/integration/test_on_cluster_timeouts/__init__.py", "tests/integration/test_on_cluster_timeouts/configs/remote_servers.xml", "tests/integration/test_on_cluster_timeouts/configs/users_config.xml", "tests/integration/test_on_cluster_timeouts/test.py"] | Cannot execute replicated DDL query on leader | I deleted all the zookeeper's for clickhouse path because its hard disk was full, after that I restarted the cluster.
Right now the data can not be inserted because the path is not created | https://github.com/ClickHouse/ClickHouse/issues/13361 | https://github.com/ClickHouse/ClickHouse/pull/13450 | a348bc4641684b79f87c083643a5a46ae60132bb | d806e0c0522978cf7fc1b06d6ebf020c1311079e | 2020-08-05T09:02:11Z | c++ | 2020-08-24T16:23:09Z |
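A couple of hedged diagnostic queries for the situation described above (a sketch; the ZooKeeper path is the default one and may differ in your config):
```
-- Does the distributed DDL queue path still exist in ZooKeeper?
SELECT name, path FROM system.zookeeper WHERE path = '/clickhouse/task_queue/ddl';
-- Which replicated tables went read-only after the ZooKeeper data was lost?
SELECT database, table, is_readonly, zookeeper_path FROM system.replicas WHERE is_readonly;
```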
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 13,342 | ["tests/queries/0_stateless/01536_fuzz_cast.reference", "tests/queries/0_stateless/01536_fuzz_cast.sql"] | Assert in CAST | ```
SELECT CAST(arrayJoin([NULL, '', '', NULL, '', NULL, '01.02.2017 03:04\005GMT', '', NULL, '01/02/2017 03:04:05 MSK01/02/\0017 03:04:05 MSK', '', NULL, '03/04/201903/04/201903/04/\001903/04/2019']), 'Enum8(\'a\' = 1, \'b\' = 2)') AS x
```
`Logical error: 'Last column should be ColumnNullable'.` | https://github.com/ClickHouse/ClickHouse/issues/13342 | https://github.com/ClickHouse/ClickHouse/pull/16452 | 7c4b0e559d123b5e71c1c91c0dc0a24e90ceb9db | 0cb377da6e85744c53b123cc5143c07c32c7cc5d | 2020-08-04T21:20:41Z | c++ | 2020-10-28T06:26:19Z |