
fix: test_sql_length_boundary#35143

Closed
Pengrongkun wants to merge 1 commit into 3.3.8 from fix/3.3.8/6856960267

Conversation

@Pengrongkun
Contributor

Description

Issue(s)

  • Close/close/Fix/fix/Resolve/resolve: Issue Link

Checklist

Please check the items in the checklist if applicable.

  • Is the user manual updated?
  • Are the test cases passed and automated?
  • Is there no significant decrease in test coverage?

@Pengrongkun Pengrongkun requested a review from a team as a code owner April 15, 2026 03:22
Copilot AI review requested due to automatic review settings April 15, 2026 03:22
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates the test_boundary.py test suite to use fixed base timestamps instead of relative 'now' offsets, which prevents potential duplicate timestamp issues during large-scale data insertion tests. The changes also include setting an initial maxSQLLength configuration and adjusting the expected row counts for the 10MB and 64MB SQL length boundary tests to reflect the new data generation logic. I have no feedback to provide as there were no review comments to evaluate.

Contributor

Copilot AI left a comment


Pull request overview

This PR updates the SQL-length boundary test to reduce flakiness by removing now+offset timestamp generation (which can create duplicate ts values across large inserts) and by setting a consistent maxSQLLength baseline.

Changes:

  • Set maxSQLLength to 1MB during class setup.
  • Use a fixed base timestamp and deterministic offsets for large INSERT generation.
  • Update expected row counts for 10MB and ~64MB SQL inserts.
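The deterministic-timestamp change summarized above can be sketched in plain Python. The helper name, column layout, and base value here are illustrative, not the actual test_boundary.py code:

```python
# Sketch: generate VALUES groups from a fixed base timestamp plus a
# per-row offset, so every ts is unique by construction (unlike
# "now+offset", which can collide under fast loops).
FIXED_BASE_TS = 1700000000000  # illustrative epoch-ms base

def build_insert_values(n_rows, base_ts=FIXED_BASE_TS):
    """Return one '(ts, value)' group per row with strictly increasing ts."""
    return ["(%d, %d)" % (base_ts + i, i) for i in range(n_rows)]

values = build_insert_values(5)
insert_sql = "insert into t2 values " + " ".join(values)
# Timestamps are deterministic and duplicate-free.
assert len({v.split(",")[0] for v in values}) == len(values)
```

Because the base is fixed, repeated runs generate byte-identical SQL, which also makes row-count expectations reproducible.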


Comment on lines 433 to 434
tdSql.execute(insert_sql)
tdSql.query("select * from t2")

Copilot AI Apr 15, 2026


tdSql.query("select * from t2") will fetch millions of rows into memory (tdSql.query calls cursor.fetchall()), which can be slow/fragile and risk OOM. Prefer select count(*) (or select count(1)) and assert on the count instead of materializing all rows.
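The count-based assertion suggested here can be sketched with sqlite3 standing in for tdSql; the table and column names are illustrative, not the real test fixtures:

```python
# Sketch: assert on count(*) instead of fetching every row. Counting
# returns a single row, so memory use stays flat no matter how many
# rows the insert produced.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t2 (ts integer, v integer)")
conn.executemany("insert into t2 values (?, ?)",
                 [(1700000000000 + i, i) for i in range(1000)])

(row_count,) = conn.execute("select count(*) from t2").fetchone()
assert row_count == 1000  # one-row result, no full materialization
```

In the real suite the same idea would mean querying select count(*) and asserting on the returned scalar rather than calling checkRows after a full select *.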

  tdSql.execute(insert_sql)
  tdSql.query("select * from t2")
- tdSql.checkRows(509902)
+ tdSql.checkRows(460732)

Copilot AI Apr 15, 2026


The expected row count 460732 is a hard-coded value derived from the generated SQL length heuristics and is likely to change if the value formatting or truncation logic changes. Consider computing the expected number of inserted tuples from the final insert_sql (e.g., by counting value groups) and asserting against that computed value.
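Counting value groups from the generated SQL, as suggested, could look like this minimal sketch. It assumes one parenthesized group per row and no parentheses inside the values themselves, which holds for the test's simple numeric format:

```python
# Sketch: derive the expected row count from the generated INSERT
# statement itself, so the assertion tracks any future changes to the
# value formatting or truncation logic.
def count_value_groups(insert_sql: str) -> int:
    """Count '(...)' groups after the VALUES keyword (one group per row)."""
    values_part = insert_sql.split("values", 1)[1]
    return values_part.count("(")

insert_sql = "insert into t2 values (1,10) (2,20) (3,30)"
expected_rows = count_value_groups(insert_sql)
assert expected_rows == 3
```

The test would then assert checkRows(expected_rows) instead of a hard-coded constant like 460732.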

Comment on lines 467 to 468
tdSql.execute(insert_sql_64mb)
tdSql.query("select * from t2")

Copilot AI Apr 15, 2026


Same as above: using select * from t2 just to validate row count forces a full fetch of ~3.3M rows. Switching to select count(*) would make this test much faster and less memory-intensive while still validating the insert succeeded.

  tdSql.execute(insert_sql_64mb)
  tdSql.query("select * from t2")
- tdSql.checkRows(3524291)
+ tdSql.checkRows(3303225)

Copilot AI Apr 15, 2026


The expected row count 3303225 is another magic number tied to the current SQL string generation/truncation details and includes both the 10MB and 64MB inserts. Consider calculating expected_64mb_rows from the final insert_sql_64mb (and expected_total = expected_10mb + expected_64mb) to keep the assertion stable if formatting changes.

Comment on lines 466 to +475
  tdSql.execute("alter local 'maxSQLLength' '67108864'")
  tdSql.execute(insert_sql_64mb)
  tdSql.query("select * from t2")
- tdSql.checkRows(3524291)
+ tdSql.checkRows(3303225)

  # test out of boundary value
  tdSql.error("alter local 'maxSQLLength' '67108865'")
  tdSql.error("alter local 'maxSQLLength' '1048575'")

- tdSql.execute("drop database db_maxsql_large")
+ tdSql.execute("drop database db_maxsql_large")
(No newline at end of file)

Copilot AI Apr 15, 2026


test_sql_length_boundary changes the session/local variable maxSQLLength up to 64MB but never restores it. When running with --skip_stop (no environment teardown between classes), this can leak into subsequent tests and change their behavior. Reset maxSQLLength back to the baseline (e.g., 1MB/default) in a finally block or at the end of the test.
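The restore-in-finally pattern suggested here, sketched with a hypothetical recording stub in place of the real tdSql helper (the config name and values come from the comment above; FakeTdSql is not part of the framework):

```python
# Sketch: raise maxSQLLength for the test body, then always restore the
# baseline in finally so --skip_stop runs don't inherit the 64MB value.
executed = []

class FakeTdSql:
    """Hypothetical stand-in that records every SQL statement it runs."""
    def execute(self, sql):
        executed.append(sql)

tdSql = FakeTdSql()

tdSql.execute("alter local 'maxSQLLength' '67108864'")  # raise to 64MB
try:
    pass  # the large-insert assertions would run here
finally:
    # Restore the 1MB baseline regardless of test outcome.
    tdSql.execute("alter local 'maxSQLLength' '1048576'")
```

Because the restore sits in finally, it runs even when an assertion in the test body fails, so later test classes always start from the 1MB baseline.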


Labels: none
Projects: none yet
Participants: 2