SoftTree Technologies
Technical Support Forums
gemisigo
Joined: 11 Mar 2010 Posts: 2165
[11.0.35 Pro] - Data Generator, adding tables is slow
Adding one table to the Test Data Generator is okay; adding more often runs into trouble. While waiting for individual tables is only mildly inconvenient (some of them take quite a while, see here), it turns out that adding all of them from a schema is not something that finishes in the foreseeable future. Twenty minutes have passed since I selected nejp9 to add all of its tables to the data generator and it still hasn't returned control. It will return eventually; in earlier tests I left it unbothered for hours and it did come back, though I don't know how much time it required.
Oh, and it also freezes SQL Editor while it's waiting.
Tue Sep 17, 2019 9:15 am
SysOp
Site Admin
Joined: 26 Nov 2006 Posts: 7948
Thank you. Is that always reproducible? Were there backups or other kinds of maintenance jobs running in the background when you captured the video?
Is the effect the same when the cache is hot, that is, already prepopulated with table column details? One way to force that is to switch the database in the editor to the same database you plan to use with the test data generator, then type SELECT and expand any table, which makes SQL Assistant retrieve the column definitions of all tables in that database into the cache.
Wed Sep 18, 2019 8:55 am
gemisigo
Joined: 11 Mar 2010 Posts: 2165
On the individual table level, no. On the whole shebang, yes, it is reproducible. Selecting a single table in the data generator is sometimes instant and sometimes takes a considerable amount of time (several seconds). Expanding the same table in a popup to show the columns behaves similarly. There were no maintenance jobs running on this MariaDB at the time I was using it.
I took the liberty of executing the DB Query for Columns + Keys with the same filters I suppose SA uses when expanding:
SELECT
    COLUMN_NAME, COLUMN_TYPE,
    CONCAT(
        CASE WHEN EXTRA = 'auto_increment' THEN 'A'
             ELSE SUBSTR(IS_NULLABLE, 1, 1) /* N,Y */
        END,
        CASE /* P,F,U */
            WHEN COLUMN_KEY = 'PRI' THEN 'P'
            WHEN EXISTS (
                SELECT NULL FROM information_schema.KEY_COLUMN_USAGE k
                WHERE k.TABLE_SCHEMA = c.TABLE_SCHEMA
                  AND k.TABLE_NAME = c.TABLE_NAME
                  AND k.COLUMN_NAME = c.COLUMN_NAME
                  AND k.REFERENCED_COLUMN_NAME IS NOT NULL
            ) THEN 'F'
            WHEN COLUMN_KEY = 'UNI' THEN 'U'
            ELSE ' '
        END,
        IF(COLUMN_KEY = 'MUL', 'I', '')
    )
FROM
    information_schema.COLUMNS c
WHERE
    TABLE_SCHEMA = :SCHEMA_NAME
    AND TABLE_NAME = :OBJECT_NAME
ORDER BY
    ORDINAL_POSITION
With :SCHEMA_NAME and :OBJECT_NAME replaced by actual schema and table names, this query took ~7 seconds, and that's repeatable; subsequent executions run at about the same speed. The information_schema.COLUMNS table has ~9k records, so I think that's a very long time even if that table isn't indexed (is it?). I ran some tests and it turned out that the subquery in the CASE expression is what's wrecking the performance here. I changed the WHERE conditions in the subquery to
SELECT
    COLUMN_NAME, COLUMN_TYPE,
    CONCAT(
        CASE WHEN EXTRA = 'auto_increment' THEN 'A'
             ELSE SUBSTR(IS_NULLABLE, 1, 1) /* N,Y */
        END,
        CASE /* P,F,U */
            WHEN COLUMN_KEY = 'PRI' THEN 'P'
            WHEN EXISTS (
                SELECT NULL FROM information_schema.KEY_COLUMN_USAGE k
                WHERE k.TABLE_SCHEMA = :SCHEMA_NAME
                  AND k.TABLE_NAME = :OBJECT_NAME
                  AND k.COLUMN_NAME = c.COLUMN_NAME
                  AND k.REFERENCED_COLUMN_NAME IS NOT NULL
            ) THEN 'F'
            WHEN COLUMN_KEY = 'UNI' THEN 'U'
            ELSE ' '
        END,
        IF(COLUMN_KEY = 'MUL', 'I', '')
    )
FROM
    information_schema.COLUMNS c
WHERE
    TABLE_SCHEMA = :SCHEMA_NAME
    AND TABLE_NAME = :OBJECT_NAME
ORDER BY
    ORDINAL_POSITION
and now, after replacing the placeholders with the corresponding values, the query runs in 15 ms.
The execution plans are different too (notice the DEPENDENT SUBQUERY using no keys and showing Scanned all databases in the old version):
Old
+--+------------------+-----+----+-------------+-----------------------+-------+---+----+---------------------------------------------------------------+
|id| select_type |table|type|possible_keys| key |key_len|ref|rows| Extra |
+--+------------------+-----+----+-------------+-----------------------+-------+---+----+---------------------------------------------------------------+
|1 |PRIMARY |c |ALL | |TABLE_SCHEMA,TABLE_NAME| | | |Using where; Open_frm_only; Scanned 0 databases; Using filesort|
|2 |DEPENDENT SUBQUERY|k |ALL | | | | | |Using where; Open_full_table; Scanned all databases |
+--+------------------+-----+----+-------------+-----------------------+-------+---+----+---------------------------------------------------------------+
vs
New
+--+------------------+-----+----+-------------+-----------------------+-------+---+----+---------------------------------------------------------------+
|id| select_type |table|type|possible_keys| key |key_len|ref|rows| Extra |
+--+------------------+-----+----+-------------+-----------------------+-------+---+----+---------------------------------------------------------------+
|1 |PRIMARY |c |ALL | |TABLE_SCHEMA,TABLE_NAME| | | |Using where; Open_frm_only; Scanned 0 databases; Using filesort|
|2 |DEPENDENT SUBQUERY|k |ALL | |TABLE_SCHEMA,TABLE_NAME| | | |Using where; Open_full_table; Scanned 0 databases |
+--+------------------+-----+----+-------------+-----------------------+-------+---+----+---------------------------------------------------------------+
Using this as the DB Query for Columns + Keys, expanding the table in a popup is now quasi-instant. Also, adding all 221 tables in the Data Generator by checking the schema (instead of individual tables) takes about 4 minutes instead of the roughly half an hour of my previous tries. This, though tolerable, is still very long.
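For what it's worth, the safety of that rewrite can be demonstrated on a toy schema. The sketch below uses Python's built-in sqlite3 instead of MariaDB, and the schema, table, and column names are made up for illustration: because the outer query is already filtered to a single schema and table, the correlated predicates on k.TABLE_SCHEMA/k.TABLE_NAME and the constant placeholders select the same KEY_COLUMN_USAGE rows, so both query shapes return identical results.

```python
# Toy reproduction (assumed names) of the correlated vs. decorrelated
# EXISTS subquery from the post, using the stdlib sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cols (table_schema TEXT, table_name TEXT, column_name TEXT);
    CREATE TABLE key_usage (table_schema TEXT, table_name TEXT,
                            column_name TEXT, referenced_column_name TEXT);
    INSERT INTO cols VALUES
        ('nejp9', 't1', 'id'), ('nejp9', 't1', 'parent_id'),
        ('nejp9', 't2', 'id');
    INSERT INTO key_usage VALUES
        ('nejp9', 't1', 'parent_id', 'id'),  -- t1.parent_id references something
        ('nejp9', 't2', 'id', NULL);         -- not a foreign key reference
""")

# Original shape: subquery correlated on schema and table as well.
CORRELATED = """
    SELECT c.column_name,
           EXISTS (SELECT NULL FROM key_usage k
                   WHERE k.table_schema = c.table_schema
                     AND k.table_name = c.table_name
                     AND k.column_name = c.column_name
                     AND k.referenced_column_name IS NOT NULL) AS is_fk
    FROM cols c
    WHERE c.table_schema = ? AND c.table_name = ?
    ORDER BY c.column_name
"""

# Rewritten shape: schema and table pushed in as constants, only the
# column name stays correlated.
DECORRELATED = """
    SELECT c.column_name,
           EXISTS (SELECT NULL FROM key_usage k
                   WHERE k.table_schema = ?
                     AND k.table_name = ?
                     AND k.column_name = c.column_name
                     AND k.referenced_column_name IS NOT NULL) AS is_fk
    FROM cols c
    WHERE c.table_schema = ? AND c.table_name = ?
    ORDER BY c.column_name
"""

slow = conn.execute(CORRELATED, ("nejp9", "t1")).fetchall()
fast = conn.execute(DECORRELATED, ("nejp9", "t1", "nejp9", "t1")).fetchall()
assert slow == fast == [("id", 0), ("parent_id", 1)]
print(fast)
```

sqlite3 will not show MariaDB's plan difference, of course; the point is only that the two query shapes are equivalent whenever the outer WHERE already pins down one schema and table, which is exactly the situation in the DB Query for Columns + Keys.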
Wed Sep 18, 2019 10:43 am