------------------------------------------------------------
revno: 5913
committer: Balasubramanian Kandasamy <balasubramanian.kandasamy@oracle.com>
branch nick: mysql-5.6.19-release
timestamp: Tue 2014-05-06 12:13:29 +0200
message:
  Disable dtrace for el7
------------------------------------------------------------
revno: 5912 [merge]
tags: clone-5.6.19-build
committer: Alexander Nozdrin <alexander.nozdrin@oracle.com>
branch nick: 5.6-build
timestamp: Wed 2014-04-30 20:53:43 +0400
message:
  Manual merge to 5.6.
------------------------------------------------------------
revno: 2875.468.25
tags: clone-5.5.38-build
committer: Alexander Nozdrin <alexander.nozdrin@oracle.com>
branch nick: 5.5-build
timestamp: Wed 2014-04-30 20:48:29 +0400
message:
  Patch for Bug#18511348 (DDL_I18N_UTF8 AND DDL_I18N_KOI8R ARE
  PERMANENTLY SKIPPED IN 5.5/5.6).

  The problem was that some result files were not updated, so the tests
  were skipped. The fix is to record updated result files.
------------------------------------------------------------
revno: 5911
committer: Praveenkumar Hulakund <praveenkumar.hulakund@oracle.com>
branch nick: mysql-5.6
timestamp: Thu 2014-05-01 09:10:19 +0530
message:
  Bug#18596756 - FAILED PREPARING OF TRIGGER ON TRUNCATED TABLES CAUSES
  ERROR 1054.

  Analysis:
  --------
  The issue is that the re-parse of the 'SELECT' query in the 'SET'
  statement for the NEW field of a row (triggered by a DDL operation)
  removes the current object and creates a new object for the
  "NEW.<field>" information, but the binding between the new object and
  the actual field in the table is not set. Hence, while setting the
  value, an "unknown column" error is reported.

  For a SET operation on NEW.<field>, an object of type
  "sp_instr_set_trigger_field" is created while parsing the trigger.
  This object has the member "m_trigger_field" of type
  "Item_trigger_field" (which represents the NEW/OLD field of a row).
  The trigger field is bound to the actual field of the row by calling
  "Item_trigger_field::fix_field".

  During the cleanup performed before the re-parse of the "SELECT"
  query of the "SET" operation, the "Item_trigger_field" held by
  "m_trigger_field" is unlinked. After the "SELECT" query is re-parsed,
  a new "Item_trigger_field" for "NEW.id" is created and held by the
  "m_trigger_field" member of the "sp_instr_set_trigger_field" object,
  but it is not bound to the actual field object of the table. Hence an
  "unknown column" error is reported.

  Fix:
  ---
  Modified the code to bind the "Item_trigger_field" created for
  "NEW.id" to the Field object in the table by calling
  "Item_trigger_field::fix_field".
------------------------------------------------------------
revno: 5910
committer: mithun <mithun.c.y@oracle.com>
branch nick: mysql-5.6
timestamp: Wed 2014-04-30 16:20:10 +0530
message:
  Bug #17156940 : THE UPDATE AND SELECT BEHAVE DIFFERENTLY UNDER THE
  SAME CONDITIONS.

  ISSUE:
  In MyISAM, suppose a B-tree index on a varchar column holds keys with
  and without trailing spaces, for example 'abc', 'abc ', 'abc  '.
  During an index search based on the same key, the length of lastkey
  changes whenever the key read is one with trailing spaces: if x is
  the length of key 'abc', after reading 'abc ' the length is x + 1.
  last_rkey_length should therefore be recalculated whenever lastkey
  changes. But in mi_rnext_same, during the B-tree search, the copy of
  lastkey into lastkey2 used a stale last_rkey_length even though
  lastkey and its length might have changed as explained above;
  last_rkey_length is computed only once, during mi_rkey. Because of
  this invalid length, compare_key failed and the scan terminated.
  Hence the UPDATE command ended before updating further tuples that
  satisfied the condition.

  SOLUTION:
  In mi_rnext_same the input key lastkey2 can remain constant if we use
  a separate buffer instead of lastkey2, filled just once for the
  complete scan. Since the key and its length are computed only once
  for the entire scan, invalid-length issues during key comparison no
  longer arise.
------------------------------------------------------------
revno: 5909
committer: Bill Qu <bill.qu@Oracle.com>
branch nick: mysql-5.6
timestamp: Wed 2014-04-30 18:39:19 +0800
message:
  Bug#17305385 I_RPL.RPL_GROUP_COMMIT_WITH_SESSION_ATTACH_ERROR FAILS
  ON SOLARIS

  The slave IO thread tries to connect to the master server and reports
  an error with Error_code: 2003 after the master server is shut down
  when simulating a session attach error. In general, the PB2 tree
  starts multiple processes to run all tests in parallel. So the
  problem occurs when one process is checking whether there is an error
  on the slave while executing 'rpl_end.inc', and the slave thread in
  another process is trying to connect to the master server and reports
  an error with Error_code: 2003.

  To solve the problem, we stop the slave IO thread before the master
  server is shut down and start it again after the master server is
  started; the slave IO thread then no longer reports the error when
  connecting to the master server.
------------------------------------------------------------
revno: 5908
committer: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
branch nick: mysql-5.6
timestamp: Tue 2014-04-29 17:03:20 +0530
message:
  Bug #17026898 PREVIOUS GTID EVENT IS NOT WRITTEN WHEN BINLOG IS
  ROTATED VIA SIGHUP

  Problem: When the binlog is rotated via the SIGHUP signal, the newly
  generated binlog does not contain the previous gtid event, which is
  essential for processing that binlog's GTID events later. If any
  transactions are written to this binlog, then on the next restart,
  while the server processes the available binary logs, it fails with
  the following error: "The first global transaction identifier was
  read, but no other information regarding identifiers existing on the
  previous log files was found." and refuses to start. Alternatively,
  if the new GTID transactions written to this binlog are replicated,
  the slave gets confused on seeing a GTID event without a
  previous_gtid_event and runs into a "Fatal 1236" error.

  Analysis: The SIGHUP signal causes the server to reload the grant
  tables and to flush tables, logs, the thread cache, and the host
  cache. As part of flushing logs, the server rotates the binary log as
  well. When the server receives SIGHUP, it calls reload_acl_and_cache,
  which eventually executes the following code to write the
  PREVIOUS_GTID_EVENT:

    if (current_thd && gtid_mode > 0)
    {
      /* write previous gtid event */
    }

  current_thd is NULL when the server reaches this code through the
  signal handler. Hence the newly generated binary log does not contain
  the previous gtid event, which caused the reported issue at restart
  time.

  Fix: If reload_acl_and_cache() is called from the SIGHUP handler,
  allocate a temporary THD before executing the binary log rotation
  function. The same problem can occur with the relay log as well, so
  the temporary THD is allocated before the relay log rotation function
  too. The THD object is deleted after the task finishes.
------------------------------------------------------------
revno: 5907
committer: bin.x.su@oracle.com
branch nick: mysql-5.6
timestamp: Tue 2014-04-29 18:06:44 +0800
message:
  Bug #18634201 - Upgrade from 5.6.10 to 5.6.16 crashes and leaves
  unusable DB

  When a user upgrades from 5.6.10 to the latest 5.6, the rename of aux
  tables could make the server crash, since we do not get rid of the
  obsolete tables that were removed in 5.6.11.

  The patch fixes two issues:
  1) We should not try to rename the obsolete tables.
  2) We should drop the obsolete tables during upgrade, so that they
     are not left behind in the data directory.

  rb#5202, approved by Jimmy.
------------------------------------------------------------
revno: 5906 [merge]
committer: mithun <mithun.c.y@oracle.com>
branch nick: mysql-5.6
timestamp: Mon 2014-04-28 21:09:44 +0530
message:
  Bug #18167356: EXPLAIN W/ EXISTS(SELECT* UNION SELECT*) WHERE ONE OF
  SELECT* IS DISTINCT FAILS.
  NULL MERGE from 5.5
------------------------------------------------------------
revno: 2875.468.24
committer: mithun <mithun.c.y@oracle.com>
branch nick: mysql-5.5
timestamp: Mon 2014-04-28 21:07:27 +0530
message:
  Bug #18167356: EXPLAIN W/ EXISTS(SELECT* UNION SELECT*) WHERE ONE OF
  SELECT* IS DISTINCT FAILS.

  ISSUE:
  ------
  There are two issues related to EXPLAIN of a union.
  1. If a subquery contains a union of selects and one of the selects
     needs a temporary table to materialize its results, it replaces
     its query structure with a simple select from the temporary table.
     Trying to display this new internal temporary table scan resulted
     in a crash. To display the query plan, we should save the original
     query structure.
  2. Repeated execution of a prepared EXPLAIN statement containing a
     union of subqueries resulted in a crash. With constant subqueries,
     the fake select used in the union operation is evaluated once
     before being used for EXPLAIN. During the first execution the fake
     select options were set to SELECT_DESCRIBE but were not reset
     after the EXPLAIN. Hence, during the next execution of the
     prepared statement, the first evaluation of the fake select still
     ran with SELECT_DESCRIBE set, which resulted in improperly
     initialized data structures and a crash.

  SOLUTION:
  ---------
  1. If called by EXPLAIN, we now save the original query structure and
     use it for display.
  2. Reset the fake select options after they are used for EXPLAIN of
     the union.
------------------------------------------------------------
revno: 5905 [merge]
committer: Nisha Gopalakrishnan <nisha.gopalakrishnan@oracle.com>
branch nick: mysql-5.6-17994219
timestamp: Mon 2014-04-28 19:37:33 +0530
message:
  Merge from mysql-5.5 to mysql-5.6
------------------------------------------------------------
revno: 2875.468.23
committer: Nisha Gopalakrishnan <nisha.gopalakrishnan@oracle.com>
branch nick: mysql-5.5-17994219
timestamp: Mon 2014-04-28 16:28:09 +0530
message:
  BUG#17994219: CREATE TABLE .. SELECT PRODUCES INVALID STRUCTURE,
  BREAKS RBR

  Analysis:
  --------
  A table created using a query of the format:
    CREATE TABLE t1 AS SELECT REPEAT('A',1000) DIV 1 AS a;
  breaks row-based replication. The query above creates a table with a
  field of datatype 'bigint' and a display width of 3000, which is
  beyond the maximum acceptable value of 255. In RBR mode, a CREATE
  TABLE SELECT statement is replicated as a combination of a CREATE
  TABLE statement equivalent to the one returned by SHOW CREATE TABLE,
  plus row events for the rows inserted. When this CREATE TABLE event
  is executed on the slave, an error is reported:
    Display width out of range for column 'a' (max = 255)
  The following is the output of 'SHOW CREATE TABLE t1':
    CREATE TABLE t1(`a` bigint(3000) DEFAULT NULL)
    ENGINE=InnoDB DEFAULT CHARSET=latin1;

  The problem is due to the combination of two facts:
  1) The CREATE TABLE SELECT statement uses the display width of the
     result of the DIV operation as the display width of the created
     column, without validating the width for an out-of-bounds
     condition.
  2) The DIV operation incorrectly returns the length of its first
     argument as the display width of its result, thereby allowing the
     creation of a table with an incorrect display width of 3000 for
     the field.

  Fix:
  ----
  This fix changes the DIV operation implementation to correctly
  evaluate the display width of its result. We check whether the
  estimated width of DIV's result exceeds the maximum width for an
  integer value (21) and, if so, cap it at that maximum. This patch
  also fixes the maximum display width evaluation for the DIV function
  when its first argument is in UCS2.
------------------------------------------------------------
revno: 5904
committer: Nisha Gopalakrishnan <nisha.gopalakrishnan@oracle.com>
branch nick: mysql-5.6-18158639
timestamp: Sat 2014-04-26 09:03:58 +0530
message:
  BUG#18158639: MATERIALIZED CURSOR + FLUSH TABLES CRASH WHEN FETCHING
  VARIABLE

  Analysis:
  --------
  Concurrent execution of a stored program having a cursor and a FLUSH
  TABLES operation may cause mysqld to crash. The crash can be observed
  when the events occur in the following sequence:
  a) Cursor OPEN executes the SELECT statement for which the table is
     opened.
  b) A flush table operation is triggered and finds that a table share
     object is present. Its version is marked as zero to ensure that
     the share is removed when it is no longer referenced.
  c) Since the share version is old and the share is referenced, the
     flush table operation waits until the flush request is granted.
  d) The SELECT statement execution for cursor OPEN closes all tables
     except the internal temporary table used by the cursor for saving
     the materialized records.
  e) While closing the table, since the table share is an old version
     and there is a pending flush request, the flush request is
     granted. Thus the table share is deleted by awakening the FLUSH
     TABLES operation.
  f) During the cursor FETCH operation, the column type is checked for
     field conversion. To perform the check, the table share of
     orig_table in the field definition of the cursor temporary table
     is accessed. Since the share was deleted by the FLUSH operation,
     accessing the invalid memory may cause the server to crash.

  Fix:
  ---
  In the case of cursors, since all tables other than the temporary
  table are closed, the orig_table in the field definition for the
  internal temporary table is set to NULL. This is done once the
  metadata of the temporary table for the cursor is sent.
------------------------------------------------------------
revno: 5903 [merge]
committer: Balasubramanian Kandasamy <balasubramanian.kandasamy@oracle.com>
branch nick: mysql-5.6
timestamp: Thu 2014-04-24 11:56:18 +0200
message:
  Merge from 5.5 - Updated for 5.6.18
------------------------------------------------------------
revno: 2875.468.22 [merge]
committer: Balasubramanian Kandasamy <balasubramanian.kandasamy@oracle.com>
branch nick: mysql-5.5
timestamp: Thu 2014-04-24 11:06:02 +0200
message:
  - Support for enterprise packages
  - Upgrade from MySQL-* packages
  - Fix Cflags for el7
------------------------------------------------------------
revno: 2875.469.8
committer: Balasubramanian Kandasamy <balasubramanian.kandasamy@oracle.com>
branch nick: mysql-5.5.37-repo
timestamp: Mon 2014-04-07 16:36:09 +0200
message:
  Updated optflags variable and cmake option for debug build
------------------------------------------------------------
revno: 2875.469.7
committer: Balasubramanian Kandasamy <balasubramanian.kandasamy@oracle.com>
branch nick: mysql-5.5.37-repo
timestamp: Mon 2014-04-07 14:51:44 +0200
message:
  Fix Cflags for el7
------------------------------------------------------------
revno: 2875.469.6
committer: Balasubramanian Kandasamy <balasubramanian.kandasamy@oracle.com>
branch nick: mysql-5.5.37-repo
timestamp: Fri 2014-04-04 05:58:49 +0200
message:
  Changed permission for filter-requires.sh and filter-provides.sh
------------------------------------------------------------
revno: 2875.469.5
committer: Balasubramanian Kandasamy <balasubramanian.kandasamy@oracle.com>
branch nick: mysql-5.5.37-repo
timestamp: Thu 2014-04-03 12:56:26 +0200
message:
  Support for enterprise packages
------------------------------------------------------------
revno: 5902 [merge]
committer: Nisha Gopalakrishnan <nisha.gopalakrishnan@oracle.com>
branch nick: mysql-5.6-18080920
timestamp: Thu 2014-04-24 09:32:56 +0530
message:
  Merge from mysql-5.5 to mysql-5.6
------------------------------------------------------------
revno: 2875.468.21
committer: Nisha Gopalakrishnan <nisha.gopalakrishnan@oracle.com>
branch nick: mysql-5.5-18080920
timestamp: Thu 2014-04-24 09:30:21 +0530
message:
  BUG#18080920: CRASH; MY_REALLOC_STR DEREFERENCES NEGATIVE VALUE INTO
  CLIENT_ERRORS ARRAY

  Analysis:
  --------
  The client may crash while executing a statement due to the missing
  mapping of a server error to its equivalent client error. When trying
  to reallocate memory for the packet buffer, if the system is out of
  memory or the packet buffer is too large, the server errors
  'ER_OUT_OF_RESOURCES' or 'ER_PACKET_TOO_LARGE' are returned,
  respectively. The client error number calculated is negative, and
  when the array of client error messages is dereferenced with the
  calculated error number, the client crashes.

  Fix:
  ----
  Map the returned server error to its equivalent client error prior to
  dereferencing the array of client error messages.

  Note: A test case is not added since it is difficult to simulate the
  error condition.
------------------------------------------------------------
revno: 5901 [merge]
committer: Tor Didriksen <tor.didriksen@oracle.com>
branch nick: 5.6-merge
timestamp: Wed 2014-04-23 17:04:55 +0200
message:
  merge 5.5 => 5.6
------------------------------------------------------------
revno: 2875.468.20
committer: Tor Didriksen <tor.didriksen@oracle.com>
branch nick: 5.5-merge
timestamp: Wed 2014-04-23 17:01:35 +0200
message:
  Backport from trunk:
  Bug#18396916 MAIN.OUTFILE_LOADDATA TEST FAILS ON ARM, AARCH64,
  PPC/PPC64

  The recorded results for the failing tests were wrong. They were
  introduced by the patch for
    Bug#30946 mysqldump silently ignores --default-character-set when
    used with --tab
  Correct results were returned for platforms where 'char' is
  implemented as unsigned. This was reported as
    Bug#46895 Test "outfile_loaddata" fails (reproducible)
    Bug#11755168 46895: TEST "OUTFILE_LOADDATA" FAILS (REPRODUCIBLE)
  The patch for that bug fixed only parts of the problem, leaving the
  incorrect results in the .result file.

  Solution: use 'uchar' for field_terminator and line_terminator on all
  platforms. Also: remove some unnecessary casts, leaving the ones we
  actually need.
------------------------------------------------------------
revno: 5900
committer: Erlend Dahl <erlend.dahl@oracle.com>
branch nick: mysql-5.6
timestamp: Wed 2014-04-23 13:43:28 +0200
message:
  5.6 version of patch for
  WL#7689 Deprecate and remove mysqlbug
  WL#7826 Deprecate and remove mysql_zap and mysql_waitpid
------------------------------------------------------------
revno: 5899 [merge]
committer: Igor Solodovnikov <igor.solodovnikov@oracle.com>
branch nick: mysql-5.6
timestamp: Wed 2014-04-23 12:47:09 +0300
message:
  Merge from mysql-5.5
------------------------------------------------------------
revno: 2875.468.19
committer: Igor Solodovnikov <igor.solodovnikov@oracle.com>
branch nick: mysql-5.5
timestamp: Wed 2014-04-23 12:46:00 +0300
message:
  Bug #17514920 MYSQL_THREAD_INIT() CALL WITHOUT MYSQL_INIT() IS
  CRASHING IN WINDOWS

  It is an error to call mysql_thread_init() before libmysql is
  initialized with mysql_library_init(). Thus, to fix this bug we need
  to detect whether the library was initialized and return an error
  result if mysql_thread_init() is called with an uninitialized
  library. Fixed by checking my_thread_global_init_done and returning
  nonzero if the library is not initialized.
------------------------------------------------------------
revno: 5898
committer: Vasil Dimov <vasil.dimov@oracle.com>
branch nick: mysql-5.6
timestamp: Tue 2014-04-22 16:28:49 +0300
message:
  Followup to vasil.dimov@oracle.com-20140409172641-jf52d93p1f1yo1x1:
  Add an explicit typecast, needed to fix a Windows 32 only compilation
  error:
    ...\os0once.h(89): error C2664: '_InterlockedCompareExchange' :
    cannot convert parameter 1 from 'volatile os_once::state_t *' to
    'volatile long *'
    Types pointed to are unrelated; conversion requires
    reinterpret_cast, C-style cast or function-style cast
------------------------------------------------------------
revno: 5897
committer: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
branch nick: mysql-5.6
timestamp: Tue 2014-04-22 18:32:55 +0530
message:
  Bug#18069107 SLAVE CRASHES WITH GTIDS, TEMP TABLE, STOP IO_THREAD,
  START SLAVE
  Fixing post-push pb2 failure
------------------------------------------------------------
revno: 5896
committer: Nuno Carvalho <nuno.carvalho@oracle.com>
branch nick: mysql-5.6
timestamp: Sun 2014-04-20 20:11:56 +0100
message:
  BUG#18432744: RPL_CORRUPTION TEST FAILING ON 5.6

  The rpl_corruption test was sporadically failing on PB2 with a
  different slave error code than expected when run with gtid-mode=ON.
  When gtid-mode is ON, and in particular when MASTER_AUTO_POSITION=1,
  the slave informs the master which transactions it has already
  retrieved and applied, so that the master (dump thread) can move
  forward in the binary logs until it reaches the missing transactions
  and sends them to the slave. The event corruption injected through
  the corrupt_read_log_event debug flag was also corrupting
  Previous_gtids_log_event and Gtid_log_event events, preventing the
  dump thread from walking through the binary logs, which caused the
  unexpected ER_MASTER_FATAL_ERROR_READING_BINLOG error. Also, the
  injected corruption only affected the first event read from the
  binary log; since the dump thread was skipping the transactions
  already sent to the slave, the corrupted read event was never sent
  to the slave.

  Fixed the failure by:
  1) Excluding Previous_gtids_log_event and Gtid_log_event events from
     the injected corruption.
  2) Corrupting all read events when the corrupt_read_log_event debug
     flag is set.
------------------------------------------------------------
revno: 5895
committer: Venkata Sidagam <venkata.sidagam@oracle.com>
branch nick: 5.6
timestamp: Fri 2014-04-18 16:22:06 +0530
message:
  Bug #17235179 OPTIMIZE AFTER A DELETE RETURNS ERROR 0 CAN'T GET STAT
  OF MYD FILE

  Description: It is impossible to either REPAIR or OPTIMIZE .MYD files
  bigger than 4GB. Only the Windows versions are affected.

  Analysis: In the my_copystat() function, the stat() system call on
  Windows is unable to get file information for files greater than 4GB.
  Hence stat() fails with an error, OPTIMIZE TABLE on such ".MYD" files
  fails, and the corresponding .TMD file (temporary .MYD file) is not
  deleted.

  Fix: Call the Windows-specific _stati64() system call instead of
  stat() on Windows, and keep the stat() call for other OSes.
------------------------------------------------------------
revno: 5894 [merge]
committer: Igor Solodovnikov <igor.solodovnikov@oracle.com>
branch nick: mysql-5.6
timestamp: Thu 2014-04-17 16:35:23 +0300
message:
  Merge from mysql-5.5
------------------------------------------------------------
revno: 2875.468.18
committer: Igor Solodovnikov <igor.solodovnikov@oracle.com>
branch nick: mysql-5.5
timestamp: Thu 2014-04-17 16:33:55 +0300
message:
  Bug #18053212 MYSQL_GET_SERVER_VERSION() CALL WITHOUT A VALID
  CONNECTION RESULTS IN SEG FAULT

  When there is no connection, mysql_get_server_version() will return 0
  and report the CR_COMMANDS_OUT_OF_SYNC error.
------------------------------------------------------------
revno: 5893
committer: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
branch nick: mysql-5.6
timestamp: Wed 2014-04-16 22:36:55 +0530
message:
  Bug#18069107 SLAVE CRASHES WITH GTIDS, TEMP TABLE, STOP IO_THREAD,
  START SLAVE

  Problem: When the slave SQL thread detects that the master was
  restarted, with the help of the information sent in the master's
  'Format Description' event, it drops all the opened temporary tables
  in order to clean up properly. When GTID mode is on, and while the
  slave SQL thread is generating the DROP TEMPORARY statements for
  these temporary tables, the server hits the assert
  DBUG_ASSERT(gtid.spec_type != UNDEF_GROUP).

  Analysis: When the server cleaned up a thread (for example after a
  client disconnected), it set the thd's GTID_NEXT variable to
  AUTOMATIC in THD::cleanup, just before calling
  close_temporary_tables(thd), in order to generate proper GTID events
  for any 'DROP TEMPORARY' query created. No problem here. When the
  slave SQL thread applies an FD/Start_log_event_v3 supposed to be from
  a master that had just (re)started, it assumes that the master no
  longer has the temporary tables, so it needs to clean up the current
  temporary tables by calling close_temporary_tables(thd). The slave
  SQL thread always starts with GTID_NEXT set to AUTOMATIC, so when the
  first FD event sent by the master after the slave SQL thread started
  triggered close_temporary_tables(thd), GTID_NEXT was still AUTOMATIC
  and the assert was not hit. But when the slave SQL thread applies
  transactions with GTIDs, at commit/rollback of each transaction
  GTID_NEXT is set to UNDEF_GROUP, and a thread whose GTID_NEXT is
  UNDEF_GROUP cannot generate Gtid events. Hence, after applying some
  transactions, the slave SQL thread hit this assert when it had to
  close temporary tables upon detecting that the master had restarted.

  Fix: Moved the code that sets GTID_NEXT to AUTOMATIC from
  THD::cleanup() to close_temporary_tables(THD *thd).
------------------------------------------------------------
revno: 5892 [merge]
committer: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
branch nick: Bug17942050_mysql-5.6
timestamp: Tue 2014-04-15 15:26:56 +0530
message:
  Merge from mysql-5.5 to mysql-5.6.
------------------------------------------------------------
revno: 2875.468.17
committer: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
branch nick: Bug17942050_mysql-5.5
timestamp: Tue 2014-04-15 15:17:25 +0530
message:
  Bug#17942050: KILL OF TRUNCATE TABLE WILL LEAD TO BINARY LOG WRITTEN
  WHILE ROWS REMAINS

  Problem:
  ========
  When TRUNCATE TABLE fails on a transactional engine, the operation
  errors out, yet we still continue and write the statement to the
  binlog. Because of this the master keeps its data, but the TRUNCATE
  is written to the binary log anyway, which causes inconsistency.

  Analysis:
  ========
  Truncating a table happens either through drop and re-create of the
  table, or by deleting rows. In the second case the existing code is
  written in such a way that even if an error occurs the TRUNCATE
  statement is always binlogged, which is not correct. Binlogging of a
  TRUNCATE TABLE statement should check whether the truncate was
  executed transactionally or not. If the table is transaction-based,
  we log the TRUNCATE TABLE only on successful completion. If the
  table is non-transactional, partial changes may have been applied on
  error, so in such cases we do log in spite of errors, as some of the
  rows might have been removed and the statement has to be sent to the
  slave.

  Fix:
  ===
  Using the table handler, we identify whether the TRUNCATE TABLE is
  executed in transactional mode or not, and binlog the statement
  accordingly.
------------------------------------------------------------
revno: 5891
committer: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
branch nick: Bug18542111_mysql-5.6
timestamp: Sat 2014-04-12 15:45:55 +0530
message:
  Bug#18542111: ADD A TEST CASE TO TEST THE BINLOG TRANSACTION CACHE
  SIZE TO 32768
------------------------------------------------------------
revno: 5890 [merge]
committer: Georgi Kodinov <georgi.kodinov@oracle.com>
branch nick: mysql-5.6
timestamp: Fri 2014-04-11 11:06:53 +0300
message:
  merge
------------------------------------------------------------
revno: 2875.468.16
committer: Georgi Kodinov <georgi.kodinov@oracle.com>
branch nick: mysql-5.5
timestamp: Fri 2014-04-11 10:42:30 +0300
message:
  Addendum #1 to the fix for bug #18359924
  Removed unused variable. Fixed long (>80 lines)
------------------------------------------------------------
revno: 5889 [merge]
committer: Georgi Kodinov <georgi.kodinov@oracle.com>
branch nick: B18359924-5.6
timestamp: Thu 2014-04-10 17:42:45 +0300
message:
  auto-merge
------------------------------------------------------------
revno: 2875.468.15
committer: Georgi Kodinov <georgi.kodinov@oracle.com>
branch nick: B18359924-5.5
timestamp: Thu 2014-04-10 13:18:32 +0300
message:
  Bug #18359924: INNODB AND MYISAM CORRUPTION ON PREFIX INDEXES

  The problem was in the validation of the input data for blob types.
  When assigned binary data, the character blob types only checked
  whether the length of this data is a multiple of the minimum
  character length for the destination charset. And since e.g. UTF-8's
  minimum character length is 1 (because it is a variable-length
  encoding), even byte sequences that are invalid UTF-8 strings (e.g. a
  wrong leading byte, etc.) were copied verbatim into utf-8 columns
  when coming from binary strings or fields. Storing invalid data in
  string columns had all kinds of ill effects on code that assumed the
  encoded data was valid to begin with.

  Fixed by additionally checking the incoming binary string for
  validity when assigning it to a non-binary string column. Made sure
  the conversions to charsets with no known "invalid" ranges are not
  covered by the extra check. Removed trailing spaces. Test case added.
------------------------------------------------------------
revno: 5888
committer: Vasil Dimov <vasil.dimov@oracle.com>
branch nick: mysql-5.6
timestamp: Wed 2014-04-09 20:27:43 +0300
message:
  Non-functional change: rename dict_table_stats_latch_key to
  dict_table_stats_key, so that they have the same name in mysql-5.6
  and mysql-trunk.
------------------------------------------------------------
revno: 5887
committer: Vasil Dimov <vasil.dimov@oracle.com>
branch nick: mysql-5.6
timestamp: Wed 2014-04-09 20:27:09 +0300
message:
  Backport the following changeset from mysql-trunk to mysql-5.6:

  ** revision-id: vasil.dimov@oracle.com-20140401162003-zfhuxtu710846jd9
  ** committer: Vasil Dimov <vasil.dimov@oracle.com>
  ** branch nick: mysql-trunk
  ** timestamp: Tue 2014-04-01 19:20:03 +0300
  ** message:
  **   Fix Bug#71708 70768 fix perf regression: high rate of RW lock
  **   creation and destruction
  **
  **   Lazily create dict_table_t::stats_latch the first time it is
  **   used. It may not be used at all in the lifetime of some
  **   dict_table_t objects.
  **
  **   Approved by: Bin (rb:4739)
------------------------------------------------------------
revno: 5886
committer: Vasil Dimov <vasil.dimov@oracle.com>
branch nick: mysql-5.6
timestamp: Wed 2014-04-09 20:26:41 +0300
message:
  Backport the following changeset from mysql-trunk to mysql-5.6:

  ** revision-id: vasil.dimov@oracle.com-20140401123646-qvr25kcca2bntx9q
  ** committer: Vasil Dimov <vasil.dimov@oracle.com>
  ** branch nick: mysql-trunk
  ** timestamp: Tue 2014-04-01 15:36:46 +0300
  ** message:
  **   Non-functional change: use InterlockedCompareExchange() instead
  **   of win_cmp_and_xchg_dword() for the macro
  **   os_compare_and_swap_uint32() on Windows.
  **   win_cmp_and_xchg_dword() only calls InterlockedCompareExchange()
  **   so it can be skipped altogether.
  **
  **   The macro os_compare_and_swap_uint32() is not used anywhere in
  **   the code.
------------------------------------------------------------
revno: 5885
committer: Bjorn Munch <bjorn.munch@oracle.com>
branch nick: main-56
timestamp: Thu 2014-04-10 10:28:42 +0200
message:
  Increase version number, this will not be 5.6.18
------------------------------------------------------------
revno: 5884 [merge]
committer: Arun Kuruvila <arun.kuruvila@oracle.com>
branch nick: mysql-5.6
timestamp: Thu 2014-04-10 11:12:28 +0530
message:
  Null merge from mysql-5.5 to mysql-5.6.
------------------------------------------------------------
revno: 2875.468.14
committer: Arun Kuruvila <arun.kuruvila@oracle.com>
branch nick: mysql-5.5
timestamp: Thu 2014-04-10 11:10:31 +0530
message:
  Bug #18065452 "PREPARING" STATE HOGS CPU WITH ARCHIVE + SUBQUERY

  Description: When we execute a correlated subquery on an ARCHIVE
  table that uses an auto-increment column, the server hangs. In order
  to recover the mysqld process, it has to be terminated abnormally
  using SIGKILL. The problem is observed in mysql-5.5.

  Analysis: This happens because the server is trapped inside an
  infinite loop in the function
  "subselect_indexsubquery_engine::exec()", which resolves the
  correlated subquery by doing an index lookup through the appropriate
  engine. In the case of the ARCHIVE engine, after reaching the end of
  records, "table->status" is not set to STATUS_NOT_FOUND, and as a
  result the loop never terminates.

  Fix: "table->status" is set to STATUS_NOT_FOUND when the end of
  records is reached.
------------------------------------------------------------ revno: 5883 committer: Vasil Dimov <vasil.dimov@oracle.com> branch nick: mysql-5.6 timestamp: Tue 2014-04-08 19:08:35 +0300 message: Backport the following changeset from mysql-trunk to mysql-5.6: ** revision-id: vasil.dimov@oracle.com-20140403163612-rjymqwuzkh6vs6dj ** committer: Vasil Dimov <vasil.dimov@oracle.com> ** branch nick: mysql-trunk ** timestamp: Thu 2014-04-03 19:36:12 +0300 ** message: ** Followup to vasil.dimov@oracle.com-20140403070651-w1nefsafrqeid6ct: ** Increase the margin of allowed deviance for n_rows - with 4k page size ** it could sometimes be 745 (27% away from the actual value 1024). ------------------------------------------------------------ revno: 5882 committer: Vasil Dimov <vasil.dimov@oracle.com> branch nick: mysql-5.6 timestamp: Tue 2014-04-08 19:07:27 +0300 message: Backport the following changeset from mysql-trunk to mysql-5.6: ** revision-id: vasil.dimov@oracle.com-20140403070651-w1nefsafrqeid6ct ** committer: Vasil Dimov <vasil.dimov@oracle.com> ** branch nick: mysql-trunk ** timestamp: Thu 2014-04-03 10:06:51 +0300 ** message: ** Fix Bug#18384390 WRONG STATISTICS WITH BIG ROW LENGTH AND PERSISTENT STATS ** ** Estimate the number of external pages when scanning any leaf page and subtract ** that estimate from index->stat_n_leaf_pages when calculating ** index->stat_n_diff_key_vals[]. 
** ** Approved by: Kevin, Satya (rb:4956) ------------------------------------------------------------ revno: 5881 committer: Vasil Dimov <vasil.dimov@oracle.com> branch nick: mysql-5.6 timestamp: Tue 2014-04-08 16:00:23 +0300 message: Backport the following changeset from mysql-trunk to mysql-5.6: ** revision-id: vasil.dimov@oracle.com-20140403070224-eu2mw56ut6ydp354 ** committer: Vasil Dimov <vasil.dimov@oracle.com> ** branch nick: mysql-trunk ** timestamp: Thu 2014-04-03 10:02:24 +0300 ** message: ** Non-functional change: ** ** Delay the calculation of each dict_index_t::stat_n_diff_key_vals[n_prefix] ** until after data is gathered for all n-column prefixes. This is a noop (non ** functional change) - still the same numbers will be used to calculate ** dict_index_t::stat_n_diff_key_vals[], but it will help fix ** Bug#18384390 WRONG STATISTICS WITH BIG ROW LENGTH AND PERSISTENT STATS ** ** The problem in that bug is that index->stat_n_leaf_pages is bloated by ** the number of externally stored pages, which should not really be counted ** in the formula that derives dict_index_t::stat_n_diff_key_vals[]. ** ** To fix this we need to estimate the number of external pages in the ** index and subtract it from dict_index_t::stat_n_leaf_pages in that formula. ** ** In a subsequent change we will estimate the number of external pages while ** sampling each page for all possible n-column prefixes and then this cumulative ** result will be used when calculating each member of ** dict_index_t::stat_n_diff_key_vals[].
** ** The code before this change: ** ** dict_stats_analyze_index() ** for each n prefix ** dict_stats_analyze_index_for_n_prefix() ** sample some pages and save the n_diff results in ** index->stat_n_diff_key_vals[] using index->stat_n_leaf_pages in the ** formula ** ** The code after this change (equivalent): ** ** dict_stats_analyze_index() ** for each n prefix ** dict_stats_analyze_index_for_n_prefix() ** sample some pages and save the n_diff results in a temporary place ** for each n prefix ** // new function, code moved from dict_stats_analyze_index_for_n_prefix() ** dict_stats_index_set_n_diff() ** set index->stat_n_diff_key_vals[] using ** index->stat_n_leaf_pages in the formula ** ** Further planned change that will actually fix the bug: ** ** dict_stats_analyze_index() ** for each n prefix ** dict_stats_analyze_index_for_n_prefix() ** sample some pages and save the n_diff results in a temporary place ** and also accumulate an estimate about the number of external pages ** when scanning each leaf page ** for each n prefix ** dict_stats_index_set_n_diff() ** set index->stat_n_diff_key_vals[] using ** "index->stat_n_leaf_pages - number_of_external_pages" in the formula ** ** Approved by: Satya (rb:4955) ------------------------------------------------------------ revno: 5880 [merge] committer: Serge Kozlov <serge.kozlov@oracle.com> branch nick: mysql-5.6 timestamp: Fri 2014-04-04 12:06:10 +0400 message: Bug#18506556 Merge 5.5->5.6 ------------------------------------------------------------ revno: 2875.468.13 committer: Serge Kozlov <serge.kozlov@oracle.com> branch nick: mysql-5.5 timestamp: Fri 2014-04-04 10:42:25 +0400 message: BUG#18506556. 
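The restructuring outlined above can be sketched as a two-pass routine (a toy model with hypothetical names and a deliberately simplified scaling formula, not InnoDB's actual statistics code): pass 1 only gathers per-prefix sampled distinct counts, and pass 2 sets the index statistics in one place, which lets a later fix subtract an external-page estimate from the leaf-page count before scaling.

```cpp
#include <cassert>
#include <vector>

// Toy index statistics holder (hypothetical simplification).
struct ToyIndex {
  long stat_n_leaf_pages;
  std::vector<long> stat_n_diff_key_vals;
};

// Toy scaling: extrapolate a sampled distinct count to the whole index.
static long scale(long n_diff_sampled, long pages_sampled, long leaf_pages) {
  return n_diff_sampled * leaf_pages / pages_sampled;
}

// After the change: sample first (pass 1), then set the statistics for
// every n-column prefix in one place (pass 2, the dict_stats_index_set_n_diff
// role). Passing n_external_pages_estimate = 0 reproduces the old behaviour.
void analyze_index(ToyIndex &idx, const std::vector<long> &sampled_n_diff,
                   long pages_sampled, long n_external_pages_estimate) {
  std::vector<long> raw = sampled_n_diff;  // pass 1: gather only
  long effective_leaf = idx.stat_n_leaf_pages - n_external_pages_estimate;
  idx.stat_n_diff_key_vals.clear();
  for (long nd : raw)                      // pass 2: set stats
    idx.stat_n_diff_key_vals.push_back(
        scale(nd, pages_sampled, effective_leaf));
}
```

With a zero estimate this is the "noop" restructuring; the planned Bug#18384390 fix passes a nonzero estimate so externally stored (BLOB) pages no longer inflate the scaled statistics.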
Added sync slave with master for clean-up ------------------------------------------------------------ revno: 5879 committer: bin.x.su@oracle.com branch nick: mysql-5.6 timestamp: Fri 2014-04-04 11:35:27 +0800 message: BUG 18277082 - FTS: STACK BUFFER OVERFLOW IN INNOBASE_STRNXFRM AND TIS620 When we run a Windows debug build or use a Linux Address Sanitizer build, there would be a failure or stack-buffer-overflow in innobase_strnxfrm. The root cause is that my_strnxfrm_tis620 would access one more byte of the dst parameter and set it to '\0', which is unnecessary in this case. So we just do not set this terminating '\0'. Approved by Tor, rb#5051. ------------------------------------------------------------ revno: 5878 committer: Aditya A <aditya.a@oracle.com> branch nick: mysql-5.6 timestamp: Wed 2014-04-02 10:50:50 +0530 message: Bug #13029450 OFF BY ONE ERROR IN INNODB_MAX_DIRTY_PAGES_PCT LOGIC If the percentage of dirty pages in the buffer pool exceeds innodb_max_dirty_pages_pct (set by the user) then we flush the pages. If the user sets innodb_max_dirty_pages_pct=0, then the flushing mechanism will not kick in unless the percentage of dirty pages reaches at least 1%. For huge buffer pools even 1% of the buffer pool can be a huge number. FIX --- Flush the dirty pages in the buffer pool if the percentage of dirty pages is greater than zero and innodb_max_dirty_pages_pct is set to zero.
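The corrected condition for Bug #13029450 can be condensed into a single predicate (a sketch with a hypothetical function name, not the actual InnoDB flushing code): when the configured maximum is zero, any dirty page at all triggers flushing instead of waiting for dirty pages to round up to 1% of the pool.

```cpp
#include <cassert>

// Hypothetical sketch of the corrected flush decision: with
// innodb_max_dirty_pages_pct = 0, flush as soon as any page is dirty;
// otherwise keep the ordinary threshold comparison.
bool should_flush(double dirty_pct, double max_dirty_pct) {
  if (max_dirty_pct == 0.0)
    return dirty_pct > 0.0;          // the fix: flush on any dirty page
  return dirty_pct > max_dirty_pct;  // unchanged threshold behaviour
}
```

Before the fix, a pool with (say) 0.5% dirty pages and a zero setting was never flushed, even though 0.5% of a huge buffer pool is a large absolute number of pages.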
[Approved by vasil #rb4776] ------------------------------------------------------------ revno: 5877 [merge] committer: Thirunarayanan B <thirunarayanan.balathandayuth@oracle.com> branch nick: mysql-5.6 timestamp: Tue 2014-04-01 11:38:02 +0530 message: Bug #17858679 TOO MANY TIMES OF MEMSET DECREASE THE PERFORMANCE UNDER HEAVY INSERT Null merge from mysql-5.5 ------------------------------------------------------------ revno: 2875.468.12 committer: Thirunarayanan B <thirunarayanan.balathandayuth@oracle.com> branch nick: mysql-5.5 timestamp: Tue 2014-04-01 11:36:58 +0530 message: Bug #17858679 TOO MANY TIMES OF MEMSET DECREASE THE PERFORMANCE UNDER HEAVY INSERT Fixing the build problem in 5.5. ------------------------------------------------------------ revno: 5876 [merge] committer: Thirunarayanan B <thirunarayanan.balathandayuth@oracle.com> branch nick: mysql-5.6 timestamp: Tue 2014-04-01 10:49:54 +0530 message: Bug #17858679 TOO MANY TIMES OF MEMSET DECREASE THE PERFORMANCE UNDER HEAVY INSERT Merge from 5.5 ------------------------------------------------------------ revno: 2875.468.11 committer: Thirunarayanan B <thirunarayanan.balathandayuth@oracle.com> branch nick: mysql-5.5 timestamp: Tue 2014-04-01 10:46:13 +0530 message: Bug #17858679 TOO MANY TIMES OF MEMSET DECREASE THE PERFORMANCE UNDER HEAVY INSERT Problem: There are three memset calls to initialize memory for system fields in each insert. Solution: Instead of calling memset three times, we can combine the three calls into one. It will reduce the CPU usage under heavy insert. Approved by Marko rb-4916 ------------------------------------------------------------ revno: 5875 committer: Joao Gramacho <joao.gramacho@oracle.com> branch nick: mysql-5.6 timestamp: Mon 2014-03-31 16:53:58 +0100 message: BUG#18482854 RPL : ROTATE_LOG_EVENT INCORRECTLY ADVANCES GROUP_RELAY_LOG_POS IN A GROUP Problem: ======= Rotate events can cause the group_relay_log_pos to be incorrectly moved forward within a group.
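The memset consolidation for Bug #17858679 amounts to the following (an illustration with made-up field widths, not InnoDB's actual system-field sizes or code): when the zero-initialized fields sit in one contiguous buffer, a single memset over the combined length replaces three separate calls and produces an identical result.

```cpp
#include <cassert>
#include <cstring>

// Illustrative field widths only; not InnoDB's actual system-field sizes.
enum { F1_LEN = 6, F2_LEN = 6, F3_LEN = 7 };

// Before: one memset call per system field.
void init_three_calls(unsigned char *buf) {
  memset(buf, 0, F1_LEN);
  memset(buf + F1_LEN, 0, F2_LEN);
  memset(buf + F1_LEN + F2_LEN, 0, F3_LEN);
}

// After: one combined memset over the contiguous fields.
void init_one_call(unsigned char *buf) {
  memset(buf, 0, F1_LEN + F2_LEN + F3_LEN);
}
```

Both variants leave the buffer in the same state; the gain is purely fewer function calls on a hot path executed once per inserted row.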
This means that when the transaction is retried, or if you stop the SQL thread in the middle of a transaction after some Rotates (considering that the transaction/group spanned multiple relay log files), the group (or part of the group from the beginning) will be silently skipped. Analysis: ======== A problem was found in the logic that avoids touching SQL thread coordinates in Rotate_log_event::do_update_pos(). The logic allowed updating SQL thread coordinates when not using MTS, regardless of rli->is_in_group() signaling that the applier was in the middle of a group. Fix: === Corrected the logic. ------------------------------------------------------------ revno: 5874 committer: Jon Olav Hauglid <jon.hauglid@oracle.com> branch nick: mysql-5.6-test timestamp: Mon 2014-03-31 15:09:59 +0200 message: Backport from mysql-trunk to mysql-5.6 of: ------------------------------------------------------------ revno: 7033 committer: Jon Olav Hauglid <jon.hauglid@oracle.com> branch nick: mysql-trunk-c11 timestamp: Wed 2013-11-27 13:54:59 +0100 message: Bug#14631159: ALLOW COMPILATION USING CLANG IN C++11 MODE This patch fixes the new compilation errors that are reported by Clang and GCC when compiling in C++11 mode. The patch is not based on the contribution in the bug report. ------------------------------------------------------------ revno: 5873 [merge] committer: Venkatesh Duggirala <venkatesh.duggirala@oracle.com> branch nick: mysql-5.6 timestamp: Fri 2014-03-28 17:12:45 +0530 message: Bug#18364070: Backporting Bug#18236612 to Mysql-5.6 Problem: When the Slave SQL thread detects that the Master was restarted, with the help of information sent by the Master through the 'FormatDescription' event, it drops all the opened temporary tables in order to have a proper cleanup. While the slave SQL thread is dropping the temporary tables, it is not decrementing the Slave_open_temp_tables count. Fix: Set slave_open_temp_tables=0 in close_temporary_tables(thd).
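The corrected decision in Rotate_log_event::do_update_pos() can be condensed to one predicate (a hypothetical condensation, not the actual replication code): group membership must veto the coordinate update even on the non-MTS path, which is exactly the check the buggy logic skipped.

```cpp
#include <cassert>

// Hypothetical sketch: may the Rotate event advance the SQL thread
// coordinates? The fix makes rli->is_in_group() (modelled here as
// in_group) override the non-MTS path; under MTS the coordinates are
// managed elsewhere and are not advanced here.
bool may_update_sql_coordinates(bool mts_enabled, bool in_group) {
  if (in_group) return false;  // the fix: never advance mid-group
  return !mts_enabled;         // non-MTS path may update coordinates
}
```

Before the fix, the non-MTS branch returned true even mid-group, so a Rotate inside a transaction spanning several relay logs silently advanced group_relay_log_pos past the group's beginning.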
------------------------------------------------------------ revno: 5853.1.1 committer: Venkatesh Duggirala <venkatesh.duggirala@oracle.com> branch nick: mysql-5.6 timestamp: Tue 2014-03-18 18:40:47 +0530 message: Bug#18364070: Backporting Bug#18236612 to Mysql-5.6 Problem: When the Slave SQL thread detects that the Master was restarted, with the help of information sent by the Master through the 'FormatDescription' event, it drops all the opened temporary tables in order to have a proper cleanup. While the slave SQL thread is dropping the temporary tables, it is not decrementing the Slave_open_temp_tables count. Fix: Set slave_open_temp_tables=0 in close_temporary_tables(thd). ------------------------------------------------------------ revno: 5872 [merge] author: laasya.moduludu@oracle.com committer: Laasya Moduludu <laasya.moduludu@oracle.com> branch nick: mysql-5.6 timestamp: Fri 2014-03-28 09:43:01 +0100 message: Merge from mysql-5.6.17-release ------------------------------------------------------------ revno: 5850.1.9 tags: mysql-5.6.17 committer: Balasubramanian Kandasamy <balasubramanian.kandasamy@oracle.com> branch nick: mysql-5.6.17-release timestamp: Fri 2014-03-14 19:45:58 +0100 message: Bug#18402229 - Resolve mysql conflict with mysql-community-client