#maria Log v0.1

This page loads from the database; refresh as and when needed. It displays the last 24 hours.
For older data, see http://marialog.archivist.info/previous.php; logs go back to 2009-01-18.
Table: logs (each line below: date_time <user> message)

2016-02-05 09:48:41 <markusjm> is there documentation on the actual protocol level implementation of the CRC32 checksum in binlogs?
2016-02-05 09:49:38 <markusjm> and especially how it works with events larger than 2^24 bytes
2016-02-05 09:50:58 <markusjm> the only thing that Google gave me was a MySQL worklog with a task to implement it
2016-02-05 09:52:24 <markusjm> also, is there protocol documentation on the KB?
2016-02-05 09:53:16 <markusjm> I know mysql has it but would be nice to have a MariaDB version especially for the future if there are MariaDB specific protocol changes
2016-02-05 09:58:48 <wlad> markusjm, when it comes to binlogs, sadly, source is your documentation
2016-02-05 09:59:02 <markusjm> :(
2016-02-05 09:59:41 <markusjm> I'd assume a very small minority actually wants that documentation :)
2016-02-05 10:00:36 <wlad> markusjm, well, only people who actually need it, i.e implement connectors or somesuch
2016-02-05 10:00:41 <knielsen_> markusjm: the mariadb binlog checksum is the same as the mysql one. It's basically extra 4 bytes of crc32 after every event. But the rules for exactly when these 4 bytes are present or not are complex (and error-prone, unfortunately)
2016-02-05 10:01:11 <markusjm> knielsen_: what I'm trying to figure out is when I have a binlog record which spans multiple SQL packets, is the checksum for the event or the packet
2016-02-05 10:01:41 <markusjm> I'd assume it's done for each packet
2016-02-05 10:01:52 <markusjm> since to my knowledge, it's aimed for error checking
2016-02-05 10:03:42 <knielsen> markusjm: hm, everything I've seen in binlog code works on events as the minimal unit (and the binlog checksums are per-event). I'm not familiar with how this is split into 2^24 (?) protocol-level packets, sorry - not even sure if there is also checksums at the lower level
2016-02-05 10:04:03 <markusjm> hmm
2016-02-05 10:04:27 <markusjm> well, I think the easiest way to see how it is, is to actually try and see what happens :D
2016-02-05 10:04:48 <knielsen> markusjm: also, note that binlog events can also be processed on the file level (reading/writing binlog/relaylog files), where there is no underlying protocol...
2016-02-05 10:05:09 <markusjm> and the same checksums are there?
2016-02-05 10:05:26 <knielsen> yes - if you are talking about the optional binlog checksums
2016-02-05 10:05:34 <markusjm> yeah, the CRC32 stuff
2016-02-05 10:05:51 <markusjm> based on that I don't think it's per-packet checksums
2016-02-05 10:06:16 <knielsen> --master-verify-checksum and such. Agree, there shouldn't be per-packet checksums related to binlogs
2016-02-05 10:06:35 <markusjm> ok, then I'm pretty sure it's done for each replication event
2016-02-05 10:06:50 <markusjm> I'll test and see if this assumption is true
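
For anyone reading along later: a minimal, file-level sketch of the experiment markusjm describes, under the assumptions knielsen states above (the optional checksum is 4 bytes of CRC32 trailing each event, and events can be read straight from a binlog/relay log file rather than off the wire). The 19-byte v4 event header layout and binlog_checksum=CRC32 being enabled are assumptions here; this is an illustration, not the server's own code.

```python
import struct
import sys
import zlib

MAGIC = b"\xfe\x62\x69\x6e"   # binlog file magic: 0xfe 'b' 'i' 'n'
HEADER_LEN = 19               # v4 common event header

def check_binlog(path):
    """Walk a binlog/relay log file and verify the trailing CRC32 of every event."""
    with open(path, "rb") as f:
        if f.read(4) != MAGIC:
            raise ValueError(f"{path} does not look like a binlog file")
        pos = 4
        while True:
            header = f.read(HEADER_LEN)
            if len(header) < HEADER_LEN:
                break  # end of file
            # bytes 9..12 of the header: total event length, little-endian,
            # including the header and (when checksums are on) the 4-byte CRC32
            event_len = struct.unpack("<I", header[9:13])[0]
            event = header + f.read(event_len - HEADER_LEN)
            stored = struct.unpack("<I", event[-4:])[0]
            computed = zlib.crc32(event[:-4]) & 0xFFFFFFFF
            status = "ok" if stored == computed else "MISMATCH"
            print(f"event at {pos}: type={event[4]} len={event_len} crc32={status}")
            pos += event_len

if __name__ == "__main__":
    check_binlog(sys.argv[1])
```
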
2016-02-05 10:07:05 <scoopex> how can i rollback all xa transactions on a mysql system, is there a shorthand?
2016-02-05 10:55:22 <ifohancroft> Hello everyone! Is it possible to use IGNORE INDEX (the_index) with UPDATE?
2016-02-05 11:03:30 <jkavalik> ifohancroft, should be, update expects table_reference which can contain index hints
2016-02-05 11:04:06 <ifohancroft> Thank you, jkavalik. Would you mind giving me an example?
2016-02-05 11:04:39 <jkavalik> ifohancroft, use it the same way you would with select
2016-02-05 11:05:02 <jkavalik> after "<tablename> as <tablealias>"
2016-02-05 11:05:40 <ifohancroft> well we've tried UPDATE some_table IGNORE INDEX (PRIMARY) SET some_column = 0 and we've got a syntax error
2016-02-05 11:08:29 <ifohancroft> Isn't this the right syntax?
2016-02-05 11:11:47 <jkavalik> ifohancroft, just tested "update t ignore index(val) set val = 1 where val = 1;" and it works for me (and has the right effect)
2016-02-05 11:12:56 <ifohancroft> weird
2016-02-05 11:12:58 <jkavalik> ifohancroft, whats the exact error? which part it does not understand?
2016-02-05 11:13:25 <ifohancroft> inside the () you put the index name, right? it doesn't have to match a particular column or you put the column name?
2016-02-05 11:14:10 <ifohancroft> or does it have to match the column being updated?
2016-02-05 11:16:37 <jkavalik> the index name, and index will usually be picked according to the WHERE conditions anyway, not by the updated column(s)
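
A small sketch of jkavalik's working test, run from a script instead of the mysql client. PyMySQL and the connection settings are assumptions, and the table/index name come from the test statement above; the point being illustrated is placement, i.e. the index hint goes right after the table reference (and its optional alias), before SET.

```python
import pymysql

# Connection parameters are illustrative only.
conn = pymysql.connect(host="localhost", user="test", password="test", database="test")
try:
    with conn.cursor() as cur:
        # The hint sits between the table reference and SET, exactly as in a
        # SELECT's FROM clause; an alias, if any, goes before the hint.
        cur.execute("UPDATE t IGNORE INDEX (val) SET val = 1 WHERE val = 1")
        print("affected rows:", cur.rowcount)
    conn.commit()
finally:
    conn.close()
```
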
2016-02-05 12:19:18 <Administrator__> hello
2016-02-05 12:21:52 <Christian87> I plan to make my website ha and scalable. I thought of the best possible setup and came up with the following:
2016-02-05 12:25:04 <Christian87> 1 webserver and 2 db servers in one location and 1 webserver and 2 db servers in another datacenter. Then use galera cluster to bundle all 4 nodes with 2 separated groups for each location
2016-02-05 12:25:59 <Christian87> then use maxscale on each webserver to tell them to primarily use the nodes in their dc, but with the possibility to fall back to the nodes in the other dc
2016-02-05 12:26:10 <Christian87> is this possible?
2016-02-05 12:26:27 <markusjm> should be
2016-02-05 12:27:01 <markusjm> but the problems come when the two datacenters lose connectivity to each other
2016-02-05 12:28:00 <markusjm> at least I believe Galera doesn't handle this very well
2016-02-05 12:28:40 <markusjm> one option would be to set up the webservers, maxscale and databases at different "HA layers"
2016-02-05 12:29:22 <markusjm> so that the web servers would always use the same maxscale to talk to the databases
2016-02-05 12:29:50 <markusjm> this way no conflicts happen if the connection between the datacenters is lost
2016-02-05 12:30:02 <markusjm> at least in theory :)
2016-02-05 12:35:37 <scoopex> "xa recover" shows me some transactions which have xid names with strange characters, therefore i am not able to execute a "xa rollback <xid>" on them... do you have an idea how to abort these transactions?
2016-02-05 12:40:56 <jkavalik> scoopex, not able to copy the xid?
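
scoopex's question never quite gets resolved in the log, so here is one hedged approach: hex-encode the xid parts returned by XA RECOVER and feed them back as X'...' literals, which sidesteps the unprintable characters. The main assumption is that the server accepts hexadecimal xid literals in XA ROLLBACK (MySQL 5.7 documents this alongside XA RECOVER CONVERT XID; older servers may refuse it). PyMySQL and the credentials are likewise assumptions.

```python
import pymysql

# latin1 keeps the raw xid bytes intact if the 'data' column comes back as text.
conn = pymysql.connect(host="localhost", user="root", password="secret", charset="latin1")
try:
    with conn.cursor() as cur:
        cur.execute("XA RECOVER")
        # columns: formatID, gtrid_length, bqual_length, data (gtrid + bqual concatenated)
        for format_id, gtrid_len, bqual_len, data in cur.fetchall():
            raw = data.encode("latin1") if isinstance(data, str) else bytes(data)
            gtrid_hex = raw[:gtrid_len].hex()
            bqual_hex = raw[gtrid_len:gtrid_len + bqual_len].hex()
            cur.execute(f"XA ROLLBACK X'{gtrid_hex}', X'{bqual_hex}', {int(format_id)}")
            print("rolled back xid", gtrid_hex, bqual_hex or "(empty bqual)")
finally:
    conn.close()
```
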
2016-02-05 12:41:53 <Christian87> markusjm: thank you i plan to use a single maxscale instance on every webserver
2016-02-05 12:41:59 <montywi> serg: need a bit of help with merge
2016-02-05 12:46:34 <montywi> wlad: ping
2016-02-05 12:46:35 <markusjm> Christian87: That seems to be at least a good first step
2016-02-05 12:46:42 <wlad> montywi, here
2016-02-05 12:47:07 <markusjm> if you're interested in talking about maxscale in detail, you could join us on #maxscale
2016-02-05 12:54:13 <serg> montywi: yes?
2016-02-05 13:09:10 <serg> SanjaByelkin?
2016-02-05 14:27:08 <SanjaByelkin> serg: yes
2016-02-05 14:33:36 <montywi> serg: solved the issue. Just would like to have information on how to start xemacs with the 3 files needed for doing an efficient merge...
2016-02-05 15:01:21 <serg> montywi: that's not for me. but I can tell you how to do it with vim, if you're interested :)
2016-02-05 15:01:46 <serg> SanjaByelkin: did you want me to review MDEV-9462?
2016-02-05 15:04:10 <elenst> SanjaByelkin: I would appreciate a comment on MDEV-6745. Do you understand how the patch for MDEV-6830 fixed MDEV-6745? If so, please leave it for future reference. If not, then possibly it's not fixed after all. Also, it makes sense to add the test case
2016-02-05 15:10:34 <SanjaByelkin> elenst: it is definitely fixed, but how, I have no idea (actually it looks like it found a merge, and I have no idea how to find the fix inside it)
2016-02-05 15:10:57 <SanjaByelkin> serg: yes, I have not found another reviewer
2016-02-05 15:10:59 <elenst> SanjaByelkin: i added the exact commit after which it disappeared
2016-02-05 15:11:04 <serg> SanjaByelkin: see elena's last comment in this mdev
2016-02-05 15:11:15 <elenst> SanjaByelkin: however, disappeared and got fixed are totally different things
2016-02-05 15:11:16 <SanjaByelkin> ah... let me see...
2016-02-05 15:11:42 <serg> SanjaByelkin: as for MDEV-9462 - commit d8c50ce ?
2016-02-05 15:12:32 <elenst> for example, the simplest way for an optimizer problem to disappear is if after a change the optimizer picks up a different execution plan. Then the old test case won't reveal it anymore, even though the problem is still there. I'm not saying it's the case here, it's just an example
2016-02-05 15:13:18 <SanjaByelkin> elenst: fixed
2016-02-05 15:14:04 <SanjaByelkin> and the fix is actually better than mine was (after all, Petrunia knows the internals of all those indexes better; it was the first time I looked at them)
2016-02-05 15:14:18 <elenst> please leave the comment and add a test case
2016-02-05 15:14:51 <SanjaByelkin> elenst: how your comment if not good enough?
2016-02-05 15:15:07 <SanjaByelkin> s/if/is/
2016-02-05 15:15:08 <elenst> SanjaByelkin: I just told you. Disappeared is not the same as fixed
2016-02-05 15:15:35 <elenst> if you know how that patch fixes this bug, please leave the comment for future reference
2016-02-05 15:15:50 <SanjaByelkin> ok, you want different wording
2016-02-05 15:16:56 <elenst> and mind you, I don't care in this case because it's my bug, but for external bugs it's important: "cannot reproduce" does not really apply here, it's another nasty MySQL habit (not only MySQL of course, it's very common)
2016-02-05 15:17:25 <SanjaByelkin> elenst: it is the same problem
2016-02-05 15:17:48 <elenst> s/babit/habit/
2016-02-05 15:17:57 <SanjaByelkin> elenst: you'd better tell me how you found the commit, so next time I won't get stuck on a merge commit?
2016-02-05 15:18:05 <elenst> oh, it was habit, i need to clean my screen :)
2016-02-05 15:18:29 <SanjaByelkin> :)
2016-02-05 15:18:46 <elenst> SanjaByelkin: I found the merge. I looked at the merge. There were only two commits in the merge. I tried them in order of probability, the first made the difference
2016-02-05 15:19:30 <elenst> there are so few changes in 5.3 nowadays, you don't even need to bother automating it
2016-02-05 15:19:40 <SanjaByelkin> elenst: "tried" is the interesting thing here - did you try it in 5.3 or just apply it manually?
2016-02-05 15:19:56 <elenst> applied manually, it's a one-line change
2016-02-05 15:20:25 <elenst> it applied smoothly
2016-02-05 15:20:52 <SanjaByelkin> aha, OK, this idea did not come to my mind; probably I was exhausted by the struggle with git bisect
2016-02-05 15:22:10 <elenst> heh... saw your comment. I guess I have to give up
2016-02-05 15:22:15 <elenst> i meant a meaningful comment
2016-02-05 15:22:36 <elenst> explaining *why* SP's fix fixed it
2016-02-05 15:23:58 <elenst> something one would put as a commit comment
2016-02-05 15:27:40 <montywi> serg: then tell me how to do it with vim
2016-02-05 15:45:27 <montywi> serg: solved it
2016-02-05 16:02:54 <tom[]> i have a MyISAM table with 10m rows. i run an UPDATE that using a table scan modifies one column for roughly half the rows. the column that's modified is part of an index. does mysqld update the index one node at a time as it updates rows? or is it done at the end of the query?
2016-02-05 16:03:22 <serg> tom[]: one at a time
2016-02-05 16:03:55 <tom[]> does set autocommit=0; UPDATE...; COMMIT change that?
2016-02-05 16:04:33 <serg> no. if you want to change it, try ALTER TABLE ... DISABLE KEYS; UPDATE ...; ALTER TABLE ... ENABLE KEYS
2016-02-05 16:04:45 <serg> tom[]: but only if your key is not UNIQUE.
2016-02-05 16:04:55 <tom[]> it is not
2016-02-05 16:05:08 <tom[]> thanks, serg
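
Spelled out as a script, serg's suggestion looks roughly like the following. The table and column names are made up for illustration and PyMySQL is an assumption; the important caveat is the one serg gives, namely that DISABLE KEYS only defers maintenance of non-unique MyISAM indexes, which are then rebuilt in one pass by ENABLE KEYS.

```python
import pymysql

# Connection settings and the names big_table, foo, bar are illustrative only.
conn = pymysql.connect(host="localhost", user="test", password="test", database="test")
try:
    with conn.cursor() as cur:
        cur.execute("ALTER TABLE big_table DISABLE KEYS")  # stop updating non-unique indexes per row
        cur.execute("UPDATE big_table SET foo = foo + 1 WHERE bar = 0")
        cur.execute("ALTER TABLE big_table ENABLE KEYS")   # rebuild those indexes in one pass
finally:
    conn.close()
```
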
2016-02-05 16:38:35 <tom[]> another UPDATE q
2016-02-05 16:38:38 <tom[]> i have two similar queries, both with a form exactly like: UPDATE t1, t2 SET t1.foo = t2.foo WHERE t1.t2_id = t2.id. they both work as expected. but with one, rows_affected is as expected, and with the other it is always zero, despite the fact that rows were affected
2016-02-05 16:38:57 <tom[]> why might this be? can i fix it?
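
This one goes unanswered in the channel. One common cause, offered here only as a guess: by default the server reports rows actually changed, so if the second query writes values that already match (t1.foo already equals t2.foo for every matched row), rows_affected is 0 even though rows matched. A client can ask for "rows matched" semantics instead via the FOUND_ROWS client flag; with PyMySQL (an assumption, as are the connection details) that looks like this.

```python
import pymysql
from pymysql.constants import CLIENT

conn = pymysql.connect(
    host="localhost", user="test", password="test", database="test",
    client_flag=CLIENT.FOUND_ROWS,  # report rows matched rather than rows changed
)
try:
    with conn.cursor() as cur:
        matched = cur.execute("UPDATE t1, t2 SET t1.foo = t2.foo WHERE t1.t2_id = t2.id")
        print("rows matched:", matched)  # without FOUND_ROWS this would be rows changed
    conn.commit()
finally:
    conn.close()
```
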
2016-02-05 19:15:43 <cek> Hi. How good is xtradb with logical restores? innodb is so fucking slow, even with all optimizations on.
2016-02-05 19:16:46 <thumbs> cek: is the profanity really needed?
2016-02-05 19:17:03 <cek> yes when innodb is involved
2016-02-05 19:17:19 <cek> and when it takes a day to import 50GB.
2016-02-05 19:18:03 <litheum> cek: are you importing SQL or TSV?
2016-02-05 19:18:05 <thumbs> use innobackup/xtrabackup
2016-02-05 19:18:14 <thumbs> and that question too.
2016-02-05 19:18:24 <cek> mysqldump --opt , sql.
2016-02-05 19:18:54 <thumbs> using a logical backup for 50GB of data is a pretty bad choice
2016-02-05 19:18:55 <cek> xtrabackup is fine, yes, but I've got that data, now i'm thinking should i let the import run for another half day or upgrade to xtradb and run the remainder
2016-02-05 19:19:05 <cek> thumbs: no shit cap
2016-02-05 19:20:00 <litheum> cek: upgrading to xtradb is not going to magically solve the problem for you. have you done the appropriate tuning to give this a better chance of completing quickly? disable binlog, set innodb_flush_log_at_trx_commit=0, large innodb logs, etc.?
2016-02-05 19:20:21 <cek> litheum: all that done sadly
2016-02-05 19:20:58 <serg> SanjaByelkin: MDEV-9462, why there was an OOM?
2016-02-05 19:21:04 <cek> innodb_log_file_size is 2gb, is that large enough?
2016-02-05 19:22:11 <litheum> cek: "large enough" is pretty relative. you'll quickly fill the log files if you are importing 50GB, which means lots of buffer pool flushing. if you have poor IO performance, that's gonna hit you pretty hard.
2016-02-05 19:24:09 <cek> okay, thanks.
2016-02-05 19:24:49 <litheum> but in order to increase those, you have to stop the import, stop mysql, and maybe do some even more scary stuff depending on the version
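
For reference, the dynamic knobs litheum lists can be applied from a session roughly like this (values are the usual bulk-import suggestions rather than a tuned recommendation; PyMySQL and the credentials are assumptions). innodb_log_file_size is not dynamic - raising it means editing my.cnf and restarting, which is the "scary stuff" mentioned above - and disabling binlogging per session only affects statements run on that same connection.

```python
import pymysql

conn = pymysql.connect(host="localhost", user="root", password="secret")
with conn.cursor() as cur:
    # flush the redo log roughly once a second instead of at every commit
    cur.execute("SET GLOBAL innodb_flush_log_at_trx_commit = 0")
    # skip binary logging for statements issued on this connection (needs SUPER);
    # the import itself must run with the same setting to benefit
    cur.execute("SET SESSION sql_log_bin = 0")
    cur.execute("SHOW VARIABLES LIKE 'innodb_log_file_size'")
    name, value = cur.fetchone()
    print(f"{name} = {value}  (static: change it in my.cnf and restart before importing)")
conn.close()
```
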
2016-02-05 19:47:15 <cek> I STOPped the import but mysql is still writing to disk with 100% busy rate, proclist is not showing anything relevant. Why?
2016-02-05 19:47:56 <litheum> cek: because the innodb logs are almost full and the server is flushing buffer pool pages
2016-02-05 19:48:05 <cek> for 5 minutes?
2016-02-05 19:48:45 <litheum> could go on for much longer than that
2016-02-05 19:48:51 <litheum> especially if you have poor IO performance
2016-02-05 19:55:47 <cek> well pending writes lru is 0, what the hell is going on i wonder
2016-02-05 19:58:34 <cek> https://gist.github.com/celesteking/a7f0ed0fd036fe80d874
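
To make the flushing litheum describes visible (dirty buffer pool pages being written out until the redo log has headroom again), something like the following can be left running. The status counters are standard InnoDB ones; PyMySQL and the credentials are assumptions.

```python
import time
import pymysql

conn = pymysql.connect(host="localhost", user="root", password="secret")
with conn.cursor() as cur:
    for _ in range(12):  # watch for about a minute
        cur.execute(
            "SHOW GLOBAL STATUS WHERE Variable_name IN "
            "('Innodb_buffer_pool_pages_dirty', 'Innodb_buffer_pool_pages_flushed')"
        )
        print(dict(cur.fetchall()))
        time.sleep(5)
conn.close()
```
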
2016-02-05 22:43:29 <cdcasey> Is there anything specific that would cause an "install timer expired" error in a galera cluster?
2016-02-05 22:44:44 <mgriffin> i would guess network connectivity issues (latency, packet loss or firewall)
2016-02-05 22:45:01 <mgriffin> or that node is blocking/stalled
2016-02-05 22:46:11 <cdcasey> ok, thanks
2016-02-05 22:46:54 <cdcasey> for confirming :)
2016-02-05 22:46:57 <mgriffin> still learning, so.. again a first guess
2016-02-05 22:47:07 <cdcasey> first guess here, too
2016-02-05 22:47:30 <mgriffin> cdcasey: read https://groups.google.com/forum/#!topic/codership-team/KFc6nmmrLr0
2016-02-05 22:47:48 <mgriffin> one of my 100+ galera bookmarks i need to read another 100+ times
2016-02-05 22:48:40 <cdcasey> thank you
2016-02-05 22:55:13 <arthurl> test
2016-02-05 22:55:53 <mgriffin> !me pushes the_wench down the stairs
2016-02-05 22:56:03 <mgriffin> mgriffin pushes the_wench down the stairs
2016-02-05 22:56:07 <mgriffin> oh well.