
Feed aggregator

Libdrizzle 5.1.2 released

Libdrizzle News - Fri, 01/18/2013 - 08:37

The latest version in the Libdrizzle Redux series of Libdrizzle has been released!

This is mostly a bug-fix release with several important changes:
* Non-blocking Windows connections are now more stable
* Improvements to Windows building
* Unix Socket connections are now more stable
* Memory allocation/freeing has been greatly improved
* Network packet buffer now much more flexible
* Many performance improvements (bundled drizzle_binlogs tool is now around 10x faster on my i7 laptop)

API changes:
* drizzle_query_str() has been removed; drizzle_query() with a 0-byte length parameter now does the same thing (see the sketch below).
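
For illustration, here is a minimal sketch of the replacement call. It assumes the Libdrizzle Redux 5.1 header location and the drizzle_query(connection, query, length, &ret) signature, plus an already-connected drizzle_st handle; check the bundled documentation for the exact prototypes.

  #include <libdrizzle-5.1/libdrizzle.h>   // assumed header location

  // Minimal sketch: a length of 0 tells drizzle_query() to use strlen(query).
  drizzle_return_t run_select_one(drizzle_st *con)
  {
    drizzle_return_t ret;
    drizzle_result_st *result = drizzle_query(con, "SELECT 1", 0, &ret);
    if (ret == DRIZZLE_RETURN_OK)
      drizzle_result_free(result);
    return ret;
  }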

I will be at FOSDEM 2013 speaking about Libdrizzle in the MySQL dev room. I look forward to seeing you all there!

Categories: Libdrizzle News

First Beta of libdrizzle 5.x

Libdrizzle News - Mon, 12/24/2012 - 10:40

5.1.0 has been released today

This release has massive improvements on the API and build system, including:
* An example binlog client
* An improved binlog API
* A new prepared statement API
* CMake replaced with DDM4 (a makefile based build system)
* Many bug fixes

Categories: Libdrizzle News

Libdrizzle Redux 5.0-alpha1 released

Libdrizzle News - Sun, 12/09/2012 - 14:16

The first alpha of our new connector for MySQL servers, called Libdrizzle Redux, has been released.

The differences between this and the older Libdrizzle are:

* The server-side functionality has been removed; it no longer acts as both a client and a server API.
* The Drizzle protocol library functions have been removed. It now only talks to MySQL-compatible servers.
* API functions have been simplified. In the old Libdrizzle it was often unclear whether the application or the library should allocate and free objects; this is now less ambiguous.
* New binlog API added. The library can now connect as a slave or mysqlbinlog client and retrieve binlog events.

With many more cool features to come in future versions.

Categories: Libdrizzle News

DrizzleDB: @koolhead17 Hi! I guess we did. Docs are all in the source at http://t.co/vy1kWD6r. Submit a fix?

@DrizzleDB on Twitter - Sun, 08/19/2012 - 22:55

DrizzleDB: RT @stewartsmith: and full CATALOG support in @DrizzleDB gets one step closer: large bulk of my work merged.

@DrizzleDB on Twitter - Mon, 07/30/2012 - 02:13

DrizzleDB: RT @schwern: Projects should accept "drive by" patches (found bug, fixed bug, submit patch, done) w/o needing to hang around and learn s ...

@DrizzleDB on Twitter - Wed, 07/25/2012 - 18:59

DrizzleDB: @dshrews Good question, why are you not here? :-)

@DrizzleDB on Twitter - Wed, 07/18/2012 - 16:41

MySQL Performance Blog: Percona Playback 0.3 development release

Libdrizzle News - Wed, 07/11/2012 - 04:47

I’m glad to announce the third Percona Playback release – another alpha release of a new software package designed to replay database server load. The first two versions were released in April, just in time for my talk at the Percona Live MySQL Conference and Expo: Replaying Database Load with Percona Playback.

This is still very much under development, so there’s likely going to be bugs. Please feel free to report bugs here: https://bugs.launchpad.net/percona-playback

Percona Playback is designed to replay database load captured either in a MySQL slow query log or a tcpdump capture of the MySQL protocol exchange between client and server.

It can replay the load either as fast as possible or in accurate mode, where it tries to replay load over the same wall time as the capture.

Current Notable Limitations:

  • tcpdump replay: IPv4 only
  • tcpdump replay: no support for server side prepared statements

Build requirements:

  • libtbb-dev (Intel Threading Building Blocks)
  • boost (including boost program_options)
  • intltool
  • gettext
  • libpcap-dev
  • libcloog-ppl (if using gcc 4.6)
  • libmysqlclient-dev
  • libdrizzle-dev
  • pkg-config

Source Tarball: percona-playback-0.3.tar.gz (md5sig)

We’ve tested building on CentOS/RHEL 6, Ubuntu 10.04LTS (lucid), Ubuntu 12.04 (precise) and Debian 6.

You can build using the standard: ./configure && make && make install

Run the test suite: make check

We are planning binary packages for the next development milestone.

Categories: Libdrizzle News

Eric Day: Whole-food Pizza

Libdrizzle News - Thu, 06/28/2012 - 07:00
Sometimes we have a strong pizza craving, so we decided to create a nutritious, guilt-free version that delights our taste buds. This is a very basic version, but feel free to add your favorite veggie toppings. We often add chopped broccoli, tomatoes, and sometimes zucchini - whatever we have on hand! We usually use the nut-cheese topping below to keep it 100% whole-foods, but once in a while we'll replace it with a store-bought non-dairy cheese (good for first-time vegan pizza eaters).


Crust ingredients:
  • 1 cup brown rice flour
  • 1 cup whole wheat pastry flour (or oat flour for a gluten-free option)
  • 2 teaspoons baking powder
  • pinch of salt, pepper, garlic powder, and onion powder
  • 3/4 cup water

  1. Preheat the oven to 400 degrees F.
  2. Mix all dry ingredients in a bowl.
  3. Add water and combine thoroughly.
  4. Use hands to form a dough ball.
  5. Sprinkle flour on a greased cookie sheet or pie plate and roll out dough to 1/4" thick.
  6. Bake for 15 minutes.

Topping ingredients:
  • 1 batch nut-cheese sauce (below)*
  • 1 cup mushrooms (sliced)
  • 1/2 medium onion (chopped)
  • 10-12 oz extra firm tofu (grated)
  • 4 oz tomato paste

  1. Spread tomato paste evenly on pre-baked crust.
  2. Top with onions and mushrooms, then tofu.
  3. Drizzle on nut-cheese sauce and spread evenly.
  4. Lower oven to 350 degrees F.
  5. Bake for 25-35 minutes until onions are done.

* You can replace the nut-cheese sauce with 1-2 cups of shredded non-dairy cheese (e.g. Daiya, Follow Your Heart, etc.)

Nut-cheese Sauce ingredients:
  • 1 cup cashews (or other nuts)
  • 3/4 cup water
  • 1/4 cup nutritional yeast
  • 1/2 teaspoon salt

  1. Place all ingredients in blender and blend on high until sauce is smooth. Add more water if needed.

Yields: About 6 servings
Categories: Libdrizzle News

Stewart Smith: Hacking the Jenkins BZR plugin

Libdrizzle News - Thu, 06/28/2012 - 05:10

For Drizzle and for all of the projects we work on at Percona we use the Bazaar revision control system (largely because it’s what we were using at MySQL and it’s what MySQL still uses). We also use Jenkins.

We have a lot of jobs in our Jenkins. A lot. We build upstream MySQL 5.1, 5.5 and 5.6, Percona Server 5.1, Percona Server 5.5, XtraBackup 1.6, 2.0 and 2.1. For each of these we also have the normal trunk builds as well as parameterised ones that allow a developer to test out a tree before they ask for it to be merged. We also have each of these products across seven operating systems and for each of those both x86 32bit and 64bit. If we weren’t already in the hundreds of jobs, we certainly are once you multiply out between release and debug and XtraBackup being across so many MySQL and Percona Server versions.

I honestly would not be surprised if we had the most jobs of any user of the Bazaar plugin to Jenkins, and we’re probably amongst the top few of all Jenkins installations.

So, in August last year we discovered a file descriptor leak in the Bazaar plugin. Basically, garbage collection doesn’t get kicked off when you run out of file descriptors. This prevented us from even starting back up Jenkins until I found and fixed the bug. Good times.

We later hit a bug that was triggered in the parallel loading of jobs during startup. We could get stuck in an infinite loop during Jenkins starting that would just eat CPU and get nowhere. Luckily Jenkins provides a workaround: specify “-Djenkins.model.Jenkins.parallelLoad=false” as an argument and it just does it single threaded. For us, this solves that problem.

We were also hitting another problem. If you kill bzr at just the wrong time, you can leave the repository in not an entirely happy state. An initial branch can be killed at a time where it’ll think it’s a repository rather than a checkout and there’s a bunch of other weirdness (including file system corruption if you happen to use bad VM software).

The way we were solving this was to sometimes go and “clean workspace” on the jobs that needed it (annoying with matrix builds). We’d switched to just doing “clean tree” for a bunch of builds. The problem with doing a clean tree was that “bzr branch” to check out the source code could take a very long time – especially for Percona Server which is a branch of MySQL and hence has hundreds of megabytes of history.

We couldn’t use bzr shared repositories as we kept hitting concurrency bugs when more than one jenkins job was trying to do a bzr operation at the same time (common when matrix builds kick off builds for release and debug for example).

So.. I fixed that in the Jenkins bazaar plugin too (which should be in an upcoming release) and we’ve been running it on our Jenkins instance for the past ~2 months.

Basically, if we fail to check out the Bazaar tree, we wipe it clean and try again (Jenkins has a “retry count” for source checkouts). This is a really awesome form of self healing. Even if the bazaar team fixed all the bugs, we’d still have to go and get that new version of bzr on all our build machines – including ancient systems such as CentOS 5. Not as much fun as bashing your head into a vice.

After all of that, I seem to now be the maintainer of the Bazaar plugin for Jenkins as Monty pointed out I was using it a lot more than him and kept finding and fixing bugs.

Soooo… say hello to the new Jenkins Bazaar plugin maintainer, me.

Yes, I maintain Java code now. Be afraid. Be very afraid.

Categories: Libdrizzle News

Eric Day: Mac & Bean "Cheese"

Libdrizzle News - Tue, 06/26/2012 - 07:00
This is by no means your typical "mac & cheese" that is high in fat, calories and animal protein. Instead, we used navy beans as the base for the "cheese sauce." By doing this you get all the goodness of beans including protein, fiber, vitamins and minerals. We figured out each serving has more than 10 grams of fiber (if whole grain noodles are used too). Enjoy!


Ingredients:
  • 2 cups cooked navy beans
  • 1 1/4 cups water
  • 1/2 cup nutritional yeast
  • 1/4 cup tahini or 1/2 cup raw cashews
  • 1/2 t salt (or more to taste)
  • 1/2 t garlic & onion powder
  • 1/2 t mustard
  • pinch turmeric and cayenne (optional)

  • 2 1/2 cups dry pasta (we like brown rice noodles)

  1. Combine all ingredients in a blender (except for pasta of course) and blend until smooth. You can add a little more liquid if needed.
  2. Cook the pasta as usual and drain. Mix the "cheese sauce" with the noodles and heat on low for a few minutes.
  3. The sauce will thicken as the noodles absorb some of the liquid.

Yield: 4-6 servings
If you'd like a little added richness, feel free to drizzle a little flax, hemp or olive oil on top.
Categories: Libdrizzle News

Anshu Kumar: ansharyan015

Libdrizzle News - Sat, 06/16/2012 - 13:41

Last week passed faster than I expected it to. During the week I worked on two separate plugins, namely auth_http and logging_gearman. According to the order decided earlier, it was the turn of the filtered_replicator plugin, but that plugin is temporarily disabled because it depended on the transaction_log plugin, which has been removed from the system. Getting filtered_replicator working will require solving some dependencies and making all the test cases pass. The next in the list was memcached_stats, which has also been disabled due to some unknown issues (I will update on this soon). Setting both of these plugins aside for the future, I jumped onto auth_http and then logging_gearman.

Auth_http

This is an authentication plugin which uses the Apache mod_auth module for authentication, so a web server is required. The plugin just registers a single system variable, 'URL', with the drizzle server. Making it dynamic only required making this variable dynamic, so that the user can change it at runtime. The work was very simple, as there were no cached values or other issues to deal with as in the other plugins. The update code just checks whether the new URL value is valid; if it is, the value of the variable is replaced with the new one (a small sketch of this check-and-set idea follows).
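
For illustration, here is a minimal C++ sketch of that check-and-set idea. The names auth_http_url and update_url and the validation rule are illustrative assumptions, not the plugin's actual code.

  #include <string>

  // Illustrative stand-in for the registered 'URL' system variable.
  static std::string auth_http_url = "http://localhost/auth";

  // Conceptually called when the variable is changed at runtime:
  // accept the new value only if it looks like an HTTP(S) URL.
  static bool update_url(const std::string &new_url)
  {
    if (new_url.compare(0, 7, "http://") != 0 &&
        new_url.compare(0, 8, "https://") != 0)
      return false;            // reject: the variable keeps its old value

    auth_http_url = new_url;   // accept: replace the value
    return true;
  }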

Logging_gearman

This is a logging plugin which uses a Gearman server to log all the queries submitted to the drizzle server. Refer to the previous blog entry for a detailed description.

Code References
http://bazaar.launchpad.net/~ansharyan015/drizzle/auth_http_dynamic/revision/2565
http://bazaar.launchpad.net/~ansharyan015/drizzle/logging_gearman_dynamic/revision/2567

The dynamic code for the regex_policy and auth_file plugins was also merged to trunk during the last week.
Cheers.


Categories: Libdrizzle News

Anshu Kumar: ansharyan015

Libdrizzle News - Sat, 06/16/2012 - 08:43

logging_gearman is one of the plugins I have to make dynamic as part of my GSoC ’12 work. This plugin is used to log all the queries to a Gearman server. Now that it is dynamic, here is a small write-up describing how it works.

This plugin registers two variables with the server, namely logging_gearman_host and logging_gearman_function. logging_gearman_host specifies the hostname on which the Gearman server is running, and logging_gearman_function is the Gearman function to which the logging information is sent. Making this plugin dynamic required making both of these variables dynamic, so that their values can be changed at run time.

Demo of dynamic logging_gearman in action
Make sure Gearman is installed and running on your system.

sudo apt-get install gearman-server
sudo apt-get install gearman-job-server

Start drizzle server with logging_gearman plugin loaded.

ansh@ubuntu:~/repos/drizzle/logging_gearman_dynamic$ drizzled/drizzled --plugin-add=logging_gearman

As we have not passed any arguments for logging_gearman_host and logging_gearman_function, the default values will be used for both of them: 'localhost' and 'drizzlelog' respectively.

Now add a background job to the job server for function drizzlelog

ansh@ubuntu:~$ gearman -f drizzlelog -d

-d runs this job in the background

Now go to your drizzle client and run any query against the server.

drizzle> SELECT 1;
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (0.000581 sec)

This should now have been logged to the Gearman job running with the specified function. Let's check:

ansh@ubuntu:~$ gearman -f drizzlelog -w
1339833870643631,1,9,"","SELECT 1","Query",237936150,88,88,1,0,0,0,1,"ubuntu"

As we can see, the query we ran against the server was logged by the Gearman job.

Now let's see what can be changed dynamically in this plugin. Say, for example, we want to change the function to which the logging information is sent; let it be 'log' instead of 'drizzlelog'. For this we will start another Gearman job with the function 'log'.

ansh@ubuntu:~$ gearman -f log -d

Now we will change the global variable that corresponds to the Gearman function in our drizzle server.

drizzle> SHOW VARIABLES LIKE "%gearman%";
+--------------------------+------------+
| Variable_name            | Value      |
+--------------------------+------------+
| logging_gearman_function | drizzlelog |
| logging_gearman_host     | localhost  |
+--------------------------+------------+
2 rows in set (0.000725 sec)

drizzle> SET GLOBAL logging_gearman_function="log";
Query OK, 0 rows affected (0.000401 sec)

drizzle> SHOW VARIABLES LIKE "%gearman%";
+--------------------------+-----------+
| Variable_name            | Value     |
+--------------------------+-----------+
| logging_gearman_function | log       |
| logging_gearman_host     | localhost |
+--------------------------+-----------+
2 rows in set (0.000664 sec)

Now we will check whether these queries were logged by the Gearman job running with the function 'log'.

ansh@ubuntu:~$ gearman -f log -w
1339834035673149,1,6,"","SET GLOBAL logging_gearman_function=\"log\"","Query",40936194,103,103,0,0,0,0,1,"ubuntu"
1339834037806111,1,7,"","SHOW VARIABLES LIKE \"%gearman%\"","Query",43069156,347,205,2,2,0,0,1,"ubuntu"

As we can see, these queries were logged by the new Gearman job.

The same applies to logging_gearman_host: we can change the host to some other hostname on which the required Gearman job is running. Demonstrating that here is not practical, though.

To check the code making this plugin dynamic, please refer to http://bazaar.launchpad.net/~ansharyan015/drizzle/logging_gearman_dynamic/revision/2567


Categories: Libdrizzle News

Mohit Srivastava: Design of AlsoSQL: Drizzle JSON HTTP Server

Libdrizzle News - Thu, 06/14/2012 - 12:38
This particular project was proposed by Henrik at Drizzle Day 2011. A week later, Stewart published version 0.1 of AlsoSQL. Later Henrik and I worked on version 0.2, and he talked about it at Drizzle Day 2012.
AlsoSQL, the Drizzle JSON HTTP Server, is capable of SQL-over-HTTP with key-value operations in pure JSON. Henrik explained its workings and functionality in this particular post. Here, I am going to talk about the design of AlsoSQL and the various problems I faced.
Up to version 0.2 the API wasn't properly object-oriented, so we planned to do refactoring first. After going through the code-base of AlsoSQL, I realized the need for a design pattern. So we looked through various patterns and chose the Facade design pattern for future development.
Here is the rough design of AlsoSQL (I call it rough because it might change in the future).

Description:
HttpHandler handles the HTTP request, parsing and validating the JSON from that request and sending back the response. SQLGenerator generates the SQL string corresponding to the request type using the input JSON. SQLExecutor executes the SQL and returns a result set. SQLToJsonGenerator generates the output JSON corresponding to the request type. DBAccess is an interface to access the generators and the executor.
The main reason to design it this way is that in the future it can be used directly on top of a storage engine. A minimal sketch of the facade wiring is shown below.
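
For illustration, here is a minimal C++ sketch of that facade wiring. The class names follow the description above; the member functions, types, and bodies are illustrative assumptions rather than the actual AlsoSQL code, and HttpHandler is omitted for brevity.

  #include <string>

  struct ResultSet { std::string rows; };   // stand-in for a real result set

  class SQLGenerator {
  public:
    // Build an SQL string for the request type from the input JSON.
    std::string generate(const std::string &json_request) const
    { return "SELECT /* built from " + json_request + " */ 1"; }
  };

  class SQLExecutor {
  public:
    // Execute the SQL and return a result set.
    ResultSet execute(const std::string &sql) const { return ResultSet{sql}; }
  };

  class SQLToJsonGenerator {
  public:
    // Turn the result set back into output JSON.
    std::string toJson(const ResultSet &rs) const
    { return "{\"result\": \"" + rs.rows + "\"}"; }
  };

  // DBAccess is the facade: HttpHandler talks only to this one interface.
  class DBAccess {
  public:
    std::string handle(const std::string &json_request) const
    {
      std::string sql = generator.generate(json_request);
      ResultSet rs = executor.execute(sql);
      return jsonizer.toJson(rs);
    }
  private:
    SQLGenerator generator;
    SQLExecutor executor;
    SQLToJsonGenerator jsonizer;
  };
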
Problems Faced: <config.h>: I always forget to include this header file, and it floods my terminal with errors. Another one: I need to get LAST_INSERT_ID(), and I want to get it in a single query together with the REPLACE query. Still working on it.
Categories: Libdrizzle News

Anshu Kumar: ansharyan015

Libdrizzle News - Fri, 06/08/2012 - 08:33

Last week was mostly spent fixing the issues that came up while saving the file path as a system variable. As explained in my last post, this problem was due to boost::filesystem::path and std::string being unrelated types, so an explicit cast can be dangerous. After discussing some approaches, Daniel and I sorted out a solution.

A Small Hack

What we actually need is to store a boost::filesystem::path as a system variable, and there is no such datatype supported by drizzle currently. The optimal solution would have been constructing something like a sys_var_fs_path variable, but that was out of scope for the current project. The problem can be bypassed in a simpler way by using a separate std::string variable which is always kept the same as the fs::path variable. The system variable is registered using this std::string instance, and whenever the file path is changed from the drizzle server, both variables are updated with the new value. Any internal fs::path methods continue to work normally using the fs::path instance. A minimal sketch of the idea is shown below.
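
For illustration, here is a minimal C++ sketch of the shadow-string hack, assuming a system-variable facility that can only register std::string-backed variables. The names (policy_file, sys_policy_file, set_policy_file) are illustrative, not the plugin's actual identifiers.

  #include <boost/filesystem.hpp>
  #include <string>

  namespace fs = boost::filesystem;

  static fs::path policy_file;        // used internally via fs::path methods
  static std::string sys_policy_file; // the instance registered as the system variable

  // Whenever the file path is changed from the server, update both in lock step.
  static void set_policy_file(const std::string &new_value)
  {
    policy_file = fs::path(new_value);  // fs::path methods keep working on this
    sys_policy_file = new_value;        // what the server reports and stores
  }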

Auth_file plugin

Apart from doing this work and getting it reviewed, I also worked on making the auth_file plugin dynamic. In this plugin, we provide a separate passwords file containing the username:password pairs for all the users who can access the server. Making this dynamic just involved clearing the old cache (which stored the username:password pairs from the previous file) and filling it with the new username:password pairs. Most of the work was the same as for the regex_policy plugin.

I am currently working on the auth_http plugin, which uses the Apache mod_auth module for authentication. I will keep posting progress.


Categories: Libdrizzle News

Mohit Srivastava: Debug Drizzle Code with GDB

Libdrizzle News - Wed, 05/30/2012 - 04:09
For the last few days I have been working on a bug in Drizzle's JSON Server. The bug is a segmentation fault, which is not easy to track down just by reading the code-base. So the best way to get rid of this problem is to debug the code. But how?
I used Visual Studio once during my internships to debug a project, so I tried to find some IDEs that could be used with drizzle. But I figured out that we can debug a C++ code-base easily and efficiently with GDB.

Here are few steps for debugging:

First of all build your server and enable debugging:
mohit@mohit-PC:~$ cd repos/drizzle/drizzle-json_server/

mohit@mohit-PC:~/repos/drizzle/drizzle-json_server$ ./config/autorun.sh && ./configure --with-debug && make install -j2
The default installation path is /usr/local/, but you can change it with --prefix=/installation_path/
Start the server and debug the code:
mohit@mohit-PC:~/repos/drizzle/drizzle-json_server$ gdb /usr/local/sbin/drizzled
Now set the arguments needed to start the server with a specific plugin (in my case, the plugin is json_server):
(gdb) set args --plugin-add=json_server
Set breakpoints; you can do that with this command:
 b <filename>:<line-number> or break <filename>:<line-number>
(gdb) break json_server.cc:1093
Since the json_server plugin is not loaded yet, it prompts with:
No source file named json_server.cc.
Make breakpoint pending on future shared library load? (y or [n])
Go with "y"
Run your server now:
(gdb) r
You will get something like this:
Starting program: /usr/local/sbin/drizzled --plugin-add=json_server
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/i386-linux-gnu/libthread_db.so.1".
Now you can use various commands of GDB to debug your code.
A list of these commands is mentioned here.

Problem Faced:

Once I started the drizzle server with the plugin, I got a message:
Using host libthread_db library "/lib/i386-linux-gnu/libthread_db.so.1".
/usr/local/sbin/drizzled: relocation error: /usr/local/sbin/drizzled: symbol _ZN8drizzled7message6AccessC1Ev, version DRIZZLE7 not defined in file libdrizzledmessage.so.0 with link time reference
[Inferior 1 (process 23532) exited with code 0177]

I was unable to understand a single line of this problem; thanks to Henrik for the solution.

Solution:
mohit@mohit-PC:~$ sudo rm /etc/ld.so.cache
mohit@mohit-PC:~$ ldconfig

Toru and Padraig previously posted on this topic, which may also be helpful.
You can also find documentation on this on the Drizzle wiki.
Categories: Libdrizzle News

Anshu Kumar: ansharyan015

Libdrizzle News - Tue, 05/29/2012 - 19:46

I officially started the work on 21st May. This week was mostly spent making the regex_policy plugin dynamic. Although the plan in the proposal was a bit different and the task allotted for the first week was the filtered_replicator plugin, Daniel and I sorted the plugins in increasing order of difficulty, and hence started with regex_policy.

Here is my first weekly report for GSoC ’12

Regex_policy

I started off with the regex_policy plugin as it seemed to be the most trivial one. This plugin uses regex-based authorization to match policies given in the policy file. As Daniel pointed out, the plugin was fundamentally designed to be static, and the change to make it dynamic was supposed to stay simple in order to avoid future confusion; this required writing new code as well as redesigning the existing code.

Work Done: The regex_policy plugin specifies a policy file, which is either given by the user at server startup or, if not specified, defaults to a built-in file. The work was to give the user the ability to change the policy file at runtime, so that all the policies in the system are reloaded from the new file. A few approaches were tried to accomplish this:

  • A lock variable (say regex_policy_autoreload) was used. Whenever the user needed to change the policy file, they had to explicitly set this lock to true, and the policy file would then be refreshed every minute. This lacked the ability to actually change which policy file is used, and it raised other critical issues such as "what if the user is still editing the file after a minute?".
  • After the failure of the previous idea, the autoreload variable was replaced with a reload variable (i.e. regex_policy_reload). The flow was to first set regex_policy_reload to true and then change the policy file at runtime (i.e. SET GLOBAL regex_policy_policy = "path/to/new/policy/file"). After reloading the policies, the reload variable was automatically set back to false to show that the policies had been refreshed. This second variable was redundant, as anyone who wants to change the policy file can change the reload variable as well (so it is not a real lock), and it was removed in the next version.
  • After both earlier approaches proved non-optimal, the refresh functionality was finally put directly into the regex_policy_policy variable. The policy file can now be changed at runtime using SET GLOBAL regex_policy_policy = "path/to/new/policy/file", or reloaded using SET GLOBAL regex_policy_policy = @@regex_policy_policy (a rough sketch of this approach follows this list).
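
For illustration, here is a rough C++ sketch of that final approach: the update handler for regex_policy_policy validates and reloads the policies from the new file, and only then commits the new value. The names update_regex_policy_policy and load_policies_from are illustrative assumptions, not the plugin's actual code.

  #include <boost/filesystem.hpp>
  #include <string>

  namespace fs = boost::filesystem;

  static std::string regex_policy_policy;   // the registered system variable

  // Illustrative stand-in: real code would clear the old policy cache and
  // parse the policies from the new file here.
  static bool load_policies_from(const fs::path &file)
  {
    return fs::exists(file);
  }

  // Conceptually invoked by:
  //   SET GLOBAL regex_policy_policy = "path/to/new/policy/file";
  static bool update_regex_policy_policy(const std::string &new_value)
  {
    fs::path candidate(new_value);
    if (!load_policies_from(candidate))
      return false;                    // reject: keep the old file and policies

    regex_policy_policy = new_value;   // commit the new value
    return true;
  }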

Difficulties faced: The main problem I am facing is that the type of the policy file in the Policy class is a boost filesystem path, and there isn't any such data type for database variables. Although it can be explicitly cast to std::string (since fs::path::string() doesn't seem to work), these are completely unrelated types, and this may cause problems because some internal fs::path methods that are being called cannot be used with std::string. Daniel and I are still working on this to make sure it doesn't come up in the future.


Categories: Libdrizzle News

Daniel Nichter: Making Drizzle plugins dynamic

Libdrizzle News - Sat, 05/26/2012 - 15:37
As part of Google Summer of Code 2012, Anshu Kumar and I are working to make Drizzle plugins dynamic, i.e. making plugins’ variables configurable at runtime.  We have started with regex_policy, and progress is good.  As I noted to Anshu, this project is part new design and part redesign, which presents many challenges.  As for new [...]
Categories: Libdrizzle News

Stewart Smith: There is a story….

Libdrizzle News - Fri, 05/25/2012 - 06:11

I have a friend who is fond of telling a story from way back in November 2008 at the OpenSQL camp in Charlottesville, Virginia. This was relatively shortly after we had announced to the public that we’d started something called Drizzle (we did that at OSCON) and was even closer to the date I started working on Drizzle full time (which was November 1st). Compared to what it is now, the Drizzle code base was in its infancy. One of the things we hadn’t yet sorted out was the rewrite of the replication code.

So, I had my laptop plugged into a projector, and somebody suggested opening up some random source file… so I did. It was a bit of the replication code that we’d inherited from MySQL. Immediately we spotted a bug. In fact, between myself and Brian I think we worked out that none of the error handling in this code path ever even remotely worked.

Fast forward a bunch of years, and recently I had opened part of the replication code in MySQL 5.5 and (again) instantly spotted a bug. Well.. the code is correct in 2 out of 3 situations…

It is rather impressive that the MySQL Replication team has managed to add the features they have in MySQL 5.6.

I’m also really happy with what we managed to do inside Drizzle for replication. Ripping out all the MySQL legacy code was a big step to take, and for a while it seemed like possibly the wrong one, but ultimately it was absolutely the right thing to do. I love going and looking at the Drizzle replication code. I simply love it.

Categories: Libdrizzle News

Daniel Nichter: Kick-start: Compiling Drizzle on bare-metal Ubuntu 11.10

Libdrizzle News - Sat, 05/19/2012 - 20:59
The following package dependencies had to be installed on my bare-metal Ubuntu 11.10 vbox to compile Drizzle:

sudo apt-get install g++ gperf intltool libprotobuf-dev protobuf-compiler uuid-dev libpcre3-dev bison python-sphinx libboost-all-dev libcurl4-gnutls-dev libpam0g flex libcloog-ppl0

Other packages may be required, like the actual Boost libs, but that was sufficient for me. Most of these dependencies are documented.
Categories: Libdrizzle News
