I'll have to re-read the OP with more attention, but you can, in fact, use MySQL (or another RDBMS) to do inserts/updates from several sources onto the central server. There are configurable parameters (auto_increment_increment and auto_increment_offset) that let you stagger new auto-generated IDs so each server gets its own sequence, e.g. master = N, slave_one = N+1, slave_two = N+2, and so on.
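For reference, a sketch of what that staggering looks like in my.cnf, assuming three writing servers (the server-id values and comments are illustrative, not taken from any actual setup):

```ini
# Server 1 (the central/master) -- my.cnf
[mysqld]
server-id                = 1
auto_increment_increment = 3   # total number of servers generating IDs
auto_increment_offset    = 1   # this server's slot: IDs 1, 4, 7, ...

# Server 2 would use auto_increment_offset = 2  (IDs 2, 5, 8, ...)
# Server 3 would use auto_increment_offset = 3  (IDs 3, 6, 9, ...)
```

Since every server draws from a disjoint sequence, inserts made at different sites can never collide on an auto-increment primary key when they're merged.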
Just so I'm clear: is the scenario in question mostly concerned with pushing data onto the central server, or is there a specific requirement to also pull down the latest data and have it locally as well?
I need to have access to current data locally as well.
How wedded are you to SQL? NoSQL databases tend to handle this sort of thing more easily. I'm guessing you're already too far down the SQL path to start again almost from scratch, but I thought I'd mention them anyway. It's also entirely possible the NoSQL DBs wouldn't be suitable for other reasons.
Personally I've been using CouchDB for a while so I'll give you a quick summary of how it works if you're interested...
Couch doesn't use auto-increment fields for primary keys so when two DBs synchronize you don't have to worry about key conflicts or anything like that.
All your data is stored in "documents", each of which has a UUID and a revision number. When you update a document its revision number changes, so when two DBs sync, the revision number is used to work out which version of a document is the newest. Obviously there's a problem if the same document is updated on both DBs at the same time; that conflict can only be resolved by a human. However, if you design your documents carefully you should be able to avoid conflicts like that ever happening.
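To make the idea concrete, here's a toy model of revision-based syncing in Python. This is not CouchDB code (real CouchDB revisions are strings like "2-abc123", and its replicator is far more sophisticated); the store layout and the `sync` function are purely illustrative:

```python
# Toy model of revision-based sync, loosely inspired by CouchDB's
# _id/_rev scheme. Revisions are simplified to plain integers.

def sync(local: dict, remote: dict) -> tuple[dict, list]:
    """Merge two {uuid: (rev, data)} stores. Returns the merged store
    and a list of UUIDs that conflict (same rev, different data)."""
    merged, conflicts = dict(local), []
    for uuid, (rev, data) in remote.items():
        if uuid not in merged:
            merged[uuid] = (rev, data)          # new document, just copy it
        else:
            my_rev, my_data = merged[uuid]
            if rev > my_rev:
                merged[uuid] = (rev, data)      # remote copy is newer, take it
            elif rev == my_rev and data != my_data:
                conflicts.append(uuid)          # concurrent edits: needs a human

    return merged, conflicts

# Site A updated card-42's config; site B still has the old revision
# plus a card that site A has never seen.
site_a = {"card-42": (2, {"config": "v2"})}
site_b = {"card-42": (1, {"config": "v1"}),
          "card-43": (1, {"config": "v1"})}

merged, conflicts = sync(site_b, site_a)
```

After the sync, site B holds the newer revision of card-42 and keeps card-43, with no conflicts, because the two sites never edited the same revision independently.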
By the sound of it information about any particular card is only being updated from one location at a time anyway so it should be pretty simple to set up.
I'm not wedded to any particular tool (MySQL, or any other RDBMS), though I would pretty much have to start from scratch. That's an OK answer if it really solves the problem.
This database consists of a number of tables. Some hold fairly static data and have no foreign keys, like part number definitions, site locations, and test definitions. Others really only contain data relationships, like the "configuration" table, for example -- I'll explain more in a minute. And of course some are somewhere in between.
When a card comes off the manufacturing floor and into the test harness, it gets a serial number programmed. During this step, an entry is made in the card table that will uniquely identify this card for life. The card also gets an initial configuration entry that goes in the configuration table. This configuration entry defines the part number that is associated with the card configuration. This step can happen at one of two sites.
It then starts through testing. Each step in testing generates an entry in the test results table that links to a specific card AND a specific configuration of that card, links to a test definition, and captures the pass/fail status of the test plus a bit of other information. These test updates can happen at any of the three sites, and since the test result table has foreign key dependencies on several other tables, including the card and card config tables, it requires up-to-date information. The state of the card config table is especially important, because the most recent entry in that table for a given card is what is used when logging a test result. An out-of-date config table would result in a test entry that referenced the wrong configuration.
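The relationships described above might look something like the following sketch (using SQLite in Python for portability; all table and column names are my guesses, not the actual schema). The final query is the "most recent config wins" lookup that goes wrong when a site's config table is stale:

```python
# Sketch of the card / config / test-result schema described above.
# Table and column names are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    PRAGMA foreign_keys = ON;
    CREATE TABLE card (
        card_id   INTEGER PRIMARY KEY,
        serial_no TEXT UNIQUE NOT NULL
    );
    CREATE TABLE card_config (
        config_id   INTEGER PRIMARY KEY,
        card_id     INTEGER NOT NULL REFERENCES card(card_id),
        part_number TEXT NOT NULL,
        created_at  TEXT NOT NULL            -- ISO timestamp
    );
    CREATE TABLE test_result (
        result_id INTEGER PRIMARY KEY,
        card_id   INTEGER NOT NULL REFERENCES card(card_id),
        config_id INTEGER NOT NULL REFERENCES card_config(config_id),
        test_name TEXT NOT NULL,
        passed    INTEGER NOT NULL
    );
""")

db.execute("INSERT INTO card (card_id, serial_no) VALUES (1, 'SN-0001')")
db.execute("INSERT INTO card_config VALUES (1, 1, 'PN-100', '2010-01-01')")
db.execute("INSERT INTO card_config VALUES (2, 1, 'PN-101', '2010-02-01')")

# When logging a result, the *latest* config for the card is used.
# If this site hasn't yet received config_id 2 from another site,
# this lookup silently returns the wrong (old) configuration.
(latest,) = db.execute("""
    SELECT config_id FROM card_config
    WHERE card_id = 1
    ORDER BY created_at DESC LIMIT 1
""").fetchone()

db.execute(
    "INSERT INTO test_result (card_id, config_id, test_name, passed) "
    "VALUES (1, ?, 'power_on', 1)", (latest,))
```

Here `latest` correctly resolves to the newer config entry, but only because both config rows are present locally at the time the test is logged.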
It's the foreign key relationships in these tables that make data sync a real PIA to handle. I'll have to give some thought to the CouchDB suggestion -- specifically, how it would handle the case where the configuration state of a card is updated at site A and then a test is run at site B before site B has received the updated configuration change.