CODE.OPENARK.ORG
Blog by Shlomi Noach
THE PROBLEM WITH MYSQL FOREIGN KEY CONSTRAINTS IN ONLINE SCHEMA CHANGES
MySQL, gh-ost, operations, Percona Toolkit, Schema, Vitess
March 17, 2021

This post explains the inherent problem of running online schema changes in MySQL on tables participating in a foreign key relationship. We’ll lay some ground rules and facts, sketch a simplified schema, and dive into an online schema change operation. Our discussion applies to pt-online-schema-change, gh-ost, and Vitess based migrations, or to any other online schema change tool that works with a shadow/ghost table, like the Facebook tools.

WHY ONLINE SCHEMA CHANGE?

Online schema change tools came about as workarounds to an old problem: schema migrations in MySQL were blocking, uninterruptible, aggressive on resources, and replication unfriendly. Running a straight ALTER TABLE in production means locking your table, generating high load on the primary, and causing massive replication lag on replicas once the migration moves down the replication stream.

ISN’T THERE SOME ONLINE DDL?

Yes. InnoDB supports Online DDL, where for many ALTER types your table remains unblocked throughout the migration. That’s an important improvement, but unfortunately not enough. Some migration types do not permit concurrent DML (notably changing a column’s data type, e.g. from INT to BIGINT). The migration is still aggressive and generates high load on your server. Replicas still run the migration sequentially: if your migration takes 5 hours to run concurrently on the primary, expect a 5 hour replication lag on your replicas, i.e. complete loss of your fresh read capacity.

ISN’T THERE SOME INSTANT DDL?

Yes, but it is unfortunately extremely limited: mostly just for adding a new column. Instant DDL showed great promise when introduced (contributed to MySQL by the Tencent Games DBA team) three years ago, and the hope was that MySQL would support many more types of ALTER TABLE as INSTANT DDL. At this time this has not happened yet, and we make do with what we have.
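As an illustration, here is a minimal sketch (the table and column names are hypothetical, and exact support depends on your MySQL version):

-- Adding a column can typically be done online, and on MySQL 8.0+ often instantly:
ALTER TABLE my_table ADD COLUMN notes VARCHAR(255), ALGORITHM=INSTANT;

-- Changing a column's data type cannot be done in-place; MySQL rejects the request
-- with an error along the lines of "Cannot change column type INPLACE. Try ALGORITHM=COPY",
-- meaning a full, blocking table copy:
ALTER TABLE my_table MODIFY COLUMN some_id BIGINT, ALGORITHM=INPLACE;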
NOT EVERYONE IS GOOGLE OR FACEBOOK SCALE, RIGHT?

True. But you don’t need to be Google, or Facebook, or GitHub scale to feel the pain of schema changes. Any non-trivially sized table takes time to ALTER, which results in lock/downtime. If your tables are limited to hundreds or mere thousands of small rows, you can get away with it. When your table grows, and mere dozens of MB of data are enough, ALTER becomes non-trivial at best, and in my experience an outright cause of outage in a _common_ scenario.

LET’S DISCUSS FOREIGN KEY CONSTRAINTS

In the relational model, tables have relationships. A column in one table references a column in another table, so that a row in one table has a relationship with one or more rows in another table. That’s the “foreign key”. A foreign key _constraint_ is the enforcement of that relationship. A foreign key constraint is a database construct which watches over rows in different tables and ensures the relationship does not break. For example, it may prevent me from deleting a row that is in a relationship, to prevent the related row(s) from becoming orphaned.
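A minimal sketch of such a relationship (the country/city tables and the constraint name are illustrative only):

CREATE TABLE country (
  country_id INT NOT NULL AUTO_INCREMENT,
  name VARCHAR(64) NOT NULL,
  PRIMARY KEY (country_id)
);

CREATE TABLE city (
  city_id INT NOT NULL AUTO_INCREMENT,
  country_id INT NOT NULL,
  name VARCHAR(64) NOT NULL,
  PRIMARY KEY (city_id),
  CONSTRAINT city_country_fk FOREIGN KEY (country_id) REFERENCES country (country_id)
);

-- With the constraint in place, deleting a parent row that still has child rows fails,
-- with an error along the lines of "Cannot delete or update a parent row: a foreign key constraint fails":
DELETE FROM country WHERE country_id = 1;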
Continue reading » “The problem with MySQL foreign key constraints in Online Schema Changes”

ORCHESTRATOR ON DB AMA: SHOW NOTES
MySQL, Open Source, orchestrator
May 26, 2020
Earlier today I presented orchestrator on DB AMA. Thank you to the organizers Morgan Tocker, Liz van Dijk and Frédéric Descamps for hosting me, and thank you to all who participated! This was a no-slides, all command-line walkthrough of some of orchestrator’s capabilities, highlighting refactoring, topology analysis, takeovers and failovers, and discussing a bit of scripting and HTTP API tips.

The recording is available on YouTube (also embedded on https://dbama.now.sh/#history). To present orchestrator, I used the new shiny docker CI environment; it’s a single docker image running orchestrator, a 4-node MySQL replication topology (courtesy of dbdeployer), heartbeat injection, Consul, consul-template and HAProxy. You can run it, too! Just clone the orchestrator repo, then run:

./script/dock system

From there, you may follow the same playbook I used in the presentation, available as orchestrator-demo-playbook.sh.

Hope you find the presentation and the playbook to be useful resources.
ORCHESTRATOR: WHAT’S NEW IN CI, TESTING & DEVELOPMENT
MySQL, dbdeployer, docker, GitHub, Open Source, orchestrator, testing
May 11, 2020

Recent focus on development & testing yielded new orchestrator environments and offerings for developers, along with increased reliability and trust. This post illustrates the new changes; see the Developers section of the official documentation for more details.

TESTING

In the past four years orchestrator was developed at GitHub, using GitHub’s environments for testing. This is very useful for testing orchestrator’s behavior within GitHub, interacting with its internal infrastructure, and validating failover behavior in a production environment. These tests and their results are not visible to the public, though. Now that orchestrator is developed outside GitHub (that is, outside GitHub the _company_, not GitHub the _platform_) I wanted to improve on the testing framework, making it visible, accessible and contribute-able to the community. Thankfully, the GitHub platform has much to offer on that front, and orchestrator now uses GitHub Actions more heavily for testing.

GitHub Actions provide a way to run code in a container in the context of the repository. The most common use case is to run CI tests on receiving a Pull Request. Indeed, when GitHub Actions became available, we switched out of Travis CI and into Actions for orchestrator’s CI. Today, orchestrator runs three different tests:

* Build, unit testing, integration testing, code & doc validation
* Upgrade testing
* System testing

To highlight what each does: Continue reading » “orchestrator: what’s new in CI, testing & development”
PULLING THIS BLOG OUT OF PLANET MYSQL AGGREGATOR, OVER COMMUNITY CONCERNS
MySQL, community, orchestrator, Planet
April 23, 2020

I’ve decided to pull this blog (http://code.openark.org/blog/) out of the PLANET.MYSQL.COM aggregator. PLANET.MYSQL.COM (formerly PLANETMYSQL.COM) serves as a blog aggregator, and collects news and blog posts on various MySQL and ecosystem topics. It collects some vendor and team blogs as well as “indie” blogs such as this one. It has traditionally been the go-to place to catch up on the latest developments, or to read insightful posts. This blog itself has been aggregated in Planet MySQL for some eleven years.

Planet MySQL used to be owned by the MySQL community team. This recently changed, with unwelcome implications for the community. I recently noticed how a blog post of mine, The state of Orchestrator, 2020 (spoiler: healthy), did not get aggregated in Planet MySQL. After a quick discussion and investigation, it was determined (and confirmed) it was filtered out because it contained the word “MariaDB”. It has later been confirmed that Planet MySQL now filters out posts mentioning its competitors, such as MariaDB, PostgreSQL, MongoDB.

Planet MySQL is owned by Oracle and it is their decision to make. Yes, logic implies they would not want to publish a promotional post for a competitor. However, I wish to explain how this blind filtering negatively affects the community. But, before that, I’d like to share that I first attempted to reach out to whoever is in charge of Planet MySQL at this time (my understanding is that this is a marketing team). Sadly, two attempts at reaching out to them individually, and another attempt at reaching out on behalf of a small group of individual contributors, yielded no response. The owners would not give me an audience, and would not hear me out. I find it disappointing and will let others draw morals.

WHY FILTERING IS HARMFUL FOR THE COMMUNITY

We recognize that PLANET.MYSQL.COM is an important information feed. It is responsible for a massive ratio of the traffic on my blog, and no doubt for many others. Indie blog posts, or small-team blog posts, practically depend on PLANET.MYSQL.COM to get visibility. And this is particularly important if you’re an open source developer who is trying to promote an open source project in the MySQL ecosystem. Without this aggregation, you will get significantly less visibility.

But open source projects in the MySQL ecosystem do not live in a MySQL vacuum, and typically target/support MySQL, Percona Server and MariaDB. As examples:

* DBDeployer should understand the MariaDB versioning scheme
* skeema needs to recognize MariaDB features not present in MySQL
* ProxySQL needs to support MariaDB Galera queries
* orchestrator needs to support MariaDB’s GTID flavor

Consider that a blog post of the form “Project version 1.2.3 now released!” is likely to mention things like “fixed MariaDB GTID setup” or “MariaDB 10.x now supported” etc. Consider just pointing out that “PROJECT X supports MySQL, MariaDB and Percona Server”. Consider that merely mentioning “MariaDB” gets your blog post filtered out on PLANET.MYSQL.COM. This has an actual impact on open source development in the MySQL ecosystem. We will lose audience and lose adoption.

I believe the MySQL ecosystem as a whole will be negatively affected as a result, and this will circle back to MySQL itself. I believe this goes against the very interests of Oracle/MySQL. I’ve been around the MySQL community for some 12 years now. From my observation, there is no doubt that MySQL would not thrive as it does today without the tooling, blogs, presentations and general advice by the community.

This is more than an estimation. I happen to know that, internally at MySQL, they have used or are using open source projects from the community, projects whose blog posts get filtered out today because they mention “MariaDB”. I find that disappointing. I have personally witnessed how open source developments broke existing barriers to enable companies to use MySQL at greater scale, with greater velocity, with greater stability. I was part of such companies and I’ve personally authored such tools. I’m disappointed that PLANET.MYSQL.COM filters out my blog posts for those tools without giving me an audience, and I extend my disappointment on behalf of all open source project maintainers. At this time I consider PLANET.MYSQL.COM to be a marketing blog, not a community feed, and do not want to participate in its biased aggregation.

THE STATE OF ORCHESTRATOR, 2020 (SPOILER: HEALTHY)
MySQL, GitHub, Open Source, orchestrator
February 18, 2020

This post serves as a pointer to my previous announcement about The state of Orchestrator, 2020. Thank you to Tom Krouper, who applied his operational engineering expertise to content publishing problems.

THE STATE OF ORCHESTRATOR, 2020 (SPOILER: HEALTHY)
MySQL, GitHub, Open Source, orchestrator
February 18, 2020
Yesterday was my last day at GitHub, and this post explains what this means for orchestrator. First, a quick historical review:

* 2014: I began work on orchestrator at Outbrain, as https://github.com/outbrain/orchestrator. I authored several open source projects while working for Outbrain, and created orchestrator to solve discovery, visualization and simple refactoring needs. Outbrain was happy to have the project developed as a public, open source repo from day 1, and it was released under the Apache 2 license. Interestingly, the idea to develop orchestrator came after I attended Percona Live Santa Clara 2014 and watched “ChatOps: How GitHub Manages MySQL” by one Sam Lambert.
* 2015: Joined Booking.com, where my main focus was to redesign and solve issues with the existing high availability setup. With Booking.com’s support, I continued work on orchestrator, pursuing better failure detection and recovery processes. Booking.com was an incredible playground and testbed for orchestrator, a massive deployment of multiple MySQL/MariaDB flavors and configurations.
* 2016 – 2020: Joined GitHub. GitHub adopted orchestrator and I developed it under GitHub’s own org, at https://github.com/github/orchestrator. It became a core component in github.com’s high availability design, running failure detection and recoveries across sites and geographical regions, with more to come.

These 4+ years have been critical to orchestrator’s development and saw its widespread use. At this time I’m aware of multiple large-scale organizations using orchestrator for high availability and failovers. Some of these are GitHub, Booking.com, Shopify, Slack, Wix, Outbrain, and more. orchestrator is the underlying failover mechanism for vitess, and is also included in Percona’s PMM. These years saw a significant increase in community adoption and contributions, in published content, such as Pythian and Percona technical blog posts, and, not surprisingly, an increase in issues and feature requests.

2020

GitHub was very kind to support moving the orchestrator repo under my own https://github.com/openark org. This means all issues, pull requests, releases, forks, stars and watchers have automatically transferred to the new location: https://github.com/openark/orchestrator. The old links do a “follow me” and implicitly direct to the new location. All external links to code and docs still work. I’m grateful to GitHub for supporting this transfer.

I’d like to thank all the above companies for their support of orchestrator and of open source in general. Being able to work on the same product throughout three different companies is mind blowing and an incredible opportunity. orchestrator of course remains open source and licensed under Apache 2. Existing copyrights are unchanged.

As for what’s next: some personal time off; please understand if there are delays to reviews/answers. My intention is to continue developing orchestrator. Naturally, the shape of future development depends on how orchestrator meets my future work. Nothing changes in that respect: my focus on orchestrator has always been first and foremost the pressing business needs, and then community support as possible. There are some interesting ideas by prominent orchestrator users and adopters and I’ll share more thoughts in due time.

QUICK HACK FOR GTID_OWN LACK
MySQL, GTID, Replication
December 11, 2019
One of the benefits of MySQL GTIDs is that each server remembers _all_ GTID entries ever executed. Normally these would be ranges, e.g. 0041e600-f1be-11e9-9759-a0369f9435dc:1-3772242, or multi-ranges, e.g. 24a83cd3-e30c-11e9-b43d-121b89fcdde6:1-103775793, 2efbcca6-7ee1-11e8-b2d2-0270c2ed2e5a:1-356487160, 46346470-6561-11e9-9ab7-12aaa4484802:1-26301153, 757fdf0d-740e-11e8-b3f2-0a474bcf1734:1-192371670, d2f5e585-62f5-11e9-82a5-a0369f0ed504:1-10047.

One of the common problems in asynchronous replication is the issue of consistent reads. I’ve just written to the master. Is the data available on a replica yet? We have iterated on this, from reading on the master, to heuristically finding up-to-date replicas based on heartbeats (see presentation and slides) via freno, and have now settled, in some parts of our apps, on using GTID. GTIDs are reliable, as any replica can give you a definitive answer to the question: _have you applied a given transaction or not?_ Given a GTID entry, say f7b781a9-cbbd-11e9-affb-008cfa542442:12345, one may query for the following on a replica:

mysql> select gtid_subset('f7b781a9-cbbd-11e9-affb-008cfa542442:12345', @@global.gtid_executed) as transaction_found;
+-------------------+
| transaction_found |
+-------------------+
|                 1 |
+-------------------+

mysql> select gtid_subset('f7b781a9-cbbd-11e9-affb-008cfa542442:123450000', @@global.gtid_executed) as transaction_found;
+-------------------+
| transaction_found |
+-------------------+
|                 0 |
+-------------------+

GETTING OWN_GTID
This is all well, but, given some INSERT or UPDATE on the master, how can I tell what’s the GTID associated with that transaction? There’s good news and bad news.

* Good news is, you may SET SESSION session_track_gtids = OWN_GTID. This makes the MySQL protocol return the GTID generated by your transaction.
* Bad news is, this isn’t a standard SQL response, and the common MySQL drivers offer you no way to get that information!

At GitHub we author our own Ruby driver, and have implemented the functionality to extract OWN_GTID, much like you’d extract LAST_INSERT_ID. But how does one solve that without modifying the drivers? Here’s a poor person’s solution which gives you inexact, but good enough, information. Following a write (insert, delete, create, …), run:
select gtid_subtract(
  concat(@@server_uuid, ':1-1000000000000000'),
  gtid_subtract(concat(@@server_uuid, ':1-1000000000000000'), @@global.gtid_executed)
) as master_generated_gtid;

The idea is to “clean” the executed GTID set of irrelevant entries, by filtering out all ranges that do not belong to the server you’ve just written to (the master). The number 1000000000000000 stands for “a high enough value that will never be reached in practice” – set it to your own preferred value, but this value should take you beyond 300 years assuming 100,000 transactions per second.
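Putting the pieces together, here is a minimal sketch of the read-your-writes flow (the table, column and placeholder value are illustrative only):

-- On the master, in the same session that performed the write:
INSERT INTO my_table (val) VALUES ('x');
select gtid_subtract(
  concat(@@server_uuid, ':1-1000000000000000'),
  gtid_subtract(concat(@@server_uuid, ':1-1000000000000000'), @@global.gtid_executed)
) as master_generated_gtid;

-- On a replica, before serving the read, check whether that GTID set has been applied:
select gtid_subset('<master_generated_gtid value>', @@global.gtid_executed) as caught_up;
-- 1 means the replica has caught up with the master's own writes;
-- 0 means wait/retry, or fall back to reading from the master.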
Continue reading » “Quick hack for GTID_OWN lack”

UN-SPLIT BRAIN MYSQL VIA GH-MYSQL-REWIND
MySQL, Open Source, Replication
March 5, 2019
We are pleased to release gh-mysql-rewind, a tool that allows us to move MySQL back in time, automatically identify and rewind split brain changes, restoring a split brain server into a healthy replication chain. I recently had the pleasure of presenting gh-mysql-rewind at FOSDEM. Video and slides are available; consider following along with the video.

MOTIVATION

Consider a split brain scenario: a “standard” MySQL replication topology suffered network isolation, and one of the replicas was promoted as new master. Meanwhile, the old master was still receiving writes from co-located apps. Once the network isolation is over, we have a new master and an old master, and a split-brain situation: some writes only took place on one master; others only took place on the other. What if we wanted to converge the two? What paths do we have to, say, restore the old, demoted master as a replica of the newly promoted master?

The old master is unlikely to agree to replicate from the new master. Changes have been made. AUTO_INCREMENT values have been taken. UNIQUE constraints will fail.

A few months ago, we at GitHub had exactly this scenario. An entire data center went network isolated. Automation failed over to a 2nd DC. Masters in the isolated DC meanwhile kept receiving writes. At the end of the failover we ended up with a split brain scenario – which we expected. However, an additional, unexpected constraint forced us to fail back to the original DC.

We had to make a choice: we had already operated for a long time in the 2nd DC and taken many writes that we were unwilling to lose. We were OK to lose (after auditing) the few seconds of writes on the isolated DC. But how do we converge the data? Backups are the trivial way out, but they incur long recovery time. Shipping backup data over the network for dozens of servers takes time. Restore time, catching up with changes since the backup took place, warming up the servers so that they can handle production traffic – all take time. Could we have reduced the time for recovery? Continue reading » “Un-split brain MySQL via gh-mysql-rewind”

MYSQL MASTER DISCOVERY METHODS, PART 6: OTHER METHODS
MySQL
May 22, 2018
This is the sixth in a series of posts reviewing methods for MySQL master discovery: the means by which an application connects to the master of a replication tree. Moreover, the means by which, upon master failover, it identifies and connects to the newly promoted master.

These posts are not concerned with the manner by which the replication failure detection and recovery take place. I will share orchestrator specific configuration/advice, and point out where a cross-DC orchestrator/raft setup plays a part in discovery itself, but for the most part any recovery tool such as MHA, replication-manager, severalnines or other is applicable.

HARD CODED CONFIGURATION DEPLOYMENT

You may use your source/config repo as a master service discovery method of sorts. The master’s identity would be hard coded into your, say, git repo, to be updated and deployed to production upon failover. This method is simple and I’ve seen it being used by companies, in production. Noteworthy: Continue reading » “MySQL master discovery methods, part 6: other methods”

MYSQL MASTER DISCOVERY METHODS, PART 5: SERVICE DISCOVERY & PROXY
MySQL, High availability, orchestrator, Replication
May 14, 2018

This is the fifth in a series of posts reviewing methods for MySQL master discovery: the means by which an application connects to the master of a replication tree. Moreover, the means by which, upon master failover, it identifies and connects to the newly promoted master.

These posts are not concerned with the manner by which the replication failure detection and recovery take place. I will share orchestrator specific configuration/advice, and point out where a cross-DC orchestrator/raft setup plays a part in discovery itself, but for the most part any recovery tool such as MHA, replication-manager, severalnines or other is applicable. We discuss asynchronous (or semi-synchronous) replication, a classic single-master-multiple-replicas setup. A later post will briefly discuss synchronous replication (Galera/XtraDB Cluster/InnoDB Cluster).

MASTER DISCOVERY VIA SERVICE DISCOVERY AND PROXY

Part 4 presented an anti-pattern setup, where a proxy would infer the identity of the master by drawing conclusions from backend server checks. This led to split brains and undesired scenarios. The problem was the loss of context. We re-introduce a service discovery component (illustrated in part 3), such that:

* The app does not own the discovery, and
* The proxy behaves in an expected and consistent way.

In a failover/service discovery/proxy setup, there is clear ownership of duties:

* The failover tool owns the failover itself and the master identity change notification.
* The service discovery component is the source of truth as to the identity of the master of a cluster.
* The proxy routes traffic but does not make routing decisions.
* The app only ever connects to a single target, but should allow for a brief outage while failover takes place.

Depending on the technologies used, we can further achieve:

* A hard cut for connections to the old, demoted master M.
* Black-holing/holding off incoming queries for the duration of the failover.

We explain the setup using the following assumptions and scenarios:

* All clients connect to the master via cluster1-writer.example.net, which resolves to a proxy box.
* We fail over from master M to promoted replica R.

Continue reading » “MySQL master discovery methods, part 5: Service discovery & Proxy”