added support for 1 to 1 relations and added support for n unique keys for conflict resolution during merging #3
Reference: LBRYCommunity/sqlboiler#3
added support for 1 to 1 relations and added support for n unique keys for conflict resolution during merging.
Just thought of a case not covered: if the secondary user has a row but the primary does not, the secondary row will still be deleted. We also need to check that the primary has a row in order for the secondary's to be deleted; otherwise there is no conflict to resolve.
would wrapping %s in backticks help? I'm thinking about those column names that might have spaces or special keywords
I don't fully understand this PR, but I took a look at it and it seems to logically make sense.
Still I don't feel experienced enough to say anything else.
We discussed. I think if we have a column name that has spaces, this would not be accepted by SQLBoiler, but also if this is the case we should just fix the column name because it should be snake_case.
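For reference, the quoting being discussed would look something like the sketch below. This is a hypothetical helper, not sqlboiler's own quoting logic; it follows MySQL's rule that embedded backticks are escaped by doubling them:

```go
package main

import (
	"fmt"
	"strings"
)

// quoteMySQLIdent wraps an identifier in backticks so reserved words or
// unusual characters don't break the query; embedded backticks are doubled,
// per MySQL's identifier-quoting rules.
func quoteMySQLIdent(name string) string {
	return "`" + strings.ReplaceAll(name, "`", "``") + "`"
}

func main() {
	fmt.Println(quoteMySQLIdent("order"))   // a reserved word
	fmt.Println(quoteMySQLIdent("user id")) // contains a space
}
```

As the comment above notes, snake_case column names make this largely moot, but quoting is the defensive option if odd names ever do appear.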
i recommend splitting the two cases out into separate functions. it will be far easier, and i don't think there's much overlap
id prefer not to have large comments like this in the code. we can either use godoc-style comments at the top of the function which describe what the function does, or write up the docs somewhere else. other than that, the code should speak for itself as much as possible
some code i threw together real quick
@lyoshenka I split it out into the two methods per your suggestion. I also tested the merge for the flagged edge case and it passes.
im having a bit of trouble following some of this. whats an example in lbry where we'd use this now?
also, could you add some tests for this behavior? sqlboiler already has a lot of features for testing. it would be good to keep using them.
finally, comments like

//Grab scanned values for query arguments

are unnecessary, and may be a sign that the code itself is not readable. please consider rewriting the code instead so that it reads better.

log.Fatal?
@@ -98,0 +127,4 @@
defer rows.Close()
//This query will adjust dynamically depending on the number of conflicting keys, adding AND expressions for each
// key to ensure the right conflicting rows are deleted.

why don't we know in advance how many columns will be returned? isn't it just len(conflictingColumns)?
We do, but we can't predict the columns in the code, so we have to be dynamic. One query might have 1, another might have 5, and I can't create a generic struct for scanning purposes. From the research I did online, the suggestion was to use this pointer solution for dynamic scans.
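The pointer solution mentioned here is the usual database/sql pattern for scanning a column count known only at runtime: build a slice of values and a parallel slice of pointers into it, then pass the pointers to rows.Scan. A minimal sketch, with the scan simulated since there is no live database (column names are illustrative):

```go
package main

import "fmt"

// scanDest builds a slice of pointers into values, suitable for passing to
// rows.Scan(dest...) when the number of columns is only known at runtime.
func scanDest(values []interface{}) []interface{} {
	dest := make([]interface{}, len(values))
	for i := range values {
		dest[i] = &values[i]
	}
	return dest
}

func main() {
	// e.g. conflictingColumns determined at runtime
	cols := []string{"user_id", "channel_id"}
	values := make([]interface{}, len(cols))
	dest := scanDest(values)

	// In real code: rows.Scan(dest...) fills values through the pointers.
	// Simulate that here by writing through them directly.
	*dest[0].(*interface{}) = int64(42)
	*dest[1].(*interface{}) = "UCabc"

	fmt.Println(values) // [42 UCabc]
}
```

Because each dest element points at the corresponding values slot, whatever Scan writes is immediately visible in values, regardless of how many columns the query returned.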
@lyoshenka is this acceptable? or were you just curious? If you approve I would like to deploy this fix.
Feel free to merge, though you should still remove the log.Fatal line. Also, would still love tests for this if possible
@tiger5226 should i merge this?
No, I was talking with @tzarebczan and we will need to make changes to the schema. There are certain conflicts where we don't want to delete. For example, youtube data. So in this example we need to change the index to not be unique.
If it's not on github, it didn't happen 😉
What schema changes are you planning to make?
I think youtube_data should probably still be unique. All the code Niko wrote depends on that - it would be a lot of work to change it. I'd prefer allowing Merge to error if there's a conflict, rather than making youtube_data not unique. Then we can handle the merge manually.
In my defense I posted that about 2 minutes after we discussed it :) to make sure it was on github haha. I was actually planning on merging it, per your comment, when Tom flagged his concern.
I talked with Tom about this more tonight. This PR fixes the fact that manual merging is currently required, which is what we want for things like stripe. However, youtube data is potentially a special exception. @tzarebczan mentioned that he thinks the youtube data is actually unique per channel, not per user. He plans on double checking with @nikooo777 tomorrow.
what do you mean by unique per channel and not per user?
the youtube_data table is strictly related to the user table, a youtube_data row can only exist if there is a user associated with it, and only one row can exist per user.
i don't think we have a concept of channels in our database, do we?
@nikooo777 I will let @tzarebczan answer your question, but based on what you said, what is the expected behavior when we merge 2 users that each have a youtube account? If two records conflict (meaning the uniqueness keys all match), we delete the secondary user's record. The main concern is that we would have two accounts for two users that request to be merged into one. I think we should delete the secondary youtube_data until we support multiple youtube accounts per user. What impacts could this have?
I don't think deleting data is acceptable in this case. I'd much rather drop the uniqueness constraint as I had previously proposed in an issue on internal-apis.
What this implies is that we'll have to rework rewards to allow a user to claim the YouTube rewards multiple times and verify that nowhere else we depend on that constraint.
I was referring to this comment. I would prefer that we drop the constraint as well and allow multiple channels. Does this mean that this PR is blocked by that, or can it be merged before that is handled? How common is it that we merge two users with different youtube channels? Also, is there an issue for this?
probably not my call to make, but I'd vote for blocked. I don't agree with ever dropping any youtube_data rows.
Opinions @lyoshenka and @kauffj ?
Per @nikooo777 from slack, the issue for multiple youtube accounts is https://github.com/lbryio/internal-apis/issues/323; will block this on that for now. @alyssaoc just want to make sure this is on your radar.
We have a PR in review https://github.com/lbryio/internal-apis/pull/639. Once this PR is merged we can potentially merge this as well.
@tiger5226 how comfortable do you feel merging this? Should we still run through a few scenarios locally before doing so?
Yes! This can be merged now! I am really confident. Nonetheless, I would still want to retest it since it's a months-old issue.