Hi Codeforces!
Colleagues from Google asked me to share the announcement. Join, it'll be fun!
Hello!
Google’s team programming competition Hash Code is back for another year of challenging developers around the world to solve a Google engineering problem. Think you could optimize the layout of a Google Data Center? Or how about perfecting video streaming on YouTube?
If you’re up for the challenge, sign up to compete by February 17 at g.co/hashcode.
Hash Code takes place over 2 rounds:
an Online Qualification Round: Thursday, February 20, 17:30–21:30 UTC. Compete in this virtual round from wherever you'd like, including from a Hash Code hub. Hubs allow teams from the same community (e.g. a university or coding club) to compete side by side in a fun and exciting environment.
a Final Round at Google Ireland: Saturday, April 25. Top-scoring teams from the Online Qualification Round are invited to our Dublin office to vie for cash prizes and the title of Hash Code 2020 champion.
Whether you’ve just started coding or you’ve been participating in programming contests for years, Hash Code is a great chance to flex your coding muscles, get a glimpse into software engineering at Google, and have some fun. Take a look at previous Hash Code problem statements to see the engineering challenges participants have tackled in the past.
Register by Feb 17 at g.co/hashcode
I'd be more supportive of this if Google didn't try to pull political stunts in their programming contests, such as creating Google Code Jam for Women. Either this suggests that Google thinks women can't compete on a level playing field with men, or Google is finding a legal way to artificially bias hiring practices towards women; I find both ideas extremely unappealing.
What now, you don't want a contest only for some country because it's discriminatory? There aren't many women in STEM and events/contests like that are meant to help the situation. I understand complaining about unfair interviews and getting a job, but a contest with the goal of popularizing programming for some underrepresented group of people? Come on.
Bruh, women are much smarter than men. That's why they don't waste their lives on competitive programming.
Upvoted. My all-time favorite comment.
Dang, I had to choose between upvoting this and keeping the upvote count at 69. Guess what, 69 didn't last.
We can still make it 420.
shit I thought I replied to antontrygubO_o :|
That's discrimination against men, and untrue. Fact: the vast majority of famous scientists across all centuries are not women.
I'm not trying to bring discrimination into this or anything, but I would say, yes, it is pretty silly to artificially restrict participants of an online contest to a geographic location. Why take extra effort to exclude people who might want to participate?
(Of course, this doesn't include ICPC regionals, for example, where your geographic region matters. In that case it determines the spot at WF that you are competing for)
In the case of high school contests it makes sense to restrict participation, because you can't expect high school students to be at the same level as college/post-college IGMs who have been competing for years. But that difference in what a person has had time to learn isn't present across age/gender/country of residence.
You only addressed the first sentence of my comment. In particular, my second sentence is the answer to your question "Why take extra effort to exclude people who might want to participate?".
Your logic says it's stupid to organize a competition only for some region of a country (let's say, the championship of New York) or for a single school. The goal of those is also to popularize something in that place.
The only reason you should exclude people from participating is if the trait that causes them to be excluded is relevant to the contest. The college I attend holds a programming contest in order to select ICPC teams. High school students and students from other universities can join, but they are ranked unofficially, since they aren't competing for a spot on an ICPC team. In this case, location is relevant to what you are competing for.
It would be silly to exclude people from participating in an online contest on a global platform where geography isn't relevant, in the same way it would be silly to limit the contest to a race or gender when those aren't relevant either.
I agree. I'm a female and I think it's embarrassing.
There is no rule in the "Google Code Jam for Women" competition that bans male participants.
There is a girls-only contest in CCPC (China CPC), and I have seen no complaints about that. There is also the CGMO (China Girls' Mathematical Olympiad); its problems are quite hard (above provincial-level contests), and there are no complaints there either.
Nobody questions authority in China
¯\_(ツ)_/¯
That's not exactly the case; the computer federation that hosts the contests DOES get questions about the other contests it holds, and it DOES change things accordingly.
I think the bias did not come from the contest hosts, but from the stereotypes of other people, who think that computer geeks should be, for example, bald and probably with curly hair.
So the problem is not with Google (or the authorities, or whoever holds the contests, etc.); it's with these stereotypes. Only if we change these views can we restore fairness in computing.
My score 4/7.
bald and probably with curly hair do not make sense together.
It's just a general impression of how people see geeks.
(this even looks kinda good, but you can find some truly awful combinations of bald + curly hair)
I know nothing about awful combinations. But there are some legendary ones!
Women can not compete in chess and competitive programming as well as men — the leaderboard on this site and Elo ratings show that.
Code Jam for Women has nothing to do with politics either. It is just a way to popularize Google's brand and hire more women to have more labor resources. Google's capital allows them to do whatever they want, e.g. infringe on workers' rights.
It shows nothing of the sort. If 1% of the people who do competitive programming are women (which seems accurate), then obviously there are not going to be many in the top 100. But that doesn't mean it's harder for women to get into the top 100; it just means there are fewer women.
I don't think there is a conclusive way to prove or disprove what you just said. The Codeforces leaderboard doesn't show gender (and it shouldn't), and I doubt you looked at 40,000 profiles manually and tried to figure out each person's gender. So my guess is that you are talking out of your ass.
It shows the current state — that is not "nothing of the sort".
In the current state of the field, women do not perform as well as men. One of the reasons is that there are too few women, as you said yourself.
I have no interest in discussing either cognitive or physical women's capabilities. But I'm open to discuss workers' rights.
You said that "women can not compete ... as well as men". I'm sorry, but it really is hard to interpret that as anything other than "the average woman is not as good as the average man".
Cool, but what the hell does any of this have to do with workers' rights???
Ok, I see. Probably I should have said that women do not compete as well as men.
I made a couple of points in my first comment, replying to SecondThread, but you decided to discuss women.
We should stop discrimination; contests should be rated for everyone regardless of their sex.
yelllsss!!
Honestly it's Google showing that they're woke — lip service, no real effect. They don't need to artificially bias hiring practices, discrimination West-style is only forbidden against specific groups.
Says the one writing political stuff in an unrelated post
You also reacted to political stuff.
See, this is the shitty thing about political activism: it tends to make one want to respond, so it propagates into unrelated things like programming. Unfortunately, it always ends either with flaming or a circlejerk.
Then write a new thread rather than spamming the Hash Code announcement.
I didn't say anything bad about that ;)
??? You said "Says the one writing political stuff in an unrelated post", it's about reacting to (Google's) political stuff, it doesn't sound like you're praising it or being sarcastic.
Tbh it doesn't belong on CF in any form but I can understand dude's desire to vent.
"in an unrelated post"
Your (or my) posts are also unrelated to this blog, so that still applies.
nice try though
I'm glad you admit that I'm right.
Let me give you my two cents as a woman who participated in the latest "Code Jam for Women" competition. I've met loads of female competitive programmers here in Brazil and discussed why there aren't as many women in competitive programming. Most of them said it is a pretty hostile environment: guys choose guys over higher-rated girls, or refuse to teach girls as much because they think they won't learn as fast as other dudes. Because of that, girls are much less likely to ask questions and learn from their peers. What I liked about the Code Jam for Women is that it gave some of the less experienced girls a place to ask questions about previous competitions, discuss with people similar to themselves, and be heard. I had never seen them study as hard as a group for anything else. I can normally get people to respect me enough to hear my ideas (in most cases), but the challenge is for girls at the beginning, who face this silly artificial barrier before becoming coders. Code Jam for Women helps with that.
My personal opinion is: they just marry.
It won't be funny if you give us a problem about moving some units on a plane...
It'd be the final fall of HashCode.
organizers quickly adding the third dimension
Care to elaborate?
It's very common in optimization/approximation contests to order units around (taxis, soldiers, workers), and Radewoosh always complains about it. I agree with him that it gets boring and that organizers should avoid it.
A few examples (including the contest from the announcement).
It seems there is political behavior from Google that is even more shameful and hostile: I cannot represent my country.
I think this is a good place to share my thoughts. I'm the greatest anti-fan of HashCode's formula. A few reasons:
-- One problem for four people for four hours? Definitely the wrong proportions. Do you remember the eliminations for Pizza/Deadline24/Marathon24? The last one had five problems for three people. This was something. You had to show your strategic skill: where to spend more time, where to just write something fast and let it run in the background, whether to take a problem yourself or give it to a teammate, etc. In the current formula, it always ends with tiny differences. In Marathon, fighting for epsilons didn't make sense, as you could switch to another problem and gain much more.
-- You are not solving the problem, you are solving the testcases. Well, on Deadline/Marathon you were solving them too, but they had $$$10$$$ files with several testcases each. HashCode offers about two seriously hard tests. If the organizers still think that we are solving the problem -- come on, you want to compare solutions on two tests? For example, TopCoder (which is the most famous in the optimization world thanks to marathon matches) runs solutions on $$$2000$$$-$$$5000$$$ tests. OK, you won't force us to run on that many, but still, giving us $$$10$$$ files with several testcases would sometimes force the best teams to analyze some testcase deeply (as you like it so much) -- to figure out how it's generated, maybe what they should focus on. Currently, teams HAVE to focus on the properties of these two testcases, not on the whole problem. And again, it makes everybody fight for small differences in the same places instead of trying to find some other place to get points.
-- The problems (at least in the eliminations) are just boring. There is no room for nice ideas; everybody ends up with almost the same greedy (and has to fight for small differences). A few times I won a problem in the eliminations for Pizza/Deadline, and every time I won thanks to some great idea which nobody else came up with.
deadline24 2018: First, cut everything like that:
And then play with merging these beams.
pizza 2018: Make long (almost) diagonal pillars:
(ok, on the drawing they aren't so long, sorry for that) And then play with building one on the other.
I was very proud of myself for coming up with these (still rather simple) ideas. What can we come up with on HashCode? "Huh, maybe send some worker to the nearest workplace on the plane"? xd
I guess that even if the problems were more interesting, the aura of good ideas would still be killed by the fact that everybody on the team has to think about the same problem, so more teams would again come up with the same thing. Also, if you think that it's a bad idea to give such a non-standard and complicated problem when there's only one problem in a contest -- you know what to do: give us more problems. In particular, the number of problems $$$\geq$$$ the number of people on one team should hold; then you can feel confident that the best teams will advance to the finals. With more than one problem you can experiment and give us many different types of problems (and then one of them can be boring, if you like such ones -- it won't hurt so much).
-- The problems could be better prepared. Again there's the issue of solving the testcases instead of the problem, but there's more. A typical situation from a short contest: OK, I finished my greedy algorithms, maybe I found something in a smarter way, what now? Everybody should know the answer — metaheuristics like hill climbing and simulated annealing, or changing the greedy to a beamsearch (it depends on the problem, of course). And what turns out on HashCode? The limits are so high that you won't be able to use any metaheuristic efficiently (at least I've always had this feeling). Everybody, do you understand? They organize a contest to see who's the best at optimization problems and force us to do only some simple greedy shit xd. Of course, giving us only small data would also be bad. Do you have any ideas about what would be good? Maybe $$$10$$$ files with several testcases of various sizes and parameters? We'll never know...
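For anyone who hasn't used these metaheuristics: a minimal simulated-annealing loop looks roughly like the sketch below. Everything in it is illustrative (the toy permutation objective is made up); in a real optimization contest you'd plug in your own state, neighbor move, and scoring.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    const int n = 100;
    vector<int> perm(n);
    iota(perm.begin(), perm.end(), 0);
    shuffle(perm.begin(), perm.end(), mt19937(1));

    // Toy objective (made up for illustration): penalize big jumps between
    // neighbors, so the optimum is a sorted or reverse-sorted order.
    auto score = [&](const vector<int>& p) {
        long long s = 0;
        for (int i = 0; i + 1 < n; i++) s -= abs(p[i] - p[i + 1]);
        return s;
    };

    mt19937 rng(12345);
    uniform_real_distribution<double> unif(0.0, 1.0);
    long long cur = score(perm), best = cur;
    const int iters = 200000;
    for (int it = 0; it < iters; it++) {
        double temp = double(iters - it) / iters;  // cools linearly from 1 to 0
        int i = rng() % n, j = rng() % n;
        swap(perm[i], perm[j]);                    // random neighbor: swap two slots
        long long nxt = score(perm);
        double delta = double(nxt - cur);
        // Always accept improvements; accept worse moves with probability
        // e^(delta / T). The factor 10 is an arbitrary temperature scale
        // chosen for this toy objective.
        if (delta >= 0 || unif(rng) < exp(delta / (10 * temp + 1e-9))) {
            cur = nxt;
            best = max(best, cur);
        } else {
            swap(perm[i], perm[j]);                // undo the rejected move
        }
    }
    printf("best toy score: %lld\n", best);
}
```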
To sum up: maybe each of these defects alone wouldn't hurt so much (but it still would hurt); all of them together make the formula of the contest the worst possible ever. I always wonder how it is possible that Google, the company which organizes Code Jam, made such a bad contest as HashCode. Just look here, at the section "Scoring" -> "How is my score for a particular problem determined?". I guess people thought about that section several times longer than about the whole HashCode.
"Join, it'll be fun!" -- yea, but what about being professional and making a competition instead of a chaotic game for kids?
Dear HashCode team, not everything is lost. I still believe in you. You can still fix your contest and make it make sense.
Best regards,
Mateusz Radecki a.k.a. Radewoosh
I completely disagree with your opinion about the HashCode elimination round problems. A simple greedy probably lets you qualify, but if you try something simple in upsolving, you will see that you are far from the first-place result.
I also don't like that there are only 4-5 tests. Maybe they do it because all tests are handmade or taken from some "real" data.
Deadline/Marathon eliminations were very cool, but that doesn't mean the one-problem format is worse. They are different: the one-problem format is more about team interaction, while the several-problems format is much more individual.
HC/SA worked on almost all HashCode problems; you are just wrong about that.
I remember talking with Errichto after the eliminations that his team won (2018, if I'm not mistaken) to learn what they did. Guess what: just greedy, with parameters and decisions chosen by analyzing the structure of the tests (as there are different amounts of points for different tests, usually one is the most important).
Maybe people had a working SA, but we won't know who was the best, as there are O(1) tests. I think many tests on Deadline/Marathon were also handmade, so it's still possible to do more.
Ok, maybe the 2018 elimination was bad. I don't remember it well, but it seems there was an issue with the test structure.
In my opinion it makes perfect sense to have the quals and the finals in the same format. And, I guess, Google would never make a 24-hour finals. So what are the options in that setting:
I'm not aware of competitions with finals in such formats, except Hash Code. Are there any? (Or in fact, were there any? Because all the 24-hour competitions have been shut down.)
"-- The problems could be better prepared." They definitely could. Do you remember the finals of Pizza/Deadline24/Marathon24? Let's take the Deadline24 2017 finals as an example. They changed the rules of the third task during the contest (and erased all points earned) because the rules were easily exploited. And the new version of the task was still exploited in a similar way.
Yeah, it's quite easy to take the best of what you like and compare it to the worst of what you don't. Deadline was an awesome competition. I guess Pizza/Marathon were too (but I never participated in them).
-- Sure, two hard tests are not enough. But what about five, as in the Hash Code 2018 finals?
-- Having testcases at different scales (while not making the smaller ones too cheap) sounds like a good idea to me.
-- In some tasks all participants end up with the same greedy. Or have to solve exactly the same knapsack. That is really boring.
Not all problems from past editions are equally good. But maybe it makes more sense to give feedback on particular problems than to just claim that the format is trash?
"In my opinion it makes a perfect sense to have quals and finals in the same format." -- I totally agree, but I don't want to speak too much about finals as I've never been on HashCode finals. I've heard that the problems happen to be a bit better than on the eliminations. Comparing eliminations of Pizza/Deadline24/Marathon24 I guess that it'd be nice to see both eliminations and finals of HashCode with multiple tasks for ~5 hours.
I'm not suggesting a 24-hour finals, as the competition is definitely about optimization, not about writing bots and so on, but imo there is a better way to run a team competition.
You are basically saying "I enjoy formats of different optimization contests, so this one is bad because it is not the same".
If all you have to do is write one greedy, why are you not able to advance?
No, I really tried to show the defects that I see. I talked about small differences. Are you sure that the teams are really sorted according to the strength of their programs? Look here. Yes, probably Past Glory had a better solution than the others, but look at the eliminations: $$$\text{Second} = \text{First} \cdot 0.998$$$.
To illustrate my point: how many of the CF rounds you won would you still have won if they'd consisted of only one problem, let's say C? You'd have to fight for milliseconds, while with a whole problemset you can show how good you are, without worrying that small mistakes will disqualify you.
Yeah, I understand you, but that is still part of the format.
I don't think it is legitimate to compare relative differences. For example, in the finals there was a testcase where you could get either $$$2^{19}$$$ or $$$\varepsilon$$$. All top teams got $$$2^{19}$$$, so the relative difference (on the other testcases) became smaller. I feel like my comment is messy; what I want to say is that if you compare differences to some baseline instead of total scores, the relative differences will increase.
Still, I agree that many of the changes between the 2nd-place and 1st-place solutions feel insignificant and kind of random, but maybe that's what makes this format unique. The 1st-place team somehow made these changes and won, right?
Most of your comment is about you not liking some problems and not liking that there is one problem and not many. From my point of view it really is "I enjoy formats of different optimization contests, so this one is bad because it is not the same".
About this one: yeah, Past Glory had a better solution. On top of the required knapsack and some greedy algorithms (the teams in 2nd-6th also had that part), they had hand-made patterns, which are kind of a constructive solution for the given tests.
While I don't consider the task from the 2019 finals an example of the greatest Hash Code problems, I can't agree with your accusations.
If teams easily get X (let's say X = 6M) and then fight for the last Y points, then even if $$$Y \ll X$$$, it doesn't tell you anything about the quality of the problem. It's just a non-linear scaling of the score value (as a function of solution quality).
In Deadline, the first greedy that comes to mind gets at least a top-30 score (more likely top-10). That's how I qualified for 3/4 Deadlines (I failed the ACM-style task the time I didn't qualify). All tests in Deadline are more or less random, and you don't have to write separate solutions for particular tests.
Hashcode is harder because:
Speaking as someone who never qualified for the Hash Code finals, I should say it is more balanced (stronger people are more likely to get higher scores).
I agree with some of those complaints about the format of Google Hash Code (having participated in the qualification 4 times already). I think there is one simple solution that would improve things considerably — have a few hard inputs that are only revealed during the last hour of the competition. This would favour more generic solutions over the ones tailor-made for the specific inputs. Also, this would add way more action and tension at the end, compared to the current scoreboard freeze during the last hour.
Can anyone tell me whether it is necessary to do the practice problem given by Hash Code? And which types of problems should I practice the most right now?
How many problems do we get to solve in the online qualification round and in the final round? #HashCode
I'm looking for a team -- Python preferred, C++ is ok. Anyone got a spot on their team or looking for a teammate?
Teams in the top, can you share your scores on tests? Here are ours (team Errecto: Errichto, kostka, Chameleon2460, kabuszki).
A — 21
B — 5,822,900
C — 5,690,369
D — 5,034,770
E — 5,237,345
F — 5,348,248
total 27,133,653
We think that D can still be improved quite a bit.
total 27,184,696
Lol, we got 98% of the max possible score on D at the beginning and decided to concentrate on the other tests; now we have 70k less than you on that test.
Same
a
How did your team get such a high score on D?
Perhaps intentionally, there are 15,000 books that are in exactly 2 libraries each, and 63,600 books that are in 3 libraries each. If the former are variables and the latter are clauses, then we have a 3-SAT instance. Then we did local search on the number of satisfied clauses. A perfect score could probably be achieved with a SAT solver as well.
Our team also found the 3-SAT modeling, but doing such a 'good' local search was the difficult part... Congratulations to your team!
Well, our local search is very simple: flip a random variable, keep the change if the number of satisfied clauses does not decrease, and wait for as long as you wish :)
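In code, that loop could look roughly like this on a generic 3-SAT instance of the same shape (a sketch: a random instance stands in for the real reduction, and the incremental re-checking via occurrence lists is one reasonable way to make the flips fast, not necessarily what they did):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Same shape as dataset D: ~15000 variables, ~63600 3-literal clauses.
    const int numVars = 15000, numClauses = 63600;
    mt19937 rng(42);
    vector<array<int, 3>> clauses(numClauses);
    for (auto& c : clauses)
        for (int& lit : c) {
            int v = rng() % numVars;
            lit = (rng() & 1) ? v : ~v;   // ~v encodes the negated literal
        }

    vector<char> val(numVars);
    for (auto& v : val) v = rng() & 1;

    auto litSat = [&](int lit) {
        return lit >= 0 ? val[lit] == 1 : val[~lit] == 0;
    };
    auto clauseSat = [&](int c) {
        return litSat(clauses[c][0]) || litSat(clauses[c][1]) || litSat(clauses[c][2]);
    };

    // occ[v] = indices of clauses mentioning variable v, so one flip only
    // needs to re-check those clauses instead of all 63600 of them.
    vector<vector<int>> occ(numVars);
    for (int c = 0; c < numClauses; c++)
        for (int lit : clauses[c]) occ[lit >= 0 ? lit : ~lit].push_back(c);
    for (auto& o : occ) {                 // dedupe clauses that repeat a variable
        sort(o.begin(), o.end());
        o.erase(unique(o.begin(), o.end()), o.end());
    }

    int cur = 0;
    for (int c = 0; c < numClauses; c++) cur += clauseSat(c);

    for (int it = 0; it < 2000000; it++) {
        int v = rng() % numVars;
        int before = 0, after = 0;
        for (int c : occ[v]) before += clauseSat(c);
        val[v] ^= 1;                       // flip a random variable
        for (int c : occ[v]) after += clauseSat(c);
        if (after >= before) cur += after - before;  // keep non-worsening flips
        else val[v] ^= 1;                  // otherwise undo
    }
    printf("satisfied: %d / %d clauses\n", cur, numClauses);
}
```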
We are not really a contender for the top, but we got:
total 26,998,905.
I guess E was our weak point. Did you guys use some special algorithm for E?
A — 21
B — 5,822,900
C — 1,421,219
D — 4,815,200
E — 3,748,621
F — 1,253,555
total 17,061,516
We think that everything can be improved quite a lot!
I guess besides A and B :)
Probably not even close to the top after unfreeze, but
A — 21
B — 5,822,900
C — 5,689,820
D — 5,028,530
E — 5,121,579
F — 5,348,248
Total 27,011,098
Team circus_wolves(nmakeenkov, linjek).
In the last hour we improved only slightly (the total was 27,009,965 before), because we concentrated on task C.
A — 21
B — 5,822,900
C — 5,645,747
D — 4,812,015
E — 4,601,441
F — 5,317,660
Total : 26,199,784
I guess we were derailed the moment we tried improving only C and F, since there was a huge margin there. But D and E seem to have been better to focus on.
We had
- A: 21
- B: 5,822,900
- C: 5,689,822
- D: 5,028,790
- E: 5,034,292
- F: 5,317,660
Total — 26,893,485
With tusg25 and zeyunow
With bardek and DamianS we have:
A -- 17
B -- 5,822,900
C -- 5,690,448
D -- 5,028,530
E -- 5,215,303
F -- 5,340,296
total -- 27,097,494
I think your score in A can be improved quite a lot!
I disagree -- only by 4 points :P
That's a 20% increase.
A — 21
B — 5,822,900
C — 5,686,950
D — 5,028,530
E — 5,131,048
F — 5,345,656
Total — 27,015,105
total 27,070,220
I hope we qualify (((((((
TÜ Randomized algorithms (me and semjon.00)
Total — 27,125,923
What are your predictions for the cutoff?
Team NZEC_ final Score:
A — 21
B — 5,822,900
C — 5,645,747
D — 4,843,540
E — 4,613,390
F — 5,240,161
Total Score — 26,165,759
abhiy13 verma_ankit484 21171_somesh
The schedule shows that the results of the qualification round will be published on Feb 25. I get that you need to check for cheating, but will we not even get some "preliminary" results before then? Or is the scoreboard "unfrozen" on Feb 25?
(We got a huge increase in the last hour and are now anxious :D)
Is the live streaming dead?
+
+
†
Team q1w2e3
A: 21
B: 5822900
C: 5645747
D: 4841980
E: 4945508
F: 5289663
Total: 26545819
Pumpkin Patch (WhaleVomit, PlasmaVortex):
A — 21
B — 5822900
C — 5467966
D — 4815395
E — 4854966
F — 5202696
Total — 26163944
What is the extended round?
Upsolving
Team tratata: A – 21; B – 5,822,900; C – 5,679,708; D – 5,067,140; E – 5,205,284; F – 5,348,248; Total score — 27,123,301.
Several people seem to have achieved this same score on F. Is that optimal? What's the idea behind it? Our team only got 5,001,254 on that particular case.
Who are the members of the *code team that secured first place?
Members: snuke smeke sigma425 koyumeishi
Metaheuristics in vacuum (MrDindows, Furko)
B — 5,822,900
C — 5,690,378
D — 5,038,215
E — 5,210,736
F — 5,345,656
Total score 27,107,906
A : 21
B : 5,822,900
C : 5,689,997
D : 5,033,665
E : 4,938,403
F : 5,345,656
Total Score 26,830,642
Team : Little_Piplup dlwocks31 diordhd gratus907 coffeetea
Several tweaks / random parameters / UNINTENDED BUGS / weird piece of code :(
A — 21
B — 5,822,900
C — 5,689,502
D — 5,043,025
E — 5,095,422
F — 5,317,660
Total — 26,968,530 (#121 overall)
From the #9 team of the last final round.
Congratulations to the finalists!
A — 21
B — 5,822,900
C — 5,467,996
D — 4,839,510
E — 3,600,887
F — 5,042,544
Total: 24,773,828. Team: me and Polarity, but it was basically me solving them all. I still don't understand why my method failed so badly on E...
Did you perhaps forget to sort the libraries' book lists by score?
We did pretty badly, but at least we got 5,690,828 from C (with a set-cover IP formulation), which seems to be the best result posted here so far. Actually, I think it is optimal.
5690870
Joined in the last 90 mins, wrote a stupid $$$O(L^2)$$$ heuristic, and ended up with a score of 26,791,720. No idea what happened XD. Well, it's Hash Code: a stupid solution gets a decent score.
What is your team name?
As far as I can see the highest score in the contest was 27,203,691 ( < 27,791,720).
Update: I just read that you solved it in 90s. I thought it was 90 mins at first. Seriously, you destroyed everyone in 90s?
Sorry, it is 26,791,720 :). Just trying to say the scoring algorithm is kind of strange. Friends of mine wrote some brilliant solutions but unfortunately received a lower score. I'm 100% sure I don't deserve the score.
S̶t̶i̶l̶l̶,̶ ̶y̶o̶u̶ ̶w̶r̶o̶t̶e̶ ̶t̶h̶a̶t̶ ̶s̶o̶l̶n̶ ̶i̶n̶ ̶t̶h̶e̶ ̶9̶0̶s̶?̶ Duration also updated later.
What was your solution/heuristic?
I just did a sort with a weighting function involving the sign-up time, and a greedy that scans the best books in the remaining time. (The idea is stupid; it was 4:00am here and my brain was not functioning properly, but that's the only thing you can code in 90 mins, I guess.)
It ran for ~5 mins on case D (the slowest) and got the score I stated above. I am now trying to optimize some kind of knapsack; I hope it can get a better score.
What was the exact heuristic? I tried something similar but wasn't able to consistently beat my original greedy solution :(
what heuristic?
Total: 26,876,290
Does anyone have a good solution idea for E?
For any team that got more than 5M points on E: how did you do that? Our team's local search ended up with a terrible score on E (less than 4M), and we only got 4.6M after some randomized greedy.
Just greedily picking the next library, then greedily taking the highest-scoring untaken books from it, with the library scoring function
sum(book_scores) / library.signUpTime
gave us 5,034,357 points. Seems to be much less than what lots of people got, though.
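In code, that greedy could look roughly like this, assuming the Hash Code 2020 qualification input format (B L D, the book scores, then N T M plus the book list per library); all names here are illustrative, not the commenter's actual code:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the greedy above: repeatedly sign up the library with the best
// sum(scores of its untaken books) / signupDays, then let it scan its untaken
// books in decreasing score order for the days that remain. It only prints an
// estimated score; a real submission would print the signup order and books.
struct Library { int signupDays, booksPerDay; vector<int> books; };

int main() {
    int B, L, D;                          // books, libraries, days
    scanf("%d %d %d", &B, &L, &D);
    vector<long long> score(B);
    for (auto& s : score) scanf("%lld", &s);

    vector<Library> libs(L);
    for (auto& lib : libs) {
        int n;
        scanf("%d %d %d", &n, &lib.signupDays, &lib.booksPerDay);
        lib.books.resize(n);
        for (int& b : lib.books) scanf("%d", &b);
        // Best books first, so taking a prefix is always optimal for a library.
        sort(lib.books.begin(), lib.books.end(),
             [&](int a, int b) { return score[a] > score[b]; });
    }

    vector<char> taken(B, 0), used(L, 0);
    long long total = 0;
    int day = 0;
    while (day < D) {
        // Recomputing every library's sum each step is quadratic; caching
        // would be needed on the bigger datasets.
        int best = -1;
        double bestVal = -1;
        for (int i = 0; i < L; i++) {
            if (used[i] || day + libs[i].signupDays >= D) continue;
            long long sum = 0;
            for (int b : libs[i].books) if (!taken[b]) sum += score[b];
            double val = double(sum) / libs[i].signupDays;
            if (val > bestVal) { bestVal = val; best = i; }
        }
        if (best == -1) break;
        used[best] = 1;
        day += libs[best].signupDays;
        long long canScan = (long long)(D - day) * libs[best].booksPerDay;
        for (int b : libs[best].books) {
            if (!canScan) break;
            if (!taken[b]) { taken[b] = 1; total += score[b]; canScan--; }
        }
    }
    printf("estimated score: %lld\n", total);
}
```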
After you get the final order of libraries, you can improve the result using a maximum vertex-weighted matching in a bipartite graph. That gives about 5,210,736 points. With some heuristics for book score based on the number of libraries containing that book, I got 5,223,111.
We used the following way to decide the order of the libraries.
Assume we have some order of the libraries; let's try to quickly (in $$$O(N)$$$) estimate the total weight of books we can take. For each library, we can calculate the number of books it can scan: (number of days after its signup) * (number of books it can scan per day). Just assume it takes the best books possible, and use precalculated prefix sums to get the sum of scores each library takes. At this point we simply ignore that multiple libraries may count the same book, and count such books multiple times.
Now we have some quick way to estimate how good a given order is. We can use some hill climbing or simulated annealing to find a good order. After we have decided the order, use some min-cost max-flow to assign books to libraries (as mentioned above).
We got 5,236,142 points on E this way. About 200k of it came from this approach to library ordering (before that, we had a simple greedy ordering + max flow).
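A compact sketch of that estimator plus the hill climbing on the order (illustrative names, toy data in main, and the min-cost-flow assignment step left out):

```cpp
#include <bits/stdc++.h>
using namespace std;

// pref[k] holds the total score of a library's k best books, so evaluating an
// order is O(N); the estimate deliberately ignores that two libraries may
// count the same book.
struct Library { int signupDays, booksPerDay; vector<long long> pref; };

long long estimate(const vector<int>& order, const vector<Library>& libs, int D) {
    long long total = 0;
    int day = 0;
    for (int i : order) {
        day += libs[i].signupDays;                 // signups are sequential
        if (day >= D) break;
        long long canScan = min<long long>(
            (long long)(D - day) * libs[i].booksPerDay,
            (long long)libs[i].pref.size() - 1);
        total += libs[i].pref[canScan];            // assume it gets its best books
    }
    return total;
}

// Hill climbing on the order: swap two random positions, keep the swap iff
// the estimate does not get worse.
vector<int> hillClimb(vector<int> order, const vector<Library>& libs, int D, int iters) {
    mt19937 rng(7);
    long long cur = estimate(order, libs, D);
    for (int it = 0; it < iters; it++) {
        int a = rng() % order.size(), b = rng() % order.size();
        swap(order[a], order[b]);
        long long nxt = estimate(order, libs, D);
        if (nxt >= cur) cur = nxt;
        else swap(order[a], order[b]);             // undo worsening swaps
    }
    return order;
}

int main() {
    // Toy instance with made-up numbers, just to make the sketch runnable.
    auto mk = [](int signup, int perDay, vector<long long> scores) {
        Library l{signup, perDay, {0}};
        sort(scores.rbegin(), scores.rend());
        for (long long s : scores) l.pref.push_back(l.pref.back() + s);
        return l;
    };
    vector<Library> libs = {mk(2, 1, {5, 4, 1}), mk(1, 2, {6, 3}), mk(3, 5, {9, 8, 7, 2})};
    vector<int> order = {0, 1, 2};
    order = hillClimb(order, libs, /*D=*/7, /*iters=*/1000);
    printf("estimated total: %lld\n", estimate(order, libs, 7));
}
```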
Did you run min-cost max-flow on the whole dataset, or did you split it into parts? Running min-cost max-flow on the whole dataset wouldn't finish on anything after the A set for us, and all of our splits gave bad scores.
The whole dataset. We have a fairly efficient implementation. It didn't finish for C, but finished for D, E and F (at least after compiling with -O2 :P).
Would it be possible to share that implementation with me?
And there's no assignment to do for C anyway since all books are scanned immediately.
There is a thing called "zkw min-cost flow" which works really well in practice. I don't know the details, but basically it prunes by considering all paths of the same distance and simultaneously pushing flows of the same value, like in Dinic's. This finishes in about 0.5-1s per iteration.
Probably the number of distinct rewards is small, so one could also compress the nodes with the same scores and reduce the graph. I didn't try it. Maybe I should have.
How do you set up the graph for min-cost max-flow? I created a bipartite graph to maximize the number of books scanned, but I couldn't think of any way to take the values of the books into account.
The second part seems really essential; our implementation gave almost exactly the same scores as just greedily assigning books, since we didn't think about costs.
The values of the books are exactly what the "min-cost" part is needed for.
The flow network consists of the following nodes:
And edges:
Actually, I guess we should have just "min-cost flow" here; maximizing the flow isn't what we care about. But we were in a hurry and it worked pretty well ¯\_(ツ)_/¯.
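For anyone setting this up themselves: a standard min-cost max-flow graph for this assignment step can be built as below (a guess at the general construction, not necessarily their exact graph; addEdge stands in for whatever MCMF implementation you use):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Recorded edge list; addEdge is a stand-in for a real MCMF implementation.
struct Edge { int u, v, cap; long long cost; };
vector<Edge> edges;
void addEdge(int u, int v, int cap, long long cost) { edges.push_back({u, v, cap, cost}); }

int main() {
    // Placeholder toy data; in the real problem this comes from the input
    // and from the already-fixed signup order.
    int L = 3, B = 5;
    vector<int> capacity = {2, 1, 3};           // books library i can still scan
    vector<long long> score = {6, 5, 4, 3, 2};  // score of book j
    vector<vector<int>> has = {{0, 1, 2}, {2, 3}, {1, 3, 4}};

    int source = 0, sink = 1;
    auto libNode  = [&](int i) { return 2 + i; };
    auto bookNode = [&](int j) { return 2 + L + j; };

    for (int i = 0; i < L; i++)
        addEdge(source, libNode(i), capacity[i], 0);   // scanning capacity
    for (int i = 0; i < L; i++)
        for (int j : has[i])
            addEdge(libNode(i), bookNode(j), 1, 0);    // library i holds book j
    for (int j = 0; j < B; j++)
        addEdge(bookNode(j), sink, 1, -score[j]);      // negative cost = book value

    // Run min-cost flow from source to sink; library->book edges that carry
    // flow give the assignment. Because the costs are negative, the min-cost
    // solution maximizes the total score of scanned books.
    printf("built graph with %d edges\n", (int)edges.size());
}
```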
Got 4.8 by changing the comparator to this (cnt is the number of books; it compares cnt * shiping / signup between the two libraries by cross-multiplying to avoid division):

```cpp
bool cmp(lib l1, lib l2) {
    ll val1 = l1.cnt * l1.shiping * l2.signup;
    ll val2 = l1.signup * l2.cnt * l2.shiping;
    // ll val1 = l1.signup, val2 = l2.signup;   // previous version
    return val1 > val2;
}
```
It turned out that there was a bug in our team's local search implementation. After fixing the bug, we got approximately 5.1M on E and minor improvements on C, D and F as well.
A 21
B 5,822,900
C 5,645,747
D 4,837,950
E 4,569,203
F 5,305,796
Total 26,181,617
Team : Int elligence
What were your approaches for D and E?
It was really fun to participate in Hash Code for the first time. I learned a lot. We made a time-lapse video while participating.
Check it out — https://youtu.be/7mzgaE9xiFE
After watching the video, some of our mistakes were quite evident. If you could also point out a few mistakes that we made, that would be really helpful.
Thanks! Looking forward to enhancing my problem-solving skills.
Was anybody contacted regarding the finals?
Has anyone got an e-mail regarding selection for the world finals?
Yeah we got the e-mail yesterday (#25).
Great, congrats! We were actually ranked 60th and were hoping that we'd somehow get through. But I guess we'll have to try next year.
Yes. 31st place.
Standings and list of finalists
My team recently participated in Google's Hash Code 2020. It was my first time, yet we ranked 115th in India and 1127th internationally. Here is my Hash Code 2020 experience: https://www.linkedin.com/pulse/hashcode-2020-experience-jai-kumar-dewani
My LinkedIn post https://www.linkedin.com/posts/jai-dewani_hashcode-hashcode2020-activity-6636693461091352576-ntpE