  • Custom User Avatar

    Indeed, iBits+2 instead of n+2... but I don't know why I even did a malloc instead of just a fixed sizeof(int)*8+2 byte array or something...
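
    For illustration, a minimal sketch of that fixed-buffer idea, assuming the goal is to build the binary representation of an int (the function name and exact layout here are hypothetical, not the original solution): sizeof(int) * CHAR_BIT digit slots, one spare byte, and the terminating '\0'.

    #include <climits>
    #include <cstdio>

    // Hypothetical sketch: a fixed stack buffer instead of a malloc'd one.
    // Room for every bit of an int, one spare byte, and the terminating '\0'.
    void print_binary(unsigned int value) {
        char buf[sizeof(int) * CHAR_BIT + 2];
        char *p = buf + sizeof(buf) - 1;
        *p = '\0';
        do {
            *--p = static_cast<char>('0' + (value & 1u));  // least significant bit first
            value >>= 1;
        } while (value != 0);
        std::printf("%s\n", p);
    }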

  • Custom User Avatar

    Resu can become unnecessarily huge for large values of n.

  • Custom User Avatar

    I guess great minds think similarly ;).

  • Custom User Avatar

    That's really clever, but it's undefined C (that is, side effects in the operands of a binary operator are unsequenced, so modifying the same object in both operands of one expression yields undefined behavior). And the strict definition of "undefined behavior" means the compiler is allowed to do anything it chooses.
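
    A minimal sketch of the rule being referenced, not the original solution (the variable and values are made up for illustration); the same unsequenced-modification rule applies in both C and C++:

    #include <cstdio>

    int main() {
        unsigned x = 0b1010u;

        // Undefined behavior: both operands of | modify x, and | does not
        // sequence its operands, so the two writes are unsequenced.
        // unsigned bad = (x <<= 1) | (x >>= 1);

        // Well-defined equivalent: force an ordering with separate statements.
        unsigned hi = x << 1;
        unsigned lo = x >> 1;
        std::printf("%u\n", hi | lo);
        return 0;
    }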

  • Custom User Avatar

    cuz it looks beautiful!

  • Custom User Avatar

    looks elegant but inefficient indeed.

  • Custom User Avatar

    This comment is hidden because it contains spoiler information about the solution

  • Custom User Avatar

    Your array without -199:

    product is 8.101683605554995e+142
    sum is -832
    q will be 9.737600487445906e+139
    

    Your array without 196:

    product is -8.22568896686451e+142
    sum is -1227
    q will be 6.703902988479633e+139
    

    q is obviously smaller when removing 196. I have no idea why your Excel sheet is giving you the wrong numbers; it's entirely possible that your implementation in Excel is incorrect, or maybe Excel is running into some kind of overflow error. Your values of q are clearly incorrect, though. You are dividing the product of the array by the sum of the array; with very few exceptions (and this array does not fit any of the exception criteria), that will produce a q value greater than 1, yet your q values are very close to 0. You can do the sums and products by hand to see that the q values for this particular array should be very large, so your q values are far from correct.

    After looking at your code, you may want to pay attention to the data type you use to hold the running product, keeping in mind that the product will be very, very large in some cases, like the example above; a sketch of that idea follows.
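
    A rough sketch of the q computation being discussed, assuming q is the product of the remaining elements divided by their sum (the kata's exact definition, including how sign is handled, isn't quoted in this thread, and the function name is made up). The key point is the data type: the product reaches roughly 1e142 for the array above, which overflows every integer type, so it has to be accumulated in a floating-point variable.

    #include <cstddef>
    #include <vector>

    // Hypothetical sketch: q for the array with the element at index `skip` removed.
    double q_without(const std::vector<int>& xs, std::size_t skip) {
        double product = 1.0;   // double, not long long: the product is ~1e142 here
        double sum = 0.0;
        for (std::size_t i = 0; i < xs.size(); ++i) {
            if (i == skip) continue;
            product *= xs[i];
            sum += xs[i];
        }
        return product / sum;
    }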

  • Custom User Avatar

    This comment is hidden because it contains spoiler information about the solution

  • Custom User Avatar

    My passing Python solution says 196, and what you say isn't reasonable after reading the instructions.

  • Custom User Avatar

    For C++, I honestly think the random tests have a bug. For example, for this list:

    180, -85, 196, 88, 65, -22, 8, -86, -37, 85, -70, -82, 87, -54, -99, -163, -75, 107, 56, 191, -141, 22, -179, -199, -47, 134, 69, 49, 93, -16, -193, 120, -182, -110, -151, 104, -57, -156, -124, -148, 18, 76, -137, 80, 192, -36, 23, 152, 176, 6, -144, 124, -184, 126, -42, 112, 47, 68, -105, -3, 13, -104, 84, 39, -103, -154, 19, -100, 27, -122, -177, -32, -119, -79, 75, -20, 67, -72, 
    

    The test harness says the answer should be to remove "196" whereas my code says "-199", which, according to Excel, gives the smaller q value (1.0269E-140 for -199 vs. 1.4917E-140 for 196).

  • Custom User Avatar

    This comment is hidden because it contains spoiler information about the solution

  • Custom User Avatar

    This appears to have NOT been resolved. For n = 63, there is an overflow even with the C unsigned long long type. For that series, the last two numbers overflow:

    GIVEN: ... 4'611'686'018'427'387'904, 3'236'635'425'598'607'970, 12'460'007'462'453'383'777
    
    EXPECTED: ... 4'611'686'018'427'387'904, 9'223'372'036'854'775'808, 18'446'744'073'709'551'615  <-- this last number given by the test harness is, oddly, ULLONG_MAX.
    

    I recommend stopping at n = 62 (this would also correct the bogus range of the "High Performance" tests so that it does not exceed UCHAR_MAX).
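
    As a sanity check on that recommendation, a small sketch showing how far repeated doubling can go before an unsigned long long wraps (the kata's series isn't reproduced here, but the quoted terms include exact powers of two, so a guarded multiplication illustrates the limit; safe_mul is a made-up helper):

    #include <climits>
    #include <cstdio>

    // Returns false instead of letting the multiplication wrap around.
    bool safe_mul(unsigned long long a, unsigned long long b, unsigned long long* out) {
        if (b != 0 && a > ULLONG_MAX / b) return false;  // product would exceed 2^64 - 1
        *out = a * b;
        return true;
    }

    int main() {
        unsigned long long term = 1ULL;
        int n = 0;
        while (safe_mul(term, 2ULL, &term)) ++n;
        // Prints 63: 2^63 is the largest power of two that fits in an
        // unsigned long long; anything past that point wraps.
        std::printf("doubling stops being representable after n = %d\n", n);
        return 0;
    }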

  • Custom User Avatar

    LOL -- I made the same mistake: "i < 0" for size_t will always be false... (what saves you is the i >= board.size() check)
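
    A minimal illustration of that trap and one common fix (the vector contents here are made up for the example):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> board{1, 2, 3};

        // Bug: i is unsigned, so "i >= 0" is always true and "i < 0" is always
        // false; decrementing past 0 wraps to a huge value, and only an explicit
        // i >= board.size() check catches the wraparound.
        // for (std::size_t i = board.size() - 1; i >= 0; --i) { ... }

        // One common fix: loop while i > 0 and index with i - 1.
        for (std::size_t i = board.size(); i > 0; --i) {
            std::printf("%d\n", board[i - 1]);
        }
        return 0;
    }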
