
Programming Challenge: SPOJ LOPOV

Date: 2013-12-05

Source: Internet

I just finished a SPOJ problem today: LOPOV, one of the problems from COCI 2013 Contest #1. My running time of 0.08s is currently in first place, quite far ahead of the second-place time of 0.29s (held by the person ranked #2 in the world), so I'm feeling rather pleased with myself. I'd like to discuss with everyone whether it can be made even faster.

For everyone's convenience, here is a translation of the key points of the problem:

There are N jewels; the mass and value of jewel i are Mi and Vi respectively. There are also K bags; bag i has capacity Ci, but each bag can hold at most one jewel.
Question: what is the maximum total value of the jewels that can be placed into bags?

Constraints:
1 <= N, K <= 300,000
1 <= Mi, Vi <= 1,000,000
1 <= Ci <= 1,000,000,000
All values are positive integers.

I'd also like to know what algorithm you would use to solve this problem. I couldn't think of an existing algorithm that fits, so I came up with my own method (which may well have been thought of/invented/discovered by someone else long ago); its speed seems quite good.
Update: the current second-place time is 0.13s, still some way behind mine, but the gap has narrowed a lot.

[ Last edited by fitcat07 at 2013-10-28 12:10 PM ]

Author: fitcat07   Posted: 2013-12-05

Nobody interested?
Perhaps I'll start by describing a solution method.

The simplest but slowest method:
0. Sort the N jewels by value in descending order; initialise answer = 0, i = 0
1. Among the N jewels, the most valuable remaining one is jewel i
2. Then, among the K bags, find the one that is the tightest fit for this jewel. Suppose it is bag m, i.e. Cm >= Mi, and Cn >= Cm for every n with 0 <= n < K, Cn >= Mi, n != m
3. If such a bag is found, add the jewel's value Vi to answer and remove bag m; K = K - 1
4. i = i + 1
5. Repeat from step 1 until i = N

The time complexity of the above method is O(N*K). Since N and K can each be as large as 300,000, that is about 300,000 * 300,000 = 9x10^10 operations. The time limit is 1s; even the fastest Intel CPU today cannot finish in time, so the result is bound to be TLE (Time Limit Exceeded).
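As an illustration, here is a minimal C++ sketch of this brute-force greedy (my own code with made-up names, not fitcat07's submission):

Code:
#include <algorithm>
#include <vector>

// Brute-force greedy sketch: jewels sorted by value (descending); for each
// jewel, linearly scan all bags for the tightest one that still fits.
long long bruteForce(std::vector<std::pair<int,int>> jewels,   // {value, mass}
                     std::vector<int> bags) {                  // capacities
    std::sort(jewels.rbegin(), jewels.rend());                 // descending by value
    std::vector<bool> used(bags.size(), false);
    long long answer = 0;
    for (const auto& j : jewels) {
        int best = -1;
        for (std::size_t m = 0; m < bags.size(); ++m)          // O(K) scan per jewel
            if (!used[m] && bags[m] >= j.second &&
                (best == -1 || bags[m] < bags[best]))
                best = static_cast<int>(m);
        if (best != -1) { used[best] = true; answer += j.first; }
    }
    return answer;
}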

So how can it be sped up?

Author: fitcat07   Posted: 2013-12-05

Quote: Originally posted by fitcat07 at 2013-10-27 08:36 PM
Nobody interested?
Perhaps I'll start by describing a solution method.

The simplest but slowest method:
0. Sort the N jewels by value in descending order; initialise answer = 0, i = 0
1. Among the N jewels, the most valuable remaining one is jewel i
2. Then ...
Interested, but no time.

Do you really need O(n*k)? Just sort and it becomes O(n*log n). Step 0's descending sort alone is already Θ(n*log n).

Author: code4food   Posted: 2013-12-05

Quote: Originally posted by code4food at 2013-10-28 04:00 PM
Interested, but no time.
Still so busy...
Quote: Do you really need O(n*k)? Just sort and it becomes O(n*log n). Step 0's descending sort alone is already Θ(n*log n).
Step 0 is indeed O(N*log(N)), but finding the best-fitting bag for each jewel takes up to K probes. The total is therefore O(N*log(N) + N*K), or more precisely O(N*(log(N) + K)). Since in the worst case K >> log(N), I wrote O(N*K).

If the K bags are sorted by capacity in ascending order (O(K*log(K))), each search takes log(K) probes instead of K, which is much faster. The total time becomes O(N*log(N) + K*log(K) + N*log(K)). The problem is that once the best-fitting bag is found, it has to be removed. Is there a fast way to remove it while keeping each subsequent search at log(K)?

That is actually what my upcoming post was going to be about.

Author: fitcat07   Posted: 2013-12-05

Quote: Originally posted by fitcat07 at 2013-10-28 12:36 PM
Nobody interested?
Perhaps I'll start by describing a solution method.

The simplest but slowest method:
0. Sort the N jewels by value in descending order; initialise answer = 0, i = 0
1. Among the N jewels, the most valuable remaining one is jewel i
2. Then ...
Hi fitcat07,

I thought about this a little on the bus. If the steps are as described in your previous post, i.e. just find the jewel with the largest value and put it into a bag such that Cm >= Mi, then how do you handle the following case:

input
3 1
4 5
6 6
7 7
10

In this case, the jewel with the largest value should be skipped, and the largest total value is 5+6 = 11, not 7. (Please tell me if I have interpreted your meaning incorrectly.)

[ Last edited by assembly.jc at 2013-10-28 05:39 PM ]

Author: assembly.jc   Posted: 2013-12-05

N = 3, K = 1.

Step 0:
Sort the N pieces of jewellery: (7, 7), (6, 6), (5, 4) <- the first and second components of each ordered pair are the value and the mass, respectively
answer = 0
i = 0
Step 1:
The most valuable remaining jewel is jewel i, which is (7, 7). So Mi = 7, Vi = 7.
Step 2:
We find a bag Cm with size 10 which is the best fit. (m = 0).
Step 3:
Since we found a best-fit bag m in step 2, answer = answer + Vi = 7.
Take away this bag. Now, K = K - 1 = 0.
Step 4:
i = i + 1 = 1.
Step 5:
We repeat steps 1 to 5. Since K = 0, no fitting bag can be found any more, so step 3 will not be executed.
Hence, answer = 7.

The above steps can be improved: when K = 0, terminate immediately.

Author: fitcat07   Posted: 2013-12-05

Quote: Originally posted by fitcat07 at 2013-10-28 12:43 AM

Still so busy...
Quote: Step 0 is indeed O(N*log(N)), but finding the best-fitting bag for each jewel takes up to K probes.
When I said sort, I meant sort the bags as well. Use a balanced tree (e.g. std::set). Building the tree takes O(k * log k); each lookup (e.g. std::set::lower_bound()) and update (std::set::erase()) is O(log k), so matching every bag with a jewel takes O(n * log k). With k <= n, adding the earlier Θ(n * log n), the whole thing is still Θ(n * log n). http://www.cplusplus.com/reference/set/set/
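As an illustration of this balanced-tree greedy, here is a minimal sketch using std::multiset (my own code with made-up names, not code4food's; as noted later in the thread, a multiset is needed because capacities can repeat):

Code:
#include <algorithm>
#include <set>
#include <vector>

// Greedy with a balanced tree: jewels in descending value order; for each
// jewel, take the smallest remaining bag whose capacity fits its mass.
long long solveWithMultiset(std::vector<std::pair<int,int>> jewels,   // {value, mass}
                            const std::vector<int>& capacities) {
    std::sort(jewels.rbegin(), jewels.rend());                        // descending by value
    std::multiset<int> bags(capacities.begin(), capacities.end());    // duplicates allowed
    long long answer = 0;
    for (const auto& j : jewels) {
        auto it = bags.lower_bound(j.second);   // smallest capacity >= mass, O(log k)
        if (it != bags.end()) {
            answer += j.first;
            bags.erase(it);                     // remove exactly this one bag, O(log k)
        }
    }
    return answer;
}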

Author: code4food   Posted: 2013-12-05

Quote: Originally posted by fitcat07 at 2013-10-28 06:00 PM
N = 3, K = 1.

Step 0:
Sort the N pieces of jewellery: (7, 7), (6, 6), (5, 4)
Hi fitcat07,

Thanks for the detailed explanation. I missed the sentence "but at most one jewellery piece in each bag", so yesterday I thought a bag could carry 2 pieces of jewellery (and hence that the answer should be 5+6 = 11, not 7). Now that I have read your post #1 and the problem description on SPOJ again, I finally understand why I was confused by your solution.

Otherwise, if a bag were allowed to carry any number of pieces of jewellery, this problem would become very complex.

[ Last edited by assembly.jc at 2013-10-29 10:56 AM ]

Author: assembly.jc   Posted: 2013-12-05

Quote: Originally posted by code4food at 2013-10-29 01:17 AM
When I said sort, I meant sort the bags as well. Use a balanced tree (e.g. std::set). Building the tree takes O(k * log k); each lookup (e.g. std::set::lower_bound()) and update (std::set::erase()) is O(log k), so matching every bag with a jewel ...
I thought of this method at first glance too, but I assumed std::set (more precisely, std::multiset) would not be fast enough, so I never tried it. Instead I figured std::priority_queue would be faster, since it does not need to keep all elements sorted. The method is:
0. Sort the values of the N jewels in descending order and the K bags by capacity in ascending order; i = 0, j = 0, pq = {}, answer = 0
1. If Mi <= Cj, push Vi into pq, i = i + 1
2. Otherwise, if pq is not empty, answer = answer + pq.top(), pq.pop()
2a. j = j + 1
3. Repeat from step 1 until i >= N or j >= K
4. While j < K and pq is not empty: answer = answer + pq.top(), pq.pop(), j = j + 1; repeat step 4

Time complexity:
0. O(N*log(N) + K*log(K))
1 - 4. O(N*log(N) + N*log(N)) (pushing elements into pq: N*log(N); popping elements from pq: N*log(N)). The total time complexity is O(3N*log(N) + K*log(K)), yet to my surprise the result was TLE. Because even this did not pass, I did not try std::multiset again, since the two have similar time complexity.
Only today did I discover a bug: step 2a had mistakenly been folded into step 2. After fixing it, the std::priority_queue version runs in 0.11s!

Thanks to your prompt, I tried std::multiset today and, to my surprise, it got AC with a time of about 0.30s. So this problem really is that easy (I had been overcomplicating it); had I known, I would not have spent so much time on other methods. On the other hand, that is also how I came up with what is currently the fastest method.

Author: fitcat07   Posted: 2013-12-05

Quote: Originally posted by assembly.jc at 2013-10-29 10:52 AM
Otherwise, if a bag were allowed to carry any number of pieces of jewellery, this problem would become very complex.
Yes, it will become a more complicated version of the normal Knapsack problem (NP-complete):
http://en.wikipedia.org/wiki/Knapsack_problem

Author: fitcat07   Posted: 2013-12-05

Quote: Originally posted by fitcat07 at 2013-10-29 01:30 AM
I thought of this method at first glance too, but I assumed std::set (more precisely, std::multiset) would not be fast enough, so I never tried it. Instead I figured std::priority_queue would be faster, since it does not need to keep all elements sorted. The method is:
0. Sort the values of the N jewels in descending ...
Haha, I overlooked the issue of duplicates. You're right, it should be a multiset. k is O(n), so as an upper bound I simply write it as O(n * log n).

Author: code4food   Posted: 2013-12-05

Quote: Originally posted by fitcat07 at 2013-10-29 05:33 PM
Yes, it will become a more complicated version of the normal Knapsack problem (NP-complete):
http://en.wikipedia.org/wiki/Knapsack_problem
Hi fitcat07,

Actually, many problems are related to combinations, such as Mark Six, probability, and so on. Combinations have elegant properties in mathematics, but not in computer science.

However, anyway, let us return to the jewellery problem. The problem can now be transformed into: how to find the bag (with the least capacity Ci) that can hold the jewel with the largest value Vi. Okay, haha, let me guess your best (fastest) solution. Firstly, let us split the problem into 2 parts:

1. find the piece of jewellery with the largest value
2. find the bag with the least capacity that can hold that jewel

For part 1, the fastest way seems to be to sort the values of the jewellery, or to use a heap structure; either method has complexity O(n lg n).
For part 2, instead of sorting the bag capacities Ci, we may apply the concept of hashing.

For example, assume 1 <= Ci <= 100 (actually it is 1 <= Ci <= 1,000,000,000); we create an array that holds 10 linked lists, where each node of a linked list holds the capacity Ci of one bag. It may look like:
Code:
struct _linked_list;
typedef struct _linked_list {
    int mass;
    struct _linked_list *next;   /* pointer to the next node in this bucket */
} LinkedList;

LinkedList *list[10] = {NULL};   /* 10 bucket heads */
and then, once a piece of jewellery with value Vi is received, calculate i = Vi / 10, use i to locate the suitable linked list, and finally find the bag to hold the jewel. Using a hash, we may find the bag in constant time.

[ Last edited by assembly.jc at 2013-10-31 02:44 PM ]

Author: assembly.jc   Posted: 2013-12-05

Quote: Originally posted by assembly.jc at 2013-10-31 02:43 PM
For part 1, the fastest way seems to be to sort the values of the jewellery, or to use a heap structure; either method has complexity O(n lg n).
Using a heap doesn't help here, as we have to go through all the pieces of jewellery one by one in descending order. I tried it, and the result showed that using a heap performed worse.
Quote: .... Using a hash, we may find the bag in constant time.
If the sizes of the bags are uniformly distributed, your theory is correct. However, no such constraint is given in the problem description. The worst case occurs when all the bag sizes fall into a single bucket; the time complexity then becomes O(n^2).

Author: fitcat07   Posted: 2013-12-05

Quote: Originally posted by fitcat07 at 2013-11-1 11:36 AM: Using a heap doesn't help here, as we have to go through all the pieces of jewellery one by one in descending order. I tried it, and the result showed that using a heap performed worse. If the sizes of the bags are ...
Hi fitcat07,

Before you said that, I expected a heap might be faster... but anyway, sorting offers more choices, so it is a good chance for me to test the efficiency of a variety of sorting methods, haha
Quote: If the sizes of the bags are uniformly distributed, your theory is correct. However, no such constraint is given in the problem description. The worst case occurs when all the bag sizes fall into a single bucket.
and so, to avoid all the bags falling into a single bucket, we may make each bucket as small as possible. Maybe my previous example was too simple to present the idea clearly, so let us come back to the LOPOV problem. Since the bag capacity Ci can be huge (1 <= Ci <= 100,000,000), we may use an 8-dimensional void* pointer array to simulate the hash; it may look like

void *ciHash[10][10][10][10][10][10][10][10] = {NULL};

and so the size of each bucket of the hash is reduced to 10, and such an array occupies 40 Mb of memory. However, I am not sure whether C supports 8-dimensional arrays; if not, we may first allocate a large block of memory (say 40 Mb) and calculate the hash code for each Ci myself.

[ Last edited by assembly.jc at 2013-11-2 10:44 AM ]

Author: assembly.jc   Posted: 2013-12-05

Quote: Originally posted by assembly.jc at 2013-11-2 10:38 AM
Before you said that, I expected a heap might be faster... but anyway, sorting offers more choices, so it is a good chance for me to test the efficiency of a variety of sorting methods, haha
This is one of the reasons why SPOJ (and other online judges) is good.
Quote: to avoid all the bags falling into a single bucket, we may make each bucket as small as possible. Maybe my previous example was too simple to present the idea clearly, so let us come back to the LOPOV problem. Since the bag capacity Ci can be huge (1 <= Ci <= 100,000,000), we may use an 8-dimensional void* pointer array to simulate the hash; it may look like

void *ciHash[10][10][10][10][10][10][10][10] = {NULL};

and so the size of each bucket of the hash is reduced to 10, and such an array occupies 40 Mb of memory. However, I am not sure whether C supports 8-dimensional arrays; if not, we may first allocate a large block of memory (say 40 Mb) and calculate the hash code for each Ci myself.
I don't know the limit on array dimensions in C/C++. I believe 8 is supported; if not, using pointers can get around the limit.
No matter how small you make the buckets (unless the bucket size is 1, which degenerates into a 1-D array), the worst-case scenario does not change: all bags can still fall into a single bucket.
Having said that, being theoretically unhelpful does not mean it performs poorly in practice. Just like quicksort: its worst-case time complexity is O(N^2), but it is fast enough in normal use.
If the test cases were large and/or well constructed, they would include data for such a scenario. In practice, they may not. So try to implement and submit it, and see...

BTW, an array of 10^8 elements of pointer type (assuming 32-bit pointers) has a size of 4*10^8 bytes = 400MB.

Author: fitcat07   Posted: 2013-12-05

I think I may have misinterpreted your method. You define a pointer for each value in the range 1 - 100,000,000; each pointer points to the corresponding bag capacity. When capacity x is required, the pointer ciHash[x] (simplified to 1-D instead of 8-D) is selected. Have I understood correctly?

Author: fitcat07   Posted: 2013-12-05

Quote: Originally posted by fitcat07 at 2013-11-2 04:33 PM
I think I may have misinterpreted your method. You define a pointer for each value in the range 1 - 100,000,000; each pointer points to the corresponding bag capacity. When capacity x is required, the poin ...
Hi fitcat07,

Yes, you got the point. I read the problem again and found that the memory limit is 1536MB, so, as you said, we can simply allocate 400MB of memory for an array such as

int ciHash[100000000] = { 0 };

and when the capacity x of a bag is provided, perform the following operation:

ciHash[x]++;

However, this method does not work: when a jewel with value Vi needs to find a bag using the hash, and ciHash[Vi] == 0, we cannot quickly find a capacity y such that

y > Vi and ciHash[y] > 0

Maybe I have been thinking in the wrong direction from the beginning.

Anyway, besides the above method, I also had a wrong idea about how to find the bag (please refer to part 2 in my previous post #12): once the values of all the jewels have been sorted and the jewel with the largest value (say Vi) is given, we do not need to find the bag with the least capacity (say Ci, with Ci >= Vi of course) to hold it; any bag with capacity >= Vi can be chosen without affecting the final answer. And so the bag capacities may not need to be sorted.

[ Last edited by assembly.jc at 2013-11-4 03:12 AM ]

Author: assembly.jc   Posted: 2013-12-05

Quote: Originally posted by assembly.jc at 2013-11-4 03:05 AM
...Anyway, besides the above method, I also had a wrong idea about how to find the bag (please refer to part 2 in my previous post #12): once the values of all the jewels have been sorted and the jewel with the largest value (say Vi) is given, we do not need to find the bag with the least capacity (say Ci, with Ci >= Vi of course) to hold it; any bag with capacity >= Vi can be chosen without affecting the final answer. And so the bag capacities may not need to be sorted.
I agree with you that this method may not be fast.
However, I'm afraid you're wrong that sorting the bags is not needed.
Consider this test case:
Code:
2 2
1 99
2 65
10
1
If we place the first jewel with mass 1 into the bag of size 10, the second jewel with mass 2 cannot be placed into the remaining bag of size 1. The answer would then be 99, which is incorrect (the optimum is 99 + 65 = 164, putting the mass-1 jewel into the size-1 bag and the mass-2 jewel into the size-10 bag).

Author: fitcat07   Posted: 2013-12-05

Quote: Originally posted by fitcat07 at 2013-11-4 11:53 AM

However, I'm afraid you're wrong that sorting the bags is not needed.

Consider this test case:
2 2
1 99
2 65
10
1
If we place the f ...
Hi fitcat07,

Yes, you're right; I wrongly assumed that a jewel with a larger value also has a larger mass.

Otherwise, I am confused by the method in #9, quoted below:
Quote: 0. Sort the values of the N jewels in descending order and the K bags by capacity in ascending order; i = 0, j = 0, pq = {}, answer = 0
1. If Mi <= Cj, push Vi into pq, i = i + 1
2. Otherwise, if pq is not empty, answer = answer + pq.top(), pq.pop()
2a. j = j + 1
3. Repeat from step 1 until i >= N or j >= K
4. While j < K and pq is not empty: answer = answer + pq.top(), pq.pop(), j = j + 1; repeat step 4
Does it iterate over Mi in descending order of jewel value? In step 3, does it repeat steps 1 to 2a, rather than just step 1, until i >= N or j >= K?
How do you handle the following case?

Input:
2 1
5 5
2 2
4

N = 2, K = 1
0. Sort the jewels by value in descending order: [(M:5, V:5), (M:2, V:2)],
Sort the bag capacities in ascending order: [(C:4)]
i=0, j=0, pq={}, answer = 0

1. M0 = 5 > C0 = 4 => do nothing
2. pq is empty => do nothing
2a. j = j + 1 => j == 1
3. j == K => go to step 4
4. j == K => end

Finally, the answer is 0, but the answer should be 2??

[ Last edited by assembly.jc at 2013-11-5 12:00 PM ]

Author: assembly.jc   Posted: 2013-12-05

Quote: Originally posted by assembly.jc at 2013-11-5 11:53 AM
Does it iterate over Mi in descending order of jewel value? In step 3, does it repeat steps 1 to 2a, rather than just step 1, until i >= N or j >= K?
Yes, it should repeat steps 1 to 2a. I should have used pseudocode instead of numbered steps; it would have been clearer.
Quote: How do you handle the following case? ...
1. M0 = 5 > C0 = 4 => do nothing
2. pq is empty => do nothing
2a. j = j + 1 => j == 1
3. j == K => go to step 4
4. j == K => end

Finally, the answer is 0, but the answer should be 2??
Oops, step 0 was incorrect. The jewels should be sorted in "ascending" order of "mass". Sorry.

Author: fitcat07   Posted: 2013-12-05

I've just noticed that someone called bb, from Hong Kong, has achieved a time of 0.09s, only 0.01s behind my record.
If you are this bb, could you share your method so we can compare notes?

Author: fitcat07   Posted: 2013-12-05

Quote: Originally posted by fitcat07 at 2013-11-5 04:23 PM
Yes, it should repeat steps 1 to 2a. I should have used pseudocode instead of numbered steps; it would have been clearer. Oops, step 0 was incorrect. The jewels should be sorted in "ascending" order of "mass". ...
Hi fitcat07,

Last week, based on your method (post #9), I tested the efficiency of a variety of sorting algorithms, including insertion sort, quick sort, stooge sort, heap sort and radix sort (+ counting sort). The results are as follows:

insertion sort: TLE
quick sort (randomized, iterative quick sort from http://www.geeksforgeeks.org/iterative-quick-sort/): TLE
stooge sort: TLE
heap sort: 0.26s
radix sort: 0.22s

Only heap sort and radix sort passed the test! I think radix sort is the fastest sorting algorithm here, since its complexity is O(n). However, as you said in post #9, using std::sort and std::priority_queue in C++ gives a running time of 0.11s!! Now the questions are:

1. If radix sort is faster than std::sort, is std::priority_queue then MUCH faster than my own priority_queue implementation!? (0.22s vs 0.11s; I used a binary heap to implement the priority queue.)

2. If the answer to question 1 is yes, then if I improve the efficiency of my priority_queue implementation, can the running time be faster than 0.11s, or even close to your record of 0.08s!? (I got some information from this article ftp://ftp.cs.utexas.edu/pub/techreports/honor_theses/cs-07-34-roche.pdf and, according to it, some other heap implementations may be faster than a binary heap.)

[ Last edited by assembly.jc at 2013-11-20 03:54 AM ]

Author: assembly.jc   Posted: 2013-12-05

Quote: Originally posted by assembly.jc at 2013-11-20 03:49 AM
1. If radix sort is faster than std::sort, is std::priority_queue then MUCH faster than my own priority_queue implementation!? (0.22s vs 0.11s; I used a binary heap to implement the priority queue.)
The easiest way to answer your question is to use std::sort in your code, submit it, and see the result.

The time complexity of radix sort is O(dn), where d is the number of digits. Since max(n) is 300,000, it is not a very large number, and std::sort, which is highly optimised and has time complexity O(n*log(n)), may outperform radix sort. Here is my analysis. For radix sort, O(dn) = O(9n). Take the C implementation of radix sort on the wiki as an example: the while loop runs 9 times, and inside it there are 3 for loops, each running n times, so the total is O(9*3n) = O(27n). Now consider std::sort: O(n*log(n)) = O(log(300,000)*n) = O(19n). Clearly, std::sort is faster than radix sort (using the wiki's C implementation) in this case.

In fact, the wiki's implementation can be improved by swapping the pointers of arrays a and b; the for loop that copies elements from b back to a is then no longer necessary, and the time complexity becomes O(18n). In theory, that is slightly faster than std::sort; however, due to the non-trivial calculations in radix sort, I believe std::sort will still be faster.
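As an illustration of the pointer-swapping variant described above, here is a minimal base-10 LSD radix sort sketch that swaps the two buffers after each pass instead of copying back (my own sketch, not the wiki implementation):

Code:
#include <algorithm>
#include <vector>

// LSD radix sort, base 10: one counting pass per digit; after each pass the
// roles of 'a' and 'b' are swapped instead of copying b back into a.
void radixSort(std::vector<unsigned>& a) {
    std::vector<unsigned> b(a.size());
    unsigned maxVal = a.empty() ? 0 : *std::max_element(a.begin(), a.end());
    for (unsigned long long exp = 1; maxVal / exp > 0; exp *= 10) {
        std::size_t count[10] = {0};
        for (unsigned x : a) count[(x / exp) % 10]++;           // histogram of this digit
        for (int d = 1; d < 10; ++d) count[d] += count[d - 1];  // prefix sums
        for (std::size_t i = a.size(); i-- > 0; )               // stable scatter from the back
            b[--count[(a[i] / exp) % 10]] = a[i];
        a.swap(b);                                              // swap buffers instead of copying
    }
}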
Quote: 2. If the answer to question 1 is yes, then if I improve the efficiency of my priority_queue implementation, can the running time be faster than 0.11s, or even close to your record of 0.08s!? (I got some information from this article ftp://ftp.cs.utexas.edu/pub/techreports/honor_theses/cs-07-34-roche.pdf and, according to it, some other heap implementations may be faster than a binary heap.)
Since only 2 parts are of concern (the sorting in step 0 and the priority queue in the other steps), switching to a faster implementation of the sort and/or the priority queue will certainly shorten the running time.
My own experience is that a binary heap with pre-allocated memory runs faster than std::priority_queue, which must allocate memory dynamically.
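A minimal sketch of such a pre-allocated binary max-heap (my own illustration with made-up names; fitcat07's actual implementation is not shown in the thread):

Code:
#include <cstddef>

// Fixed-capacity binary max-heap backed by a static array: no allocation
// happens on push or pop, unlike a std::priority_queue that grows on demand.
const std::size_t MAX_HEAP = 300000;          // problem bound on N
static unsigned heapBuf[MAX_HEAP];
static std::size_t heapSize = 0;

void heapPush(unsigned v) {
    std::size_t i = heapSize++;
    while (i > 0 && heapBuf[(i - 1) / 2] < v) {      // sift the hole up
        heapBuf[i] = heapBuf[(i - 1) / 2];
        i = (i - 1) / 2;
    }
    heapBuf[i] = v;
}

unsigned heapPop() {                                 // caller must ensure heapSize > 0
    unsigned top = heapBuf[0];
    unsigned last = heapBuf[--heapSize];
    std::size_t i = 0;
    for (;;) {                                       // sift the last element down
        std::size_t c = 2 * i + 1;
        if (c >= heapSize) break;
        if (c + 1 < heapSize && heapBuf[c + 1] > heapBuf[c]) ++c;
        if (heapBuf[c] <= last) break;
        heapBuf[i] = heapBuf[c];
        i = c;
    }
    heapBuf[i] = last;
    return top;
}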

Author: fitcat07   Posted: 2013-12-05

It turns out that bb is a group of PhD students at a local university. Because I wanted to know whether my method was original, I asked a university professor, and he got his students to try the problem. Using a disjoint-set structure with path compression, they achieved roughly the same time as mine (without having seen my method beforehand).
http://en.wikipedia.org/wiki/Disjoint-set_data_structure
Afterwards, having looked at my method, they concluded that it is essentially the same method.
Although I already knew about disjoint sets, it had never occurred to me to use them to solve this problem. So I had "reinvented the wheel", haha.
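For reference, a minimal sketch of how a disjoint set with path compression might be applied here (my own reconstruction of the idea, not the students' or fitcat07's actual code): sort the bags by capacity, process the jewels in descending order of value, and use the disjoint set to jump from the first large-enough bag to the next one that is still free.

Code:
#include <algorithm>
#include <vector>

// parent[i] points to the next candidate bag index >= i that is still free;
// parent[K] is a sentinel meaning "no bag left".
static std::vector<int> parent;

int findFree(int i) {                       // find with path compression (path halving)
    while (parent[i] != i) {
        parent[i] = parent[parent[i]];
        i = parent[i];
    }
    return i;
}

long long solveWithDsu(std::vector<std::pair<int,int>> jewels,   // {value, mass}
                       std::vector<int> caps) {
    std::sort(caps.begin(), caps.end());                         // bags ascending by capacity
    std::sort(jewels.rbegin(), jewels.rend());                   // jewels descending by value
    int K = static_cast<int>(caps.size());
    parent.resize(K + 1);
    for (int i = 0; i <= K; ++i) parent[i] = i;
    long long answer = 0;
    for (const auto& j : jewels) {
        // first bag with capacity >= mass, then the first still-free bag from there
        int start = static_cast<int>(std::lower_bound(caps.begin(), caps.end(), j.second) - caps.begin());
        int bag = findFree(start);
        if (bag < K) {
            answer += j.first;
            parent[bag] = bag + 1;          // mark as used: redirect to the next index
        }
    }
    return answer;
}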

Author: fitcat07   Posted: 2013-12-05

Quote: Originally posted by fitcat07 at 2013-11-20 11:26 AM
The easiest way to answer your question is to use std::sort in your code, submit it, and see the result. The time complexity of radix sort is O(dn), where d is the number of digits. Since max(n) is ...
Hi fitcat07,

Thanks for your suggestion and kind reply. I shall try std::sort later. (But I have to learn C++ first, and I have no idea what a template is.)

Yes, as you said, the radix sort had already been improved by swapping pointers instead of copying all elements between arrays before submitting to SPOJ. Also, you're right, I made a mistake in my previous post. I referred back to my textbook (Introduction to Algorithms, MIT Press): the time complexity of radix sort is O(d(2n+k)), i.e. O(d(2n + 10)). But since any Ci > 1,000,000 can be reduced to 1,000,001 before sorting (because the maximum Mi is 1,000,000), the time complexity is O(7(2n + 10)), i.e. O(14n), for sorting the bag capacities and the jewels, right?

Otherwise, may I ask a stupid question? Does pre-allocation of memory mean using an array instead of dynamically allocating memory with malloc()-like functions? If so, I have actually already been using pre-allocated memory.

Author: assembly.jc   Posted: 2013-12-05

If you don't use classes and templates, C++ is very similar to C (especially C99). To use std::sort, you don't need to write templates yourself. See the link to an example below:
http://www.cplusplus.com/reference/algorithm/sort/
The example uses a vector, which is very much like an array except that it can grow dynamically. Yes, it requires templates, but it is easy to use, as you can see from the example. In fact, std::sort can be applied to normal C arrays as well, e.g. std::sort(A, A+n), where n is the number of elements of A. If you need to sort in descending order, you can supply a third argument, which is a binary function; again, this is included in the example.
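For instance, a small illustration of std::sort on a plain C array, ascending by default and descending via a binary predicate (hypothetical values, not related to the problem's input):

Code:
#include <algorithm>
#include <cstdio>
#include <functional>

int main() {
    int A[] = {5, 2, 9, 1, 7};
    const int n = sizeof(A) / sizeof(A[0]);

    std::sort(A, A + n);                        // ascending by default
    std::sort(A, A + n, std::greater<int>());   // descending via a binary predicate

    for (int i = 0; i < n; ++i) printf("%d ", A[i]);  // prints: 9 7 5 2 1
    printf("\n");
    return 0;
}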

In this problem, you can change every Ci > 1,000,000 to 1,000,001 before sorting. You're right, it helps to shorten the sorting time.

Since we know the maximum size, we can allocate a sufficiently large block of memory to store all the elements of the heap. The allocated memory can be a fixed-size array, or it can be allocated using malloc() or similar functions. The point is that no memory is allocated each time an element is pushed into the heap, and the storage never has to be resized to accommodate growth. Since std::priority_queue does not support memory pre-allocation, its performance may not be as good as that of a heap with pre-allocated memory.

Author: fitcat07   Posted: 2013-12-05

Quote: Originally posted by fitcat07 at 2013-11-21 11:53 AM
If you don't use classes and templates, C++ is very similar to C (especially C99). To use std::sort, you don't need to write templates yourself. See the link to an example below:
http://www.cplusplus.com/reference/algorithm/sort/
Hi fitcat07,

Yup, it is quite easy and convenient; although I don't know what it does under the hood, it works! I have submitted the C++ version and here are the results:

std::sort + my priority_queue implementation: 0.24s (submitted 3 times, same running time each time)
std::sort + std::priority_queue: 0.24 - 0.26s (also submitted 3 times)

The results match our time-complexity analysis: radix sort is only a little faster than std::sort, and since my priority_queue implementation uses pre-allocated memory, its running time seems more stable than that of std::priority_queue.

However, the question still cannot be answered: even with std::sort + std::priority_queue, why is your solution so much faster, say 0.11s!? Do you use std::vector as well? Is it because I use unsigned long long to calculate the answer? Here is the code; would you mind pointing out the difference between it and your faster solution?

The core part: (I just modified your solution from post #9 a little, but that does not seem to be the key factor in why it is slower than your original solution)
Code:
#include <cstdio>
#include <vector>
#include <queue>
#include <algorithm>

typedef struct _jewellery {
    unsigned int m;
    unsigned int v;
} Jewellery;

bool std_comp_bag(unsigned int i, unsigned int j) {
    return (i < j);
}

bool std_comp_mass(Jewellery *i, Jewellery *j) {
    return (i->m < j->m);
}

void lopov(std::vector<Jewellery*> &jv, std::vector<unsigned int> &cv) {
    unsigned int i, j, n, k;
    unsigned long long ans;
    std::priority_queue<unsigned int> pq;

    std::sort(cv.begin(), cv.end(), std_comp_bag);   /* bags ascending by capacity */
    std::sort(jv.begin(), jv.end(), std_comp_mass);  /* jewels ascending by mass */

    i = 0; j = 0; ans = 0;
    n = jv.size();
    k = cv.size() - 1;

    do {
        /* push every jewel that fits into the current bag, then take the most valuable */
        while (i < n && jv[i]->m <= cv[j]) pq.push(jv[i++]->v);
        if (!pq.empty()) { ans += pq.top(); pq.pop(); }
    } while (j++ < k);

    printf("%llu\n", ans);
}
The IO part:
Code:
#define MAX_DATA_LEN 300001
#define MAX_LINE_LEN 1024

/* read one unsigned integer from a line of stdin */
void read_num(unsigned int *a) {
    char s[MAX_LINE_LEN];
    char *p;

    fgets(s, MAX_LINE_LEN, stdin);
    *a = 0;
    p = s;
    while (*p != '\n') *a = (*a * 10) + (*p++ - '0');
}

/* read two space-separated unsigned integers from a line of stdin */
void read_pair(unsigned int *a, unsigned int *b) {
    char s[MAX_LINE_LEN];
    char *p;

    fgets(s, MAX_LINE_LEN, stdin);
    *a = 0;
    p = s;
    while (*p != ' ') *a = (*a * 10) + (*p++ - '0');

    *b = 0;
    p++;
    while (*p != '\n') *b = (*b * 10) + (*p++ - '0');
}

int main() {
    unsigned int i, c, n, k;
    Jewellery j[MAX_DATA_LEN];

    std::vector<unsigned int> cv;
    std::vector<Jewellery*> jv;

    read_pair(&n, &k);
    for (i = 0; i < n; i++) {
        read_pair(&j[i].m, &j[i].v);
        jv.push_back(&j[i]);
    }

    for (i = 0; i < k; i++) {
        read_num(&c);
        cv.push_back(c);
    }

    lopov(jv, cv);
    return (0);
}
[ Last edited by assembly.jc at 2013-11-22 01:35 PM ]

Author: assembly.jc   Posted: 2013-12-05

Glad to know my analysis is close to the facts.

The difference in time may be due to the following issues, in order of significance:
1. I use fread() to read most (or all, depending on the input size) of the input. My buffer size is 64KB and the maximum input size is about 6.9MB, so the total number of calls to read the input is at most about 100; yours is up to 300,000. Calling system functions, especially for input/output, is time-consuming. (A sketch of this kind of buffered reading follows this list.)
2. Since we know the maximum size, we should call std::vector::reserve() to allocate sufficient memory and avoid dynamic reallocation as the vector grows. Fyi, GCC reserves storage for 16 elements by default, IIRC. I use static arrays to store both the bags and the jewels; a static array is usually faster than std::vector.
3. std::sort sorts elements in ascending order by default, so you don't need to pass in std_comp_bag. I don't know whether this affects performance; since std::sort is highly optimised, I believe omitting the third argument will help, though the difference may not be significant.
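A minimal sketch of the kind of fread()-based buffered reading described in point 1 (my own illustration with made-up names; fitcat07's actual reader is not shown in the thread):

Code:
#include <cstdio>

// Read all of stdin through a 64KB buffer and hand out one character at a
// time; numbers are then parsed directly from the characters.
static char buf[65536];
static std::size_t bufLen = 0, bufPos = 0;

static int nextChar() {
    if (bufPos == bufLen) {                       // refill the buffer when exhausted
        bufLen = fread(buf, 1, sizeof(buf), stdin);
        bufPos = 0;
        if (bufLen == 0) return -1;               // end of input
    }
    return buf[bufPos++];
}

static unsigned int readUInt() {                  // skip non-digits, then parse digits
    int c = nextChar();
    while (c != -1 && (c < '0' || c > '9')) c = nextChar();
    unsigned int x = 0;
    while (c >= '0' && c <= '9') { x = x * 10 + (c - '0'); c = nextChar(); }
    return x;
}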

Author: fitcat07   Posted: 2013-12-05
