algorithm - How to prove the optimality of this greedy algorithm?


Given n integers, each of which can be increased or decreased at most once, by no more than a given positive integer l. After all operations, numbers that have become equal are considered a single number. The problem is to compute the cardinality of the minimal possible set of distinct integers.

Constraints: n <= 100, l <= 3200, integers in the range [-32000, 32000].

Example: n = 3, l = 10, numbers: 11 21 27

1) increase 11 by 10 => 21 21 27
2) decrease 27 by 6 => 21 21 21

The answer is 1.

The algorithm in C++:

sort(v.begin(), v.end()); // the algorithm tries to include elements in an interval of length 2 * l
int ans = 0;
int first = 0;
for (int i = 1; i < n; ++i) {
    if (v[i] - v[first] > 2 * l) { // if the i-th element can't be included
        ans++;                     // in the current interval,
        first = i;                 // the algorithm starts constructing a new one
    }
}
ans++;                             // count the last interval
printf("%d", ans);

I am trying to understand why this greedy algorithm is optimal. Any help is appreciated.

Reframed, we're trying to cover the set of numbers that appear in the input with as few intervals of cardinality 2*l + 1 as possible. You can imagine that, for each interval [c - l, c + l], the numbers falling inside it are all adjusted to c. For the example above, the single interval [11, 31] = [21 - 10, 21 + 10] covers 11, 21 and 27, so all three can be adjusted to c = 21.
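To make that picture concrete, here is a small sketch (my own code, not from the question: the vector v, the value of l, and the choice of center c = v[first] + l are assumptions) that walks the greedy intervals over the sorted example values and prints the center each value would be adjusted to:

#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    int l = 10;
    std::vector<int> v = {11, 21, 27};   // the example from the question
    std::sort(v.begin(), v.end());
    int n = v.size();

    int first = 0;                       // start of the current greedy interval
    for (int i = 0; i < n; ++i) {
        if (v[i] - v[first] > 2 * l)     // value doesn't fit: open a new interval
            first = i;
        // every value in [v[first], v[first] + 2*l] can be moved to the same
        // center c = v[first] + l, since it is within l of that center
        printf("%d -> %d\n", v[i], v[first] + l);
    }
    return 0;
}

On the example this prints 11 -> 21, 21 -> 21 and 27 -> 21, matching the two operations shown above.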

Given both lists of intervals in sorted order, we can show by induction on k that, considering only the first k intervals of each, the first k intervals chosen by greedy cover at least as much of the input as the first k intervals of an arbitrary solution. The base case k = 0 is trivial. For the inductive step, greedy's next interval starts at the next uncovered input element, so it extends as far to the right as any interval covering that element can; the interval in the arbitrary solution that covers that next uncovered element cannot extend past greedy's, so after k + 1 intervals the arbitrary solution still has no more coverage. Hence greedy never needs more intervals than any other solution.
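As a sanity check of this argument (not part of the original answer), one can compare the greedy count against a dynamic program that considers every way of splitting the sorted values into contiguous runs of span at most 2*l; this sketch assumes that an optimal set of intervals can always be taken to cover contiguous runs of the sorted values, and the function names greedyCount and dpCount are mine:

#include <algorithm>
#include <climits>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Greedy from the question: count intervals of length 2*l over the sorted values.
int greedyCount(std::vector<int> v, int l) {
    std::sort(v.begin(), v.end());
    int n = v.size(), ans = 1, first = 0;
    for (int i = 1; i < n; ++i)
        if (v[i] - v[first] > 2 * l) { ++ans; first = i; }
    return ans;
}

// DP baseline: dp[i] = minimum number of intervals covering the first i sorted
// values, trying every contiguous run v[j..i-1] of span <= 2*l as the last interval.
int dpCount(std::vector<int> v, int l) {
    std::sort(v.begin(), v.end());
    int n = v.size();
    std::vector<int> dp(n + 1, INT_MAX);
    dp[0] = 0;
    for (int i = 1; i <= n; ++i)
        for (int j = 0; j < i; ++j)
            if (v[i - 1] - v[j] <= 2 * l && dp[j] != INT_MAX)
                dp[i] = std::min(dp[i], dp[j] + 1);
    return dp[n];
}

int main() {
    srand(12345);
    for (int t = 0; t < 1000; ++t) {
        int n = 1 + rand() % 8, l = rand() % 20;
        std::vector<int> v(n);
        for (int &x : v) x = rand() % 100 - 50;
        if (greedyCount(v, l) != dpCount(v, l)) { std::puts("mismatch"); return 1; }
    }
    std::puts("greedy matches the DP on all random tests");
    return 0;
}

On random instances this prints the success message, which is consistent with the inductive argument above.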

