Hi Zean,
The best description I've heard about timeslice is "It's the max amount of time a process is *not* gonna get" - i.e. it's the max amount of time *another* process can spend on the CPU while you're in the queue waiting. It's NO guarantee that your process will get that amount - it's just a guarantee that it's the longest amount you could ever not get.
Pete is most certainly correct that in the VAST majority of cases one should leave it at 10 & leave it alone. Never set it lower, of course. But there have been *rare* instances where setting it higher has yielded slight to moderate performance improvements, but GREAT care & research must be taken to clearly document this. You *have* to be able to show that you can reduce the number of context switches - especially forced ones - by giving all processes essentially the option to spend more time on the CPU. But this should be the last resort.
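If you do go down that road, get hard numbers for the context-switch rates first. System-wide, sar -w or vmstat will give you that. For a single suspect process, something like the little C sketch below is one way to take a before/after snapshot - it assumes your flavor of Unix actually fills in the voluntary (ru_nvcsw) and involuntary/forced (ru_nivcsw) fields of getrusage(), so check the man page on your box before you trust the output:

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

static void report(const char *label)
{
    struct rusage ru;

    if (getrusage(RUSAGE_SELF, &ru) != 0) {
        perror("getrusage");
        return;
    }
    /* ru_nvcsw = voluntary switches (process gave up the CPU itself),
       ru_nivcsw = involuntary/forced switches (timesliced off the CPU) */
    printf("%s: voluntary=%ld involuntary=%ld\n",
           label, ru.ru_nvcsw, ru.ru_nivcsw);
}

int main(void)
{
    report("before workload");

    /* stand-in for the real workload: a CPU-bound busy loop */
    volatile double x = 0.0;
    long i;
    for (i = 0; i < 50000000L; i++)
        x += (double)i * 0.5;

    report("after workload");
    return 0;
}

If the forced count dominates, that's the kind of evidence that could justify looking at timeslice; if it doesn't, timeslice isn't your problem.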
I think a better initial approach would be to ensure that there is no resource contention (semaphores, messaging, etc.), no poor programming techniques (stupid loops, inefficient cache usage, etc.) and no non-CPU (memory, disk, cache, etc.) bottlenecks causing inefficient CPU usage that "looks" like a CPU bottleneck - this is *very* common. Then, after all of these are looked at & ruled out, you can also look at lowering the priority on the "hog" through several techniques (nice/renice, setprivgrp, or the rtprio()/rtsched() calls, etc.).
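For the nice/renice route, here's a rough sketch of doing it from C with the standard setpriority() call - the pid comes from the command line and the new nice value of 10 is just a placeholder, and note that rtprio()/rtsched() are separate, privileged HP-UX calls that I'm not showing here:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(int argc, char *argv[])
{
    pid_t hog;
    int new_nice = 10;   /* placeholder - higher value = weaker priority */

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid-of-hog>\n", argv[0]);
        return 1;
    }
    hog = (pid_t)atol(argv[1]);

    /* renice the hog so everyone else gets the CPU ahead of it;
       raising a nice value needs no special privilege,
       lowering it back down does */
    if (setpriority(PRIO_PROCESS, hog, new_nice) != 0) {
        perror("setpriority");
        return 1;
    }

    printf("pid %ld reniced to %d\n", (long)hog, new_nice);
    return 0;
}

Of course plain old renice from the shell does the same thing - the point is just that taming the hog is cheaper and safer than retuning the scheduler for everybody.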
THEN, if none of the above have an impact, you can seriously think about raising timeslice. BUT measure, measure & measure again before AND after any of the above are tried so you'll have a clue about whether they're working or not.
My 2 cents,
Jeff
PERSEVERANCE -- Remember, whatever does not kill you only makes you stronger!