Search Results

Search found 5638 results on 226 pages for 'scheduling algorithm'.

  • How to transfer data between two networks efficiently

    - by Tono Nam
    I would like to transfer files between two places over the internet. Right now I have a VPN and I am able to browse, download and transfer files. So my question is not really how to transfer the files; instead, I would like to use the most efficient approach, because the two places constantly share a lot of data. The reason why I want to get rid of the VPN is that it is too slow, and getting a high upload speed at a residential location is very expensive or impossible, so I would like to use a different approach.

    I was thinking about using programs such as Dropbox (http://www.dropbox.com). The problem with Dropbox is that the free version comes with only 2 GB of storage. I think the deals they offer are OK and I might be willing to pay, but I am concerned with the speed of transferring data: Dropbox uploads the file to their server and then sends it from the server to the other location. I would like it to be even faster.

    Anyway, I was thinking: why not create a program myself? This is the algorithm I had in mind; let me know if it sounds too crazy. (Remember, my goal is to transfer files as fast as possible.) Things I will use in this algorithm: a server on the internet called S (it has fast download and upload speeds; I already pay to host a website and some services there, and I want to take advantage of it), client A at location 1, and client B at location 2.

    So let's say at location 1, 20 large files are created and need to be transferred to location 2. Client A compresses the files with the highest compression ratio possible, then starts sending data via UDP to client B. Because I am using UDP, I will include a sequence number in each packet, and I will have server S help speed things up: for example, every time a packet is lost, server S informs client A that it needs to resend that packet.

    I think this approach will increase the transfer rate, but I do not know if it is possible to start sending data while it is still being compressed, or to start decompressing data before the whole file has been received. Maybe it will be faster to start sending the files right away without compressing. If I knew I would always be sending large text files I would obviously use compression, but I need this as a general algorithm.

    So I guess my question is: could I increase performance by using UDP instead of TCP and by using an extra server to keep track of lost packets? And how should I compress files before sending? Compressing a 1 GB file with the highest compression ratio takes about an hour! I would like to take advantage of that time by sending the file as it is being compressed.
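
    On the compress-while-sending point: gzip-style stream compression can be pipelined, so compression, transmission and decompression all overlap. Below is a minimal sketch using Java's standard library; the host name, port and file name are placeholders, and it uses plain TCP rather than the UDP scheme described above, purely to illustrate the streaming idea:

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.Socket;
        import java.util.zip.GZIPOutputStream;

        public class PipelinedSender {
            public static void main(String[] args) throws IOException {
                // Compressed bytes are written to the socket as they are
                // produced, so sending starts long before compression ends.
                try (Socket socket = new Socket("client-b.example.com", 9000);
                     OutputStream raw = socket.getOutputStream();
                     GZIPOutputStream gzip = new GZIPOutputStream(raw, 64 * 1024);
                     FileInputStream in = new FileInputStream("largefile.bin")) {
                    byte[] buf = new byte[64 * 1024];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        gzip.write(buf, 0, n);
                    }
                    gzip.finish(); // flush the remaining compressed data
                }
            }
        }

    The receiver can wrap its socket input stream in a GZIPInputStream and decompress as data arrives, so no stage has to wait for a whole file.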

    Read the article

  • Scheduling notifications in Android

    - by Kilnr
    Hi, I need to be able to schedule multiple notifications at different times in the future. I tried doing this with an AlarmManager, but that isn't suitable, for the following reason. From AlarmManager.set(): "If there is already an alarm for this Intent scheduled (with the equality of two intents being defined by filterEquals(Intent)), then it will be removed and replaced by this one." Guess what: the intents I send are equal, apart from different extras (and extras don't count for filterEquals). So how can I schedule multiple notifications that will still be shown when my application is killed (which is the whole reason I tried AlarmManager)? Thanks.
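
    One way around filterEquals, sketched below: give every alarm a distinct data URI and a distinct requestCode, so the intents are no longer equal and AlarmManager keeps them all. The receiver class, URI scheme and parameter names here are placeholders, not a known-good recipe:

        import android.app.AlarmManager;
        import android.app.PendingIntent;
        import android.content.Context;
        import android.content.Intent;
        import android.net.Uri;

        public class NotificationScheduler {
            // Schedules one alarm per notification id; NotificationReceiver is
            // a hypothetical BroadcastReceiver that posts the notification.
            public static void schedule(Context context, int notificationId,
                                        long triggerAtMillis, String message) {
                Intent intent = new Intent(context, NotificationReceiver.class);
                // Unique data URI: filterEquals compares data, so these differ.
                intent.setData(Uri.parse("myapp://notification/" + notificationId));
                intent.putExtra("message", message);
                PendingIntent pi = PendingIntent.getBroadcast(
                        context, notificationId, intent, 0);
                AlarmManager am =
                        (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
                am.set(AlarmManager.RTC_WAKEUP, triggerAtMillis, pi);
            }
        }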

    Read the article

  • OpenCart Service Scheduling

    - by Jason Palmer
    I am looking to develop an extension to OpenCart which will allow me to both sell products and sell scheduled services. For the scheduled services, I would like to have the storefront interface change so the customer sees a calendar. It would be Nirvana for the calendar to show only available dates/times, but at the very least they should see a calendar. Then, on the administrative side, there should be a place where there is a list of desired dates/times for scheduled services. I'm wondering if anyone is aware of community-developed extensions which do this, or come close? If not, any recommendations for how to accomplish this properly are welcome. Thanks in advance.

    Read the article

  • Windows Workflow Foundation scheduling

    - by MushRoom
    If I have an ASP.NET app hosting a workflow where a trouble ticket passes through a fairly standard flow: at one point, an escalation needs to occur if a ticket has not been looked at or resolved in 6 months. Now let's say the 6 months have passed, but the IIS machine was rebooted last month and has not been used since. Will the workflow escalate the ticket? When and how does the runtime check conditions on long-running processes? Does it scan through thousands of workflows looking, or does it keep an events table it checks? Does it even handle this kind of out-of-process workflow? It seems like a very common situation for WF to handle, but I can't seem to find any information.

    Read the article

  • Lua task scheduling

    - by Martin
    I've been writing some scripts for a game; the scripts are written in Lua. One of the requirements the game has is that the Update method in your Lua script (which is called every frame) may take no longer than about 2-3 milliseconds to run; if it takes longer, the game just hangs. I solved this problem with coroutines: all I have to do is call Multitasking.RunTask(SomeFunction) and the task runs as a coroutine. I then have to scatter Multitasking.Yield() throughout my code, which checks how long the task has been running for, and if it's over 2 ms it pauses the task and resumes it next frame. This is OK, except that I have to scatter Multitasking.Yield() everywhere throughout my code, and it's a real mess. Ideally, my code would automatically yield when it's been running too long. So: is it possible to take a Lua function as an argument and then execute it line by line (maybe interpreting Lua inside Lua, which I know is possible, but I doubt it's possible if all you have is a function pointer)? That way I could automatically check the runtime and yield if necessary between every single line.

    Read the article

  • Scheduling a recurring alarm/event

    - by martinjd
    I have a class that extends Application. In that class I make a call to AlarmManager and pass in an intent. As scheduled, my EventReceiver class, which extends BroadcastReceiver, processes the call in its onReceive method. How would I fire the intent again from the onReceive method to schedule another event?
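
    A sketch of the usual re-arming pattern, with the interval and request code as placeholder values: inside onReceive, create a fresh PendingIntent for the same receiver and hand it back to AlarmManager, so each firing schedules the next one.

        import android.app.AlarmManager;
        import android.app.PendingIntent;
        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;

        public class EventReceiver extends BroadcastReceiver {
            private static final long INTERVAL_MS = 60 * 60 * 1000; // placeholder: 1 hour

            @Override
            public void onReceive(Context context, Intent intent) {
                // ... process the current event here ...

                // Re-arm: schedule the next broadcast to this same receiver.
                Intent next = new Intent(context, EventReceiver.class);
                PendingIntent pi = PendingIntent.getBroadcast(context, 0, next, 0);
                AlarmManager am =
                        (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
                am.set(AlarmManager.RTC_WAKEUP,
                        System.currentTimeMillis() + INTERVAL_MS, pi);
            }
        }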

    Read the article

  • Thread scheduling in C

    - by MRP
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define NUM_THREADS 4
        #define TCOUNT 5
        #define COUNT_LIMIT 13

        int done = 0;
        int count = 0;
        int thread_ids[4] = {0,1,2,3};
        int thread_runtime[4] = {0,5,4,1};
        pthread_mutex_t count_mutex;
        pthread_cond_t count_threshold_cv;

        void *inc_count(void *t) {
            int i;
            long my_id = (long)t;
            long run_time = thread_runtime[my_id];
            if (my_id == 2 && done == 0) {
                for (i = 0; i < 5; i++) {
                    if (i == 4) { done = 1; }
                    pthread_mutex_lock(&count_mutex);
                    count++;
                    if (count == COUNT_LIMIT) {
                        pthread_cond_signal(&count_threshold_cv);
                        printf("inc_count(): thread %ld, count = %d Threshold reached.\n", my_id, count);
                    }
                    printf("inc_count(): thread %ld, count = %d, unlocking mutex\n", my_id, count);
                    pthread_mutex_unlock(&count_mutex);
                }
            }
            if (my_id == 3 && done == 1) {
                for (i = 0; i < 4; i++) {
                    if (i == 3) { done = 2; }
                    pthread_mutex_lock(&count_mutex);
                    count++;
                    if (count == COUNT_LIMIT) {
                        pthread_cond_signal(&count_threshold_cv);
                        printf("inc_count(): thread %ld, count = %d Threshold reached.\n", my_id, count);
                    }
                    printf("inc_count(): thread %ld, count = %d, unlocking mutex\n", my_id, count);
                    pthread_mutex_unlock(&count_mutex);
                }
            }
            if (my_id == 4 && done == 2) {
                for (i = 0; i < 8; i++) {
                    pthread_mutex_lock(&count_mutex);
                    count++;
                    if (count == COUNT_LIMIT) {
                        pthread_cond_signal(&count_threshold_cv);
                        printf("inc_count(): thread %ld, count = %d Threshold reached.\n", my_id, count);
                    }
                    printf("inc_count(): thread %ld, count = %d, unlocking mutex\n", my_id, count);
                    pthread_mutex_unlock(&count_mutex);
                }
            }
            pthread_exit(NULL);
        }

        void *watch_count(void *t) {
            long my_id = (long)t;
            printf("Starting watch_count(): thread %ld\n", my_id);
            pthread_mutex_lock(&count_mutex);
            if (count < COUNT_LIMIT) {
                pthread_cond_wait(&count_threshold_cv, &count_mutex);
                printf("watch_count(): thread %ld Condition signal received.\n", my_id);
                count += 125;
                printf("watch_count(): thread %ld count now = %d.\n", my_id, count);
            }
            pthread_mutex_unlock(&count_mutex);
            pthread_exit(NULL);
        }

        int main(int argc, char *argv[]) {
            int i, rc;
            long t1 = 1, t2 = 2, t3 = 3, t4 = 4;
            pthread_t threads[4];
            pthread_attr_t attr;
            pthread_mutex_init(&count_mutex, NULL);
            pthread_cond_init(&count_threshold_cv, NULL);
            pthread_attr_init(&attr);
            pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
            pthread_create(&threads[0], &attr, watch_count, (void *)t1);
            pthread_create(&threads[1], &attr, inc_count, (void *)t2);
            pthread_create(&threads[2], &attr, inc_count, (void *)t3);
            pthread_create(&threads[3], &attr, inc_count, (void *)t4);
            for (i = 0; i < NUM_THREADS; i++) {
                pthread_join(threads[i], NULL);
            }
            printf("Main(): Waited on %d threads. Done.\n", NUM_THREADS);
            pthread_attr_destroy(&attr);
            pthread_mutex_destroy(&count_mutex);
            pthread_cond_destroy(&count_threshold_cv);
            pthread_exit(NULL);
        }

    So this code creates 4 threads: thread 1 keeps track of the count value while the other 3 increment it. The run time is the number of times a thread will increment the count value. I have a done flag that allows the first thread to increment the count value first, until its run time is up, so it's like first come, first served. My question is: is there a better way of implementing this? I have read about SCHED_FIFO and SCHED_RR, but I don't know how to implement them in this code, or whether it can be done.

    Read the article

  • Why does this Quicksort work?

    - by IVlad
    I find this Quicksort partitioning approach confusing and wrong, yet it seems to work. I am referring to this pseudocode. Note: they also have a C implementation at the end of the article, but it's very different from their pseudocode, so I don't care about that. I have also written it in C like this, trying to stay true to the pseudocode as much as possible, even if that means doing some weird C stuff:

        #include <stdio.h>

        int partition(int a[], int p, int r) {
            int x = a[p];
            int i = p - 1;
            int j = r + 1;
            while (1) {
                do j = j - 1; while (!(a[j] <= x));
                do i = i + 1; while (!(a[i] >= x));
                if (i < j) {
                    int t = a[i];
                    a[i] = a[j];
                    a[j] = t;
                } else {
                    for (i = 1; i <= a[0]; ++i)
                        printf("%d ", a[i]);
                    printf("- %d\n", j);
                    return j;
                }
            }
        }

        int main() {
            int a[100] = //{8, 6,10,13,15,8,3,2,12};
                {7, 7, 6, 2, 3, 8, 4, 1};
            partition(a, 1, a[0]);
            return 0;
        }

    If you run this, you'll get the following output: 1 6 2 3 4 8 7 - 5. However, this is wrong, isn't it? Clearly a[5] does not have all the values before it lower than it, since a[2] = 6 > a[5] = 4. Not to mention that 7 is supposed to be the pivot (the initial a[p]) and yet its position is both incorrect and lost. The following partition algorithm is taken from Wikipedia:

        int partition2(int a[], int p, int r) {
            int x = a[r];
            int store = p;
            for (int i = p; i < r; ++i) {
                if (a[i] <= x) {
                    int t = a[i];
                    a[i] = a[store];
                    a[store] = t;
                    ++store;
                }
            }
            int t = a[r];
            a[r] = a[store];
            a[store] = t;
            for (int i = 1; i <= a[0]; ++i)
                printf("%d ", a[i]);
            printf("- %d\n", store);
            return store;
        }

    And it produces this output: 1 6 2 3 8 4 7 - 1, which is a correct result in my opinion: the pivot (a[r] = a[7]) has reached its final position. However, if I use the initial partitioning function in the following algorithm:

        void Quicksort(int a[], int p, int r) {
            if (p < r) {
                int q = partition(a, p, r); // initial partitioning function
                Quicksort(a, p, q);
                Quicksort(a, q + 1, r); // I'm pretty sure q + r was a typo, it doesn't work with q + r.
            }
        }

    ... it seems to be a correct sorting algorithm. I tested it out on a lot of random inputs, including all 0-1 arrays of length 20. I have also tried using this partition function for a selection algorithm, in which it failed to produce correct results. It seems to work, and it's even very fast as part of the quicksort algorithm, however. So my questions are: Can anyone post an example on which the algorithm DOESN'T work? If not, why does it work, since the partitioning part seems to be wrong? Is this another partitioning approach that I don't know about?
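
    A hedged note on why this can work: the first function is Hoare's partition scheme. It does not put the pivot in its final place; it only guarantees that, at the returned index q, every element of a[p..q] is <= x and every element of a[q+1..r] is >= x, which is exactly the invariant that Quicksort(a, p, q); Quicksort(a, q+1, r) needs (note the left recursion includes q, unlike Lomuto-style partitioning). That weaker guarantee is also why it fails for selection, which needs the pivot's exact final position. A compact Java rendering of the same scheme, for illustration only:

        import java.util.Arrays;

        public class HoareQuicksort {
            // Hoare partition: after it returns q, all of a[p..q] <= x and
            // all of a[q+1..r] >= x; the pivot itself may sit on either side.
            static int partition(int[] a, int p, int r) {
                int x = a[p];
                int i = p - 1, j = r + 1;
                while (true) {
                    do { j--; } while (a[j] > x);
                    do { i++; } while (a[i] < x);
                    if (i < j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
                    else return j;
                }
            }

            static void quicksort(int[] a, int p, int r) {
                if (p < r) {
                    int q = partition(a, p, r);
                    quicksort(a, p, q);     // the pivot's final position is
                    quicksort(a, q + 1, r); // pinned down by the recursion
                }
            }

            public static void main(String[] args) {
                int[] a = {7, 6, 2, 3, 8, 4, 1};
                quicksort(a, 0, a.length - 1);
                System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 6, 7, 8]
            }
        }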

    Read the article

  • First Come, First Served process scheduling

    - by user253530
    I have 4 processes:
    p1 - burst 5, priority 3
    p2 - burst 8, priority 2
    p3 - burst 12, priority 2
    p4 - burst 6, priority 1
    Assuming that all processes arrive at the scheduler at the same time, what is the average response time and average turnaround time? For FCFS, is it OK to have them in the order p1, p2, p3, p4 in the execution queue?
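
    Since plain FCFS ignores priorities, executing them in arrival order p1, p2, p3, p4 is fine. Taking response time as the time from arrival to first execution and turnaround as arrival to completion (all arrivals at t = 0): the start times are 0, 5, 13, 25 and the completion times 5, 13, 25, 31, so the average response time is 43/4 = 10.75 and the average turnaround time is 74/4 = 18.5. A small sketch that reproduces the arithmetic:

        public class Fcfs {
            public static void main(String[] args) {
                int[] burst = {5, 8, 12, 6}; // p1..p4 in queue order
                int time = 0;
                double response = 0, turnaround = 0;
                for (int b : burst) {
                    response += time;   // under FCFS, response time == start time
                    time += b;
                    turnaround += time; // arrival is 0, so turnaround == completion
                }
                System.out.println("avg response   = " + response / burst.length);   // 10.75
                System.out.println("avg turnaround = " + turnaround / burst.length); // 18.5
            }
        }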

    Read the article

  • Scheduling a Delayed Job on Heroku with a Worker Dyno

    - by user1524775
    I'm currently using Heroku's scheduler to run a script. However, the time that the script takes to run is going to increase from a few milliseconds to a few minutes. I'm looking at using the delayed_job gem to push this process off to a worker dyno. I want to continue to run this script once a day, just offload it to the worker. My current rake task is:

        desc "This task updates some stuff for you."
        task :update_some_stuff => :environment do
          puts "Updating some stuff ..."
          SomeClass.new.process
          puts "... done."
        end

    Once the gem is installed, the migration run, and the worker dyno started, will the script just need to change to:

        desc "This task updates some stuff for you."
        task :update_some_stuff => :environment do
          puts "Updating some stuff ..."
          SomeClass.new.delay.process
          puts "... done."
        end

    With this task still being a rake task scheduled by Heroku's Scheduler, is the only thing that needs to happen here the introduction of the delay method to put this in the worker's queue? Thanks in advance for any help.

    Read the article

  • Scheduling jobs from a web environment on Linux

    - by Anders Feder
    Hi. I am developing an application in PHP on Linux/Apache. I want to be able to schedule PHP jobs (scripts) for execution at some specific time in the future from within the application. I know that many people will recommend cron and at, but first of all I don't need recurrence (cron), and secondly and most importantly, I need the solution to be able to scale. at was not designed with race conditions in mind, and if two users try to add a job at the same time, one or both may fail. It's also important that jobs are executed at their specified time, and not just polled once per minute or so. Can anyone please suggest solutions for this task? Thank you.

    Read the article

  • Android: Scheduling application to start with repeating alarms not working

    - by vikramagain
    I get my broadcast receiver to set a recurring alarm, to fire up a service. Unfortunately this does not result in the service being called repeatedly (based on logcat). I've experimented with different values for the time interval too. Can someone help? (I'm testing through Eclipse on Android 3.2, Motorola Xoom.) Below is the code for the broadcast receiver:

        alarm = (AlarmManager) arg0.getSystemService(Context.ALARM_SERVICE);
        Intent intentUploadService =
                new Intent(arg0, com.vikramdhunta.UploaderService.class);
        Calendar calendar = Calendar.getInstance();
        calendar.setTimeInMillis(System.currentTimeMillis());
        calendar.add(Calendar.SECOND, 3);
        PendingIntent pi = PendingIntent.getBroadcast(arg0, 0, intentUploadService, 0);
        alarm.setRepeating(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(), 5, pi);

    Below is the code for the Service class:

        public UploaderService() {
            super("UploaderService");
            mycounterid = globalcounter++;
        }

        @Override
        protected void onHandleIntent(Intent intent) {
            synchronized (this) {
                try {
                    for (int i = 1; i < 5; i++) {
                        // doesn't do much right now.. but this should appear in logcat
                        Log.i(TAG, "OK " + globalcounter++ + " uploading..."
                                + System.currentTimeMillis());
                    }
                } catch (Exception e) {
                }
            }
        }

        @Override
        public void onCreate() {
            super.onCreate();
            Log.d("TAG", "Service created.");
        }

        @Override
        public IBinder onBind(Intent arg0) {
            return null;
        }

        @Override
        public int onStartCommand(Intent intent, int flags, int startId) {
            Log.i(TAG, "Starting upload service..." + mycounterid);
            return super.onStartCommand(intent, flags, startId);
        }
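
    Two things in the receiver code look like the likely culprits, sketched below as a hedged fix rather than a confirmed one: the intent targets a Service, but the PendingIntent is created with getBroadcast (which expects a BroadcastReceiver), and setRepeating's interval argument is in milliseconds, so 5 means five milliseconds, not five seconds.

        // Hypothetical corrected lines for the receiver above.
        PendingIntent pi = PendingIntent.getService(arg0, 0, intentUploadService, 0);
        alarm.setRepeating(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(),
                5000 /* interval is in milliseconds */, pi);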

    Read the article

  • Millisecond-Accurate Scheduling of Future Events in C++/CLI

    - by A Grad Student at a University
    I need to create a C++/CLI mixed assembly that can schedule future calls into a native DLL with millisecond accuracy. This will, of course, mean setting a timer (what kind?) for a millisecond or three beforehand, then spinning until the moment and calling the native DLL function. Based on what I've read, I would guess that the callback the timer calls will need to be native, to make sure there are no thunks or GC pauses to delay handling the timer callback. Will the entire thread or process need to be native and CLR-free, though, or can this be done just as accurately with #pragma unmanaged or by setting one file of the assembly to compile as native? If so, how? If there is indeed no way to do this in mixed-mode C++/CLI, what would be the easiest way to set up an app/thread (i.e., DLL or EXE?) to handle it and to get the data back and forth between the native and managed threads/apps?

    Read the article

  • Multithreading scheduling in Java

    - by vichi
        class A implements Runnable {
            B b = new B();
            public void run() {
                while (true) {
                    System.out.println("H1" + Thread.currentThread().getName());
                }
            }
        }

        public class Test {
            public static void main(String[] str) {
                A a1 = new A();
                // A a2 = new A();
                Thread t1 = new Thread(a1, "Vichi");
                Thread t2 = new Thread(a1, "Vishu");
                t1.start();
                t2.start();
            }
        }

    What will be the answer: 1) only one of them will get the chance to execute, or 2) both will get a chance, in an arbitrary manner? Please suggest the likely answer, with an explanation.
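
    Both threads are started and runnable, so the scheduler interleaves them in an arbitrary manner (option 2); sharing one Runnable instance does not serialize them, only locking shared state would. A bounded variation (with renamed classes so it stands alone) that makes the interleaving visible and terminates:

        class Printer implements Runnable {
            public void run() {
                for (int i = 0; i < 5; i++) { // bounded so the program ends
                    System.out.println("H1 " + Thread.currentThread().getName());
                }
            }
        }

        public class InterleaveDemo {
            public static void main(String[] args) throws InterruptedException {
                Printer shared = new Printer();
                Thread t1 = new Thread(shared, "Vichi");
                Thread t2 = new Thread(shared, "Vishu");
                t1.start();
                t2.start();
                t1.join();
                t2.join(); // output mixes both names in no fixed order
            }
        }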

    Read the article

  • Scheduling a Visual Studio load test using PowerShell is giving me a BSOD

    - by user952342
    I have a Visual Studio load test which I want to run every hour so that I can start to collect some data. To do this, I thought it would be best to make a little PowerShell script and put a command like this inside:

        Invoke-Expression -command "& '$env:VS100COMNTOOLS..\IDE\mstest.exe' /testcontainer:"C:\Users\benb\Documents\Visual Studio 2010\Projects\BBPerformanceTest\bin\Debug\HomePageOnly.loadtest""

    That command works fine, but sometimes when it's run I get a blue screen of death. However, when I run my load test through the Visual Studio GUI, I never get a BSOD. Two questions: is it possible to avoid this BSOD? Is there another way I can schedule my load test? Thanks

    Read the article

  • PHP site scheduling Java execution?

    - by obfuscation
    I'm trying to get started on combining my (slightly limited) PHP experience with my (better) Java experience, in a project where I need to allow uploads of Java source files to the server, which the server then compiles with javac. Then, at a set time (e.g. specified on upload), I need to run the compiled program once on the server, which will generate some database info for the PHP site to display. To describe my current programming abilities: I have made many desktop Java programs and am confident in 'pure' Java, but so far have only undertaken a couple of PHP projects (including using the CodeIgniter framework). My motivation for using PHP as the frontend is that I know it is very fast and lightweight, and I will be able to display the results I need very easily with it (a simple DB readout). Ideally, the technology used should be something I can develop on localhost (e.g. WAMP, Tomcat etc.). Is there any advice you could give on what technology I should consider to bridge this gap, and what resources could help in using that technology? I have looked at a few, but have struggled to find documentation helping to achieve what I need.

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions the two kinds of scheduling algorithms being used (real-time and link-share), it always mentions only ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).

    Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented for Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve; there never were two independent service curves for either one, as you currently find in BSD and Linux. Even worse, some version of ALTQ seems to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below. Here are some questions I cannot answer because the tutorials contradict each other:

    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve of 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256 kbit/s 20 ms 128 kbit/s], both have twice the priority of A2 and B2 automatically (while still only getting 128 kbit/s on average).

    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals the specified link-share requirement minus the real-time bandwidth already provided to this class)?

    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth plus link-share bandwidth. What is the truth?

    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share? And if so, what difference?

    5. If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why is real-time not a flat value, and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not the average bandwidth, but their slopes) to be different for real-time and link-share traffic?

    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (classes A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed; others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

    8. One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get; link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed (in case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says); and for all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?

    9. And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed), and maybe an upper limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope, or each a different one, and how would I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world who really understood HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)

    Read the article

  • Sorting a list of numbers with modified cost

    - by David
    First, this was one of the four problems we had to solve in a project last year, and I couldn't find a suitable algorithm, so we handed in a brute-force solution.

    Problem: The numbers are in a list that is not sorted and supports only one type of operation. The operation is defined as follows: given a position i and a position j, the operation moves the number at position i to position j without altering the relative order of the other numbers. If i > j, the positions of the numbers between positions j and i - 1 increase by 1; otherwise, if i < j, the positions of the numbers between positions i + 1 and j decrease by 1. This operation requires i steps to find the number to move and j steps to locate the position to move it to, so the number of steps required to move a number from position i to position j is i + j. We need to design an algorithm that, given a list of numbers, determines the optimal (in terms of cost) sequence of moves to rearrange the sequence.

    Attempts: Part of our investigation was around NP-completeness. We made it a decision problem and tried to find a suitable transformation to any of the problems listed in Garey and Johnson's book Computers and Intractability, with no results. There is also no direct reference (from our point of view) to this kind of variation in Donald E. Knuth's The Art of Computer Programming, Vol. 3: Sorting and Searching. We also analyzed algorithms for sorting linked lists, but none of them gives a good idea for finding the optimal sequence of movements. Note that the idea is not to find an algorithm that sorts the sequence, but one that tells me the optimal sequence of movements, in terms of cost, that sorts the sequence. You can make a copy and sort it to analyze the final position of the elements if you want; in fact, we may assume that the list contains the numbers from 1 to n, so we know where we want to put each number, and we are just concerned with minimizing the total cost of the steps. We tested several greedy approaches, but all of them failed. Divide-and-conquer sorting algorithms can't be used, because they swap portions of the list with no cost, and our dynamic programming approaches had to consider too many cases. The brute-force recursive algorithm takes all the possible movements from i to j, and then again all the possible movements of the rest of the elements; at the end, it returns the sequence with the least total cost that sorted the list. As you can imagine, the cost of this algorithm is brutal and makes it impracticable for more than 8 elements.

    Our observations: n movements are not necessarily cheaper than n + 1 movements (unlike swaps in arrays, which are O(1)). There are basically two ways of moving one element from position i to j: one is to move it directly, and the other is to move other elements around i in such a way that it reaches position j. At most you make n - 1 movements (the untouched element reaches its position alone). If a sequence of movements is optimal, then it didn't move the same element twice.
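
    To make the move operation concrete, here is a tiny sketch (the class and method names are illustrative, not from the original problem) that applies one move to a 1-based list and returns the i + j cost defined above:

        import java.util.ArrayList;
        import java.util.List;

        public class MoveCost {
            // Moves the element at 1-based position i to position j and returns
            // the cost i + j; the in-between elements shift exactly as the
            // problem statement describes, via remove/insert semantics.
            static int move(List<Integer> a, int i, int j) {
                Integer v = a.remove(i - 1);
                a.add(j - 1, v);
                return i + j;
            }

            public static void main(String[] args) {
                List<Integer> a = new ArrayList<>(List.of(3, 1, 2));
                int cost = move(a, 1, 3); // move the 3 to the end
                System.out.println(a + " cost=" + cost); // [1, 2, 3] cost=4
            }
        }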

    Read the article

  • How to analyze the efficiency of this algorithm Part 2

    - by Leonardo Lopez
    I found an error in the way I explained this question before, so here it goes again:

        FUNCTION SEEK(A, X)
        1. FOUND = FALSE
        2. K = 1
        3. WHILE (NOT FOUND) AND (K < N)
           a. IF (A[K] = X) THEN
              1. FOUND = TRUE
           b. ELSE
              1. K = K + 1
        4. RETURN

    Analyzing this algorithm (pseudocode), I can count the number of steps it takes to finish and analyze its efficiency in theta notation, T(n): a linear algorithm. OK. The following code, on the other hand, depends on the formulas inside the loop in order to finish. The deal is that there is no variable N in the code, therefore the efficiency of this algorithm will always be the same, since we're assigning the value 1 to both the A and B variables:

        1. A = 1
        2. B = 1
        3. UNTIL (B > 100)
           a. B = 2A - 2
           b. A = A + 3

    Now I believe this algorithm performs in constant time, always. But how can I use algebra to find out how many steps it takes to finish?
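
    The algebra can be worked out directly: before iteration k the loop has set A = 1 + 3(k - 1), so iteration k computes B = 2A - 2 = 6(k - 1). B first exceeds 100 when 6(k - 1) = 102, i.e. on iteration k = 18, so the loop always runs exactly 18 times regardless of any input: constant time, Theta(1). A quick sketch confirming the count (reading UNTIL as a post-test loop):

        public class StepCount {
            public static void main(String[] args) {
                int a = 1, b = 1, steps = 0;
                do {
                    b = 2 * a - 2;
                    a = a + 3;
                    steps++;
                } while (b <= 100); // UNTIL (B > 100)
                System.out.println(steps); // prints 18
            }
        }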

    Read the article

  • How to recalculate all-pairs shortest paths online when nodes are removed?

    - by Pavel Shved
    The latest news about the underground bombing made me curious about the following problem. Assume we have a weighted undirected graph whose nodes are sometimes removed. The problem is to quickly recalculate the shortest paths between all pairs of nodes after such removals. With a simple modification of the Floyd-Warshall algorithm, we can calculate the shortest paths between all pairs. These paths may be stored in a table, where shortest[i][j] contains the index of the next node on the shortest path between i and j (or a NULL value if there's no path). The algorithm requires O(n³) time to build the table, and each query shortest(i,j) takes O(1). Unfortunately, we would have to re-run this algorithm after each removal. The other alternative is to run a graph search on each query. This way, each removal takes zero time to update an auxiliary structure (because there is none), but each query takes O(E) time. What algorithm can be used to "balance" the query and update times for the all-pairs shortest-paths problem when nodes of the graph are being removed?
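
    For reference, a sketch of the baseline being described: Floyd-Warshall extended with a next-hop table, so each shortest(i,j) query just follows next[i][j] in O(1) per hop. The matrix encoding (INF for missing edges, -1 for "no path") is an assumption of this sketch:

        public class AllPairs {
            static final int INF = Integer.MAX_VALUE / 2; // halved so additions cannot overflow

            // dist[i][j]: edge weight i->j, INF if absent, 0 on the diagonal.
            // On return, dist holds shortest distances and next[i][j] is the
            // next node after i on a shortest path to j (-1 if unreachable).
            static int[][] floydWarshall(int[][] dist) {
                int n = dist.length;
                int[][] next = new int[n][n];
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        next[i][j] = dist[i][j] < INF ? j : -1;
                for (int k = 0; k < n; k++)
                    for (int i = 0; i < n; i++)
                        for (int j = 0; j < n; j++)
                            if (dist[i][k] + dist[k][j] < dist[i][j]) {
                                dist[i][j] = dist[i][k] + dist[k][j];
                                next[i][j] = next[i][k]; // go toward k first
                            }
                return next;
            }
        }

    Rebuilding this after every removal is the O(n³) cost the question wants to avoid; removing a node only invalidates paths that pass through it, which is the property an incremental scheme would exploit.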

    Read the article
