Search Results

Search found 10330 results on 414 pages for 'real estate'.

Page 16/414 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • better media player than vlc with a real library [closed]

    - by elluca
    Hi, I have been using VLC for years, but my library of movies is growing and I would like better management of it. Sure, I could use iTunes for that, but I really hate QuickTime; its UI is even worse than the one Windows Media Player has. I looked at Winamp, Windows Media Player and Songbird, but none of them has a UI comparable to VLC's. The most important feature: hotkeys. I want to be able to skip through a video using the mouse scroll wheel. Do I have to code one myself? ;) Thanks, Lukas

    Read the article

  • Real-time Image Resize, Cropping and Caching Server Product

    - by Elijah
    I'm investigating what products are out there that will allow you to request images through an HTTP API in arbitrary image sizes. The server would sit behind a CDN but would still need to handle a fair bit of traffic and possibly be load-balanced. I've been tasked with writing such a service, but I wanted to do some due diligence to see what commercial or open source solutions are out there. Google has not been particularly helpful; it may be that I have been searching for the wrong term. Third-party sites and services are out of the question because of corporate policies.
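
    For reference, a minimal sketch of the kind of service described, assuming a hypothetical Flask + Pillow implementation (the names, URL scheme and size cap are all made up for illustration; a production service would add caching, path sanitization and stricter validation before sitting behind a CDN):

        # Minimal sketch of an on-the-fly resize endpoint (hypothetical names).
        # Requires Flask and Pillow. "name" is used unsanitized here; a real
        # service must validate it to prevent path traversal.
        from io import BytesIO

        from flask import Flask, abort, send_file
        from PIL import Image

        app = Flask(__name__)
        IMAGE_ROOT = "/var/images"  # assumed source directory

        @app.route("/img/<name>/<int:width>x<int:height>")
        def resized(name, width, height):
            if width > 4000 or height > 4000:
                abort(400)  # guard against abusive sizes
            try:
                img = Image.open("%s/%s" % (IMAGE_ROOT, name))
            except IOError:
                abort(404)
            img.thumbnail((width, height))      # keeps aspect ratio
            buf = BytesIO()
            img.convert("RGB").save(buf, "JPEG")
            buf.seek(0)
            return send_file(buf, mimetype="image/jpeg")

        if __name__ == "__main__":
            app.run()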

    Read the article

  • Come up with a real-world problem in which only the best solution will do (a problem from Introduction to algorithms) [closed]

    - by Mike
    EDITED (I realized that the question certainly needs a context.) Problem 1.1-5 in Introduction to Algorithms by Thomas Cormen et al. is: "Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is “approximately” the best is good enough." I'm interested in its first statement. As I understand it, we are asked to name a real-world problem where only the exact solution will work, as opposed to a real-world problem where a good-enough solution will be OK. So what is the difference between an exact and a good-enough solution?

    Consider some physics problem, for example the simulation of fluid flow in a permeable medium. To make this simulation happen, some simplifying assumptions have to be made when deriving the mathematical model; otherwise the model becomes hopelessly complex and unsolvable. Virtually every particle in the universe has some influence on the fluid flow, but not all particles are equal: those that form the permeable medium are much more influential than ones located light years away. Then, when the mathematical model needs to be solved, an exact solution can rarely be found unless the model is simple enough (which probably means it isn't close to reality). We take an approximate numerical method and, after hours of coding and days of verification, come up with a program or algorithm which is a solution. And if the model and the algorithm give results close to the real problem to some degree, that is a good-enough solution.

    It is worth noting the difference between an exact solution algorithm and an exact computation result. When considering real-world problems and real-world computing machines, I believe the solutions of all physical problems involving any calculation cannot be exact, because universal physical constants are represented approximately in the computer. All numbers are represented with limited precision, at least limited by the amount of memory available to the computing machine.

    I can imagine plenty of problems where a good-enough, good-to-some-degree solution will work: train scheduling, automated trading, satellite orbit calculation, health care expert systems. In those cases exact solutions can't be derived, due to constraints on computation time, limitations in computer memory, or the nature of the problem. I googled this question and I like what one answer suggests: there are kinds of mathematical problems that need exact solutions (a little note here: because the question is taken from the book Introduction to Algorithms, the term "solution" means an algorithm or a program, which in this case gives an exact answer on each input). But that is probably more of theoretical interest.

    So I would like to narrow the question down to: what are the real-world, practical problems where only the best (exact) solution algorithm or program will do (and not a good-enough solution)? There are problems, like the breaking of cryptographic ciphers, where only an exact solution matters in practice; but then, also in practice, deciphering without knowing the secret should take a reasonable amount of time. Returning to the original question, this is a problem where a good-enough (fast-enough) solution will do: there is no practical need for an instant crack, though it is desired. So the quality "best" can be understood in any sense: exact, fastest, requiring the least memory, generating the minimal possible network traffic, etc. And still I want this question to be theoretical if possible, in the sense that there may be an example of a computer X that has a limited resource R of amount Y, where the best solution to problem P is the one that takes no more than the available Y for inputs of size N*Y; but that is the problem of finding a solution for P on computer X, which is... well, good enough. My final thought is that we live in a world where programming solutions to practical problems are required to be good enough; in rare cases very, very good, but still not the best ones. Isn't it so? :) If it's not, can you provide an example? Or can you name any such unsolved problem of practical interest?
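
    As a tiny illustration of the limited-precision point above, a sketch in Python (standard IEEE-754 doubles; nothing here is specific to the book):

        # Floating-point numbers have limited precision, so even trivial
        # real-number arithmetic is only solved "approximately" by a computer.
        a = 0.1 + 0.2
        print(a)           # 0.30000000000000004
        print(a == 0.3)    # False

        # An exact answer needs a different representation, e.g. rationals:
        from fractions import Fraction
        print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True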

    Read the article

  • Real RAM latency

    - by user32569
    Hi, a very quick question. When I look at RAM timings, I find two different explanations of what CAS latency is. The first states it is the time between the read command being issued by the CPU and the data being sent to the data bus. The second says it is the time between a column in the memory layout being activated and the data appearing. So where is the truth? I mean, when I want to know the total RAM latency: in case 1 it would be just CAS times one clock period; in case 2 it would be CAS plus other things, like the RAS-to-CAS delay, times one clock period. Thanks.
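
    For a concrete feel of the difference between the two readings, a small worked example, assuming a hypothetical DDR3-1600 module with CL = 9 and tRCD = 9 (the figures are illustrative, not from the question):

        # Two readings of "total latency" for an assumed DDR3-1600 module.
        # DDR3-1600 transfers at 1600 MT/s; timings are counted in
        # memory-clock cycles at 800 MHz, i.e. 1.25 ns per cycle.
        clock_ns = 1.25   # one memory-clock period
        CL = 9            # CAS latency, in cycles (assumed)
        tRCD = 9          # RAS-to-CAS delay, in cycles (assumed)

        print("CAS only:   %.2f ns" % (CL * clock_ns))           # 11.25 ns
        print("tRCD + CAS: %.2f ns" % ((tRCD + CL) * clock_ns))  # 22.50 ns

        # Which applies depends on whether the row is already open: a read
        # from an open row pays only CL; opening a new row pays tRCD + CL
        # (and tRP on top of that if another row must be closed first).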

    Read the article

  • Real time audio streaming

    - by Josh K
    I have a remote computer running OS X. I would like to stream the audio from its microphone input over the network so I can listen to it. Primarily I want to do this because I'm out of the office but still need to communicate with people there. I would like to use VLC, but am not fully aware of the options available. I tried SoundFly (as recommended by another answer), but it didn't seem to want to connect. At this point I should note that I'm using a VPN to connect to the remote computer (using Hamachi); I can open up ports etc. fine though, so I should be able to do this. Alright, I found Nicecast, which does exactly what I want, but I would prefer not to have to shell out $40 for it.
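
    (For reference, a sketch of the VLC route. The sout chain syntax is the documented part; the qtsound:// capture input exists in newer OS X builds of VLC, but the exact capture MRL varies by VLC version, and the port is an arbitrary choice:

        vlc -I dummy qtsound:// --sout '#transcode{acodec=vorb,ab=128}:standard{access=http,mux=ogg,dst=:8888}'

    The listening side would then open http://<hamachi-ip>:8888 in any VLC instance.)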

    Read the article

  • Transfer iptables rules to another server (almost) real time

    - by MrShunz
    I'm running 2 cPanel servers with the ConfigServer Security & Firewall plugin. One of the functions of the plugin is to block via iptables (temporarily and/or permanently) IPs which fail various authentications (POP3/IMAP, SMTP, FTP, webmail, mod_security and such). Now I'd like to push those IP blocks to the border router to drop packets as soon as possible (and in doing so protect the other machines on the network). Keep in mind that after N failed logins an IP is blocked for 5 minutes, then re-allowed; if multiple bans occur within an hour, the IP is blocked permanently and has to be unblocked "by hand". So I need a near real-time solution. What I'm looking for is a better way than firing cron jobs on both the cPanel boxes and the border router to:

      1. dump the rules to a file
      2. transfer the file to the border router (via scp/sftp)
      3. load the rules from the file on the border router

    I'm aware that I will need some scripts to parse and modify the rules, as the cPanel boxes have one ethernet interface plus some aliases, while the border router has two ethernet interfaces and some loopbacks. All machines involved run Linux. A sketch of the push approach follows below.

    EDIT, as per @pjmorse's comment: the plugin consists of a bunch of perl and config files. The part I'm interested in is a process which scans logfiles (lfd) and installs iptables rules (and sends an alert email). The thing is, it upgrades quite often (once or twice a week) and is itself 7000 lines of perl, so I'm not comfortable tampering with it.
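
    A minimal sketch of what the cron-less push could look like, assuming key-based SSH from a cPanel box to the border router and a single chain holding the blocks (the chain name, SSH target and polling interval are assumptions, and the interface/loopback rewriting mentioned above is left out):

        #!/usr/bin/env python3
        # Sketch only: mirror the local block chain onto the border router.
        # Assumes the chain already exists on the router; a real version must
        # also rewrite rules for the router's interfaces and loopbacks.
        import hashlib
        import subprocess
        import time

        ROUTER = "root@border-router"   # assumed SSH target (key-based login)
        CHAIN = "DENYIN"                # assumed name of the chain with blocks

        def block_rules():
            # iptables-save emits rules in a replayable "-A CHAIN ..." format
            out = subprocess.check_output(["iptables-save", "-t", "filter"])
            return [l for l in out.decode().splitlines()
                    if l.startswith("-A " + CHAIN + " ")]

        last_digest = None
        while True:
            rules = block_rules()
            digest = hashlib.sha1("\n".join(rules).encode()).hexdigest()
            if digest != last_digest:   # push only when the list changed
                remote = ["iptables -F " + CHAIN]
                remote += ["iptables " + r for r in rules]
                subprocess.run(["ssh", ROUTER, "sh", "-s"],
                               input="\n".join(remote).encode(), check=False)
                last_digest = digest
            time.sleep(10)   # near real time without hammering the router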

    Read the article

  • Android RestTemplate Ok on emulator but fails on real device

    - by Hossein
    I'm using Spring's RestTemplate and it works perfectly on the emulator, but if I run my app on a real device I get HttpMessageNotWritableException ... nested exception is java.net.SocketException: Broken pipe. Here are some lines of my code (keep in mind my app works perfectly on the emulator):

        ............
        LoggerUtil.logToFile(TAG, "url is [" + url + "]");
        LoggerUtil.logToFile(TAG, "NetworkInfo - " + connectivityManager.getActiveNetworkInfo());
        ResponseEntity<T> responseEntity = restTemplate.exchange(url, HttpMethod.POST, requestEntity, clazz);
        .............

    I know my device's network works, because all other applications on the device work, and from the device's browser I can connect to my server, so the server is available. My server is up and running and my device is able to connect to it, so why do I get java.net.SocketException: Broken pipe? Before I call restTemplate.exchange() I log NetworkInfo and it looks OK:

        type: WIFI
        status: CONNECTED/CONNECTED
        isAvailable: true

    Thanks in advance.

    Update: It is really weird. Even if I use HttpURLConnection it works perfectly on the emulator, but on a real device I get 400 Bad Request. Here is my code:

        HttpURLConnection con = null;
        try {
            String url = ....;
            LoggerUtil.logToFile(TAG, "url [" + url + "]");
            con = (HttpURLConnection) new URL(url).openConnection();
            con.setRequestMethod("POST");
            con.setRequestProperty("Connection", "Keep-Alive");
            con.setDoInput(true);
            con.setDoOutput(true);
            con.setUseCaches(false);
            con.connect();
            LoggerUtil.logToFile(TAG, "con.getResponseCode is " + con.getResponseCode());
            LoggerUtil.logToFile(TAG, "con.getResponseMessage is " + con.getResponseMessage());
        } catch (Throwable t) {
            LoggerUtil.logToFile(TAG, "*** failed [" + t + "]");
        }

    In the log file I see:

        con.getResponseCode is 400
        con.getResponseMessage is Bad Request

    Read the article

  • Flash movies in inactive browser tabs pause or don't execute in real time

    - by ZenBlender
    I'm noticing some unexpected behavior. Some time in the last few months, a change in either Firefox, the Flash player, or both has made it so that Flash movies in inactive browser tabs no longer execute in real time. They appear to still execute, but only in bursts, and not in a predictable way.

    This is a problem because I develop a Flash-based (ActionScript 2.0, Flash CS3) multiplayer game that maintains a network connection and allows players to chat, etc. Many of our players complain about Firefox crashing while playing the game. I have noticed it too; not frequently, but it crashes several times a week (Firefox itself crashes; I do not get a message from the Flash player indicating an infinite loop or a problem in my code). My theory is that this new behavior causes crashes when there is a lot of activity in my game, leading to a lot of unhandled network traffic for my game being buffered before Firefox/Flash gives it a chance to execute. Maybe this leads to a buffer overflow or missing packets, and as a result something crashes. At times I will switch back to the tab running my game and discover a display bug which looks as though Flash has simply failed to execute something it was supposed to.

    I would assume this new behavior is on purpose, for example to prevent all the Flash-based advertisements in inactive tabs from executing and therefore killing performance. In a quick test on Chrome (5.0.342.9 beta), this "pausing" of Flash seems to be there as well, but somehow it seems much less of a problem; my users have only complained about Firefox crashing, not other browsers.

    My machine: Windows 7 x64, Firefox 3.6.3, Flash Player 10.1.50.426. My game: triplejack.com

    Any ideas? Ideally I'd like to disable this behavior for my Flash game so it can execute in real time even when in an inactive tab. Thanks for any help!

    Read the article

  • Change the playback rate of a track in real time on Android

    - by android_dev
    Hello, I would like to know if somebody knows a library for changing the playback rate of a track in real time. My idea is to load a track and change its playback rate to half or double. First I tried with MediaPlayer, but it was not possible at all, and then I tried with SoundPool. The problem is that with SoundPool I can't change the rate once the track is loaded. Here is the code I am using (proof of concept):

        float j = 1;

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            Button b = (Button) findViewById(R.id.Button01);
            b.setOnClickListener(new OnClickListener() {
                @Override
                public void onClick(View v) {
                    j = (float) (j + .5);
                }
            });
            AssetFileDescriptor afd;
            try {
                SoundPool sp = new SoundPool(1, AudioManager.STREAM_MUSIC, 0);
                afd = getAssets().openFd("wav/sample.wav");
                int id = sp.load(afd, 1);
                // Note: the rate argument is only read here, when play() is
                // called, so updating j later has no effect on this stream.
                // (If I read the docs right, SoundPool.setRate(streamId, rate),
                // called on the stream id that play() returns, should change
                // the rate of an already-playing stream.)
                sp.play(id, 1, 1, 1, 0, j);
            } catch (IOException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }

    When I press the button, the playback rate should increase, but nothing happens. Any idea how to change the rate in real time? Thanks in advance.

    Read the article

  • Group vs role (Any real difference?)

    - by Ondrej
    Can anyone tell me what the real difference is between a group and a role? I've been trying to figure this out for some time now, and the more information I read, the more I get the sense that this distinction is brought up just to confuse people and that there is no proper difference: each can do the other one's job. I've always used a group to manage users and their access rights. Recently I came across an administration software with a bunch of users. Each user can have modules assigned (the whole system is split into a few parts called modules, i.e. an Administration module, Survey module, Orders module, Customer module). On top of that, each module has a list of functionalities that can be allowed or denied for each user. So let's say a user John Smith can access the Orders module and can edit any order, but has not been given the right to delete any of them. If there were more users with the same competencies, I would use a group to manage that: I would aggregate such users into the same group and assign access rights to the modules and their functions to the group. All users in the same group would have the same access rights. Why call it a group and not a role? I don't know, I just feel it that way; it seems to me that it simply doesn't really matter :] But I would still like to know the real difference. What about you guys? Any suggestions why this should rather be called a role than a group, or the other way round? Thanks to everyone.
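
    As a minimal sketch of why the two blur together mechanically (all names are illustrative): a group primarily collects users and has rights attached, while a role primarily collects rights and has users attached, yet the final access check walks the same indirection either way:

        # Illustrative only: groups and roles both end up as an indirection
        # between users and permissions; the difference is mostly intent.
        roles = {"order_clerk": {"orders.edit"}}          # role  -> permissions
        groups = {"sales": {"john.smith", "jane.doe"}}    # group -> users
        group_roles = {"sales": {"order_clerk"}}          # rights attached to group

        def can(user, permission):
            # user -> group -> role -> permission
            for group, members in groups.items():
                if user in members:
                    for role in group_roles.get(group, ()):
                        if permission in roles.get(role, ()):
                            return True
            return False

        print(can("john.smith", "orders.edit"))    # True
        print(can("john.smith", "orders.delete"))  # False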

    Read the article

  • mod_rewrite: no access to real files and directories

    - by tshabalala
    Hello. I use mod_rewrite/.htaccess for pretty URLs. I forward all requests to my index.php, like this:

        RewriteRule ^/?([a-zA-Z0-9/-]+)/?$ /index.php [NC,L]

    The index.php then handles the requests. I'm also using this condition/rule to eliminate trailing slashes (or rather, rewrite them to the URL without a trailing slash, with a 301 redirect; I'm doing this to avoid duplicate content, and because I like URLs without trailing slashes better):

        RewriteCond %{HTTP_HOST} !^\.localhost$ [NC]
        RewriteRule ^(.+)/$ http://%{HTTP_HOST}/$1 [R=301,L]

    This works well, except that I now get an infinite loop when trying to access a (real) directory: the rewrite rule removes the trailing slash, the server adds it again, and so on. I solved this by setting the DirectorySlash directive to Off:

        DirectorySlash Off

    I don't know how good this solution is; I don't feel too confident about it, to be honest. Anyway, what I'd like to do is completely ignore "real" files and directories, since I don't need them and only use pretty URLs with "virtual" files/directories anyway. This would allow me to avoid the DirectorySlash workaround/hack too. Is this possible? Thanks!
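
    (A note on the building blocks: mod_rewrite can test whether a request maps to a real file or directory with the documented -f and -d checks on REQUEST_FILENAME, so either behavior can be made explicit. A sketch of the common variant, which leaves real files and directories alone; dropping the two negations would do the opposite and force even real paths through index.php:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^/?([a-zA-Z0-9/-]+)/?$ /index.php [NC,L]

    With the conditions in place, only "virtual" URLs are rewritten, which also keeps the trailing-slash redirect from fighting DirectorySlash on real directories.)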

    Read the article

  • What is causing sudden freezing during running real-time program?

    - by Trevor Boyd Smith
    So I run a highly intensive (CPU/GPU) real-time program. During normal execution, suddenly everything freezes for 1-4 seconds. I opened "Process Explorer" in the background to help gain insight and maybe identify something. Here is what the CPU/GPU graphs look like when I align them in time (screenshot not reproduced here): notice the 4 distinct drops in both the CPU and GPU. You can see that usage goes from some positive CPU/GPU level to almost zero, and these drops align with when the real-time program suddenly freezes. How do I find what is causing these sudden drops? NOTE: when you put your mouse over the graph it tells you the time, accurate to the second, at your cursor's position. Maybe this mouse-over feature could be helpful in some way (e.g. what if you had a log of all processes every 100 ms; see the sketch below).

    EDIT: The real-time program is a video game, so I can't watch some sort of instrumentation while the game is running. I need a solution that lets me look back in time somehow to see what was happening when the slowdown occurred.

    EDIT: Re recording data vs. using a real-time monitor: the Windows Performance Recorder is for some reason not recording what I expect it to record, so I switched to using "perfmon" and then opening its "Resource Monitor". Re setting it up so I can view the real-time monitor: in the video game I set it to spectate and put the game in "windowed" mode so that I can view Resource Monitor's real-time display. Now that I can get semi-real-time data (only once per second... how do you get more than once per second?), I started looking at the various real-time readouts. Getting to the cause: I noticed a strong correlation between high disk IO and low CPU usage (which also shows up as in-game freezing). How do you use Resource Monitor to find out who is doing all this offending disk IO?
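
    On the "log of all processes every 100 ms" idea, a rough sketch using the third-party psutil package (the interval, file name and fields are choices made purely for illustration):

        # Rough sketch: append a per-process CPU/disk-IO snapshot to a log
        # roughly every 100 ms, so a freeze can be inspected after the fact.
        # Requires psutil (pip install psutil); run elevated to see all IO.
        import time
        import psutil

        with open("proc_log.tsv", "w") as log:
            # Prime cpu_percent() so later calls return deltas, not zeros.
            for p in psutil.process_iter():
                try:
                    p.cpu_percent(None)
                except psutil.Error:
                    pass
            while True:
                t = time.time()
                for p in psutil.process_iter():
                    try:
                        io = p.io_counters()  # cumulative read/write bytes
                        log.write("%.3f\t%d\t%s\t%.1f\t%d\t%d\n" % (
                            t, p.pid, p.name(), p.cpu_percent(None),
                            io.read_bytes, io.write_bytes))
                    except psutil.Error:
                        continue  # process exited or access denied
                log.flush()
                time.sleep(0.1)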

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions the two kinds of scheduling algorithms being used (real-time and link-share), it only ever mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).

    Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented on Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve; there have never been two independent service curves for either one, as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of another. This is probably the main reason why nobody really seems to understand how HFSC scheduling actually works.

    Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one (the original image is not reproduced here): a root class with 512 kbit/s is split into two classes, A and B, with 256 kbit/s each, and each of those is split into two leaf classes, A1/A2 and B1/B2, with 128 kbit/s each. Here are some questions I cannot answer because the tutorials contradict each other:

    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for any of them), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve of 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both automatically have twice the priority of A2 and B2 (while still only getting 128 kbit/s on average).

    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 each have only 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does it mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals the specified link-share requirement minus the real-time bandwidth already provided to this class)?

    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth plus link-share bandwidth. What is the truth?

    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference whether A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

    5. If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why is real-time not a flat value, and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not the average bandwidth, but their slopes) to be different for real-time and link-share traffic?

    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (classes A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ, and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

    8. One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get; link-share is the bandwidth that this class wants in order to become fully satisfied, but satisfaction cannot be guaranteed (in case there is excess bandwidth, the class might even be offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says); for all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above, the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?

    9. And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope, or each a different one, and how do I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world who really understood HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
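
    (For concreteness, a sketch of how the sample hierarchy above might be expressed with Linux tc, using tc-hfsc's documented rt/ls/ul parameters; the interface name and the choice of which leaves get a real-time curve are assumptions made purely for illustration:

        tc qdisc add dev eth0 root handle 1: hfsc default 12
        # root: 512 kbit/s total, also used as the upper limit
        tc class add dev eth0 parent 1:  classid 1:1  hfsc ls m2 512kbit ul m2 512kbit
        # inner classes A and B: 256 kbit/s link-share each
        tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls m2 256kbit
        tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 256kbit
        # leaves: 128 kbit/s link-share; A1 and B1 with an additional
        # real-time curve, as discussed in question 1
        tc class add dev eth0 parent 1:10 classid 1:11 hfsc rt m2 128kbit ls m2 128kbit
        tc class add dev eth0 parent 1:10 classid 1:12 hfsc ls m2 128kbit
        tc class add dev eth0 parent 1:20 classid 1:21 hfsc rt m2 128kbit ls m2 128kbit
        tc class add dev eth0 parent 1:20 classid 1:22 hfsc ls m2 128kbit

    A two-piece curve like the [256kbit/s 20ms 128kbit/s] of question 1 would be written as "m1 256kbit d 20ms m2 128kbit".)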

    Read the article

  • Check if a file is real or a symbolic link

    - by mattdwen
    Is there a way to tell, using C#, whether a file is real or a symbolic link? I've dug through the MSDN Win32 docs (http://msdn.microsoft.com/en-us/library/aa364232(VS.85).aspx) and can't find anything for checking this. I'm using CreateSymbolicLink from here, and it's working fine.
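
    For what it's worth, a sketch of the usual managed-side check, using the documented FileAttributes.ReparsePoint flag (symbolic links are implemented as reparse points; note that junctions and mount points set the flag too, so this is a necessary but not a sufficient test for a symlink specifically):

        using System;
        using System.IO;

        class SymlinkCheck
        {
            static bool IsReparsePoint(string path)
            {
                // Set for symlinks, but also for junctions and mount points;
                // distinguishing them needs the reparse tag via Win32.
                FileAttributes attrs = File.GetAttributes(path);
                return (attrs & FileAttributes.ReparsePoint) != 0;
            }

            static void Main(string[] args)
            {
                Console.WriteLine(IsReparsePoint(args[0]));
            }
        }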

    Read the article

  • C# 4: Real-World Example of Dynamic Types

    - by routeNpingme
    I think I have my brain halfway wrapped around the Dynamic Types concept in C# 4, but can't for the life of me figure out a scenario where I'd actually want to use it. I'm sure there are many, but I'm just having trouble making the connection as to how I could engineer a solution that is better solved with dynamics as opposed to interfaces, dependency injection, etc. So, what's a real-world application scenario where dynamic type usage is appropriate?
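
    One commonly cited scenario is consuming loosely structured data (deserialized JSON, scripting objects, COM automation) without declaring a class or interface for every shape. A small sketch, assuming .NET 4; ExpandoObject and the dynamic keyword are the documented parts, the order shape is made up:

        using System;
        using System.Dynamic;

        class DynamicDemo
        {
            static void Main()
            {
                // Members are attached at runtime; no class declaration needed.
                dynamic order = new ExpandoObject();
                order.Id = 42;
                order.Customer = "ACME";

                // Member access is resolved at runtime rather than compile
                // time; a misspelled member compiles but fails when executed.
                Console.WriteLine("{0}: {1}", order.Id, order.Customer);
            }
        }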

    Read the article

  • Real World Experience of db4o and/or Eloquera Database

    - by user341127
    I am evaluating two object databases, db4o (http://www.db4o.com) and the Eloquera Database (http://eloquera.com), for an upcoming project, and I have to choose one. My basic requirements are scalability, multi-user support and easy type evolution for RAD. Please share your real-world experience. If you have used both, can you compare the two? Which do you prefer?

    Read the article

  • Real time complex raster image morphing in Flash CS4

    - by cosmorocket
    Is there a way to load an image from some URL or a local folder and then apply some complex morphing to it in real time? For example, I have a vector-animated pseudo-3D paper in my project that is, say, being flipped different ways. I want to place some image inside that paper box and have the image morph according to the box's shape changes, or at least make it look more realistic, not necessarily exactly the same as the paper box. Thanks.

    Read the article
