Search Results

Search found 27819 results on 1113 pages for 'linux intel'.


  • On Linux/Unix, does .tar.gz versus .zip matter?

    - by rwallace
    Cross-platform programs are sometimes distributed as .tar.gz for the Unix version and .zip for the Windows version. This makes sense when the contents of each must be different. If, however, the contents are going to be the same, it would be simpler to have just one download. Windows prefers .zip because that's the format it can handle out of the box. Does it matter on Unix? Today I tried unzipping a file on Ubuntu Linux and it worked fine; is there any problem with this on any current Unix-like operating system, or is it okay to just provide a .zip file across the board?
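
    A hedged packaging sketch (file names are hypothetical): both archives can be produced from the same release tree, so shipping .zip across the board mostly costs you the Unix metadata tar preserves; a .zip built on Windows carries no permission bits, so executables may need a chmod after unzip.

      # Build both archives from one source tree; any modern Unix can unpack either.
      tar -czf myapp-1.0.tar.gz myapp-1.0/
      zip -qry myapp-1.0.zip    myapp-1.0/    # -y stores symlinks as symlinks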

  • EC2 Amazon Linux AMI MySQL CPU @ 62% When Idle?

    - by Jeff
    I am running MySQL on an Amazon Linux AMI. There is nothing connected to it: no connections, and no other applications running that use MySQL. It is completely idle, yet top reports that mysqld is using 62% of the CPU. Why is this happening, and how do I fix it?

      Cpu(s):  0.2%us,  0.2%sy,  0.0%ni, 97.8%id,  0.0%wa,  0.0%hi,  0.0%si,  1.7%st
      Mem:   1738504k total,   390708k used,  1347796k free,    56888k buffers
      Swap:   917500k total,        0k used,   917500k free,   229804k cached

        PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
       2959 mysql  20   0  466m  39m 5244 S 62.2  2.3  4:00.67 mysqld
          1 root   20   0 19252 1504 1212 S  0.0  0.1  0:00.20 init
          2 root   20   0     0    0    0 S  0.0  0.0  0:00.00 kthreadd

    There are no connections...

      mysql> show processlist;
      +----+------+-----------+------+---------+------+-------+------------------+
      | Id | User | Host      | db   | Command | Time | State | Info             |
      +----+------+-----------+------+---------+------+-------+------------------+
      |  5 | root | localhost | NULL | Query   |    0 | NULL  | show processlist |
      +----+------+-----------+------+---------+------+-------+------------------+
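
    A few hedged diagnostics for a mysqld that spins with no clients (the commands are standard tools; the leap-second suggestion is an assumption based on when this symptom was common on EC2, not a confirmed diagnosis):

      # Syscall summary for ~10 seconds; a storm of futex calls is the signature
      # of the mid-2012 leap-second kernel bug that made idle daemons spin.
      sudo timeout 10 strace -c -f -p 2959

      # If it is the leap-second bug, stepping the clock clears it without a restart:
      sudo date -s "$(date)"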

  • Map keycode 133+54 to shift+control+c in Linux for Mac keyboard?

    - by Edward_178118
    On Linux Mint 13 (MATE), using the terminal program Terminator with a Mac keyboard. I want the Command key to behave for copy/paste as it does on the Mac. I have been able to make it treat the Command key as a Control key, and this works fine for most apps, except in the terminal program. Watching xev, when I press Command+c I get keycodes 133 + 54. This reaches the terminal app as a ^C, which acts like a ^C in a shell. The default for copy in Terminator (which can be changed) is Shift+Control+C. Is there a way to map the keycode combination 133 + 54 to Shift+Control+C, but only for the Terminator app? Thanks!
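
    One hedged approach (xbindkeys and xdotool are real tools, but this exact binding is an untested sketch): grab Super+c globally, and re-emit Ctrl+Shift+C only when the active window is Terminator, so other apps keep their plain Control mapping.

      # ~/.xbindkeysrc -- run `xbindkeys` after editing
      "sh -c 'xdotool getactivewindow getwindowname | grep -qi terminator && xdotool key --clearmodifiers ctrl+shift+c'"
          Mod4 + c

    A paste binding would mirror this with ctrl+shift+v on Mod4 + v.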

  • How can I use target mode in Linux with USB?

    - by dash17291
    Kernel 3.5 introduces: "This release includes a driver for using an IEEE 1394 connection as a SCSI transport. This enables exposing SCSI devices to other nodes on the FireWire bus, for example hard disk drives. It's similar functionality to FireWire Target Disk Mode on many Apple computers. This release also adds a usb-gadget driver that does the same with USB. The driver supports two USB protocols: BBB or BOT (Bulk Only Transport) and UAS (USB Attached SCSI). BOT is advertised on alternative interface 0 (primary) and UAS on alternative interface 1. Both protocols can work on USB 2.0 and USB 3.0. UAS utilizes the USB 3.0 feature called streams support." http://kernelnewbies.org/Linux_3.5 I have an Arch Linux machine with kernel 3.5.3-1 and want to try out this feature.

  • Why is there no 64-bit Linux Firefox build?

    - by Legooolas
    It seems that I have to build my own 64-bit Firefox for Linux, as Mozilla won't support it until Firefox 4. Why is this? It looks to me as though it works fine, although without some of the speed improvements to the JavaScript engine which the 32-bit version gets. (Edit: Yes, I could run the 32-bit version, but I'm trying to keep my system clear of 32-bit cruft and libraries etc., and all the plug-ins worked fine in 3.0.11 64-bit unofficial builds.) Update: No longer relevant, as Mozilla provides 64-bit builds; they just don't show them on the download pages of mozilla.org, only on the FTP site, as mentioned in one of the answers below.

  • Override template shell on linux system in Active Directory domain?

    - by benizi
    Is there an easy way to override the Samba "template shell = /bin/bash" setting on a per-user basis? This is for Linux systems joined to an Active Directory domain. Some users want /bin/bash. Others including myself want /bin/zsh. Is there some AD attribute I can set? Anything I've found via googling seems hackish at best (writing a script to replace /bin/sh -- maintenance hassle). A similar serverfault question Override LDAP shell seems OpenLDAP-oriented (but if someone knows how to get it working with AD, please say so).
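
    A hedged sketch of the usual winbind answer (it assumes your AD schema carries the RFC2307/Services-for-Unix attributes, and YOURDOM is a placeholder): populate each user's loginShell attribute in AD and tell winbind to read it, with template shell kept only as the fallback.

      # /etc/samba/smb.conf (relevant lines only)
      [global]
         # read loginShell/unixHomeDirectory from AD instead of the template
         winbind nss info = rfc2307
         idmap config YOURDOM : backend = ad
         idmap config YOURDOM : schema_mode = rfc2307
         idmap config YOURDOM : range = 10000-99999
         # fallback for users with no loginShell attribute set
         template shell = /bin/bash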

  • Best Linux Distro for web services (Nginx & node.js) on laptop: Compaq 6710b?

    - by tomByrer
    I haven't used Linux in 5+ years, aside from downloading occasional system-recovery CDs off DistroWatch, so I don't know the current landscape. Related postings on this forum are several years old and may not relate to my hardware (Compaq 6710b laptop, Core2Duo Centrino). Requirements:
    - uses the Compaq 6710b laptop's WiFi out of the box
    - enough frequently updated pre-made packages for web hosting and development (Nginx and node.js are my biggest concerns; everyone has Apache and PHP, and I'm not crazy about building from source)
    - should be easy enough to use, with outside help available (a small-user-base distro is only OK if the community is active and a major distro's packages are compatible)
    - configuration easy to transfer to outside web hosts
    - you have actually installed/used the recommended distro (you don't have to be an expert)
    TIA!

  • Is there a Linux equivalent of iTerm's (Mac) send-command-to-multiple-tabs functionality?

    - by jabbertalker
    In iTerm, you can send a command to execute simultaneously in a set of already-opened tabs. Is there a way to do this in Linux (with gnome-terminal, preferably)? For instance, suppose I had 10 tabs already SSH'd into [email protected] and sudoed to root, and wanted to send a command to run in all 10 tabs. The goal is to be able to stay within a set of tabs and command them all, rather than having to use expect scripts to SSH in, elevate, and run commands. Basically, like you can do in iTerm.
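
    Not gnome-terminal, but a hedged pointer: tmux's synchronize-panes option gives exactly this broadcast behavior inside any terminal (it works on panes in one window, not tabs), and Terminator has a comparable "Broadcast all" grouping option.

      # Inside a tmux session: split into panes, ssh/sudo in each, then
      tmux set-window-option synchronize-panes on    # every keystroke goes to all panes
      tmux set-window-option synchronize-panes off   # back to normal typing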

  • Can I install Linux as the host on a Dell PowerEdge server (R710)?

    - by bksunday
    I might have a deal on a dual six-core PowerEdge server, and I'm about to go test its performance, but I'm wondering a few things which I can't find answers for and can't test before buying the machine. I don't want VMware at all, so: can I just wipe it and install Linux instead, or is VMware embedded in some part I have no access to? Will I still be able to update the various firmware (PERC controllers, motherboard, etc.) on this Dell PowerEdge, or does that require having VMware ESXi installed as the host OS? And optionally: are there any foreseeable problems in doing so?

  • How to configure linux routing/filtering to send packets out one interface, over a bridge and into another interface on the same box

    - by rj75
    I'm trying to test an Ethernet bridging device. I have multiple Ethernet ports on a Linux box. I would like to send packets out one interface, say eth0 with IP 192.168.1.1, to another interface, say eth1 with IP 192.168.1.2, on the same subnet. I realize that normally you don't configure two interfaces on the same subnet, and if you do, the kernel delivers directly to each interface rather than over the wire. How can I override this behavior, so that traffic to 192.168.1.2 goes out the 192.168.1.1 interface, and vice versa? Thanks in advance!
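
    A hedged sketch of the usual workaround: move one interface into its own network namespace, so the kernel in the default namespace no longer owns 192.168.1.2 and can't shortcut the route locally (interface names are taken from the question; requires a kernel with netns support).

      # Put eth1 in its own namespace; pings to 192.168.1.2 now leave via eth0,
      # cross the bridge under test, and arrive at eth1 for real.
      sudo ip netns add bridgetest
      sudo ip link set eth1 netns bridgetest
      sudo ip netns exec bridgetest ip addr add 192.168.1.2/24 dev eth1
      sudo ip netns exec bridgetest ip link set eth1 up
      ping 192.168.1.2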

  • Where can I get a Linux distro iso based on a 2.4 kernel?

    - by Mike
    I need to get my hands on an ISO file for a Linux distribution with a 2.4 kernel. I'm looking for an ISO specifically so I can use it with Oracle VirtualBox. Since 2.4 is so old these days, I'll explain: I'm looking for it because my company uses an ancient 2.4 uClinux distro on the ancient hardware in our devices. I'd like to run some "desktop" tests using the same kernel version as what's in the hardware. As far as I can tell I can't run uClinux on a desktop, so as the next best thing, I'd like to get anything running 2.4.

  • Is there a way to redirect certain URLs to specific web browsers in Linux?

    - by jraxxo
    I'm using Chrome as my default browser in Ubuntu 12.10. I need to use Firefox for business purposes (certain websites pertaining to my work only work with Firefox). Is there a way to force Ubuntu to use Firefox for certain types of URLs (maybe as defined by a regular expression) while maintaining Chrome as my default browser for all my other tasks? Perhaps as a shell script running in the background? I'd like this to work system-wide, covering links from Chrome itself as well as PDFs/ODTs, etc. I have searched for solutions, but I couldn't find anything besides OpenWith, a Firefox extension which adds a button to open certain links in other browsers which would again require me to open Firefox beforehand, which does not help me at all. Does anyone have any ideas? Something like Choosy for Linux?
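
    A hedged sketch of the standard trick: register a small dispatcher script as the system default browser and let it pick the real browser per URL (the script path, desktop-file name and URL patterns are all hypothetical).

      #!/bin/bash
      # ~/bin/browser-dispatch.sh -- route matching URLs to Firefox, the rest to Chrome
      case "$1" in
        *intranet*|https://*.example-work.com/*) exec firefox "$1" ;;
        *)                                       exec google-chrome "$1" ;;
      esac

    Point a .desktop file's Exec= line at the script and select it with xdg-settings set default-web-browser; links opened from any app (including PDFs/ODTs) then pass through the dispatcher.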

  • Remote desktop into a linux machine, is this possible?

    - by fire
    I have a VPS from my host running on a Linux server, and they have given me SSH access. Is it possible to remote-desktop into the server, like you can on Windows, so that I can physically click on things rather than having to use SSH commands? Surely it must be running Fedora or Ubuntu etc., so there is some type of OS. You would probably have to install something on the server's end, I suppose, but I just want to know if it's possible and what the options are. And before you say "why not ask your host": I find superuser responses are usually much quicker :-)
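
    A hedged sketch of one common route (package and group names vary by distro; a VPS usually ships without any desktop, so you install one plus a VNC server and tunnel it over the SSH access you already have):

      # On the server (RedHat-style names shown; Debian/Ubuntu use apt-get and other names):
      sudo yum groupinstall "X Window System" "GNOME Desktop Environment"
      sudo yum install tigervnc-server
      vncserver :1                       # prompts for a VNC password

      # On your machine: tunnel VNC through SSH, then point a VNC viewer at localhost:5901
      ssh -L 5901:localhost:5901 user@your-vps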

  • Nginx works on my linux machine but is not accessible from other computers in my local network

    - by crooveck
    In my LAN I have a server with Scientific Linux (a RedHat/Fedora-based distro). I did yum install nginx, but the welcome page is not accessible from other computers on my network. When I telnet to localhost 80 and then send GET / HTTP/1.0, I get some HTML back from nginx, so it's definitely running. But when I try to connect remotely with telnet to 192.168.3.130 80, I get:

      Trying 192.168.3.130...
      telnet: Unable to connect to remote host: No route to host

    So I assume there is something wrong with my network settings, maybe iptables or something else. Next step: I turned off iptables (service iptables stop) and that helped; now I can connect remotely using telnet. So I think I need to fix my iptables rules. I did some googling and found this rule:

      -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

    but it still didn't allow me to connect remotely when iptables was up. Can someone please help me set up a proper iptables configuration?
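
    A hedged guess at the culprit: on RedHat-style firewalls the INPUT chain ends in a REJECT rule that returns exactly this "No route to host" (icmp-host-prohibited), so a rule appended with -A lands after it and never matches. Inserting with -I puts the ACCEPT in front:

      sudo iptables -I INPUT -p tcp -m state --state NEW --dport 80 -j ACCEPT
      sudo service iptables save                  # persist across restarts on RHEL-style systems
      sudo iptables -L INPUT -n --line-numbers    # verify the ACCEPT precedes the REJECT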

  • Configure Linux server hardware buttons for soft reset or power cycle?

    - by Jakobud
    I have a small, modest CentOS server at home. I run it headless because any time I access it, it's via SSH. Anyway, tonight it became unresponsive to the network, and I could not connect to investigate. In that case I have to hook up a keyboard and monitor to see the problem; I ended up just rebooting it. After this experience, I was wondering whether it's possible to configure the hardware buttons on the case to perform a graceful reboot or graceful power cycle in Linux. Even though the server becomes unresponsive only once in a blue moon, it would be nice to simply press a button and have it shut down all services and gracefully reboot. Anyone know how this could be accomplished?
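
    A hedged acpid sketch (the paths are common CentOS defaults; acpid must be installed and running): map the case's power button to a graceful reboot instead of a hard kill.

      # /etc/acpi/events/power -- acpid event rule
      event=button/power.*
      action=/sbin/shutdown -r now

      # then reload the daemon:
      # service acpid restart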

  • Total network data sent/received of a non-daemon Linux process?

    - by leden
    I'm looking for a simple and effective way of measuring the total bytes received/sent by a single process upon its termination. Basically, I am looking for a tool with an interface similar to time and /usr/bin/time, e.g.

      measure-net-data <prog_to_run> <prog_args>
      Received (b): XYZ
      Sent (b): ABC

    I know there are many tools for bandwidth/network monitoring, but as far as I can tell all of them perform the measurements in real time, which is inappropriate not only because of the overhead but also because of the inconvenience: I would need to stop the program, capture the output of the tool, and then kill it. I have seen that newer Linux kernels (2.6.20+) provide /proc/<pid>/io, which contains the information I'm looking for; however, everything under /proc/<pid> disappears when the process terminates, so I'm back to the same problem as with any network monitoring tool.
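
    A hedged sketch of a crude measure-net-data (the name mirrors the example above): it diffs the interface-wide counters in /proc/net/dev, so it is only meaningful when the wrapped program is the machine's only significant network user, and the interface name is an assumption.

      #!/bin/bash
      # measure-net-data (sketch): interface-wide byte counters, not true per-process ones
      IF=eth0
      read rb tb < <(awk -v i="$IF:" '$1==i {print $2, $10}' /proc/net/dev)
      "$@"                                  # run the wrapped program to completion
      read ra ta < <(awk -v i="$IF:" '$1==i {print $2, $10}' /proc/net/dev)
      echo "Received (b): $((ra - rb))"
      echo "Sent (b):     $((ta - tb))"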

  • Program to Queue Files for Copy/Move/Delete in linux?

    - by laliga
    I've searched the net for Linux's answer to something like Teracopy (Windows), but could not find anything suitable. The closest things I found are:
    - Krusader: mentioned in its feature list, but indicated as "not implemented yet".
    - MiniCopier: a Java-based app, http://a.courreges.free.fr/projets/minicopier/minicopier-en.php
    rsync is not an option. Can someone recommend a simple file-copy tool that can queue files for copy/move/delete? Preferably one I can drag and drop to from Nautilus. If something like this does not exist, can someone please tell me why? ...Am I the only person that needs something like this?

  • A "quick" vector editor (SVG) for Linux (for annotating images?)

    - by sdaau
    I often need to take a bitmap (.png) image, draw some lines or text on top of it, and possibly export a new, thusly "annotated" image. I know I can basically do all this in Inkscape, but Inkscape is a complex program, and it needs almost a minute to start up properly on my PCs. So I was thinking: is there something like a "mini" vector editor for Linux, which would start up fast, and allow me to:
    - right-click and open an image in this editor program
    - have the program scale the active "document"/"window size" to the size of the image
    - zoom in/out (and possibly crop) the image
    - add at least lines, boxes and text in different colors?
    A bonus for me would be to have the overlay graphics saved in SVG format, say with the same filename as the image, as in "image.png.svg" being saved in the same directory where the original "image.png" is located (thus allowing these "annotations" to be opened and edited further, either in this editor or possibly in Inkscape). Another bonus would be export of the annotated image to a bitmap. Anyone know about anything like this?

  • How to save your Linux state (suspend to disk) periodically to recover from crashes?

    - by WoLpH
    Once in a while my laptop crashes/dies because of a bad/empty battery, a crappy wifi driver, or whatever other reason. For a while I've wondered whether it's possible to force Linux to periodically save its state to disk (like VMware snapshots), so you can restore from that with possibly slightly outdated work, but at least with all of your apps open in the same state you left them. I don't really see the point in having to boot everything from scratch constantly; although KDE saves your session on logout, that doesn't happen periodically (by default) either. It would make it much nicer to recover from crashes if your RAM were written to disk periodically. Anyone know if there's a system call to do this without also shutting down the machine? Even a manual button to save the entire state would be nice.

  • The dreaded Brightness issue (Fn keys + Max brightness)

    - by Adam
    I'm trying to get some control over the brightness of my Samsung QX411 (integrated Intel and discrete Nvidia graphics, though Ubuntu doesn't see the latter yet; I'll play around with Bumblebee later). Using Fn+up/down moves the screen brightness between max and one peg down. If I try to bring the brightness down any further, it just flickers and stays the same. I can lower the brightness in Settings, but that's delicate and gets reverted to max if I open the brightness settings again, or log out. The closest I got was adding acpi_backlight=vendor to a line in /etc/default/grub (source); I could consequently lower the brightness a couple of pegs down to the minimum with Fn+down, but then it's as if the problem got inverted, and I'd be stuck in the bottom tier: I could only increase the brightness by one peg and back down, and rebooting would revert to max brightness. acpi_osi=, acpi_osi=Linux, acpi_osi=vendor, acpi_osi='!Windows 2012', acpi_backlight=Linux, acpi_backlight='!Windows 2012' don't do anything for me. I've also tried adding echo 2000 > /sys/class/backlight/intel_backlight/brightness to /etc/rc.local, where my max value from cat /sys/class/backlight/intel_backlight/brightness is 4648, which didn't do anything (same result with echo 2000 > /sys/class/backlight/acpi_video0/brightness; source). Samsung tools also didn't help in this regard. I've spent hours on this, and it's getting quite frustrating. Any help would be greatly appreciated.

  • What's wrong with my wireless?

    - by dazzle
    I am having issues with my wireless connection. It constantly disconnects, attempts to reconnect, reconnects momentarily, then disconnects again, on time scales that range from seconds to minutes. In the meantime, needless to say, I'm having significant packet loss. I'm running Ubuntu 14.04 64-bit, updated and upgraded as of today. Here is my card and driver:

      delta@sager:~$ lspci -vq | grep -i wireless -B 1 -A 5
      04:00.0 Network controller: Intel Corporation Wireless 7260 (rev 73)
              Subsystem: Intel Corporation Dual Band Wireless-AC 7260
              Flags: bus master, fast devsel, latency 0, IRQ 47
              Memory at f7d00000 (64-bit, non-prefetchable) [size=8K]
              Capabilities:
              Kernel driver in use: iwlwifi

    Here is my kernel:

      delta@sager:~$ uname -r
      3.13.0-34-generic

    None of the other machines on my home network are having these issues. Windows Vista is networking without issue, for goodness sake ;-) Here is a small clipping from the output of dmesg. As you can see, I am getting a cfg80211 message of some sort over and over again (FYI, I've replaced my MAC address with a series of dashes, so anywhere there is a ---------------, that is where the MAC address was):

      [ 1881.739161] wlan1: authenticate with ---------------
      [ 1881.741561] wlan1: send auth to --------------- (try 1/3)
      [ 1881.743440] wlan1: authenticated
      [ 1881.746027] wlan1: associate with --------------- (try 1/3)
      [ 1881.749244] wlan1: RX AssocResp from --------------- (capab=0x411 status=0 aid=4)
      [ 1881.754727] wlan1: associated
      [ 1881.754827] cfg80211: Calling CRDA for country: US
      [ 1881.761552] cfg80211: Regulatory domain changed to country: US
      [ 1881.761559] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
      [ 1881.761564] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm)
      [ 1881.761568] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 1700 mBm)
      [ 1881.761571] cfg80211:   (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1881.761574] cfg80211:   (5490000 KHz - 5600000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1881.761577] cfg80211:   (5650000 KHz - 5710000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1881.761580] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm)
      [ 1881.761584] cfg80211:   (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 4000 mBm)
      [ 1882.391038] cfg80211: Calling CRDA to update world regulatory domain
      [ 1882.396254] cfg80211: World regulatory domain updated:
      [ 1882.396260] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
      [ 1882.396265] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1882.396268] cfg80211:   (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1882.396271] cfg80211:   (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      [ 1882.396274] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1882.396277] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1886.148252] wlan1: authenticate with ---------------
      [ 1886.150005] wlan1: send auth to --------------- (try 1/3)
      [ 1886.151807] wlan1: authenticated
      [ 1886.154847] wlan1: associate with --------------- (try 1/3)
      [ 1886.158147] wlan1: RX AssocResp from --------------- (capab=0x411 status=0 aid=4)
      [ 1886.163464] wlan1: associated
      [ 1886.163520] wlan1: Limiting TX power to 30 (30 - 0) dBm as advertised by ---------------
      [ 1886.163588] cfg80211: Calling CRDA for country: US
      [ 1886.170500] cfg80211: Regulatory domain changed to country: US
      [ 1886.170508] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
      [ 1886.170513] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm)
      [ 1886.170517] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 1700 mBm)
      [ 1886.170520] cfg80211:   (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1886.170523] cfg80211:   (5490000 KHz - 5600000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1886.170526] cfg80211:   (5650000 KHz - 5710000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1886.170529] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm)
      [ 1886.170533] cfg80211:   (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 4000 mBm)
      [ 1887.200197] cfg80211: Calling CRDA to update world regulatory domain
      [ 1887.203655] cfg80211: World regulatory domain updated:
      [ 1887.203659] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
      [ 1887.203662] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1887.203664] cfg80211:   (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1887.203666] cfg80211:   (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      [ 1887.203668] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1887.203670] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)

    I've poked around on AskUbuntu and have not found any adequate solutions; I've also found similar threads that were left unanswered. Any advice/experience/threads I might be able to pull on would be greatly appreciated. In your opinion, is this a kernel issue, hardware issue, etc.? Thanks in advance.

    EDIT: chili, here's the output of iwconfig:

      delta@sager:~$ iwconfig
      wlan1     IEEE 802.11abg  ESSID:"LANbeforetime"
                Mode:Managed  Frequency:2.412 GHz  Access Point: -----------
                Bit Rate=48 Mb/s   Tx-Power=16 dBm
                Retry long limit:7   RTS thr:off   Fragment thr:off
                Power Management:off
                Link Quality=44/70  Signal level=-66 dBm
                Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                Tx excessive retries:0  Invalid misc:80   Missed beacon:0

      eth0      no wireless extensions.

      lo        no wireless extensions.
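
    Not a diagnosis, but a hedged, widely tried workaround for iwlwifi 7260 drop-outs of this kind (11n_disable and power_save are real iwlwifi module parameters; whether they help depends on firmware and access point):

      # /etc/modprobe.d/iwlwifi-tweaks.conf
      options iwlwifi 11n_disable=1 power_save=0

      # then reload the driver (or reboot):
      sudo modprobe -r iwlwifi && sudo modprobe iwlwifi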

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that). Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented for Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve; there never have been two independent service curves for either one, as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below. Here are some questions I cannot answer because the tutorials contradict each other:
    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both have twice the priority of A2 and B2 automatically (still only getting 128 kbit/s on average).
    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?
    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth + link-share bandwidth. What is the truth?
    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference whether A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?
    5. If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why is real-time not a flat value, and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not average bandwidth, but their slopes) to be different for real-time and link-share traffic?
    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (classes A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes, and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)
    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?
    8. One tutorial said to forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?
    9. And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed), and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope, or each a different one, and how do I find out the right slope?
    I still haven't lost hope that there exists at least a handful of people in this world that really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
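
    For reference, a hedged tc sketch of the sample hierarchy above (rt/ls/ul and the m1/d/m2 two-segment curve form are real tc-hfsc keywords, but the values only mirror the example, and eth0 is assumed):

      tc qdisc add dev eth0 root handle 1: hfsc default 12
      tc class add dev eth0 parent 1:   classid 1:1  hfsc ls m2 512kbit ul m2 512kbit   # root: 512 kbit/s
      tc class add dev eth0 parent 1:1  classid 1:10 hfsc ls m2 256kbit                 # A
      tc class add dev eth0 parent 1:1  classid 1:20 hfsc ls m2 256kbit                 # B
      # A1/B1 with the two-segment curve from question 1, A2/B2 link-share only:
      tc class add dev eth0 parent 1:10 classid 1:11 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit  # A1
      tc class add dev eth0 parent 1:10 classid 1:12 hfsc ls m2 128kbit                                  # A2
      tc class add dev eth0 parent 1:20 classid 1:21 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit  # B1
      tc class add dev eth0 parent 1:20 classid 1:22 hfsc ls m2 128kbit                                  # B2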

  • Can't install NPM after installing Node on EC2 Linux instance?

    - by frequent
    I'm making my first attempt at getting a Node server set up on an Amazon EC2 Linux instance, and I think I made it quite far. The first problem I ran into was that while trying to make Node, the connection timed out after a while, so I needed three attempts until I got this:

      LINK(target) /home/ec2-user/node/out/Release/node: Finished
      touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_header.stamp
      touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_provider.stamp
      touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_ustack.stamp
      touch /home/ec2-user/node/out/Release/obj.target/node_etw.stamp
      make[1]: Leaving directory `/home/ec2-user/node/out'
      ln -fs out/Release/node node

    which tells me "Node is done", although I'm not sure it is also working as it should. Following this, this and this tutorial, I'm now stuck at installing npm. I think I first cloned into the wrong folder, which always gave me error 127, but even if I do this:

      cd ~
      git clone git://github.com/isaacs/npm.git
      cd npm
      sudo -s PATH=/usr/local/bin:$PATH make install

    I'm still getting this:

      # after cloning #
      make[1]: Entering directory `/root/npm'
      node cli.js install
      bash: node: command not found
      make[1]: *** [node_modules/.bin/ronn] Error 127
      make[1]: Leaving directory `/root/npm'
      make: *** [man/man3/start.3] Error 2

    Question: Since I'm pretty much a newbie at everything I'm trying here, can someone please tell me what I'm doing wrong and how to get npm to install? Also, in case I cloned into the wrong folder, is there a way to remove the "false clone", or is nothing written to disk until I call make install, so I don't need to worry? Thanks for helping out!
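
    A hedged reading of the error: the build only symlinked the freshly built binary inside ~/node (ln -fs out/Release/node node) and never installed it, so root's shell finds no node even with /usr/local/bin prepended to PATH. Installing node first should let the npm make proceed:

      cd ~/node
      sudo make install                        # copies the built binary into /usr/local/bin
      which node && node -v                    # sanity check: prints the path and version
      cd ~/npm
      sudo -s PATH=/usr/local/bin:$PATH make install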

  • How do I keep folders synced and backed up between two macs using a Linux NAS (rsync?)

    - by Hultner
    I've got two primary computers, a Mac Pro and a MacBook Pro for when I'm on the go. I've also got a Linux server which also acts as a NAS. Currently I back up both computers in their entirety to an external drive with Time Machine, which is rather useless and doesn't sync anything. What I really want is to keep my important files synced between both computers and my NAS (which is running RAID 5); that way I'm not backing up easily replaceable system files, and I've got all my important files in 3 places, two of them running RAID, so at least 5 drives would have to crash at the same time before actual data loss occurs. The folders I want to keep synced are basically my photo, documents, development, MAMP and work folders, and I want to keep the user Library folder backed up but not synced. I'm thinking I'd have to use rsync, but I don't know how. Before suggesting Dropbox and the like: I don't want to use such services, for several reasons. Among them are security (Dropbox obviously proved this), speed (sometimes I'll sync gigabytes of data, which will be significantly faster locally and probably even through VPN, as I have a Gigabit pipe), space (space on my NAS is cheap and only practically limited by my needs), reliability (even if my internet were to go down, I still need to be able to keep my files synced in case I need to go somewhere on the fly), and price (I already have all the hardware, and for the number of gigabytes and the bandwidth I'd need, I doubt there's any free or cheap service). Those are my main reasons for wanting to keep it local. I'm sorry for any spelling or grammatical mistakes I might have made; I'm writing this on my smartphone from a shaky train, and English isn't my mother tongue. I gratefully appreciate any answers, even ones only partly solving my problem.
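
    A hedged starting point (hostnames and paths are made up; rsync ships with OS X and nearly every Linux distro): push each synced folder to the NAS over SSH from each Mac, e.g. on a schedule. Note that a single rsync run is one-way; true two-way sync between the Macs needs a tool like unison on top.

      # Sync folders to the NAS (mirror deletions with --delete)
      rsync -avz --delete ~/Documents/ user@nas:/srv/sync/documents/
      rsync -avz --delete ~/Pictures/  user@nas:/srv/sync/pictures/

      # Back up (rather than sync) the user Library: omit --delete so old files are kept
      rsync -avz ~/Library/ user@nas:/srv/backup/library/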
