Search Results

Search found 7805 results on 313 pages for 'high voltage'.


  • Toshiba A205-5804 freezes when plugged in

    - by heron
    Well, I have a Toshiba A205-5804, and the problem is that the screen freezes whenever I plug the PC into the external power supply. Unlike most computers with this issue, mine DOES freeze in safe mode too, and I really can't bear this problem much longer. It's not an overheating problem: the computer isn't getting hot or anything like that. I've already tried changing the AC adapter, booting with AC only and no battery, and also this suggestion:

        Try changing the following setting in the BIOS setup, under the 'Advanced' tab:
        Dynamic CPU Frequency: Mode = Always Low (NOT DYNAMIC)

    My laptop then ran on AC power without a problem for 24 hours, including many restarts, but when I went back to the original BIOS setting, the problem returned almost straight away.

    EDIT: Other suggestions I found on the web, from here and here:

    1. Set the power plan to high performance.
    2. Set the power plan to "Minimal Power Management" (1 and 2 conflict with each other).
    3. Start - Control Panel - Device Manager - Processor - disable one of the two processors - reboot normally.
    4. Do this:
       - Only plug the battery into the laptop.
       - Turn on the laptop and start Windows normally.
       - Plug the AC adapter into the laptop; the screen will freeze.
       - Leave the laptop the way it is for 12-24 hours.
       - After 12-24 hours, turn it off the hard way.
       - Once it is turned off, turn it back on. The laptop is working now.

    I have no idea what it can be...

  • Nginx terminating SSL for WordPress

    - by Mike
    I have a bit of a problem. We run a WordPress blog behind an nginx proxy and are looking to terminate the SSL on the nginx side. Our current nginx config is:

        upstream admin_nossl {
            server 192.168.100.36:80;
        }

        server {
            listen 192.168.71.178:443;
            server_name host.domain.com;

            ssl on;
            ssl_certificate /etc/nginx/wild.domain.com.crt;
            ssl_certificate_key /etc/nginx/wild.domain.com.key;
            ssl_session_timeout 5m;
            ssl_protocols SSLv2 SSLv3 TLSv1;
            ssl_prefer_server_ciphers on;
            ssl_session_cache shared:SSL:10m;
            ssl_ciphers RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;

            location / {
                proxy_read_timeout 2000;
                proxy_next_upstream error;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://admin_nossl;
                break;
            }
        }

    It just does not seem to work. I can hit https://host.domain.com, but it quickly switches back to non-secured from what I can see. Any pointers?
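
    Edit: one thing I have not ruled out is that, with SSL terminated at the proxy, WordPress itself never sees an HTTPS request and bounces visitors back to http. A minimal sketch of the usual workaround, assuming a standard WordPress install (proxy_set_header is a stock nginx directive; the wp-config.php hook is my assumption about this setup). Inside the location block above:

        proxy_set_header X-Forwarded-Proto https;

    and on the WordPress side, in wp-config.php:

        if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) &&
            $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
            $_SERVER['HTTPS'] = 'on';
        }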

  • How do I permanently delete /var/log/lastlog?

    - by GregB
    My /var/log/lastlog file is huge. I know it really only occupies a few kilobytes, but tar isn't smart enough to know that, so when I image a virtual machine, my restore fails because it thinks I'm trying to load more data than I have capacity for on my disk. I want to delete /var/log/lastlog and stop any and all logging to the file. I'm aware of the security implications; this logging needs to stop to preserve my backup strategy. I've made a change to /etc/pam.d/login which I was told would disable logging to /var/log/lastlog, but it does not appear to work, as /var/log/lastlog keeps growing:

        # Prints the last login info upon succesful login
        # (Replaces the `LASTLOG_ENAB' option from login.defs)
        #session    optional   pam_lastlog.so

    Any ideas?

    EDIT: For anyone interested, I use Centrify Express to authenticate my users via LDAP. Centrify Express is "free", but one of the drawbacks is that I can't manage user UIDs via LDAP, so they are given a dynamic UID when they log in to a server. Centrify picks some crazy high UID values (so they don't conflict with local users on the server, presumably). /var/log/lastlog is indexed by UID and grows to accommodate the largest UID on the system. This means that when a Centrify user logs in, they get a UID at the upper end of the UID range, which causes lastlog to allocate an obscene amount of space, according to the file system:

        ~$ ll /var/log/lastlog
        -rw-rw-r-- 1 root root 291487675780 Apr 10 16:37 /var/log/lastlog
        ~$ du -h /var/log/lastlog
        20K     /var/log/lastlog

    More info: sparse files.
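
    Edit 2: since this is exactly the sparse-file case, the backup side can be fixed instead of the logging side. A hedged sketch, assuming GNU tar (the -S/--sparse flag) and that I can control the tar invocation used for imaging; the archive path is illustrative:

        # store the holes instead of reading ~271 GB of zeros
        tar --sparse -czf /backup/image.tar.gz --one-file-system /

        # compare apparent vs. on-disk size first (GNU du)
        du -h --apparent-size /var/log/lastlog
        du -h /var/log/lastlog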

  • Different external ip addresses from different sites

    - by user630286
    My router is ClearOS 6 (CentOS 6). In my router I have two external (internet) network connections from two ISPs. The primary connection is eth2, connected to a cable modem, and the second one is ppp0, connected to a DSL modem. I have assigned eth2 as the primary connection (with a high metric value); in fact this is done through ClearOS's multiwan web interface. I have a test in my Nagios to monitor whether the primary connection is up, based on the result of curl ifconfig.me. But it seems that ifconfig.me is always giving the IP address of my secondary connection. I tested it through a browser: yes, ifconfig.me gives the secondary internet's (ppp0) IP address, but whatismyipaddress.[com|org] gives my primary IP address (eth2). I checked the default route on the router through ip route list 0/0, which also shows the primary connection (eth2) as the default route. traceroute www.google.com and traceroute ifconfig.me both seem to trace through the primary connection (eth2). As our secondary internet connection has only got a limited download allowance, I don't want to end up having to pay a large sum at the end of the month. Has somebody got an idea why ifconfig.me shows my secondary address? What is the best way to ensure that my router (and thus the LAN) uses the right internet connection?
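
    Edit: multiwan setups work by policy routing, so the main routing table (what ip route list 0/0 prints) is not the whole story. A hedged sketch of standard iproute2 checks (the commands are stock; what the output means depends on the rules ClearOS installs):

        ip rule show                              # policy rules consulted before the main table
        ip route show table all | grep default    # default routes in every table, not just 'main'
        ip route get 8.8.8.8                      # the route actually chosen for one destination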

  • Docking Station Sound Doesn't Work on Dell D830 with Windows 7

    - by cisellis
    EDIT: I can only mark one answer as the correct one, but the actual solution was a combination of two comments (updating the BIOS to A15 AND installing the SigmaTel audio drivers).

    I have a Dell Latitude D830 laptop running Windows 7 Enterprise x64. I connect to a docking station during the day, with multiple monitors, a keyboard and a mouse. Everything runs with no problems, including most of the docking station ports (USB, monitors, etc.). However, the sound port on the docking station has not worked since the upgrade to Windows 7: even with the laptop plugged in, the sound always comes out of the laptop, not the headphones plugged into the docking station. Here's what I've tried:

    - I've seen other issues like this via Google that seem to be mostly unanswered. I found one or two that referenced using the Vista x64 drivers, especially the Nvidia drivers. I do not have an Nvidia chipset, but I've reinstalled the sound drivers and that has not helped.
    - I don't have a support contract, and considering the cost of calling Dell is usually high, that's not an option.
    - Dell's forums are pretty much a wasteland and I've found no help there.
    - Since this is a docking station, I thought I might need to try the SATA or Intel chipset drivers from the Dell site instead; however, I'm not really sure, and I need to work on this laptop in the meantime. I can't really afford the downtime to experiment with random drivers all day in case they turn out to be incompatible (Dell still hasn't added Windows 7 to their support site as far as I can tell).

    Does anyone have any other ideas? Has anyone had this issue and solved it? If so, how? Thanks in advance for your help.

  • What does opening a serial port do?

    - by reve_etrange
    What does opening the standard PC serial port do, in electrical terms (i.e. what voltages on which pins)? For example, the ancient VB6 program which controls an apparatus I am tasked with maintaining toggles .PortOpen to control some TTL. The connection only used 2 pins (bad solder joints fell apart), so which pins do I solder to? The only labels / documentation refer to pins 7 and 9, saying 0V and 5V parenthetically, but does .PortOpen really just put 5V between RI and RTS?

    As a postscript, this isn't the weirdest thing about the setup. The TTL I referred to above also connects to an instrument via a BNC-to-DB9 adapter (!), with only 1 pin used. I guess there was an assumption about a common ground, since the BNC shielding isn't connected to the GND pin? The connection is to the instrument's 'foot pedal' pin; it was a way to remotely trigger the device.

    Update: According to this page, the DTR and RTS pins can go high when the port is opened. If they were so configured, they will subsequently go low when the port is closed. If DTR and RTS are not enabled, opening the port should set both to low (and keep them low).
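
    For anyone poking at the same behaviour outside VB6, a minimal sketch of the open/close line toggling, assuming Python with the pyserial package (the port name is hypothetical, and an RS-232 'high' is really anywhere from +5 to +12 V depending on the transceiver):

        import serial

        ser = serial.Serial()    # constructed but not opened: control lines idle
        ser.port = 'COM1'        # hypothetical port name
        ser.dtr = True           # levels to assert at open time
        ser.rts = True
        ser.open()               # DTR (DB9 pin 4) and RTS (DB9 pin 7) now driven high
        ser.rts = False          # lines can also be toggled while the port is open
        ser.close()              # lines drop again, like .PortOpen = False in VB6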

  • To what extent is size a factor in SSD performance?

    - by artif
    To what extent is the size of an SSD a factor in its performance? In my mind, correct me if I'm wrong, a bigger SSD should be, everything else being equal, faster than a smaller one. A bigger SSD would have more erase blocks and thus more leeway for the FTL (flash translation layer) to do garbage-collection optimization, and there would be more time before TRIM became necessary. I see that Wikipedia remarks that "The performance of the SSD can scale with the number of parallel NAND flash chips used in the device", so it seems throughput also increases significantly. Also, many SSDs contain internal caches of some sort, and presumably those caches are larger for correspondingly large SSDs.

    But supposing this effect exists, I would like a quantitative analysis. Does throughput increase linearly? How much is garbage collection impacted, if at all? Does latency stay the same? And so on. Would the performance of an 8 GB SSD be significantly different from, for example, an 80 GB SSD, assuming both used high-quality chips, controllers, etc.? Are there any resources (web pages, research papers, presentations, books, etc.) that discuss correlations between SSD performance (4 KB random write speed, latency, maximum sequential throughput, etc.) and size?

    I realize this does not really sound like a programming question, but it is relevant to what I'm working on (using flash for caching hard drive data), which does involve programming. If there is a better place to ask this question, e.g. a more hardware-oriented site, what would that be? Something like the equivalent of Stack Overflow (or perhaps a forum) for in-depth questions on hardware interfaces, internals, etc. would be appreciated.

  • Hearing a clicking noise from soundcard all the time

    - by Mehrdad
    I have installed Fedora 17 on my laptop. A few days ago I updated my Fedora (but did not upgrade it). I shut down my computer, and since the next time I turned it on I have been hearing a clicking noise from the speakers all the time. Even when I plug my headphones in, I hear the noise through the headphones. I surfed the internet and found the following shell commands:

        su -c 'echo "options snd_hda_intel power_save=0" > /etc/modprobe.d/snd_hda_intel.conf'
        su -c 'echo 0 > /sys/module/snd_hda_intel/parameters/power_save'

    I tried them, but they didn't work. Here is the part of the lspci output related to my sound card:

        00:1b.0 Audio device: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller (rev 03)

    I should add that my sound card is working and I can play audio files; I mean I can hear the sound and the noise simultaneously. But everything is OK in Windows XP, which is also installed on my laptop. Could it be related to the sound-card driver? If so, how can I revert it to the previous version?

  • Short, intermittent USB port timeouts

    - by jacobsee
    I have a data acquisition application controlled by a Windows PC. I am using an Intel Desktop Board DH67CL motherboard. This has 6 USB2 ports along with 2 of the new blue USB3 ports. The DAQ instrument connects via USB to the computer. Roughly once every day or two there is a short communication glitch with the USB that causes the instrument to disconnect briefly and then reconnect. This is logged so I know whenever it happens. Occasionally it will cause the data acquisition to stop.

    I've verified that the glitches will go away if I use the USB3 ports, or if I use a PCI add-on card with USB ports. So it seems that there is something going on with these built-in USB2 ports on this motherboard. I haven't yet had a chance to test with other motherboards. My question: what could be causing these glitches and how to get rid of them for this board, or is there a better motherboard to use? We have standardized on the DH67CL because it's inexpensive, has 3 PCI slots that we need, and is readily available. We don't need the power of higher-end server boards but reliability is important. Thanks.

    Update: This problem has been reproduced many times on different hardware, though we always use the same model of motherboard (DH67CL) and power supply (Antec EA-430D). I don't think the power requirements are very high but will check on that.

  • VPS Memory Exhausted Even With Light Settings

    - by user101570
    Linux noob here. I have a 256MB VPS running Ubuntu 11.04 server, and when I run free -m the result shows all memory being used (including the second line re: buffers/cache). I found this very strange, considering I only have 5 Apache processes running, each chewing up about 20MB. MySQL is taking up 30MB. To my knowledge, and according to top, I have no other memory hogs running. Settings that may be relevant:

        PHP memory_limit = 32M
        MySQL key_buffer = 16M
        Prefork MPM MaxClients = 10

    When I reviewed these settings, I naturally thought MaxClients was too high, so I tried switching it to 5. Now not only does my memory still show as 100% used, my website loads much, much slower, despite not getting any traffic aside from mine at the moment. I don't understand this. I thought a single Apache process handles all requests from a client received within the KeepAliveTimeout window, which I've set to 2 seconds. With my initial config of 10 MaxClients, my page load times are around .3ms, so a single process should handle that no problem, correct? So next I went to the extreme of MaxClients 1. My memory is still at 100% usage and my site loads painfully slowly. I'm a noob at a complete loss here. According to the many tutorials I've read on basic server setup, I should be good to go. Help! Please!

    Edit:

                         total       used       free     shared    buffers     cached
        Mem:               256        256          0          0          0          0
        -/+ buffers/cache:            256          0
        Swap:                0          0          0
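
    Edit 2: to see where the memory actually sits, a hedged sketch with standard tools (the beancounters file only exists on OpenVZ/Virtuozzo-style containers, which a 256MB VPS reporting zero swap and zero buffers often is):

        ps aux --sort=-rss | head -n 15             # largest resident processes first
        cat /proc/user_beancounters 2>/dev/null     # container limits; failcnt > 0 means a limit was hit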

  • Capabilities of business and SOHO routers

    - by Q8Y
    I'm currently studying for the CCNA certifications (especially Cisco routers and configuration). I know that business routers provide more features than SOHO routers, and that their processing speed and RAM can be much higher. Assume I need to connect a number of users through a network (accessing the internet, sharing files, printers, ...), I have a high-speed connection to the internet, and I have already applied QoS.

    - How can I find out how many users such a single (SOHO) router could handle? In my case I'd attach multiple switches to it until I have the number of ports needed. Would everything work well and smoothly with 50 users? What about 300? At which point would I need a business router instead?
    - If I implemented VLANs here, would it make any difference to the performance?
    - When do I really need to use more than one router (both SOHO and business)? I'm thinking that I may need them only if I want to increase performance (instead of replacing the existing one) and if I have multiple locations, so in that situation I need multiple routers, right? Put differently: is there any need for another router if my business is all in one place?

  • Google Chrome is running my system out of memory

    - by jasondavis
    I am running Windows 7 x64 with 12GB of RAM. I often have multiple windows and a ton of tabs open, and I use the extension Session Buddy to restore all my windows and tabs once memory use gets too high. My 12GB of RAM will get up to around 93% used because of Chrome; I can then close Chrome, restore the same set of windows and tabs, and it will only use about 25% of memory, then increase back up to the 90% zone over several hours. It seems that when I close tabs, the memory is not freed up, so usage just accumulates as new tabs are opened and closed; this sounds like a huge bug in Chrome.

    Just as an example: I re-booted my system with only 1 window and 4 tabs open, and Task Manager showed 29 chrome.exe processes. I then killed all Chrome processes and opened a Chrome window with just 1 tab; that made 27 chrome.exe processes. Is this an issue that others have? More importantly, is there a fix?

    UPDATE: I just read that each plugin and extension creates a chrome.exe process. I then counted 24 extensions, which helps explain a portion of the many processes. Still not sure about memory not being freed up, though!

  • Postfix: outgoing mail over TLS for a specific domain

    - by vercetty92
    I am trying to configure Postfix to send mail with TLS (STARTTLS, in fact), but only for a specific destination. I tried with smtp_tls_policy_maps. This is the only line in my main.cf regarding TLS configuration, but it seems not to be working. Here is my main.cf:

        queue_directory = /opt/csw/var/spool/postfix
        command_directory = /opt/csw/sbin
        daemon_directory = /opt/csw/libexec/postfix
        html_directory = /opt/csw/share/doc/postfix/html
        manpage_directory = /opt/csw/share/man
        sample_directory = /opt/csw/share/doc/postfix/samples
        readme_directory = /opt/csw/share/doc/postfix/README_FILES
        mail_spool_directory = /var/spool/mail
        sendmail_path = /opt/csw/sbin/sendmail
        newaliases_path = /opt/csw/bin/newaliases
        mailq_path = /opt/csw/bin/mailq
        mail_owner = postfix
        setgid_group = postdrop
        mydomain = ullink.net
        myorigin = $myhostname
        mydestination = $myhostname, localhost.$mydomain, localhost
        masquerade_domains = vercetty92.net
        alias_maps = dbm:/etc/opt/csw/postfix/aliases
        alias_database = dbm:/etc/opt/csw/postfix/aliases
        transport_maps = dbm:/etc/opt/csw/postfix/transport
        smtp_tls_policy_maps = dbm:/etc/opt/csw/postfix/tls_policy
        inet_interfaces = all
        unknown_local_recipient_reject_code = 550
        relayhost =
        smtpd_banner = $myhostname ESMTP $mail_name
        debug_peer_level = 2
        debugger_command =
            PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin
            xxgdb $daemon_directory/$process_name $process_id & sleep 5

    And here is my tls_policy file:

        gmail.com encrypt protocols=SSLv3:TLSv1 ciphers=high

    I also tried:

        gmail.com encrypt

    My wish is to use TLS only for the Gmail domain. With this configuration, I don't see any TLS line in the source of the mail. But if I tell Postfix to use TLS when possible for all destinations with this line, it works:

        smtp_tls_security_level = may

    because I can then see this line in the source of my mail:

        (version=TLSv1/SSLv3 cipher=OTHER);

    But I don't want to try to use TLS for the other domains, only for Gmail. Do I miss something in my conf? (I also tried with hash:/etc/opt/csw/postfix/tls_policy, and it's the same.) Thanks a lot in advance.
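
    Edit: two things I am double-checking in the meantime (a hedged sketch; postmap and its -q lookup switch are standard Postfix tools, and the paths are the ones above): the indexed map must be rebuilt after every edit, and the lookup key must match the next-hop destination domain:

        postmap dbm:/etc/opt/csw/postfix/tls_policy                  # rebuild the indexed files
        postmap -q gmail.com dbm:/etc/opt/csw/postfix/tls_policy     # should print the policy line
        postfix reload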

  • CPU operating temperature ranges

    - by osij2is
    I have an AMD Phenom II 960T with 2 cores unlocked, for a total of 6 cores. I don't overclock at all, and I have an Arctic Cooling ACALP64 heatsink/fan installed. I'm currently running ESXi 5.0, so I have to go into the BIOS to read the CPU temperatures, which at idle seem to be in the 71-74°C range. To me this is pretty high, but I cannot find any official temperature range that AMD says the CPU can work well within.

    There seem to be a lot of questions on Super User and numerous forums around CPU temperatures, but no one seems to have a clear consensus as to what the manufacturer temperature ranges are for specific CPUs. I've tried searching AMD's site to no avail. At this point I'd be willing to shut off the 2 extra cores if it keeps the heat down, but until I get some sort of tolerance or range for temperature, I have no idea if the CPU is being damaged or not.

    Can anyone point to a direct source, article or FAQ from AMD that specifically states their CPUs' temperature range? Or are CPU temperature ranges so varied that there's no possible baseline? Am I being too paranoid about this? To me, anything over 65°C is a bit much, and if I'm in the low-to-mid 70s range with NO VMs running, what can I expect with several VMs running?

  • Subdomain is preventing my search results from rising as expected in page rank

    - by culov
    My problem is that I have a site which requires a dedicated page for every city I choose to support. Early on, I decided to use subdomains rather than a path directly after my domain (i.e. I used la.truxmap.com rather than truxmap.com/la). I realize now that this was a major mistake, because Google seems to treat la.truxmap.com as a completely different site from ny.truxmap.com. So, for instance, if I search "la food truck map" my site will be near the top; however, if I search "nyc food truck map" I'm nowhere in sight, because ny.truxmap.com wouldn't rank very highly by itself, and it doesn't get the boost it ought to be getting from the better-known la.truxmap.com.

    So a mistake I made a year ago is now haunting my page rank. I'd like to know the most painless way of resolving my dilemma. I have received so much press at la.truxmap.com that I can't just kill the site, but could I redirect all requests at la.truxmap.com to truxmap.com/la, and do the same for all supported cities, without trashing the current, satisfactory page rank results I'm getting from la.truxmap.com?

    EDIT: I left out some critical information. I am using Google Apps to manage my domain (that is, to add the subdomains) and Google App Engine to host my site. Google Apps provides a simple mechanism to mask truxmap.appspot.com (the App Engine domain) as la.truxmap.com, but I don't see how I can mask it as truxmap.com/la. If I can get this done, then I can just 301 redirect la.truxmap.com to truxmap.com/la as suggested below. Thanks so much!
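
    For the redirect half, a minimal sketch of a catch-all 301 on the App Engine side, assuming the Python runtime with webapp2 (the handler name and URL shapes are illustrative, not my actual code):

        import webapp2

        class CityRedirect(webapp2.RequestHandler):
            def get(self, path=''):
                city = self.request.host.split('.')[0]    # 'la' from la.truxmap.com
                self.redirect('http://truxmap.com/%s/%s' % (city, path),
                              permanent=True)             # 301, so link equity follows

        app = webapp2.WSGIApplication([(r'/(.*)', CityRedirect)])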

  • Raspberry Pi slows down my entire network

    - by gnusouth
    Whenever my Raspberry Pi is connected to the network (via Ethernet), the entire network is slowed to a crawl. On my main computer, ping times for google.com go from ~10ms to ~200ms and it takes forever to load web pages. Connections are also slow on the Pi, with an apt-get update showing pathetic speeds on the order of 1KB/s. Turning off the Pi completely removes the drag from the network. I've tried static and dynamic IP addresses for the Pi, but both have the same problems. I'm currently using Raspbian (downloaded today), but also had this problem with Arch Linux.

    I've checked the connection's duplex with dmesg | grep -i duplex, which shows that the Pi's connection is running at 100Mbps full duplex, as expected. My modem/router is a Billion 7404VNPX (an Australian thing); relatively high-end, albeit a bit buggy at times (it will occasionally delete all its firewall settings). It assigns IPs in the range 192.168.1.1 to 192.168.1.20 and has 192.168.1.254 as its own IP. When I assign static IPs I tend to use the 192.168.1.200 area.

    Does anyone have any idea as to what could be causing this weird slowdown? Or any tests I could try? Thanks.

  • Xen + Debian network after upgrade squeeze to wheezy

    - by rush
    I've got a Debian + Xen server. After a system upgrade to the stable version, the network doesn't come up after boot; every time after a reboot I need to bring it up manually. The network configuration was not changed during the upgrade. Here is /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 11.22.33.44
            netmask 255.255.255.0
            gateway 11.22.33.1
            nameserver 8.8.8.8

    After boot, ip r shows no route and eth0 has no IP address. Manual ip and route setup goes fine and the network starts working. The messages about the network I've found in dmesg (looks like nothing interesting):

        [    3.894401] ACPI: Fan [FAN3] (off)
        [    3.894444] ACPI: Fan [FAN4] (off)
        [    4.178348] e1000e 0000:00:19.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:1e:67:14:66:c9
        [    4.178351] e1000e 0000:00:19.0: eth0: Intel(R) PRO/1000 Network Connection
        [    4.178392] e1000e 0000:00:19.0: eth0: MAC: 10, PHY: 11, PBA No: 0100FF-0FF
        [    4.178413] e1000e 0000:02:00.0: Disabling ASPM L0s L1
        [    4.178432] xen: registering gsi 16 triggering 0 polarity 1
        --
        [    4.223667] ata5: DUMMY
        [    4.223668] ata6: DUMMY
        [    4.289153] e1000e 0000:02:00.0: eth1: (PCI Express:2.5GT/s:Width x1) 00:1e:67:14:66:c8
        [    4.289155] e1000e 0000:02:00.0: eth1: Intel(R) PRO/1000 Network Connection
        [    4.289245] e1000e 0000:02:00.0: eth1: MAC: 3, PHY: 8, PBA No: 1000FF-0FF
        [    4.506908] usb 1-1: new high-speed USB device number 2 using ehci_hcd
        [    4.542920] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        --
        [   10.362999] EXT4-fs (dm-23): mounted filesystem with ordered data mode. Opts: (null)
        [   10.419103] EXT4-fs (dm-3): mounted filesystem with ordered data mode. Opts: (null)
        [   10.988255] ADDRCONF(NETDEV_UP): eth1: link is not ready
        [   13.175533] Event-channel device installed.
        [   13.287555] XENBUS: Unable to read cpu state
        --
        [   13.288670] XENBUS: Unable to read cpu state
        [   13.965939] Bridge firewalling registered
        [   14.134048] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
        [   14.283862] ADDRCONF(NETDEV_UP): peth0: link is not ready
        [   14.284543] ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
        [   17.800627] e1000e: peth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
        [   17.801377] ADDRCONF(NETDEV_CHANGE): peth0: link becomes ready
        [   18.307278] device peth0 entered promiscuous mode
        [   24.538899] eth1: no IPv6 routers present
        [   28.570902] peth0: no IPv6 routers present

    I've upgraded two servers and I see this behaviour on both of them. How do I fix this and get the network to start automatically on boot?

  • using i7 "gamer" cpu in a HPC cluster

    - by user1219721
    I'm running the WRF weather model. That's a RAM-intensive, highly parallel application, and I need to build an HPC cluster for it. I use a 10Gb InfiniBand interconnect. WRF's performance doesn't depend on core count so much as on memory bandwidth. That's why a Core i7 3820 or 3930K performs better than high-grade Xeons such as the E5-2600 or E7 series.

    Universities seem to use the Xeon E5-2670 for WRF. It costs about $1500. The SPEC2006 fp_rate WRF benchmark shows the $580 i7 3930K performing the same with 1600MHz RAM. What's interesting is that the i7 can handle RAM up to 2400MHz, giving a great performance increase for WRF; then it really outperforms the Xeon. Power consumption is a bit higher, but the difference is still less than 20€ a year. Even including the additional parts I'll need (PSU, InfiniBand, case), the i7 route is still 700€ per CPU cheaper than the Xeon.

    So, is it OK to use "gamer" hardware in an HPC cluster? Or should I do it properly with Xeons? (This is not a critical application; I can handle downtime. I think I don't need ECC?)

  • Which is the best smart automatic file replication solution for cloud-storage-based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is that some files can be in much higher demand than others. The scenario is like this: I have uploaded my birthday video and shared it with all of my friends; I uploaded it to myproject.com and it was stored on one of the clusters, which has a 100Mbit connection. The problem is that once all of my friends want to download the file, they can't, since the bottleneck here is the 100Mbit link, which is about 15MB per second; I have 1000 friends, so they can each only download at 15KB per second. I am not even taking into account that the HDD has to serve the same file to everyone.

    My network infrastructure is as follows: one 1Gbit (client-facing) server connected to 4 storage nodes that have 100Mbit connections. The 1Gbit server can handle the traffic of 1000 users if a storage node can stream more than 15MB per second to it, with visitors streaming directly from the client-facing server instead of the storage nodes. I can do that by replicating the file onto 2 nodes, but I don't want to replicate all files uploaded to my network, since that costs much more.

    So I need a cloud-based system which will push files to replica nodes automatically when demand for those files is high and, when the demand is low, delete them from the other nodes so each file stays on only 1 node. I have looked at Gluster and asked in their IRC channel: Gluster can't do such a thing. It is only able to replicate all of the files or none of them, but I need the cluster software to do this automatically. Any solutions? (Instead of recommending me Amazon S3.)

  • iptables drops some packets on port 80 and I don't know the cause

    - by Janning
    Hi, we are running a firewall with iptables on our Debian Lenny system. I'll show you only the relevant entries of our firewall:

        Chain INPUT (policy DROP 0 packets, 0 bytes)
        target     prot opt in   out  source      destination
        ACCEPT     all  --  lo   *    0.0.0.0/0   0.0.0.0/0
        ACCEPT     all  --  *    *    0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
        ACCEPT     tcp  --  *    *    0.0.0.0/0   0.0.0.0/0   tcp dpt:80 state NEW

        Chain OUTPUT (policy DROP 0 packets, 0 bytes)
        target     prot opt in   out  source      destination
        ACCEPT     all  --  *    lo   0.0.0.0/0   0.0.0.0/0
        ACCEPT     all  --  *    *    0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
        LOGDROP    all  --  *    *    0.0.0.0/0   0.0.0.0/0

    Some packets get dropped each day, with log messages like this:

        Feb  5 15:11:02 host1 kernel: [104332.409003] dropped IN= OUT=eth0 SRC= DST= LEN=1420 TOS=0x00 PREC=0x00 TTL=64 ID=18576 DF PROTO=TCP SPT=80 DPT=59327 WINDOW=54 RES=0x00 ACK URGP=0

    (For privacy reasons I have removed the IP addresses.) This is no reason for any concern, but I just want to understand what's happening. The web server tries to send a packet to the client, but the firewall somehow comes to the conclusion that this packet is "UNRELATED" to any prior traffic. I have set the kernel parameter ip_conntrack_max to a high enough value to be sure all connections are tracked by the iptables state module:

        sysctl -w net.ipv4.netfilter.ip_conntrack_max=524288

    What's funny about it is that I get one connection drop every 20 minutes:

        06:34:54 dropped IN=
        06:52:10 dropped IN=
        07:10:48 dropped IN=
        07:30:55 dropped IN=
        07:51:29 dropped IN=
        08:10:47 dropped IN=
        08:31:00 dropped IN=
        08:50:52 dropped IN=
        09:10:50 dropped IN=
        09:30:52 dropped IN=
        09:50:49 dropped IN=
        10:11:00 dropped IN=
        10:30:50 dropped IN=
        10:50:56 dropped IN=
        11:10:53 dropped IN=
        11:31:00 dropped IN=
        11:50:49 dropped IN=
        12:10:49 dropped IN=
        12:30:50 dropped IN=
        12:50:51 dropped IN=
        13:10:49 dropped IN=
        13:30:57 dropped IN=
        13:51:01 dropped IN=
        14:11:12 dropped IN=
        14:31:32 dropped IN=
        14:50:59 dropped IN=
        15:11:02 dropped IN=

    That's from today, but on other days it looks like this too (sometimes the rate varies). What might be the reason? Any help is greatly appreciated. Kind regards, Janning
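
    Update: for anyone debugging the same thing, this is what I plan to check next (a hedged sketch; the /proc paths are the standard ones on a Lenny-era kernel):

        # how full is the conntrack table, really?
        cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count
        cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max

        # how long an idle established connection is kept before eviction
        cat /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_established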

  • Developer hardware autonomy in a managed desktop environment [closed]

    - by Troy Hunt
    I'm looking for some feedback on how developer PCs are managed within environments that have a strict managed-desktop policy (normally large corporations). For example, many corporate environments control the installation of software and the deployment of patches and virus updates through a centralised channel. This usually means also dictating the OS version and architecture (32-bit versus 64-bit), which will likely also mean standardised hardware configurations. I'm particularly interested in feedback from developers who work in this sort of environment but have a high degree of autonomy over their machines. This might mean choosing your own hardware vendor, OS type and version, and perhaps how the machines are built and maintained.

    I have several specific questions:

    1. How do you satisfy the needs of security, governance etc. whilst maintaining your autonomy? For example, how do you address concerns about keeping virus definitions and OS patches up to date?
    2. Do you have a process for gaining exemption from standard desktop builds and, if so, what do you need to demonstrate in order to get this?
    3. How have you justified this need to the decision makers? Essentially, what is the benefit to your role as a developer of having this degree of autonomy?

    Thanks very much everyone.

    Update: There's a great post from Jean-Paul Boodhoo which addresses the developer-tool component of the question here: http://blog.jpboodhoo.com/TheFallacyOfTheStandardizedDeveloperMachineimage.aspx

  • WINDOWS: Your computer hangs. You can use Windows + R (Run dialog), but performance is so halted that Task Manager is unusable

    - by John Sullivan
    The question is: what processes are available to try to recover from total system instability, before pulling the plug, when all we can do is run programs or batch files from the Run dialog (Windows + R), and performance is so dead that Task Manager, Process Explorer, or other programs with visual GUIs are not usable?

    I am not a Windows expert, but ideally someone out there has written a program that does more or less the following: immediately set its own priority to extremely high (or perhaps I can set that from the run prompt), then evaluate performance bottlenecks. E.g. is the CPU at 100%? If so, identify the offending program(s) or problems; attempt and log fixes; then provide crude feedback asking the user if performance has stabilized enough to abort; wait a few seconds; if no feedback, continue; and so on. Eventually it would try to do any system cleanup if it decides it cannot recover, and perhaps finally give a series of beeps, or what have you, to say "OK, I give up, time to pull the plug". Ideally it would create a log when able.

    These kinds of horrible hangs are a situation where surely trying something, anything, is better than nothing -- as long as that something is intelligent -- when the alternative is ripping out the power cord. Again, I am not a Windows expert, so perhaps there is a much more elegant "hands on" approach I am not aware of.
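
    Lacking such a tool, a couple of things that can at least be launched blind from the Run dialog (standard Windows commands on XP Pro and later; whether they ever get scheduled depends on how wedged the machine is):

        cmd /c start /high taskmgr.exe
        cmd /c tasklist /v > "%TEMP%\hang-snapshot.txt"

    The first asks for Task Manager at high scheduling priority; the second dumps a snapshot of running processes to a file that can be inspected after the reboot.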

  • Reverse proxy for mailserver (SMTP + HTTP for web client)

    - by ba
    I'm looking at doing some reverse-proxy work for a mail server with a corresponding web client. Both servers are running on the same machine; this is not a server with a high load. :)

    The solution I've discussed with friends is having the mail server/web client on our internal network, and putting a reverse proxy in the DMZ to service both the SMTP and the web client's HTTP traffic to the mail server on the internal network. From what I understand, this is the recommended secure solution?

    So far, for the SMTP-proxy part I've thought of using Postfix, which will receive mail, apply Spamhaus and similar anti-spam measures and, if it all checks out, send the mail to the mail server on the inside. The mail server on the inside will send all outgoing mail to the proxy, which will then send it out onto the Internet.

    For the web client I'm not sure exactly which software I should be running on the proxy machine. I've been thinking about using Squid, but that's basically based on the fact that I know Squid is an HTTP proxy. The web client data will be sent out over SSL. Reading around here on Server Fault, I've seen other people using Apache with mod_proxy + mod_security for similar situations.

    Am I thinking correctly about this solution? What software would you guys use, and with which modules? Thanks in advance for the help! :)
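
    Edit: to make the Postfix leg concrete, a minimal sketch of the DMZ relay idea using hypothetical names and addresses (example.com and 192.168.0.10 are placeholders, not the real setup):

        # main.cf on the DMZ proxy: accept mail for the internal domain only
        relay_domains  = example.com
        transport_maps = hash:/etc/postfix/transport
        # anti-spam at the edge, e.g.
        smtpd_recipient_restrictions = ..., reject_rbl_client zen.spamhaus.org

        # /etc/postfix/transport: hand accepted mail to the internal server
        example.com    smtp:[192.168.0.10]

    The internal mail server would then name the proxy as its relayhost for outbound mail.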

  • Gigabit network limited to 25MB/s by CPU. How to make it faster?

    - by netvope
    I have an Acer Aspire R1600-U910H with an nForce gigabit network adapter. Its maximum TCP throughput is about 25MB/s, and apparently it is limited by the single-core Intel Atom 230: when the maximum throughput is reached, CPU usage is about 50%-60%, which corresponds to full utilization considering this is a Hyper-Threading-enabled CPU. The same problem occurs on both Windows XP and Ubuntu 8.04. On Windows, I have installed the latest nForce chipset driver, disabled power-saving features, and enabled checksum offload. On Linux, the default driver has checksum offload enabled; there is no Linux driver available on Nvidia's website. ethtool -k eth0 shows that checksum offload is enabled:

        Offload parameters for eth0:
        rx-checksumming: on
        tx-checksumming: on
        scatter-gather: on
        tcp segmentation offload: on
        udp fragmentation offload: off
        generic segmentation offload: off

    The following is the output of powertop when the network is idle:

        Wakeups-from-idle per second : 61.9     interval: 10.0s
        no ACPI power usage estimate available
        Top causes for wakeups:
          90.9% (101.3)   <interrupt> : eth0
           4.5% (  5.0)   iftop : schedule_timeout (process_timeout)
           1.8% (  2.0)   <kernel core> : clocksource_register (clocksource_watchdog)
           0.9% (  1.0)   dhcdbd : schedule_timeout (process_timeout)
           0.5% (  0.6)   <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

    And when the maximum throughput of about 25MB/s is reached:

        Wakeups-from-idle per second : 11175.5  interval: 10.0s
        no ACPI power usage estimate available
        Top causes for wakeups:
          99.9% (22097.4) <interrupt> : eth0
           0.0% (  5.0)   iftop : schedule_timeout (process_timeout)
           0.0% (  2.0)   <kernel core> : clocksource_register (clocksource_watchdog)
           0.0% (  1.0)   dhcdbd : schedule_timeout (process_timeout)
           0.0% (  0.6)   <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

    Notice the 20000 interrupts per second. Could this be the cause of the high CPU usage and low throughput? If so, how can I improve the situation? The other computers on the network can usually transfer at 50+MB/s without problems.

    And a minor question: how can I find out which driver is in use for eth0?
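
    Edit: what I plan to try next, as a hedged sketch with standard ethtool invocations (whether the nForce driver honours coalescing settings is an open question):

        ethtool -i eth0                   # answers the minor question: driver name and version
        ethtool -c eth0                   # current interrupt-coalescing settings
        ethtool -C eth0 rx-usecs 100      # e.g. batch RX interrupts to cut the ~20k/s rate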

  • Is it a good idea to switch to an SSD to use less battery?

    - by Walter Maier-Murdnelch
    I am thinking of buying an SSD for my laptop, mainly for the purpose of extended operating time when running on battery. At the moment I use a Hitachi HTS545032B9A300 (320GB) (datasheet) as the main drive and a Seagate Momentus 5400.3 120GB as a secondary drive. I dual-boot Windows and Linux, but I don't need the Windows partition any longer, so a 120GB SSD would be more than sufficient space-wise. Speed is not an issue for me: I make heavy use of tmpfs (RAM disk) within Linux, and transfers of bigger files mostly go through some network filesystem anyway, so a cheaper SSD should do. For the purpose of comparison I chose the OCZ Vertex Plus 120GB.

    Power consumption is always a big promotional point the industry uses to make me want to buy their SSDs; some sheet on the OCZ page provides an astonishing comparison of desktop HDDs and SSDs. The numbers I got comparing my laptop HDD and their SSD were not so astonishing any more.

    Hitachi 320GB HDD:

        Startup (W, peak, max.)     4.5
        Seek (W, avg.)              1.7
        Read / Write (W, avg.)      1.4
        Performance idle (W, avg.)  1.3
        Active idle (W, avg.)       0.8
        Low power idle (W, avg.)    0.5
        Standby (W, avg.)           0.2
        Sleep                       0.1

    OCZ 120GB SSD:

        1.5W active
        0.3W standby

    I see that there are differences, but they don't seem as big as I thought they were. And compared to the power consumption of the rest of my system, I wonder if it makes a difference at all. Have I just taken the wrong look at the whole thing, or might I be better off buying another battery for my laptop?
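
    A rough back-of-envelope with the figures above, assuming the laptop as a whole draws on the order of 12 W (an assumption, not a measurement): if the HDD averages roughly 1 W across idle and read/write, and the SSD roughly 0.5 W across active and standby, the saving is about 0.5 W, i.e. 0.5 / 12 ≈ 4% longer runtime on battery. A second battery, by comparison, roughly doubles it.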
