Search Results

Search found 5472 results on 219 pages for 'jack low'.

Page 196/219 | < Previous Page | 192 193 194 195 196 197 198 199 200 201 202 203  | Next Page >

  • Memory interleaving

    - by Tim Green
    Hello, I have this question that has me rather confused. Suppose a 1G x 32-bit main memory is built from 256M x 4-bit RAM chips and that this memory is byte-addressable. I have deduced that one would require 4 × 1G = 2^2 × 2^30 = 2^32 bytes, so 32 bits to address the full memory. My problem is this: given memory (byte) address 14, determine which memory module it goes into. (There would have to be 8 chips per module to make the 32-bit wide memory, and 4 modules overall, giving 32 chips in total. Modules are numbered from 0.) With high-order interleaving it appears trivial that it's the first memory module (module 0), given that the leading address bits are all 0. However, low-order interleaving has me stumped. I can't figure out (for sure) how many bits are used to select a memory module (possibly 2, given there are 4 in total?). The given solution is module 3. This is not homework in the usual sense, so I will not be tagging it as such.
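    A quick sanity check of the arithmetic, as a sketch in Python (the constants simply restate the numbers from the question; under low-order interleaving the module is selected by the bits just above the byte offset, under high-order interleaving by the top bits of the word address):

        BYTES_PER_WORD = 4               # each module is 32 bits wide
        NUM_MODULES = 4                  # 8 chips per module, 32 chips total
        WORDS_PER_MODULE = 256 * 2**20   # 256M words per module

        byte_addr = 14
        word_addr = byte_addr // BYTES_PER_WORD       # word 3 holds bytes 12-15

        low_order = word_addr % NUM_MODULES           # low 2 bits of the word address
        high_order = word_addr // WORDS_PER_MODULE    # top 2 bits of the word address

        print(low_order)    # 3 -- matches the given solution
        print(high_order)   # 0 -- the "trivial" high-order answer

    So two bits do select the module, and under low-order interleaving byte address 14 lands in module 3.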

    Read the article

  • How can I save a file on an http server using just http requests and javascript?

    - by user170902
    Hi all, I'm trying to understand some of the basics of web servers/HTML/JavaScript etc. I'm not interested in any of the various frameworks like PHP/ASP; I'm just trying to get a low-level look at things (for now). At the moment I'm trying to understand how data can be sent to and saved on the backend, but I must admit I'm getting a bit lost in the various specs and technical material on W3 at the moment! If I have some data, say XML, that I want to save on the backend, how do I go about it? I assumed I would have to use something like an HTTP PUT or POST request to an HTML doc containing some JavaScript that would in turn process the data, e.g. save it somewhere. From googling around I can see that this doesn't seem to be the case, so my assumptions are completely wrong! So how is it done? Can it be done, or do I have to use something like PHP or ASP? TIA. bg
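    The short answer is that JavaScript embedded in an HTML page runs in the browser, not on the server, so it can send the PUT/POST but cannot perform the save; some process on the server has to accept the request and write the file. That process doesn't have to be PHP or ASP, though. As a minimal sketch (the file name and port here are arbitrary), a tiny Python handler that accepts an HTTP PUT and writes the body to disk:

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class SaveHandler(BaseHTTPRequestHandler):
            def do_PUT(self):
                # Read the request body and persist it server-side.
                length = int(self.headers.get("Content-Length", 0))
                data = self.rfile.read(length)
                with open("upload.xml", "wb") as f:   # hypothetical target file
                    f.write(data)
                self.send_response(201)               # 201 Created
                self.end_headers()

        HTTPServer(("", 8000), SaveHandler).serve_forever()

    Client-side JavaScript (or curl) can then issue the PUT against that URL; the browser's only job is sending the request.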

    Read the article

  • Footprint of Lua on a PPC Micro

    - by Adam Shiemke
    We're developing some code on Freescale PPC micros (the 5517 and 5668 at the moment), and I was wondering if we could put Lua on them. The devices need to be easily programmed/reconfigured in the field; the current product uses a proprietary interpreted logic language that can be loaded in, and our software (written in C) runs an interpreter. I would like to move to a better language (the current implementation is a bit buggy and slow), so I'm considering Lua, but the memory footprint must be very low. For the 5517 (which we may not use), the maximum RAM is 80K. Things are better on the 5668, with 592K of RAM. So does anyone know if I can put Lua on bare metal? We're effectively not running an OS. If so, are there any estimates of what kind of memory footprint we might see, and how much effort would it take? Failing this, does anyone know of any kind of interpreter that might be better in a memory-constrained environment without an OS? Or are we better off just rolling our own?

    Read the article

  • Hard Drive POV clock

    - by SkinnyMAN
    I am building a hard drive POV clock. (Google it, they are pretty cool.) I am working on the code for it; right now all I want to do is get the hang of making it do simple patterns with the RGB LEDs. I am wondering if anyone has any ideas on how to do something simple like making a red line rotate around the platter. Right now what I have is an interrupt that triggers a function:

        int gLED = 8; // pins for RGB LED strip
        int rLED = 9;
        int bLED = 10;

        void setup() {
          pinMode(gLED, OUTPUT);
          pinMode(rLED, OUTPUT);
          pinMode(bLED, OUTPUT);
          attachInterrupt(0, ledPattern, FALLING); // fires once per platter revolution
        }

        void loop() {}

        void ledPattern() {
          digitalWrite(gLED, HIGH);   // This will make a stable image of a slice of the
          delayMicroseconds(500);     // platter, but it does not move.
          digitalWrite(gLED, LOW);
        }

    That is the main part of the code, plus the setup()/loop() boilerplate Arduino requires. What I am trying to figure out is how I can make that slice rotate around the platter. Eventually I will make the pattern more interesting by adding in other colors. Any ideas?

    Read the article

  • Analyzing BSOD without a dump

    - by eeoo2555
    One of the drivers I'm developing has caused a BSOD. Unfortunately a dump file was not created, since dumps were not configured / resources were too low. I have been trying to reproduce the crash, but no luck so far. Is there any way to get some info using WinDbg or any other tool? I have this information: a screenshot of the BSOD, the .sys file, its PDB, the source code, and the machine it crashed on. I have everything except the dump itself; as I said above, no dump (or minidump) exists, and this is the actual problem. For this specific crash, I know I won't be able to get the stack; just getting the specific line of code would be good enough. Because the BSOD contains the module's address, it seems like there should be a way to determine exactly which line it is. As I mentioned above, I do have the .sys file, the PDB and the source code. The specific bug check, per MSDN, is SYSTEM_SERVICE_EXCEPTION. How can I work out from that the specific line, and/or the specific exception raised? Your help will be much appreciated.

    Read the article

  • BIOS upgrade lowers CPU temperature

    - by N.N.
    Setup: I've got a system with an Asus P8Z68-V PRO motherboard and an Intel Core i7-2600K CPU running at stock speed (no overclocking), which I cool with a Noctua NH-U12P. On the heatsink I've got the two included fans connected via the included Low-Noise Adapters (L.N.A.), 1100 RPM, 16.9 dB(A). In the BIOS settings I've set the CPU and chassis fan profile to silent.
    Issue: Yesterday I upgraded from BIOS version 0501 to 0606. After the upgrade I checked the temperatures in the BIOS monitor and was surprised to see that the CPU temperature was only ~30°C. Before the upgrade the CPU temperature was ~50°C with the same BIOS settings (see below for details on temperatures). How can this be? It seems a bit odd that a BIOS upgrade can lower the CPU temperature by 20°C, and it also seems odd that the CPU temperature is lower than the chassis temperature.
    Temperatures: Whenever I've checked temperatures, the room temperature has been ~23°C. I haven't changed the placement of the computer, the hardware, or the cooling setup between BIOS versions.
    BIOS version 0501 (BIOS monitor): CPU ~50°C, chassis ~33°C. I haven't got any temperature measurements from lm-sensors or the like for version 0501, because I only discovered the issue after upgrading to version 0606, and the BIOS updater utility won't let me downgrade to version 0501 (it says "outdated image" when I try to load it).
    BIOS version 0606 (BIOS monitor): CPU ~30°C, chassis ~33°C. lm-sensors in Ubuntu 11.04 Desktop 64-bit (sudo sensors after an uptime of 4 h 52 min and a load average of 0.22, 0.18, 0.15):

        coretemp-isa-0000  Adapter: ISA adapter  Core 0: +32.0°C (high = +80.0°C, crit = +98.0°C)
        coretemp-isa-0001  Adapter: ISA adapter  Core 1: +35.0°C (high = +80.0°C, crit = +98.0°C)
        coretemp-isa-0002  Adapter: ISA adapter  Core 2: +29.0°C (high = +80.0°C, crit = +98.0°C)
        coretemp-isa-0003  Adapter: ISA adapter  Core 3: +36.0°C (high = +80.0°C, crit = +98.0°C)

    The BIOS monitor temperatures were checked directly after the lm-sensors temperatures. BIOS versions 0706, 0801, 1101 and 3203 give the same kind of temperatures, both in the BIOS monitor and with lm-sensors, as 0606.
    Information from Asus: The 0606 changelog mentions nothing explicit about CPU temperature (but item 3, as indicated by sidran32, might affect temperatures): "P8Z68-V PRO 0606 BIOS with IRST 10.6.0.1002: 1. Enable the support of Intel Rapid Storage Technology version 10.6.0.1002 Release; 2. Improve DRAM compatibility; 3. Improve System stability; 4. Improve compatibility with some Raid card model; 5. Increase IGD share memory size to 512MB". However, the following FAQ might give a hint: "Q: I find that the CPU temperature reading in BIOS is about 10~20 degrees centigrade hotter than the reading in OS. Is it normal? A: That is normal, as BIOS does not send the idle command to the CPU, making most of the power saving features useless. You should be getting a similar reading if you disable EIST/C1E/CPU C3 Report/CPU C6 Report in BIOS."

    Read the article

  • Fedora 12 Wireless problems (Intel Wireless 4965AGN Card)

    - by Ninefingers
    Hi all, I'm having an interesting experience with my wireless card at the moment. Basically, it goes like this: I connect to the local wireless network (Netgear router); it works, briefly, allowing me to browse a web page or maybe two if I'm lucky; it then stops sending any packets, while still reporting itself as connected. Now, me being me, I've had a look to see what I can find. wpa_supplicant.log looks like this:

        Trying to associate with valid_mac:a2:30 (SSID='vennardwireless' freq=2462 MHz)
        Associated with valid_mac:a2:30
        WPA: Key negotiation completed with valid_mac:a2:30 [PTK=CCMP GTK=TKIP]
        CTRL-EVENT-CONNECTED - Connection to valid_mac:a2:30 completed (reauth) [id=0 id_str=]
        CTRL-EVENT-DISCONNECTED - Disconnect event - remove keys

    So that's working fine. dmesg | grep iwl spits out this:

        iwlagn: Intel(R) Wireless WiFi Link AGN driver for Linux, 1.3.27kds
        iwlagn: Copyright(c) 2003-2009 Intel Corporation
        iwlagn 0000:03:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
        iwlagn 0000:03:00.0: setting latency timer to 64
        iwlagn 0000:03:00.0: Detected Intel Wireless WiFi Link 4965AGN REV=0x4
        iwlagn 0000:03:00.0: Tunable channels: 13 802.11bg, 19 802.11a channels
        iwlagn 0000:03:00.0: irq 32 for MSI/MSI-X
        phy0: Selected rate control algorithm 'iwl-agn-rs'
        iwlagn 0000:03:00.0: firmware: requesting iwlwifi-4965-2.ucode
        iwlagn 0000:03:00.0: loaded firmware version 228.61.2.24
        Registered led device: iwl-phy0::radio
        Registered led device: iwl-phy0::assoc
        Registered led device: iwl-phy0::RX
        Registered led device: iwl-phy0::TX
        iwlagn 0000:03:00.0: iwl_tx_agg_start on ra = 00:24:b2:32:a3:30 tid = 0
        iwlagn 0000:03:00.0: iwl_tx_agg_start on ra = 00:24:b2:32:a3:30 tid = 0

    So that's working too. I can also ping 192.168.0.1 -I wlan0 and arping 192.168.0.1 -I wlan0 to the router until the network falls over. uname -r: 2.6.32.10-90.fc12.x86_64. The laptop is a Core 2 Duo (2 GHz) with 3 GB RAM. Other symptoms I've noticed: Wireshark freezes when I capture on the "broken" interface, until I disconnect. I am using NetworkManager as per normal. Stupidly enough, I can connect to the same router via eth0 and a Cat6 cable just fine, and everyone else can connect to the AP fine (from Windows). Yes, I'm sat right next to it, not trying to access a hotspot on the other side of the world. Any ideas? Is this a broken update? (I intend to reboot and test an older kernel later.) Has anyone else come across this? Edit: iwconfig wlan0 rate auto is the setting I'm using for rates. Also, according to NetworkManager the network is still connected. Thanks for any pointers / advice.

    Read the article

  • Unable to start Tomcat6 with HTTPS enabled

    - by ram
    I have the following server.xml settings for my Tomcat 6 server:

        <!-- COMMENTED <Connector port="8080" maxThreads="150" enableLookups="false"
             acceptCount="100" scheme="http" redirectPort="8443"/> -->
        <!-- COMMENTED <Connector port="80" maxThreads="150" enableLookups="false"
             acceptCount="100" scheme="http" redirectPort="443"/> -->
        <Connector port="443" maxHttpHeaderSize="8192" maxThreads="150"
                   enableLookups="false" disableUploadTimeout="true" acceptCount="100"
                   scheme="https" secure="true" SSLEnabled="true"
                   SSLCertificateFile="%SSL_CERT%" SSLCertificateKeyFile="%SSL_KEY%"
                   SSLCipherSuite="ALL:!ADH:!kEDH:!SSLv2:!EXPORT40:!EXP:!LOW"
                   compression="on"
                   compressableMimeType="text/html,text/xml,text/plain,application/javascript,application/json,text/javascript"/>

    The complete server.xml is here. But when I try to start the application, I get the following error in the catalina.*.log file:

        INFO: Initializing Coyote HTTP/1.1 on http-80
        Apr 7, 2013 8:38:38 PM org.apache.coyote.http11.Http11AprProtocol init
        SEVERE: Error initializing endpoint
        java.lang.Exception: Invalid Server SSL Protocol (error:00000000:lib(0):func(0):reason(0))
            at org.apache.tomcat.jni.SSLContext.make(Native Method)
            at org.apache.tomcat.util.net.AprEndpoint.init(AprEndpoint.java:729)
            at org.apache.coyote.http11.Http11AprProtocol.init(Http11AprProtocol.java:107)
            at org.apache.catalina.connector.Connector.initialize(Connector.java:1049)
            at org.apache.catalina.core.StandardService.initialize(StandardService.java:703)
            at org.apache.catalina.core.StandardServer.initialize(StandardServer.java:838)
            at org.apache.catalina.startup.Catalina.load(Catalina.java:538)
            at org.apache.catalina.startup.Catalina.load(Catalina.java:562)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
        Apr 7, 2013 8:38:38 PM org.apache.catalina.core.StandardService initialize
        SEVERE: Failed to initialize connector [Connector[HTTP/1.1-443]]
        LifecycleException: Protocol handler initialization failed: java.lang.Exception: Invalid Server SSL Protocol (error:00000000:lib(0):func(0):reason(0))
            at org.apache.catalina.connector.Connector.initialize(Connector.java:1051)
            at org.apache.catalina.core.StandardService.initialize(StandardService.java:703)
            at org.apache.catalina.core.StandardServer.initialize(StandardServer.java:838)
            at org.apache.catalina.startup.Catalina.load(Catalina.java:538)
            at org.apache.catalina.startup.Catalina.load(Catalina.java:562)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)

    I've already checked the following:
    - I have given read permissions to everyone on the .crt and .key files.
    - I copied this server.xml to a different, working Tomcat 6 server and it works there; the server.xml from that working server doesn't work here, failing with the same error.
    - It works fine with just HTTP enabled.
    - Explicitly mentioning the protocol in the Connector, i.e. protocol="org.apache.coyote.http11.Http11AprProtocol", results in the same exception.
    Please help me if I am missing something. Thanks in advance.

    Read the article

  • JVM process resident set size "equals" max heap size, not current heap size

    - by Volune
    After a bit of reading about JVM memory (here, here, here, others I forgot...), I was expecting the resident set size of my Java process to be roughly equal to the current heap capacity. That's not what the numbers say; it seems to be roughly equal to the max heap capacity instead. Resident set size:

        # echo 0 $(cat /proc/1/smaps | grep Rss | awk '{print $2}' | sed 's#^#+#') | bc
        11507912
        # ps -C java -O rss | gawk '{ count++; sum += $2 }; END { count--; print "Number of processes =", count; print "Memory usage per process =", sum/1024/count, "MB"; print "Total memory usage =", sum/1024, "MB" }'
        Number of processes = 1
        Memory usage per process = 11237.8 MB
        Total memory usage = 11237.8 MB

    Java heap:

        # jmap -heap 1
        Attaching to process ID 1, please wait...
        Debugger attached successfully.
        Server compiler detected.
        JVM version is 24.55-b03

        using thread-local object allocation.
        Garbage-First (G1) GC with 18 thread(s)

        Heap Configuration:
           MinHeapFreeRatio = 10
           MaxHeapFreeRatio = 20
           MaxHeapSize      = 10737418240 (10240.0MB)
           NewSize          = 1363144 (1.2999954223632812MB)
           MaxNewSize       = 17592186044415 MB
           OldSize          = 5452592 (5.1999969482421875MB)
           NewRatio         = 2
           SurvivorRatio    = 8
           PermSize         = 20971520 (20.0MB)
           MaxPermSize      = 85983232 (82.0MB)
           G1HeapRegionSize = 2097152 (2.0MB)

        Heap Usage:
        G1 Heap:
           regions = 2560, capacity = 5368709120 (5120.0MB),
           used = 1672045416 (1594.586769104004MB), free = 3696663704 (3525.413230895996MB),
           31.144272834062576% used
        G1 Young Generation - Eden Space:
           regions = 627, capacity = 3279945728 (3128.0MB),
           used = 1314914304 (1254.0MB), free = 1965031424 (1874.0MB), 40.089514066496164% used
        Survivor Space:
           regions = 49, capacity = 102760448 (98.0MB),
           used = 102760448 (98.0MB), free = 0 (0.0MB), 100.0% used
        G1 Old Generation:
           regions = 147, capacity = 1986002944 (1894.0MB),
           used = 252273512 (240.5867691040039MB), free = 1733729432 (1653.413230895996MB),
           12.702574926293766% used
        Perm Generation:
           capacity = 39845888 (38.0MB), used = 38884120 (37.082786560058594MB),
           free = 961768 (0.9172134399414062MB), 97.58628042120682% used

        14654 interned Strings occupying 2188928 bytes.

    Are my expectations wrong? What should I expect? I need the heap to be able to grow during spikes (to avoid very slow full GCs), but I would like the resident set size to be as low as possible the rest of the time, to benefit the other processes running on the server. Is there a better way to achieve that? This is Linux 3.13.0-32-generic x86_64, java version "1.7.0_55", running in Docker version 1.1.2. Java is running Elasticsearch 1.2.0:

        /usr/bin/java -Xms5g -Xmx10g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -Xss256k -Djava.awt.headless=true -XX:+UseG1GC -XX:MaxGCPauseMillis=350 -XX:InitiatingHeapOccupancyPercent=45 -XX:+AggressiveOpts -XX:+UseCompressedOops -XX:-OmitStackTraceInFastThrow -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintClassHistogram -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -Xloggc:/opt/elasticsearch/logs/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/elasticsearch/logs/heapdump.hprof -XX:ErrorFile=/opt/elasticsearch/logs/hs_err.log -Des.logger.port=99999 -Des.logger.host=999.999.999.999 -Delasticsearch -Des.foreground=yes -Des.path.home=/opt/elasticsearch -cp :/opt/elasticsearch/lib/elasticsearch-1.2.0.jar:/opt/elasticsearch/lib/*:/opt/elasticsearch/lib/sigar/* org.elasticsearch.bootstrap.Elasticsearch

    There are actually 5 Elasticsearch nodes, each in a different Docker container; all have about the same memory usage. Some stats about the index: size 9.71Gi (19.4Gi), docs 3,925,398 (4,052,694).
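    For what it's worth, a small Python equivalent of the smaps pipeline above, which just sums the Rss: fields (in kB) for a given PID; purely a convenience sketch:

        import sys

        def rss_kb(pid):
            # Sum the Rss: fields of /proc/<pid>/smaps; values are reported in kB.
            total = 0
            with open("/proc/%s/smaps" % pid) as f:
                for line in f:
                    if line.startswith("Rss:"):
                        total += int(line.split()[1])
            return total

        print(rss_kb(sys.argv[1]), "kB")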

    Read the article

  • Best available technology for layered disk cache in linux

    - by SpliFF
    I've just bought a 6-core Phenom with 16G of RAM. I use it primarily for compiling and video encoding (and occasional web/db). I'm finding all activities get disk-bound, and I just can't keep all 6 cores fed. I'm buying an SSD RAID to sit between the HDD and tmpfs. I want to set up a "layered" filesystem where reads are cached in tmpfs but writes safely go through to the SSD. I then want files (or blocks) on the SSD that haven't been read lately to be written back to a HDD using a compressed FS or block layer. So, basically: reads check tmpfs, then the SSD, then the HD; writes go straight to the SSD (for safety), then tmpfs (for speed); and periodically, or when space gets low, the least frequently accessed files move down one layer. (See the sketch below for the intended policy.) I've seen a few projects of interest: CacheFS, cachefsd and bcache seem pretty close, but I'm having trouble determining which are practical. bcache seems a little risky (early adoption); cachefs seems tied to specific network filesystems. There are "union" projects, unionfs and aufs, that let you mount filesystems over each other (usually a USB device over a DVD), but both are distributed as patches, and I get the impression this sort of "transparent" mounting was going to become a kernel feature rather than a FS. I know the kernel has a built-in disk cache, but it doesn't seem to work well with compiling: I see a 20x speed improvement when I move my source files to tmpfs. I think it's because the standard buffers are dedicated to a specific process, and compiling creates and destroys thousands of processes during a build (just guessing there). It looks like I really want those files precached. I've read that tmpfs can use virtual memory; in that case, is it practical to create a giant tmpfs with swap on the SSD? I don't need to boot off the resulting layered filesystem; I can load grub, the kernel and the initrd from elsewhere if needed. So that's the background. The question has several components, I guess: a recommended FS and/or block layer for the SSD and the compressed HDD; recommended mkfs parameters (block size, options, etc.); a recommended cache/mount technology to bind the layers transparently; required mount parameters; and required kernel options, patches, etc.
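    To make the intended policy concrete, a toy model of it in Python (in-memory dicts stand in for the three tiers; this is obviously not a real filesystem, just the promotion/demotion logic described above):

        # Three tiers, fastest first; dicts stand in for real storage.
        tmpfs, ssd, hdd = {}, {}, {}

        def read(path):
            for tier in (tmpfs, ssd, hdd):      # check tmpfs, then SSD, then HD
                if path in tier:
                    tmpfs[path] = tier[path]    # promote on read so re-reads stay fast
                    return tier[path]
            raise FileNotFoundError(path)

        def write(path, data):
            ssd[path] = data                    # hit durable storage first (safety)
            tmpfs[path] = data                  # then the fast layer (speed)

        def demote(path):
            # Periodically, or when space gets low, push cold data down one layer.
            if path in tmpfs:
                del tmpfs[path]                 # still safe: the copy lives on the SSD
            elif path in ssd:
                hdd[path] = ssd.pop(path)       # cold data lands on the compressed HDD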

    Read the article

  • free open-source linux screenshot & ocr tool

    - by Gryllida
    I'm looking for a tool that can capture a screen region, pass it to OCR, and put the result on the clipboard.

        import ppm:- | gocr -i - | xclip -selection c

    works, but gocr is unreliable: simple text on a web page comes out with errors. It is a clear font, but the OCR tool always misses "r" and replaces it with an underscore.

        import ppm:- | ocrad -i - | xclip -selection c

    says: ocrad: maxval 255 in ppm "P6" file. tesseract needs an image file and does not accept piped input. xfce4-screenshooter does not do OCR. ABBYY Screenshot Reader is proprietary. tessnet2 is freeware running on a proprietary platform. Google Docs can OCR screenshots in a batch, but my data is confidential and better not put online. Graphical-interface solutions would be acceptable for this question, too. There are a number of existing SuperUser questions about OCR; they fall into several categories, none of which answers this question.
    Questions just about OCR, without the screenshot-taking part:
    - Open Source OCR for linux
    - Free OCR for Arabic text
    - Looking for recommendations on OCR problem - tabular numeric data
    - Which has better OCR applications: Ubuntu, or Mac/iPad, or Windows?
    - How can I perform OCR from the command line?
    - OCR solution on linux machine from command line (duplicate)
    - Free OCR software
    - OCR for Sanskrit (OR devanagari)
    - Copy image and paste to OCR (windows)
    File-processing OCR, instead of screenshots:
    - Online OCR website for processing an entire pdf file at one time?
    - Practical OCR solution for converting a large book to a digital format?
    - How to extract text with OCR from a PDF on Linux?
    - Batch-OCR many PDFs
    - OCR Image based PDF
    - Copy image and paste to OCR
    - Extract OCR text from Evernote
    - OCR in Word 2013
    - Replace (OCR) garbled text in PDF?
    Processing files prior to running OCR:
    - How can I make OCR recognize my documents' text better?
    - Tesseract OCR recognition bilingual document. mistakes tolerance level setup
    - OCR for low quality images
    - How do I get the best quality screenshot for OCR (Optical Character Recognition) and what tool would be the best for screenshots?
    OCR training:
    - Training Tesseract-OCR for english language fonts
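    For what it's worth, tesseract's need for a real file is easy to work around with a temporary file. A sketch in Python gluing the same tools together (it assumes ImageMagick's import, tesseract and xclip are installed; import waits for you to drag out a region):

        import os, subprocess, tempfile

        with tempfile.TemporaryDirectory() as d:
            png = os.path.join(d, "shot.png")
            out = os.path.join(d, "out")                 # tesseract appends .txt itself
            subprocess.check_call(["import", png])       # interactive region grab
            subprocess.check_call(["tesseract", png, out])
            with open(out + ".txt", "rb") as f:
                subprocess.run(["xclip", "-selection", "c"], stdin=f, check=True)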

    Read the article

  • Why can't I play DVDs on Windows 8 Pro with Media Center Pack?

    - by ligos
    I have a laptop with Windows 8 Pro with Media Center (64-bit), but neither Media Player nor Media Center can play DVDs. Have I done something wrong? Did the Feature Pack not install correctly? Should this work? Can I somehow uninstall and reinstall the Media Pack? Details: I upgraded my Windows 7 Home Premium laptop to Windows 8 Pro based on Microsoft's low pricing. I also grabbed my free upgrade to the Media Pack and followed the instructions on that page to add the feature pack. Alas! I still cannot play DVDs via either Media Center or Media Player. Various context: thinking I might need to reinstall the pack, I found that I could no longer add any more feature packs (searching for "add features" in Settings only shows Turn Windows Features On and Off). Media Center and Media Player are both enabled in Windows Features. I cannot see any way to remove or downgrade from the Media Pack, nor to add any more feature packs. I installed a codec pack (32-bit) from Shark007, which has not allowed me to play DVDs (although it did allow me to play various other media files). Media Player can play DTV recorded on another Windows 7 box, but Media Center cannot. VLC plays DVDs OK, but I'd prefer to figure out the root cause of this problem. There were no errors or other indications that the Media Pack failed to install; the installation itself was quite smooth, although I have not checked my event log in detail. Before upgrading to Windows 8, I could play DVDs OK. Screenshots: System Information shows I have Windows 8 Pro with Media Center. When playing a DVD, Media Player gives an error: "The selected file has an extension that is not recognised by windows..."; when you click Yes, it fails, saying: "Windows Media Player cannot find the file...". Media Center says: "The file type is not recognised and cannot be played", along with some codec-related stuff. I can browse the files OK via My Computer on any video DVD.

    Read the article

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency, but occasionally (maybe every minute or two) we get one or two huge-latency writes (I have seen 10 seconds and above) without any real explanation. Platform: Windows Server 2003 with NTFS. Monitoring: Sysinternals Process Monitor (see link below) and our own application logging. We have tried multiple things gleaned from a few websites, e.g.:
    - making the first part of file names unique, to aid 8.3 name generation
    - writing files to multiple directories
    - changing Intel Disk Write Caching
    - Windows File/Printer Sharing: Minimize memory used / Balance / Maximize data throughput for file sharing / Maximize data throughput for network applications (System > Advanced > Performance > Advanced)
    - NtfsDisableLastAccessUpdate, via fsutil behavior set disablelastaccess 1
    - disabling 8.3 name generation, via fsutil behavior set disable8dot3 1, plus a restart
    - enabling a large file system cache
    - disabling paging of the kernel code
    - IO Page Lock Limit
    - turning the Indexing Service off (and on)
    But nothing seems to make much difference. There's a whole host of things we haven't tried yet, but we wondered if anyone has come across the same problem, a reason, and a solution (programmatic or not). We can reproduce the problem using IOMeter and a simple setup:
    1. Start IOMeter and remove all but the first worker thread in Topology using the disconnect button.
    2. Select the worker thread, put a cross in the box next to the disk you want to use in the Disk Targets tab, and put 2000000 in Maximum Disk Size. (NOTE: you must have at least 1GB of free space; the sector size is 512 bytes.)
    3. Create a new access specification and add it to the worker thread: Transfer Request Size = 22kB, 100% Sequential, Percent of Access Spec = 100%, Percent Read/Write = 100% Write.
    4. Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds, and both 'Number of Workers to Spawn Automatically' settings to zero.
    5. Select the worker thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings.
    6. Hit the Go button (green flag) and monitor the Results tab.
    The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine. Our machine isn't exactly slow (a Xeon 8-core rack server with 4GB RAM and onboard RAID); I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance, since we don't see it on a RAMDisk and it's not as severe on a RAID10 array. Any help is much appreciated. Richard. ProcMon result: [link]

    Read the article

  • High CPU from httpd process

    - by KHWeb
    I am currently seeing high CPU on a server that is running just a couple of sites with very low traffic. One of the sites is still in development, going live soon. However, this site is very, very slow: when browsing through its pages I can see the CPU go from 30% to 100% for httpd (see the top output below). I have tuned httpd, MySQL, Apache Solr and Tomcat for high performance, and I am using APC. I'm not sure what to do from here or how to find the culprit, as I have a bunch of messages in the httpd log and have been chasing dead ends for some time. Any help is greatly appreciated. Server: AuthenticAMD, quad-core AMD Opteron(tm) 2352, 16GB RAM; Linux 2.6.27 64-bit, CentOS 5.5; Plesk 9.5.4, MySQL 5.1.48, PHP 5.2.17; Apache/2.2.3 (CentOS) DAV/2 mod_jk/1.2.15 mod_ssl/2.2.3 OpenSSL/0.9.8e-fips-rhel5 PHP/5.2.17 mod_perl/2.0.4 Perl/v5.8.8; Tomcat6-6.0.29-1.jpp5, Tomcat-native-1.1.20-1.el5, Apache Solr. top:

        17595 apache  20   0 1825m 507m  10m R 100.4  3.2   0:17.50 httpd
        17596 apache  20   0 1565m 247m 9936 R  83.1  1.5   0:10.86 httpd
        17598 apache  20   0 1430m 110m 6472 S  54.5  0.7   0:08.66 httpd
        17599 apache  20   0 1438m 124m  12m S  37.2  0.8   0:11.20 httpd
        16197 mysql   20   0 13.0g 2.0g 5440 S   9.6 12.6 297:12.79 mysqld
        17617 root    20   0 12748 1172  812 R   0.7  0.0   0:00.88 top
         8169 tomcat  20   0 4613m 268m 6056 S   0.3  1.7   6:40.56 java

    httpd error_log:

        [debug] prefork.c(991): AcceptMutex: sysvsem (default: sysvsem)
        [info] mod_fcgid: Process manager 17593 started
        [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 17594 for worker proxy:reverse
        [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 17594 for (*)
        [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 17595 for worker proxy:reverse
        [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized
        [notice] child pid 22782 exit signal Segmentation fault (11)
        [error] (43)Identifier removed: apr_global_mutex_lock(jk_log_lock) failed
        [debug] util_ldap.c(2021): LDAP merging Shared Cache conf: shm=0x7fd29a5478c0 rmm=0x7fd29a547918 for VHOST: example.com
        [info] APR LDAP: Built with OpenLDAP LDAP SDK
        [info] LDAP: SSL support available
        [info] Init: Seeding PRNG with 256 bytes of entropy
        [info] Init: Generating temporary RSA private keys (512/1024 bits)
        [info] Init: Generating temporary DH parameters (512/1024 bits)
        [debug] ssl_scache_shmcb.c(374): shmcb_init allocated 512000 bytes of shared memory
        [debug] ssl_scache_shmcb.c(554): entered shmcb_init_memory()
        [debug] ssl_scache_shmcb.c(576): for 512000 bytes, recommending 4265 indexes
        [debug] ssl_scache_shmcb.c(619): shmcb_init_memory choices follow
        [debug] ssl_scache_shmcb.c(621): division_mask = 0x1F
        [debug] ssl_scache_shmcb.c(623): division_offset = 96
        [debug] ssl_scache_shmcb.c(625): division_size = 15997
        [debug] ssl_scache_shmcb.c(627): queue_size = 2136
        [debug] ssl_scache_shmcb.c(629): index_num = 133
        [debug] ssl_scache_shmcb.c(631): index_offset = 8
        [debug] ssl_scache_shmcb.c(633): index_size = 16
        [debug] ssl_scache_shmcb.c(635): cache_data_offset = 8
        [debug] ssl_scache_shmcb.c(637): cache_data_size = 13853
        [debug] ssl_scache_shmcb.c(650): leaving shmcb_init_memory()

    Read the article

  • Automating the choice between JPEG and PNG with a script

    - by MHC
    Choosing the right format to save your images in is crucial for preserving image quality and reducing artifacts. Different formats use different compression methods and come with their own sets of advantages and disadvantages. JPEG, for instance, is suited to real-life photographs rich in color gradients. The lossless PNG, on the other hand, is far superior for schematic figures. Picking the right format can be a chore when working with a large number of files; that's why I would love to find a way to automate it. A little background on my particular use case: I am working on a number of handouts for a series of lectures at my university. The handouts are rich in figures, which I have to extract from PDF-formatted slides. Extracting these images gives me lossless PNGs, which are needlessly large at times. Converting these particular files to JPEG can reduce their size to less than 20% of the original while maintaining the same quality. This is important, as working with hundreds of large images in word processors is pretty crash-prone. Batch-converting all extracted PNGs to JPEG is not an option I am willing to follow, as many if not most images are better suited to PNG; converting those would yield insignificant size reductions, and sometimes even increases in file size - at least that's what my test runs showed. What we can take from this is that file size after compression can serve as an indicator of which format suits a particular image best. It's not a particularly accurate predictor, but it works well enough. So why not use it in the form of a script? I included inotifywait because I would prefer the script to execute automatically as soon as I drag an extracted image into a folder. This is a simpler version of the script I've been using for the last couple of weeks:

        #!/bin/bash
        inotifywait -m --format "%w%f" --exclude '.jpg' -r -e create -e moved_to --fromfile '/home/MHC/.scripts/Workflow/Conversion/include_inotifywait' |
        while read file; do
            mogrify -format jpg -quality 92 "$file"
        done

    The advanced version of the script would have to be able to:
    - handle spaces in file names and directory names
    - preserve the original file names
    - flatten PNG images if an alpha value is set
    - compare the file size between the temporary converted image and its original
    - determine whether the difference is greater than a given percentage
    - act accordingly
    The actual conversion could be done with the ImageMagick tools:

        convert -quality 92 -flatten -background white file.png file.jpg

    Unfortunately, my bash skills aren't anywhere near advanced enough to turn the scheme above into an actual script, but I am sure many of you can. My reputation points on here are pretty low, but I will gladly award the most helpful answer with the highest bounty I can set. References: http://www.formortals.com/introducing-cnb-imageguide/, http://www.turnkeylinux.org/blog/png-vs-jpg Edit: Also see my comments below for some more information on why I think this script would be the best solution to the problem I am facing.
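    As a starting point, here is a sketch of the decision logic in Python rather than the requested bash (the 80% threshold is an arbitrary placeholder, and it shells out to the same convert command quoted above; the inotifywait loop could simply call it once per file). Passing arguments as a list avoids the shell entirely, so spaces in file and directory names are safe:

        import os, subprocess, sys, tempfile

        THRESHOLD = 0.80   # placeholder: keep the JPEG only if it is <80% of the PNG's size

        def jpeg_if_smaller(png_path):
            # Create the temp file next to the original so the final rename stays on one filesystem.
            fd, tmp_jpg = tempfile.mkstemp(suffix=".jpg", dir=os.path.dirname(png_path) or ".")
            os.close(fd)
            # Same conversion as above: flatten alpha onto white, quality 92.
            subprocess.check_call(["convert", "-quality", "92", "-flatten",
                                   "-background", "white", png_path, tmp_jpg])
            if os.path.getsize(tmp_jpg) < THRESHOLD * os.path.getsize(png_path):
                os.replace(tmp_jpg, os.path.splitext(png_path)[0] + ".jpg")
                os.remove(png_path)            # JPEG wins: original name preserved, .jpg extension
            else:
                os.remove(tmp_jpg)             # PNG wins: discard the trial conversion

        for path in sys.argv[1:]:
            jpeg_if_smaller(path)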

    Read the article

  • How can you know what is w3wp.exe doing? (or how to diagnose a performance problem)

    - by Daniel Magliola
    I'm having a performance problem on a site we've made, and I'm not exactly sure how to start diagnosing it. The short description: we have a very small site (http://hearablog.com) with very little traffic, on a crappy dedicated server; CPU is always very high, sometimes staying at 100% for minutes, and w3wp.exe is taking most of it. A typical scenario is w3wp.exe taking 60% and SQL Server taking about 30%. Our DB is pretty small too. Long description and more details: the site is hosted on a very crappy server by Cari.Net. From the beginning we had the feeling that the server didn't quite behave correctly, like some things would take just too long, so this could be a configuration problem from the get-go. It may also be that we are getting a virtual server while we're supposed to have a dedicated one, although we have no evidence to indicate this, except for the fact that the server tends to be quite slow. The server is Windows 2008 Standard 64-bit with SQL 2008 Express; the hardware is a Celeron 2.80 GHz with 1GB RAM. The website is developed in ASP.NET MVC, using Entity Framework for data access. Now, this is pretty crappy hardware, but I've had other servers with these guys, with equivalent (or worse) hardware, and performance is much better than this. That said, the other servers have Windows 2003 and SQL 2005, and I'm using ASP.NET "WebForms" 2.0 there - no MVC, no LINQ, no EF - so I'm not sure whether moving to 2008 and the rest of the stack should be expected to carry a big performance penalty. I'm serving MP3 files (5-20 MB) regularly, which is a slightly unusual load; maybe that is causing some kind of problem? Would that cause w3wp to use a lot of CPU? Disk usage seems very low. Memory is usually around 90%, but disk usage seems to indicate it's not paging much. I get tons of e-mails every day about SQL timeouts, for queries taking over 30 seconds, although all our queries are pretty straightforward (or should be, but EF may be screwing that up). This is what Resource Monitor looks like in one of these "sprints" of 100% CPU, in case there's anything useful there; and a snapshot of some performance counters. Now, what confuses me very much is that the CPU usage of w3wp is just so high; it shouldn't be doing much, really. So my questions are: Is there any way of finding out what it is doing, maybe even profiling it? Are there any performance counters I should be looking at? Is this to be expected given this hardware/software configuration? Could this be caused by some kind of configuration failure, and if so, where would you start looking? Thank you VERY much. Daniel Magliola

    Read the article

  • An alternative to Google Talk, AIM, MSN, et al [closed]

    - by mkaito
    I'm not entirely sure whether this part of Stack Exchange is the most appropriate for my question, but it seems to me that people sharing this kind of concern would converge either here or on a more Unix-specific sub-site. Either way, here goes. Background (feel free to skip to the question below; this should, however, help those interested understand where I'm coming from and where I expect to get, messaging-wise): my online talking place-to-go has been IRC for the last fifteen years. I think it's a great protocol, and the clients out there are very good. I still use, and will always continue to use, IRC for most of my chat needs. But then there is private instant messaging. While IRC can solve this with queries and DCC chats, the protocol just isn't meant to work well over intermittent connections, such as on a mobile device, where you often walk around places with low signal. I used MSN for a while, but didn't like it. The concept was awesome, but I think Microsoft didn't get the implementation quite right. When they started adding all that eye candy, and my buddies started flooding me with custom icons and buzzing my screen to its knees, I shut my account and told folks that missed me to just email or call me. Much whining happened, and I got called many weird things for not using MSN, but folks eventually got over it. Next, Google Talk came along, and it seemed a lot better than MSN ever was. The protocol was open, so I could use whatever client took my fancy. With the advent of smartphones, I just got myself a Gtalk client on the phone, and have had a really decent, integrated, mostly-universal IM solution. But over the last few months, all Google services have been feeling flaky: IMs often arrive anywhere between twenty minutes and one hour after being sent, clients randomly disconnect, client priorities seem to work only sometimes, and sometimes just one random device of those connected will get an IM. I think the time has come to look for greener grass. The question: it's rather hard to put what I'm looking for into precise words. I guess I just want something that is kind of like MSN/Gtalk, but that doesn't let me down when I need it. IRC is pretty much perfect, but the protocol just isn't designed to work well on mobile devices. Really, at this point I'm considering sticking to IRC for desktop messaging and SMS/email on the phone, but I hope that in this day and age there is something better out there.

    Read the article

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing, and I am searching for opinions and starting points on what a good network/storage layout would be that can serve my needs for a few years into the future. I think I have a decent starting point for equipment, but I am also willing to invest fairly heavily in a solution that can last me a while. I am a bit of a tech nerd and have a moderate tolerance for setting up the solution. I would prefer that maintenance of the system be somewhat low once it is set up, but I am willing to accept some tradeoffs. Existing equipment:
    - Router: Netgear WNDR3700 (gigabit)
    - Router: D-Link GamerLounge DGL-4300 (gigabit)
    - Switch: 16-port Trendnet green switch (gigabit)
    - Switch: 5-port Trendnet green (gigabit)
    - Computer: i7-950 office computer (gigabit Ethernet)
    - Computer: Q6600 quad-core media center, hooked up to the TV, records shows (gigabit Ethernet)
    - Computer: Acer 1810T ultraportable laptop (gigabit and N Ethernet)
    - NAS: Intel SS4200-E (gigabit)
    - External hard drive: 2TB WD Green drive (eSATA)
    - All kinds of miscellaneous network-connected devices: TV, Blu-ray, Verizon network extender, HDHomeRun TV tuners, etc.
    Requirements:
    - a robust backup solution (including offsite backup) for a growing collection of huge family picture files and personal files, around 1.5TB
    - a central location for all users' files, while also keeping them secure from each other
    - storage for terabytes of movie backups and recorded TV, and access to them from all computers (maybe around 4TB eventually)
    - the possibility of easily hosting files for friends and family
    Nice to have: backup of the terabytes of movie backups. Intriguing possibility: the capability to have users' Windows desktops and files look the same from all network computers. I am not sure whether the new Windows Home Server 2011 would fit into this well, whether I need a domain server, how best to organize my backups, or how to most effectively use RAID. Currently I am simply backing up all computers to a RAID 1 on the NAS box, which I figured could prevent a situation where I reach for a backup and find that the disk is corrupt. One possibility I am thinking about now is simply using my media center PC with a huge RAID of hard drives on which all files are stored. Pseudo-backup of all files would be present because of the RAID, and important files would also be backed up offsite via carrying hard drives to work. But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risk with the irreplaceable files; I can handle some degree of risk with the movies and other files. I'm looking for critiques of this idea as well as other possibilities. To summarize, my goals are high functionality, media capability, and robust backup of irreplaceable files.

    Read the article

  • Persistent Issues on small business network using Cisco 871W and Catalyst Express 500

    - by Ben Campbell
    Being the most qualified person available (read: still not qualified) to solve our persistent network issues, I've turned to Server Fault for guidance. I've done some searching, read the related documentation on cisco.com, and tried a bit of troubleshooting. Here is the config:
    - 100Mb synchronous connection from a business internet provider (tested multiple times at 100 meg at the source)
    - A Cisco 871W wireless access point & router is where the WAN connection terminates (this serves all our wireless); the only wired connection on the 871W is the Catalyst switch listed below
    - A Cisco Catalyst Express 500 (24TT) is where all the wired connections terminate
    - About 20 Windows workstations and servers (AD/web servers only)
    - Some services in EC2, including mail and other web servers/apps
    - I've been TOLD the internal cabling should be gigabit-ready
    Here are the problems:
    - generally slow download rates from the internet to the desktop/laptop
    - frequent "page cannot be displayed" errors in browsers - sometimes 3 or 4 reloads are necessary, and often CSS or other content requiring the browser to connect to a different server won't load
    - slow speeds within the LAN when copying files from workstation to workstation; I would expect extremely fast workstation-to-workstation and server-to-workstation transfers on a network this simple
    Several things I need to admit: I'm not primarily a network guy. Funding is relatively low, and I need to be the guy who finds the solution. I understand most of the terminology and most of the technology; implementation is where I fail, due to lack of experience. Getting to the point: I'm wondering whether experienced network admins think our small network should be sufficiently served by our current hardware if configured properly, or whether we should purchase new equipment and start fresh. If starting fresh is the plan, whatever that new equipment may be is likely a different question entirely. If I haven't provided enough information, I will happily do some troubleshooting and update with the results; I have experience using Wireshark and some other tools. Please let me know what you think would be most helpful, and thanks in advance. EDIT: I forgot to add that the Cisco appliance will not finish loading the SDM Express console. It hangs every time at "populating modules... DHCP", and eventually crashes and closes. I've rebooted the hardware and this still happens.

    Read the article

  • Why Does Wireless Gear Degrade Over Time?

    - by bahamat
    I saw this originally posted on Slashdot, but their comment format is not conducive to actually getting a correct answer. Having directly experienced this phenomenon myself, I'm now asking here, where I think I can actually get an educated answer. Here's the original question verbatim: "Lately I have replaced several home wireless routers because the signal strength has been found to be degraded. These devices, when new (2+ years ago), would cover an entire house. Over the years, the strength seems to decrease to a point where it might only cover one or two rooms. Of the three that I have replaced for friends, I have not found a common brand, age, etc. It just seems that after time, the signal strength decreases. I know that routers are cheap and easy to replace, but I'm curious what actually causes this. I would have assumed that the components would either work or not work; we would either have a full signal or have no signal. I am not an electrical engineer and I can't find the answer online, so I'm reaching out to you. Can someone explain how a transmitter can slowly go bad?" Common (incorrect, but repeated) answers from Slashdot include:
    - "Back then your neighbors didn't have Wi-Fi; now they do, and they're drowning you out." I don't think this is likely, because replacing the access point with a new one using the same frequencies solves the problem.
    - "Older devices had low transmit power. Crank that baby." As mentioned by a FreeBSD wireless developer, this violates regulations and can physically damage the equipment. It was also mentioned that higher power in one direction is not necessarily reciprocated: it shows more bars, but not necessarily a better connection.
    - "Manufacturers make cheap crap designed to wear out." This one may actually be legitimate, although it is overly broad. What specifically causes damage over time? Heat? Excessive power?
    So can anyone provide an informed answer on this? Is there any way to fix these older access points?

    Read the article

  • How to shutdown VMware Fusion virtual machine on host shutdown

    - by Nikksno
    I have a Mac mini running Mavericks Server. In VMware Fusion Professional 6 I installed the Atmail server + webmail VM (a Linux CentOS distribution), with the VMware Tools add-on; it works flawlessly. I've set it to start on boot, and that works very reliably. However, I've been looking for a way to also safely and gracefully shut it down whenever OS X shuts down, for whatever reason. The Mac is connected to a UPS and configured to perform an automatic shutdown when the battery starts running low, so that's no additional problem. The first thing I did was go into Fusion's preferences and select "Power off the vm" when closing it. However, I noticed that for some arcane reason closing the VM window would actually forcibly power off the VM. Then I found a post showing how to change the default power options, and I managed to get the VM to shut down cleanly when closing its window or quitting Fusion altogether. At this point I was hoping to have solved the problem, but as it turns out, upon invoking system shutdown OS X doesn't wait for the VM to shut down, and terminates Fusion before it has a chance to do so. I then started looking for a way to automate shutting down the guest OS via some advanced setting, but had no luck. That's when I found a command that shuts the VM down - vmrun - and it worked. The only thing left was to find a way to execute this script on OS X shutdown, giving it a little time to power off completely. However, this turned out to be a nightmare: I spent hours looking through several ways to do this with Startup Items, rc.shutdown, cron, launchd, etc., but none of them worked the way I had configured them. I have to say I found very limited information on using launchd to execute a shutdown script, and I know it's the latest thing in the OS X world, so I'm hoping someone out there will be able to help me with this. I still think this is an extremely basic feature to ask for, and I was really surprised to find so little documentation on so many different aspects of this problem. Is Fusion too basic an application for this? I really hope someone can help. Thank you very much in advance.

    Read the article

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this:

        Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed
        Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory
        Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory
        Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory

    Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them 4x from their defaults (this is Ubuntu Server; not sure if the Linux flavor matters):

        current values are: 45984 61312 91968
        new values are:     183936 245248 367872

    After that, we started seeing a bizarre error message:

        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long!
        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval!

    Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval, which defaults to 600 and controls periodic flushing of the route cache. The secret_interval instructs the kernel how often to blow away ALL route hash entries, regardless of how new or old they are. In our environment this is generally bad: the CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However, we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity, which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This helps maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hope of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us, but there are a bunch of settings and it's unclear which are really the right ones to tune here:

        /proc/sys/net/ipv4/route/gc_elasticity (8)
        /proc/sys/net/ipv4/route/gc_interval (60)
        /proc/sys/net/ipv4/route/gc_min_interval (0)
        /proc/sys/net/ipv4/route/gc_timeout (300)
        /proc/sys/net/ipv4/route/secret_interval (600)
        /proc/sys/net/ipv4/route/gc_thresh (?)
        rhash_entries (kernel parameter, default unknown?)

    We don't want to make the Linux routing worse, so we're kind of afraid to mess with any of these settings. Can anyone advise which routing parameters are best to tune for a high-traffic HAProxy instance?

    Read the article

  • radvd is not assigning prefix

    - by Samik
    I'm currently trying to set up IPv6 address autoconfiguration with the router advertisement daemon (radvd) on a virtual machine running CentOS 6.5, but the eth0 interface is not obtaining the prefix. I've obtained a ULA prefix from here. Contents of /etc/sysctl.conf:

        # Kernel sysctl configuration file for Red Hat Linux
        #
        # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
        # sysctl.conf(5) for more details.

        # Controls IP packet forwarding
        net.ipv4.ip_forward = 0
        net.ipv6.conf.all.forwarding = 1

        # Controls source route verification
        net.ipv4.conf.default.rp_filter = 1

        # Do not accept source routing
        net.ipv4.conf.default.accept_source_route = 0

        # Controls the System Request debugging functionality of the kernel
        kernel.sysrq = 0

        # Controls whether core dumps will append the PID to the core filename.
        # Useful for debugging multi-threaded applications.
        kernel.core_uses_pid = 1

        # Controls the use of TCP syncookies
        net.ipv4.tcp_syncookies = 1

        # Disable netfilter on bridges.
        net.bridge.bridge-nf-call-ip6tables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-arptables = 0

        # Controls the default maximum size of a message queue
        kernel.msgmnb = 65536

        # Controls the maximum size of a message, in bytes
        kernel.msgmax = 65536

        # Controls the maximum shared segment size, in bytes
        kernel.shmmax = 68719476736

        # Controls the maximum number of shared memory segments, in pages
        kernel.shmall = 4294967296

    Contents of /etc/radvd.conf:

        # NOTE: there is no such thing as a working "by-default" configuration file.
        # At least the prefix needs to be specified. Please consult the radvd.conf(5)
        # man page and/or /usr/share/doc/radvd-*/radvd.conf.example for help.
        interface eth0
        {
            AdvSendAdvert on;
            MinRtrAdvInterval 3;
            MaxRtrAdvInterval 10;
            AdvDefaultPreference low;
            AdvHomeAgentFlag off;
            prefix fd8a:8d9d:808f:1::/64
            {
                AdvOnLink on;
                AdvAutonomous on;
                AdvRouterAddr on;
            };
        };

    Contents of /etc/sysconfig/network-scripts/ifcfg-eth0:

        DEVICE=eth0
        HWADDR=52:54:00:74:d7:46
        TYPE=Ethernet
        UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1
        ONBOOT=yes
        NM_CONTROLLED=no
        BOOTPROTO=dhcp
        IPV6INIT=yes
        IPV6_AUTOCONF=yes

    I've also enabled radvd at startup through chkconfig, though I noticed that radvd starts after the interfaces are brought up. I've tried restarting the network service afterwards, but I still get only the following link-local address:

        # ip -6 addr show
        1: lo: mtu 16436
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth0: mtu 1500 qlen 1000
            inet6 fe80::5054:ff:fe74:d746/64 scope link
               valid_lft forever preferred_lft forever

    Edit: Based on the answer given by Sander Steffann, I still need clarification on some points, but I'm posting here what worked. Contents of /etc/sysconfig/network:

        NETWORKING=yes
        HOSTNAME=syslog-ng-server
        NETWORKING_IPV6=yes
        IPV6FORWARDING=yes

    Contents of /etc/sysconfig/network-scripts/ifcfg-eth0:

        DEVICE=eth0
        HWADDR=52:54:00:74:d7:46
        TYPE=Ethernet
        UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1
        ONBOOT=yes
        NM_CONTROLLED=no
        BOOTPROTO=dhcp
        IPV6INIT=yes
        IPV6_AUTOCONF=yes
        IPV6FORWARDING=no

    Removed the following line from /etc/sysctl.conf:

        net.ipv6.conf.all.forwarding = 1

    The contents of /etc/radvd.conf are as before.

    Read the article

  • Slow Memcached: Average 10ms memcached `get`

    - by Chris W.
    We're using New Relic to measure our Python/Django application's performance, and it reports that across our system "Memcached" takes an average of 12ms to respond to commands. Drilling down into the top dozen or so web views (by number of requests), I can see that some memcache gets take up to 30ms; I can't find a single memcache get that returns in less than 10ms. More details on the system architecture: we currently have four application servers, each of which has a memcached member, and all four memcached members participate in a memcache cluster. We're running on a cloud hosting provider, and all traffic runs across the "internal" network (via "internal" IPs). When I ping from one application server to another, the responses come back in ~0.5ms. Isn't 10ms a slow response time for memcached? As far as I understand, if you think "memcache is too slow", then "you're doing it wrong". So am I doing it wrong? Here's the output of the memcache-top command:

        memcache-top v0.7       (default port: 11211, color: on, refresh: 3 seconds)

        INSTANCE        USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  SETS/s  READ/s  WRITE/s
        cache1:11211    37.1%   62.7%   10      5.3ms   0.0     73      9       3958    84.6K
        cache2:11211    42.4%   60.8%   11      4.4ms   0.0     46      12      3848    62.2K
        cache3:11211    37.5%   66.5%   12      4.2ms   0.0     75      17      6056    170.4K

        AVERAGE:        39.0%   63.3%   11      4.6ms   0.0     64      13      4620    105.7K
        TOTAL:          0.1GB/  0.4GB   33      13.9ms  0.0     193     38      13.5K   317.2K
        (ctrl-c to quit.)

    And here is the output of the top command on one machine (roughly the same on all cluster machines; as you can see, CPU utilization is very low, because these machines only run memcache):

        top - 21:48:56 up 1 day,  4:56,  1 user,  load average: 0.01, 0.06, 0.05
        Tasks:  70 total,   1 running,  69 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.3%st
        Mem:    501392k total,   424940k used,    76452k free,    66416k buffers
        Swap:   499996k total,    13064k used,   486932k free,   181168k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         6519 nobody    20   0  384m  74m  880 S  1.0 15.3  18:22.97 memcached
            3 root      20   0     0    0    0 S  0.3  0.0   0:38.03 ksoftirqd/0
            1 root      20   0 24332 1552  776 S  0.0  0.3   0:00.56 init
            2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
            4 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kworker/0:0
            5 root      20   0     0    0    0 S  0.0  0.0   0:00.02 kworker/u:0
            6 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
            7 root      RT   0     0    0    0 S  0.0  0.0   0:00.62 watchdog/0
            8 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 cpuset
            9 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 khelper
        ...output truncated...
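    One way to separate network/server latency from application overhead is to time raw gets from an application server, outside Django and New Relic. A sketch (it assumes the pymemcache client library and borrows the cache1 host name from the memcache-top output above):

        import time
        from pymemcache.client.base import Client  # assumes pymemcache is installed

        client = Client(("cache1", 11211))         # host taken from memcache-top above
        client.set("latency-probe", b"x")

        samples = []
        for _ in range(1000):
            t0 = time.perf_counter()
            client.get("latency-probe")
            samples.append((time.perf_counter() - t0) * 1000.0)   # milliseconds

        samples.sort()
        print("p50 = %.2f ms, p99 = %.2f ms, max = %.2f ms"
              % (samples[len(samples) // 2], samples[int(len(samples) * 0.99)], samples[-1]))

    If the raw round trips come back well under 10ms, the overhead is on the application side (client library, serialization) rather than in memcached itself.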

    Read the article

  • Critique My Backup and Storage Plan

    - by MetaHyperBolic
    My current storage (RAID 1 off a hardware RAID card) and backup (a spare drive) solutions for my home network are inadequate. I have too much data scattered across various one-off drives; it is time to evolve. Backups seem simple enough, at least: lots of big drives. However, I am bewildered by the number of choices for small home storage. The Drobo S looks appealing; so does the ReadyNAS. I am not looking for bunches of shiny features; I'm mostly interested in reliability. I am not interested in building Yet Another PC to act as a file server, or doing something in the cloud, or whatever. I'm stupid, so I am keeping it simple. Requirements for the main volume:
    - a starting working space of roughly 2TB, with options for growth up to 5TB
    - RAID, or something RAID-like, with at least one parity drive
    - eSATA II, for speed during backups
    - the ability to shut down gracefully when alerted of low power by a UPS
    Optional but desirable:
    - takes 2TB drives now, with options for the larger 3TB drives coming in 2010-2011
    - RAID 6 or something similar, with two parity drives
    - a hot spare
    (An Ethernet connection is not required, as the volume will be shared by the same machine which runs my home print server.)
    Backups: performed via ROBOCOPY in mirror mode to an external hard drive over an eSATA II connection. I'll start by rotating between two external 2TB hard drives and go up to six external 2TB drives, starting with a weekly backup and moving to a bi-weekly backup as more drives are added. I'll move to 3TB drives as the size of my main volume increases. The backup drives will be stored at an off-site location. Hard drives: I plan on buying all the same model, but different batches from different vendors. I found a "burn-in" utility with which I can pound away on the drives for a couple of weeks before adding them to the backup pool or the main volume. I estimate I am looking at roughly $1,500 to start, once I start throwing in two 2TB drives for backup and four for storage. So, are there any obvious flaws in my plan? What have I overlooked? Any suggestions for a storage device for my main volume that fits my requirements? Or do I just keep it simple - two drives in RAID 1 - and perform due diligence with my backups, accepting that I will have to buy a whole new unit when my data grows past 2TB?

    Read the article
