Search Results

Search found 6528 results on 262 pages for 'appfabric cache'.


  • Connection reset by peer: mod_fcgid: error reading data from FastCGI server Issues

    - by user145857
    Help is greatly needed for our server. We are experiencing random "Connection reset by peer: mod_fcgid: error reading data from FastCGI server" errors which cause a 500 internal server error. If the page is then reloaded it loads normally, as it should. We are running MPM worker with mod_fcgid to handle PHP. We had the APC cache enabled but disabled it recently to see if that would fix the problem; the random mod_fcgid errors are still continuing, and no other opcode cache is active now. Our settings are below:

        <IfModule worker.c>
            MinSpareThreads      25
            MaxSpareThreads      150
            ThreadsPerChild      25
            ThreadLimit          100
            ServerLimit          700
            MaxClients           700
            MaxRequestsPerChild  0
        </IfModule>

        <IfModule mod_fcgid.c>
            FcgidMaxRequestLen          1073741824
            FcgidMaxRequestsPerProcess  2000
            FcgidMaxProcessesPerClass   100
            FcgidMinProcessesPerClass   0
            FcgidConnectTimeout         300
            FcgidIOTimeout              900
            FcgidFixPathinfo            1
            FcgidIdleTimeout            300
            FcgidIdleScanInterval       120
            FcgidBusyTimeout            300
            FcgidBusyScanInterval       120
            FcgidErrorScanInterval      12
            FcgidZombieScanInterval     12
            FcgidProcessLifeTime        3600
        </IfModule>

    The server has 64 cores at 2.1 GHz and 94 GB of RAM, so it has some power. Some of the fcgid timeout settings are higher than usual because we run large reports which take up to 15 minutes. Any help is greatly appreciated! Just to clarify, the random fcgid errors occur when a user clicks a page on our site and the 500 error page loads instantly. This is random and occurs less than 1% of the time, but it is still an issue.
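
    One frequent cause of this exact mod_fcgid message is PHP recycling its own FastCGI children before mod_fcgid expects it to. A minimal sketch, assuming a php-cgi wrapper script is in use (the path and values here are illustrative, not taken from the question): keep PHP's own request limit at or above FcgidMaxRequestsPerProcess.

        #!/bin/sh
        # hypothetical /usr/local/bin/php-fcgi-wrapper
        # If PHP_FCGI_MAX_REQUESTS is lower than FcgidMaxRequestsPerProcess (2000 above),
        # php-cgi can exit mid-stream and mod_fcgid reports "error reading data from FastCGI server".
        PHP_FCGI_MAX_REQUESTS=10000
        export PHP_FCGI_MAX_REQUESTS
        exec /usr/bin/php-cgi

    If no wrapper is used, checking whatever mechanism sets PHP_FCGI_MAX_REQUESTS against the FcgidMaxRequestsPerProcess value is the same idea.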

    Read the article

  • how to design pound -> varnish -> jboss for ha + loadbalancing

    - by andreash
    Hello, I'm planning a new infrastructure for our web application. We have two JBoss AS 5 servers, running in a cluster. Session state will be replicated via JBoss Cache. In front of that, there should be some cache to speed up delivery of static elements. However, most of the traffic to our app will be via HTTPS. So far, I had been thinking of two Varnish caches in front of the JBoss instances, each configured for load balancing to the two JBoss instances via round-robin. Since Varnish doesn't handle HTTPS, there would need to be two Pound proxies in front of the Varnish instances, dealing with the HTTPS. The two Pound proxies would be made highly available with Heartbeat/Linux-HA. Traffic to www.example.com would then go through our firewall, from there to the virtual IP of the Pound proxies, from there to the Varnish instances, and from there to the JBoss instances.

    Question 1: Does this make sense? Or is it overly complicated, and the same goal can be reached with simpler methods?

    Question 2: If my layout is fine, how do I configure the Pound -> Varnish step? Should I a) make the Varnish service highly available through Heartbeat/Linux-HA as well and direct traffic from Pound to the virtual IP of the Varnish instances, or rather b) configure two independent Varnish instances and use load balancing in Pound to address them?

    Thanks a lot for your insight! Andreas.
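
    For option (b), a minimal Pound configuration sketch; the backend addresses, certificate path and Varnish port are assumptions for illustration, not values from the question:

        ListenHTTPS
            Address 0.0.0.0
            Port    443
            Cert    "/etc/pound/example.com.pem"
            Service
                BackEnd
                    Address 10.0.0.11   # varnish1
                    Port    6081
                End
                BackEnd
                    Address 10.0.0.12   # varnish2
                    Port    6081
                End
            End
        End

    Pound then round-robins between the two Varnish instances and takes an unreachable backend out of rotation on its own, so no separate virtual IP is needed for the Varnish layer in that variant.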

    Read the article

  • Eclipse: Slow startup time

    - by ct2k7
    Hello, I've got Eclipse 3.6.1 on my MacBook Air (2010), and I'm getting slow startup times (well, slow compared to my desktop, which is somewhat less powerful and a few years old). The startup generally takes 15 seconds, and of this, 4 seconds are spent just on the Eclipse splash screen before Eclipse loads anything. No projects are open at startup. Here's a copy of my eclipse.ini:

        -startup
        ../../../plugins/org.eclipse.equinox.launcher_1.1.0.v20100507.jar
        --launcher.library
        ../../../plugins/org.eclipse.equinox.launcher.cocoa.macosx.x86_64_1.1.1.R36x_v20100810
        -showsplash
        org.eclipse.platform
        --launcher.XXMaxPermSize
        512m
        --launcher.defaultAction
        openFile
        -vmargs
        -Xms256m
        -Xmx512m
        -Xdock:icon=../Resources/Eclipse.icns
        -XstartOnFirstThread
        -Dorg.eclipse.swt.internal.carbon.smallFonts
        -Dosgi.requiredJavaVersion=1.6
        -Xverify:none
        -XX:+UseConcMarkSweepGC
        -XX:+CMSClassUnloadingEnabled
        -XX:+CMSPermGenSweepingEnabled
        -XX:+UnlockExperimentalVMOptions
        -XX:+AggressiveOpts
        -XX:+StringCache
        -XX:+UseFastAccessorMethods
        -XX:+UseLargePages
        -XX:LargePageSizeInBytes=4m
        -XX:AllocatePrefetchLines=1
        -XX:AllocatePrefetchStyle=1
        -Dide.gc=true

    The problem doesn't seem to be related to plugins - I've disabled the ones I don't need, and regardless of this configuration or whether all of them are selected at startup, it only takes 1 second to load the plugins. I'm running the Eclipse 3.6.1 Cocoa x64 build (vanilla) with the Zend Studio plugin. The machine has 4 GB RAM, an SSD with over 64% free space, and a 1.6 GHz CPU (4 MB L2 cache). The OS is Mac OS X 10.6.6 with the latest Java available, 1.6. For comparison, on my desktop, an old P4 3 GHz (512 KB L2 cache) with a 7200 RPM drive and under 40% free space, Eclipse (same config) loads in under 7 seconds, consistently. Note, that one is a Windows machine, also with the latest Java installed.

    Read the article

  • How much memory will a Windows file server be able to use effectively?

    - by Zoredache
    In the near future we will be moving our fileserver to a newer box that will be running Windows 2008 R2. I want to know how much memory Windows will be able to use for a system that is just a file server. In searching around I found an old document for Windows 2000 that mentions the maximum size of the file-system cache is 960MB. I suspect this limit no longer applies, but is there a new limit? The file server will be just a standard Windows fileserver. It will have 1TB of attached storage. The large majority of the files accessed during the day are just typical Office documents. There are 80-100 people usually using the fileserver during a typical day. This system will only be used as a file server; it doesn't have any other roles. In Windows 2008 R2, are there any hard limits for the filesystem cache? What are they? The server we will be re-using for this purpose currently has 4GB of memory, but it can be maxed out at 16GB. Is there any value in doing this for a Windows file server? Are there any performance counters I can look at on the existing 2003 fileserver that will tell me if adding more memory will be worthwhile?
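
    As a starting point for that last question, the standard cache-related counters can be sampled on the existing 2003 box with typeperf; a quick sketch using commonly available counters (sample interval is arbitrary):

        typeperf "\Memory\Available MBytes" "\Memory\Cache Bytes" "\Memory\Cache Faults/sec" "\Memory\Pages/sec" -si 15

    If Available MBytes stays low while Cache Faults/sec and Pages/sec stay high through the working day, the box is cache-starved and extra RAM is likely to help; if Available MBytes stays comfortably high, more memory probably won't change much.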

    Read the article

  • Using npm install as a MS-Windows system account

    - by Guss
    I have a node application running on Windows, which I want to be able to update automatically. When I run npm install -d as the Administrator account it works fine, but when I try to run it through my automation software (which runs as Local System), I get errors when I try to install a private module from a private git repository:

        npm ERR! git clone git@bitbucket.org:team/repository.git fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory
        npm ERR! Error: Command failed: fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory
        npm ERR!
        npm ERR!     at ChildProcess.exithandler (child_process.js:637:15)
        npm ERR!     at ChildProcess.EventEmitter.emit (events.js:98:17)
        npm ERR!     at maybeClose (child_process.js:735:16)
        npm ERR!     at Socket.<anonymous> (child_process.js:948:11)
        npm ERR!     at Socket.EventEmitter.emit (events.js:95:17)
        npm ERR!     at Pipe.close (net.js:451:12)
        npm ERR! If you need help, you may report this log at:
        npm ERR!     <http://github.com/isaacs/npm/issues>
        npm ERR! or email it to:
        npm ERR!     <[email protected]>
        npm ERR! System Windows_NT 6.1.7601
        npm ERR! command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "-d"
        npm ERR! cwd D:\nodeapp
        npm ERR! node -v v0.10.8
        npm ERR! npm -v 1.2.23
        npm ERR! code 128

    Just running git clone under the same system account works fine. Any ideas?
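
    The failure is inside the npm cache under the SYSTEM profile, so one workaround sketch is to point the cache at a directory the automation account can definitely create and traverse; the path below is an assumption for illustration, not from the question:

        mkdir D:\npm-cache
        npm config set cache D:\npm-cache --global
        npm cache clean
        npm install -d

    Run those as the same Local System automation job so the global npm config it reads is the one that takes effect.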

    Read the article

  • How seriously should I take ECC correctable error warnings?

    - by David Mackintosh
    I have a pile of Sun X2200-M2 servers. These servers have ECC memory. In some of these servers, I am getting warnings in the eLOM about "correctable ECC errors detected", e.g.:

        # ssh regress11 ipmitool sel elist
        1 | 05/20/2010 | 14:20:27 | Memory CPU0 DIMM2 | Correctable ECC | Asserted
        2 | 05/20/2010 | 14:33:47 | Memory CPU0 DIMM2 | Correctable ECC | Asserted

    ...some more frequently than others. The kernel on this particular system is throwing EDAC errors as well, although with far more frequency than the eLOM is recording ECC events:

        EDAC k8 MC0: general bus error: participating processor(local node response), time-out(no timeout) memory transaction type(generic read), mem or i/o(mem access), cache level(generic)
        MC0: CE page 0x42a194, offset 0x60, grain 8, syndrome 0xf654, row 4, channel 1, label "": k8_edac
        MC0: CE - no information available: k8_edac Error Overflow set
        EDAC k8 MC0: extended error code: ECC chipkill x4 error
        EDAC k8 MC0: general bus error: participating processor(local node response), time-out(no timeout) memory transaction type(generic read), mem or i/o(mem access), cache level(generic)
        MC0: CE page 0x48cb94, offset 0x10, grain 8, syndrome 0xf654, row 5, channel 1, label "": k8_edac
        MC0: CE - no information available: k8_edac Error Overflow set
        EDAC k8 MC0: extended error code: ECC chipkill x4 error

    Now if the server detects an uncorrectable ECC error, the system resets, so clearly that's bad, and removing/replacing the identified stick or pair corrects the issue. But I am thinking that if the error is correctable, then there's no immediate issue -- can I treat this as a warning and be prepared to pull the stick/pair if an uncorrectable error starts occurring?
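
    To watch the trend rather than individual events, the per-controller correctable and uncorrectable counters that EDAC exposes can be polled; a rough sketch:

        # correctable (ce_count) / uncorrectable (ue_count) totals per memory controller
        grep . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count

        # or, on distributions that ship it, summarise with edac-util
        edac-util --report=ce

    A slow, steady trickle on one DIMM is usually a "schedule a swap at the next window" signal, while a sudden jump in the rate is the usual cue to swap it immediately.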

    Read the article

  • dd oflag=direct 5x faster

    - by César
    I have CentOS 6.2 on a server with these specs:

        2x CPU 16-core AMD Opteron 6282 SE
        64GB RAM
        RAID controller H700, 1GB cache NV
          - 2 HD 74GB SAS 15Krpm, RAID1, stripe 16k (OS CentOS 6.2) -> sda
          - 4 HD 146GB SAS 15Krpm, RAID10, stripe 16k (ext4 bs 4096, no barriers) -> sdb -> /vol01
        RAID controller H800, 1GB cache NV
          - MD1200, 12 HD 300GB SAS 15Krpm, RAID10, stripe 256k (for DB Postgres 8.3.18) (ext4 bs 4096, stride 64, stripe-width 384, no barriers) -> sdc -> /vol02

    I'm benchmarking I/O speed with dd, and I see that if on the 12-disk RAID10 I run:

        dd if=/dev/zero of=DD bs=8M count=10000 oflag=direct
        10000+0 records in
        10000+0 records out
        83886080000 bytes (84 GB) copied, 126,03 s, 666 MB/s

    but if I remove the "oflag=direct" option I obtain about 80 MB/s. In the read benchmark, results are similar:

        dd of=/dev/null if=DD bs=8M count=10000 iflag=direct
        10000+0 records in
        10000+0 records out
        83886080000 bytes (84 GB) copied, 79,5918 s, 1,1 GB/s

    If I remove iflag=direct I obtain about 150 MB/s... I don't understand these huge differences; on other machines I don't have this behavior. Could I have some kernel parameter misconfigured? Thanks!
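
    One way to make the buffered and direct numbers comparable is to make dd account for the flush; a sketch using the same file and sizes as above, with only the sync option added:

        dd if=/dev/zero of=DD bs=8M count=10000 conv=fdatasync

    Without oflag=direct or conv=fdatasync the writes go through the page cache, so the timing mostly measures how fast dirty pages can be written back under the default vm.dirty_ratio / vm.dirty_background_ratio settings rather than raw array speed, which is one plausible source of the gap between the two results.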

    Read the article

  • High IOWait executing JBoss 3.2.7

    - by user64205
    Server Details:

        Kernel: Linux wiq31 2.4.21-9.ELsmp #1 SMP Thu Jan 8 17:08:56 EST 2004 i686 i686 i386 GNU/Linux
        CPU: 4 x Intel(R) Xeon(TM) CPU 3.06GHz
        Memory: 1028520 kB
        JBoss version: 3.2.7

    Every time I try to start JBoss, on all CPUs the iowait values start to rise and the idle values start to fall. Before starting my JBoss application, the free command returns the following output:

                     total       used       free     shared    buffers     cached
        Mem:       1028520     966400      62120          0     187756     538928
        -/+ buffers/cache:      239716     788804
        Swap:      2044072     790672    1253400

    After starting my JBoss application, the free command returns the following output:

                     total       used       free     shared    buffers     cached
        Mem:       1028520    1007648      20872          0     187116     524084
        -/+ buffers/cache:      296448     732072
        Swap:      2044072     819096    1224976

    After starting my JBoss application, and before it has answered any request, the java process's /proc/PID/status file has the following values:

        State:     S (sleeping)
        SleepAVG:  27%
        Tgid:      24022
        Pid:       24022
        PPid:      21011
        TracerPid: 0
        Uid:       500 500 500 500
        Gid:       500 500 500 500
        FDSize:    256
        Groups:    500
        VmSize:    775200 kB
        VmLck:     0 kB
        VmRSS:     156752 kB
        VmData:    696752 kB
        VmStk:     36 kB
        VmExe:     21 kB
        VmLib:     710375 kB
        StaBrk:    0804f000 kB
        Brk:       095bb000 kB
        StaStk:    bffff8c0 kB
        ExecLim:   ffffffff
        Threads:   62
        SigPnd:    0000000000000000
        ShdPnd:    0000000000000000
        SigBlk:    0000000000000000
        SigIgn:    0000000000000000
        SigCgt:    1000000180015ccf
        CapInh:    0000000000000000
        CapPrm:    0000000000000000
        CapEff:    0000000000000000

    Is this behavior caused by memory swapping, or is the limited memory available on the server simply not enough to run my application?
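
    With roughly 790 MB already sitting in swap and only about 1 GB of RAM, it is worth confirming swap activity directly while JBoss starts; a quick sketch:

        vmstat 5
        # watch the si/so columns (swap-in / swap-out, in KB/s) while the JVM starts;
        # sustained non-zero values there mean the iowait is coming from swapping,
        # not from JBoss's own disk I/O.

    If si/so stay at zero and the bi/bo columns are the ones climbing instead, the heap fits in memory and the time is going to ordinary file I/O rather than paging.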

    Read the article

  • Dynamic ARP Entries turning into Static ARP entries

    - by Zach
    I recently acquired a client that has a strange ARP caching issue on one of their servers. I have a server that will eventually start turning its dynamic ARP entries into static ARP entries. This causes problems because when a machine that has a static ARP entry on this server receives a new IP via DHCP, the server is no longer able to communicate with that client. Clearing the ARP cache resolves the issue and the server is fine for about a week, and then it slowly starts turning ARP entries into static ARP entries again. I haven't narrowed down when it starts or how many it converts at a time, but slowly you start seeing 1 static ARP entry, then 5, then 10. The server in question is Windows Server 2003 SP2. It is a DC, DHCP, and DNS server. I've checked the DHCP scope options and there's nothing in there that would indicate anything to do with static ARP entries. The only difference between this DNS server and our other DNS server is that 'Dynamically update DNS A and PTR records for DHCP clients that do not request updates' is checked on the problematic server. I've done a bit of research about this and it seems this may happen if any PXE-type services are running, but from what I can tell, nothing is running a PXE server. I'm a bit lost, as I have never seen dynamic ARP entries turn into static ARP entries. Right now my solution is a scheduled task that runs every 24 hours to clear the ARP cache (arp -d *). I would like not to have to rely on this scheduled task. Has anybody seen this before, or have any suggestions on how to troubleshoot it?
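
    In the meantime, a slightly more informative stopgap than a blind flush is to log which entries have gone static each time before clearing; a purely illustrative batch sketch (log path is an assumption):

        @echo off
        rem append a timestamped snapshot of the static entries, then flush the cache
        rem (the flush is the same as the existing scheduled task)
        echo %date% %time% >> C:\arp-static.log
        arp -a | find /i "static" >> C:\arp-static.log
        arp -d *

    Correlating the timestamps in that log with DHCP lease renewals or with anything RIS/WDS/PXE-like in the event log is often the quickest way to catch what is planting the static entries.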

    Read the article

  • Host spreads wrong MAC address of router on the Wi-Fi

    - by JavaIsMyIsland
    Strange things are going on in our network. Since yesterday a host which is actually not on our subnet has been spreading wrong ARP replies on our network. To be precise, only on the Wi-Fi. If I connect my laptop to the wired Ethernet, it gets the right MAC address for the router. My Android phone and my Ubuntu system also get the right MAC address. So I took a look at Wireshark. When I clear the ARP cache of the Windows machine, the first ARP response is correct and comes from the router. But about 10 ms later another ARP response comes from another host on the Wi-Fi. That host changes its IP addresses from time to time and they look like they are not on our subnet. So I cannot use the internet, because DNS is not working anymore. Sometimes the router wins the race condition and the MAC address is set correctly in the ARP cache. I first thought this was an ARP-poisoning MITM attack, but that doesn't make sense if the packets don't get routed correctly. I restarted the router but it didn't help. I have no access to the router, otherwise I would change the shared key to make sure there is no intruder on the Wi-Fi.
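
    Until the offending host is found, pinning the gateway's MAC with a static ARP entry on the affected Windows machine takes it out of the race; a sketch with placeholder values (substitute the real gateway IP/MAC and interface name):

        netsh interface ipv4 add neighbors "Wireless Network Connection" 192.168.1.1 00-11-22-33-44-55

    (On XP/2003 the older equivalent is "arp -s 192.168.1.1 00-11-22-33-44-55".) The duplicate replies themselves can be isolated in Wireshark with the display filter "arp.duplicate-address-detected or arp.duplicate-address-frame", which flags the conflicting sender MAC directly.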

    Read the article

  • What does "cpuid level" means ? Asking just for curiosity

    - by ogzylz
    For example, I've included the info for just 2 cores of a 16-core machine. What does the "cpuid level : 6" line mean? If you can provide info about the "bogomips : 5992.10" and "clflush size : 64" lines as well, it would be appreciated.

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 15
        model           : 6
        model name      : Intel(R) Xeon(TM) CPU 3.00GHz
        stepping        : 8
        cpu MHz         : 2992.689
        cache size      : 4096 KB
        physical id     : 0
        siblings        : 4
        core id         : 0
        cpu cores       : 2
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 6
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm
        bogomips        : 5992.10
        clflush size    : 64
        cache_alignment : 128
        address sizes   : 40 bits physical, 48 bits virtual
        power management:

        processor       : 1
        vendor_id       : GenuineIntel
        cpu family      : 15
        model           : 6
        model name      : Intel(R) Xeon(TM) CPU 3.00GHz
        stepping        : 8
        cpu MHz         : 2992.689
        cache size      : 4096 KB
        physical id     : 1
        siblings        : 4
        core id         : 0
        cpu cores       : 2
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 6
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm
        bogomips        : 5985.23
        clflush size    : 64
        cache_alignment : 128
        address sizes   : 40 bits physical, 48 bits virtual
        power management:

    Read the article

  • Zenoss No space left on device Error

    - by Pastelinux
    Site Error - An error was encountered while publishing this resource. Sorry, a site error occurred.

        Traceback (innermost last):
          Module ZPublisher.Publish, line 231, in publish_module_standard
          Module ZPublisher.Publish, line 165, in publish
          Module Zope2.App.startup, line 211, in __call__
          Module Products.ZenUI3.browser, line 105, in __call__
          Module Products.Five.browser.pagetemplatefile, line 60, in __call__
          Module zope.pagetemplate.pagetemplate, line 115, in pt_render
          Module zope.tal.talinterpreter, line 271, in __call__
          Module zope.tal.talinterpreter, line 343, in interpret
          Module zope.tal.talinterpreter, line 858, in do_defineMacro
          Module zope.tal.talinterpreter, line 343, in interpret
          Module zope.tal.talinterpreter, line 533, in do_optTag_tal
          Module zope.tal.talinterpreter, line 518, in do_optTag
          Module zope.tal.talinterpreter, line 513, in no_tag
          Module zope.tal.talinterpreter, line 343, in interpret
          Module zope.tal.talinterpreter, line 620, in do_insertText_tal
          Module Products.PageTemplates.Expressions, line 203, in evaluateText
          Module Products.PageTemplates.Expressions, line 222, in _handleText
          Module zope.component._api, line 174, in queryUtility
          Module zope.component.registry, line 165, in queryUtility
          Module ZODB.Connection, line 834, in setstate
          Module ZODB.Connection, line 884, in _setstate
          Module ZEO.ClientStorage, line 815, in load
          Module ZEO.cache, line 143, in call
          Module ZEO.cache, line 607, in store
        IOError: [Errno 28] No space left on device

    I went in to check my server through Zenoss today and got the error above; it looks like somehow my server is full. Yet when I look at the server, it's only about 85% full:

        unclebob:~# df -h
        Filesystem                                 Size  Used Avail Use% Mounted on
        /dev/mapper/unclebob--vg0-unclebob--root   1.9G  1.5G  335M  82% /
        tmpfs                                      471M     0  471M   0% /lib/init/rw
        udev                                        10M  820K  9.2M   9% /dev
        tmpfs                                      471M     0  471M   0% /dev/shm
        overflow                                   1.0M  1.0M     0 100% /tmp
        /dev/hde1                                  942M   36M  859M   5% /boot

        unclebob:/tmp# df -i
        Filesystem                                Inodes  IUsed   IFree IUse% Mounted on
        /dev/mapper/unclebob--vg0-unclebob--root  121920  54844   67076   45% /
        tmpfs                                     120489      3  120486    1% /lib/init/rw
        udev                                      120489   1520  118969    2% /dev
        tmpfs                                     120489      1  120488    1% /dev/shm
        overflow                                  120489     14  120475    1% /tmp
        /dev/hde1                                  61312     33   61279    1% /boot

    It looks like there are these two hidden directories in /tmp: .ICE-unix/ and .X11-unix/. I'll remove those. Any idea what they may be? Any ideas on a fix? It probably has something to do with Zenoss.
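
    Note that in the df output the full filesystem is not / but /tmp: it is mounted as a 1 MB "overflow" tmpfs (Debian-style systems mount this automatically at boot when / is nearly full), and the ZEO client cache is failing to write its temporary file there. A rough recovery sketch, assuming a Debian-style install (which the overflow mount and /lib/init/rw suggest) and that nothing else is using /tmp at the time:

        # free some space on /, then put /tmp back on the root filesystem
        apt-get clean
        umount /tmp        # removes the 1 MB overflow mount
        zenoss restart

    Keeping / under the low-space threshold, or giving /tmp a properly sized mount of its own, prevents the overflow mount from reappearing at the next boot.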

    Read the article

  • PHP memory_limit local value does not match php.ini value

    - by Buttle Butkus
    CentOS system. Summary: I changed memory_limit in the master and local php.ini, and yet there is no change in the local value for one particular virtual host. Trying to improve performance, I set the memory_limit to 1024M in /etc/php.ini. phpinfo() shows Master and Local values for the other virtual hosts on the server as 1024M. Changing the value in /etc/php.ini changes all values, except one. One site is stuck with a local value of 256M. I thought I found the problem: there is a php.ini file (which I didn't know about) in that site's root, and it had memory_limit = 256M. I changed it to 1024M. Problem solved? No. And now I don't know where to look. Obviously, I've restarted Apache (/etc/init.d/httpd restart), and that usually does the trick. I also turned off the APC cache, though I don't think it would cache ini files. And finally, I tried adding this to the virtual host in httpd.conf: php_value memory_limit 536870912 (that would be 512 MB). And that had no effect whatsoever. What else could be the problem? Thanks.
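
    A quick way to see exactly which ini files PHP reads for that vhost is to drop a throwaway script into the affected site and load it in a browser (the filename is arbitrary):

        <?php
        // shows "Loaded Configuration File" and "Additional .ini files parsed",
        // plus the effective Master/Local memory_limit for this exact vhost
        phpinfo(INFO_GENERAL | INFO_CONFIGURATION);

    If the local value still reads 256M after the site-root php.ini was changed, the usual suspects are another php.ini or .user.ini earlier in the handler's search path, a php_admin_value in a per-vhost include, or an ini_set() call inside the application itself.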

    Read the article

  • IIS permission configuration issue

    - by Dan
    Sorry, the title of this question is a little ambiguous, but I don't really have any idea where the issue lies - I'm seeking some clarification of the server error logs. Basically, I had a dedicated server running Windows 2003 and Plesk (v8 I think). Last week the server hardware failed and the entire thing had to be rebuilt from scratch. New hardware was put in, new operating system (Win2008), new Plesk installation (v9.5), new software (MSSQL etc), then all data was ported over manually from the old C and D drives to restore all 30 client sites. It was hell! All has been okay for a couple of days, but about an hour ago - POP! Suddenly all sites went down giving a 500 error. Restarting all services eventually brought everything back online, but I'm now living in total fear. It can - and probably will - happen again. The guys on support gave me the following errors from the server log:

        The Template Persistent Cache initialization failed for Application Pool 'ASP.NET v4.0 Classic' because of the following error: Could not create a Disk Cache Sub-directory for the Application Pool. The data may have additional error codes.

        The worker process for application pool 'domain1.com(domain)(2.0)(pool)' encountered an error 'Cannot read configuration file' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\domain1.com(domain)(2.0)(pool).config', line number '0'. The data field contains the error code.

        The worker process for application pool 'PleskControlPanel' encountered an error 'Cannot read configuration file' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\PleskControlPanel.config', line number '0'. The data field contains the error code.

    The support guys are so ambiguous about this and it scares me horribly. Can anyone positively identify the cause of these errors, which led to all client websites going offline? What can be done to prevent it from happening again? Any pointers would be very much appreciated! Thanks folks...
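
    All three messages point at IIS being unable to create or read the per-app-pool files under C:\inetpub\temp. A common first check (a sketch, not Plesk-specific advice) is to confirm that directory exists and that the worker-process identities can traverse and write it:

        icacls C:\inetpub\temp
        icacls C:\inetpub\temp\appPools /grant "IIS_IUSRS:(OI)(CI)(M)"
        iisreset

    Antivirus or backup agents that intermittently lock files under C:\inetpub\temp are another frequent cause of this same pair of events, so excluding that path from real-time scanning is worth considering too.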

    Read the article

  • Secretary cannot add appointments to boss's calendar after exchange restore from backup

    - by therulebookman
    The calendar is the Boss's calendar on Exchange. I have set permissions for it through his Outlook to give the secretary and a few other people "Editor" access to his calendar. All the editors can view the calendar, but only he can add new appointments. Anyone else who tries to add an appointment gets "The item cannot be saved in this folder. The folder was deleted or moved or you do not have permission." The permissions are correct, editor. The item hasn't been deleted or moved. It's in his mailbox on exchange. The message says something about the mailbox size, but he is well under the size limit anyway. He is using Outlook 2003, and I have tried accessing it from 2003 and 2007, but I don't think that is related I tried clearing the forms cache and enabling disabled items: no disabled items and clearing cache didn't help. I also tried "Allow all forms" but this apparently doesn't apply in this scenario as we are not using any custom forms. Is there any way to delete just his calendar and then I can exmerge it back in (after exporting to PST of course)? I really can't exmerge out his mailbox, delete it, and exmerge it back in because he works all sorts of hours, but if this is the only way, then I'll have to do it. Is there any other possible solution?

    Read the article

  • Server nearly unusable when doing disk writes

    - by Wikser
    My question closely relates to my last question here on Server Fault. I was copying about 5GB from a 10-year-old desktop computer to the server. The copy was done in Windows Explorer. In this situation I would assume the server to be bored by the dataflow, but as usual with this server, it really slowed down. At least I could work within the remote session, even though there was some serious latency. The copy took its time (20 min?). During this time I went to a colleague and he tried to log in to the same server via remote desktop (for some other reason). It took about a minute to get to the login screen, a minute to open the control panel, a minute to open the performance monitor... Icons were loading maybe one per second. We saw the following (from memory):

        CPU: 2%
        Avg. Queue Length: 50
        Pages/sec: 115 (?)

    There was no other considerable activity on the server. The server seldom serves some ASP.NET pages, which also became very slow during this time. The relevant configuration is as follows:

        Windows 2003
        Seagate ST3500631NS (7200 rpm, 500 GB)
        LSI MegaRAID based RAID 5, 4 disks, 1 hot spare
        Write Through
        No read-ahead
        Direct Cache Mode
        Hard disk cache mode: off

    Is this normal behaviour for such a configuration? What measurements could give further clues? Is it reasonable to reduce the priority of such copy I/O and favour other processes like the remote desktop? How would you do that? Many thanks!
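
    Those cache settings (write-through, no read-ahead, drive cache off) make a 4-disk RAID 5 very slow for bulk writes, because every stripe update has to hit the spindles synchronously. If the controller has a working battery, the change usually tried is switching the logical drive to write-back; a MegaCLI sketch, assuming the MegaCLI tool applies to this particular controller (adapter/LD selectors below are generic):

        MegaCli -LDGetProp -Cache -LAll -aAll
        MegaCli -LDSetProp WB -LAll -aAll
        MegaCli -LDSetProp RA -LAll -aAll

    Only enable write-back if the BBU is present and healthy, since otherwise a power loss can cost whatever data is still sitting in the controller cache.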

    Read the article

  • Improve speed of "start menu" in Linux Mint 10 - Ubuntu 10.10 derivative

    - by Gabriel L. Oliveira
    I have a global menu (including the Applications, Administration and System tabs) that is taking too much time (for me) to load (about 2.5 seconds). Of course, this time is only taken during the first start; after it has loaded, subsequent opens are much faster (less than 0.2 milliseconds). The menu was taking more time before (about 5 seconds), and I found that was because of the 'Other' part of the menu, which included many applications installed with Wine, so I removed all of them (I didn't need them at all). I have a "normal" knowledge of programming, and I think the process of starting the menu for the first time has some kind of "cache function" that tries to find which apps are present and need to be shown to the user. But I haven't found this function, so I can't analyze in detail what it is doing (whether it searches for files under "~/.local/share/applications" or anything else). Also, I found that hitting "Alt-F2" also fires this "cache function", because after waiting for it to load, opening the menu took less than 0.2 milliseconds. So, could anyone help me reduce this time? I found on the internet that some users could reduce the time by resizing the application icons, but I found that most of my icons are already at 25x25 size. Any other ideas? Maybe loading it in a separate process, or including it in startup... I don't know. PS: Sorry if this is an awkward question, but I just do not like waiting for things to happen, and I think this process should be smoother than it is now. Also, thanks in advance!
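
    On GNOME 2-based menus (which Mint 10 uses), the first open walks the .desktop files and icon themes on disk, so rebuilding the relevant caches sometimes shaves that first-open delay; a hedged sketch of the usual commands (paths are the standard ones, adjust if your theme lives elsewhere):

        sudo update-desktop-database
        sudo gtk-update-icon-cache -f /usr/share/icons/hicolor
        gtk-update-icon-cache -f ~/.icons 2>/dev/null || true

    If the delay really comes from scanning ~/.local/share/applications and /usr/share/applications, pruning leftover .desktop entries there (Wine tends to leave many behind) has the same effect as removing the Wine apps did for the 'Other' submenu.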

    Read the article

  • linux hardware raid 10 / lvm / virtual machine partition alignment and filesystem optimization

    - by Jason Ward
    I've been reading everything I can find about partition alignment and filesystem optimization (ext4 and xfs) but still don't know enough to be confident in setting up my current configuration. My remaining confusion comes from the LVM layer and whether I should use RAID parameters on the filesystems in the guest OSes. My main questions are:

    1. When I use 'pvcreate --dataalignment', do I use the stripe-width as calculated for a filesystem on RAID (128kB for ext4 in my situation), the stripe size of the RAID set (256kB), something else altogether, or do I not need this at all?

    2. When I create ext2/3/4 or xfs filesystems in guests on the Logical Volumes, should I add the settings for the underlying RAID (e.g. mkfs.ext4 -b 4096 -E stride=64,stripe-width=128)?

    3. Does anyone see any glaring errors in my setup below?

    I'm running some benchmarks now but haven't done enough to start comparing results. I have four drives in RAID 10 on a 3ware 9750-4i controller (more details on the settings below) giving me a 6.0TB device at /dev/sda. Here is my partition table:

        Model: LSI 9750-4i DISK (scsi)
        Disk /dev/sda: 5722024MiB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt

        Number  Start      End         Size        File system     Name      Flags
         1      1.00MiB    257MiB      256MiB      ext4            BOOTPART  boot
         2      257MiB     4353MiB     4096MiB     linux-swap(v1)
         3      4353MiB    266497MiB   262144MiB   ext4
         4      266497MiB  4460801MiB  4194304MiB

    Partition 1 is to be the /boot partition for my Xen host. Partition 2 is swap. Partition 3 is to be the root (/) for my Xen host. Partition 4 is to be the (only) physical volume to be used by LVM (for those who are counting, I left about 1.2TB unallocated for now). For my Xen guests, I usually create a Logical Volume of the needed size and present it to the guests for them to partition as needed. I know there are other ways of handling that, but this method works best for my situation. Here's the hardware of interest on my CentOS 6.3 Xen host:

        4x Seagate Barracuda 3TB ST3000DM001 drives (sector size: 512 logical / 4096 physical)
        3ware 9750-4i w/BBU (sector size reported: 512 logical / 512 physical)
        All four drives make up a RAID 10 array
        Stripe: 256kB
        Write cache: enabled
        Read cache: intelligent
        StoreSave: Balance

    Thanks!
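
    For what it's worth, a hedged sketch of how the numbers in this setup are commonly plugged in (treat the figures as an illustration of the arithmetic, not a verified recipe): with a 256kB chunk per disk and two data-bearing stripes in the 4-disk RAID 10, the full stripe is 512kB, stride = 256kB / 4kB = 64 blocks, and stripe-width = 2 x 64 = 128 blocks.

        # align the PV data area to the full stripe so every LV starts on a stripe boundary
        pvcreate --dataalignment 512k /dev/sda4

        # inside a guest (device name here is hypothetical), pass the same geometry to ext4
        mkfs.ext4 -b 4096 -E stride=64,stripe-width=128 /dev/xvda1

    The stride/stripe-width settings inside the guest only help if the LV underneath is itself stripe-aligned, which is what the --dataalignment at pvcreate time is for.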

    Read the article

  • Four disks - RAID 10 or two mirrored pairs?

    - by ewwhite
    I have this discussion with developers quite often. The context is an application running in Linux that has a medium amount of disk I/O. The servers are HP ProLiant DL3x0 G6 with four disks of equal size @ 15k rpm, backed with a P410 controller and 512MB of battery or flash-based cache. There are two schools of thought here, and I wanted some feedback... 1). I'm of the mind that it makes sense to create an array containing all four disks set up in a RAID 10 (1+0) and partition as necessary. This gives the greatest headroom for growth, has the benefit of leveraging the higher spindle count and better fault-tolerance without degradation. 2). The developers think that it's better to have multiple RAID 1 pairs. One for the OS and one for the application data, citing that the spindle separation would reduce resource contention. However, this limits throughput by halving the number of drives and in this case, the OS doesn't really do much other than regular system logging. Additionally, the fact that we have the battery RAID cache and substantial RAM seems to negate the impact of disk latency... What are your thoughts?

    Read the article

  • Wget save cookies not working

    - by TrymBeast
    I've been trying to log in to pyLoad through the web API, but wget is not saving the cookies and I don't understand why. I'm using the following command:

        wget --delete-after --keep-session-cookies --save-cookies=my_cookies.txt --post-data="username=USERNAME&password=PASSWORD" http://localhost:8000/api/login

    But the content of my_cookies.txt is:

        # HTTP cookie file.
        # Generated by Wget on 2012-06-23 22:31:33.
        # Edit at your own risk.

    When I run the same command in debug mode I get the following output, which includes the Set-Cookie header in the response:

        DEBUG output created by Wget 1.10.2 (Red Hat modified) on linux-gnueabi.
        --22:31:11--  http://localhost:8000/api/login
        Resolving localhost... 127.0.0.1
        Caching localhost => 127.0.0.1
        Connecting to localhost|127.0.0.1|:8000... connected.
        Created socket 3.
        Releasing 0x000504d0 (new refcount 1).
        ---request begin---
        POST /api/login HTTP/1.0
        User-Agent: Wget/1.10.2 (Red Hat modified)
        Accept: */*
        Host: localhost:8000
        Connection: Keep-Alive
        Content-Type: application/x-www-form-urlencoded
        Content-Length: 32
        ---request end---
        [POST data: username=USERNAME&password=PASSWORD]
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 200 OK
        Content-Length: 34
        Content-Type: application/json
        Cache-Control: no-cache, must-revalidate
        Set-cookie: beaker.session.id=405390ddc809efed54820638c95d7997; expires=Tue, 19-Jan-2038 04:14:07 GMT; Path=/
        Connection: Keep-Alive
        Date: Sat, 23 Jun 2012 21:31:11 GMT
        Server: CherryPy/3.1.2 WSGI Server
        ---response end---
        200 OK
        hs->local_file is: login (not existing)
        Registered socket 3 for persistent reuse.
        TEXTHTML is on.
        Length: 34 [application/json]
        Saving to: `login'
        100%[=======================================>] 34  --.-K/s  in 0s
        22:31:11 (1.28 MB/s) - `login' saved [34/34]
        Removing file due to --delete-after in main(): Removing login.
        Saving cookies to my_cookies.txt.
        Done saving cookies.

    Can anyone tell me what I am doing wrong? Thanks in advance!
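
    Wget 1.10.2 is quite old, so one quick cross-check is whether a different client stores the same cookie; a curl equivalent of the same login (same URL and credentials placeholders as above):

        curl -c my_cookies.txt -d "username=USERNAME&password=PASSWORD" http://localhost:8000/api/login

    If my_cookies.txt comes back populated with beaker.session.id after that, the problem is on the wget side (version or header handling) rather than in the pyLoad API call itself.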

    Read the article

  • DNS requests failing from computers that can ping DNS server

    - by dunxd
    I have a situation where computers in some of our remote offices from time to time lose the ability to use our DNS server (in head office) to resolve hostnames. The offices are connected via VPN using Cisco ASA 5505 (VPNclient config rather than Site to Site). Ping to the IP address of the DNS server works. But nslookup will get a "no response from server" message. Computers in other locations can use DNS fine. This is an intermittent problem. One day/hour it works, another it doesn't. Other offices connected in the same way work when another doesn't. No config changes have been made on routers around the time we see the problem. Some users have reported that the problem goes away after doing a repair connection in Windows XP. I think this could be caused by the DNS cache being flushed as part of this - the Windows DNS cache makes the intermittent problem look less so because it caches failed lookups as well as successful ones. However, it is possible some other aspect of Windows is involved. Windows 7 clients have also had the same problem. Any pointers on deeper troubleshooting, or anyone else found this?
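
    One way to narrow this down while it is happening is to compare UDP and TCP lookups against the head-office DNS server from an affected client; a rough sketch (substitute the real server IP and a known internal name):

        nslookup intranet.example.local 10.0.0.10
        nslookup -vc intranet.example.local 10.0.0.10

    The -vc flag forces nslookup to use TCP. If TCP succeeds while plain UDP times out, the usual suspects over an ASA VPN are fragmented UDP DNS replies or DNS inspection limits dropping them (MTU and the "inspect dns" maximum message length), rather than the DNS server itself.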

    Read the article

  • Specific DNS sometimes resolves to wildcard, incorrectly

    - by Mojo
    I have an intermittent problem, and I'm not sure where to start trying to troubleshoot it. In our dev environment, we have two visible IP addresses on load balancers, one to the front-end, and one to a number of back-end service machines. The front-end is configured to take a wildcard DNS name to support generic "portals":

        dev.example.com     A      10.1.1.1
        *.dev.example.com   CNAME  dev.example.com

    The back-end servers are all specific names within the same space:

        core.dev.example.com    A      10.1.1.2
        cms.dev.example.com     CNAME  core.dev.example.com
        search.dev.example.com  CNAME  core.dev.example.com

    Here's the problem. Periodically a developer or a program trying to reach, say, cms.dev.example.com will get a result that points to the front-end, instead of the back-end load balancer:

        cms.dev.example.com is an alias to core.dev.example.com
        core.dev.example.com is an alias to dev.example.com   (WRONG!)
        dev.example.com 10.1.1.1

    The developers are all on Mac OS X machines, though I've seen the problem occur on an Ubuntu machine as well, using a local cloud host DNS resolver. Sometimes the developer is using a VPN, which directs the DNS to its own resolver, and sometimes he's on the local net using a DNS resolver assigned by the NAT router. Sometimes clearing the Mac OS X DNS cache, logging into the VPN, then logging out of the VPN, will make the problem go away. The origin authoritative server is on Zerigo, and a dig directly to their name servers always seems to give the correct answer. The published DNS cache time for these records is 15 minutes, but the problem has been intermittent for about a week. Any troubleshooting suggestions?

    Read the article

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind Nginx, with Nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which caches via Memcached):

        100 concurrent calls to API call (http):   115ms (median)  260ms (max)
        100 concurrent calls to API call (https):  6.1s (median)   11.9s (max)

    I've done a bit of research, disabled the most expensive SSL ciphers and enabled SSL caching (I know it doesn't help in this particular test). Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8 CPUs, and even applying consistent load to it only brings it up to 50% total CPU. I have 8 Nginx workers set and a bunch of Apache workers. Currently this whole setup is on one EC2 box, but I plan to split it up and load balance it. There have been a few questions on this topic, but none of those answers (disable expensive ciphers, cache SSL) seem to do anything. Sample results below:

        $ ab -k -n 100 -c 100 https://URL
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking URL.com (be patient).....done

        Server Software:        nginx/1.0.15
        Server Hostname:        URL.com
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,AES256-SHA,2048,256

        Document Path:          /PATH
        Document Length:        73142 bytes

        Concurrency Level:      100
        Time taken for tests:   12.204 seconds
        Complete requests:      100
        Failed requests:        0
        Write errors:           0
        Keep-Alive requests:    0
        Total transferred:      7351097 bytes
        HTML transferred:       7314200 bytes
        Requests per second:    8.19 [#/sec] (mean)
        Time per request:       12203.589 [ms] (mean)
        Time per request:       122.036 [ms] (mean, across all concurrent requests)
        Transfer rate:          588.25 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       65  168   64.1    162     268
        Processing:   385 6096 3438.6   6199   11928
        Waiting:      379 6091 3438.5   6194   11923
        Total:        449 6264 3476.4   6323   12196

        Percentage of the requests served within a certain time (ms)
          50%   6323
          66%   8244
          75%   9321
          80%   9919
          90%  11119
          95%  11720
          98%  12076
          99%  12196
         100%  12196 (longest request)
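
    The ab output hints at two separate costs: the AES256-SHA handshake with a 2048-bit key, and the fact that "Keep-Alive requests: 0" on the HTTPS run, so every one of the 100 connections pays a full handshake. A hedged nginx sketch of the settings usually tried for this (values are illustrative, not tuned for this site):

        # inside the server { listen 443 ssl; ... } block
        ssl_session_cache    shared:SSL:10m;   # resume sessions across workers
        ssl_session_timeout  10m;
        ssl_ciphers          RC4-SHA:AES128-SHA:HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;
        keepalive_timeout    65;

    Re-running the benchmark with keep-alive actually negotiated (or with a client that resumes TLS sessions) separates handshake cost from per-request cost; if the numbers stay in the seconds range even with resumed sessions, the slowdown is more likely in the proxying/backend path than in the crypto.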

    Read the article

  • open_basedir problems with APC and Symfony2

    - by Stephen Orr
    I'm currently setting up a shared staging environment for one of our applications, written in PHP 5.3 and using the Symfony2 framework. If I only host a single instance of the application per server, everything works as it should. However, if I then deploy additional instances of the application (which may or may not share the exact same code, depending on client customisations), I get errors like this:

        [Tue Nov 06 10:19:23 2012] [error] [client 127.0.0.1] PHP Warning:  require(/var/www/vhosts/application1/httpdocs/vendor/doctrine-common/lib/Doctrine/Common/Annotations/AnnotationRegistry.php): failed to open stream: Operation not permitted in /var/www/vhosts/application2/httpdocs/app/bootstrap.php.cache on line 1193
        [Tue Nov 06 10:19:23 2012] [error] [client 127.0.0.1] PHP Fatal error:  require(): Failed opening required '/var/www/vhosts/application1/httpdocs/app/../vendor/doctrine-common/lib/Doctrine/Common/Annotations/AnnotationRegistry.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/vhosts/application2/httpdocs/app/bootstrap.php.cache on line 1193

    Basically, the second site is trying to require files from the first site, but due to open_basedir restrictions it can't do that. I'm not willing to disable open_basedir, as that would only mask the problem instead of solving it, and would create a dependency between applications that should not be present. I initially believed this was related to a Symfony2 error, but I've now tracked it down to an issue with APC; disabling APC also solves the error, but I'm concerned about the performance impact of doing so. Does anyone have any suggestions on what I might be able to do?
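
    One containment option, short of disabling APC server-wide, is to leave the opcode cache on globally but switch it off only for the vhosts that collide; a sketch assuming PHP runs as mod_php (directive placement would differ under FastCGI, and this treats the symptom rather than APC's path-cache behaviour itself):

        # in the affected <VirtualHost> block (or a per-vhost php.ini)
        php_admin_flag apc.cache_by_default Off

    That keeps the performance benefit for the sites that behave while you work out why cached entries from application1 are being served to application2.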

    Read the article

  • Are These Parts compatible?

    - by ell
    I have never assembled a PC before, although I have taken an old one apart and replaced a few parts in others here and there, so I have (very) limited experience. I have been looking to build a PC and here are the parts I might buy:

        Foxconn P45AL Intel P45 (Socket 775) DDR2 motherboard (with onboard sound, I believe)
        Gigabyte GeForce GTX 460 OC 768MB GDDR5 PCI-Express graphics card
        2x 1GB sticks of dual-channel DDR2 memory (already have)
        Intel Core 2 Quad Q8400 LGA775 'Yorkfield' 2.66GHz 4MB-cache processor
        Samsung SpinPoint F3 1TB SATA-II 32MB cache hard drive
        Antec Dark Fleet Series DF10 gaming enclosure - black
        (I already have a monitor, mouse, keyboard and DVD/CD drive)
        Akasa Freedom Power 1000W modular power supply

    I have never done this before, so feel free to laugh at me for getting something obvious wrong, forgetting a vital component, etc., but is all of this compatible? And have I gone overkill on the PSU? If so, please recommend one. Thanks in advance, ell.

    EDIT: Added the PSU, which I forgot to mention.

    EDIT: I would be using this to surf the internet, write e-mails, chat, word process, play games such as Team Fortress 2 & Spring RTS (at highest graphics hopefully), do some 3D modelling in Blender, some OpenGL programming, and image editing in GIMP.

    Read the article
