Search Results

Search found 16528 results on 662 pages for 'chen xiao long'.


  • Why is Excel 2010/2013 taking 10 seconds to open any file?

    - by jbkly
    I have a fast Windows 7 PC with two SSDs and 16GB of RAM, so I'm used to programs loading very fast. But recently, for no reason I can figure out, Excel has started taking way too long to open Excel files (of any size--even blank files). This is occurring with Excel 2010 and with Excel 2013 after I upgraded, hoping to solve the problem. Here a couple scenarios: If I start Excel directly, it opens almost instantly. No problem there. If I start Excel directly, and then open any Excel file (.xls or .xlsx), it loads almost instantly. Still no problem BUT if I attempt to open any Excel file directly, with Excel not running, it consistently takes 10-11 seconds for Excel to start. I get no error messages, just a spinning cursor for 10-11 seconds, and then the file opens. During the delay while Excel is trying to start, I'm not really seeing any discernible spike in CPU or memory usage, other than explorer.exe. This problem is only occurring with Excel, not Word or any other program I'm aware of. I've searched around quite a bit on this question and found various others who have experienced it, but the solutions that worked for them are not working for me. For a few people it was a problem with scanning network drives, but my problem is purely with local files; I have no network drives, and the problem persists even with all network connections disabled. Some people suggested worksheets with corrupted formulas or links, but I'm experiencing this with ANY Excel file: even blank worksheets. Others thought it was a problem with add-ins, but I have all Excel add-ins disabled (as far as I can tell). One person solved it by disabling a "clipboard manager" process that was running in the background, but I don't have that. I've disabled as many startup and background processes as I can, but the problem persists. I've run malware scans, disk cleanup, CCleaner, and installed Excel 2013. I've deleted temporary files, enabled SuperFetch, and edited registry keys. Still can't get rid of the problem. Any ideas? My system details: Windows 7 Professional SP1 64-bit, Excel 2013 32-bit, 16GB RAM, all programs installed on SSD.
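
    A diagnostic angle that fits the symptom (slow only when Explorer launches Excel through the file association): inspect how the shell hands the file off. This is a sketch, not a fix, and it assumes the default Office ProgId for .xlsx; if the launch command contains a /dde switch, the delay is often spent waiting on the DDE handshake between Explorer and Excel.

        rem Show the association and the command line Explorer uses (run in cmd):
        assoc .xlsx
        ftype Excel.Sheet.12

        rem The same command also lives in the registry; look for a /dde switch:
        reg query "HKLM\SOFTWARE\Classes\Excel.Sheet.12\shell\Open\command"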

    Read the article

  • openstack, bridging, netfilter and dnat

    - by Craig Sanders
    In a recent upgrade (from Openstack Diablo on Ubuntu Lucid to Openstack Essex on Ubuntu Precise), we found that DNS packets were frequently (almost always) dropped on the bridge interface (br100). For our compute-node hosts, that's a Mellanox MT26428 using the mlx4_en driver module. We've found two workarounds for this: Use an old lucid kernel (e.g. 2.6.32-41-generic). This causes other problems, in particular the lack of cgroups and the old version of the kvm and kvm_amd modules (we suspect the kvm module version is the source of a bug we're seeing where occasionally a VM will use 100% CPU). We've been running with this for the last few months, but can't stay here forever. With the newer Ubuntu Precise kernels (3.2.x), we've found that if we use sysctl to disable netfilter on bridge (see sysctl settings below) that DNS started working perfectly again. We thought this was the solution to our problem until we realised that turning off netfilter on the bridge interface will, of course, mean that the DNAT rule to redirect VM requests for the nova-api-metadata server (i.e. redirect packets destined for 169.254.169.254:80 to compute-node's-IP:8775) will be completely bypassed. Long-story short: with 3.x kernels, we can have reliable networking and broken metadata service or we can have broken networking and a metadata service that would work fine if there were any VMs to service. We haven't yet found a way to have both. Anyone seen this problem or anything like it before? got a fix? or a pointer in the right direction? Our suspicion is that it's specific to the Mellanox driver, but we're not sure of that (we've tried several different versions of the mlx4_en driver, starting with the version built-in to the 3.2.x kernels all the way up to the latest 1.5.8.3 driver from the mellanox web site. The mlx4_en driver in the 3.5.x kernel from Quantal doesn't work at all) BTW, our compute nodes have supermicro H8DGT motherboards with built-in mellanox NIC: 02:00.0 InfiniBand: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE] (rev b0) we're not using the other two NICs in the system, only the Mellanox and the IPMI card are connected. Bridge netfilter sysctl settings: net.bridge.bridge-nf-call-arptables = 0 net.bridge.bridge-nf-call-iptables = 0 net.bridge.bridge-nf-call-ip6tables = 0 Since discovering this bridge-nf sysctl workaround, we've found a few pages on the net recommending exactly this (including Openstack's latest network troubleshooting page and a launchpad bug report that linked to this blog-post that has a great description of the problem and the solution)....it's easier to find stuff when you know what to search for :), but we haven't found anything on the DNAT issue that it causes.
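
    For readers weighing the same trade-off: the metadata redirect that switching off bridge-nf bypasses looks roughly like the rule sketched below (the chain name and exact match vary by nova version), so it is easy to confirm on a compute node whether metadata traffic is still being caught.

        # list the NAT rules touching the metadata address on the compute node
        iptables -t nat -S | grep 169.254.169.254
        #   expected shape (nova-network era, names vary by release):
        #   -A nova-network-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 \
        #      -j DNAT --to-destination <compute-node-ip>:8775

        # and check whether bridged traffic is currently hitting iptables at all
        sysctl net.bridge.bridge-nf-call-iptables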

    Read the article

  • VMware network bug in multiple VMware Workstation versions when using a hardcoded IP address

    - by onyxruby
    I'm having a very tricky problem with some of my VM sessions being unable to reach the Internet or even ping the gateway. I have just set up a new VM Workstation (7) on a W2K8 64bit server (I'll be converting to ESXI 4 once I can find a decent book on it, so for the meanwhile I use workstation). I have imported a number of VM's and setup some new ones on the server.In short the problem with some of the VM's being unable to reach the Internet is that they can't reach the gateway. I've looking at a number of things and can pretty safely rule out the following: Switch, Router, DHCP Server, DNS, Client IP configuration, Routes and typos. The problem is that some of the new clients cannot reach the gateway if their IP address is hardcoded, they can't even ping it by IP address. That rules out DNS and DHCP. Now, if I allow them to get their IP address by DHCP they can reach the gateway and Internet without issue. The interesting thing on this, is that this behavior occurs even if I leave the DNS information hardcoded under TCP/IP settings. It doesn't work unless the IP and gateway are handed out by DHCP even though the same information IP info is being used by the host. Fundamentally from the standpoint of the clients, they are trying to reach the exact same gateway using the exact same IP information regardless of whether they are hardcoded or assigned by DHCP. Here's an example of one client. IP Address 192.168.7.66 - Subnet Mask 255.255.255.0 - Gateway 192.168.7.254 - DNS1 192.168.7.44 - DNS2 192.168.7.254. The issue occurs across six different microsoft operating systems, Windows 7 and Windows 2008 variants all have the issue. My W2K3, XP, Vista and W98 clients all work without issue with hardcoded IP addresses. I have tried things like rearranging the DNS order, flushing DNS and so on. It's not a routing or switch issue as the clients can work just fine if they get their IP by DHCP. It's not a paramater issue as the exact same paramaters are handed out by DHCP as I plug in by hand. It's not a DNS issue as clients cant reach other clients even with IP addresses only. I have run a tracert to the gateway by IP address and it times out on the very first hop before failing on hop3 with destination host unreachable. If I get the IP address by DHCP the tracert finds the gateway (and Internet) without issue. I have read a few other posts online in forums talking about this problem randomly occuring over the years in other VM versions as well, so I suspect some kind of long standing bug. Does anyone have any ideas on this? Is it possibly a bug with Windows 7 and W2K clients under VM?

    Read the article

  • vagrant up command very slow on OS X Lion

    - by Andy Hume
    When I run vagrant up to provision a new VM on Lion it takes an extremely long time, during which the entire Mac is very laggy and unresponsive. The output is as follows, the key point being the "notice: Finished catalog run in 754.28 seconds" > vagrant up [default] Importing base box 'lucid64'... [default] The guest additions on this VM do not match the install version of VirtualBox! This may cause things such as forwarded ports, shared folders, and more to not work properly. If any of those things fail on this machine, please update the guest additions and repackage the box. Guest Additions Version: 4.1.0 VirtualBox Version: 4.1.6 [default] Matching MAC address for NAT networking... [default] Clearing any previously set forwarded ports... [default] Forwarding ports... [default] -- ssh: 22 => 2222 (adapter 1) [default] -- web: 80 => 4567 (adapter 1) [default] Creating shared folders metadata... [default] Running any VM customizations... [default] Booting VM... [default] Waiting for VM to boot. This can take a few minutes. [default] VM booted and ready for use! [default] Mounting shared folders... [default] -- v-root: /vagrant [default] -- v-data: /var/www [default] -- manifests: /tmp/vagrant-puppet/manifests [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with lucid64.pp... [default] stdin: is not a tty [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully [default] [default] notice: /Stage[main]/Lucid64/Package[apache2]/ensure: ensure changed 'purged' to 'present' [default] [default] notice: /Stage[main]/Lucid64/File[/etc/motd]/ensure: defined content as '{md5}a25e31ba9b8489da9cd5751c447a1741' [default] [default] notice: Finished catalog run in 754.28 seconds [default] [default] err: /File[/var/lib/puppet/rrd]/ensure: change from absent to directory failed: Could not find group puppet [default] [default] err: Could not send report: Got 1 failure(s) while initializing: change from absent to directory failed: Could not find group puppet [default] [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with lucid64.pp... [default] stdin: is not a tty [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully [default] [default] notice: Finished catalog run in 2.05 seconds [default] [default] err: /File[/var/lib/puppet/rrd]: Could not evaluate: Could not find group puppet [default] [default] err: Could not send report: Got 1 failure(s) while initializing: Could not evaluate: Could not find group puppet [default] [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with lucid64.pp... [default] stdin: is not a tty [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully [default] [default] notice: Finished catalog run in 1.36 seconds [default] [default] err: /File[/var/lib/puppet/rrd]: Could not evaluate: Could not find group puppet [default] [default] err: Could not send report: Got 1 failure(s) while initializing: Could not evaluate: Could not find group puppet [default] >
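
    Two separate things are visible in that output: the 754-second catalog run appears to be dominated by the first-boot apt update and the apache2 install (later runs finish in about 2 seconds), and the repeated "Could not find group puppet" errors just mean the lucid64 box has no puppet group for the report/rrd step. A minimal sketch of the group fix, assuming you can shell into the box:

        # inside the guest (vagrant ssh); -f makes it a no-op if the group exists
        sudo groupadd -f puppet

        # then re-run only the provisioners from the host
        exit
        vagrant provision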

    Read the article

  • Uploading via HTTP POST (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via a http post request to my apache web server, very big files (i.e. 30MB) are silently discarded. On the server side all looks as if the attached file was received with 0 bytes size. On the client side all looks like it had been uploaded succesfully (it takes the expected long time to upload and the browser gives no error message). On the server, nothing is logged into the error log. An entry is logged into the access log as if everything was ok (a post request and a 200 ok response). These uploads are being posted to a php script. In the php script, If I print_r $_FILES, I see the following information for the relevant file: [file5] => Array ( [name] => MOV023.3gp [type] => video/3gpp [tmp_name] => /tmp/phpgOdvYQ [error] => 0 [size] => 0 ) Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file was empty). My php script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0byte files. I've already changed the php directives max_upload_size to 50M and post_max_size to 200M, so neither the single file nor the request exceed any size limit. max_execution_time is not relevant, because the time to transfer the data does not count; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary since this is the time taken to parse the input data, not the time taken to upload it. Is there any apache configuration, prior to php, that could be causing these files to be discarded even prior to php execution? Some limit in size or in upload time? I've read about a default 300 seconds timeout limit, but this should apply to the time the connection is idle, not the time it takes while actually transferring data, right? Needless to say, uploads with all exactly identical conditions (including file format, client and everything) except smaller file size, work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.
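
    A hedged note: the PHP directive is upload_max_filesize (there is no max_upload_size), and the php.ini Apache actually loads is not always the one that was edited, so confirming the effective values with phpinfo() is a cheap first check. Apache-side body limits are worth ruling out too, though an exceeded LimitRequestBody normally produces a 413 rather than the 200 logged here. Illustrative settings:

        # Apache (httpd.conf, vhost, or .htaccess) -- 0 means unlimited; a low value
        # here cuts off large request bodies before PHP ever sees them (mod_security's
        # SecRequestBodyLimit behaves similarly if that module is loaded):
        LimitRequestBody 0

        ; php.ini -- note the exact directive names:
        upload_max_filesize = 50M
        post_max_size = 200M
        max_input_time = 1000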

    Read the article

  • gmail dkim=neutral (no signature)

    - by Bretticus
    After testing much and retracing my steps, I still cannot get google mail to validate. My mail server is Debian 5.0 with exim Exim version 4.72 #1 built 31-Jul-2010 08:12:17 Copyright (c) University of Cambridge, 1995 - 2007 Berkeley DB: Berkeley DB 4.8.24: (August 14, 2009) Support for: crypteq iconv() IPv6 PAM Perl Expand_dlfunc GnuTLS move_frozen_messages Content_Scanning DKIM Old_Demime Lookups: lsearch wildlsearch nwildlsearch iplsearch cdb dbm dbmnz dnsdb dsearch ldap ldapdn ldapm mysql nis nis0 passwd pgsql sqlite Authenticators: cram_md5 cyrus_sasl dovecot plaintext spa Routers: accept dnslookup ipliteral iplookup manualroute queryprogram redirect Transports: appendfile/maildir/mailstore/mbx autoreply lmtp pipe smtp Fixed never_users: 0 Size of off_t: 8 GnuTLS compile-time version: 2.4.2 GnuTLS runtime version: 2.4.2 Configuration file is /var/lib/exim4/config.autogenerated My remote smtp transport configuration: remote_smtp: debug_print = "T: remote_smtp for $local_part@$domain" driver = smtp helo_data = mailer.mydomain.com dkim_domain = mydomain.com dkim_selector = mailer dkim_private_key = /etc/exim4/dkim/mailer.mydomain.com.key dkim_canon = relaxed .ifdef REMOTE_SMTP_HOSTS_AVOID_TLS hosts_avoid_tls = REMOTE_SMTP_HOSTS_AVOID_TLS .endif .ifdef REMOTE_SMTP_HEADERS_REWRITE headers_rewrite = REMOTE_SMTP_HEADERS_REWRITE .endif .ifdef REMOTE_SMTP_RETURN_PATH return_path = REMOTE_SMTP_RETURN_PATH .endif .ifdef REMOTE_SMTP_HELO_FROM_DNS helo_data=REMOTE_SMTP_HELO_DATA .endif The path to my private key is correct. I see a DKIM header in my messages as they end up in my gmail account: DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mydomain.com; s=mailer; h=Content-Type:MIME-Version:Message-ID:Date:Subject:Reply-To:To:From; bh=nKgQAFyGv<snip>tg=; b=m84lyYvX6<snip>RBBqmW52m1ce2g=; However, gmail headers always report dkim=neutral (no signature): dkim=neutral (no signature) [email protected] My DNS results: dig +short txt mailer._domainkey.mydomain.com mailer._domainkey. mydomain.com descriptive text "v=DKIM1\; k=rsa\; t=y\; p=LS0tLS1CRUdJ<snip>M0RRRUJBUVV" "BQTRHTkFEQ0J<snip>GdLamdaaG" "JwaFZkai93b3<snip>laSCtCYmdsYlBrWkdqeVExN3gxN" "mpQTzF6OWJDN3hoY21LNFhaR0NjeENMR0FmOWI4Z<snip>tLQo=" Note that the base64 public key is 364 chars long so I had to break up the key using bind9. $ORIGIN _domainkey. mydomain.com. mailer TXT ("v=DKIM1; k=rsa; t=y; p=LS0tLS1CRUdJTiBQVUJM<snip>U0liM0RRRUJBUVV" "BQTRHTkFEQ0JpUUtCZ1<snip>15MGdLamdaaG" "JwaFZkai93b3lDK21MR<snip>YlBrWkdqeVExN3gxN" "mpQTzF6OWJDN3hoY21L<snip>Ci0tLS0tRU5E" "IFBVQkxJQyBLRVktLS0tLQo=") Can anyone point me in the right direction? I would really appreciate it.
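
    One thing that stands out in the published record: the p= value starts with LS0tLS1CRUdJ..., which base64-decodes to "-----BEGI...", so it looks like the entire PEM file was base64-encoded a second time. DKIM expects only the inner base64 key material, with the BEGIN/END lines and line breaks stripped. A sketch of extracting the correct value with OpenSSL, using the key path and selector from the question:

        # derive the public key from the signing key exim is using
        openssl rsa -in /etc/exim4/dkim/mailer.mydomain.com.key -pubout -out mailer.pub

        # the p= value for mailer._domainkey.mydomain.com is the PEM body only:
        grep -v '^-----' mailer.pub | tr -d '\n'; echo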

    Read the article

  • ESXi 5.1 ghettoVCB stuck at Clone: 10% done

    - by stormdrain
    Trying to run ghettoVCB for the first time here. I am using a NAS that is set up as a datastore on the host. I did a dry run and it completed without error. The VM is ~500GB and there is only one on the host that I'm trying to backup. I proceeded to start the actual backup: ./ghettoVCB.sh -m vmname -g ghettoVCB.conf It goes though the config and looks like it's taking off: 2013-10-24 11:43:19 -- info: CONFIG - USING GLOBAL GHETTOVCB CONFIGURATION FILE = ghettoVCB.conf 2013-10-24 11:43:19 -- info: CONFIG - VERSION = 2013_01_11_0 2013-10-24 11:43:19 -- info: CONFIG - GHETTOVCB_PID = 17398616 2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nas2tb-001/esxi4 2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3 2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2013-10-24_11-43-18 2013-10-24 11:43:19 -- info: CONFIG - DISK_BACKUP_FORMAT = thin 2013-10-24 11:43:19 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0 2013-10-24 11:43:19 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0 2013-10-24 11:43:19 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 4 2013-10-24 11:43:19 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5 2013-10-24 11:43:19 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15 2013-10-24 11:43:19 -- info: CONFIG - LOG_LEVEL = info 2013-10-24 11:43:19 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB-2013-10-24_11-43-18-17398616.log 2013-10-24 11:43:19 -- info: CONFIG - ENABLE_COMPRESSION = 0 2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0 2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0 2013-10-24 11:43:19 -- info: CONFIG - ALLOW_VMS_WITH_SNAPSHOTS_TO_BE_BACKEDUP = 0 2013-10-24 11:43:19 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all 2013-10-24 11:43:19 -- info: CONFIG - VM_SHUTDOWN_ORDER = 2013-10-24 11:43:19 -- info: CONFIG - VM_STARTUP_ORDER = 2013-10-24 11:43:19 -- info: CONFIG - EMAIL_LOG = 0 2013-10-24 11:43:19 -- info: 2013-10-24 11:43:22 -- info: Initiate backup for vmname 2013-10-24 11:43:22 -- info: Creating Snapshot "ghettoVCB-snapshot-2013-10-24" for serv2 Destination disk format: VMFS thin-provisioned Cloning disk '/vmfs/volumes/esxi4-storage/vmname/vmname_1.vmdk'... Clone: 10% done. and it's been that way for over an hour now. Stuck at Clone: 10% done.. Thing is: I can see the vmdk on the NAS. And it looks like almost the whole thing is there. On the NAS it's showing ~430GB but on vSphere Client Summary is shows as 507GB. I don't see the vmdk on the NAS growing any more. The logfile mimics some of the above and is sitting at "Creating Snapshot..." and nothing else is coming in. Is the vmdk on the NAS showing all those GB because of the provisioning or something? i.e. is the size of the file not necessarily indicative of the amount of actual data that has been copied? Is there are reason it might be "Stuck" at 10%? i.e. could it really be taking this long? Any other tips? Thanks. Edit: as soon as I hit the Submit button, I glance over to see that it has incremented to 11% done. Good to know it'll be complete sometime around when the sun explodes.

    Read the article

  • How can I troubleshoot a "Hardware Malfunction" blue screen?

    - by AaronSieb
    My computer has suddenly started crashing to a blue screen with the following text: hardware malfunction call your hardware vendor for support *the system has halted* The crash occurs randomly during normal use. I have thus far always been able to reproduce it by transferring the contents of a large folder... But I'm not sure if this is caused by the file transfer, or simply because the transfer takes long enough for something else to trigger it. A bit about my hardware I have an dual core Intel CPU, and Asus motherboard. Video card is by nVidia, and connects via PCIe. My hard drives are in pairs, and connect via SATA to a RAID controller on the motherboard. They are configured to use a RAID0 configuration. What I've tried so far There is nothing in the Windows Event Log. WhoCrashed was unable to find any crash records. ScanDisk runs to completion (it launches prior to Windows load) and reports no errors. MemTest reports no errors (to 200% coverage). System temperatures are in the range of 40 to 50 degrees Celsius, with video card temperatures in the range of 60 to eighty degrees Celsius. I have stripped the system down to a minimal configuration (hard drive, video card, one memory module, motherboard, CPU, power supply). The problem still occurrs. However, this has allowed me to rule out a few components: It is not the video card because the problem still occurred after replacing the video card another one I had on hand. It is not the hard drive or anything software related because the problem occurred after a fresh installation of Windows on a replacement hard drive. It is not the hard drive cables because I replaced those with new ones and still had the problem. It is not the power supply because the problem still occurred after replacing the power supply with another one I had on hand. It is probably not the memory because I've tried three different memory modules in three different memory slots and was still able to replicate the issue. Is there anything I can do to confirm what's causing the issue? At the moment it seems as though it must be either the motherboard or CPU, but those are both difficult components to replace... In addition, both components are relatively new (two to three years old). I will gladly edit in any additional information I can get my hands on, and/or focus the question as I can find more details...

    Read the article

  • How to properly add .NET assemblies to a PowerShell session?

    - by amandion
    I have a .NET assembly (a dll) which is an API to backup software we use here. It contains some properties and methods I would like to take advantage of in my Powershell script(s). However, I am running into a lot of issues with first loading the assembly, then using any of the types once the assembly is loaded. The complete file path is: C:\rnd\CloudBerry.Backup.API.dll In Powershell I use: $dllpath = "C:\rnd\CloudBerry.Backup.API.dll" Add-Type -Path $dllpath I get the error below: Add-Type : Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information. At line:1 char:9 + Add-Type <<<< -Path $dllpath + CategoryInfo : NotSpecified: (:) [Add-Type], ReflectionTypeLoadException + FullyQualifiedErrorId : System.Reflection.ReflectionTypeLoadException,Microsoft.PowerShell.Commands.AddTypeComma ndAdd-Type : Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information. Using the same cmdlet on another .NET assembly, DotNetZip, which has examples of using the same functionality on the site also does not work for me. I eventually find that I am seemingly able to load the assembly using reflection: [System.Reflection.Assembly]::LoadFrom($dllpath) Although I don't understand the difference between the methods Load, LoadFrom, or LoadFile that last method seems to work. However, I still seem to be unable to create instances or use objects. Each time I try, I get errors that describe that Powershell is unable to find any of the public types. I know the classes are there: $asm = [System.Reflection.Assembly]::LoadFrom($dllpath) $cbbtypes = $asm.GetExportedTypes() $cbbtypes | Get-Member -Static ---- start of excerpt ---- TypeName: CloudBerryLab.Backup.API.BackupProvider Name MemberType Definition ---- ---------- ---------- PlanChanged Event System.EventHandler`1[CloudBerryLab.Backup.API.Utils.ChangedEventArgs] PlanChanged(Sy... PlanRemoved Event System.EventHandler`1[CloudBerryLab.Backup.API.Utils.PlanRemoveEventArgs] PlanRemoved... CalculateFolderSize Method static long CalculateFolderSize() Equals Method static bool Equals(System.Object objA, System.Object objB) GetAccounts Method static CloudBerryLab.Backup.API.Account[], CloudBerry.Backup.API, Version=1.0.0.1, Cu... GetBackupPlans Method static CloudBerryLab.Backup.API.BackupPlan[], CloudBerry.Backup.API, Version=1.0.0.1,... ReferenceEquals Method static bool ReferenceEquals(System.Object objA, System.Object objB) SetProfilePath Method static System.Void SetProfilePath(string profilePath) ----end of excerpt---- Trying to use static methods fail, I don't know why!!! [CloudBerryLab.Backup.API.BackupProvider]::GetAccounts() Unable to find type [CloudBerryLab.Backup.API.BackupProvider]: make sure that the assembly containing this type is load ed. At line:1 char:42 + [CloudBerryLab.Backup.API.BackupProvider] <<<< ::GetAccounts() + CategoryInfo : InvalidOperation: (CloudBerryLab.Backup.API.BackupProvider:String) [], RuntimeException + FullyQualifiedErrorId : TypeNotFound Any guidance appreciated!!
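
    The "Retrieve the LoaderExceptions property" hint is worth taking literally: that collection usually names the exact dependency Add-Type could not resolve. A small sketch (the dependency file name in the comment is hypothetical; use whatever LoaderExceptions actually reports, typically a dll shipped alongside the API):

        $dllpath = "C:\rnd\CloudBerry.Backup.API.dll"
        try {
            Add-Type -Path $dllpath -ErrorAction Stop
        } catch {
            # for a ReflectionTypeLoadException the real reason is in here
            $_.Exception.LoaderExceptions | ForEach-Object { $_.Message }
        }

        # if a dependency is named, load it first (file name below is hypothetical):
        # [System.Reflection.Assembly]::LoadFrom("C:\rnd\CloudBerry.Base.dll") | Out-Null
        # [System.Reflection.Assembly]::LoadFrom($dllpath) | Out-Null

    Once the assembly and whatever it depends on load without errors in the same session, the [CloudBerryLab.Backup.API.BackupProvider] type literal should resolve and the static GetAccounts() call has a chance of working.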

    Read the article

  • ActiveMQ Pure Master / Slave - Out of sync

    - by pico
    What i have : 1 master broker and 1 slave broker both in ActiveMQ 5.4.0 What i use : waitForSlave on master side and failover uri on slave side (in the master connector URI) What i want to do : I want to wait some delay (like 5 seconds) in case of a tiny network failures between master and slave before starting slave transpôrt connectors So i put this in slave config : <broker xmlns="http://activemq.apache.org/schema/core" brokerName="slave" dataDirectory="${activemq.base}/data" useJmx="true" persistent="true" populateJMSXUserID="true" masterConnectorURI="failover://(tcp://master:61616)?initialReconnectDelay=1000&amp;maxReconnectDelay=30000" shutdownOnMasterFailure="false" advisorySupport="false"> It seems to work but after a network hang between master and slave, the slave reconnect successfully and then the master logs a lot of : 2010-10-18 17:08:44,421 | ERROR | Slave Failed | org.apache.activemq.broker.ft.MasterBroker | ActiveMQ Task java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1040-634226732611718750-0:0 at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93) at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412) at org.apache.activemq.broker.TransportConnection.processRemoveConsumer(TransportConnection.java:561) at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:76) at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309) at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185) at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116) at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69) at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218) at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98) at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36) On the slave side everything is fine. So after that, i've tried to stop the master to see if the slave is capable of turning master after these "network hangs". 
The master took long time to shutdown (10 seconds) and then some error message appears in slave logs : 2010-10-18 17:09:32,915 | WARN | Async error occurred: java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3 | org.apache.activemq.broker.TransportConnection.Service | VMTransport: vm://slave#5 java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3 at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93) at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412) at org.apache.activemq.broker.TransportConnection.processRemoveSession(TransportConnection.java:600) at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:74) at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309) at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185) at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116) at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69) at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218) at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98) at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36) Are they any ways to keep my kaha stores (they are individual stores) synchronised? The main problem is that the slave never turn master after a master failure, it stay block on this message : 2010-10-18 17:09:33,681 | WARN | Transport (master/172.21.60.61:61616) failed to tcp://master:61616 , attempting to automatically reconnect due to: java.net.SocketException: Software caused connection abort: socket write error | org.apache.activemq.transport.failover.FailoverTransport | ActiveMQ Transport: tcp://master/172.21.60.61:61616 I'm totally stuck with these syncs problems, any help welcome! Regards

    Read the article

  • Can RPM packages be installed into Cygwin?

    - by Dejian Zhao
    I noticed that there is a command - rpm - under Cygwin 1.7. Does that mean RPM packages can be installed into Cygwin? I tried to install ncbi-blast-2.2.26+-3.i686.rpm (see: ftp://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/LATEST/ ) into Cygwin 1.7.13 with the command "install -i ncbi-blast-2.2.26+-3.i686.rpm". However, error message appeared as below. I tried to search for the missing libs using the setup.exe of Cygwin. It seems that some of them were not present, such as libc.so.6, libdl.so.2, libm.so.6, libnsl.so.1, and libz.so.1. Where can I get these libs? Thanks! $ rpm -i ncbi-blast-2.2.26+-3.i686.rpm error: Failed dependencies: /usr/bin/perl is needed by ncbi-blast-2.2.26+-3 libbz2.so.1 is needed by ncbi-blast-2.2.26+-3 libc.so.6 is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.1) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.1.2) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.1.3) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.2) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.3) is needed by ncbi-blast-2.2.26+-3 libdl.so.2 is needed by ncbi-blast-2.2.26+-3 libdl.so.2(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libdl.so.2(GLIBC_2.1) is needed by ncbi-blast-2.2.26+-3 libgcc_s.so.1 is needed by ncbi-blast-2.2.26+-3 libgcc_s.so.1(GCC_3.0) is needed by ncbi-blast-2.2.26+-3 libgcc_s.so.1(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libm.so.6 is needed by ncbi-blast-2.2.26+-3 libnsl.so.1 is needed by ncbi-blast-2.2.26+-3 libpthread.so.0 is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.1) is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.2) is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.3.2) is needed by ncbi-blast-2.2.26+-3 librt.so.1 is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6 is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6(CXXABI_1.3) is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6(GLIBCXX_3.4) is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6(GLIBCXX_3.4.5) is needed by ncbi-blast-2.2.26+-3 libz.so.1 is needed by ncbi-blast-2.2.26+-3 perl(Archive::Tar) is needed by ncbi-blast-2.2.26+-3 perl(Digest::MD5) is needed by ncbi-blast-2.2.26+-3 perl(File::Temp) is needed by ncbi-blast-2.2.26+-3 perl(File::stat) is needed by ncbi-blast-2.2.26+-3 perl(Getopt::Long) is needed by ncbi-blast-2.2.26+-3 perl(Net::FTP) is needed by ncbi-blast-2.2.26+-3 perl(Pod::Usage) is needed by ncbi-blast-2.2.26+-3 perl(constant) is needed by ncbi-blast-2.2.26+-3 perl(strict) is needed by ncbi-blast-2.2.26+-3 perl(warnings) is needed by ncbi-blast-2.2.26+-3
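
    A hedged observation: the unresolved dependencies (libc.so.6, libpthread.so.0 with GLIBC_* versions, and so on) are Linux shared libraries, so the binaries inside this rpm are Linux ELF executables; Cygwin cannot provide those libraries or run those binaries. A quick way to confirm, assuming rpm2cpio, cpio and file are installed from Cygwin's setup (NCBI also publishes native win32/win64 BLAST+ builds, which is likely the easier route on Windows):

        # unpack the rpm payload without installing it
        rpm2cpio ncbi-blast-2.2.26+-3.i686.rpm | cpio -idmv

        # inspect one of the extracted binaries (path inside the payload may differ);
        # "ELF 32-bit ... for GNU/Linux" means it needs a Linux system, not Cygwin
        file ./usr/bin/blastn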

    Read the article

  • Multi-Role Domain Controllers for Small Offices (< 50 clients)

    - by kce
    Warning: I'm a Linux/*NIX admin so this is all new to me. I understand that it's not considered a good idea to have only a single domain controller, and that it is also probably a good idea for a domain controller to only do AD/DHCP/DNS (Here). We have two offices, location A with 30 users and location B with 10 users. Our two offices are separated by a WAN that is not particularly robust so I have be instructed that we need to have standalone services in each office. This means that according to "best practices" we will need to build a domain controller and a separate file server in each office. Again, I am not knowledgeable in the ways of Windows but this seems a little unnecessary for an organization of 40 users. People have commented that I could "get away with" running file services on the domain controller as long as the "load is light". That just seems to generate more questions than it answers. What constitutes light load? What are the potential consequences of mixing these roles? Ideally I would prefer to only have one physical machine at each location. The one in location A (the location with IT staff) can act as the primary domain controller and the one in the smaller office can act as the backup domain controller. If either domain controller fails we can still use the other one for authentication (albeit with some latency) and if the WAN connection fails each office still has access to their respective "local" domain controller. If the file services are ALSO run on each server (and synchronized with something like DFS), a similar arrangement in terms of redundancy can be had without having to purchase, build and install two additional separate servers. It's not that I'm adverse to that (well, any more adverse than I am to whole thing to begin with) but to my simple mind it just seems, well a bit overkill. I can definitely see the benefits of functional separation when we're talking larger organizations, but I need to consider the additional overhead too. None of this excludes having a DRP setup for the domain controller/s. I assume you can lose two domain controllers just as easily as one.

    Read the article

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this: Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters): current values are: 45984 61312 91968 new values are: 183936 245248 367872 After that, we started seeing a bizarre error message: Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long! Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval! Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval which defaults to 600 and controls periodic flushing of the route cache The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here. /proc/sys/net/ipv4/route/gc_elasticity (8) /proc/sys/net/ipv4/route/gc_interval (60) /proc/sys/net/ipv4/route/gc_min_interval (0) /proc/sys/net/ipv4/route/gc_timeout (300) /proc/sys/net/ipv4/route/secret_interval (600) /proc/sys/net/ipv4/route/gc_thresh (?) rhash_entries (kernel parameter, default unknown?) We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high traffic HAProxy instance?
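
    For reference, the changes described above collected into sysctl form (values are the ones from this post, not general recommendations; secret_interval is the daily-flush setting mentioned):

        # /etc/sysctl.conf additions; apply with `sysctl -p`
        net.ipv4.tcp_mem = 183936 245248 367872
        net.ipv4.route.gc_elasticity = 4
        net.ipv4.route.secret_interval = 86400

        # current values for the whole route.* family:
        sysctl -a | grep '^net.ipv4.route'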

    Read the article

  • Setting up a fileserver, some questions?

    - by Tanax
    Recently I've become very interested in setting up a fileserver, mostly for home usage but also because of the fact that I live in 2 places, I need to be able to access my files from both homes. I have already done some research into this but I am unclear about some things. My requirements are the following; Needs to work on both Mac and PC(only using Windows atm on PC but could be good if it supports more OS's to make it futureproof in case I need Linux or something else) Need to be able to set up a folder/drive/network space to act as a link to a certain folder on the fileserver All files should only be stored on the fileserver, e.g. no "shared" folders like in Dropbox where files are stored on the client computer Would prefer it if folders are password protected or that I can somehow specify what users can access the fileserver's shares Fileserver's OS most likely have to be Windows due to other factors outside of being just a fileserver I've already kinda figured out that I will need to set up a VPN so that I can access my fileserver from outside the local network. Probably going to use OpenVPN. Question 1: How would I go about to set up a VPN server so that I can connect to my local network at the fileserver's location? I know that since I'm on a dynamic IP I will have to get some sort of dynamic DNS server - I've already checked into this and I'm fairly sure I know how to fix that. I also know that I will have to forward the port OpenVPN uses in my router. Question 2: How would I actually share the folders on the fileserver so that I can access them on my other computers? I've researched into Samba but I'm uncertain if it needs to be run on a Linux OS. I know that the clients connecting to it can be Windows for example but can the Samba "server" be run on Windows? Also it appears that Samba shares a folder, meaning it works like Dropbox - I don't want that. So how would I share a folder in that case to make it work like I want it to? Sorry for the incredibly long question, I tried to structure it the best I could for easier read. Thanks in advance!
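
    A note on question 2: Samba is the Unix/Linux implementation of the SMB protocol, so a Windows fileserver does not need it at all -- a plain Windows shared folder with per-user share/NTFS permissions is the same thing, and mapping it on the clients gives the "files live only on the server" behaviour rather than Dropbox-style sync. Client-side mapping, with hypothetical server, share and user names:

        rem Windows client: map the share to a drive letter
        net use Z: \\fileserver\files /user:fileserver\me /persistent:yes

        # Mac client (or Finder > Go > Connect to Server > smb://fileserver/files)
        mkdir -p ~/files
        mount_smbfs //me@fileserver/files ~/files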

    Read the article

  • Need help making an ODBC MySQL Connection

    - by Andy Moore
    Short Version: How do I connect from PowerShell to an ODBC 5.1 MySQL Driver? I can't seem to find any connection strings that accurately have a "Provider" field for this particular instance. (See bottom of this question for examples/errors) ===== Long Version: I'm not a server guy, and I've been handed the task of setting up PowerGadgets on our network. I have a MySQL server running on a Linux box, that is configured for remote access and has a user defined for remote access as well. On my windows desktop PC, I have PowerGadgets installed. I installed the MySQL ODBC 5.1 connector, and went to Control Panel Data Sources and set up a User DSN connection to the database. The connection, user, and pass seem to be correct because it lists the tables of the database in my windows control panel. Where I'm running into trouble is in 3 places in PowerGadgets: When selecting a data source, I can select "SQL Server". Inputting the servers IP address does not work and I can't get this option to work at all. When selecting a data source, I can select "OleDB". This screen has a wizard on it, that appears to populate all the correct information (including database table names!) for me. "Test Connection" runs great. But if I try to complete the wizard, I get the error "The .NET Framework data provider for OLEDB does not support the MS Ole DB provider for ODBC Drivers." When selecting a data source, I can select "ODBC". This screen does not have a wizard and I cannot figure out a "connection string" that works. Typically it will respond with the error "The field 'Provider' is missing". Googling ODBC connection strings doesn't reveal any examples with a "provider" field and have no idea what to put in here. The connection string (for #2) above contains "SQLOLEDB" as a provider, and upon inputting that value into this connection string I get the same connection error that #2 gets. I believe I can solve my problems by figuring out a connection string for #3 but don't know where to get started. (PowerGadgets also allows for PowerShell support but I believe I will run into the same problem there) == Here's my current PowerShell connection that doesn't work: invoke-sql -connection "Driver={MySQL ODBC 5.1 Driver};Initial Catalog=hq_live;Data Source=HQDB" -sql "Select * FROM accounts" Spits back the error: "Invoke-Sql : An OLE DB Provider was not specified in the ConnectionString. An example would be, 'Provider=SQLOLEDB;'. == Another string that doesn't work: invoke-sql -connection "Provider=MSDASQL.1;Persist Security Info=False;Data Source=HQDB;Initial Catalog=hq_live" -sql "select * from accounts" And the error: The .Net Framework Data Provider for OLEDB (System.Data.OleDb) does not support the Microsoft OLE DB Provider for ODBC Drivers (MSDASQL). Use the .Net Framework Data Provider for ODBC (System.Data.Odbc).
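
    If PowerGadgets' ODBC path keeps demanding an OLE DB "Provider", one workaround in plain PowerShell is to skip OLE DB entirely and use the .NET ODBC classes against the existing DSN -- which is what the error text is hinting at. A sketch; the DSN, credentials and server address are placeholders based on the question's setup:

        $conn = New-Object System.Data.Odbc.OdbcConnection
        $conn.ConnectionString = "DSN=HQDB;Uid=someuser;Pwd=somepass;"
        # or, without the DSN:
        # $conn.ConnectionString = "Driver={MySQL ODBC 5.1 Driver};Server=192.168.0.10;Database=hq_live;Uid=someuser;Pwd=somepass;"

        $adapter = New-Object System.Data.Odbc.OdbcDataAdapter -ArgumentList "SELECT * FROM accounts", $conn
        $table   = New-Object System.Data.DataTable
        [void]$adapter.Fill($table)   # Fill opens and closes the connection itself
        $table

    Whether PowerGadgets' own invoke-sql can be pointed at the ODBC provider depends on how its -connection parameter is interpreted, so treat this as a plain-PowerShell fallback for pulling the data.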

    Read the article

  • Communication software wanted: email, sms, IM, phone calls [closed]

    - by user63835
    I am searching for a software solution that integrates / unifies my communication. I use email, instant messaging, SMS and phone. I would like to get all emails, SMS, instant messaging dialogs and meta-data about phone calls into one application. Important is that I can access all past communication with one application. There should be a global address-book to map the communication data to persons or organizations. I want all the communication data in one place to access and backup it easily. The software solution is not required to be multi-user application or server application. It is just for one user (me) only, but server or multi-user applications are not excluded. I may run it on a server hardware. It should run on Linux (Lubuntu / Ubuntu prefered). Free and OpenSource software is prefered. It would be nice if I could perform new communication (like writing a new email, sms, etc.) with one application, but that is not a must have requirement. I could also work with different applications dedicated for different types of communication like IM-application for IM and email-application for email, if all that communication data from the specialized applications will be delivered to one single place where I can access and backup it. I have an android phone and currently I am using Google contacts as the address-book. In the long term this may change, to get back the control over my data. I did some Internet search but did not find a nice solution, yet. If I am looking for unified messaging and unified communication, am I on the right track? The current Thunderbird version has IM functionality integrated. Did not try it, yet. For SMS it may be possible to use an app to send every SMS (incoming and outgoing) as an email, but I am not sure if those SMS-emails can be mapped to an address-book contact. I don't remember exactly, but isn't there a Google android app (I think Google voice) integrating SMS into google services? But in Germany this function has not been released, yet. Maybe a groupware solution would solve the requirements, but I don't have much experience with it. As communication possibilites are groing, I am woundering that there seems to be such a big gap of solutions. I can't believe I am the only one who would like solution, better integrating all the communication channels more easily. If you know a software solution that solves these requirements (partly) I would be glad if you tell me about it. Thanks in advance.

    Read the article

  • chkdsk, SeaTools, and "does not have enough space to replace bad clusters"

    - by Zian Choy
    When I tried to do a Windows Vista Complete PC Backup, I received an error message that blathered about bad sectors. Then, when I ran chkdsk /r on the destination drive, this is what I got: C:\Windows\system32>chkdsk /R E: The type of the file system is NTFS. Volume label is Desktop Backup. CHKDSK is verifying files (stage 1 of 5)... 822016 file records processed. File verification completed. 1 large file records processed. 0 bad file records processed. 0 EA records processed. 0 reparse records processed. CHKDSK is verifying indexes (stage 2 of 5)... 848938 index entries processed. Index verification completed. 0 unindexed files processed. CHKDSK is verifying security descriptors (stage 3 of 5)... 822016 security descriptors processed. Security descriptor verification completed. 13461 data files processed. CHKDSK is verifying file data (stage 4 of 5)... The disk does not have enough space to replace bad clusters detected in file 239649 of name . The disk does not have enough space to replace bad clusters detected in file 239650 of name . The disk does not have enough space to replace bad clusters detected in file 239651 of name . An unspecified error occurred.f 822000 files processed) Yet, when I ran the SeaTools short & long generic tests on the Seagate disk, I didn't receive any errors. I know that I could reformat the disk and try running chkdsk /r again but I'd prefer to avoid waiting 4 hours in the hope that the problem was magically fixed. On the other hand, if I RmA the drive to Seagate, I have no SeaTools error number to use and they may claim that the drive is just fine. What should I try to do next? Side frustration: There is plenty of free hard drive space. The E: partition has 182 GB free.

    Read the article

  • How to improve this bash shell script for turning hardlinks into symlinks?

    - by MountainX
    This shell script is mostly the work of other people. It has gone through several iterations, and I have tweaked it slightly while also trying to fully understand how it works. I think I understand it now, but I don't have confidence to significantly alter it on my own and risk losing data when I run the altered version. So I would appreciate some expert guidance on how to improve this script. The changes I am seeking are: make it even more robust to any strange file names, if possible. It currently handles spaces in file names, but not newlines. I can live with that (because I try to find any file names with newlines and get rid of them). make it more intelligent about which file gets retained as the actual inode content and which file(s) become sym links. I would like to be able to choose to retain the file that is either a) the shortest path, b) the longest path or c) has the filename with the most alpha characters (which will probably be the most descriptive name). allow it to read the directories to process either from parameters passed in or from a file. optionally, write a long of all changes and/or all files not processed. Of all of these, #2 is the most important for me right now. I need to process some files with it and I need to improve the way it chooses which files to turn into symlinks. (I tried using things like the find option -depth without success.) Here's the current script: #!/bin/bash # clean up known problematic files first. ## find /home -type f -wholename '*Icon* ## *' -exec rm '{}' \; # Configure script environment # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ set -o nounset dir='/SOME/PATH/HERE/' # For each path which has multiple links # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # (except ones containing newline) last_inode= while IFS= read -r path_info do #echo "DEBUG: path_info: '$path_info'" inode=${path_info%%:*} path=${path_info#*:} if [[ $last_inode != $inode ]]; then last_inode=$inode path_to_keep=$path else printf "ln -s\t'$path_to_keep'\t'$path'\n" rm "$path" ln -s "$path_to_keep" "$path" fi done < <( find "$dir" -type f -links +1 ! -wholename '* *' -printf '%i:%p\n' | sort --field-separator=: ) # Warn about any excluded files # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ buf=$( find "$dir" -type f -links +1 -path '* *' ) if [[ $buf != '' ]]; then echo 'Some files not processed because their paths contained newline(s):'$'\n'"$buf" fi exit 0
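
    On point 2 (choosing which path keeps the real file): one approach is to order each inode group by path length before deciding, so the first path seen per inode becomes the keeper. The sketch below reuses the original find/printf output format and the same exclusion of paths containing spaces; switching the sort to -k1,1nr keeps the longest path instead, and scoring by count of alphabetic characters would only change the awk prefix. Test on a copy of the data first.

        #!/bin/bash
        # keep the shortest path per inode, re-link the rest (sketch, not production)
        dir='/SOME/PATH/HERE/'
        declare -A keep   # inode -> path chosen as the real file

        find "$dir" -type f -links +1 ! -wholename '* *' -printf '%i:%p\n' |
        awk -F: '{ p = $0; sub(/^[^:]*:/, "", p); print length(p), $0 }' |
        sort -k1,1n |           # shortest paths first (use -k1,1nr for longest)
        cut -d' ' -f2- |
        while IFS= read -r path_info; do
            inode=${path_info%%:*}
            path=${path_info#*:}
            if [[ -n ${keep[$inode]+x} ]]; then
                printf "ln -s\t'%s'\t'%s'\n" "${keep[$inode]}" "$path"
                rm "$path" && ln -s "${keep[$inode]}" "$path"
            else
                keep[$inode]=$path    # first path seen for this inode is the shortest
            fi
        done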

    Read the article

  • Need to increase nginx throughput to an upstream unix socket -- linux kernel tuning?

    - by Ben Lee
    I am running an nginx server that acts as a proxy to an upstream unix socket, like this: upstream app_server { server unix:/tmp/app.sock fail_timeout=0; } server { listen ###.###.###.###; server_name whatever.server; root /web/root; try_files $uri @app; location @app { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://app_server; } } Some app server processes, in turn, pull requests off /tmp/app.sock as they become available. The particular app server in use here is Unicorn, but I don't think that's relevant to this question. The issue is, it just seems that past a certain amount of load, nginx can't get requests through the socket at a fast enough rate. It doesn't matter how many app server processes I set up, it doesn't even matter what the app is (tried it with a dummy app with just a single endpoint that returned an empty page with status 404). The bottleneck seems to be the socket, not the app. I'm getting a flood of these messages in the nginx error log: connect() to unix:/tmp/app.sock failed (11: Resource temporarily unavailable) while connecting to upstream Many requests result in status code 502, and those that don't take a long time to complete. The nginx write queue stat hovers around 1000. Anyway, I feel like I'm missing something obvious here, because this particular configuration of nginx and app server is pretty common, especially with Unicorn (it's the recommended method in fact). Are there any linux kernel options that needs to be set, or something in nginx? Any ideas about how to increase the throughput to the upstream socket? Something that I'm clearly doing wrong? Additional information on the environment: $ uname -a Linux app1 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux $ ruby -v ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux] $ unicorn -v unicorn v4.3.1 $ nginx -V nginx version: nginx/1.2.1 built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) TLS SNI support enabled Current kernel tweaks: net.core.rmem_default = 65536 net.core.wmem_default = 65536 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_mem = 16777216 16777216 16777216 net.ipv4.tcp_window_scaling = 1 net.ipv4.route.flush = 1 net.ipv4.tcp_no_metrics_save = 1 net.ipv4.tcp_moderate_rcvbuf = 1 net.core.somaxconn = 8192 net.netfilter.nf_conntrack_max = 131072
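
    A hedged suggestion for this specific symptom: EAGAIN ("Resource temporarily unavailable") on connect() to a UNIX socket is typically the listen backlog overflowing, and Unicorn's default backlog is 1024 -- lower than the net.core.somaxconn of 8192 already set here -- so the application side is the first limit hit. The Unicorn listen option below is the usual knob; the values are illustrative.

        # unicorn.rb
        listen "/tmp/app.sock", :backlog => 2048

        # optional second socket to spread accept load across (also illustrative);
        # the nginx upstream block would then list both server unix: entries
        # listen "/tmp/app2.sock", :backlog => 2048

    If raising the backlog only delays the errors, the workers simply cannot drain requests as fast as nginx hands them over, and the answer shifts to application capacity rather than socket tuning.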

    Read the article

  • Why is vCenter 5.1u1 exiting hosts from maintenance mode?

    - by Shane Madden
    This vCenter server was just upgraded to 5.1 update 1. I'm going through hosts and bringing firmware up to date, then upgrading them from various versions of 5.0 to 5.1u1. vCenter 5.1u1 seems to have an interesting new behavior: it's removing hosts from maintenance mode when they reconnect after being disconnected -- but very inconsistently, I've seen it maybe 4 or 5 times on ~25-30 host reboots. I've only seen it happen on 5.0 hosts that have not yet been upgraded to 5.1. In the image, I placed the host in maint mode and rebooted it into the HP SPP DVD's automatic update mode. After its usual ~40 minute update process, the host came back online.. and 7 seconds before even logging that the host had reconnected, vCenter had sent the host a task to exit maintenance mode. In my understanding, the only time vCenter should drop a host out of maintenance mode is when vCenter put it into maintenance mode itself (such as a VUM upgrade task). Why would this vCenter be unilaterally exiting a host from user-initiated maintenance mode? Edit, additional info: I ran the firmware upgrades on 5 more hosts, all at the same time. Two of them exited maint mode after reconnecting, three did not. The common factor of those exiting maint mode seems to be how long they were offline; the two that took a few tries to boot to the virtual media are the two that got knocked out of maint mode. esx31 (image above): 45 minutes unresponsive esx19 (exited maint): 87 minutes unresponsive esx24 (stayed in maint): 32 minutes unresponsive esx29 (stayed in maint): 39 minutes unresponsive esx32 (stayed in maint): 30 minutes unresponsive esx34 (exited maint): 70 minutes unresponsive Edit: The disconnect time idea seems to have been a red herring, as it's not happening consistently. Additionally, in the vpxd.log the exit maint mode task initiation seems to always immediately follow this vim.EnvironmentBrowser.queryProvisioningPolicy SOAP call. Here's the lines, slightly trimmed for clarity: 15:27:49.535 [info 'vpxdvpxdVmomi'] [ClientAdapterBase::InvokeOnSoap] Invoke done (esx31, vim.EnvironmentBrowser.queryProvisioningPolicy) 15:27:49.560 [info 'commonvpxLro'] [VpxLRO] -- BEGIN task -- esx31 -- HostSystem.exitMaintenanceMode -- Note that on the nodes that don't get the exit task, the vim.EnvironmentBrowser.queryProvisioningPolicy event still occurs. I'm not seeing any other differences in events before or after this in the reconnect process, aside from the extra events caused by exiting maintenance mode. Given the log's mention of provisioning policies, looking for autodeploy-related maintenance mode issues turns up complaints about similar behavior (though I'm not using autodeploy at all).

    Read the article

  • kill -9 doesn't work

    - by Daniel
    I have a server with 3 Oracle instances on it, and the file system is NFS on a netapp. After shutting down the databases, one process for each database wouldn't quit for a long time, and kill -9 didn't work. I tried to truss and pfiles the processes, but both commands threw errors, and iostat showed lots of IO to the netapp server.

    Someone said the process was busy writing data to the remote netapp server, and that it wouldn't quit until the write completed, so all that could be done was to wait until the IO finished. After waiting a longer time (about 1.5 hours), the processes did exit.

    So my question is: how can a process ignore the kill signal? As far as I know, kill -9 should stop it immediately. Have you encountered a situation where kill -9 doesn't kill the process right away?

        TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
        oracle 1469 25053 0 22:36:53 pts/1 0:00 grep dbw0
        oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7
        oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2
        oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1
        TEST7-stdby-phxdbnfs11$ kill -9 1051
        TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
        oracle 1493 25053 0 22:37:07 pts/1 0:00 grep dbw0
        oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7
        oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2
        oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1
        TEST7-stdby-phxdbnfs11$ kill -9 471
        TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
        oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7
        oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2
        oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1
        oracle 1495 25053 0 22:37:22 pts/1 0:00 grep dbw0
        TEST7-stdby-phxdbnfs11$ ps -ef|grep smon
        oracle 1524 25053 0 22:38:02 pts/1 0:00 grep smon
        TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
        oracle 1526 25053 0 22:38:06 pts/1 0:00 grep dbw0
        oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7
        oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2
        oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1
        TEST7-stdby-phxdbnfs11$ kill -9 1051 471 26795
        TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
        oracle 1528 25053 0 22:38:19 pts/1 0:00 grep dbw0
        oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7
        oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2
        oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1
        TEST7-stdby-phxdbnfs11$ truss -p 26795
        truss: unanticipated system error: 26795
        TEST7-stdby-phxdbnfs11$ pfiles 26795
        pfiles: unanticipated system error: 26795
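    For what it's worth, the usual explanation is that a process blocked in an uninterruptible kernel call (here, NFS writes to the netapp) cannot take delivery of any signal, including SIGKILL, until the call returns; the kill is queued, not ignored. A hedged sketch of how one might confirm that next time, using standard Solaris tools (availability assumed from the truss/pfiles output above; adjust for Linux):

        # Look at the process state column (S); a process stuck in the kernel
        # typically is not runnable and will not act on signals yet.
        ps -efl | grep ora_dbw0

        # Client-side NFS statistics; climbing retransmissions/timeouts point at
        # slow or stalled I/O to the netapp rather than a hung process.
        nfsstat -c

        # pstack may hang or error out (just as truss and pfiles did) while the
        # process is stuck in the kernel -- that in itself is a strong hint.
        pstack 26795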

    Read the article

  • How to set 2 conditions / criteria for VLOOKUP / LOOKUP / etc. in OpenOffice Calc (or Excel)

    - by MestreLion
    I have this spreadsheet that started as a silly aid for a game (Mafia Wars 2), but grew into a tricky spreadsheet question. In the game your character has 9 "slots" for weapons and armors, 1 for each "type": Light Weapon, Heavy Weapon, Body Armor, Head Armor, etc. So I made a list of all weapons and armors available in the game, 1 item per row. Example:

        SHOP         ITEM TYPE       ITEM NAME        ATK  DEF  PRICE   EQUIPPED?
        Marketplace  Weapon Light    Konrad Knife      16    5   5.500
        Marketplace  Weapon Light    Ice Queen         19    6   8.200
        Marketplace  Armor Body Up   Layered Polym      0   31   8.600
        Marketplace  Armor Body Up   Full Shield        7   42  17.650
        Marketplace  Weapon Heavy    Konrad Bullpup    53   25  24.500
        Marketplace  Weapon Heavy    Full Moon Blow    73   12  24.500   x
        Marketplace  Armor Body Low  Knee Pads         17   26  14.200   x
        Marketplace  Armor Body Low  Army Boots        15   55  24.500
        Bone Yard    Weapon Light    Bone Launcher     41    2   9.400   x
        Neon Strip   Vehicle Ground  Supercharged      41   34  24.500
        Dead End     Weapon Heavy    Sharp Sickle      21    5  24.500
        Dead End     Armor Body Low  Unholy Boots       5   36  15.000
        Dead End     Armor Head      Hockey Mask        5   18  15.900   x

    The last column indicates the items I have already bought and equipped (marked with "x"). What I need is a formula that, for each "slot" (item type), returns info about the item of that kind that I am using. That would be:

        ITEM TYPE       SHOP NAME    ITEM NAME       ATK  DEF  PRICE
        Weapon Light    Bone Yard    Bone Launcher    41    2   9.400
        Weapon Heavy    Marketplace  Full Moon Blow   73   12  24.500
        Weapon Special  --           --               --   --  --
        Armor Body Up   --           --               --   --  --
        Armor Body Low  Marketplace  Knee Pads        17   26  14.200
        Armor Head      Dead End     Hockey Mask       5   18  15.900
        Vehicle Ground  --           --               --   --  --
        Vehicle Water   --           --               --   --  --
        Vehicle Air     --           --               --   --  --

    The item types are fixed, so they can be hard coded, one row per item type. So, for the 1st result line, it should return data from the row where the 2nd column is "Weapon Light" and the last column is "x". Basically I need a LOOKUP (or VLOOKUP, or anything else) that uses 2 criteria to find a given row: the item type and the "x" marker. Question is: HOW?

    I am using OpenOffice Calc 3.2.1, but since it shares so many functions with MS Excel, answers for Excel are also fine (as long as they only use regular formulas: no VBScript, Macros, VBA, etc.). Last but not least, suggestions / solutions for rearranging the data to make this problem easier to solve are also welcome. Thanks!
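    For what it's worth, the classic way to do a two-criteria lookup in both Calc and Excel is an array formula that multiplies the two boolean tests together and hands the result to MATCH. A sketch under assumed cell references: the shop table sits in A2:G100 (SHOP in A, ITEM TYPE in B, ITEM NAME in C, ATK/DEF/PRICE in D:F, the "x" marker in G), the item type being looked up is in K2, and each formula is confirmed with Ctrl+Shift+Enter (semicolon argument separators as in a default Calc locale; use commas in Excel):

        Shop name:   =INDEX($A$2:$A$100; MATCH(1; ($B$2:$B$100=$K2)*($G$2:$G$100="x"); 0))
        Item name:   =INDEX($C$2:$C$100; MATCH(1; ($B$2:$B$100=$K2)*($G$2:$G$100="x"); 0))

    The same formula with the INDEX range switched to $D, $E or $F returns ATK, DEF and PRICE; wrapping it in IF(ISERROR(...); "--"; ...) gives the "--" placeholder for slots with nothing equipped yet.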

    Read the article

  • OS X Snow Leopard 10.6 Refuses to Load Websites the first time intermittently

    - by Brandon
    Many times when I am browsing the web, Snow Leopard will sit and load a site for 20 seconds or more, until it times out and says it cannot be displayed. If I refresh, it loads RIGHT away, every time. The issue is intermittent but happens anywhere from once every couple of days to a few times a day. So the long and short of it is this:

    - Aluminum MacBook (Non-Pro) 2.4GHz Core2Duo, 4GB DDR3
    - I am using 10.6.6 but I have had this issue since 10.6.0
    - It happens in Firefox, Chrome, and Safari
    - I have flushed my DNS (using the command 'blablabla flush')
    - I am using custom DNS servers, which I hoped would fix it, but it had no effect*
    - I am running Apache currently but haven't been for most of the time
    - I've reformatted multiple times, always experiencing the issue
    - I am on Cox cable internet, with a Motorola Surfboard & a Belkin F6D4230-4 v1 (Pre?) N wireless router. I've put the router in G only & N only & G+N to no effect
    - It seems to be domain dependent, as I can sometimes load the Google cache right away, and sometimes other sites will load but Google will refuse
    - My Powerbook G4 with Leopard, other Windows XP laptops, & my wired Win7 desktop do not suffer from the issue

    *I recently started using these to escape the awful Cox redirect page on timeouts.

    I'm almost positive the issue has happened on other networks, but I can't recall a specific instance (I have a terrible memory). The problem is intermittent and fixable enough (I just have to wait until it times out and hit refresh one time) but incredibly annoying, since I'm constantly reading documentation from a large variety of sites.

    EDIT: To clarify, this happens with ALL sites, not only specific sites. I haven't been able to detect any pattern to the failures; one day Google.com will refuse to load while reddit.com will load, and the next day vice versa. Keep in mind that waiting for the timeout and hitting refresh loads the page right away, every time. If I don't wait for the timeout, opening more links, hitting refresh, and clicking the link a billion times have no effect. It seems to be domain neutral, affecting sites seemingly at random. It doesn't seem to have anything to do with connection inactivity either, because I will be SSHed into different servers, uploading files, browsing, downloading, etc., and it will just quit loading Jquery.com (for example) until I sit and wait for a timeout. /EDIT

    This is my last resort. Please, someone, tell me what is happening. Thank you.
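    Not a diagnosis, but since the "times out once, instant on refresh" pattern often comes down to slow name resolution, a quick Terminal check is cheap. A sketch, assuming dig and scutil are in their usual places on 10.6 and substituting whichever site just failed:

        # Which resolvers is OS X actually handing to applications?
        scutil --dns | grep nameserver

        # Compare a cold lookup against an immediate repeat; the second should be cached.
        dig jquery.com | grep "Query time"
        dig jquery.com | grep "Query time"

        # Slow or dropped AAAA (IPv6) lookups are a common cause of exactly this symptom.
        dig AAAA jquery.com | grep "Query time"

    If the first lookup of a failing domain takes many seconds while the repeat is instant, the browsers are off the hook and the resolver path (router, ISP DNS, or the custom servers) is the place to dig further.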

    Read the article

  • So I want to separate my Program Files from the hard disk with the other system files. What is the b

    - by grg-n-sox
    So I am running Windows 7 as my only OS. I have two hard drives in my computer: the first is a 74GB Western Digital 10K RPM Raptor, the second a 1TB Seagate Barracuda (I can't remember if it was a 7200.12 or some other decimal after the 7200). The OS is installed on the Raptor and I am just using the Barracuda for storage.

    With this setup, in case you couldn't guess already, the Raptor fills up quickly and I am constantly having to maintain file locations. Although it is nice to have the quicker boot time and program loading, the time spent maintaining the drive wastes more time overall. So I am looking for a way to keep the Raptor clear while still keeping up system loading speeds. A performance hit on games and such is easily acceptable, and as long as I can guarantee 5GB of space on the Raptor, I can always temporarily move a disc image there.

    Since I have games like Borderlands and Mass Effect installed, as well as large files such as Linux distro DVD disc images in My Documents, I figure I should be moving my personal files and Program Files directories to the Barracuda. I currently have folders on the Barracuda for this, but that means routinely copying files over, and I can't really do anything with the Program Files folder that already exists. The best I can do is remember to point each program's installer at the alternative install directory, which I can't ever seem to get to work right with Steam.

    With that in mind, is there a way, not too drastic, to let me change some folders and system settings once and have everything work fine afterwards for my setup? I have considered just reinstalling Windows 7 to the Barracuda, but that would defeat the purpose of the Raptor except for running disc images off of it.

    I have also heard a bit about using symlinks to fix this, but I have also heard that symlinks in Windows are not quite the same and not as well supported. An example a friend mentioned: if you have a symlink on a small hard drive pointing to a large hard drive, and the contents the symlink points to are larger than the small drive's capacity, then Windows will think the smaller drive is full. So is there a fix/workaround that will let me use symlinks across hard drives without these issues, or is there a better solution I am not being told about, not mentioning, or not thinking of?
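    For reference, one low-drama way to approach the symlink idea is with NTFS directory junctions, which Windows 7 creates natively with mklink and which can point from one local volume to another. A hedged sketch with hypothetical paths (run from an elevated Command Prompt, and move the data before creating the junction):

        rem Move the folder's contents to the big drive, then leave a junction behind
        rem so programs still see the old path on C:.
        robocopy "C:\Program Files (x86)\Borderlands" "D:\Games\Borderlands" /E /MOVE
        mklink /J "C:\Program Files (x86)\Borderlands" "D:\Games\Borderlands"

    Installed programs generally follow the junction transparently, though some installers or tools that check free space against the C: path may still complain, which is roughly the caveat the friend described.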

    Read the article
