Search Results

Search found 2077 results on 84 pages for 'period'.


  • Windows 7 won't boot from any boot loader except for 'Windows Boot Manager' after partition resize

    - by user2468327
    I have a triple-boot system on a single SSD: OSX, Windows 7, and Ubuntu. I use Chimera (basically another version of Chameleon) as my boot loader. Usually I can boot all 3 without any issue, but after using GParted to make my Ubuntu partition 2 GB larger, Windows 7 throws me an error when trying to boot to it from either Chimera or GRUB. The error is consistently: 0xc000000e "can't find \Boot\BCD" (slightly paraphrased). However, I can still get into Windows by selecting "Windows Boot Manager" from the boot options in my BIOS. I've already tried several known fixes for similar issues, including bootrec /rebuildbcd (and variations), and bootrec /fixmbr + bootrec /fixboot. I've also tried chkdsk. At best this has made it so Windows 7 boots on its own by default (making me have to reinstall Chimera and change my boot settings back in the BIOS). At worst it made it so Windows won't boot, period. Now I'm back full circle where I started. A detail that might be useful: bootrec /rebuildbcd reports that the number of identified Windows installations is 0. How do I get it back so I can boot Win7 through another boot loader and don't have to select it manually in the BIOS? Preferably without a reinstall.

    Read the article

  • Memcached session manager in Azure: Connection gets forcibly closed

    - by Edgar Pérez
    I am using Memcached Session Manager to handle Tomcat sessions in non-sticky mode. My deployment in Azure consists of a Worker Role with two instances which connect to an Azure VM running my Memcached server. Everything works pretty well, my session is persisted and retrieved by either of the two instances transparently. The problem arises when the session is idle for about 4 minutes; everything points to the Azure load balancer closing the spymemcached connection to the VM after some period of inactivity. My MSM configuration is this: <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager" memcachedNodes="n1:my-azure-vm.cloudapp.net:11211" sticky="false" sessionBackupAsync="false" sessionBackupTimeout="10000" lockingMode="uriPattern:/path1|/path2" requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js|ttf|eot|svg|woff)$" transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory" customConverter="de.javakaffee.web.msm.serializer.kryo.HibernateCollectionsSerializerFactory"/> The stack trace printed by the spymemcached client is this: INFO net.spy.memcached.MemcachedConnection: Reconnecting due to exception on {QA sa=/10.194.132.206:13000, #Rops=1, #Wops=0, #iq=0, topRop=net.spy.memcached.protocol.binary.StoreOperationImpl@1d95da8, topWop=null, toWrite=0, interested=1} java.io.IOException: An existing connection was forcibly closed by the remote host at sun.nio.ch.SocketDispatcher.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(Unknown Source) at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source) at sun.nio.ch.IOUtil.read(Unknown Source) at sun.nio.ch.SocketChannelImpl.read(Unknown Source) at net.spy.memcached.MemcachedConnection.handleReads (MemcachedConnection.java:303) at net.spy.memcached.MemcachedConnection.handleIO (MemcachedConnection.java:264) at net.spy.memcached.MemcachedConnection.handleIO (MemcachedConnection.java:184) at net.spy.memcached.MemcachedClient.run(MemcachedClient.java:1298) Given this idle time limitation in Azure, is there any other way to make MSM work in the Azure cloud?
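    A sketch of one commonly suggested workaround, offered with caveats: it assumes the memcached VM runs Linux, and OS-level keepalives only help if the sockets involved actually enable SO_KEEPALIVE (spymemcached may not do this by default). The idea is to lower the kernel TCP keepalive timers so probes go out well before the load balancer's roughly 4-minute idle cutoff:
      # send the first keepalive probe after 120s idle, well under the ~240s cutoff
      sudo sysctl -w net.ipv4.tcp_keepalive_time=120
      # then probe every 30s, giving up after 8 missed probes
      sudo sysctl -w net.ipv4.tcp_keepalive_intvl=30
      sudo sysctl -w net.ipv4.tcp_keepalive_probes=8
      # persist the settings across reboots
      printf 'net.ipv4.tcp_keepalive_time = 120\n' | sudo tee -a /etc/sysctl.conf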

    Read the article

  • ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?

    - by ewwhite
    I'm using Nexentastor on a secondary storage server running on an HP ProLiant DL180 G6 with 12 Midline (7200 RPM) SAS drives. The system has an E5620 CPU and 8GB RAM. There is no ZIL or L2ARC device. Last week, I created a 750GB sparse zvol with dedup and compression enabled to share via iSCSI to a VMware ESX host. I then created a Windows 2008 file server image and copied ~300GB of user data to the VM. Once happy with the system, I moved the virtual machine to an NFS store on the same pool. Once up and running with my VMs on the NFS datastore, I decided to remove the original 750GB zvol. Doing so stalled the system. Access to the Nexenta web interface and NMC halted. I was eventually able to get to a raw shell. Most OS operations were fine, but the system was hanging on the zfs destroy -r vol1/filesystem command. Ugly. I found the following two OpenSolaris bugzilla entries and now understand that the machine will be bricked for an unknown period of time. It's been 14 hours, so I need a plan to be able to regain access to the server. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924390 and http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=593704962bcbe0743d82aa339988?bug_id=6924824 In the future, I'll probably take the advice given in one of the bugzilla workarounds: Workaround Do not use dedupe, and do not attempt to destroy zvols that had dedupe enabled. Update: I had to force the system to power off. Upon reboot, the system stalls at Importing zfs filesystems. It's been that way for 2 hours now.
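    If a shell is reachable during the stall, a rough way to tell whether the destroy/import is still grinding through the dedup table rather than truly hung is to watch for ongoing pool I/O. A sketch, assuming the standard Solaris/Nexenta tools are available:
      # pool-wide I/O every 10 seconds; steady read activity suggests the DDT walk is progressing
      zpool iostat -v 10
      # per-device statistics from the OS side
      iostat -xn 10
      # overall pool state (errors, ongoing operations)
      zpool status -v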

    Read the article

  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that: Allows my users to share large files with: The general public securely to 1 or more people (notification via email, optionally with a token that gives them x period of time to download) Allows anyone in the general public to share files with my users. Perhaps by invitation. Has to be user friendly enough to allow my users to use this without having to bug me as the admin. It needs to be a system that we can install on our own server (we don't want shared data sitting on anyone else's server) A web based solution. Using some kind of secure comms channel would be good too, e.g. SSH. Files to share could be over 1 GB. I found the question below. WebDAV does not sound user friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen EDIT Sorry Tom and Jeff, but Glen specifically says that he's looking for a 'product', so given that I specialise in this field I thought that my expertise in this area may have been of use to him. I don't see how him writing services is going to be easy for him to maintain going forward (large IT admin overhead) or simple for his users and the general public to work with.

    Read the article

  • Use icacls to make a directory read-only on Windows 7

    - by Dave G
    I'm attempting to test some filesystem exceptions in a Java based application. I need to find a way to create a directory that is located under %TMP% that is set to read-only. Essentially on UNIX/POSIX platforms, I can do a chmod -w and get this effect. Under Windows 7/NTFS this is of course a different story. I'm running into multiple issues on this. My user has "administrative" rights (although this may not always be the case) and as such the directory is created with an ACL including: NT AUTHORITY\SYSTEM BUILTIN\Administrators <my current user> Is there a way using icacls to essentially get this directory into a state where it is read-only PERIOD, do my test, then restore the ACL for removal? EDIT With the information provided by @Ansgar Wiechers I was able to come up with a solution. I used the following: icacls dirname /deny %username%:(WD) In the page located here I found this in the remarks section: icacls preserves the canonical order of ACE entries as: * Explicit denials * Explicit grants * Inherited denials * Inherited grants By performing the above icacls command, I was able to set the current user's ability to write or append files (WD) to the directory to deny. Then it was a question of returning it to a state post test: icacls dirname /reset /t /c Done

    Read the article

  • ssh-agent on Ubuntu rapidly restarts

    - by Santa Claus
    I am attempting to use ssh-agent on Ubuntu 13.10 so that I will not have to enter my passphrase to unlock a key every time I want to use ssh or git. As you can see below, ssh-agent appears to be restarting for some reason. These commands were executed within a period of less than 5 seconds: andrew@zaphod:~$ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-pqm5J0s70NxG/agent.2820; export SSH_AUTH_SOCK; SSH_AGENT_PID=2821; export SSH_AGENT_PID; echo Agent pid 2821; andrew@zaphod:~$ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-VpkOH2WKjT1M/agent.2822; export SSH_AUTH_SOCK; SSH_AGENT_PID=2823; export SSH_AGENT_PID; echo Agent pid 2823; andrew@zaphod:~$ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-EQ6X9JHNiBOO/agent.2824; export SSH_AUTH_SOCK; SSH_AGENT_PID=2825; export SSH_AGENT_PID; echo Agent pid 2825; andrew@zaphod:~$ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-8Iij8kFkaapz/agent.2826; export SSH_AUTH_SOCK; SSH_AGENT_PID=2827; export SSH_AGENT_PID; echo Agent pid 2827; andrew@zaphod:~$ My guess is that ssh-agent is crashing, but how would I know? What log file would it log to?
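    For what it's worth, the output above is consistent with each bare ssh-agent invocation starting a brand-new agent and printing its environment variables, rather than anything crashing; the usual pattern is to start one agent per session, load its variables into the shell, and add the key once. A minimal sketch (the key path is only the common default and may differ):
      # start a single agent and export SSH_AUTH_SOCK / SSH_AGENT_PID into this shell
      eval "$(ssh-agent -s)"
      # cache the passphrase with that agent
      ssh-add ~/.ssh/id_rsa
      # verify the key is loaded and the same agent keeps answering
      ssh-add -l
      echo "$SSH_AUTH_SOCK $SSH_AGENT_PID"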

    Read the article

  • Finding all IP ranges belonging to a specific ISP

    - by Jim Jim
    I'm having an issue with a certain individual who keeps scraping my site in an aggressive manner; wasting bandwidth and CPU resources. I've already implemented a system which tails my web server access logs, adds each new IP to a database, keeps track of the number of requests made from that IP, and then, if the same IP goes over a certain threshold of requests within a certain time period, it's blocked via iptables. It may sound elaborate, but as far as I know, there exists no pre-made solution designed to limit a certain IP to a certain amount of bandwidth/requests. This works fine for most crawlers, but an extremely persistent individual is getting a new IP from his/her ISP pool each time they're blocked. I would like to block the ISP entirely, but don't know how to go about it. Doing a whois on a few sample IPs, I can see that they all share the same "netname", "mnt-by", and "origin/AS". Is there a way I can query the ARIN/RIPE database for all subnets using the same mnt-by/AS/netname? If not, how else could I go about getting every IP belonging to this ISP? Thanks.
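    One way to approximate "every prefix belonging to this ISP" is an inverse query against a routing registry by origin AS or maintainer. A sketch, where AS64496 and EXAMPLE-MNT are placeholders for the values seen in the whois output, and the -- pass-through syntax assumes the common Linux whois client:
      # all route objects registered with this origin AS in the RADb mirror
      whois -h whois.radb.net -- '-i origin AS64496'
      # RIPE's database also supports inverse lookups on the maintainer object
      whois -h whois.ripe.net -- '-i mnt-by EXAMPLE-MNT'
      # extract just the prefixes, ready to feed into iptables/ipset
      whois -h whois.radb.net -- '-i origin AS64496' | awk '/^route:/ {print $2}'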

    Read the article

  • Files Corrupted on System Restore

    - by Yar
    I restored OSX 10.6.2 today (was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. I am seeing some files as 0 bytes, and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Other programs, by contrast, take the files at face value and see no permission problems (such as zip), but they see the files as zero bytes, which would be game-over for recovery. cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, which do not change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, but renaming them -- which works -- does not change anything). What else can I do to take ownership of these files? Edit: This question comes from Stack Overflow because I originally thought it was a Git problem. It's definitely not (just) Git. Anyway, this is to help put some of the comments in context.
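    The "could not copy extended attributes ... Operation not permitted" error, combined with chmod/chflags having no effect, often points at ACL entries or attribute copying rather than plain Unix permissions. A sketch of things worth inspecting on OS X (paths are the ones from the error; adjust as needed):
      # '+' marks ACLs and '@' marks extended attributes in the listing
      ls -le@ .git/objects/fe/
      # list the extended attributes on one affected object
      xattr -l .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca
      # strip all ACL entries recursively
      sudo chmod -RN .git
      # retry the copy while skipping extended attributes / resource forks
      sudo cp -X -R .git /eraseme/blah/.git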

    Read the article

  • Why do AWS spot-instance prices spike above the "on demand" pricing?

    - by Laykes
    Amazon Pricing on Spot Instance Inconsistencies This is something which will be best explained through screenshots of a historical chart of instance pricings. If you look at a lot of the instance prices for spot instances, you will notice regular patterns of spikes. See here: As you can see, the price for this c1.medium (compute medium) instance regularly spikes above the on demand price. A c1.medium instance (on demand) would only cost $0.186 per hour. But for a period of a few weeks, in zone B, the price would regularly spike to $1.20. This is some 6 times the actual on demand price. It's also not isolated. If you look at zone B again for small instances, there is a similar spike that occurs frequently and goes to 4x the on demand pricing. Does anyone know why this happens? Here are a few suggestions: Someone entered $1.2 instead of $0.12 (I would discount this since it happened 20 times over the space of 3 weeks). Amazon regularly artificially inflates their prices by bidding on their own instances to get the most bang for their buck. (I would discount this since it would be ridiculous and bad business) Some company launched 1000 servers at once, and wants to make sure that they all launch. (I would discount this since they would presumably launch them at a price which would be below the minimum on demand price. Why would you pay above on demand for a single server?). It's a bug in their reporting?

    Read the article

  • iTunes iPhone sync stuck on "Waiting for items to copy"

    - by lindes
    superuser crowd, I've found a number of threads about this, but have yet to find a satisfactory answer. Perhaps I'll have better luck on this site?? I hope so... I'm having a problem with iTunes where syncing my iPhone ends up seeming to basically finish, but it says "Waiting for items to copy" after everything else, and just stays there for... well, a very long time, if I let it. Oh, and I'm not sure this is always the case, but at the moment, it's doing this while "Syncing Genius Data to [my iPhone] (Step 8 of 8)". If I click the little x in iTunes, it then gets stuck on "Canceling sync", for an equally indefinite sort of period. If I simply unplug the phone, everything seems to be synced and happy, but the next time I sync, it happens again. I presume this is a bug of some sort on Apple's part, but it seems like people have found workarounds... I'm just having trouble tracking one down that (a) is well described enough that I can actually follow it, (b) has enough detail that I can do it without losing data (i.e. tells me pathnames that I might want to copy first, or the like, before telling me to remove something) (note: see also point (a) -- I can't remove it if it's telling me to remove something that I don't know where it is!), and (c) otherwise seems sane. I'm hoping that here perhaps I'll get better luck -- with either a workaround, and/or debugging tips for figuring out how to find myself a workaround. Note: I'm busy with some other things at the moment, but can try to add some additional information later, if necessary.

    Read the article

  • Firefox isn't using my download manager (flash videos)

    - by John22
    I installed "Free Download Manager." I see the plugin in Tools-Add-ons (it doesn't have any options). I use several different flash video downloaders, because I haven't found one that works period on any site. When I save the video with two I tried, they are being downloaded by Firefox's default download manager (which means simultaneously - which is why I installed the download manager - I need them to download one at a time - in a prioritized queue.) [I used to use Flashgot (long ago), and it worked with some download manager I had installed - but over time it failed to see most videos. I installed Flashgot again, and it still fails to see anything but images and video ads.] Currently, I have to manually start Free Download Manager (from outside of Firefox), start the download in Firefox, stop it, copy the link location from Firefox's download menu, and then add it manually in Free Download Manager. Yuck. Do I need a different download manager (that takes over - recommendations?), or did I somehow install this one wrong or miss a setting somewhere in Firefox? Thanks for any help.

    Read the article

  • SQL Server 2008 Remote Access

    - by Timothy Strimple
    I'm having problems connecting to my SQL Server 2008 database from my computer. I have enabled remote connections as described in this answer (http://serverfault.com/questions/7798/how-to-enable-remote-connections-for-sql-server-2008), and I have added the ports listed on the Microsoft support page to our Cisco ASA firewall, and I'm still unable to connect. The error I'm getting from SQL Management Studio is: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.) (Microsoft SQL Server, Error: 10060) Once again, I have double- and triple-checked that remote connections are enabled under the database properties and that TCP is enabled on the configuration page. I've added TCP ports 135, 1433, 1434, 2382, 2383, and 4022 as well as UDP 1434 to the firewall. I've also checked to make sure that 1433 is the static port that is set in the TCP section of the database server configuration. The ports should be configured correctly in the firewall since HTTP/HTTPS and RDP are all working and the SQL Server ports are set up the same way. What am I missing here? Any help you could offer would be greatly appreciated. Edit: I can connect to the server via TCP on the internal network. The servers are colocated in a datacenter and I can connect from my production box to my development box and vice versa. To me that indicates a firewall issue, but I've no idea what else to open. I've even tried allowing all TCP ports to that server without success.
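    To separate an ASA/NAT problem from a SQL Server configuration problem, a quick reachability check from a host outside the datacenter can confirm whether TCP 1433 is exposed at all. A sketch, with 203.0.113.10 standing in for the server's public address:
      # does anything answer on the SQL Server port from the outside?
      nc -vz -w 5 203.0.113.10 1433
      # compare against a port already known to pass the firewall (e.g. RDP)
      nc -vz -w 5 203.0.113.10 3389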

    Read the article

  • Server high memory usage at same time every day

    - by Sam Parmenter
    Right, we moved one of our main sites onto a new AWS box with plenty of grunt, as it would allow us more control than we had before and future-proof ourselves. About a month ago we started running into issues with high memory usage at the same time every day. In the morning an export is run to export data to a file which is then FTPed to a local machine for processing. The issues were coinciding with the rough time of the export, but when we didn't run the export one day, the server still ran into the same issues. The export has been run at other times in the day since, to monitor memory usage to see if it spikes. The conclusion is that the export is fine and barely touches the sides memory-wise. No noticeable change in memory usage. When the issue happens, its effect is to kill MySQL and require us to restart the process. We think it might be a MySQL memory issue, but it might just be that MySQL is the first to feel it. Looking at the logs, there is no particular query run before the memory usage hits 90%. When it strikes at about 9:20am, the memory usage spikes from a near-constant 25% to 98% and very quickly kills MySQL to save itself. It usually takes about 3-4 minutes to die. There are no cron jobs running at that time of the day and we haven't noticed a spike in traffic over the period of the issues. Any help would be massively appreciated! Thanks.
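    To catch what balloons just before 9:20, a small cron-driven snapshot of the biggest memory consumers is often enough. A sketch; the script name and log path are arbitrary:
      #!/bin/bash
      # /usr/local/bin/memsnap.sh - append a snapshot of overall and per-process memory use
      LOG=/var/log/memsnap.log
      {
        date
        free -m
        ps aux --sort=-%mem | head -n 15
        echo "----"
      } >> "$LOG"
    Run it once a minute during the window with a crontab entry such as: * 9 * * * /usr/local/bin/memsnap.sh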

    Read the article

  • Failed to bring up eth1 in a dual-IP setup on Ubuntu

    - by lxyu
    I'm using Ubuntu 12.04. I tried to assign two IPs to two Ethernet cards in my server. The content of /etc/network/interfaces is like this: auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 114.80.156.a netmask 255.255.255.224 gateway 114.80.156.b auto eth1 iface eth1 inet static address 114.80.156.c netmask 255.255.255.240 gateway 114.80.156.d a, b, c, and d have different values, which means the two IPs are in different VLANs. But I can only bring up eth0 with this command: $ /etc/init.d/networking restart RTNETLINK answers: File exists Failed to bring up eth1. ...done. I have checked the question here, which shows the same problem as the one I encountered: Can only bring up one of two interfaces But it seems it's not really solved. And in my situation, I need the two IPs to use two different gateways. So how do I fix this problem? Edit 1: changed the example config IP from the 192.168.0.0/16 subnet to another 'real' subnet. Edit 2: the purpose of doing this is fairly simple. The IP range I was previously in doesn't have more room for new servers, and I have to move to another IP range. So I want to make the public servers bind to two IPs for the transition period. I only have really limited knowledge about routing and subnets. @BillThor @rackandboneman, would you please give me some keywords or links on how to set up routes for two IPs? And @Mike Pennington, how do you know I speak Chinese?
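    The "RTNETLINK answers: File exists" error is what you get when a second default route is pushed into the main routing table; the usual answer for two uplinks with two gateways is source-based policy routing with iproute2, one table per interface. A sketch using the same a/b/c/d placeholders as the config above (substitute the real addresses and network prefixes before use):
      # register two extra routing tables (names are arbitrary)
      echo "1 uplink0" >> /etc/iproute2/rt_tables
      echo "2 uplink1" >> /etc/iproute2/rt_tables
      # eth0: its connected /27 plus its own default gateway, in table uplink0
      ip route add 114.80.156.a/27 dev eth0 src 114.80.156.a table uplink0
      ip route add default via 114.80.156.b table uplink0
      # eth1: its connected /28 plus its own default gateway, in table uplink1
      ip route add 114.80.156.c/28 dev eth1 src 114.80.156.c table uplink1
      ip route add default via 114.80.156.d table uplink1
      # replies leave via the interface whose address the traffic arrived on
      ip rule add from 114.80.156.a table uplink0
      ip rule add from 114.80.156.c table uplink1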

    Read the article

  • Windows Server 2008R2 Virtual Lab Activation strategies?

    - by William Hilsum
    I have an ESXi server that I use for testing; however, I often need to create additional Windows Server virtual machines. Typically, if I do not need a VM for more than 30 days, I simply do not activate. However, I have been doing a lot of HA/DRS testing recently and I have had a few servers up for more than this time. I have an MSDN account with Microsoft and have already received extra keys for Windows Server 2008 R2. I am doing nothing illegal and I am sure if I asked, they would issue more - but I do not want to tempt fate! I have got 3 different "activated" Windows snapshots I can get to at any time. If I try to clone these machines, I get the usual "did you copy or move the VM" message. If I choose copy, as far as I can see, it changes the BIOS ID and NIC MACs, which is enough to disable activation. If I choose move, it keeps the activation fine (obviously, I know to change the NIC MAC - I believe I can leave the BIOS ID without problems). However, either of these options keeps the same SID for the computer and user accounts. After the activation period has expired, as far as I can see, all that happens is that optional updates do not work - the normal updates seem to work fine. Based on this, as you can easily get into Windows when not activated without any sort of workaround, I was wondering if it is OK just to leave a machine unactivated? (However, I would obviously prefer it to be activated!) Alternatively, how dangerous is it to run multiple machines in a non-domain environment with the same SID? I am just interested to know if anyone can recommend a strategy for me? I have only found one solution that deals with bypassing activation - I am not interested in doing anything remotely dodgy... at a stretch, I am happy to rearm (I have never needed to keep a server past 100 days), but I would rather have a proper strategy in place.

    Read the article

  • NetBackup prefers "Scratch" tapes over dedicated tapes

    - by wfaulk
    I have a NetBackup 6.0MP7 installation running on Windows Server 2003. It functions as the only Master Server and Media Server. I swap a full set of tapes in and out every week, but leave a set of tapes with their Volume Pool set to "Scratch" in all the time. The weekly tape sets then get rotated back in after a period of time. Largely, this works fine. I seldom actually need the scratch tapes, but every once in a while, a backup will run over what I have dedicated to the task. However, one week's set of tapes consistently gets declined in favor of the scratch pool. The backup policies are the same for every week, they all have "Policy Volume Pool" set to "NetBackup", and all of the tapes for every week (beside the scratch tapes) have had their pools assigned as "NetBackup", definitely including the week that always gets ignored. That said, it doesn't ignore all of the NetBackup pool tapes for that week. It does usually write to two or three of them, but it writes to like 20 of the scratch tapes. (I haven't thought to look to see if it's always the same two or three tapes.) And this problem never seems to occur for any other week. It doesn't load the tapes and then reject them; it never seems to try to use them at all. They are not flagged as frozen. They are all active and unassigned when I swap them in. The tapes are in a Quantum PX510 tape library. The NetBackup server is attached to the library/robot via fibrechannel going through an HP-branded Brocade switch. I'm not an expert on NetBackup at all. I don't really even know where to look. Any advice on logs to look at or logging to enable or really anything at all would be appreciated. I'll keep an eye on the question and update it if anyone needs any more info to help.

    Read the article

  • CheckPoint/Amazon VPC VPN tunnel working inconsistently

    - by Lee
    First time poster, so please be gentle and correct me if there's Server Fault etiquette I'm missing. We have two CheckPoint edge devices at sites A & B, independently managed, connecting to two Amazon private clouds. In both cases, the two Amazon VPCs are in the same community on the CheckPoint device. A VPN tunnel exists between the two CheckPoint devices as well. Between Sites A & B and the Amazon VPC in Northern Virginia, we are unable to keep more than one tunnel up. Both will come up, but tunnel 2 will drop an hour after initiation and will not come back up while tunnel 1 is up. We believe the 1-hour period is due to IPsec phase 2 renegotiation, but can't be sure. On our side, we see the tunnel 2 remote endpoint as not responding to phase 2 negotiation. Between Sites A & B and the Amazon VPC in Oregon, we have no issues. Both tunnels are up and fail over properly. The CheckPoint gateways are using domain-based VPNs. According to CheckPoint's advice to Amazon, this won't work. Yet, in Oregon, it does. We've pursued this with Amazon and, despite the fact it's working in Oregon, they've refused to troubleshoot with us further. Can anyone suggest anything we can do to try to get this stabilized? Going to route-based VPNs is not an option for us.

    Read the article

  • What characteristic of networking/TCP causes linear relation between TCP activity and latency?

    - by DeLongey
    The core of this problem is that our application uses websockets for real-time interfaces. We are testing our app in a new environment but strangely we're noticing an increasing delay in TCP websocket packets associated with an increase in websocket activity. For example, if one websocket event occurs without any other activity in a 1-minute period, the response from the server is instantaneous. However, if we slowly increase client activity the latency in server response increases with a linear relationship (each packet will take more time to reach the client with more activity). For those wondering this is NOT app-related since our logs show that our server is running and responding to requests in under 100ms as desired. The delay starts once the server processes the request and creates the TCP packet and sends it to the client (and not the other way around). Architecture This new environment runs with a Virtual IP address and uses keepalived on a load balancer to balance the traffic between instances. Two boxes sit behind the balancer and all traffic runs through it. Our host provider manages the balancer and we do not have control over that part of the architecture. Theory Could this somehow be related to something buffering the packets in the new environment? Thanks for your help.
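    One way to localize where the extra delay is introduced is to capture the same websocket flow on both sides of the keepalived balancer and compare packet timestamps. A sketch; interface names, ports, and filenames are placeholders:
      # on the load balancer, facing the clients
      tcpdump -i eth0 -nn -w lb_front.pcap 'tcp port 443'
      # on the backend instance, on the balancer-facing interface
      tcpdump -i eth0 -nn -w backend.pcap 'tcp port 8080'
      # afterwards, print inter-packet gaps and compare whether the delay appears before or after the balancer
      tcpdump -r backend.pcap -nn -ttt | head -50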

    Read the article

  • I love Google Chrome, but some non-static pages like Piwik render it unresponsive

    - by gogowitsch
    The web-stats software Piwik stops reacting to mouse clicks after 1-2 seconds. The same is true for Google Maps and Producteev (but Gmail and most other pages work like a charm). These rely heavily on JS and work without Flash. I can click for a very short time period and then the mouse cursor doesn't register the UI anymore (it doesn't turn into an I-beam over input fields, though it moves; if the freeze occurred while the pointer was over an input field, the cursor keeps being an I-beam) and all clicks on the DOM are ignored by Chrome. No message appears, neither obvious nor in the Console (F12). There is no obstructing div or the like in the DOM (F12). Since I couldn't find any hints on the source of my problems, I suspected my plugins and extensions. Unfortunately, neither deactivating all plugins nor all extensions solved the problem. For the problematic pages it always happens; there is no Dropbox running; there are several GB of free RAM; the task manager doesn't show any high CPU or memory utilization (the offending tab uses 30 MB and 0-1% CPU); all problematic pages work in other browsers (Chrome, Firefox, IE); the rest of the computer is very responsive; the computers use different security suites (Kaspersky and Avira). The effect exists between several (synchronized) Chrome instances on different machines, all running Windows 7. Both the OS and Chrome are updated automatically. Other tabs and the Chrome chrome (tabs, menus, toolbar buttons of the browser itself) still work. I really don't like switching between browsers. Any ideas?

    Read the article

  • Domino nchronos.exe multiple instances causing server to die, and Sametime problems

    - by Kevin
    I've had this problem for a few months now. I thought it started when I installed the Traveller software on the server to add ActiveSync support, but I removed that and the problem still persists. Basically, new instances of "nchronos.exe" keep spawning (and not ending), so over a period of a few days the server eventually gets drowned in nchronos.exe processes, stops responding, and I need to kill Domino. My process count the last time was up at about 330, and when I killed it and restarted Domino my process count went to 160. I'm running Domino 8.5.1 with Fix Pack 2. I don't know if it's relevant, but my Domino server was also acting as a Sametime server. At around the same time that nchronos started playing up, Sametime also stopped working. None of my users can connect to Sametime, and in the Domino log it keeps telling me "stpolicy.exe" has terminated. I've googled for that and tried a few things, but nothing seems to make Sametime work again. Any thoughts?? Cheers, Kevin

    Read the article

  • Idle state detection for server

    - by odinmillion
    Windows has a service that detects the idle state. Details: Task Idle Conditions - The computer is considered idle if all the processors and all the disks were idle for more than 90% of the past 15 minutes and if there is no keyboard or mouse input during this period of time. When the Task Scheduler service detects that the computer is idle, the service only waits for user input to mark the end of the idle state. This is very useful for typical PCs that have a keyboard and mouse. We can use the standard Task Scheduler to start some process like defrag when the PC is in the idle state and stop it when the PC leaves the idle state. But what should we use on a standalone server without keyboard and mouse? The server sometimes receives commands over TCP/IP and starts CPU and HDD activity, but sometimes CPU and HDD activity is at zero. I would like to use these periods of time to start defrag or another process, but these automatically started processes should be terminated when other commands appear. The standard idle-state conditions can't help me, because there is no user input to end the idle state. I need a more customizable idle-state detector. Automatically started processes shouldn't influence the idle state, but the PC should leave the idle state when another process appears. What should I use? Maybe an advanced task scheduler already exists? Or should I write a utility in C#? I hope this is a standard task and the useful utilities are already compiled :)

    Read the article

  • Server hang - data loss on reboot, post mortem analysis

    - by rovangju
    A development server I'm responsible for (ext3 on RAID 5 with Debian Squeeze) froze up over the weekend and I was forced to reset it - as in unresponsive from KVM/physical keyboard access, no eth devices responding, etc. Not even the backup process ran (figures, the one time I don't check for confirmation). So after the reset, it turns out that every trace of disk I/O activity that should have happened for a period of ~24h is completely gone. The log files have a big gap in the dates and times, as if the writes were never committed to disk and no processes seemed to have run. Luckily it was a weekend, nothing of value would have been lost, and I don't suspect a hack. What can I do in a post mortem of this event - to prevent it from ever happening again? I've seen this happen before on a completely different machine running FreeBSD. I am rounding up the disk checking tools right now - but there must be more going on! Mount options: /dev/sda1 on / type ext3 (rw,errors=remount-ro) Kernel: Linux dev 2.6.32-5-686-bigmem Disk/Inodes: 13%/3%
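    One post-mortem angle worth checking (a sketch, not a diagnosis): whether writes the kernel considered committed could have been sitting in a volatile cache, since ext3 on kernels of that era often mounts with barriers off by default. Commands assume the Squeeze box as described:
      # is the root filesystem currently mounted with barrier=1?
      grep ' / ' /proc/mounts
      # default mount options and feature flags recorded in the superblock
      tune2fs -l /dev/sda1 | egrep 'Filesystem features|Default mount options'
      # anything the kernel logged around the freeze that actually made it to disk
      grep -iE 'ext3|raid|i/o error|hung task' /var/log/kern.log* | less
      # enable barriers going forward (check the RAID controller cache/BBU settings separately)
      mount -o remount,barrier=1 /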

    Read the article

  • SocketException (Timeout) only when running as scheduled task

    - by BVartin
    I'm running a C# web-scraper application (that I wrote) on a Windows Server 2003 instance under a user belonging to the local Administrators group. When I run it within a desktop/remote-desktop session the application runs successfully, but when I schedule it to run under the same user/security context outside of the desktop session, all socket connections time out. The scheduled task calls a batch file which in turn calls the application. The Windows Server 2003 instance has a very basic configuration and isn't even connected to a domain. I cannot find anything in any firewall or security configuration which is preventing this, but maybe I have overlooked something; can anyone be of any assistance? System.Net.WebException: Unable to connect to the remote server --- System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond X.X.X.X:443 at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress) at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Int32 timeout, Exception& exception) --- End of inner exception stack trace --- at System.Net.HttpWebRequest.GetResponse()

    Read the article

  • Laptop GPU apparently blew up, motherboard doesn't even turn on its power LED. [But..]

    - by leladax
    EDIT: (Was: Laptop automatic shutdown after 2 seconds) If I take out the GPU, the motherboard LED turns on, but then [if it attempts to power up and boot] it turns off after 2 seconds [fans turn on normally in that short period]. [Without the GPUs out there's not even an attempt to boot.] It's an SLI motherboard for a Toshiba (model X200-219). If I take out one of the GPUs (they are on top of each other) it surprisingly lets the motherboard turn on too (as it is if both are out), but it still turns off after 2-3 seconds, same behavior. I wonder if it's the GPU that produces the 'turn off after being on' behavior and not something else. [Has anyone seen this behavior with blown-up GPUs, or could it be something else?] Previous question (before EDIT. Sorry, but someone thought it productive to lock the other one as a duplicate): I'm trying to investigate which component produces this behavior. Other indications show it may be the GPU, but I wonder if anyone knows more. It's a Toshiba Satellite X200. Description: AC power shows the power being fed normally; when turned on, the fan works and it appears to be starting up, but after 2 seconds it shuts down with only the 'AC power connected' LED on. -- Seconds are about up to 4, maybe not 2 exactly.

    Read the article
