Search Results

Search found 23890 results on 956 pages for 'issue'.

  • How to solve a deallocated connection in iPhone SDK 3.1.3? - Streams - CFSockets

    - by Christian
    Hi everyone, debugging my implementation I found a memory leak. I know where the issue is; I tried to solve it, but sadly without success. I will try to explain, maybe someone can help with this.

    Two classes are involved: the publish class (where publishing the service and the socket configuration are done) and the connection class (where the socket binding and the stream configuration are done). The main issue is in the connection via native socket. In the publish class the "server" accepts a connection with a callback. The callback carries the native-socket information, and a connection is created from it. Next, the socket binding and the stream configuration are done. When those actions are successful, the connection instance is saved in a mutable array. Thus the connection is established.

        static void AcceptCallback(CFSocketRef socket, CFSocketCallBackType type, CFDataRef address, const void *data, void *info) {
            Publish *rePoint = (Publish *)info;
            if (type != kCFSocketAcceptCallBack) {
                return;
            }
            CFSocketNativeHandle nativeSocketHandle = *((CFSocketNativeHandle *)data);
            NSLog(@"The AcceptCallback was called, a connection request arrived to the server");
            [rePoint handleNewNativeSocket:nativeSocketHandle];
        }

        - (void)handleNewNativeSocket:(CFSocketNativeHandle)nativeSocketHandle {
            // Create the connection
            Connection *connection = [[[Connection alloc] initWithNativeSocketHandle:nativeSocketHandle] autorelease];
            if (connection == nil) {
                close(nativeSocketHandle);
                return;
            }
            NSLog(@"The connection from the server was created, now try to connect");
            if (![connection connect]) {
                [connection close];
                return;
            }
            [clients addObject:connection]; // save the connection, trying to avoid the deallocation
        }

    The next step is to receive the information from the client, so a read-stream callback is triggered with the information of the established connection. But when the callback handler tries to use this connection, the error occurs: it says that the connection is deallocated. The issue is that I don't know where or when the connection is deallocated, nor how to find out. I am using the debugger, but after some trials I don't see more info.

        void myReadStreamCallBack(CFReadStreamRef stream, CFStreamEventType eventType, void *info) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            // The error: -[Connection retain]: message sent to deallocated instance 0x1f5ef0
            // (where 0x1f5ef0 is the reference to the established connection)
            Connection *handlerEv = [[(Connection *)info retain] autorelease];
            [handlerEv readStreamHandleEvent:stream andEvent:eventType];
            [pool drain];
        }

        void myWriteStreamCallBack(CFWriteStreamRef stream, CFStreamEventType eventType, void *info) {
            NSAutoreleasePool *p = [[NSAutoreleasePool alloc] init];
            // Sometimes the error also happens here; I tried without the pool, but that doesn't help either.
            Connection *handlerEv = [[(Connection *)info retain] autorelease];
            [handlerEv writeStreamHandleEvent:eventType];
            [p drain];
        }

    Something strange: when I run the debugger (with breakpoints) everything goes well. The connection is not deallocated, the callbacks work fine and the server is able to receive the message. I will appreciate any hint!
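
    If it helps other readers: one direction worth checking (an assumption on my part, not something established in the post) is how the stream callbacks are registered. If the CFStreamClientContext is filled with a bare pointer and NULL retain/release callbacks, nothing keeps the Connection alive between callbacks, which would match the "works under breakpoints" timing. A minimal sketch of a registration that lets the stream retain the object, using hypothetical names for the Connection's own stream setup:

        static void *ConnRetain(void *info) {
            return (void *)[(Connection *)info retain];
        }

        static void ConnRelease(void *info) {
            [(Connection *)info release];
        }

        // Inside Connection's (hypothetical) stream setup, where readStream
        // is the CFReadStreamRef created from the native socket:
        CFStreamClientContext ctx = {0, self, ConnRetain, ConnRelease, NULL};
        CFReadStreamSetClient(readStream,
                              kCFStreamEventHasBytesAvailable |
                              kCFStreamEventErrorOccurred |
                              kCFStreamEventEndEncountered,
                              myReadStreamCallBack,
                              &ctx);

    With the retain/release pair supplied, CFReadStreamSetClient copies the context and retains the Connection for as long as the client is registered, so the info pointer handed to the callback stays valid between events.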

  • surfaceDestroyed called out of turn

    - by Avasulthiris
    I'm currently developing against minimum SDK version 3 (Android 1.5, Cupcake) and I'm having a strange, unexplained issue that I have not been able to solve on my own. It is now becoming rather urgent, as I've already missed one deadline.

    I'm writing a high-level library to make long-term Android development easier and quicker. One specific module has to capture images for an application. I've gotten everything right over the last couple of months except this one little thing, and I don't know what to do any more.

    When I use the Camera object and implement a SurfaceHolder.Callback, the methods surfaceCreated() and surfaceChanged() are called one after the other. Then, when the activity finishes, surfaceDestroyed() is called. This is how it should be. But when I stick the exact same code in my library (a plain Java library that references the Android API, not an activity), surfaceDestroyed() is called directly after created and changed. As a result, the camera object is closed before I can use it and the application force closes. What a pain. I can't do anything! This method call is controlled by the device. Why does the surface close for no reason? It happens even when I post the work to run on the activity thread through my own invokeAndWait(Runnable) method, like I do for many other things.

    I have 5 different working examples of different ways to capture images in Android, but I still get the same issue when I plug any of them into my library. I don't understand what the difference is. The code is pretty much the same, and I post all the related code to the UI thread, so it's not a thread-handling issue or anything like that. I've rewritten it about 20 times in different ways - same issue every time.

    The only other approach I know of is creating a new Camera and setting it on the VideoView. The Android source (C++ native code) however provides no Camera constructor, only an open() method which automatically forwards the camera's state to 'prepared' - but I can only set the camera on the VideoView from the 'initialized' state. Pretty silly, I know, but there is no way around it unless I modify the Android library source code, haha - not an option! The API does not allow for this method; you are expected to use it like my first example.

    So essentially: I just need to understand exactly why surfaceDestroyed() is called out of turn, and whether there is anything I can do to keep the surface from closing. If I can just understand the exact logic behind it and how it works! The documentation isn't much help.

    Secondly, does anyone know of any alternative ways to do it, as in my second example, but one which the API actually allows for? haha. Thanks guys. I would post code, but it's fairly complicated - a couple thousand lines for this specific class - and it would probably take a couple of days to explain with all the threading and event listeners and whatnot. I just need help with this one single thing. Please let me know if you have any questions.

  • "Could not establish secure channel for SSL/TLS" in .NET CF application on smart phone

    - by Stefan Mohr
    I have a stubborn communications issue with an application running on the .NET Compact Framework 3.5 on Windows Mobile smartphones. I am constructing a web request using this code:

        UTF8Encoding encoding = new System.Text.UTF8Encoding();
        byte[] Data = encoding.GetBytes(HttpUtility.ConstructQueryString(parameters));
        httpRequest = WebRequest.Create((domain)) as HttpWebRequest;
        httpRequest.Timeout = 10000000;
        httpRequest.ReadWriteTimeout = 10000000;
        httpRequest.Credentials = CredentialCache.DefaultCredentials;
        httpRequest.Method = "POST";
        httpRequest.ContentType = "application/x-www-form-urlencoded";
        httpRequest.ContentLength = Data.Length;

        Stream SendReq = httpRequest.GetRequestStream();
        SendReq.Write(Data, 0, Data.Length);
        SendReq.Close();
        HttpWebResponse httpResponse = (HttpWebResponse)httpRequest.GetResponse();
        return httpResponse.GetResponseStream();

    The web service works by receiving a JSON-encoded document as part of the URL (e.g. https://site.com/ws/sync??document={"version":"1.0.0","items":[{"item_1":"item1"}]}&user=usr&password=pw) and returning another JSON document as the response data.

    This code runs fine on all emulators and PDAs running WM 5 and 6. We have seen an issue with a couple of customers running Treo smartphones, and only on the Sprint network. We have tested the code on an identical device on the AT&T network (via DeviceAnywhere), and once again the code worked as we expected. This has to be some sort of security policy on the phone, but we've been unable to determine a workaround or diagnose it thoroughly, as we cannot reproduce it in house and have had to resort to getting users to assist with running test drivers for us.

    When this code executes, the user's device throws the following exception:

        System.Net.WebException: Could not establish secure channel for SSL/TLS
        Stack trace:
          at System.Net.HttpWebRequest.finishGetRequestStream()
          at System.Net.HttpWebRequest.GetRequestStream()
          at OurApp.GetResponseStream(String domain, Hashtable parameters)

        Inner exception: System.IO.IOException
        Authentication failed because the remote party has closed the transport stream.
        Stack trace:
          at System.Net.SslConnectionState.ClientSideHandshake()
          at System.Net.SslConnectionState.PerformClientHandShake()
          at System.Net.Connection.connect(Object ignored)
          at System.Threading.ThreadPool.WorkItem.doWork(Object o)
          at System.Threading.Timer.ring()

    Examining the server's Apache logs shows no hits from the user's IP - I don't think the device is even attempting to send a packet before failing. If relevant, the server runs Apache on Linux and is written using the TurboGears Python framework. The server certificate is issued by a CA and is still valid. The test driver where this error was copied from was not code signed; however, the application proper, which is signed with a GeoTrust certificate, produces the same error (minus the error messages), so we don't believe this is a code-signing issue. The application installs and launches without issue on all phones - it's just establishing this SSL connection that is breaking for these users.

    One significant issue in troubleshooting is that there is substantial inconvenience each time we try out a solution (we need to find a "volunteer" customer), so we're really looking for a silver bullet, or a better understanding of the handshaking process, so we can be reasonably confident we only need to ask the user to test one or two more times. One final mention: we have tried the sync both over ActiveSync and over GPRS, with identical results. Any thoughts would be greatly appreciated!

  • SChannel "cannot find certificate in either LocalMachine or CurrentUser store"

    - by Chris J
    We have an in-house application that requires the use of client SSL certificates to authenticate with a remote server (not under our control). This has worked without problems before, but on deploying to a new server we're having problems getting Windows 2008 to use the certificate.

    The certificate exists as a .pfx file that contains a private key. The same certificate exists in the LocalMachine store, again with its private key. We've ensured the one in the LocalMachine store is correct by creating a website in IIS against that certificate, so we're happy that the certificate, certificate chain, and private key are valid. The PFX was created by exporting from the Certificates MMC snap-in.

    The issue is that we get the following in the System.Net diagnostic logs, which suggests it can't find the private key:

        System.Net Information: 0 : [5988] SecureChannel#23264094 - Locating the private key for the certificate:
          [Subject]       CN=internal-server.company.com, OU=Servers, OU=Devices, O=org
          [Issuer]        CN=SubCA02, OU=CA, o=org
          [Serial Number] 407ABCDE
          [Not Before]    31/10/2013 11:08:48 AM
          [Not After]     31/10/2016 11:08:48 AM
          [Thumbprint]    4354A34F6004F019E60F055979A47E50F62D1504
        System.Net Information: 0 : [5988] SecureChannel#23264094 - Cannot find the certificate in either the LocalMachine store or the CurrentUser store.

    I've validated the thumbprint, issuer and serial number listed in the log against the certificate in the LocalMachine store, and they marry up. From what I can tell after much searching, this appears to be a permissions issue. The user the application runs as has been granted access to the private key (Personal Certificates - right-click on the certificate - All Tasks - Manage Private Keys), so I'm now at a loss as to which permission(s) could be causing the issue.

  • Issues with ProxyPass and ProxyPassReverse when proxying to localhost and a different TCP port

    - by mbrownnyc
    I am attempting to use ProxyPass and ProxyPassReverse to proxy requests through Apache to another server instance that is bound to localhost on a different TCP port than the VHost (the VHost is bound to :80, the target to :5000). However, I repeatedly receive HTTP 503 when accessing the Location. My configuration, set up per the ProxyPass documentation:

        <VirtualHost *:80>
            ServerName apacheserver.domain.local
            DocumentRoot /var/www/redmine/public
            ErrorLog logs/redmine_error
            <Directory /var/www/redmine/public>
                Allow from all
                Options -MultiViews
                Order allow,deny
                AllowOverride all
            </Directory>
        </VirtualHost>

        PassengerTempDir /tmp/passenger

        <Location /rhodecode>
            ProxyPass http://127.0.0.1:5000/rhodecode
            ProxyPassReverse http://127.0.0.1:5000/rhodecode
            SetEnvIf X-Url-Scheme https HTTPS=1
        </Location>

    I have tested binding the alternate server to the interface IP address, and the same issue occurs. The server servicing the request is an instance of python paste:httpserver, and it has been configured to use the /rhodecode suffix (I saw this mentioned in other posts about ProxyPass). The documentation from the project itself, RhodeCode, says to use the above. The issue persists if I target another server serving on a different port. Does ProxyPass allow proxying to a different TCP port?

    [update] I won't delete this, in case someone comes across the same issue. I had set an ErrorLog, and in that ErrorLog the following errors were reported:

        [Wed Nov 09 11:36:35 2011] [error] (13)Permission denied: proxy: HTTP: attempt to connect to 127.0.0.1:5000 (192.168.100.100) failed
        [Wed Nov 09 11:36:35 2011] [error] ap_proxy_connect_backend disabling worker for (192.168.100.100)

    After some more research, I attempted to set SELinux to permissive (echo 0 >/selinux/enforce) and tried again. It turns out the SELinux boolean httpd_can_network_connect must be set to 1. For persistence on reboot: setsebool -P httpd_can_network_connect=1
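
    For anyone hitting the same 503, a minimal check-then-fix sequence based on the update above (assuming the standard SELinux tools are installed, as on a stock CentOS system):

        # See whether httpd is currently allowed to open outbound connections
        getsebool httpd_can_network_connect

        # Allow it; -P writes the change to policy so it survives a reboot
        setsebool -P httpd_can_network_connect 1

        # Independently confirm the backend is really listening on the port
        netstat -tlnp | grep :5000

    The (13)Permission denied entry in the ErrorLog is the giveaway that the connect() was blocked by policy rather than refused by the backend; a dead backend typically logs (111)Connection refused instead.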

  • Could not continue scan with NOLOCK due to data movement during installation

    - by dbdev1
    I am running Windows Server 2008 Standard Edition R2 x64 and I installed SQL Server 2008 Developer Edition. All of the preliminary checks run fine (apart from a warning about Windows Firewall and opening ports, which is unrelated to this and shouldn't be an issue - I can open those ports). Halfway through the actual installation, I get a popup with this error:

        Could not continue scan with NOLOCK due to data movement.

    The installation still runs to completion when I press OK. However, at the end, it states that the following services "failed":

        Database Engine Services
        SQL Server Replication
        Full-Text Search
        Reporting Services

    How do I know if this actually means that anything from my installation (which is on a clean Windows Server setup - nothing else on there, no previous SQL Servers, no upgrades, etc.) is missing? I know from my programming experience that locks are for concurrency control, and the Microsoft help on this issue points to changing my query's locks/transactions in a certain way to fix it. But I am not touching any queries.

    Also, now that I have installed the app, when I log in I keep getting this message:

        TITLE: Connect to Server
        ------------------------------
        Cannot connect to MSSQLSERVER.
        ------------------------------
        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 67)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=67&LinkId=20476
        ------------------------------
        BUTTONS: OK
        ------------------------------

    I went into the Configuration Manager and enabled named pipes and restarted the service (this is something I have done before, as this message is common and not serious). I have disabled Windows Firewall temporarily. I have checked the instance name against the error logs. Please advise on both of these errors; I think they are related. Thanks
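
    A quick way to see whether the services reported as "failed" were actually installed and are running (a generic check, not specific to this install; PowerShell ships with Server 2008 R2):

        # List every SQL Server-related service with its current state
        Get-Service | Where-Object { $_.DisplayName -like "*SQL*" } |
            Format-Table Name, DisplayName, Status -AutoSize

    If the Database Engine service is present but stopped, starting it and then retrying the connection helps distinguish a failed installation from a service that simply never started.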

  • Multiple Users use Script to Access Remote Server via Passwordless SSH

    - by jinanwow
    I am currently setting up a Linux box that is tied into Active Directory. This box allows users to SSH into it with their AD username and password to gather information (Box A). I am trying to create a function in /etc/bash.bashrc so that all a user has to do is type "get_info", for example; the function will SSH into a remote machine (Box B), run a command, and output the information back to the user.

    I have generated an RSA key on Box A, added it to Box B's authorized_keys, and that works fine. The issue I am running into is: how do I set this up once, for both current users and any new user who logs into Box A? Is there a better approach than what I am currently doing? Essentially I just need to connect to the remote box, run a command, and output the information back to the user - that is it. How can I allow new users to connect via a script to the remote box without having to generate RSA keys for each of them? The get_info function will be supplied a value, e.g. 'get_info 012345', and returns the results.
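
    One possible shape for this (a sketch under assumptions, not a vetted design: the key path, remote user, and remote command name are all illustrative) is a single shared, dedicated key that every local user can read, locked down on Box B so it can only ever run the one lookup:

        # /etc/bash.bashrc on Box A
        get_info() {
            ssh -i /etc/ssh/get_info_key \
                -o StrictHostKeyChecking=no \
                infouser@boxb "$1"
        }

    And on Box B, restrict what that key may do in ~infouser/.ssh/authorized_keys:

        command="/usr/local/bin/lookup_info",no-port-forwarding,no-X11-forwarding ssh-rsa AAAA... get_info-key

    The forced command can read the argument the caller supplied from the SSH_ORIGINAL_COMMAND environment variable. The command="..." option means that even though the private key is readable by everyone on Box A, a user who copies it can still only run the one lookup, never get a shell on Box B.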

  • Unable to connect to Postgres on Vagrant Box - Connection refused

    - by Ben Miller
    First off, I'm new to Vagrant and Postgres. I created my Vagrant instance using http://files.vagrantup.com/lucid32.box without any trouble, and I am able to run vagrant up and vagrant ssh without issue. I followed the instructions at http://blog.crowdint.com/2011/08/11/postgresql-in-vagrant.html with one minor alteration: I installed the "postgresql-8.4-postgis" package instead of "postgresql postgresql-contrib". I started the server using:

        postgres@lucid32:/home/vagrant$ /etc/init.d/postgresql-8.4 start

    While connected to the Vagrant instance, I can use psql to connect to the database without issue. In my Vagrantfile I had already added:

        config.vm.forward_port 5432, 5432

    but when I try to run psql from localhost I get:

        psql: could not connect to server: Connection refused
            Is the server running locally and accepting
            connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

    I'm sure I am missing something simple. Any ideas?

    Update: I found a reference to an issue like this, and the article suggested using:

        psql -U postgres -h localhost

    With that I get:

        psql: server closed the connection unexpectedly
            This probably means the server terminated abnormally
            before or while processing the request.
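
    These two symptoms (socket connections work inside the box, TCP connections are refused or dropped) usually come down to two Postgres settings. A sketch of the likely edits, with paths assumed for the Ubuntu lucid postgresql-8.4 package:

        # /etc/postgresql/8.4/main/postgresql.conf
        # default is 'localhost'; '*' makes the server accept TCP connections
        # on all interfaces, including the one Vagrant forwards
        listen_addresses = '*'

        # /etc/postgresql/8.4/main/pg_hba.conf
        # allow password-authenticated TCP logins from forwarded connections
        host    all    all    0.0.0.0/0    md5

    Both changes require a server restart (/etc/init.d/postgresql-8.4 restart). The 0.0.0.0/0 range is wide open and only reasonable for a local development VM.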

  • Backup Gmail using Mail.app and IMAP without redundancy

    - by Cawas
    I don't actually care for Mail.app; I mostly use the Gmail interface, and Mail.app just for offline use, for quickly reading and eventually replying. Everything is working fine - I think I've followed every guide out there (here's a great one). But I could find nothing about avoiding redundancy.

    I can manually avoid it, either by using POP or by unchecking most of my labels from IMAP. But I use a lot of labels, I often put more than one label on a message, and I want them in Mail.app. Is there any way to make it keep just one copy of repeated messages? Maybe there's a message ID or checksum that could be used. If there isn't a way to do it, be assured I still prefer having the extra messages and "wasting" space rather than not having them at all.

    Edit: I've come across many solutions for finding duplicate files, but they just delete the files. That makes things worse: Mail will just sync it all again. I've realized it's probably better to keep two accounts set up: POP for backup, and IMAP for everything else with "All Mail" removed. That's because if "All Mail" on the server is deleted for any reason, my local "All Mail" will also get deleted, while POP keeps all files regardless of the server. This doesn't solve the redundancy issue at all, but it doesn't create any new issues either, and I can even use search properly, without duplicated results, if I search just the POP account. So it helps optimize a little.

    But I still think the best way to solve this would be something like aamann's Mail Scripts, tweaked to hardlink the duplicates rather than delete them, and optimized so it doesn't need to scan everything every time. I'm trying to contact him to see what we can do. At any rate, I'm still looking for an answer!
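
    To make the hardlink idea concrete, here is a rough sketch of what such a pass could look like (all assumptions: the .emlx layout under ~/Library/Mail, and bash 4 for associative arrays - stock OS X ships bash 3.2, so this would need a newer bash from MacPorts or similar; try it on a copy of the mail store first):

        #!/bin/bash
        # Replace byte-identical duplicate messages with hardlinks to the
        # first copy seen, so repeated label-folders stop costing disk space.
        declare -A seen   # checksum -> path of first copy
        while IFS= read -r -d '' f; do
            sum=$(md5 -q "$f")                   # md5 -q prints only the digest
            if [[ -n "${seen[$sum]}" ]]; then
                ln -f "${seen[$sum]}" "$f"       # swap duplicate for a hardlink
            else
                seen[$sum]="$f"
            fi
        done < <(find "$HOME/Library/Mail" -type f -name '*.emlx' -print0)

    The caveat with any approach like this is the one from the post: if Mail decides a file changed, it will re-download it, and a change made through one hardlink alters every "copy", so it is a space optimization rather than a true backup dedup.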

  • SNMPD running but not listening for connections at random

    - by Lukasz
    OS: CentOS release 5.7 (Final). Net-SNMP: net-snmp-5.3.2.2-14.el5_7.1 (from RPM).

    Periodically my NMS notifies me that SNMP has gone down on this machine. The service is restored within 10 to 30 minutes. My NMS also pings and checks SSH, and those services are not affected during the SNMP outage. The SNMPD log file shows that it is working and apparently receiving packets (either from local agents at 127.0.0.1 or from my NMS at 172.16.37.37); however, attempting to snmpwalk locally or from the NMS fails with a timeout.

    I have 7 of these servers, running a mixture of CentOS 5.7 and RHEL 5.7 with this specific version of Net-SNMP installed from RPM - none of them has this issue except this one. Five of the machines (including the NMS system and this problem server) are in the same rack, connected through one switch. Restarting SNMPD does not fix the issue - it clears up by itself eventually. Any suggestions where I can begin diagnosing the issue? It's a closed subnet, so IPTables is not used. SNMPD config below:

        # Following entries were added by HP Insight Management Agents at
        # Tue May 15 10:58:17 CLT 2012
        dlmod cmaX /usr/lib64/libcmaX64.so
        rwcommunity public 127.0.0.1
        rocommunity public 127.0.0.1
        rwcommunity 3adRabRu 172.16.37.37
        rocommunity 3adRabRu 172.16.37.37
        rwcommunity 3adRabRu 172.16.37.36
        rocommunity 3adRabRu 172.16.37.36
        trapcommunity callmetraps
        trapsink 172.16.37.37 callmetraps
        trapsink 172.16.37.36 callmetraps
        syscontact Lukasz Piwowarek
        syslocation Santiago, Chile
        # ---------------------- END --------------------
        agentAddress udp:161
        com2sec rwlocal default public
        com2sec rolocal default public
        com2sec subnet  default 3adRabRu
        group rwv2c v2c rwlocal
        group rov2c v2c rolocal
        group rov2c v2c subnet
        view all included .1
        access rwv2c "" any noauth exact all all  none
        access rov2c "" any noauth exact all none none
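
    During the next outage, it may help to separate "daemon hung" from "packets not arriving". Two quick probes (generic commands; substitute the problem server's own address, which the post doesn't give):

        # On the box itself - does the agent answer over loopback?
        snmpwalk -v 2c -c public -t 5 127.0.0.1 system

        # Also on the box - are the NMS's queries even reaching UDP 161?
        tcpdump -n -i any udp port 161

    If loopback answers while the NMS times out and tcpdump shows no inbound packets, the problem is in the network path (switch, ARP, NIC) rather than snmpd itself; if loopback also times out, the daemon is the more likely culprit, and a blocking dlmod (the HP cmaX module loaded at the top of the config) would be worth ruling out.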

  • apache vhost not working consistently

    - by petrus
    I have a vhost on my webserver whose sole goal is to return the client IP address:

        petrus@bzn:~$ cat /home/vhosts/domain.org/index.php
        <?php
        echo $_SERVER['REMOTE_ADDR'];
        echo "\n"
        ?>

    This helps me troubleshoot networking issues, especially when NAT is involved. As such, I don't always have domain name resolution, and this service needs to work even when queried by IP address. I'm using it this way:

        petrus@hive:~$ echo "GET /" | nc 88.191.124.41 80
        191.51.4.55
        petrus@hive:~$ echo "GET /" | nc domain.org 80
        191.51.4.55
        router#more http://88.191.124.41/index.php
        88.191.124.254

    However I found that it wasn't working from at least one computer:

        petrus@seth:~$ echo "GET /" | nc domain.org 80
        petrus@seth:~$
        petrus@seth:~$ echo "GET /" | nc 88.191.124.41 80
        petrus@seth:~$

    What I checked: this is not related to IPv6:

        petrus@seth:~$ echo "GET /" | nc -4 ydct.org 80
        petrus@seth:~$
        petrus@hive:~$ echo "GET /" | nc ydct.org 80
        2a01:e35:ee8c:180:21c:77ff:fe30:9e36

    The netcat version is the same (except platform, i386 vs x64):

        petrus@seth:~$ type nc
        nc est haché (/bin/nc)
        petrus@seth:~$ file /bin/nc
        /bin/nc: symbolic link to `/etc/alternatives/nc'
        petrus@seth:~$ ls -l /etc/alternatives/nc
        lrwxrwxrwx 1 root root 15 2010-06-26 14:01 /etc/alternatives/nc -> /bin/nc.openbsd
        petrus@hive:~$ type nc
        nc est haché (/bin/nc)
        petrus@hive:~$ file /bin/nc
        /bin/nc: symbolic link to `/etc/alternatives/nc'
        petrus@hive:~$ ls -l /etc/alternatives/nc
        lrwxrwxrwx 1 root root 15 2011-05-26 01:23 /etc/alternatives/nc -> /bin/nc.openbsd

    It works when used without the pipe:

        petrus@seth:~$ nc domain.org 80
        GET /
        2a01:e35:ee8c:180:221:85ff:fe96:e485

    And the piping works, at least with a test service (netcat listening on 1234/tcp, output to stdout):

        petrus@bzn:~$ nc -l -p 1234
        GET /
        petrus@bzn:~$
        petrus@seth:~$ echo "GET /" | nc domain.org 1234
        petrus@seth:~$

    I don't know if this issue is more related to netcat or Apache, but I'd appreciate any pointers to troubleshoot it! The IP addresses have been modified but kept consistent for easy reading. bzn is the server, hive is a working client, and seth is the client on which I have the issue.
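
    A detail worth noting when reproducing this (my observation, not from the post): echo "GET /" sends a bare HTTP/0.9 request and then closes netcat's stdin, and some nc builds exit on stdin EOF before the server's reply arrives, which would produce exactly the empty output seen on seth. Two variations that take that ambiguity out of the test:

        # A complete HTTP/1.0 request; the blank line tells Apache to answer now
        printf 'GET / HTTP/1.0\r\n\r\n' | nc domain.org 80

        # If the build supports it, -q makes nc linger after stdin EOF
        printf 'GET / HTTP/1.0\r\n\r\n' | nc -q 5 domain.org 80

    If the printf form answers on seth while the echo form stays silent, the vhost is fine and the difference lies in how the two netcat binaries handle EOF, despite both pointing at /bin/nc.openbsd.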

  • My computer loses network connectivity every 30 minutes

    - by Logan Garland
    My LAN-connected PC loses network connectivity every 30 minutes. It has a static IP address, and I've checked to make sure there aren't any IP conflicts on my network. If I'm streaming from that PC to my Xbox, the stream is interrupted, and it normally takes about a minute to come back online. The same happens if I'm actually on the PC and just browsing the web.

    I'm looking for suggestions on how to track down this issue. I've tried checking the available logs on my router to see if there is an issue with DHCP, but have been unsuccessful in finding any evidence. Any suggestions would be helpful. I can't think of any recent changes to my network, PC or software installations that may have caused this. I am a software developer and have intermediate networking knowledge.

    EDIT: During one outage I told Windows to troubleshoot the network problem, and it said that it could automatically fix the problem by changing DHCP info: it basically switched my network adapter from static to "obtain an address automatically". This did fix the issue quicker than just waiting it out, but the outage occurred again 30 minutes later, even leaving those settings in place.

  • Virtual Machine loses network connectivity on Hyper-V Cluster

    - by Chris W
    We're running a number of VMs on a 6-node failover cluster of blades using Hyper-V. We have an intermittent issue - every few days, at different times, so not a fixed frequency - of VMs losing network connectivity. Console access to the VM suggests all is fine, and the underlying blade has normal connectivity. To resolve the problem we either restart the VM or, more usually, do a live migration to another blade, which brings connectivity back; we then migrate it back to the original blade.

    I've had 3 instances of this happen with a specific VM running on a particular blade; however, it has happened once with a different VM running on a different blade. All VMs and blades have the same basic setup and are running Windows 2008 R2. Any ideas where I should be looking to diagnose the possible causes of this problem, given that the event logs provide no help?

    Edit: I've checked that each blade is running the latest NIC drivers and all seem to be fine. Something that is confusing me: a failover or restart of the VM resolves the issue. While I need to work out the underlying issue that is causing the NICs to hang, I'm also concerned that the VM didn't fail over to another node, which would have solved the outage for me. Is there a way to configure the cluster so that it can tell that the VM guest has lost connectivity and fail it over? As things stand, the cluster assumes the VM is running happily, as I presume Hyper-V reports everything is fine even though there is a problem.

  • kvm works only when kvm-intel is unloaded

    - by Sathya
    I am new to KVM, and I have a strange issue. But before explaining it, here is my setup. I'm trying to install a VM on my host, an Acer 5720 laptop with an Intel T7500 processor. The CPU flags indicate that virtualization is supported. I run Ubuntu 10.04 (Lucid) on it, which comes with KVM.

    Now, coming to the issue: I don't get any errors while executing "sudo modprobe kvm-intel", so I presume my processor does indeed support hardware virtualization. I use virt-manager and create a VM on which I install Ubuntu from an .iso file. When I start the VM, it says it is running - no signs of any trouble, and I can see the domain listed in "virsh list". But when I try to connect to the VM through VNC, all I get is a blank screen (no cursor). There is no response to any key press. I changed the video mode etc. and tried all the different combinations, but none work.

    But strangely, if I shut down the VM and virt-manager and then unload the module with "sudo modprobe -r kvm-intel", everything works fine: I can see the screen via VNC, I am able to install the OS, and so on.

    So what does this mean? Is hardware virtualization not supported? How come there is no error anywhere? "dmesg | grep kvm" doesn't report anything. Can someone throw light on what exactly is happening?
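
    Two quick checks that can narrow this down (generic commands, not from the post):

        # Non-zero means the CPU advertises hardware virtualization
        # (vmx for Intel VT-x, svm for AMD-V)
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # Confirm which modules are loaded while the VM is running
        lsmod | grep kvm

    A guest that only displays with kvm-intel unloaded is almost certainly falling back to QEMU's software emulation at that point, which suggests VT-x is present but misbehaving when actually used. On laptops of that era, a BIOS toggle for Virtualization Technology (disabled, or enabled but needing a full power-off to take effect) is a common thing to rule out.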

  • Why did MAC-Address Cloning Fix My Router?

    - by FranticPedantic
    I have a Belkin router, and about a year ago I suddenly lost my internet connectivity from Comcast. The internet worked fine when I plugged the modem right into my laptop, so I just ignored it. When I moved to another apartment, I eventually took the dive and called tech support. The tech told me to clone my MAC address, which completely fixed the issue.

    Now, I know what a MAC address is, and I've read what MAC cloning is. What has bothered me since is that I don't see how this fixed the issue. As I understand MAC address cloning, it has the router pretend it has the same MAC address as my computer. Here is why I don't understand how that fixes my issue:

    I have used several different computers with this router. Cloning the MAC address fixed it for ALL of my computers. The laptop I first used with my ISP was not the one that was connected when I cloned the address. Furthermore, I didn't have any problems for quite some time after I stopped using the first computer - it wasn't as if the internet suddenly stopped working when I changed laptops.

    Now it occurred to me that maybe there was some sort of expiration? Except... which MAC address did it clone? It was just an option on the router administration page. Did it just pick whichever computer was connected to it? If my ISP still wanted the MAC of my first computer, how did some other computer's address fix it?

    As mentioned earlier, why did this problem seemingly stem from nowhere? Anyway, I don't have any current problems, so this is more just out of general curiosity. If anybody can explain it, it would be appreciated!

  • Starfield Wildcard SSL Certificate Not Trusted in All Browsers

    - by Austen Cameron
    I am at a loss as to what else I might try in order to debug this issue with a Starfield wildcard SSL certificate. The problem is that in certain browsers (Safari, or the most-updated Chrome you can get for OS X 10.5.8, for example) the certificate comes up as untrusted, even on the root domain.

    My server setup / background info:

        General LAMP setup - CentOS 6.3 - on a GoDaddy VPS
        Starfield Technologies wildcard SSL certificate
        Installed using the instructions from GoDaddy's support pages

    The ssl.conf lines are basically as follows:

        SSLCertificateFile /path/to/cert/mysite.com.cert
        SSLCertificateKeyFile /path/to/cert/mysite.key
        SSLCertificateChainFile /path/to/cert/sf_bundle.crt

    Everything seemingly worked fine until the other night, when I noticed the problem in OS X. I assume it's more browser-version related, but I have only been able to replicate it on that particular machine.

    What I have tried:

        Updating sf_bundle.crt from GoDaddy's certificate repository and Starfield's repository versions
        Following this ServerFault answer from Jim Phares: changing the ChainFile line to sf_intermediate.crt from Starfield's repository
        Using http://www.sslshopper.com/ssl-checker.html on my URL

    The checker says the domain is correctly listed on the certificate but comes up with an error that reads:

        The certificate is not trusted in all web browsers. You may need to install an Intermediate/chain certificate to link it to a trusted root certificate.

    What might I try next to remedy the untrusted-certificate issue? Let me know if there is any other information that might help debug this. Thanks in advance!
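
    One way to see exactly what the server is handing out, independent of any browser's cache of intermediates (standard OpenSSL commands; the file names are the ones from the post):

        # Print the full chain the server actually presents; a missing
        # intermediate shows up as a short chain plus a verify error
        openssl s_client -connect mysite.com:443 -showcerts

        # Confirm the certificate and private key belong together:
        # the two digests below should match
        openssl x509 -noout -modulus -in /path/to/cert/mysite.com.cert | openssl md5
        openssl rsa -noout -modulus -in /path/to/cert/mysite.key | openssl md5

    Browsers that already trust the missing intermediate (because some other site once served it) will show the site as trusted while a clean browser flags it, which matches the "only certain browsers" symptom.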

  • Time drift in Cloud Server - need to manipulate GRUB config

    - by Aditya Advani
    We are hosting a VPS with a popular host and are experiencing a regular forward time drift of several minutes a day (approx. 7).

        Linux kernel: 2.6.18-164.11.1.el5 GNU/Linux
        Distro: CentOS release 5.4 (Final)

    We reached out to our hosting provider, and their support advised us:

        "This is a known issue with Cloud Servers. To fix this you will need to add one line to your grub config located at: /boot/grub/menu.lst
        The line you need to add is: noapic nolapic divider=10 nolapic_timer
        This should correct this issue. You will need to restart after this is added in."

    Because I am wary of manipulating GRUB - mostly, I'm terrified that our server may fail to restart - I ask you guys, the pro *nix admins: where exactly in this file does the recommended insertion below go?

        # line from 1&1 for time syncing issue (Case 5163)
        noapic nolapic divider=10 nolapic_timer

    Please specify where exactly, and whether the order of commands is or is not important. Why is the block below "title CentOS ..." indented? If someone could give me an overview of how this works, or point me to a resource that's easy to follow, that's what I'm looking for immediately: a light overview or basic understanding of what I'm doing. If GRUB and bootloaders are a deep dark treasure trove of kernel hacking or something, that's fine too - well-recommended in-depth resources are also very welcome.

    This is my current /boot/grub/menu.lst:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        #boot=/dev/sda
        #
        serial --unit=0 --speed=57600
        terminal --timeout=5 serial console
        timeout=5
        title CentOS (2.6.18-164.11.1.el5)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty
                initrd /boot/initrd-2.6.18-164.11.1.el5.img

    MOST IMPORTANT: I need to know where in the file above it is appropriate to paste the suggested line, so I can confidently restart my VPS after manipulating the GRUB config.
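
    For readers with the same question: those four tokens are kernel boot parameters, so they belong at the end of the existing kernel line inside the boot entry, not on a line of their own (GRUB legacy would not know what to do with a bare "noapic nolapic ..." line). The indentation under "title" is purely cosmetic - GRUB legacy ignores leading whitespace - and the order of the appended parameters does not matter. A sketch of the edited entry, keeping everything else in the file untouched:

        title CentOS (2.6.18-164.11.1.el5)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty noapic nolapic divider=10 nolapic_timer
                initrd /boot/initrd-2.6.18-164.11.1.el5.img

    The "# line from 1&1 ..." comment can be kept as a comment line above the title for documentation; lines starting with # are ignored. Backing up menu.lst before editing costs nothing and makes the change easy to revert from a rescue console.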

  • 4.00gb (3.25gb usable) in Windows 7 x64

    - by dotnetdev
    Hi, I have set up Windows 7 Ultimate x64 on my PC. I have 4 GB of RAM and my BIOS states the correct amount (4096 MB), but Windows (System Manager) says I have 4.00 GB (3.25 GB usable). This seems to be a popular issue, and I have looked for an integrated video card (integrated with my chipset) to disable, but haven't found anything. What else can be preventing me from seeing all 4 GB? When I had Vista 32-bit, it would say 3.25 GB of RAM, not 4.00 GB (3.25 GB usable).

    I have an x64 CPU, and when I bought my RAM I used a compatibility tool from Crucial (the memory vendor) to test how much memory my PC can support, and 4 GB was the answer (this was a Windows app, I think). The chipset is an Intel(R) G33/G31/P35/P31 Express Chipset PCI Express. In the BIOS I looked for an onboard (integrated) video card, but there was no such thing, only a couple of other onboard devices. There are also no "Resource Mappings" settings.

    FURTHER DETAILS:

        Chipset North Bridge: Intel Bearlake G33
        South Bridge: Intel 82801IR ICH9R
        Maximum Memory Amount: 8 GB
        Graphics Controller Type: Intel GMA 3100 (Enabled)

    I guess the first thing is: how do I disable the graphics controller?

    EDIT: This thread (http://forums.legitreviews.com/about23417.html) indicates the issue is with memory-mapped devices, but someone on that thread says that does not apply to x64. The rest of the comments point to a motherboard issue for the guy who started that thread. Thanks

  • Task Scheduler Crashing MMC

    - by Valrok
    I've been getting errors whenever I try to run the Task Scheduler for Windows 2008 R2. Each time I try to run it, the Task Scheduler crashes and reports the following:

        Problem signature:
        Problem Event Name:     CLR20r3
        Problem Signature 01:   mmc.exe
        Problem Signature 02:   6.1.7600.16385
        Problem Signature 03:   4a5bc808
        Problem Signature 04:   System.Windows.Forms
        Problem Signature 05:   2.0.0.0
        Problem Signature 06:   50c29e85
        Problem Signature 07:   151f
        Problem Signature 08:   18
        Problem Signature 09:   Exception
        OS Version:             6.1.7601.2.1.0.16.7
        Locale ID:              1033

    I've been looking online, but so far I keep finding mixed results on what the fix could be, and I was wondering if anyone here has ever run into this issue before. I read that it could be caused by Security Update for Microsoft Windows (KB2449742), and that uninstalling it would fix the problem, but I was not able to locate that update anywhere on the server. Here's the link if interested.

    Patch-wise, everything is up to date. After doing some research online I also tried running hotfix KB2688730 to see if that would work; however, the hotfix is not applicable to this computer. If anyone could provide some information on how to fix this and get the Task Scheduler running again, it would be extremely helpful!

  • Computer loses all installed programs and appears to return to an OS-only state

    - by Jake
    This is a story regarding 3 laptops of different brands and models. On separate occasions, I configured each of these Windows 7 / Vista computers with the necessary configuration and applications (which are supposedly the same), e.g. joining the office domain, the same Windows updates, Microsoft Office, etc. These machines were configured in our office in Singapore, and then they were taken to India for use.

    One day in India, when booting up a laptop, all went fine until it reached the login screen, where it was no longer possible to log in with domain credentials. Logging into the laptop's local admin account revealed that the machine had returned to an "OS-only state". All the configuration and applications were gone. The actual user profiles are still on the C: drive, so files can still be retrieved, but under Control Panel > Uninstall Programs it is evident that at least the registry is corrupted.

    The above scenario happened to the first 2 laptops. On the third, the system reports "Operating System Not Found" on boot.

    I cannot think of any reason except to suspect a power fluctuation issue. The question is: can a power issue create this behaviour? What else can cause this issue?

  • Windows 8 & Hyper-V Can't Bridge Wifi Connection

    - by xinunix
    So I have an odd issue that I can't quite figure out. I am running Windows 8 Enterprise on a Dell 6420 laptop with a Broadcom 802.11n wireless adapter. I am connected to a home router (Netgear WNDR3700) that is connected to the internet; it is a very simple home network setup.

    I am trying to stand up a few VMs in Hyper-V and want the VMs to be able to access the internet over my wireless connection. I have found numerous examples of how to set this up, using both external and internal virtual switches, but have not been able to get any of them to work on my machine.

    I have narrowed the issue down to the fact that my host machine always loses its internet connection when I bridge the wifi connection - both when it is bridged automatically by Windows when I set up an external virtual switch bound to the wifi adapter, and when I do it manually by creating an internal virtual switch, then right-clicking on it and the wifi network and selecting "Bridge Connections". In both cases, after the bridge is established, my host machine can no longer connect to the internet.

    I am not sure where to start troubleshooting this. After the bridge is set up, ipconfig shows all network devices on the machine as "Media Disconnected". I do know that the wireless adapter is connected to the router, because it shows the connection as active and at full strength.

    The only thing I can think of is that this machine also has the Cisco VPN client installed, which installs a Cisco virtual network adapter. Is it possible that this Cisco virtual adapter is causing issues when I try to bridge? I saw that some people had a similar issue with a VirtualBox virtual adapter when trying to share via Hyper-V. Any thoughts or suggestions on how to troubleshoot?

  • How to fix Browser Blue Screen of Death?

    - by WilliamKF
    I am running Windows XP SP3 with Firefox v3.6.2 and Internet Explorer, and both Firefox and IE are causing the Blue Screen of Death on certain web pages. If I run in Windows safe mode it does not occur, but running normally it seems my Firefox profile has gone bad, and certain web pages trigger the BSOD. For example, at present, visiting ebay.com in Firefox produces a BSOD, as does visiting http://www.google.com/ig?hl=en&source=iglk.

    IT removed my Firefox profile, and that seemed to fix the issue for a while. However, it has now started occurring again. I turned off all Firefox extensions and it still occurs. The BSOD says something like (from memory) DRIVER_IRQL_NOT_LESS_OR_EQUAL.

    I'd like to fix my system so this does not occur. The IT folks don't seem to be able to solve it, so I am trying to fix it on my own. Why would safe mode avoid the issue, and what does that tell us about the probable cause? I don't want to have to keep deleting my profile, so I'd like to find the cause of the corruption.

  • Determine the time difference between two linux servers

    - by Paul
    I am troubleshooting a network latency issue. It is probably a NIC or cabling problem, but while going through the process of figuring it out, I was looking at the timings of a ping packet leaving one network card and arriving at another server (both Linux). So I have tcpdump running on both, I issue a ping from one to the other and back again, and looking at the timing differences might shed light on where the latency is coming from.

    It is an academic exercise now, as I need to eliminate some more fundamental causes first, but I was curious how this could be achieved: given that ntpd is installed and running on both servers, how can I confirm the current time discrepancy between the two, to whatever level of accuracy is possible? Given that we are talking about latency on a local LAN, that is ideally a millisecond or so. NTP itself is accurate to a couple of ms under good conditions, and as both servers are in the same environment they should (presumably) achieve a similar level of accuracy, and so should have a time discrepancy between them of only a few ms - but how can I check this?
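
    Since ntpd is already running on both machines, its own statistics give a first answer (standard NTP tools; run on each server and compare):

        # 'offset' is this host's estimated difference from each NTP source,
        # in milliseconds; 'jitter' indicates how stable that estimate is
        ntpq -p

        # One-shot query against a common reference without adjusting the clock,
        # useful for measuring both hosts against the same third machine
        ntpdate -q pool.ntp.org

    Comparing each server's offset against a common, nearby reference bounds the discrepancy between the two at roughly the sum of the two offsets, which is usually within the few-milliseconds target on a LAN.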

  • Windows 2008 R2 forgets static IP configuration after reboot

    - by Andrew
    I've got an issue where a Windows 2008 R2 Standard (SP1) server loses its static IP configuration upon reboot. It's a sysprepped image. The following steps reproduce the problem:

        1. Using the SAC, set the IP using 'i'
        2. Use the WMI Win32 EnableStatic() method to set an IP (and then SetGateways()) through PowerShell
        3. Reboot

    The machine boots up with the following configuration:

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix . :
           Link-local IPv6 Address . . . . . : [...]
           Autoconfiguration IPv4 Address. . : 169.254.152.31 (incorrect)
           Subnet Mask . . . . . . . . . . . : 255.255.0.0 (incorrect, was set to /24)
           Default Gateway . . . . . . . . . : 1.1.1.1 (correct)

    Occasionally, the gateway is also incorrect (0.0.0.0). The images have a script that runs 'netsh int ip reset' after sysprep finishes (before the reboot), so it appears that does not solve the issue (the problem also happens without this step).

    After the reboot, using 'i' on the SAC resolves the issue permanently. But I'd like to know the root cause, as having to run 'i' again isn't ideal.
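
    For reference, a sketch of the WMI calls described in step 2 (the adapter filter and the static address are illustrative; the gateway value is the one from the post). Both methods return 0 on success, and a non-zero return code here would be the first thing to capture when reproducing the problem:

        # Pick the IP-enabled adapter (assumes a single NIC, as on this server)
        $nic = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = TRUE" |
            Select-Object -First 1

        # Static address and /24 mask, then the gateway with metric 1
        ($nic.EnableStatic("10.0.0.5", "255.255.255.0")).ReturnValue
        ($nic.SetGateways("1.1.1.1", 1)).ReturnValue

    The 169.254.x.x autoconfiguration address after reboot means the stack fell back to APIPA, i.e. the static setting never persisted, so checking the EnableStatic return value (and whether the same adapter index survives the sysprep reboot) is a reasonable first diagnostic.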
