Search Results

Search found 34813 results on 1393 pages for 'im fine'.

Page 6/1393

  • Windows 7 Black Screen On Boot, Separate Bootable VHD Works Fine

    - by David Osborn
    I have a Windows 7 x64 install with a bootable VHD (also Windows 7 x64). I was having problems getting my home server to do backups (VSS erred), so I ran Check Disk and used a tool from MS (cleanc2r.exe) to remove an empty Q: drive from the VHD that I believe was left over from installing the Office 2010 beta. (All of this was done on the bootable VHD, not the main install.) Now I can't boot into the main install: it gets past the Starting Windows screen and then goes black. I can still boot into the bootable VHD and everything works fine from there. I have tried booting the main install in Safe Mode, Safe Mode with Networking, and Safe Mode with Command Prompt, and it has the same issue. I ran chkdsk /r on the main install; after doing all the work it reported correcting some free space that was marked as allocated, and also that it was unable to make an entry in the event log. I tried the Startup Repair utility and it found no problems. I don't see the option to restore to the last known good configuration, so I couldn't try that. I don't recall installing anything new on the main install or hooking up any new hardware recently.

    Read the article

  • RemoteApp shows no certificate available but RD Session Host finds it fine

    - by Scott Chamberlain
    I am trying to set up RemoteApp for an internal domain. I have a root CA that is trusted by all of the end computers, and that cert has signed a wildcard cert I am trying to use for the server. I added the pfx of the wildcard cert to the local machine personal store. From there I can use it fine for signing the RD Session Host session. However, when I try to set up the signature for RemoteApp the certificate does not show up. What do I need to do to get my certificate to be available for use? UPDATE: The certificate was generated through the following commands:
        makecert -pe -n "CN=*.vw.local" -a sha1 -sky signature -ic VetWebCA.cer -iv VetWebCA.pvk -sv VetWebComputerWildcard.pvk VetWebComputerWildcard.cer
        pvk2pfx -pvk VetWebComputerWildcard.pvk -spc VetWebComputerWildcard.cer -pfx VetWebComputerWildcard.pfx
    The resulting pfx was added to the machine local store via MMC. Oddly, in PowerShell, if I add the -CodeSigningCert flag the wildcard certificate is excluded from the Get-ChildItem search results in my Cert:\LocalMachine\My path, but if I don't include the flag it is there.
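    One possibility worth ruling out: the makecert command above does not request any Enhanced Key Usage, so anything that filters certificates on EKU (as Get-ChildItem -CodeSigningCert does) may skip it. A quick, non-authoritative way to see what the issued cert actually carries, assuming openssl is available (add -inform DER if the .cer is DER-encoded, which is makecert's usual output):
        # dump the certificate and look for the Key Usage / Extended Key Usage extensions
        openssl x509 -in VetWebComputerWildcard.cer -noout -text | \
            grep -A1 -E "Key Usage|Extended Key Usage"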

    Read the article

  • GeekTool logs "command not found" for commands that work fine in Terminal

    - by Kevin Dowling
    I'm trying to run simple commands so I can have GeekTool output the date and time to my desktop. It should be simple enough, but it never actually outputs anything into the boxes. The console log shows it's getting spammed by GeekTool saying 'command not found', though the same command (e.g. date +"%H:%M") works fine in Terminal. All I want to achieve is a clock on my desktop showing the time and date, positioned to fit my wallpaper. I've tried changing the format of the commands, and using both the built-in editor window and the command line box on the Properties tab. I also had a look at the permissions in '/' (because GeekTool runs commands from there) and nothing unusual came up. None of this solved the issue. When I use a command that simply echoes a string it works (e.g. echo "hello" displays the word hello). Does anyone have experience with GeekTool, and understand why it won't run basic commands? As I say, it's spamming my console with 'command not found' despite the commands working in Terminal. Running OS X 10.6.6 on a MacBook Pro (mid-2010).
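    Since echo (a shell builtin) works but date does not, one guess is that GeekTool launches the command without the PATH a Terminal login session has. A small sketch of two things worth trying in the geeklet's command box, assuming date lives in its standard location:
        # call the binary by its absolute path so no PATH lookup is needed
        /bin/date +"%H:%M"

        # or set PATH explicitly before running the command
        PATH=/bin:/usr/bin; date +"%H:%M"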

    Read the article

  • Get Illegal Instruction error when booting Linux in VirtualBox, works fine when booted directly

    - by rkjnsn
    I have a computer on which I am dual-booting Windows 7 and Gentoo Linux (both 64-bit). I want to be able to load my Linux installation in a VM while I am booted into Windows, so I installed VirtualBox and followed the instructions for creating a raw disk VMDK. When I start the VM, Linux starts booting but then fails with the following error while unlocking my root partition:
        truecrypt[441] trap invalid opcode ip:373615538e0 sp:3dd0e0dfb60 error:0 in libpixman-1.so.0[373614d6000+8d000]
    Everything works fine when I boot into Linux directly. What could cause an illegal instruction to be hit in libpixman only when booting in VirtualBox? Update: as a troubleshooting step, I recompiled pixman without "-march", and I no longer get an illegal-instruction error in that library (the boot fails in the same spot with the same error in a different library, however). How can I determine the specific opcode that isn't working in VirtualBox, so I can disable it in my CFLAGS without having to disable all CPU-specific optimizations? I am still confused as to why any user-mode instruction would fail to work in a VM. Is this a known limitation? My CPU is an Intel Core i7 3720QM, and I have hardware virtualization support enabled.
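    One way to pin down the exact instruction, sketched here with the addresses from the trap line above (binutils assumed installed; the library path is a guess for a 64-bit Gentoo layout): subtract the library's load address from the faulting instruction pointer, disassemble around that offset, then compare against the CPU feature flags the guest actually exposes.
        # offset into libpixman = faulting ip - library load address
        printf '%x\n' $(( 0x373615538e0 - 0x373614d6000 ))   # -> 7d8e0

        # disassemble the shared object and look at the instruction at that offset
        objdump -d /usr/lib64/libpixman-1.so.0 | grep -m1 -A3 '7d8e0:'

        # instruction-set extensions visible inside the VM (compare with the host)
        grep -m1 '^flags' /proc/cpuinfo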

    Read the article

  • The Wi-Fi is working fine but there is no internet connection in Windows 8 [migrated]

    - by Ali
    I'm currently having a problem with my MSI GT70 laptop. It is running Windows 8, and yesterday it requested a restart to update. After the restart and the update I tried to surf the net through Google Chrome; the Wi-Fi connection is perfect, but the page I tried to access would not load at all, and after a while it reported a failure to load the page. I disabled and re-enabled the Wi-Fi adapter through Device Manager: still no internet connection. I uninstalled and reinstalled the drivers: still no internet. I updated the Wi-Fi driver: still no internet. I even went into the Wi-Fi configuration and tried changing the DNS and then setting it back to automatic: still no luck. I'm really lost and don't know what to do; I don't want to go digging around in the registry and screw things up. I really appreciate any help in this matter. Thanks in advance. Note: I tried to access many pages with no luck, and I also tried Firefox, Opera, and even IE. The internet works fine on my tablet and cellphone; it is only the laptop.

    Read the article

  • DNS to \\Server\ wrong - \\Server.company.local\ works fine

    - by JimmyClif
    I had a little network glitch, and since then one of my servers resolves incorrectly from some workstations when typing \\server\. Example: on WorkstationA I go to Explorer, type \\server\, and it takes me to our copier at 192.168.2.101. \\server.company.local\ gets me to the right place at 192.168.2.252. Pinging server returns 192.168.2.252, the correct result, and pinging server.company.com gives the same correct result. nslookup also shows the correct result for both, and a reverse lookup by IP is correct as well. I flushed the DNS cache on the workstation and the error still occurs; a reboot gives the same result. At that point I gave up and remapped the shares to \\server.company.local\share just to get the user working again... The DNS server has correct entries for that server, and I can access the server via \\server\ on the DNS server itself, so all looks fine there. Eventually the workstation figures it out by itself and \\server\ works again, but my life would be less stressful if I had a clue what happened or how to fix it myself. Thanks for your time looking and answering.

    Read the article

  • Application runs fine manually but fails as a scheduled task

    - by user42540
    I wasn't sure if this should go here or on Stack Overflow. I have an application that loads some files from a network share (the input folder), extracts certain data from them, and saves new files (zipped with SharpZipLib) on a different network share (the output folder). This application runs fine when you open it directly, but when it is run as a scheduled task it fails in numerous places. The application is scheduled on a Windows 2003 server. Let me say right off the bat that the scheduled task is set to use the same login account that I am currently logged in with, so it's not failing because it's using the LocalSystem account; something else is going on here. Originally the application assigned a drive letter to the input folder using WNetGetConnectionA(). I don't remember why this was done; someone else on our team did it and she's gone now. I think there was some issue with using the WinZip command line with a UNC path. I switched from the WinZip command line utility to SharpZipLib because there were other issues with the WinZip command line. Anyway, the application failed when trying to assign a drive letter with the error "connection already established." That wasn't true, and even after trying WNetCancelConnection() it still didn't work. Then I decided to just map the drive manually on the server, but then when the app calls Directory.Exists(inputFolderPath) it returns false, even though the folder does exist. So, for whatever reason, I cannot read this directory from within the application, yet I can manually navigate to the folder in Windows Explorer and open files. The app's log file shows that the user executing it on the schedule is the user I expect, not LocalSystem. Any ideas?

    Read the article

  • After a few days of the server running fine with nginx, it starts throwing 499 and 502

    - by Abhay Kumar
    Nginx starts throwing 499 and 502 after running fine for a few days; the website is a Rails app using thin as the web server. Restarting Nginx does not seem to help. Below is the Nginx config under sites-enabled:
        upstream domain1 {
            least_conn;
            server 127.0.0.1:3009;
            server 127.0.0.1:3010;
            server 127.0.0.1:3011;
        }
        server {
            listen 80; # default_server;
            server_name xyz.com *.xyz.com;
            client_max_body_size 5M;
            access_log /home/ubuntu/www/xyz/current/log/access.log;
            root /home/ubuntu/www/xyz/current/public/;
            index index.html;
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_read_timeout 150;
                if (!-f $request_filename) {
                    proxy_pass http://domain1;
                    break;
                }
            }
        }
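    A 499 means the client gave up waiting and a 502 means the upstream returned a bad response, so one thing worth checking when this starts happening is whether the thin workers themselves are still answering. A rough sketch, assuming the ports in the config above and that curl is installed (the error-log path is the default and may differ):
        # hit each thin worker directly, bypassing nginx, and time the response
        for port in 3009 3010 3011; do
            echo "== port $port =="
            curl -s -o /dev/null -w 'HTTP %{http_code} in %{time_total}s\n' \
                 --max-time 10 "http://127.0.0.1:$port/"
        done

        # watch nginx's error log for upstream timeouts while reproducing
        tail -f /var/log/nginx/error.log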

    Read the article

  • User's failed-login count does not reset after successful login at the console but works fine with SSH

    - by jnbbender
    The title says it all. I have the number of unsuccessful login attempts set to three. If I purposely fail logging in twice and then SSH into the box successfully on the third try, my count drops back to zero, which is exactly what should happen. But at the console I get failed login attempts counted EVEN for my successful logins. I am using RHEL 5.6 and no, I am not able to upgrade. Here is my system-auth file:
        auth        required      pam_env.so
        auth        required      pam_tally.so onerr=fail deny=3 per_user
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        required      pam_deny.so
        account     required      pam_unix.so
        account     required      pam_tally.so
        account     sufficient    pam_succeed_if.so uid < 500 quiet
        account     required      pam_permit.so
        password    requisite     pam_cracklib.so try_first_pass retry=3
        password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
        password    required      pam_deny.so
        session     optional      pam_keyinit.so revoke
        session     required      pam_limits.so
        session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
        session     required      pam_unix.so
    I have tried adding reset after, and in place of, per_user on the auth required pam_tally.so line. Nothing seems to work, and I don't know why SSH is working just fine. Any ideas?
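    To see exactly when the counter moves, it can help to watch the tally directly while logging in at the console and over SSH. A rough sketch using the pam_tally helper that typically ships alongside the module on RHEL 5 (run as root; the account name below is a placeholder):
        # show the current failure count for the account
        pam_tally --user someuser

        # ...log in at the console, then check again to see whether it was reset
        pam_tally --user someuser

        # reset the counter by hand if it has climbed
        pam_tally --user someuser --reset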

    Read the article

  • MySQL timeout only from office network to remote server, but other connections are fine

    - by Adam
    I've been developing these apps just fine on a local machine, as has my co-worker. We recently moved our work desks, so we're now on a different floor of the building, but we are still connected to the same single router. Since then, connecting to this one server appears to time out more often than not; occasionally I get through, and then the loading is instantaneous. These are the connections we tested:
        1. my computer -> office network -> php pdo -> mysql server A - timeout
        2. my computer -> office network -> mysql cli -> mysql server A - timeout
        3. my computer -> office network -> mysql cli -> mysql server A - timeout
        4. another pc -> office network -> mysql cli -> mysql server A - timeout
        5. my computer -> mobile network -> mysql cli -> mysql server A - ok
        6. my computer -> office network -> ssh server A -> mysql server A - ok
        7. my computer -> office network -> ssh server B -> mysql server A - ok
        8. server B web app -> php pdo -> mysql server A - ok
        9. my computer -> office network -> php pdo -> mysql server B - ok
        10. my computer -> office network -> mysql cli -> mysql server B - ok
    This has really stumped me.
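    Since the same clients work over the mobile network but not the office network, one way to narrow it down from the office side is to separate plain TCP reachability from the MySQL handshake, and to check for path-MTU trouble. A rough sketch; the hostname and user are placeholders, and flag support varies slightly between nc flavors and non-Linux ping:
        # does a bare TCP connection to the MySQL port open promptly?
        nc -vz -w 5 server-a.example.com 3306

        # does the MySQL handshake itself complete within a short timeout?
        mysql --connect-timeout=5 -h server-a.example.com -u someuser -p -e 'SELECT 1;'

        # full-size packet with fragmentation prohibited (Linux iputils syntax; 1472 + 28 = 1500)
        ping -M do -s 1472 server-a.example.com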

    Read the article

  • New PC does not work with existing router, but works fine when connected directly to the cable modem

    - by user34786
    I bought a new desktop PC (an eMachines ET1331G-03W from Walmart) with Windows 7 installed, but I cannot access the internet by connecting it to my existing wireless router (a Linksys BEFW11S4) with a wired cable, even though all my other desktops and laptops have no problem connecting to the same router. However, the new PC works fine and is able to reach the internet if I bypass the router and hook it up directly to the cable modem. On the new PC, when connected to the router, typing ipconfig gives the information below, and the IP address looks wrong to me:
        Autoconfiguration IPv4 Address: 169.254.71.140
        Subnet Mask: 255.255.0.0
        Default Gateway: (empty)
        NetBIOS over Tcpip: Enabled
    Typing ipconfig on all the other desktops and laptops gives values like the following, which look good to me:
        Connection-specific DNS Suffix . :
        IP Address. . . . . . . . . . . . : 192.168.1.140
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . : 192.168.1.1
    The wireless router is on 192.168.1.1, so I do not know why the new desktop got the 169.254.71.140 IP; it should have something like 192.168.1.xxx, and it is configured to get an IP automatically via DHCP. I have tried switching cables, powering off the cable modem and router, and rebooting the new PC many times, with no luck. So I believe this is an issue with either the router or the new PC's configuration. Can someone help me figure out the issue?

    Read the article

  • I'm unable to install ADT on Eclipse 3.5

    - by Jrubins
    I'm having a problem installing the ADT plugin on Eclipse. The error prompt was:
        An error occurred while collecting items to be installed
        session context was: (profile=SDKProfile, phase=org.eclipse.equinox.internal.provisional.p2.engine.phases.Collect, operand=, action=).
        Unable to read repository at http://download.eclipse.org/releases/galileo/201002260900/aggregate/plugins/org.eclipse.emf.common_2.5.0.v200906151043.jar.pack.gz.
        . . .
        The server download.eclipse.org failed to respond
    Is there any other location where I can download ADT for Eclipse? Thanks

    Read the article

  • C#: I'm getting an error

    - by vj4u
    double dval = 1;
        for (int i = 0; i < Cols; i++)
        {
            k = 0;
            dval = 1;
            for (int j = Cols - 1; j >= 0; j--)
            {
                colIndex = (i + j) % 3;
                val *= dval[colIndex, k];
                k++;
            }
            det -= dval;
        }
    I'm getting the error "Cannot apply indexing with [] to an expression of type 'double'" for dval. Please help; it's urgent.

    Read the article

  • Ubuntu 9.10 RSA authentication: ssh fails, filezilla runs fine

    - by MariusPontmercy
    This is quite a mystery to me. I usually use passwordless RSA authentication to log into my remote *nix servers with ssh and sftp, and I have never had any problem until now. I cannot connect to an Ubuntu 9.10 machine:
        user@myclient$ ssh -i .ssh/Ganymede_key [email protected]
        [...]
        debug1: Host 'ganymede.server.com' is known and matches the RSA host key.
        debug1: Found key in /home/user/.ssh/known_hosts:14
        debug2: bits set: 494/1024
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: .ssh/Ganymede_key (0xb96a0ef8)
        debug2: key: .ssh/Ganymede_key ((nil))
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering public key: .ssh/Ganymede_key
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Trying private key: .ssh/Ganymede_key
        debug1: read PEM private key done: type RSA
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: keyboard-interactive
        debug2: userauth_kbdint
        debug2: we sent a keyboard-interactive packet, wait for reply
        debug2: input_userauth_info_req
        debug2: input_userauth_info_req: num_prompts 1
    Then it falls back to password authentication. If I disable password authentication on the remote machine, my connection attempt simply fails with "Permission denied (publickey)." The same thing happens with sftp from the command line. The "funny" thing is that the exact same RSA key works like a charm in a FileZilla sftp session instead:
        12:08:00 Trace: Offered public key from "/home/user/.filezilla/keys/Ganymede_key"
        12:08:00 Trace: Offer of public key accepted, trying to authenticate using it.
        12:08:01 Trace: Access granted
        12:08:01 Trace: Opened channel for session
        12:08:01 Trace: Started a shell/command
        12:08:01 Status: Connected to ganymede.server.com
        12:08:02 Trace: CSftpControlSocket::ConnectParseResponse()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Retrieving directory listing...
        12:08:02 Trace: CSftpControlSocket::SendNextCommand()
        12:08:02 Trace: CSftpControlSocket::ChangeDirSend()
        12:08:02 Command: pwd
        12:08:02 Response: Current directory is: "/root"
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Trace: CSftpControlSocket::ParseSubcommandResult(0)
        12:08:02 Trace: CSftpControlSocket::ListSubcommandResult()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Directory listing successful
    Any thoughts? M
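    The FileZilla trace lands in /root, so the two clients may not even be authenticating as the same account, and sshd's own log will say why the key offered by OpenSSH was refused. A rough sketch of the usual checks (log path for Ubuntu; run the permission checks as the affected user on the server):
        # on the server: watch sshd's view of the failed attempt
        sudo tail -f /var/log/auth.log

        # on the server: the usual permission requirements for publickey auth
        ls -ld ~ ~/.ssh ~/.ssh/authorized_keys
        chmod go-w ~ && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys

        # on the client: print the public half of the key being offered,
        # then compare it by eye with the server's authorized_keys entry
        ssh-keygen -y -f ~/.ssh/Ganymede_key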

    Read the article

  • SSL certificates work fine from the command line but fail in a script

    - by jrallison
    I'm trying to set up email notifications for my continuous integration server. I have a script which uses nail to send the email when the build works:
        #!/bin/bash
        echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected]
    When I run this from the command line with sh build-worked, it works and I receive the email. However, when I start the continuous integration server, which executes the same script, I get the following error:
        nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail)
        nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail)
        Error with certificate at depth: 0
         issuer = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected]
         subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
         err 20: unable to get local issuer certificate
        Continue (y/n)? could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed . . . message not sent.
    I must be missing some configuration. Any ideas?
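    The library warnings suggest that, when launched by the CI server, nail picks up the Bitnami copies of libssl/libcrypto and then cannot find a CA bundle to verify smtp.gmail.com. Two things worth trying in the script, sketched under the assumptions that nail here is Heirloom mailx (which accepts -S variable=value and the ssl-ca-file / ssl-verify variables), that the CA bundle lives at the usual Debian/Ubuntu path, and with a placeholder recipient:
        # keep Bitnami's bundled OpenSSL out of the library path for this one command,
        # and point nail at the system CA bundle (path assumed; adjust for your distro)
        echo "Build Worked!" | env LD_LIBRARY_PATH= \
            nail -A myisp -S ssl-ca-file=/etc/ssl/certs/ca-certificates.crt \
                 -s 'Build Success' [email protected]

        # or, as a blunt workaround, skip certificate verification (weaker security)
        echo "Build Worked!" | nail -A myisp -S ssl-verify=ignore -s 'Build Success' [email protected]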

    Read the article

  • SQL Server 2008: Can't connect to remote server via management studio but can telnet in fine

    - by WarpKid
    Hi, I am in the process of trying to configure SQL Server 2008 to accept remote connections. I have been through all the documentation I can find and yet when I attempt to connect through management studio I get an error stating that the server could not be found. Interestingly I can connect through telnet to the remote server via the port that sql server is listening on. In the SQL Server logs I can see the connection attempt. So SQL Server is up and running and listening on the correct port - no firewall blocking it. It would appear that by default SQL Server is listening on port 50314 by default but management studio attempts to connect on port 1433.Weird. Server Management Studio = no dice. Anyone got any ideas? Server is set to allow remote connections - TCP IP is enabled, firewall is off. Thanks UPDATE FOR TO CLEAR THINGS UP A BIT We are seeing the connection attempt when we telnet in on port 50314 in the sql server logs. When we login through management studio we see it attempting connection on port 1433. There is no sign of this connection attempt in the logs.

    Read the article

  • Word documents very slow to open over network, but fine when opened locally - on one machine

    - by Craig H
    Windows XP, Word 2003, patched. The issue is happening with several Word documents stored on a network drive. The Word documents are clearly a bit wonky (i.e. one is 675k, but if you copy everything except the last paragraph marker into a new document, the new document is only 30k), but that's only part of the problem. On one weird machine, and one machine only, it takes ~20 seconds to open these Word documents from the network drive. Copy the file to C: on that weird machine? Opens immediately. Go to other machines (that are very similar: same patch level, etc.) and open the same document from the network? Opens immediately. Delete normal.dot? 20 seconds. Log in with a different user on the weird machine? 20 seconds. Plug the weird machine into a different network port? 20 seconds. So the problem appears to be hardware related (i.e. a wonky internal NIC) or related to a setting that is not profile specific. Any ideas? "Scrubbing" all the documents isn't ideal for several reasons. This is driving me nuts because I swear I ran into this before many years ago and eventually figured it out, but I appear to have lost my notes.

    Read the article

  • Flash plugin locks up Firefox, Chrome and Safari behind a corporate proxy, IE6 works fine

    - by Shevek
    At work I am forced by corporate policy to use IE6. Obviously this is not so good, so I use Firefox for most of my browsing. However, there is a problem once I have installed the Flash plug-in: Firefox locks up when trying to load Flash media. Looking at the status bar at the time of the lock-up, it appears this happens when the browser tries to get cross-domain data. The Flash ActiveX plug-in in IE does not suffer this issue. I have tried a brand new Firefox profile with Flash as the only plug-in, and it still locks up. We have two different proxy servers and both exhibit the same problem. I have also tried Chrome and Safari, and both lock up with the plug-in installed. So, has anyone else had this problem and solved it? Or is there any way to disable cross-domain data access in the Flash plug-in? Or is there any way to disable the "This site needs an additional plug-in" ribbon which appears when the plug-in is not installed? Many thanks!

    Read the article

  • Can't resolve host (A record) but FQDN is fine

    - by user1431356
    I am getting inconsistent name resolution locally with DNS. I have three Windows Server 2012 Standard servers and some weirdness on one of them, which is a standard install with the IIS role added. TEST01 is a dev server at 192.119.1.220 with a host header of TEST01. DATA01 runs internal DNS on .240, and all servers and clients point there for DNS. There is a forwarder in DNS to 192.119.1.1 (the router) with the ISP's external DNS servers mapped. If I ping TEST01 from a non-AD machine, I get "Could not find host TEST01". If I ping TEST01 from a domain machine (another server), it resolves the IP but does not respond. If I ping TEST01.AD.local, DNS resolves the IP but it times out. I can access IIS by entering http://test.WWWDOMAIN.com, and I can RDP to it, just not ping. Any idea where I should start?
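    Because ping mixes name resolution with ICMP reachability, it can help to query the internal DNS server directly and test reachability by address separately. A small sketch, assuming the .240 mentioned above is 192.119.1.240:
        # ask the internal DNS server (DATA01) for the short name and the FQDN
        nslookup TEST01 192.119.1.240
        nslookup TEST01.AD.local 192.119.1.240

        # then test reachability by address alone, taking DNS out of the picture
        ping 192.119.1.220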

    Read the article

  • SSH closing by itself - root works fine

    - by Antti
    I'm trying to connect to a server, but if I use any other user than root the connection closes itself after a successful login:
        XXXXXXX:~ user$ ssh -v [email protected]
        OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to XXXXXXX.XXXXXX.XXX [xxx.xxx.xxx.xxx] port 22.
        debug1: Connection established.
        debug1: identity file /Users/user/.ssh/identity type -1
        debug1: identity file /Users/user/.ssh/id_rsa type -1
        debug1: identity file /Users/user/.ssh/id_dsa type 2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
        debug1: match: OpenSSH_4.3 pat OpenSSH_4*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'XXXXXXX.XXXXXX.XXX' is known and matches the RSA host key.
        debug1: Found key in /Users/user/.ssh/known_hosts:12
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Next authentication method: publickey
        debug1: Offering public key: /Users/user/.ssh/woo_openssh
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Offering public key: /Users/user/.ssh/sidlee.dsa
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Trying private key: /Users/user/.ssh/identity
        debug1: Trying private key: /Users/user/.ssh/id_rsa
        debug1: Offering public key: /Users/user/.ssh/id_dsa
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Next authentication method: password
        [email protected]'s password:
        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug1: Entering interactive session.
        Last login: Mon Mar 29 01:41:51 2010 from 193.67.179.2
        debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
        debug1: channel 0: free: client-session, nchannels 1
        Connection to XXXXXXX.XXXXXX.XXX closed.
        Transferred: sent 2976, received 2136 bytes, in 0.5 seconds
        Bytes per second: sent 5892.2, received 4229.1
        debug1: Exit status 1
    If I log in as root the exact same way, it works as expected. I've added the users I want to log in with to a group (sshusers) and added that group to /etc/sshd_config:
        AllowGroups sshusers
    I'm not sure what to try next, as I don't get a clear error anywhere. I would like to enable specific accounts to log in so that I can disable root. This is a GridServer/Media Temple (CentOS) machine.
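    Authentication succeeds and the session then exits with status 1, which usually points at something running after login on the server (the login shell, a startup file, or a forced command) rather than at sshd's access control. A rough sketch of things to check; the user name is a placeholder and the log path is the usual CentOS location:
        # on the server: does the account have a real, executable login shell?
        getent passwd someuser

        # on the server: a global /etc/nologin blocks all non-root logins
        ls -l /etc/nologin

        # on the server: watch sshd and PAM messages while the user connects
        tail -f /var/log/secure

        # from the client: skip the interactive shell entirely; if this prints "ok",
        # the user's shell startup files (.bashrc / .bash_profile) are the suspects
        ssh someuser@server /bin/echo ok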

    Read the article

  • Mac Mini's internet very slow, every other device fine (PC, iPhone, Xbox 360)

    - by alex
    I hadn't used my Mac Mini for about 5 days (though it was left on). I can connect and get great download/upload speeds on my PC, Xbox 360, iPhone, and my parents' laptop. However, the Mac Mini is very slow: OS X's Mail.app downloads mail at 0.4 kbps and then drops to 0, Skype file transfers do the same, and browsing the net is a terrible experience, with basic pages taking 30 seconds or more to load. All of my devices connect wirelessly to a Netgear router/modem. I have tried giving the Mac Mini a manual IP, renewing the DHCP lease, and flushing DNS in Terminal. I have also rebooted the router/modem twice and the Mac Mini twice. Do you know what could be causing this? Thanks

    Read the article

  • NAS device claims drive in a RAID is degraded but S.M.A.R.T. says it is fine

    - by Nathan Villaescusa
    I have a Synology DS213 with two 600 GB drives in RAID 1. Last night the device reported that my second drive had become degraded and that I should replace it, but when I ran an extended S.M.A.R.T. test the results said the drive is okay. How can I confirm that the drive is actually bad? Is there any chance that the drive marked as degraded is the good one, and that it is actually the other drive that is bad?
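    Underneath DSM the RAID is ordinary Linux md, so with SSH enabled you can read both the array's view and each drive's raw SMART attributes instead of relying on the pass/fail summary. A rough sketch; the md and disk device names are assumptions and may differ on your unit:
        # which array members does md think are missing or faulty?
        cat /proc/mdstat
        mdadm --detail /dev/md2

        # raw SMART attributes per disk: look at Reallocated_Sector_Ct,
        # Current_Pending_Sector and the CRC error count, not just the verdict
        smartctl -a -d ata /dev/sda
        smartctl -a -d ata /dev/sdb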

    Read the article

  • Win7 to Win7 Remote Desktop not working, XP to 7 working fine

    - by vlad b.
    Hello, I have a small home network, and recently I tried to enable Remote Desktop for one of the PCs. I have a mix of Windows 7, Windows Vista, and XP running alongside Ubuntu, CentOS, and others (some virtual, some real). I have a few Windows 7 PCs that can be reached over Remote Desktop from inside and outside the network (port redirects on the routers, etc.), and some XP ones. The trouble is that when I tried to do the same thing for a Windows 7 laptop, I discovered I can't connect to it from another Windows 7 PC inside the home network. To sum it up:
        Working: XP -- Win7
        Not working: Win7 -- Win7
    What I have tried:
        - disabling and re-enabling Remote Desktop (My Computer - Remote settings)
        - removing and re-adding users in the remote settings window
        - adding a new user to the machine, both an administrator and a 'normal' user
        - checking the firewall settings on the machine and allowing Remote Desktop for both 'home/work' and 'public' networks
    Any tips on what I should do next? It displays '... secure connection', and after that a window with 'Your security credentials did not work' and lets me try again with another user/password.

    Read the article

  • Batch copy gives errors, xcopy works fine

    - by ndm13
    I am writing a general file backup program. It searches the drive for files matching a set of types and then writes them to a folder on the desktop. I wrote it using xcopy on Windows XP, but upon learning that xcopy was deprecated in favor of robocopy in Vista and newer, and still wanting to maintain compatibility, I decided to switch to the non-deprecated copy. This is where the problems begin. I'm trying to fix the copy routine; I thought I had everything sorted out, but it doesn't copy anything, and my output is zero files copied for every iteration.
    Original code using xcopy:
        for /r %%a in (*.bmp *.dds *.gif *.jpg *.jpeg *.png *.psd *.pspimage *.tga *.thm *.tif *.tiff) do (
            echo f | xcopy "%%a" "%HOMEDRIVE%%HOMEPATH%\Desktop\LDR\Images\Bitmap\%%~nxa" /q /y /g /c
        )
    Revised (broken) code using copy:
        for /r %%a in (*.bmp *.dds *.gif *.jpg *.jpeg *.png *.psd *.pspimage *.tga *.thm *.tif *.tiff) do (
            copy "%%a" "%HOMEDRIVE%%HOMEPATH%\Desktop\LDR\Images\Bitmap\%%~nxa" /d /y /z
        )
    Output:
        The system cannot find the path specified.
        0 files copied.
    I know it seems everyone uses either xcopy or robocopy, but can anyone help with copy? Note: I'm using batch to keep it very lightweight and command-line accessible.

    Read the article
