Search Results

Search found 2264 results on 91 pages for 'odd rationale'.

Page 63 of 91

  • VMware Workstation, Win7 host, Ubuntu guests with NAT + Host-only networks but they cannot connect to the Internet

    - by Ikon
    I have a Win7 host machine running VMware Workstation with 3 Ubuntu guests installed. All 3 guests have a NAT network - to access the internet without asking the router for a local address - and a Host-only network - to connect all Ubuntu guests and the host in a private network for internal communication, without touching the router. When I try to make any of the Ubuntu guests fetch data from the internet - assuming they would figure out that the NAT-ed interface can reach the requested data - they fail and report that there is no route to my query. If I disconnect the second interface (the Host-only network) on the Ubuntu guests and restart networking, they find the route to the internet again. Oddly, during installation the guests asked which of the two interfaces - NAT or Host-only - should be used to fetch updates, and they managed to get the updates; not so after the installation finished and rebooted. I have checked in the Virtual Network Editor that the NAT interface uses my real network card to access the net, so there should be no problem there. I do not want to use the router's DHCP service to give the Ubuntu guests an address, and I also don't want the guests to be accessible from the local network directly, but only by the host - that's what the Host-only network is for. Any suggestions?

    Edit: 192.168.189.0 is the NAT network and 192.168.7.0 is the Host-only network.

        $ route -n
        Kernel IP routing table
        Destination     Gateway        Genmask         Flags Metric Ref  Use Iface
        192.168.7.0     0.0.0.0        255.255.255.0   U     0      0      0 eth1
        192.168.189.0   0.0.0.0        255.255.255.0   U     0      0      0 eth0
        0.0.0.0         192.168.189.2  0.0.0.0         UG    100    0      0 eth0
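
    A minimal sketch of one common fix, assuming Debian-style ifupdown networking in the guests (interface names and subnets are from the question): let only the NAT interface supply a default route, and give the host-only interface a static address with no gateway.

        # /etc/network/interfaces (guest) -- hypothetical sketch
        auto eth0
        iface eth0 inet dhcp            # NAT network, 192.168.189.0/24; supplies the default route

        auto eth1
        iface eth1 inet static          # host-only network; no gateway on purpose
            address 192.168.7.10
            netmask 255.255.255.0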

    Read the article

  • ping incorrectly pinging 127.0.0.1

    - by AlexW
    I've got an odd DNS issue. I'm running a dual IPv4/IPv6 environment on Linux. Pinging some sites results in ping pinging 127.0.0.1, e.g.:

        #> ping authserver.mojang.com
        PING authserver.mojang.com (127.0.0.1) 56(84) bytes of data.
        64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.045 ms
        64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=2 ttl=64 time=0.043 ms
        64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=3 ttl=64 time=0.058 ms

        --- authserver.mojang.com ping statistics ---
        3 packets transmitted, 3 received, 0% packet loss, time 2000ms
        rtt min/avg/max/mdev = 0.043/0.048/0.058/0.010 ms

    dig, however, correctly returns the following:

        # dig authserver.mojang.com
        ; <<>> DiG 9.9.3-P2 <<>> authserver.mojang.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15800
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

        ;; OPT PSEUDOSECTION:
        ; EDNS: version: 0, flags:; udp: 512
        ;; QUESTION SECTION:
        ;authserver.mojang.com.    IN    A

        ;; ANSWER SECTION:
        authserver.mojang.com.    5    IN    A    54.235.119.47

        ;; Query time: 14 msec
        ;; SERVER: 2001:4860:4860::8888#53(2001:4860:4860::8888)
        ;; WHEN: Sat Nov 09 15:34:40 GMT 2013
        ;; MSG SIZE rcvd: 66

    I'm confused! My web browser returns the correct website, and the same computer booted into Windows also works correctly.
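
    A hedged first check: ping resolves through the libc resolver (nsswitch, /etc/hosts, any nscd cache) while dig queries the DNS server directly, so comparing the two paths should show where 127.0.0.1 comes from.

        getent hosts authserver.mojang.com   # what ping actually resolves
        grep -i mojang /etc/hosts            # a stray hosts entry?
        nscd -i hosts                        # flush the cache, if nscd is running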

    Read the article

  • Using AWStats, cannot get MaxNbOfExtraX to limit rows in Extra Report

    - by user137519
    Folks, got something really odd here I'd like to resolve. I've been using AWStats and have a couple of extra reports, but I cannot get any of them to limit their rows using MaxNbOfExtraX. Here are two examples:

        ExtraSectionName1="Top 100 Searches"
        ExtraSectionCodeFilter1="200 304"
        ExtraSectionCondition1="URL,/search/search_post.php"
        ExtraSectionFirstColumnTitle1="Search Parameters"
        ExtraSectionFirstColumnValues1="QUERY_STRING,(.*)"
        ExtraSectionFirstColumnFormat1="QueryParameters: %s"
        ExtraSectionStatTypes1=HL
        ExtraSectionAddAverageRow1=0
        ExtraSectionAddSumRow1=1
        MaxNbOfExtra1=100
        MinHitExtra1=4

        ExtraSectionName2="Top 100 Downloads"
        ExtraSectionCodeFilter2="200 304"
        ExtraSectionCondition2="URL,/filedownload.php"
        ExtraSectionFirstColumnTitle2="File Downloads"
        ExtraSectionFirstColumnValues2="QUERY_STRING,(.[0-9]{5})(h|p)?."
        ExtraSectionFirstColumnFormat2="File ID: %s"
        ExtraSectionStatTypes2=HL
        ExtraSectionAddAverageRow2=0
        ExtraSectionAddSumRow2=1
        MaxNbOfExtra2=100
        MinHitExtra2=3

    According to all the documentation I've read, MaxNbOfExtra1 should keep the limit to 100. However, when I run this with debug messages enabled I get a message indicating that the query would be in excess of 500, and it would not run. When I increased ExtraTrackedRowsLimit to 2000 it worked, but the option I provided should have lowered that. I even tried without ExtraTrackedRowsLimit, with MaxNbOfExtra1=100, but got the same error: no limit to 100 and the "excess of 500" error. I have URLWithQuery=1 and my reports do run properly, along with my regex filters. I am using MinHitExtra1 to limit the rows and that works, so why can I not get the MaxNbOfExtraX option to work? Any ideas? Thanks in advance.
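
    A hedged reading of the debug message: AWStats appears to check the number of rows it tracks against the global ExtraTrackedRowsLimit (default 500) before the per-report display limit MaxNbOfExtraX is ever applied, so raising the global limit alongside the per-report caps may be the intended configuration:

        ExtraTrackedRowsLimit=2000   # global cap on rows tracked across extra sections
        MaxNbOfExtra1=100            # rows displayed in report 1
        MinHitExtra1=4               # drop rows below this hit count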

    Read the article

  • Figuring out which PC part is faulty

    - by Davy8
    I have an odd scenario and I'm having trouble figuring out which component is faulty. First of all, the video doesn't work; the monitor says it's not getting a signal. The monitor isn't faulty (it works on another computer), so the first suspect was the video card. However, two things make me think it's not the video card (I don't have another machine with PCIe around to test definitively). First, the GPU fan is spinning, so it's getting power. Second, I tried putting in an older PCI video card that is known to be working (pulled out of another working machine) and there's still no video. Normally if it's not the video card I'd suspect the motherboard, but everything on the mobo is getting power, so I'm not sure. The case apparently doesn't have a system speaker, so I can't hear any of the diagnostic beeps either. I'm also not sure whether a faulty CPU would cause no image at all. The parts are brand new so something's going to get RMA'd, but I'm not sure which component is to blame.

    (Only slightly related, but I also accidentally put too much thermal paste on the CPU. The fan/heatsink instructions said to use the whole tube, which seemed like a lot compared to previous experience; as I started squeezing I knew it was definitely too much and stopped at about 1/3, but against my better judgement I didn't wipe any off. I'm not sure whether that would cause problems other than not cooling as effectively as it should.)

    Read the article

  • Why are my Windows 7 updates continuously failing?

    - by Chris C.
    I'm an advanced level user here with an odd issue. I have two Windows Updates that are failing to install, every single time. I'm getting a mysterious "Code 1" error on both updates, an error for which I'm having difficulty finding a solution. The updates in question are:

        Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
        System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]

    Because these updates are failing, the Shut Down button in my start menu always has the shield icon next to it, indicating that "new" updates will be installed on shut down. But, of course, they'll fail, and when the PC is restarted the shield icon is still there. When checking the update history and viewing the details of the failed updates, I get the following:

        Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
        Installation date: 6/29/2011 3:00 AM
        Installation status: Failed
        Error details: Code 1
        Update type: Important
        A security issue has been identified leading to MFC application vulnerability in DLL planting due to MFC not specifying the full path to system/localization DLLs. You can protect your computer by installing this update from Microsoft. After you install this item, you may have to restart your computer.
        More information: http://go.microsoft.com/fwlink/?LinkId=216803

    and:

        System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]
        Installation date: 6/28/2011 3:00 AM
        Installation status: Failed
        Error details: Code 1
        Update type: Important
        This tool is being offered because an inconsistency was found in the Windows servicing store which may prevent the successful installation of future updates, service packs, and software. This tool checks your computer for such inconsistencies and tries to resolve issues if found.
        More information: http://support.microsoft.com/kb/947821

    About my system: I'm running Windows 7 Home Premium 64-bit. This is a custom PC build and the OS was installed fresh, not an upgrade from a previous version. I've been running this system for about four months. Windows Updates aside, the system is usually quite stable.
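
    A generic first step, not specific to the "Code 1" error (folder name assumes a default install): reset the Windows Update download cache from an elevated command prompt, then retry both updates.

        net stop wuauserv
        net stop bits
        ren %windir%\SoftwareDistribution SoftwareDistribution.old
        net start bits
        net start wuauserv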

    Read the article

  • Can a cur file have an alpha map (for high quality transparency)?

    - by Codemonkey
    I want to convert this Heroes of Newerth cursor into a cur file so I can use it as a cursor on Windows and on my website. The cursor is currently in a PNG file with an alpha map, and I have so far been unsuccessful in converting it to a cur file. The first thing I tried was using the Photoshop cursor plugin to simply save the PNG as a cur; the result was a working cur file with a white background instead of a transparent one. I've also tried painting the background pure green (0, 255, 0) and saving as a cur file, in which case the background was still green when I used it as a Windows cursor. I've also tried what's mentioned in a few tutorials: selecting the area I want transparent and saving the selection in Photoshop (it then ends up as a new channel). So I selected the cursor, inverted the selection and saved it. The result was a bit odd: instead of a transparent background, I now get a background that inverts the colours behind it, just like the "Windows inverted" mouse scheme. So is there any way to accomplish what I'm trying to do?

    Read the article

  • How to configure default text selection behavior in Windows XP, 7? (eg. mouse click selects entire word vs. mouse click inserts an active cursor)

    - by Mouse of Fury
    I find the mouse click behavior of Windows XP and Windows 7 annoying and intrusive. I don't remember Windows NT being quite this bad, or Mac OS 7-10, which I used in the nineties. When I'm using a browser and I click on a text field - for example, the address bar, or a search box - the first thing that happens is the entire field is selected. Subsequent clicks seem to select parts of words, often deciding arbitrarily to exclude or include adjacent punctuation. The same happens in Excel and other apps, and when trying to rename files, so I'm assuming this behavior comes from a system-wide text handling routine. I frequently want to edit text, cut out or replace odd parts of the insides of words or chunks of sentences, and often find that to get a simple insertion cursor I have to click the mouse up to 4 times in succession. I've had to do a lot of this recently and it has been driving me insane. Is there a place at the system level where this can be configured? In a perfect world, I'd like a single click on a new text area to insert a cursor point, and a rapid double click to select the entire area. Words or text within the area could be selected by inserting a cursor, holding down the mouse button and dragging to the exact point where I want the selection to end - even if that's in the middle of a word. No, I don't need or want Windows to "smart select" a word or sentence for me. I've looked in the Mouse and Accessibility Options control panels (Windows XP) and haven't found anything even close. Thanks -

    Read the article

  • Disable internal display on Macbook Pro without closed lid mode?

    - by jslaker
    I have an early 2007 MacBook Pro running 10.5 that I've recently set up on a KVM with my primary desktop system. The problem I've run into is that I have a 20" 1680x1050 LCD, and OS X only provides options to mirror at the resolution of the built-in display or to span. Since the built-in display runs at 1440x900, this leads to running my LCD at a non-native resolution and a fuzzy picture. There isn't any option that I can find to simply disable the built-in display entirely and run the external LCD at its native resolution. I am aware of closed-lid mode, but the MBP was disassembled while in storage for about 6 months (I took it apart to pull the HDD) and the cable to the touchpad, which controls the sleep sensor, was damaged, meaning closed-lid mode won't work. I've looked into replacing the cable, but the cheapest I've been able to find it is $75-100, and I'm trying not to invest any more money into this computer as it also has a completely dead battery and a few other minor problems. I've found the app SwitchResX, which appears to allow you to do what I need, but it has a lot of functionality I don't need and a ~$20 registration charge attached to it. An odd set of circumstances, I'm aware, but I was hoping somebody might know of an OS hack that would let me just disable the internal display and be done with it. :)

    Read the article

  • Exchange emails not delivering for one user

    - by Cylindric
    We have an Exchange infrastructure going through a migration from 2003 SP2 (call it ExOld) to 2010 (ExNew). All users are now on the new server, but mail is still being directed to ExOld until testing is complete. ExNew sends emails directly to the internet. For one particular user, emails don't seem to be reliably delivered, but the odd thing is that it's not all emails. I can see external emails in his inbox. If I send an internal email it works fine. If I send an email from Gmail to him it doesn't get through. If I telnet from outside to ExOld I can send an email to him. If I telnet from outside to ExNew I can send an email to him. This is a transcript that results in a successful send:

        220 ExOldName Microsoft ESMTP MAIL Service, Version: 6.0.3790.4675 ready at Mon, 22 Oct 2012 10:55:26 +0100
        EHLO test.com
        500 5.3.3 Unrecognized command
        EHLO test.com
        250-ExOldFQDN Hello [MyTestExternalIp]
        250-TURN
        250-SIZE
        250-ETRN
        250-PIPELINING
        250-DSN
        250-ENHANCEDSTATUSCODES
        250-8bitmime
        250-BINARYMIME
        250-CHUNKING
        250-VRFY
        250-X-EXPS GSSAPI NTLM LOGIN
        250-X-EXPS=LOGIN
        250-AUTH GSSAPI NTLM LOGIN
        250-AUTH=LOGIN
        250-X-LINK2STATE
        250-XEXCH50
        250 OK
        MAIL FROM:[email protected]
        250 2.1.0 [email protected] OK
        RCPT TO:[email protected] notify=success,failure
        250 2.1.5 [email protected]
        DATA
        354 Start mail input; end with .
        Subject:Test 1056
        Test 10:56
        .
        250 2.6.0 Queued mail for delivery
        quit
        221 2.0.0 ExOldFQDN Service closing transmission channel

    Emails go through Symantec Cloud, but their "Track and Trace" shows the messages going through, with a "delivered ok" log entry:

        2012-10-22 09:19:56 Connection from: 209.85.212.171 (mail-wi0-f171.google.com)
        2012-10-22 09:19:56 Sending server HELO string: mail-wi0-f171.google.com
        2012-10-22 09:19:56 Message id: CAE5-_4hzGpY2kXFbzxu7gzEUSj5BAvi+BB5q1Gjb6UUOXOWT3g@mail.gmail.com
        2012-10-22 09:19:56 Message reference: 135089759500000177171130001194006
        2012-10-22 09:19:56 Sender: [email protected]
        2012-10-22 09:19:56 Recipient: [email protected]
        2012-10-22 09:20:26 SMTP Status: OK
        2012-10-22 09:19:56 Delivery attempt #1 (final)
        2012-10-22 09:19:56 Recipient server: ExOldIP (ExOldIP)
        2012-10-22 09:19:56 Response: 250 2.6.0 Queued mail for delivery

    I'm not sure where to look on the old (or new) server for information as to where the mails are ending up.
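
    A hedged next step on the Exchange 2010 side (server alias is from the question; the address below is a placeholder): message tracking shows whether ExNew ever saw the message after ExOld queued it.

        Get-MessageTrackingLog -Server ExNew -Recipients "user@domain" -Start "10/22/2012 09:00" |
            Select-Object Timestamp,EventId,Source,Sender,MessageSubject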

    Read the article

  • Centos repository packages vs latest developer release

    - by fran
    I have started to run a personal server using CentOS and I have noticed that many packages available to install from the repository are old compared with the latest release from the developer. I know that installing packages from the repository is very easy, and I guess the supplied versions are stable and prepared to work without any trouble, but I still find it odd to have so much software that lags behind the current version. It's my first time with Linux and I don't know what the "normal" thing is: should I stick to whatever version the repository supplies, or try to get the latest from the developer? To be more precise, the repository supplies the Apache httpd web server at version 2.2 and I wanted to update to 2.4, so I started removing Apache and its dependency packages that come with CentOS in order to use the latest ones. But when I was about to remove pcre v6 to replace it with v8, I found out that 132 installed packages depend on it and that it is probably not a good idea to remove it, which made me think twice about getting the latest software instead of using the packages supplied by the official repositories. Should I leave things as they are instead of going on an upgrade rampage? Thanks
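
    A hedged way to gauge the blast radius before removing a base package like pcre (repoquery ships in the yum-utils package):

        repoquery --whatrequires --recursive pcre | sort -u | wc -l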

    Read the article

  • glusterfs mounts get unmounted when 1 of the 2 bricks goes offline

    - by Shiquemano
    I have an odd case where 1 of the 2 replicated glusterfs bricks will go offline and take all of the client mounts down with it. As I understand it, this should not be happening; it should fail over to the brick that is still online, but this hasn't been the case. I suspect that this is due to a configuration issue. Here is a description of the system:

        2 gluster servers on dedicated hardware (gfs0, gfs1)
        8 client servers on VMs (client1, client2, client3, ..., client8)

    Half of the client servers are mounted with gfs0 as the primary, and the other half are pointed at gfs1. Each of the clients is mounted with the following entry in /etc/fstab:

        /etc/glusterfs/datavol.vol /data glusterfs defaults 0 0

    Here is the content of /etc/glusterfs/datavol.vol:

        volume datavol-client-0
            type protocol/client
            option transport-type tcp
            option remote-subvolume /data/datavol
            option remote-host gfs0
        end-volume

        volume datavol-client-1
            type protocol/client
            option transport-type tcp
            option remote-subvolume /data/datavol
            option remote-host gfs1
        end-volume

        volume datavol-replicate-0
            type cluster/replicate
            subvolumes datavol-client-0 datavol-client-1
        end-volume

        volume datavol-dht
            type cluster/distribute
            subvolumes datavol-replicate-0
        end-volume

        volume datavol-write-behind
            type performance/write-behind
            subvolumes datavol-dht
        end-volume

        volume datavol-read-ahead
            type performance/read-ahead
            subvolumes datavol-write-behind
        end-volume

        volume datavol-io-cache
            type performance/io-cache
            subvolumes datavol-read-ahead
        end-volume

        volume datavol-quick-read
            type performance/quick-read
            subvolumes datavol-io-cache
        end-volume

        volume datavol-md-cache
            type performance/md-cache
            subvolumes datavol-quick-read
        end-volume

        volume datavol
            type debug/io-stats
            option count-fop-hits on
            option latency-measurement on
            subvolumes datavol-md-cache
        end-volume

    The config above is the latest attempt at making this behave properly. I have also tried the following entry in /etc/fstab:

        gfs0:/datavol /data glusterfs defaults,backupvolfile-server=gfs1 0 0

    This was the entry for half of the clients, while the other half had:

        gfs1:/datavol /data glusterfs defaults,backupvolfile-server=gfs0 0 0

    The results were exactly the same as with the above configuration. Both configs connect everything just fine; they just don't fail over. Any help would be appreciated.
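
    Hedged diagnostics, assuming the bricks are still managed by glusterd despite the hand-written client volfile; the client log path is the default derived from the /data mount point:

        gluster volume status datavol      # are both bricks actually online?
        gluster volume heal datavol info   # files pending heal after the outage?
        tail /var/log/glusterfs/data.log   # which child does the replica think died?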

    Read the article

  • ssh hangs on "Last login" line

    - by Pavel H
    This happened for the first time three days ago: I ssh to the server, authenticate using a password, and get the welcome message, but it hangs on the "Last login: ..." line. The command line doesn't show and the server doesn't react to my input. Other services on the server keep working fine (apache, tomcat, database, ...). The box has out-of-band management, using which I was able to restart it. After the restart the ssh worked again and I didn't find anything suspicious in the logs. Three days later the same problem occurred on this box again, and newly on yet another server in the cluster - 100% the same symptoms. Both servers have an installation of Debian Squeeze (6.0.2) that is about two months old, and the problem never occurred before despite frequent ssh-ing, so it should not be a problem of settings. We haven't installed anything new for quite some time now. I also made sure there is enough disk space on both servers. Since it started to happen all of a sudden on two servers at about the same time, I suspect some bug may have been introduced via Debian updates, yet I haven't been able to find anyone with the same problem. The most similar issues I have found: ssh freezes at the "Last Login" line - in our case everything worked fine until recently, so nothing related to settings should be our problem (disk space checked; I couldn't check the memory, but I would expect something in the logs if the system had been running out of it); and Remote Fedora system unresponsive, odd but consistent behavior when trying to log in - a problem with high load on the server; unlike in that case, nothing changes here even if I wait for 10+ minutes.
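
    A hedged way to narrow it down next time it hangs (nothing here is Squeeze-specific): run the client verbosely to see whether the hang is before or after the shell starts, and inspect the stuck session from the out-of-band console.

        ssh -vvv user@server              # last debug line shows which stage stalls
        # from the OOB console on the server:
        ps auxww | grep -E 'sshd:|bash'   # is a shell spawned, or is sshd stuck?
        cat /proc/loadavg; free -m        # rule out load/memory at hang time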

    Read the article

  • TFS 2010 Subfolder Permissions

    - by gmcalab
    I am a TFS admin, and I have a TFS project in which a subfolder needs specific permissions to deny some users. So, I right-click the folder in question, hit Properties, and click the Security tab. There I select the Windows User or Group radio button, then click Add. I put in the AD user that I want specific permissions for and hit Check Names. That resolves, so I click OK. Next, I select the permissions to Allow or Deny below in the Permissions list and hit OK. The permissions are honored by TFS; this user no longer has PendChange permissions, as I was expecting. The odd thing is, I was expecting to be able to go back into the Security tab and see that user in the list of Users and Groups and see the current state, but the list is always empty. Not sure why, but the permissions are definitely being honored; I can re-add the user with different permissions and those are also honored. Any ideas why the current users are not showing up in the Users and Groups list under the Security tab for a folder's properties? I also used tf permission $\... to see if there were any permissions, but it always returns: There are no permissions set for this item (Inherit: Yes)

    Read the article

  • KVM virtual machine unable to access internet

    - by peachykeen
    I have KVM set up to run a virtual machine (Windows Home Server 2011 acting as a build agent) on a dedicated server (CentOS 6.3). Recently I ran updates on the host, and the virtual machine is now unable to connect to the internet. The virtual network is running through NAT; the host has an interface (eth0:0) set up with a static IP (virt-manager shows the network and its IP correctly), and all connections to that IP should be sent to the guest. The host and guest can ping one another, but the guest cannot ping anything above the host, nor can I ping the guest from anywhere else (I can ping the host). Pings from the guest to another server under my control, and from an external system to the guest, both return "Destination port unreachable". Running tcpdump on the host and on the destination shows the host replying to the ping, but the destination never sees it (it doesn't even look like the host is bothering to forward it at all, which leads me to suspect iptables). The ping output matches that, listing replies from 192.168.100.1. The guest can resolve DNS, however, which I find rather odd. The guest's network settings (connection TCP/IPv4 properties) are set up with a static local IP (192.168.100.128), a mask of 255.255.255.0, and gateway and DNS at 192.168.100.1. When originally setting up the VM and network I had set up some iptables rules to enable bridging, but after my hosting company complained about the bridge I set up a new virtual net using NAT, and I believe I removed all the rules. The VM's network was working perfectly fine for the last few months, until yesterday. I haven't heard anything from the hosting company and didn't change anything on the guest, so as far as I know nothing else has changed (unfortunately the list of updated packages has since fallen off scrollback and I didn't note it down).
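
    A hedged pair of checks on the CentOS host, since a host update can silently drop or replace NAT rules (the subnet is taken from the question):

        cat /proc/sys/net/ipv4/ip_forward      # should print 1
        iptables -t nat -L POSTROUTING -n -v   # expect a MASQUERADE rule for 192.168.100.0/24
        service libvirtd restart               # libvirt re-inserts its NAT rules on start; an assumption worth testing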

    Read the article

  • How does rsync --daemon know which way it is being run?

    - by Skaperen
    I want to run rsync over an SSL/TLS encrypted connection. It does not do this directly, so I am exploring options. The stunnel program looks promising, although more complicated than intended due to the need to hop connections with the -r option. However, I do find there is a -l option to run a program. I am assuming this works by having two processes: one to carry out the SSL/TLS work, and one to be the worker the client is communicating with. These would then communicate by a pipe pair or a two-way socket between them. What struck me as odd when I surveyed a number of web pages on how to properly set this up is that whether running as a standalone daemon or under a super-daemon like inetd, the arguments for rsync are the same. How does rsync --daemon know whether it should open a socket and listen on it for many connections, or just service one connection by communicating with the stdin/stdout descriptors it has when it starts up (which really would go through the extra process to handle the encryption, decryption, and SSL/TLS protocol layer)? And then I need to find a way to wrap the client to have it do SSL/TLS in one simple command (as opposed to the connection hopping that stunnel seems to favor).
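
    From what I can tell, rsync decides by checking whether its stdin is already a socket: if it is, it services that single connection inetd-style; otherwise it opens its own listening socket. A hedged stunnel 4.x sketch that relies on the inetd-style path (service name, port and certificate path are assumptions; stunnel hands rsync a socketpair on stdin/stdout):

        cert = /etc/stunnel/rsyncd.pem
        [rsync-tls]
        accept = 874
        exec = /usr/bin/rsync
        execargs = rsync --daemon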

    Read the article

  • Mounting fuse sshfs fails when invoked by Cron on FreeBSD 9.0

    - by Tal
    I have a remote server filesystem that I'm attempting to mount locally on a FreeBSD 9 machine via FUSE sshfs and cron, for a backup routine. I have ssh keys between the boxes set up to allow passwordless login as the root user on the local machine. Cron is set to run the following script (in root's crontab):

        #!/bin/sh
        echo "Mounting Share"
        /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server

    As root, I can run this script on the command line without issue, and without being asked for a password the share mounts successfully. Yet when run by cron the script fails. The path to sshfs is identical to the value of which sshfs. Here is the email root receives from the cron daemon:

        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=root>
        X-Cron-Env: <USER=root>

        Mounting Share
        fuse: failed to exec mount program: No such file or directory
        fuse: failed to mount file system: No such file or directory

    I'm stumped as to why I'm receiving "No such file or directory" in this instance; it seems odd given that the paths appear to be correct. I've also compared the output of env on the shell with env inserted into the script, and I don't see any environment variables that should cause this trouble. At bootup, FUSE reports its version as:

        fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8

    Help me ServerFault wizards, you're my only hope!
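
    The cron mail above shows PATH=/usr/bin:/bin, and on FreeBSD the FUSE mount helper (mount_fusefs) typically lives under /usr/local/sbin, which would explain "failed to exec mount program". A hedged fix is to set a full PATH at the top of the script:

        #!/bin/sh
        # cron's default PATH cannot see mount_fusefs; export a full one first
        PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
        export PATH
        echo "Mounting Share"
        /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server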

    Read the article

  • Windows Home Server 2011, No disks "suitable for a backup destination"

    - by Scott Beeson
    I recently installed Windows Home Server 2011 and love it. However, when I try to set up server backups, it says no suitable disks are available. Initially, before I set up my RAID, it found one of my twin drives and said it would work. Once I set up the mirroring, that one is no longer available (obviously). However, I have an internal SATA 1TB drive and an external USB 2.0 1TB drive hooked up. Both are recognized by Disk Management, but WHS 2011 still says nothing is suitable for backups. Edit to clarify: the system partition is on Disk 0, not listed below; the two below are the two that SHOULD be available for system backups. The two drives' details are as follows:

        Disk 1: Dynamic
            "Data" (D:) 931.51 GB NTFS, Healthy
        Disk 3: Basic
            200 MB Healthy (EFI System Partition)
            "Backup" 930.66 GB NTFS, Healthy (Primary Partition)

    What's a bit odd is that in Disk Management the "Backup" volume does not show a drive letter, even though I assigned Z: (which is reflected in My Computer). I also cannot make this a dynamic disk, as it says that's unsupported by the device.

    Read the article

  • What's the difference between pulling from a branch into master and pushing that branch onto master?

    - by Justin808
    In TortoiseGit, on the repository, I right-click and select Sync. At the top of the dialog there are options for Local Branch and Remote Branch. If the local branch is named DeveloperA and the remote branch is master and I do a Push, what happens? If the local branch is master and the remote branch is DeveloperA and I Pull, what happens? If I am on the master branch, right-click, select Merge, and change the From to be my DeveloperA branch, what happens? If I try to push from master to the remote master and the remote has been updated, git stops and tells me to pull. It seems that if I push from DeveloperA to master it doesn't stop, it just clobbers; is that correct? We're having an issue using git where the remote master branch gets clobbered at times, and we are trying to figure out why. For example, there is a developer working on his DeveloperA branch. He'll pull from master to get any updates, then push to master to push out his changes. But there are times when the push lists more files in the Out Commit list than he's edited. The odd thing is he can't revert those files, as git says they are up to date and have not been modified, yet when he pushes, git pushes the files out. The problem is that if there are changes between his pull and push, the changes get clobbered.
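
    A hedged sketch of what the Sync dialog maps to on the command line (the remote name is assumed to be origin):

        git push origin DeveloperA:master   # push local DeveloperA onto remote master;
                                            # rejected unless it is a fast-forward, so no silent clobber
        git pull origin DeveloperA          # fetch remote DeveloperA and merge it into the checked-out branch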

    Read the article

  • Display errors using VMWare Player and Remote Desktop on Windows XP

    - by Tim
    I've come up against a weird display issue that I can't seem to find any "fix" for. When I first boot up my computer, everything behaves normally. Once I start to use VMware Player and/or Remote Desktop, my desktop starts having odd video issues: the frames for some windows aren't drawn at all; if I move windows around rapidly, the area under where the window used to be isn't cleared (it still shows artifacts of the window's content); and in some cases the minimize/restore/maximize buttons aren't drawn (but are clickable if you can guess where they are). I've tried the usual stuff - current drivers all around, using a single monitor, etc. - and none of it seems to have any bearing. If I try to disable hardware acceleration, it tends to crash the computer. As I said earlier, it's running Windows XP with dual monitors, an NVIDIA EN8400GS video card and an ASUS P7P55D motherboard. Not sure what other pertinent details are needed. I would appreciate any help or suggestions!

    Read the article

  • Seeing traffic destined for other people's servers in wireshark

    - by user350325
    I rent a dedicated server from a hosting provider. I ran Wireshark on my server so that I could see incoming HTTP traffic destined for my server. Once I ran Wireshark and filtered for HTTP, I noticed a load of traffic, but most of it was not for stuff hosted on my server and had a destination IP address that was not mine, with various source IP addresses. My immediate reaction was to think that somebody was tunnelling their HTTP traffic through my server somehow. However, when I looked closer I noticed that all of this traffic was going to hosts on the same subnet, and all of these IP addresses belonged to the same hosting provider I was using. So it appears that Wireshark was intercepting traffic destined for other customers whose servers are attached to the same part of the network as mine. Now, I always assumed that on a switched network this should not happen, as the switch will only send data to the required host and not to every attached box. I assume in this case that other customers would also be able to see data going to my server. As well as the potential privacy concerns, this would surely make ARP poisoning easy and allow others to steal IP addresses (and therefore domains and websites)? It would seem odd for a network provider to configure the network in such a way. Is there a more rational explanation here?
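
    A hedged way to test whether this is promiscuous capture rather than tunnelling: -p disables promiscuous mode, so only frames actually addressed to (or flooded at) this NIC should remain (the interface name is an assumption).

        tcpdump -p -n -i eth0 port 80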

    Read the article

  • Network problems that might be related to NAT

    - by nenne
    Hello, I have an odd setup where one router (Router 2) routes between network 1 and network 2, and another router (Router 1) with NAT for internet access routes between the internet and network 1. There are people in both of these networks. All the clients in network 1 can access the internet; the clients in network 2 can access the clients in network 1 and can also reach Router 1, and Router 1 can also reach clients in network 2. However, the clients in network 2 cannot reach the internet. I cannot think of anything in the routing tables that would hinder this, since Router 1 can reach the clients in network 2 and vice versa. Can it be that NAT starts the session between Router 2 and the internet site/machine instead of between the client and the internet machine? Does anyone have any ideas? I have very little control over Router 2 (it's basically an ISP VPN net service) but full access to Router 1, which is an Ubuntu 10.04 box with iptables for the NAT/firewall setup.
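
    A hedged guess at the cause: Router 1's NAT rule only matches sources in network 1, so packets from network 2 reach the internet untranslated and the replies never come back. On the Ubuntu box, something like the following (the subnet and interface are placeholders for network 2 and the internet-facing interface):

        iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE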

    Read the article

  • Slower/cached Linux file system required

    - by Chopper3
    I know it sounds odd, but I need a slower or cached filesystem. I have a lot of firewalls that are syslog'ing their data to a pair of Linux VMs, which write these files to their 'local' (actually FC SAN attached) ext3-formatted disks and also forward the messages to our Splunk servers. The problem is that the syslog server is writing these messages as hundreds, sometimes thousands, of tiny ~4k writes per second back to our FC SAN. The SAN can handle this workload right now, but our firewall traffic is going to grow by at least a factor of 5000% (really) in the coming months, and that'll be a pain for the SAN; I want to fix the root cause before it's a problem. So I need some help figuring out a way of getting these writes cached or held off in some way from the 'physical' disks, so that the VMs fire off larger but less frequent writes. There's no way of avoiding these writes, but there's no need for so many tiny ones. I've looked at the various ext3 options and set noatime and nodiratime, but that hasn't made much of a dent in the problem. Obviously I'm investigating other filesystems, but thought I'd throw this out in case others have the same problem in the future. Oh, and I can't just forward these messages to Splunk; our firewall team insists they be kept in their original format for diagnostic purposes.
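
    A hedged experiment on the ext3 side (the device, mount point and values are guesses, and writeback ordering trades crash-safety for fewer journal flushes): lengthen the journal commit interval so the kernel coalesces the tiny writes into larger, less frequent flushes.

        # /etc/fstab sketch for the syslog volume:
        /dev/sdb1  /var/log/syslog-data  ext3  noatime,nodiratime,data=writeback,commit=60  0 2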

    Read the article

  • User unable to delete folder / files "File in use by another user" Server 2003

    - by Az
    I am administering a standalone Windows 2003 Terminal Server with no domain membership. Occasionally (about once a week or so) a user will attempt to delete a subfolder in a shared folder and get denied with "File in use by another user". I tried checking the Shared Folders snap-in, and that folder is not open. She has full control and is the owner of the folder as well. I even checked Effective Permissions for some of the folders/files she can't delete, and she truly has full control. I am able to delete the folder as Administrator with no problem. Another odd thing: she can delete the files IN the folder most of the time (this issue happens on both folders and files in the share). Sometimes merely waiting a day or two will allow her to delete the folder or files. I am curious as to why she gets the message that it is in use as creator/owner with full control, yet I don't get it simply as a member of the Admin group. If anyone out there has any ideas I'd love to hear them! THANK YOU.
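
    A hedged check from the server console, run right before she retries the delete: list open handles against the share (note that openfiles may need the "maintain objects list" global flag enabled before it reports locally opened files).

        net file
        openfiles /query /fo table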

    Read the article

  • Why is my Toshiba Satellite L675D laptop not connecting to my HD TV using an HDMI cord?

    - by Akasha Eyre
    I'm having a problem connecting my Toshiba Satellite L675 laptop to my Samsung HD TV. I've done it before using an HDMI cord but now, for some odd reason, it's not connecting at all. Before, my computer screen would go completely black for a second and then come back, and (with the laptop attached to the TV with the HDMI cord and the TV on the proper channel) the TV would make a sound, come on, and work beautifully. Now I try hooking it up and it doesn't do anything: no black screen, no sound. The TV screen just says there's no connection and that I need to check the power source. I'm still checking around, trying to find out if there's a virus on my computer (since my dad and I have the same computer and he can connect his to the TV with no problems), or if I deleted something that had to do with the connection. I've tried turning my computer off and letting it sit for a while, and I've tried checking other websites, but have come up with nothing.

    Read the article

  • Embedded Windows XP

    - by Kyle
    My company acquired a brake press for bending large structural steel plates. We received it second-hand and it came with an embedded copy of Windows XP. Now for the part that's driving me nuts: Plug and Play has been turned off, and Accessibility Options has been disabled. What does this mean for me? Keyboards will not work! Nothing that plugs into a USB port will work, and it does not have a CD-ROM drive. I have tried to turn on Plug and Play using the on-screen keyboard, but that isn't available since Accessibility Options is turned off. I would just get an updated copy of the embedded OS, but they come from Sweden and are extremely expensive. I assume there has got to be a way to get USB devices to work. We need to get a wifi adapter on it so I can use TeamViewer and remotely configure it for our needs. Things to keep in mind: there is no keyboard, so everything has to be done with the mouse; there is no PS/2 port for a keyboard, just a mouse (odd); I am 5 states away from this location and have been working with a tech who is physically installing the machine; and System32 seems to be missing A LOT of files (the tech told me there are only 8 folders in there and no other files; I don't even understand how Windows is running like this). If anyone has ANY ideas I would appreciate it; I am unsure where to go from here.

    Read the article
