Search Results

Search found 7583 results on 304 pages for 'roger guess'.

  • Sendmail in Nexenta core 3.0.1

    - by maximdim
    I'm trying to set up sendmail on Nexenta Core 3.0.1 (a Solaris-based OS). All I want is to be able to send emails from that host - notifications about failures, cron job output, etc. Nexenta Core doesn't ship with sendmail, so here is what I've done:

        apt-get install sunwsndmu

    Now there is a sendmail binary in /usr/sbin/sendmail. When I try to send an email from the command line:

        $ mail maxim
        test
        .

    it doesn't give me any error, but in the log file I see:

        Dec 20 12:41:08 nas sendmail[12295]: [ID 801593 mail.info] oBKHf8u7012295: from=maxim, size=107, class=0, nrcpts=1, msgid=<201012201741.oBKHf8u7012295@nas>, relay=maxim@localhost
        Dec 20 12:41:08 nas sendmail[12295]: [ID 801593 mail.info] oBKHf8u7012295: to=maxim, ctladdr=maxim (1000/10), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30107, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection refused by [127.0.0.1]

    So I guess I need to have an SMTP service running. How do I do that in Nexenta? svcs -a | grep sendmail doesn't return anything, and:

        # svcadm enable sendmail
        svcadm: Pattern 'sendmail' doesn't match any instances

    I'm not married to sendmail, so if there are easier ways to achieve my goal, I'm open to suggestions as well. Thanks,
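
    For reference, this is the SMF incantation I'd expect to need; the manifest path is a guess, assuming the sunwsndmu package ships the stock Solaris sendmail manifest:

        # import the service manifest so SMF knows about sendmail (the path is my assumption)
        svccfg import /var/svc/manifest/network/smtp-sendmail.xml
        # then enable it and verify it shows up
        svcadm enable svc:/network/smtp:sendmail
        svcs -a | grep sendmail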

  • Chmod 644 on /etc/ - any way to fix?

    - by DazSlayer
    I tried to tab-complete something and I guess it wasn't there. I know you are not supposed to set permissions on /etc/ like that, but my permissions seem to be all messed up: whoami prints "cannot find name for user ID 1002" and I cannot cd into /etc/ anymore. passwd and shadow use 640 and 644, so I am not sure why this is a problem. Regardless, is there any way to fix this? The command run was sudo chmod 644 /etc/

        I have no name!@vpn-server:/$ whoami
        whoami: cannot find name for user ID 1002
        I have no name!@vpn-server:/$ cd etc
        bash: cd: etc: Permission denied
        I have no name!@vpn-server:/$ ls -al etc
        d????????? ? ? ? ? ? .
        d????????? ? ? ? ? ? ..
        d????????? ? ? ? ? ? acpi
        -????????? ? ? ? ? ? adduser.conf
        I have no name!@vpn-server:/$ sudo su
        sudo: can't open /etc/sudoers: Permission denied
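
    A sketch of the fix as I understand it: mode 644 strips the execute (search) bit from the directory, and without that bit nothing can traverse /etc, which is why even sudo fails. Assuming only /etc itself was changed (chmod without -R doesn't touch its contents), restoring it from a root shell or a recovery environment should be enough:

        # restore the standard mode on the directory itself
        chmod 755 /etc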

  • How to check DVD region settings in Windows 7

    - by jmatthias
    I have installed Windows 7 on my laptop. When I put a movie in the DVD drive, the video player starts up and tells me 'THIS DISC IS NOT FORMATTED TO PLAY IN THIS REGION'. I have changed the DVD region a few times in the past, and while I was on Windows XP I changed the region to Region 1. I cannot find how to check the DVD region setting in Windows 7 (it's not on the hardware properties page of the DVD drive anymore). Does anybody know how to check the DVD region in Windows 7?

    Update: If I connect an external USB DVD drive, I can see the DVD region tab on the hardware properties dialog. I guess there is some compatibility problem between my internal DVD drive and Windows 7 (as I said, I was able to inspect/change the DVD region in Windows XP).

    Solution: I think I had used LtnRPC in the past to remove the region from my DVD drive. It looks like Windows 7 does not like region-free DVD drives (at least mine, anyway). I was able to use LtnRPC to set the region back to 1, and I can now see the region tab in the DVD drive's hardware properties.

  • Can't connect to local MySQL server through socket

    - by Martin
    I was trying to tune the performance of a running MySQL server by running this command:

        mysqld_safe --key_buffer_size=64M --table_cache=256 --sort_buffer_size=4M --read_buffer_size=1M &

    After this I'm unable to connect to MySQL from the server where it is running. I get this error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

    However, luckily I can still connect to MySQL remotely, so all my webservers still have access and are running without any problems. Because of this, though, I don't want to try to restart the MySQL server, since that will probably mess everything up. Now I know that mysqld_safe starts the MySQL server, and since the server was already running I guess it's some kind of problem with two MySQL servers running and listening on the same port. Is there some way to solve this problem without restarting the initial MySQL server?

    UPDATE: This is what ps xa | grep "mysql" says:

        11672 ?        S      0:00 /bin/sh /usr/bin/mysqld_safe
        11780 ?        Sl   175:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
        11781 ?        S      0:00 logger -t mysqld -p daemon.error
        12432 pts/0    R+     0:00 grep mysql
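
    In the meantime, this is how I'm connecting locally; a workaround rather than a fix, assuming the server is still listening on TCP port 3306 (which the ps output above suggests):

        # force a TCP connection instead of the missing unix socket
        mysql --protocol=TCP -h 127.0.0.1 -P 3306 -u root -p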

  • PHP extension causes symbol lookup error

    - by Christian
    I installed - or rather tried to install - the NMCryptGate extension for PHP on my Debian 5.0.8 server. I did this by compiling the sources, which came up with no error message, and phpinfo() shows the extension as enabled. BUT, whenever I try calling a method from this extension, I get an error logged to the Apache error log:

        /usr/sbin/apache2: symbol lookup error: /usr/lib/php5/20060613+lfs/nmcryptgate.so: undefined symbol: nmlistalloc

    What is missing? I got two packages from the software company: the PHP module sources and some files which should - according to their path inside the tar - go to /usr/local/bin|doc|include|lib. I moved them there without any effect. Each of these two packages has its own config file, both looking almost the same:

        # libnmcryptgate.la - a libtool library file
        # Generated by ltmain.sh - GNU libtool 1.3.4 (1.385.2.196 1999/12/07 21:47:57)
        #
        # Please DO NOT delete this file
        # It is necessary for linking the library

        # The name that we can dlopen(3)
        dlname=''

        # Names of this library
        library_names='libnmcryptgate.so.1 libnmcryptgate.so libnmcryptgate.so'

        # The name of the static archive
        old_library=''

        # Libraries that this one depends upon
        dependency_libs=' -L. -L/usr/ssl/lib -L/usr/local/ssl/lib -L/usr/local/lib -lssl -lcrypto'

        # Version information for libnmcryptgate
        current=1
        age=0
        revision=29

        # Is this an already installed library
        installed=yes

        # Directory that this library needs to be installed in
        libdir='/usr/local/lib'

    I tried several ways to get it right: moving files, symlinking, changing configurations - always followed by restarting Apache - with no success. I guess I just have to move the files to the correct location or change the libdir inside the config files, but meanwhile I'm totally confused by the two packages: do I need both, which config rules what, do I have to use the libdir variable? And for what? Anybody out there willing to hint me toward my source of failure? Thank you in advance, regards, Christian
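
    These are the checks I'd run first; a diagnostic sketch, assuming the extension is supposed to get nmlistalloc from the bundled libnmcryptgate:

        # does the PHP module have unresolved library dependencies?
        ldd /usr/lib/php5/20060613+lfs/nmcryptgate.so
        # does the bundled library actually export the missing symbol?
        nm -D /usr/local/lib/libnmcryptgate.so | grep nmlistalloc
        # refresh the dynamic linker cache after copying libraries into /usr/local/lib
        ldconfig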

  • apache2: ssl_error_rx_record_too_long when visiting port 80? help!

    - by John
    Hi, I have an Ubuntu 10 x64 server edition machine. I got a second IP and configured /etc/network/interfaces like so (actual IPs and gateways removed):

        auto lo
        iface lo inet loopback

        iface eth0 inet dhcp
        auto eth0
        auto eth0:0

        iface eth0 inet static
        address [ my first IP ]
        netmask 255.255.255.0
        gateway [ my first gateway ]

        iface eth0:0 inet static
        address [ my second IP ]
        netmask 255.255.255.0
        gateway [ my second gateway ]

    /etc/apache2/ports.conf:

        Listen 80
        NameVirtualHost [ my first IP ]:80
        NameVirtualHost [ my second IP ]:80

        # If you add NameVirtualHost *:443 here, you will also have to change
        # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
        # to <VirtualHost *:443>
        # Server Name Indication for SSL named virtual hosts is currently not
        # supported by MSIE on Windows XP.
        Listen 443
        NameVirtualHost [ my first IP - some site is running SSL successfully using it ]:443
        Listen 443

    /etc/apache2/sites-enabled/mysite.conf:

        ServerName mysite.com
        Include /var/www/mysite.com/djangoproject/apache/django.conf

    Then when visiting http://[mysite].com:80 or http://[mysite].com, I get:

        An error occurred during a connection to [mysite].com.
        SSL received a record that exceeded the maximum permissible length.
        (Error code: ssl_error_rx_record_too_long)

    My guess is that the configuration file is not being picked up, and Apache is therefore looking for the default-ssl file, which is not in sites-enabled. If I were to configure that file properly, it seems I would successfully connect to whatever default directory is specified in the default-ssl file. But I want to connect to my website. Any ideas? Thanks in advance!
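
    A couple of diagnostic commands that might narrow it down; a sketch, assuming the stock Ubuntu Apache layout (the hostname is the placeholder from above):

        # list the vhosts Apache actually loaded, per address:port
        apache2ctl -S
        # ssl_error_rx_record_too_long usually means the client negotiated SSL
        # with something that answered in plain HTTP; check what port 443 returns
        echo | openssl s_client -connect mysite.com:443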

  • Remote desktop solution where the desktop sharing party contacts the computer it wants to share with

    - by Kent
    I'm in a situation where I act as a sort of technical support for my family and less technically experienced friends. I'm looking for a remote desktop solution where it's possible to set up a "zero-install, double click an icon" arrangement in which the client computer contacts me, so that I may interact with their desktop. The last part is important, as the people in need of my help don't know how to configure their router or even the firewall software on their own computer. They are able to click an accept button when asked if a program should be able to make outgoing connections. They have many different kinds of routers, as well as software firewalls, and I'd rather not deal with the problem of how to connect to them on top of the actual problem they are having. It must be:

    - Free of charge for non-commercial use.
    - Possible to use in a mode where the computer wanting to share its desktop makes a connection to my computer. My computer has a DNS name we can use.
    - Compatible with both Windows XP and Windows 7.
    - Independent of a third-party server or infrastructure.

    Explanations of the above:

    - I don't want to spend money on it when I help them for free. If it's free as in freedom, all the better!
    - I guess this boils down to being callable like showdesktopto.exe opscomputer.com, where opscomputer.com is my computer's DNS name. If that is possible, then I can create a shortcut they can use to connect to me when they need help. It's nice if it's possible to specify a password or key file which I can use to authenticate myself, but it's not required.
    - They use the OS their machine comes installed with. That means Windows XP or 7.
    - I want something which will work in the long run. A third-party service might not be available when I need it, which disqualifies such solutions.

  • Does Windows XP automatically reassemble UDP fragments?

    - by Matt Davis
    I've got a Windows application that receives and processes XML messages transmitted via UDP. The application collects the data using Windows "raw" sockets, so the entire layer-3 packet is visible. We've recently run across a problem that has me stumped.

    If the XML messages (i.e., UDP packets) are large (i.e., 1500 bytes), they get fragmented as expected. Ordinarily, this will cause the XML processor to fail, because it attempts to process each UDP packet as if it were a complete XML message. This is a known shortcoming in the system at this stage of its development. On Windows 7, this is exactly what happens: the fragments are received and logged, but no processing occurs. On Windows XP, however, the same fragments are seen, and the XML processor seems to handle everything just fine.

    Does Windows XP automatically reassemble UDP fragments? I could expect this for a normal UDP socket, but it's not expected behavior for a "raw" socket, IMO. Further, if this is the case on Windows XP, why isn't the behavior the same on Windows 7? Is there a way to enable this?

  • Mac OS X: which folders should ClamXav Sentry watch?

    - by trolle3000
    I'm using ClamXav on my Mac. I've read this, and I am aware of the whole Macs-need-no-AV-but-they-do-anyway discussion. I guess that's why I would feel like a real jerk if I somehow managed to compromise my system! So ClamXav has been downloaded and ClamXav Sentry set up to start on log-in, but it doesn't really do anything before you tell it to. Specifically, you have to tell it which folders to watch for viruses, so I'm wondering: where are good places to look? Currently it's set up to watch the following places.

    In my home folder:

    - ~/Downloads
    - ~/Library/Caches
    - ~/Library/Contextual Menu Items
    - ~/Library/Cookies
    - ~/Library/Internet Plug-Ins
    - ~/Library/LaunchAgents

    In the system-wide /Library folder:

    - /Library/Application Support
    - /Library/Caches
    - /Library/Contextual Menu Items
    - /Library/Cookies
    - /Library/Internet Plug-Ins
    - /Library/LaunchAgents
    - /Library/LaunchDaemons
    - /Library/StartupItems

    Basically, this is 100% conjecture: all (or most) of these folders have something to do with the Internet and things that start up automatically, so I'm guessing that's where viruses go. But still, the question: which folders should ClamXav Sentry watch, if any? FYI, I'm not using any mail applications, but please include that in your answer for anyone who might be interested.

  • Get Python to raise MemoryError instead of eating all my disk space

    - by asmeurer
    If I run a Python program with a memory leak, I would normally expect the program to eventually die with MemoryError. Instead, what happens is that all the virtual memory is used until my disk runs out of space. I am running Mac OS X 10.8 on a Retina MacBook Pro, and my computer generally has between 10GB and 20GB free. Mac OS X is smart enough not to die completely when the disk runs out of space (rather, it gives me a dialog letting me force-quit my GUI programs).

    Is there a way to make Python just die when it runs out of real memory, or at least some reasonable amount of virtual memory? This is what happens on Linux, as far as I can tell. I guess Mac OS X is more generous than Linux with virtual memory (the fact that I have an SSD might be part of this; I don't know just how smart OS X is with this stuff). Maybe there's a way to tell the Mac OS X kernel to never use so much virtual memory that it leaves less than, say, 5 GB free on the hard drive?
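
    The closest thing I've found so far is a shell-level cap; a sketch, assuming the limit is actually enforced (on Linux this reliably turns runaway allocation into MemoryError, while OS X's enforcement of ulimit -v is spottier; the script name is just a placeholder):

        # cap the process's virtual memory at 8 GB (the value is in kilobytes)
        ulimit -v 8388608
        python leaky_program.py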

  • Domain registrar transferring

    - by Mike Weerasinghe
    In 2004 I registered a domain name when I opened an account with DiscountASP.NET. I presume my domain registration was handled by a reseller; a DomainTools whois search shows that registration services are provided by Znode LLC. I changed hosting companies and need to change the DNS servers to point to my new hosting company, but I have no idea how to do that - there is no control panel I can access. Ideally I would like to transfer registrars. I emailed Znode support but have not received any response; I called and left a message and they have not called back. My new hosting company wants an EPP authorization code in order to transfer my domain, which I guess I need to get from Znode LLC. Anyone have any ideas on how I might go about transferring my domain over to a new registrar? The domain name has not expired and is currently active. Thanks in advance for your help.
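
    For what it's worth, the same lookup can be done from a command line; a sketch with a placeholder domain:

        # confirm the sponsoring registrar and the domain's transfer status
        whois example.com | grep -i -E 'registrar|status'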

  • Mac Mail sent messages not in sent folder

    - by Elizabeth Akers
    Hi. My PowerBook G4 has been lobbying for a furlough recently, and its latest prank is to not show sent messages in my Sent mailbox. I have always had a copy of every sent message, but now, for some unknown reason, I can only see sent messages from May 7 and earlier. When I did a test and sent myself an email, it showed up fine in my inbox, but there's nothing in the Sent folder. Weird. It also has the little arrow next to messages that I replied to in the inbox. I guess I'm going to have to cc myself on everything until I get this fixed.

    Also, it's got another new trick, which I'm assuming is unrelated, but I mention it here just in case: the battery says it's "good", has a cycle count of 0 and a full charge capacity of 2139, but for some reason it's all of a sudden not charging when it's plugged in. The machine works fine when connected to the wall, but there is no battery life at all. Zip.

    Any thoughts? Is it just time for a new 'book? This one just got back from Apple after having its hard drive replaced for the fourth time (for free, but seriously, this thing is a lemon). Thanks so much, Elizabeth

  • Removing SCIM input method as default from gnome terminal

    - by Mark
    Hello - I am recently back in the Linux world after about a 10-year absence. While I can find my way around most things, terminals and desktop managers are different than I remember. The biggest problem I am encountering today is that when running a GNOME terminal (this is SUSE 10.0 Enterprise), I'm getting behavior in the window that I don't want. Specifically, when I type, my typing is underlined as if something is trying to spell-check my window. Further, it seems as if when running vi or less, my keystrokes are only processed by these apps when I hit Return. I.e., if I'm running less and want to go back a page, I'll hit b, but nothing happens until I hit Return.

    I seem to have tracked this down to the input method. Right-clicking in the GNOME terminal allows me to set my input method to one of a dozen values. It seems that currently it's set to "SCIM Input Method". If I instead select "default" or "X Input Method", apps (i.e. things like less, vi, and even the bash shell) behave as I would expect.

    Can someone tell me a) what this SCIM input method is, and b) how I can make it not the default? I've poked around various configuration files in my home directory as well as in /etc, but I can't seem to find how this is set. I guess as a final question: can I just get rid of SCIM? Or is that tied into the window manager somehow? I do appreciate any clarifications that I can get. Thanks.
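
    One thing I've been meaning to try, based on how input methods are usually selected; a sketch, assuming SCIM is being pulled in through the standard input-method environment variables (e.g. set in ~/.profile or a file under /etc/profile.d):

        # point the GTK and X input methods away from SCIM
        export GTK_IM_MODULE=xim
        export QT_IM_MODULE=xim
        export XMODIFIERS=@im=none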

  • GoDaddy SSL on Shared Hosting

    - by Jon
    I'm very new to using SSL certificates, and I have been trying to install one on a site for a client. He is using shared hosting for multiple domains through GoDaddy, and the site we're working on is not the primary domain. He purchased a UCC certificate for multiple domains, and I installed it on the shared hosting account. My thought was that since the domains were under the same hosting account, they would each be protected by the certificate. This was not the case, apparently. I checked both domains with an SSL checker; the primary domain checked out, but the domain we wanted the SSL on showed the following errors:

        None of the common names in the certificate match the name that was entered (www.CLIENTDOMAIN.com). You may receive an error when accessing this site in a web browser.

    I'm not sure how to fix this. It was just purchased yesterday, so if necessary I guess I could uninstall it or re-key it (???). Is there a way to just change the common name to www.CLIENTDOMAIN.com (the correct domain)?
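
    A way to see exactly which names the installed certificate covers; a sketch using the placeholder domain from above:

        # dump the certificate's subject and its Subject Alternative Names
        echo | openssl s_client -connect www.CLIENTDOMAIN.com:443 2>/dev/null \
          | openssl x509 -noout -subject -text | grep -A1 'Subject Alternative Name'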

  • Spurious alleged file corruption on Windows 7

    - by Johannes Rössel
    Recently my laptop sometimes warns about corrupted files on the hard drive (a Samsung SSD PB22-JS3 TM). This has only happened so far when updating (or checking out) an SVN repository, with either TortoiseSVN or the command-line Subversion client. The fun thing is that the corrupted file has always been a .svn directory (although the directory entry may contain files in that directory too, if they're small enough, which should be the case with SVN). However, when looking into the warned-about directory I notice nothing strange or unusual, and I don't get any more warnings about it; another try at updating the working copy (SVN stops updating once that error occurs; TortoiseSVN even shows an appropriate error message) works - well, mostly; sometimes it happens again, albeit with a different directory.

    Since the laptop is only a few months old, I doubt the SSD is failing already - five months of normal usage shouldn't be too surprising. Also, it has (so far) occurred only with SVN updates on a large repository. Maybe that's too many writes in a short time, and some part between the software and the hardware doesn't quite catch up fast enough? I don't know enough about this to make an informed guess.

    Anyone know what's up here?

    ETA: I've run chkdsk (it seems to schedule itself anyway when this happens) and it didn't find anything out of the ordinary.

  • DHCP Server on local machine

    - by EralpB
    Hello. I am trying to set up dhcp3-server on Ubuntu, but my question is more generic. If the DHCP server is a black box and all clients are connected to it, I think I get what is going on; but when the DHCP server is installed on one of the "clients", that confuses me. When I send a DHCP packet from that client to the DHCP server, will my ethernet card read and write at the same time, or will it handle the request internally without writing any data to the ethernet cable? This is the first time I am encountering these network things, so I am a little bit confused.

    I also wonder: if I am in a big network with the LAN IP, let's say, 192.168.0.100, and I install a DHCP server on my computer, can any other computers accidentally get an IP from my DHCP server? Every computer has one ethernet card (if that matters), and every computer is connected to one router. I guess the answer is no, because the broadcast message won't reach my computer: when the router receives a DHCP discover packet it will answer it, and it won't let other computers know about it because they don't need to. And without the router forwarding that packet, it cannot travel further. I'd be glad if someone enlightens me. Thank you very much.
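
    A way to answer the second question empirically; a sketch, assuming the LAN interface is eth0 (DHCP uses UDP ports 67/68, and requests go out as broadcasts, so any listener on the segment can see them):

        # watch who broadcasts DHCP requests and who answers them
        tcpdump -n -e -i eth0 port 67 or port 68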

  • Are relative-path symlinks reliable on Rackspace Cloud Sites?

    - by Jakobud
    Rackspace's Cloud Sites have a lot of stupid limitations. For example: no SSH (in or out), no shell, no rsync, etc. (even through cron). Recently I learned that you can't reliably use symlinks in Cloud Sites either. Apparently this is because the absolute path of your sites could change at any moment, since it's a shared host environment split up between many disks/servers. I guess different accounts' sites get moved from disk to disk whenever Rackspace decides to, supposedly to increase efficiency across the board. After talking with a Rackspace tech, he said they cannot guarantee that symlinks will always work. Obviously this is because if you have a symlink that uses an absolute path like this:

        /mnt/disk-34566/home/user34566/files/sites/www.mysite.com/mydir

    and your files get moved to a different disk (or whatever they do), then the absolute path would be different and the link would now be broken. That makes sense. So next, I asked the Rackspace tech if relative-path symlinks were reliable. Say I have the following link:

        files/sites/www.mysite.com/mylink --> ../www.myothersite.com/anotherdir

    You can see that the symlink simply points to a nearby directory's sub-directory. He said they cannot guarantee that even those would always work. Since it uses a relative path to another nearby directory, I'm not sure how it could ever break from something Rackspace would do. Do relative symlinks somehow rely on absolute paths underneath? Or is Rackspace using some weird custom filesystem where they will break from absolute path changes? It seems like a relative-path symlink would be fine, and would only break if the user did something to mess up the directories involved. But when the techs say they "don't officially support symlinks of any kind", that makes me hesitant to use them for large commercial websites in Cloud Sites. Can anyone with Rackspace experience give input on this topic?
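
    For reference, this is my mental model of the difference; a sketch with the hypothetical paths from above (a relative symlink stores only the literal relative text and is resolved at access time against the link's own directory, so no absolute path is baked in):

        # create the relative link and inspect what is actually stored in it
        ln -s ../www.myothersite.com/anotherdir files/sites/www.mysite.com/mylink
        readlink files/sites/www.mysite.com/mylink   # prints: ../www.myothersite.com/anotherdir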

  • Failed to mount an NFS share with "Program not Registered"

    - by Farrel
    I'm trying to set up an NFS server on Fedora 17, and I'm getting a "Program not Registered" error when I try to mount. I guess the main reason for this is rpcbind. I'm a newbie in Linux, so I don't know what info I should provide you with; here is some that might be useful.

    rpcinfo -p:

        program vers proto   port  service
        100000    4   tcp    111  portmapper
        100000    3   tcp    111  portmapper
        100000    2   tcp    111  portmapper
        100000    4   udp    111  portmapper
        100000    3   udp    111  portmapper
        100000    2   udp    111  portmapper
        100005    1   udp  20048  mountd
        100005    1   tcp  20048  mountd
        100005    2   udp  20048  mountd
        100005    2   tcp  20048  mountd
        100005    3   udp  20048  mountd
        100005    3   tcp  20048  mountd
        100024    1   udp  42223  status
        100024    1   tcp  50054  status

    cat /etc/exports:

        /home/Farrel/prog 192.168.xxx.xxx (ro,sync)

    service nfs status:

        Redirecting to /bin/systemctl status nfs.service
        nfs-server.service - NFS Server
            Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled)
            Active: active (exited) since Fri, 02 Nov 2012 09:29:04 +0300; 5min ago
           Process: 924 ExecStartPost=/usr/lib/nfs-utils/scripts/nfs-server.postconfig (code=exited, status=0/SUCCESS)
           Process: 909 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS ${RPCNFSDCOUNT} (code=exited, status=0/SUCCESS)
           Process: 885 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
           Process: 864 ExecStartPre=/usr/lib/nfs-utils/scripts/nfs-server.preconfig (code=exited, status=0/SUCCESS)
            CGroup: name=systemd:/system/nfs-server.service
        Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

    The firewall is disabled on both systems. I've spent a lot of time reading on the topic, but all the manuals on setting up an NFS server lead to the same "Program not Registered" error. Any how-to-fix-it ideas?
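
    Two observations from the output above, and a sketch of what I'd try (hedged, since I'm new to this): the rpcinfo listing has no nfs program (100003) registered, only portmapper/mountd/status, and the /etc/exports line has a space between the address and the options, which exportfs parses as two separate entries ("export to 192.168.xxx.xxx with defaults, and to the world with (ro,sync)"):

        # 1. fix /etc/exports - remove the space before the options:
        #    /home/Farrel/prog 192.168.xxx.xxx(ro,sync)
        # 2. restart the services so nfsd re-registers with rpcbind
        systemctl restart rpcbind.service nfs-server.service
        # 3. re-export and verify that program 100003 (nfs) now shows up
        exportfs -ra
        rpcinfo -p | grep nfs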

  • Dell Poweredge 1950 with Perc 5i keeps losing raid config -> "Foreign Configuration Found"

    - by nosage
    The quick and dirty: the machine is a Dell PowerEdge 1950 with dual quad-core Xeons, 8GB of RAM, and 2 2TB Seagate SATA drives in (what is supposed to be) RAID 1, using a PERC 5i RAID card. They are hot-swappable with a backplane. I can build the RAID fine, but after a little while an install of Server 2008 R2 will bluescreen and restart. When it comes up, the RAID controller says "Foreign Configuration Found." When I go into the RAID configuration panel there is no RAID listed, but I can import the "foreign config", and the OS will boot up fine - until it bluescreens again after a little while. The issue is OS-independent.

    I have tried swapping RAID cards, swapping the RAM module on the RAID card, and swapping the RAID battery, all to no avail. It's almost as if there is a loose connection from the RAID card to the backplane: both disks get lost and the RAID card drops the config, yet it sees the disks fine when it boots back up. The RAID card uses a SCSI SAS cable to connect to the backplane, so I guess the next step is to replace that - but then I might as well replace the backplane with a SAS-to-SATA breakout cable, and then I need a way to power the disks. Sorry for the wall of text, but it would be great to get some thoughts from people who have worked with PERC RAID cards or PowerEdge servers with this type of issue before. Ironically, I want to get this system up and running so I can work on MCITP labs. Thank you for any/all help, and feel free to ask questions!

  • What's with the accesses to $random_existing_file/cache/df.php?

    - by Bernd Jendrissek
    Occasionally I eyeball Apache's access_log, and lately I've been noticing accesses to URLs that I don't serve. They're correctly 404'ed, but I'd like to know just who and what is involved here. "Obviously" it's some sort of vulnerability probing; I'd like to know which. (Not that it affects me, but I like to know the score.) Here's an example:

        69.89.31.206 - - [28/Nov/2012:17:36:34 +0200] "GET /cvfull.pdf/cache/df.php HTTP/1.1" 404 489 "-" "-"

    Oddly, all 26 attempts are to either /cache/df.php or /cvfull.pdf/cache/df.php - they come in pairs. A few weeks ago it was zx.php, now it's df.php - I'm assuming the target is the same. Perhaps I should be flattered that a script is thinking of hiring me. Seriously, my CV is one of only two PDF files on my site, so I can only guess that non-PDF URLs aren't interesting? I've tried Googling for "cache df php", but my Google-fu is weak at the best of times, so I can only find a few reports of other script attacks. What's the vulnerability being scanned for here?
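
    To collect all the probes in one place for comparison; a sketch, assuming the default combined log format and log location:

        # list source IP, timestamp and requested path for every df.php/zx.php probe
        grep -E '/cache/(df|zx)\.php' /var/log/apache2/access_log | awk '{print $1, $4, $7}'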

  • Why is Windows 7 rejecting my key?

    - by acidzombie24
    I'm extremely confused. I have a genuine key and CD (I can take photos), and I am trying to install Windows on my new tower. I believe this would be the 3rd PC I've installed it on (old tower, laptop, now the new tower). I did change the HDD once, but I doubt Windows would think it's a different computer because of that. After going through phone activation, it said I had installed it on too many PCs. I'll be happy to deactivate it on my old tower if I knew how; I have already grabbed all the files off of it.

    I tried to look up how many machines I can install Windows Home Premium on and found this: http://en.wikipedia.org/wiki/Windows_7_editions. What stuck out was this row:

        Maximum physical CPUs supported: 1 (Starter, Home Basic, Home Premium); 2 (Professional, Enterprise, Ultimate)

    My new tower has 4 cores (it's an Intel i7), each with two threads, so the OS sees 8. Does that have anything to do with this? But apparently even Ultimate only supports "2". I'm sure Windows supports more than 2 CPUs, so... what gives? Actually, it's physically one CPU, so I guess the number of cores doesn't matter? Why is Windows 7 rejecting my key?

  • Connecting to server from remote machine

    - by Jannat Arora
    I wish to connect my machine to a server in another city. To do so I am using the following command:

        mstsc -v ip_address_of_server

    It fails with:

        Remote Desktop can't connect to the remote computer for one of these reasons:
        1) Remote access to the server is not enabled.
        2) The remote computer is turned off.
        3) The remote computer is not available on the network.
        Make sure the remote computer is turned on and connected to the network, and that remote access is enabled.

    As per previous posts, I need to turn off my client computer's firewall, which I have - but it still gives me the same message. Can someone please help me work out how I may resolve this? I am really new to networking. Also, when I ping it:

        ping ip_address_of_server

    I get the following response:

        Reply from ip_address_of_server: destination host unreachable

    I also tried from Ubuntu with rdesktop, and still couldn't connect. I know there are other people who are able to connect their machines to this server remotely, so I guess it's not working for me only. Also, when I accessed the same machine through the LAN, I was able to do so.
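
    Since ping already reports "destination host unreachable", the problem looks like routing rather than Remote Desktop itself; a sketch of what I'd check next from the Windows client (placeholder address as above):

        # where does the path to the server die?
        tracert ip_address_of_server
        # if routing is fine, does anything answer on the RDP port?
        telnet ip_address_of_server 3389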

  • Mainboard shuts itself off after half a second or so

    - by heishe
    Here's the problem: when I start the PC, the mainboard powers up, stays on for maybe 0.2-0.5 seconds, and then shuts off again. I say mainboard, and not PC, because I removed all the parts from the system and disconnected everything but the mainboard power supply (the broad 12-pin connector). When I have the other parts (CPU, graphics card, RAM, etc.) installed and connected, the basic behaviour stays the same, except that the mainboard runs for about 6 or 7 seconds (this is a guess) before shutting off.

    This all started when my monitor stopped receiving a video signal today, without any POST beeps, so I took the graphics card and the RAM out to see if that changed anything. It didn't, except that from that point on the mainboard has had this behavior where it stays on for a very short time and then shuts off again. I already tested it with a backup PSU: same behavior. What could this be? I'm thinking it can't be on a physical level (transistors burned through or something like that), since then the mainboard either shouldn't start at all or it should detect hardware failures in non-essential parts of the system and start beeping.

    Sorry, I forgot to mention: it's an MSI P67A-C43. I already checked whether any capacitors had popped, but I can't find anything. I also tried resetting the CMOS, but that didn't change anything.

  • Setting up Windows SBS 2008 network on Xen

    - by samyboy
    I'm trying to install a Windows SBS 2008 server in a Xen environment. The OS boots fine; unfortunately, I can't figure out how to set up the network settings. Dom0 is a Debian Lenny box hosting around 10 virtual servers. Here are the settings I'm using in the hosted Windows SBS:

        IP address:   10.20.0.8
        Network mask: 255.255.0.0
        Gateway:      10.20.0.1

    Note that during the installation stage, Windows set the netmask to 255.255.255.0 without letting me choose. Gross. Windows SBS tells me I have a "limited connection". I can't ping the gateway or any other IP except localhost and its own IP (10.20.0.8). Here is the Xen config file:

        kernel = '/usr/lib/xen-3.2-1/boot/hvmloader'
        builder = 'hvm'
        memory = '4096'
        device_model='/usr/lib/xen-3.2-1/bin/qemu-dm'
        acpi=1
        apic=1
        pae=1
        vcpus=1
        name = 'winexchange'

        # Disks
        disk = [ 'phy:/dev/wnghosts/exchange-disk,ioemu:hda,w', 'file:/mnt/freespace/ISO/DVD1_Installation.iso,ioemu:hdc:cdrom,r' ]

        # Networking
        vif = [ 'mac=00:16:3E:0A:D0:1B, type=ioemu, bridge=xenbr0' ]

        # Video
        stdvga=0
        serial='pty'
        ne2000=0

        # Behaviour
        boot='c'
        sdl=0

        # VNC
        vfb = [ 'type=vnc' ]
        vnc=1
        vncdisplay=1
        vncunused=1
        usbdevice='tablet'

    This config works with other Windows XP domUs. I tried changing the ne2000 value between 0 and 1 with no effect. I am far from having good Windows administration skills, so I guess I definitely need some help on this one. Thanks.
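
    What I plan to check on dom0 next; a sketch, assuming the bridge is xenbr0 as in the config above:

        # is the domU's vif actually attached to the bridge?
        brctl show xenbr0
        # can dom0 itself reach the gateway over that bridge?
        ping -I xenbr0 10.20.0.1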

