Search Results

Search found 4692 results on 188 pages for 'shuo ran'.

  • Who should I run mysql as, on a personal computer?

    - by user664833
    I just installed mysql via homebrew (with brew install mysql, on Mac OS X Mountain Lion - recently installed from scratch). Following the installation, there is a "caveats" section with options around further necessary actions to take:

        ==> Caveats
        Set up databases to run AS YOUR USER ACCOUNT with:
            unset TMPDIR
            mysql_install_db --verbose --user=`whoami` --basedir="$(brew --prefix mysql)" --datadir=/usr/local/var/mysql --tmpdir=/tmp
        To set up base tables in another folder, or use a different user to run mysqld, view the help for mysql_install_db:
            mysql_install_db --help
        and view the MySQL documentation:
            * http://dev.mysql.com/doc/refman/5.5/en/mysql-install-db.html
            * http://dev.mysql.com/doc/refman/5.5/en/default-privileges.html
        To run as, for instance, user "mysql", you may need to `sudo`:
            sudo mysql_install_db ...options...
        Start mysqld manually with:
            mysql.server start
        Note: if this fails, you probably forgot to run the first two steps up above.
        A "/etc/my.cnf" from another install may interfere with a Homebrew-built server starting up correctly.
        To connect:
            mysql -uroot
        To launch on startup:
        * if this is your first install:
            mkdir -p ~/Library/LaunchAgents
            cp /usr/local/Cellar/mysql/5.5.27/homebrew.mxcl.mysql.plist ~/Library/LaunchAgents/
            launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
        * if this is an upgrade and you already have the homebrew.mxcl.mysql.plist loaded:
            launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
            cp /usr/local/Cellar/mysql/5.5.27/homebrew.mxcl.mysql.plist ~/Library/LaunchAgents/
            launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
        You may also need to edit the plist to use the correct "UserName".

    On previous versions of Mac OS X I ran mysql as the mysql user, but now I am confronted with the idea of running it as myself. I am the only one who uses this computer (which happens to be my laptop), and I do programming for work and for pleasure. What are the pros & cons, or best practices, around choosing whether to run mysql AS YOUR USER ACCOUNT, as mysql, or as something else entirely?
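
    A quick way to see which account mysqld actually runs as, and who owns the data directory (paths taken from the caveats above; adjust if yours differ):

        # show the user column for the running daemon
        ps -axo user,pid,command | grep '[m]ysqld'
        # the datadir and its ownership should match that user
        mysql -uroot -e "SHOW VARIABLES LIKE 'datadir'"
        ls -ld /usr/local/var/mysql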

  • How do you recreate the System Recovery environment in Windows 7?

    - by Howiecamp
    I'm running Windows 7 Home Premium RTM (64-bit) and I want to take advantage of the system recovery tools (e.g. the Command Prompt) without using the Windows 7 DVD. My understanding is that this environment (WinRE) should be installed to your HDD by default as part of the Windows 7 installation. However, when I hit F8 on boot and select "Repair", I get:

        Windows failed to start. A recent hardware or software change might be the cause. To fix the problem...
        Status: 0xc000000e
        Info: The boot selection failed because a required device is inaccessible.

    The "Info" line seems like the smoking gun. My next step was to boot from the Windows 7 DVD and choose "Repair". It indicated my Recovery Environment wasn't on the Windows 7 boot menu (perfect) and offered to fix it. I said yes and rebooted; however, I hit the same issue as above. In addition, when I booted into Windows 7 and looked at the boot menu options, the recovery/repair option was not there - only my Windows installation. Finally, I ran the Disk Management tool (diskmgmt.msc) and took a look at the contents of my "System Reserved" partition (which was set to "Active" as normal). It's unclear to me what the contents should look like, but it is my understanding that the WinRE environment gets installed to this partition. (As part of the above troubleshooting I followed http://superuser.com/questions/25728/how-to-fix-windows-7-boot-process, which led to http://www.sevenforums.com/tutorials/668-system-recovery-options.html.)

  • "Dictionary problem." Error with VMPlayer

    - by George Mauer
    I'm pretty new to using vmware virtualization (I've been a VirtualBox user), so I'm hoping you guys can help me out. I recently got an external usb disk containing a vm for a client, downloaded vmplayer, set it up with "Open a Virtual Machine", ran it - easy as pie. After working with it a bit this morning, I shut the VM down, and now trying to start it back up again I get an error dialog. I tried removing the vm from my library; now it happens whenever I try to add it back in. In the meantime, I can still access other virtual machines, so it seems like the problem might be with the virtual disk. So, two questions: This is obviously not a very helpful error message - where can I go to get more information? My Application EventLog doesn't contain anything from VMWare. And what steps can I take to fix the problem? Edit: A couple more pieces of information. I did not take any snapshots; I don't think VM Player even has that ability. I have a zip file of (what I assume is) the state of the VM when it was sent to me. I cannot unzip it, as it is huge and simply requires more HD space than I have available, but I did extract the vmx file and examine it. Other than the UUIDs, and the fact that mine reads cleanShutdown = "FALSE", they are identical. The log contains the following lines:

        Jun 23 10:11:18.080: vmx| SNAPSHOT: SnapshotConfigInfoRead: Unable to load dict from 'E:....\MachineName.vmsd'.
        Jun 23 10:11:18.080: vmx| SNAPSHOT: SnapshotConfigInfoRead failed for file 'E:....\MachineName.vmx': Dictionary problem (6)
        Jun 23 10:11:18.082: vmx| SNAPSHOT: Snapshot_TimeStampTiers failed: Dictionary problem (6)

  • GPO - Setting not applied, although policy is applied

    - by Kenny Bones
    This is rather strange. In our domain we have several terminal servers, and this morning a user reported that no drives are mapped when he logs on to the terminal server. So I checked Group Policy Results and compared two users. Both users have the exact same policies applied, but for this particular user, the Scripts section under User Configuration - Policies - Windows Settings is just not there. For the other user, for whom this is working fine, the Scripts section shows the winning GPO as Terminal2008, which is the GPO that contains the script. And the Terminal2008 GPO is applied to both users. Also, loopback processing is set to Replace. What could be the cause of this? I've never seen this particular issue before. I mean, both users are in the same OU, they log on to the same terminal server, and the same policies are applied to both. They do not, however, have the exact same group memberships - but should that matter? It's not stated that the script should run only if the user is a member of a certain group, either; I'm not even sure that could be done through that specific setting. All I know is, the very same policies are applied to both users, in the same OU, on the same computer - meaning the same settings should be applied? Edit: I just ran Group Policy Results on one of the other terminal servers, which is also in the same OU, and the Scripts section is there! This means this particular user doesn't get this setting when he's logged onto this particular server. What could be the cause of this?

  • Macvlan based interface pings from host but not from namespace

    - by jtlebi
    My setup:

        Private network vboxnet1 10.0.7.0/24
        1 host, Ubuntu desktop
        1 VM, Ubuntu server (VirtualBox)

    Addressing layout:

        HOST: 10.0.7.1
        VM: 10.0.7.101
        VM MAC NAMESPACE: 10.0.7.102

    On the VM, I ran the following commands:

        ip netns add mac                            # create a new namespace
        ip link add link eth0 mac0 type macvlan     # create a new macvlan interface
        ip link set mac0 netns mac

    In the mac namespace, inside the VM:

        ip link set lo up
        ip link set mac0 up
        ip addr add 10.0.7.102/24 dev mac0

    So we basically end up with this (like Inception?):

        +------------------------+
        | Host: 10.0.7.1         |
        |                        |
        |  +-------------------+ |
        |  | VM: 10.0.7.101    | |
        |  |                   | |
        |  | +---------------+ | |
        |  | | NS: 10.0.7.102| | |
        |  | +---------------+ | |
        |  +-------------------+ |
        +------------------------+

    What works:

        ping between host and VM
        ping between NS and NS
        dhclient from NS

    What does not work:

        ping between NS and VM
        ping between NS and host

    Where I started to go nuts:

        tcpdump on the host (the real machine) actually shows ARP requests AND replies
        tcpdump in the NS shows ARP requests sent to the host
        tcpdump on the VM makes the whole mess work (!) -- pings start to get answers when tcpdump is started on the VM ?!?

    So, I bet you were eager for it, my question is: how do I make it work? I suspect something's wrong with ARP on the macvlan inside the NS but can't figure out what exactly... Btw, I did the same experiments with the mac0 interface directly on the VM (no namespace) and it worked flawlessly.
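
    One detail worth testing: tcpdump puts the interface it listens on into promiscuous mode, which matches the symptom above (pings start working once tcpdump runs on the VM). A macvlan adds a second MAC address that the virtual switch may be filtering out. A sketch of that theory ("my-vm" and the NIC number are placeholders; the modifyvm command needs the VM powered off):

        # inside the VM: let eth0 accept frames for the macvlan's extra MAC
        ip link set eth0 promisc on
        # on the host: let VirtualBox deliver frames for unknown MACs
        VBoxManage modifyvm "my-vm" --nicpromisc1 allow-all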

  • SPF for two different outgoing servers?

    - by Marcus
    I have run into a problem that I think someone should have a really clever answer for. Today we have our own mail server, "mail.domain.com", which we use to send mail to our customers (with a modified PHPMailer script) - usually around 5000 mails every day. Everything from customer support to invoices goes through there, and the From header is set to "[email protected]". We are now thinking of migrating to Google Apps for internal use (with 70+ users). However, we cannot use Gmail's SMTP for sending "bulk" mail (there is a limit of 500 outgoing mails per day), so we really want to keep using our current system for sending automated mail to our customers - and use Gmail's SMTP for our internal use. So, how do we set up our SPF records (Sender Policy Framework) for this? We do not want to get stuck in any filters for "spoofing" the sender from either type of account (mail sent from our own server, or through Gmail's). In short: we want to be able to use the same e-mail address (for sending) on two different SMTP servers (and therefore two different IP addresses). Anyone with good knowledge of SPF know how to go about this - or whether it is even possible? Anything else I should think of when switching to Google Apps?
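
    For reference, a single SPF record can authorize several senders at once; a minimal sketch, with 203.0.113.10 standing in for mail.domain.com's address and Google's published include covering Google Apps:

        example.com.  IN  TXT  "v=spf1 ip4:203.0.113.10 include:_spf.google.com ~all"

        # verify what resolvers actually see
        dig +short TXT example.com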

  • How to migrate Outlook Express mail rules?

    - by ronwest
    I have a home computer that only had a 15 GB C: drive, and it ran out of space with all the Microsoft updates, etc., that keep coming down. So I fitted a 160 GB drive as a C: drive and altered the drive jumpers to make the old C: drive into a slave D: drive, to save migrating documents, etc. I've installed a clean copy of Windows XP SP3 and reassigned the new Outlook Express's mailstore path to point to the old mailstore folder that now has a D: drive letter - and it all works OK. However, my extensive list of mail rules has not been transferred to the new OE, and I have not been able to identify how the rules are stored. To find out, I added a new rule to the new OE, exited OE, then searched the whole computer (including hidden/system files) for files altered around the time I added the rule. I hoped I could just overwrite a new empty file with an old one. But the only files that seem to have changed are Windows system-level files and some bits and pieces in the Windows\PreFetch sub-folder. None of them can be opened, as XP has them locked, and none of them have names that are anything to do with email or rules. Does anyone know of any way of migrating OE rules, or do I have to re-enter them by hand? Thanks!

  • Master File Table Corrupt, any way to save data?

    - by domen
    Hi. I've used search, but none of the results match my problem, so I had to ask a separate question. I installed Windows 7 RTM recently, and since then the partitions located on one of my HDDs have gone "crazy". They used to "freeze" and not open in Explorer for some time (a minute or two, usually); sometimes none of the drive's partitions would show up until a reboot; and finally, one of those partitions started showing a "disk structure is corrupted and unreadable" warning, appeared in the Disk Management window as RAW, and chkdsk reported "mft corrupt". There was no important data on that partition and I didn't have enough time to analyze the problem at the moment, so I just reformatted it and ran an antivirus scan on the system. After that the problem settled down for some time, but yesterday the problematic HDD vanished from the system again. After a reboot, chkdsk identified the MFT of four partitions as corrupt, and now they are all in the same condition as the one mentioned above. The difference is that the files stored on them are extremely important. And just for info: I had upgraded from Win7 build 7077, but had some performance issues, so I reformatted the system drive and installed a fresh Win7 RTM on it. I've downloaded TestDisk; it shows all the partitions marked as NTFS (not RAW), but my knowledge of the program wasn't sufficient to obtain any other info from it :-) Here are images that could help describe the problem (sorry, I'm not allowed to post images and more than one hyperlink): http:// img22.imageshack.us/img22/5909/chkdskz.jpg http:// img198.imageshack.us/img198/5576/computeray.jpg I'm interested in whether there is a way to restore the MFT, or just to access the files so I can back them up before reformatting the drive. Thanks for your time. :) P.S. My reformatted drive is showing no problems - could there be a problem with Windows 7 itself? I googled, but with no results.
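
    Before attempting any repair, it is usually worth imaging the affected partitions so each recovery attempt can be retried; a sketch with GNU ddrescue (run from a Linux live CD, for instance; /dev/sdb5 is a placeholder for one of the RAW partitions):

        # copy the raw partition to an image on a healthy disk, logging progress
        ddrescue /dev/sdb5 /mnt/backup/part5.img /mnt/backup/part5.log
        # repair tools (TestDisk, chkdsk against a mounted copy, etc.) can then
        # be pointed at the image instead of the original disk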

  • Can't access server sound card when VNC'd into Ubuntu server

    - by Corey Kennedy
    I've set up my Ubuntu 10 server with xfce, nxserver, and now tightvncserver so that I can control it remotely from my Windows 7 laptop. NX is working fine for remote access, but when I run (for example) exaile, no sound is sent through the server's sound card. I installed tightvncserver and connected, but ran into the same problem. Exaile opens, sound isn't muted, and I can see that sound cards are installed (via cat /proc/asound/cards), but I can't seem to get the remote sessions to access the server's sound card. Also, just to confirm that the sound card was working, I hooked up a monitor/keyboard to the server and opened a local xfce session. That worked fine. While I had the local session running, I was also able to open a remote session with NXClient and start exaile - which then successfully piped sound to the local card. After disconnecting the monitor/keyboard and moving the box back to its normal spot, though, I was not able to play sound via either an NX or VNC session. Does anyone have any suggestions? Surely it's possible to configure my remote sessions to pipe sound to the server's sound card, right? Or at least get xfce up and running without a monitor or keyboard, but with access to the sound card, so I can VNC into it? Thanks!
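
    Assuming the server uses PulseAudio (the Ubuntu default), one way to let remote X sessions reach the local sound card is to expose the daemon over loopback TCP; a sketch, not tested against this exact setup:

        # on the server (or add the module line to /etc/pulse/default.pa):
        pactl load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1
        # inside the NX/VNC session, point audio clients at that daemon:
        export PULSE_SERVER=127.0.0.1
        exaile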

  • Run a rails server on Amazon EC2 [on hold]

    - by Jashwant
    Context: I've tried the rubber gem, but it does not fulfill my requirements (I needed to deploy on an existing instance, so please don't recommend rubber). Instead, I followed this excellent tutorial: http://stackoverflow.com/questions/15535140/installing-ruby-2-0-and-rails-4-0-0beta-on-aws-ec2 Now I have Ruby 2.0 and Rails 4.0.0 running on AWS EC2. I successfully ran the server with RDS (MySQL) as the db and the default WEBrick as the server (using the command rails server). But I've read that WEBrick is a development server and shouldn't be used in production. What I tried: I googled and came up with some alternatives:

        Capistrano
        Nginx / Apache with Passenger
        Passenger with Capistrano
        Unicorn
        Puma

    My questions: What exactly are Capistrano and Passenger? Are they middleware to ease my deployment process? I don't see any difficulty in running the rails server command. If they are just middleware, does "nginx with Passenger and Capistrano" even make sense? Why would I add a learning curve (learning nginx, Passenger, and Capistrano configs) just to run my server? Can't I just use nginx to deploy my app? And what combination should I use on Amazon EC2 (or on some other production server)?
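
    For context: Capistrano automates deploys (it is not a server at all), while Passenger, Unicorn, and Puma are production application servers that replace WEBrick; nginx sits in front as the HTTP proxy. A minimal sketch of the Unicorn route (port and paths are placeholders):

        # after adding `gem "unicorn"` to the Gemfile:
        bundle install
        cd /path/to/app
        # daemonize Unicorn on a local port; nginx would proxy :80 to it
        RAILS_ENV=production bundle exec unicorn -p 8080 -D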

  • Upgrading PHP, MySQL old-passwords issue

    - by Rushyo
    I've inherited a Windows 2k3 server running an XAMPP installation from the stone age. I needed to upgrade PHP to facilitate an upgrade to MediaWiki to facilitate a new MediaWiki extension (to facilitate some documentation to facilitate doing my job to facilitate getting paid to facilit... you get the idea). However... installing a new version of PHP resulted in PHP's MySQL libraries refusing to communicate using MySQL's 'old style' 16-character password hashes. Not a problem in theory: the MySQL installation is post-4.1, so it should have the functionality to upgrade a user's password from the old 16-character hash to the newer 41-character one (what a weird hashing scheme...). I ran the following on MySQL:

        SET PASSWORD = PASSWORD('foo');

    but querying:

        SELECT user, password FROM mysql.user;

    returned just the same password I started out with - the old short hash. Now... I suspect you're thinking 'AHA! old-passwords is on!'. Unfortunately it's not - I've disabled it in the configuration (explicitly set it to 0), made doubly sure I have an absolute reference to that configuration file, and ensured the service isn't using the --old-passwords flag. The service was restarted after each and every operation. So I went onto another system, generated the 41-character hash on there, and copied the hash over to the first MySQL instance. Unfortunately, that didn't work either (and I did remember to FLUSH PRIVILEGES). The application error is:

        mysqlnd cannot connect to MySQL 4.1+ using the old insecure authentication. Please use an administration tool [...snip...]

    Is there anything else I can try to get PHP to recognise MySQL as not using the 'old insecure authentication'? MySQL seems to be stuck in 'old-passwords' mode and I can't get it out of it.
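
    One thing that can silently defeat the config change is the session variable: if old_passwords was still in effect for the session that ran SET PASSWORD, the short hash is simply regenerated. A sketch that forces the new format for one session ('webuser'@'localhost' is a placeholder):

        mysql -uroot -p <<'SQL'
        -- make this session generate 4.1+ hashes regardless of the server default
        SET SESSION old_passwords = 0;
        SET PASSWORD FOR 'webuser'@'localhost' = PASSWORD('foo');
        FLUSH PRIVILEGES;
        -- old hashes are 16 characters long, new ones 41
        SELECT user, host, LENGTH(Password) FROM mysql.user;
        SQL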

  • Computer does not boot, often

    - by tam
    I've run into an issue with my computer where it no longer reaches POST; it simply powers on for a fraction of a second and powers off. But not always: sometimes it boots normally and works as it should, with no sign of insufficient power or anything. But as soon as I turn it off, I cannot turn it back on - and then, at some random point, it just powers up again and resumes normal operation. If I disconnect the 8-pin ATX connector from the motherboard, it powers up, fans and disks spinning normally, until I power it off again. So the problem only happens when the ATX connector is attached, which seems odd - I always used to see this kind of error when the ATX connector was not connected, but here it's the exact opposite. It also does not emit any sound on the buzzer, except the normal beep when it powers up normally. I have already tried:

        Removing the graphics card
        Removing one and/or all RAM sticks
        Disconnecting everything non-essential, even hard drives
        Clearing CMOS

    I have not yet tried removing all components and booting everything outside of the case, because I did not have the time to disassemble and bleed the water loop. However, I can confirm that nothing is stuck underneath the motherboard, nor are any of the brass risers touching the board where they should not. Specs:

        Gigabyte GA-970A-UD3
        AMD FX6300
        ATI HD7850

    I think this should be enough for this issue.

  • Chef bash resource not executing as specified user

    - by Arthur Maltson
    I'm writing a Chef cookbook to install Hubot. In the recipe, I do the following:

        bash "install hubot" do
          user hubot_user
          group hubot_group
          cwd install_dir
          code <<-EOH
            wget https://github.com/downloads/github/hubot/hubot-#{node['hubot']['version']}.tar.gz && \
            tar xzvf hubot-#{node['hubot']['version']}.tar.gz && \
            cd hubot && \
            npm install
          EOH
        end

    However, when I run chef-client on the server installing the cookbook, I get a permission-denied error writing to the home directory of the user that runs chef-client, not the hubot user. For some reason, npm is trying to run as the wrong user, not the user specified in the bash resource. I am able to run sudo su - hubot -c "npm install /usr/local/hubot/hubot" manually, and this gets the result I want (installs hubot as the hubot user). However, it seems chef-client isn't executing the command as the hubot user. Below you'll find the chef-client output. Thank you in advance.

        Saving to: `hubot-2.1.0.tar.gz'
        0K ......    100%  563K=0.01s
        2012-01-23 12:32:55 (563 KB/s) - `hubot-2.1.0.tar.gz' saved [7115/7115]
        npm ERR! Could not create /home/<user-chef-client-uses>/.npm/log/1.2.0/package.tgz
        npm ERR! Failed creating the tarball.
        npm ERR! couldn't pack /tmp/npm-1327339976597/1327339976597-0.13104878342710435/contents/package to /home/<user-chef-client-uses>/.npm/log/1.2.0/package.tgz
        npm ERR! error installing log@1.2.0 Error: EACCES, permission denied '/home/<user-chef-client-uses>/.npm/log'
        ...
        npm not ok
        ---- End output of "bash" "/tmp/chef-script20120123-25024-u9nps2-0" ----
        Ran "bash" "/tmp/chef-script20120123-25024-u9nps2-0" returned 1
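
    A plausible cause, given the paths in that error: Chef's script resources switch the uid but can leave HOME pointing at the invoking user's home, and npm writes its cache under $HOME. A sketch to test that theory outside Chef, with -H resetting HOME to the target user's:

        # run the same install as the hubot user with HOME set correctly
        sudo -u hubot -H bash -c 'cd /usr/local/hubot/hubot && npm install'
        # if this succeeds where chef-client fails, passing HOME (and USER)
        # via the resource's environment attribute is the likely fix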

  • Reset rc.d so software starts at boot again

    - by natli
    I ran the following two commands on my VPS box, and now it boots without starting any software at all. According to rcconf it's still supposed to start my chosen software (ssh etc.), but it doesn't.

        update-rc.d vz defaults
        update-rc.d vzeventd defaults

    I already tried removing them again with:

        update-rc.d -f vz remove
        update-rc.d -f vzeventd remove

    but that didn't change anything. /etc/rc.local also still correctly lists some scripts I want to run at start-up, but they don't seem to be called either. I expect the top two commands to be responsible, but here's everything I did:

        mkdir /var/openvz-dl
        cd /var/openvz-dl
        wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-2.6.32-042stab062.2.x86_64.rpm
        wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-devel-2.6.32-042stab062.2.x86_64.rpm
        wget http://download.openvz.org/utils/vzctl/4.0/vzctl-4.0-1.x86_64.rpm
        wget http://download.openvz.org/utils/vzctl/4.0/vzctl-core-4.0-1.x86_64.rpm
        wget http://download.openvz.org/utils/ploop/1.5/ploop-1.5-1.x86_64.rpm
        wget http://download.openvz.org/utils/ploop/1.5/ploop-lib-1.5-1.x86_64.rpm
        wget http://download.openvz.org/utils/vzquota/3.1/vzquota-3.1-1.x86_64.rpm
        apt-get install fakeroot alien
        fakeroot alien --to-deb --scripts --keep-version vz*.rpm ploop*.rpm
        dpkg -i vz*.deb ploop*.deb --force-overwrite
        update-rc.d vz defaults
        update-rc.d vzeventd defaults
        reboot

    A huge part of that failed because I was running it on an OpenVZ VPS, which has a shared kernel that can't be altered, so I also had to fix dpkg like so (it was moaning about wanting to install vzkernel with a package not being found):

        rm /var/lib/dpkg/info/vzkernel*
        dpkg-reconfigure vzkernel --force
        dpkg --purge --force-all vzkernel

    But that didn't fix the boot issue either. How do I make my software start at boot again?
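
    To see what is actually left wired into the boot sequence, and to rebuild the links for a service that no longer starts, something like this (ssh used as the example service):

        # start/stop links for the default runlevel (2 on Debian/Ubuntu)
        ls -l /etc/rc2.d/
        # recreate the links for one service
        update-rc.d -f ssh remove
        update-rc.d ssh defaults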

  • Moving a lot of small files between servers using rsync

    - by Adirael
    Hello guys, I'm moving a lot of files (about 2 million) between two servers in different locations using rsync over ssh. It seems to work fine, but I just realised I'm losing some files in the process. I have server 1, with the original data, and server 2, with the copy. Server 1 runs CentOS 5 and server 2 runs Ubuntu 10. I'm doing the transfer from server 2's command line like this:

        rsync -e ssh -avzn usr@server1:/remote/path /local/path

    The first file movement I did using tar, but I didn't think of piping it through ssh, and it failed because the disk on server 1 was almost full, so I transferred it anyway (it was about 200GB) and got about 80% of the files. Then I piped another tar with the rest of the files (they're in folders - I have 100 folders with about 30 subfolders each, with files inside), and now I have everything on server 2. I wanted to be sure, so my two options were getting the md5sum of all the files and checking them, or running an rsync on server 2 against server 1 - which is what I did. It picked up some missing stuff, and now it says there's nothing more to do (DRY RUN). But there are at least two files missing inside a subfolder. I ran that same rsync on that folder, but it's still a dry run. Am I doing something wrong? Thanks, and sorry for the wall of text.
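
    For the verification pass, forcing checksum comparison (-c) avoids trusting size and mtime, and itemizing shows exactly what differs; a sketch, still a dry run thanks to the -n (drop it to actually copy):

        rsync -e ssh -avcn --itemize-changes usr@server1:/remote/path /local/path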

  • Can't find Partition tab in Disk Utility, OS X ver. 10.6.8

    - by John W
    I just got a used MacBook Pro. I created a new admin account and deleted the old one, as well as one other user. This is an older late-2007 MBP; the OS X upgrade to 10.6.8 was just performed. My Macintosh HD is showing up as Partition 2. I ran Disk Utility (not from the install disk), but there was no Partition tab. I have a 160 GB drive with only 53 GB of space left on it. Since I am the only user and have no files on the laptop yet, I don't understand why there is so little space left - surely the OS can't use up over 100 GB. I wanted to run Disk Utility to see if there were any recovery partitions, or other partitions left over from the previous owner, that could be erased to make room for expanding the main partition. Unfortunately, there is no Partition tab in Disk Utility. The documentation I have found online states that this version of OS X includes that utility. The OS X disks I have are for an older version, so I wasn't sure if they would be of any use in solving this problem. Also, I was afraid that by using the disks I would lose the little bit of data/apps that I have assembled. I would rather not do a fresh install and have to do all the updates again. The previous owner had some apps that I don't want to lose, as I would have to pay handsomely to get them back. Simply put: if the previous users' data is still taking up space on a partition that I can't see, I need to locate it, erase it, and expand the primary partition to re-acquire the disk space for my files. I am new to Mac, so please be as descriptive as possible. Thanks.
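
    To find out where the space actually went before touching partitions, the Terminal can size each top-level folder (leftover folders under /Users from the previous owner are a common culprit); a sketch:

        df -h /
        # per-folder totals in MB, largest first, staying on this volume
        sudo du -smx /* 2>/dev/null | sort -rn | head -15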

  • IP to IP forwarding with iptables [centos]

    - by FunkyChicken
    I have two servers: server 1 with IP 1.1.1.1 and server 2 with IP 2.2.2.2. My domain example.com points to 1.1.1.1 at the moment, but very soon I'm going to switch to IP 2.2.2.2. I have already set a low TTL for domain example.com, but some people will still hit the old IP for a while after I change the domain's IP address. Both machines run CentOS 5.8 with iptables, and nginx as the webserver. I want to forward all traffic that still hits server 1.1.1.1 to 2.2.2.2 so there won't be any downtime. Now, I found this tutorial - http://www.debuntu.org/how-to-redirecting-network-traffic-a-new-ip-using-iptables - but I cannot seem to get it working. I have enabled IP forwarding:

        echo "1" > /proc/sys/net/ipv4/ip_forward

    After that, I ran these two commands:

        /sbin/iptables -t nat -A PREROUTING -s 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE

    But when I load http://1.1.1.1 in my browser, I still get the pages hosted on 1.1.1.1 and not the content from 2.2.2.2. What am I doing wrong?
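
    One thing to double-check in rules like these: the PREROUTING match. Traffic for the old address arrives with 1.1.1.1 as its destination, not its source, so a -s match catches nothing; a sketch of the destination-based version:

        iptables -t nat -A PREROUTING -d 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        # rewrite the source on the way out so replies come back through server 1
        iptables -t nat -A POSTROUTING -d 2.2.2.2 -p tcp --dport 80 -j SNAT --to-source 1.1.1.1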

  • KVM virtual machine unable to access internet

    - by peachykeen
    I have KVM set up to run a virtual machine (Windows Home Server 2011 acting as a build agent) on a dedicated server (CentOS 6.3). Recently, I ran updates on the host, and the virtual machine is now unable to connect to the internet. The virtual network is running through NAT, the host has an interface (eth0:0) set up with a static IP (virt-manager shows the network and its IP correctly), and all connections to that IP should be sent to the guest. The host and guest can ping one another, but the guest cannot ping anything above the host, nor can I ping the guest from anywhere else (I can ping the host). Results from the guest to another server under my control and from an external system to the guest both return "Destination port unreachable". Running tcpdump on the host and destination shows the host replying to the ping, but the destination never sees it (it doesn't even look like the host is bothering to send it on at all, which leads me to suspect iptables). The ping output matches that, listing replies from 192.168.100.1. The guest can resolve DNS, however, which I find rather odd. The guest's network settings (connection TCP/IPv4 properties) are set up with a static local IP (192.168.100.128), mask of 255.255.255.0, and gateway and DNS at 192.168.100.1. When originally setting up the vm/net, I had set up some iptables rules to enable bridging, but after my hosting company complained about the bridge, I set up a new virtual net using NAT and believe I removed all the rules. The VM's network was working perfectly fine for the last few months, until yesterday. I haven't heard anything from the hosting company, didn't change anything on the guest, so as far as I know, nothing else has changed (unfortunately the list of packages updated has since fallen off scrollback and I didn't note it down).
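
    Since host updates can flush or reorder netfilter rules, the libvirt-managed NAT rules are worth checking first; a sketch, assuming the NAT network is libvirt's usual "default":

        sysctl net.ipv4.ip_forward
        iptables -t nat -L POSTROUTING -n -v | grep 192.168.100
        iptables -L FORWARD -n -v
        # libvirt inserts its MASQUERADE/FORWARD rules when the network starts,
        # so bouncing it recreates them
        virsh net-destroy default && virsh net-start default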

  • Windows Server 2012 licensing issue preventing RDP connections?

    - by QF_Developer
    I am witnessing unusual behaviour on one of five Windows Server 2012 R2 machines (clean install) that is preventing any remote connections from being established via RDP. I have run through the prerequisites for RDP here, but I am finding that any remote connection attempt instantly stops the Software Protection service. When I check the event logs I see the following entry:

        The Software Protection Service has stopped
        Event ID: 903
        Source: Security-SPP

    From what I have read, Security-SPP is tasked with enforcing activation and licensing, and it appears that RDP requires this service to be in the running state. Is it possible that I have inadvertently activated this instance of Windows with a key that has already been associated with another instance (we have five keys as part of an MSDN subscription)? Would this be sufficient to block RDP access? When I look under System Properties (Windows Activation) it states that Windows is activated, and there are no other obvious indicators of a licensing issue. EDIT 1: I ran a PowerShell script to display the product keys for all servers. For the problematic server, I get the message "The RPC server is unavailable."

  • Deploying multiple identical copies of a virtual machine for compute tasks

    - by Reid
    I have a compute task which has a large number of library dependencies. I would like to deploy it on some of my company's large Linux clusters, where I do not have root. I could probably track down, compile, and install the right versions of all the libraries, but this looks to be quite tedious and would have to be repeated if I deployed it again somewhere else. On the other hand, it's pretty easy to install on current Ubuntu. This led me to wonder about a virtual machine approach. Could I put together a virtual machine which booted up, ran the computation (with parameters from and results to the host), and then shut down? In other words, I'd like a command like this that I could run on the host:

        $ ./run-vm --ram N --task /path/on/host/foo.sh --results /another/host/dir/

    This would boot the VM, run foo.sh, and put the (relatively small) results of the computation in /another/host/dir/. It's important to start up many instances of the VM simultaneously, both on a single node and on multiple nodes of the cluster. So it would be nice if I didn't have to make many copies of the VM's virtual disk and metadata. As the task instances are completely independent, the VMs would not need any network support once deployed, or any outside communications beyond reading and writing the host filesystem. Is this possible, and if so, how might I go about doing it? Are there assumptions I've made above which are bogus?
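
    This is workable without root using QEMU with copy-on-write images: one read-only base disk plus a tiny overlay per instance instead of full copies. A sketch (all names are placeholders; -F names the backing format and can be omitted on older qemu-img versions):

        # per-instance overlay backed by the shared base image
        qemu-img create -f qcow2 -b base.qcow2 -F qcow2 clone01.qcow2
        # boot one instance headless; parameters and results could travel via a
        # virtfs-shared host directory or a small per-instance scratch disk
        qemu-system-x86_64 -m 2048 -nographic -drive file=clone01.qcow2,if=virtio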

  • All applications quit when printing on Mac OS X 10.5.8

    - by Tamany
    I recently ran a software update. I'm not sure whether my problems are associated with it, but I'm fairly sure they are, as I printed successfully before the update. I checked the log at the time of printing:

        03/05/2010 22:03:15 Microsoft Word[697] *** -[NSCFString _getValue:forType:]: unrecognized selector sent to instance 0x17a82b50
        03/05/2010 22:03:15 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:17 [0x0-0x51051].com.microsoft.Word[697] Mon May 3 22:03:17 leopards-imac-2.local Word[697] <Error>: The function `CGPDFDocumentGetMediaBox' is obsolete and will be removed in an upcoming update. Unfortunately, this application, or a library it uses, is using this obsolete function, and is thereby contributing to an overall degradation of system performance. Please use `CGPDFPageGetBoxRect' instead.
        03/05/2010 22:22:09 Microsoft Word[697] *** -[NSCFString _getValue:forType:]: unrecognized selector sent to instance 0x1b036500

    Any thoughts on how to fix this?

  • Rkhunter reports file properties have changed

    - by CountMurphy
    I am running a fully updated LTS copy of Ubuntu Server. Today I ran rkhunter (as I do from time to time). This is the output I got:

        Warning: The file properties have changed:
        [15:52:25] File: /bin/ps
        [15:52:25] Current hash: f22991ec93ae966c856d367f42fc3d8a484bd827
        [15:52:25] Stored hash : 1892268bf195ac118076b1b0f53e7a637eb6fbb3
        [15:52:25] Current inode: 142902    Stored inode: 130894
        [15:52:25] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:25] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:33] File: /usr/bin/ldd
        [15:52:33] Current hash: f1e2ca5aa3a28994e2cebb64c993a72b7d97b28c
        [15:52:33] Stored hash : 295d9cedb121a5e431a39a6d201ecd7ce5640497
        [15:52:33] Current inode: 2236210    Stored inode: 2234359
        [15:52:33] Current size: 5280    Stored size: 5279
        [15:52:33] Current file modification time: 1331165514 (07-Mar-2012 16:11:54)
        [15:52:33] Stored file modification time : 1295653965 (21-Jan-2011 15:52:45)

        Warning: The file properties have changed:
        [15:52:37] File: /usr/bin/pgrep
        [15:52:37] Current hash: 3eada9a96760f3e2c9111cfe32901d1432813c1d
        [15:52:37] Stored hash : ce265d0db9964b173fe5036f703a9b8d66e55df3
        [15:52:37] Current inode: 2229646    Stored inode: 2224867
        [15:52:37] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:37] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:41] File: /usr/bin/top
        [15:52:41] Current hash: 6be13737d8b0950cea2f1ae3a46d4af713dbe971
        [15:52:41] Stored hash : c7b495ecef3982eeb6f08a511861b1a1ae8775e6
        [15:52:41] Current inode: 2229629    Stored inode: 2224862
        [15:52:41] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:41] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:53] File: /usr/sbin/cron
        [15:52:53] Current hash: e783ca973f970aa8a4bf5edc670e690b33914c3d
        [15:52:53] Stored hash : 4718257a8060736b9058aed025c992f02a74a5a7
        [15:52:53] Current inode: 2224719    Stored inode: 2228839
        [15:52:54] Current file modification time: 1330965568 (05-Mar-2012 08:39:28)

    There were also a few others I left out. Has my server been rooted? I am running fail2ban and I do monitor failed ssh logins; nothing has come up. Could someone compare these hashes to their copy of Ubuntu Server (LTS)? Please tell me these are false positives... Edit: is there something else like rkhunter that I can run for a second scan?
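
    Those paths all belong to dpkg-managed packages, so the package database can arbitrate between "rooted" and "legitimately updated"; a sketch (debsums may need installing first):

        # which packages own the flagged files
        dpkg -S /bin/ps /usr/bin/ldd /usr/bin/pgrep /usr/bin/top /usr/sbin/cron
        # verify installed files against the packages' recorded checksums
        sudo debsums -c
        # check whether the modification dates line up with recorded upgrades
        grep -i " upgrade " /var/log/dpkg.log | tail
        # if it all traces back to legitimate updates, refresh rkhunter's baseline
        sudo rkhunter --propupd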

  • Need help trying to allow my remote PowerShell script to run on my Windows 2008 r2 Server

    - by Pure.Krome
    I've got a Windows 2008 R2 server with SQL Server 2008 R2 installed. I've got a SQL Server Agent job which tries to run a PowerShell step, but it fails:

        Message
        Executed as user: FooServer\SqlServerUser. A job step received an error at line 1 in a PowerShell script. The corresponding line is '& '\\polanski\Backups\Database\7ZipFooDatabases.ps1''. Correct the script and reschedule the job. The error information returned by PowerShell is: 'File \\polanski\Backups\Database\7ZipFooDatabases.ps1 cannot be loaded. The file \\polanski\Backups\Database\7ZipFooDatabases.ps1 is not digitally signed. The script will not execute on the system. Please see "get-help about_signing" for more details.. '. Process Exit Code -1. The step failed.

    OK. So I ran PowerShell on that server and set the execution policy to Unrestricted. To check:

        PS C:\Users\theUser> Get-ExecutionPolicy
        Unrestricted
        PS C:\Users\theUser>

    Kewl :) But it still doesn't work :( OK... what happens when I try to run the PowerShell script from the command line?

        PS C:\Users\justin.adler> . '\\polanski\Backups\Database\7ZipMotorshoutDatabases.ps1'

        Security Warning
        Run only scripts that you trust. While scripts from the Internet can be useful, this
        script can potentially harm your computer. Do you want to run
        \\polanski\Backups\Database\7ZipFooDatabases.ps1?
        [D] Do not run  [R] Run once  [S] Suspend  [?] Help (default is "D"):

    Er... didn't I already tell the server that ANY file can be run? Notice the file is located at \\polanski\Backups\Database\ - so can someone make any suggestions?

  • How do I cancel a Windows Server 2003 repair install?

    - by Kilgore2k
    System: Windows 2003 Server Enterprise. Scenario: the NTDS database is corrupt, and all attempts to fix it with esentutl fail. I ran chkdsk, which seemed to repair a disk error and give access to the ntds.dit file, but esentutl still fails. (I attached the drive to a different server to run esentutl.) The error:

        Access to source database '[path to copy of]/ntds.dit' failed with Jet error -1022.
        Operation terminated with error -1022 (JET_errDiskIO, Disk IO error) after 0.170 seconds.

    This error occurs on any disk I copy the files to, including the original location in C:\WINDOWS\NTDS\. Now enter the "Stupid!" and "What was I thinking!?" part (must be the late hour...). Stupid: no updated backup - after restoring from a backup I get a network password error from lsass. What was I thinking!?: I started the repair install from the original CD, but the install fails since AD fails to start. Now I can't boot into any mode (safe mode, AD restore, etc.) nor complete the repair install. I would really like to avoid a fresh install, since I have the Exchange server on this DC and would rather migrate to a new server than start from scratch. Thanks!

  • After installing Windows 7, I can no longer get to Debian 6 (dual boot)

    - by Jeremy
    I had Debian 6 on my machine (Dell Vostro 260) and used GParted to shrink its partition. I then installed Windows 7 in the freed space. After Windows 7 installed, I could not choose which OS to run; it would just boot into Windows. I ran GParted again and saw that Windows had created another partition, labeled "System Reserved". That partition had the boot flag set. I tried moving the boot flag to other partitions, including my Debian partition and one with a "linux-swap" file system. No option would actually load an OS or anything except for the Windows partition, which is not what I want. Is it worth trying to fix this installation, or should I start over? I have all my data backed up, so I can easily install from scratch if I need to. If I do start from scratch, which OS should I install first? And then, how do I set up the partitions to install the other OS? Let me know if you have any questions. Thanks for your help.
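
    For reference, the usual repair here is to reinstall GRUB 2 from a live CD rather than start over, since Windows overwrote the boot code; a sketch assuming the Debian root is /dev/sda2 (adjust to the real layout):

        sudo mount /dev/sda2 /mnt
        for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
        sudo chroot /mnt grub-install /dev/sda
        sudo chroot /mnt update-grub   # os-prober should pick up Windows 7
        # leave the boot flag on the partition Windows created ("System Reserved")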
