Search Results

Search found 55281 results on 2212 pages for 'get set'.


  • Clone virtual machine with Server 2008 R2 and Hyper-V?

    - by bwerks
    Hi all, I've recently started working with Hyper-V, and so far it's quite nice. However, I've been running into problems with what seems like it should be the most basic of workflows. I've set up a baseline Server 2008 R2 configuration, and exported it with the intention of using the export for cloning. I entered "C:\Exports\" as the export folder. However, I run into problems when I try to import the image. From the Hyper-V manager, I select "Import Virtual Machine" and in the resulting window I entered "C:\Exports\BuildServer\" as the folder, set the radio button to "Copy the virtual machine (create a new unique ID)" and checked the checkbox for "Duplicate all files so the same virtual machine can be imported again." Doing so results in the following error:

        Import failed. Import task failed to copy file from 'H:\Exports\BuildServer\Virtual Hard Disks\BuildServer.vhd' to 'C:\Hyper-V\Virtual Hard Disks\BuildServer.vhd': The file exists. (0x80070050)

    Have I somehow messed something up in configuration? Or is this a known issue? I've read that it should be possible to clone VMs by copying them in the filesystem, but I'd prefer to keep things in the management UI if possible.


  • NSclient++ NRPE issues

    - by Kyle
    I have had NSClient++ working with Nagios for a while now. Recently I started testing Nagwin, just to see how it would work, out of pure curiosity. I stopped checking a test server with my main Nagios config, set NSClient++ to NRPE mode, and pointed Nagwin at it. It worked great for a few hours, then suddenly I started seeing "UNKNOWN: No Handler for that command." I figured it had to be Nagwin's fault since it's so new, so I unloaded NRPEListener.dll and returned the server to being monitored by check_NT. However, now check_NT doesn't work either: my main Nagios server returns timeout errors and is unable to connect at all. My Nagwin server can connect to it; the server just doesn't know how to handle the check_NRPE commands, even though it did, with no changes, a few hours earlier. I have been working on this for a day now and am fairly certain NSClient++ is to blame here. My Nagwin box has successfully stayed connected to a similar server throughout the night without any issues, and my main Nagios config is not having any problems at all. I have been able to successfully switch another server between being monitored by Nagios and Nagwin without any problems, simply by loading and unloading the NRPE.dll. I have tried uninstalling NSClient++ and reinstalling with a fresh configuration, but I still receive the errors. As of now the firewall is off on the server, NSClient++ is set up to accept connections from any server, there is no password, I have also turned SSL off, and the NRPE module is loaded. Any ideas would be appreciated; I am not an advanced Nagios user, but I do know my way around it and can easily break it down and set it up again. I also want to add that while in test mode, NSClient++ is unable to handle check_NRPE commands there either.
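
    For anyone debugging something similar: the NRPE side can be exercised directly from the Nagios host with the check_nrpe plugin. A minimal sketch; the plugin path, host address, and command name here are placeholders for whatever your install uses:

        # with no -c, the daemon just reports its version, which proves the NRPE channel itself works
        /usr/local/nagios/libexec/check_nrpe -H 192.168.1.50

        # then try a real handler; "No Handler for that command" at this point means the
        # connection is fine but the command is not defined/loaded on the NSClient++ side
        /usr/local/nagios/libexec/check_nrpe -H 192.168.1.50 -c check_cpu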


  • Windows 2003 R2 x86 Mini dump fails to write to disk

    - by Randy K
    I have 3 blade servers that are blue-screening with a 0xC2 error, as far as we can tell randomly. When it started happening I found that the servers weren't set to provide a dump, because they each have 16GB RAM and a 16GB swap file divided over 4 partitions in 4GB files. I set them to provide a small dump file (64K minidump), but the dump files aren't being written. On startup the server event log reports both Event ID 45, "The system could not successfully load the crash dump driver.", and Event ID 49, "Configuring the Page file for crash dump failed. Make sure there is a page file on the boot partition and that it is large enough to contain all physical memory." My understanding is that the small dump shouldn't need a swap file large enough for all physical memory, but the error seems to point to this not being the case. The issue of course is that the max swap file size is 4GB, so this seems to be impossible. Can anyone point me where to go from here?
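
    As a first diagnostic, the dump configuration Windows is actually using can be read and changed from the command line with the built-in wmic tool; a sketch (the value meanings are the standard ones, not verified on these particular blades):

        rem DebugInfoType: 1 = complete dump, 2 = kernel dump, 3 = small (64 KB) minidump
        wmic recoveros get DebugInfoType, DebugFilePath, MiniDumpDirectory

        rem force the small-dump setting from the CLI rather than the System Properties UI
        wmic recoveros set DebugInfoType = 3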


  • run script as another user from a root script with no tty stdin

    - by viktor tron
    Using CentOS, I want to run a script as user 'training' as a system service. I use daemontools to monitor the process, which needs a launcher script that is run as root and has no tty standard in. Below I give my four different attempts, which all fail.

    1:

        #!/bin/bash
        exec >> /var/log/training_service.log 2>&1
        setuidgid training training_command

    This last line is not good enough, since for training_command we need the environment for the training user to be set.

    2:

        su - training -c 'training_command'

    This looks like it (http://serverfault.com/questions/44400/run-a-shell-script-as-a-different-user) but gives 'standard in must be a tty', as su makes sure a tty is present to potentially accept a password. I know I could make this disappear by modifying /etc/sudoers (a la http://superuser.com/questions/119376/bash-su-script-giving-an-error-standard-in-must-be-a-tty) but I am reluctant and unsure of the consequences.

    3:

        runuser - training -c 'training_command'

    This one gives runuser: cannot set groups: Connection refused. I found no sense or resolution to this error.

    4:

        ssh -p100 training@localhost 'source $HOME/.bashrc; training_command'

    This one is more of a joke, to show desperation. Even this one fails with Host key verification failed (the host key IS in known_hosts, etc).

    Note: all of 2, 3, and 4 work as they should if I run the wrapper script from a root shell. Problems only occur if the system service monitor (daemontools) launches it (no tty terminal, I guess). I am stuck. Is this something so hard to achieve? I appreciate all insight and guidance to best practice. (This has also been posted on superuser: http://superuser.com/questions/434235/script-calling-script-as-other-user)
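
    A possible variation on attempt 1, building the login environment by hand instead of asking su/runuser for a tty; a minimal sketch, assuming the training user's home is /home/training and daemontools' setuidgid lives in /usr/local/bin:

        #!/bin/bash
        exec >> /var/log/training_service.log 2>&1
        # clear the root environment and construct the one a login would have provided
        exec env - HOME=/home/training USER=training LOGNAME=training \
            PATH=/usr/local/bin:/usr/bin:/bin \
            /usr/local/bin/setuidgid training \
            bash -c 'source "$HOME/.bashrc"; exec training_command'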


  • How do you host multiple public facing websites on a VPS?

    - by pedroarvy
    We host about 30 websites on typical shared hosting plans using ASP.NET and SQL 2000/2005/2008. I am now wondering about hosting all of these websites on our own virtual private server. This is clearly cheaper, but comes with a lot of questions I need answers to:

    - Is the risk of having to keep this VPS server up and running worth it? Until now, the host provider has managed the server and we have not had to worry about crashes, downtime, software patches etc. We are not server administrators, we are programmers, so this is not really our expertise. On the other hand, it may not be hard to learn.

    - When we make a website live, we log in to a domain management control panel and change the primary and secondary name servers to point to our shared web host, e.g. ns1.sharedwebhost.com and ns2.sharedwebhost.com. These name servers are going to have to change when we have a VPS. I don't understand anything about how to set this up; is there some useful info anyone could direct me to, or is there software we need to install to make the primary and secondary name servers work on our VPS? (See the sketch after this list.) The control panel we have for shared hosting comes with DNS management like this: http://www.yart.com.au/stackoverflow/dns.png. What software would I need to install to create this for each site we host on a VPS?

    - The control panel we have for shared hosting also comes with a POP email interface that allows email addresses to be added easily by our customers. Is this something that can easily be set up on a VPS so clients can manage their own email addresses? Is there software we need to install to make this work?
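
    On the name server question, the common options are either running your own DNS server (e.g. BIND) on the VPS, or leaving DNS with your registrar or a hosted DNS service and simply pointing A records at the VPS address. For a sense of what the records look like, here is a minimal example zone; all names and the 203.0.113.10 address are placeholders:

        $TTL 3600
        @       IN  SOA ns1.example.com. hostmaster.example.com. (
                        2012071001 ; serial
                        7200       ; refresh
                        900        ; retry
                        1209600    ; expire
                        3600 )     ; negative-answer TTL
                IN  NS  ns1.example.com.
                IN  NS  ns2.example.com.
                IN  A   203.0.113.10
        www     IN  A   203.0.113.10
        @       IN  MX  10 mail.example.com.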


  • Upgrading Ubuntu 9.04 to 9.10 when Update Manager doesn't let you

    - by nickf
    I've been trying to upgrade my installation of Ubuntu 9.04 to 9.10, but all of the instructions I've found haven't been helping. They mostly say to run the update manager and it'll tell you that there's a new distribution ready. Well, mine doesn't say that. Things I've run or checked:

    update-manager -d says:

        Your system is up-to-date
        The package information was last updated less than one hour ago.

    I've set it to get all new distributions, not just LTS:

        $ cat /etc/update-manager/release-upgrades
        [DEFAULT]
        # default prompting behavior, valid options:
        #  never  - never prompt for a new distribution version
        #  normal - prompt if a new version of the distribution is available
        #  lts    - prompt only if a LTS version of the distribution is available
        Prompt=normal

    I'm definitely running 9.04:

        $ lsb_release -r
        Distributor ID: Ubuntu
        Description:    Ubuntu 9.04
        Release:        9.04
        Codename:       jaunty

    Even running the release upgrade from the console doesn't help:

        $ sudo do-release-upgrade
        Checking for a new ubuntu release
        No new release found

    This is running from behind a proxy, but I've set it up such that the regular upgrades, apt-get etc. don't complain (export http_proxy=http://myuser:mypass@myserver:8080/). Can you think of anything else which might be stopping me from upgrading?
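
    One hedged guess worth testing: sudo normally strips http_proxy from the environment, so do-release-upgrade may be running without the proxy even though plain apt-get (run with the exported variable) is happy. Passing the proxy through explicitly is a quick check:

        # keep the proxy visible to the elevated command (credentials as in the export above)
        sudo http_proxy=http://myuser:mypass@myserver:8080/ do-release-upgrade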


  • 426 Connection closed; transfer aborted.

    - by Jiaoziren
    Hi, I have an IIS FTP site set up on Windows 2003 SP2 (S1). Every day in the early morning, a script on another server (S2) runs and initiates an FTP transfer, pulling log files from S1 to S2. The FTP client we're using is the built-in FTP.exe in Windows 2000 on S2. Recently we replaced S1 with a new server, but we kept the IP address; there are multiple IP addresses on the new S1. Ever since the new S1 was in place, the '426 Connection closed; transfer aborted.' errors have been occurring randomly. The log indicates that the transfer starts OK but the file cannot be transferred completely, as per the log below:

        mget access*.log
        200 Type set to A.
        200 PORT command successful.
        150 Opening ASCII mode data connection for access02232010.log(205777167 bytes).
        426 Connection closed; transfer aborted.
        ftp: 20454832 bytes received in 283.95Seconds 72.04Kbytes/sec.

    The firewall monitor suggested that the connection was set up in passive mode; however, I've been told that MS FTP.exe doesn't support passive mode, though I can see the 'entering passive mode' response from the server when typing 'quote pasv'. My network admin has told me to try the transfer in active mode, but I don't know how to enable active mode on the client side. It's getting really frustrating. I hope someone here with the right knowledge/experience can shed some light. Cheers.


  • Zimbra MTA settings

    - by user192702
    Hi, I have some questions about Zimbra v8.0.6GA. Under Configure - MTA - Network, I'm seeing a few settings and am not very clear what to do with them.

    Web mail MTA Host name
    Is this for delivering local mail only (i.e. not for external mail)? According to this link, it says the following:

        Webmail MTA is used by the Zimbra server for composed messages and must be the location of the Postfix server in the Zimbra MTA.

    That's a mouthful, but what are "composed messages"? Is this for a multi-server deployment where the Postfix server for Zimbra isn't installed on the same box as the rest of the servers?

    Relay MTA for external delivery
    My understanding after reading the doc is that if my ISP doesn't force me to relay outgoing mail through them, and I have enabled DNS lookup, I can leave this blank?

    Inbound SMTP host name
    Sorry, I know this is explained as "If your MX records point to a spam-relay or any other external non-Zimbra server, enter the name of that server in the Inbound SMTP host name field", but I'm not following. Can someone provide an example?

    MTA Trusted Networks
    The admin doc says "To set up MTA trusted networks on a per server basis, make sure that MTA trusted networks have been set up as global settings and then go to the Configure Servers MTA page and in the MTA Trusted Networks field enter the trusted network addresses for the server." However, I see that out of the box it has default networks set up for the server, whereas at the global level it's blank. Does this mean there is a bug in the install software and I have to copy the setting from the server to the global setting?
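
    On the trusted-networks point, the global and per-server values can be compared from the CLI with zmprov; a rough sketch (the hostname and network ranges are placeholders):

        # inspect the global and the per-server values
        zmprov gcf zimbraMtaMyNetworks
        zmprov gs mail.example.com zimbraMtaMyNetworks

        # set the global value explicitly
        zmprov mcf zimbraMtaMyNetworks "127.0.0.0/8 192.168.1.0/24"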


  • Use icacls to make a directory read-only on Windows 7

    - by Dave G
    I'm attempting to test some filesystem exceptions in a Java based application. I need to find a way to create a directory located under %TMP% that is set to read-only. Essentially, on UNIX/POSIX platforms I can do a chmod -w and get this effect. Under Windows 7/NTFS this is of course a different story. I'm running into multiple issues on this. My user has "administrative" rights (although this may not always be the case) and as such the directory is created with an ACL including:

        NT AUTHORITY\SYSTEM
        BUILTIN\Administrators
        <my current user>

    Is there a way using icacls to essentially get this directory into a state where it is read-only PERIOD, do my test, then restore the ACL for removal?

    EDIT: With the information provided by @Ansgar Wiechers I was able to come up with a solution. I used the following:

        icacls dirname /deny %username%:(WD)

    In the page located here I found this in the remarks section:

        icacls preserves the canonical order of ACE entries as:
        * Explicit denials
        * Explicit grants
        * Inherited denials
        * Inherited grants

    By performing the above icacls command, I was able to set the current user's ability to write or append files (WD) to the directory to deny. Then it was a question of returning it to a state post-test:

        icacls dirname /reset /t /c

    Done


  • Can I change the "From" name and address without going into Mail.app's preferences?

    - by Arjan van Bentem
    Can I somehow add an editable "From" field to Mail.app, a bit like Virtual Identity for Thunderbird, just like I can change the "Reply To" address on the fly? In Mail.app, one can set up multiple email addresses for a single account by just entering a comma-separated list like [email protected], [email protected]. Next, when composing a message, Mail offers a dropdown to select a "From" address, and when replying†, it automatically selects the right address if it can find a match. Nice, but I'd like to be able to change the "From" on the fly, without going into the Account Information. Also, in previous versions of Mail one could even specify multiple Full Names for a single account:

        Email Address: Arjan <[email protected]>, Arjan on SU <[email protected]>

    But nowadays, Mail only uses the setting of Full Name, and even ignores names in the Email Address field when no value for Full Name is set at all. Hence, it would be great if I could change the Full Name on the fly as well. I have had no luck finding a plugin for Mail yet.

    † When using sub-addressing, anything that is sent to [email protected] is simply delivered to [email protected]. I'd then like to reply with the same full address, rather than just [email protected] if Mail cannot find a match, without going into the Account Information first. I sometimes also want to compose a new message with a new sub-addressing address.


  • VMware Fusion configuration files missing

    - by jdmuys
    I need to set up port forwarding to my VM in Fusion 5. Everywhere on the net, the solution is described as editing the file:

        /Library/Application Support/VMware Fusion/vmnet8/nat.conf

    However, on my install that file doesn't exist. Neither does the vmnet8 directory. Here is the full content of VMware stuff I have in /Library/Application Support/:

        /Library/Application Support/
            VMware/
                VMware Fusion
                AdminWritable
                Shared
                vmInventory
                usbarb.rules
            VMware Fusion/

    That's right: /Library/Application Support/VMware Fusion/ exists but is empty. And there is no VMware folder in other Library directories on my system. I am running OS X 10.8.2. I just reinstalled Fusion 5.02, no change. Meanwhile, I have 3 VMs that work just fine. So how am I supposed to set up port forwarding with Fusion 5? Thanks, JD

    Edit: on a hunch, I tried ps ax | grep natd, which returned:

        9646 ?? S 0:00.01 /Applications/VMware Fusion.app/Contents/Library/vmnet-natd -s 7 -m /Library/Preferences/VMware Fusion/vmnet8/nat.mac -c /Library/Preferences/VMware Fusion/vmnet8/nat.conf

    So it seems the configuration files are now in the directory /Library/Preferences/VMware Fusion. I'll work from here and edit this question as I make progress.


  • Hide account from login screen but can be used in UAC

    - by tvanover
    So I have a Windows 7 Home machine with 2 user accounts. One is a standard user account and one is an administrator account. Now, this is going to be put in the hands of a very low-tech user, so I don't want them to be able to see the administrator account at logon, but they want to have a password to prevent someone else from using the machine. My goal is that when the user turns on the computer, they are presented with their login. After logging in to their non-administrator account, if something needs to be installed then the administrator account can be used through UAC. I tried creating the reg key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList and adding a DWORD named after the account, set to 0. It succeeded in hiding the account from the login screen, as well as hiding it from UAC. So it fails the second requirement, of being able to run things as administrator through UAC. Also, since I didn't set an administrator password (left it blank), it seems that I have completely locked myself out of the machine, since runas doesn't accept blank passwords. So I also cannot undo it, and have quite effectively bricked the install, prompting an OS reinstall. This is Windows 7 Home, so there is no Users management console.
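
    For reference, a sketch of the registry change described above plus its undo (the account name is a placeholder; as noted, on Windows 7 hiding an account this way also hides it from UAC):

        rem hide 'AdminUser' from the welcome screen
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v AdminUser /t REG_DWORD /d 0 /f

        rem undo: delete the value (run from an elevated prompt while you still have one)
        reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v AdminUser /f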


  • How do I connect two computers with a LAN cable?

    - by John
    I have two machines: a desktop running Windows XP and a laptop running Windows 7. I connected them with a LAN cable. On the Windows XP machine, I set the IP address to 192.168.0.10. On the Windows 7 laptop, I set the IP address to 192.168.0.20. The laptop can see the Windows XP machine, but the Windows XP machine cannot see the Windows 7 machine. But this does NOT concern me. I want to move the files from my desktop (Windows XP) to the Windows 7 laptop; that's why I'm going through all this. The problem is that when I try to connect from the Windows 7 machine to the Windows XP machine, I get a window asking for credentials. I don't understand what username/password is needed. I use none on the Windows XP machine. I tried all usernames with no success. Please explain in detail how to solve my problem so I can connect to my Windows XP machine.

    EDIT: Maybe this can help: the Windows XP machine is named 'I' and '???????? III' is the name of the laptop. Both computers share one workgroup: WORKGROUP.
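
    If it helps, the connection can also be attempted explicitly from the Windows 7 side; the share name and account here are placeholders. One hedge worth knowing: by default XP limits accounts with blank passwords to console logon only, so the XP account may need a password before network logons will succeed.

        rem map the XP machine's share; replace SharedDocs and xpuser with the real share and account
        rem the trailing * prompts for the password
        net use Z: \\192.168.0.10\SharedDocs /user:xpuser *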


  • ipvsadm lists a few hosts by IP only, rest by name

    - by dmourati
    We use keepalived to manage our Linux Virtual Server (LVS) load balancer. The LVS VIPs are set up to use a FWMARK as configured in iptables.

        virtual_server fwmark 300000 {
            delay_loop 10
            lb_algo wrr
            lb_kind NAT
            persistence_timeout 180
            protocol TCP
            real_server 10.10.35.31 {
                weight 24
                MISC_CHECK {
                    misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.31"
                    misc_timeout 30
                }
            }
            real_server 10.10.35.32 {
                weight 24
                MISC_CHECK {
                    misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.32"
                    misc_timeout 30
                }
            }
            real_server 10.10.35.33 {
                weight 24
                MISC_CHECK {
                    misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.33"
                    misc_timeout 30
                }
            }
            real_server 10.10.35.34 {
                weight 24
                MISC_CHECK {
                    misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.34"
                    misc_timeout 30
                }
            }
        }

    http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.fwmark.html

        [root@lb1 ~]# iptables -L -n -v -t mangle
        Chain PREROUTING (policy ACCEPT 182G packets, 114T bytes)
        190M 167G MARK tcp -- * * 0.0.0.0/0 w1.x1.y1.4 multiport dports 80,443 MARK set 0x493e0
         62M  58G MARK tcp -- * * 0.0.0.0/0 w1.x1.y2.4 multiport dports 80,443 MARK set 0x493e0

        [root@lb1 ~]# ipvsadm -L
        IP Virtual Server version 1.2.1 (size=4096)
        Prot LocalAddress:Port Scheduler Flags
          -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
        FWM  300000 wrr persistent 180
          -> 10.10.35.31:0                Masq    24     1          0
          -> dis2.domain.com:0            Masq    24     3          231
          -> 10.10.35.33:0                Masq    24     0          208
          -> 10.10.35.34:0                Masq    24     0          0

    At the time the realservers were set up, there was a misconfigured DNS for some hosts in the 10.10.35.0/24 network. Thereafter, we fixed the DNS. However, the hosts continue to show up only by their IP numbers (10.10.35.31, 10.10.35.33, 10.10.35.34) above.

        [root@lb1 ~]# host 10.10.35.31
        31.35.10.10.in-addr.arpa domain name pointer dis1.domain.com.

    The OS is CentOS 6.3. Ipvsadm is ipvsadm-1.25-10.el6.x86_64. The kernel is kernel-2.6.32-71.el6.x86_64. Keepalived is keepalived-1.2.7-1.el6.x86_64. How can we get ipvsadm -L to list all realservers by their proper hostnames?


  • Win7 Pro x64 task manager hangs when restarting explorer.exe after waking from sleep

    - by Brandon Dybala
    I have a desktop running Windows 7 x64 Pro, set for hybrid sleep, on a wired network. Wakeup is only enabled from the keyboard (wake-on-mouse and Wake-on-LAN are both disabled). Sometimes when it wakes up, there is no network connectivity. The notification area icons for both network and volume don't respond to clicks. If I open the Network and Sharing Center, clicking the red X doesn't do anything. Restarting does fix the problem, but I'm looking for a solution that does not require restarting (if at all possible). Drivers are all up to date. I've tried opening Task Manager and restarting the explorer.exe process, but Task Manager freezes for a few minutes, the "New Task" dialog closes, and explorer.exe has not restarted. CPU and memory usage are both normal. One thread suggested making sure the BIOS was set for S3 sleep mode only (not S1, or S1 & S3), but I haven't checked this yet. Going back to sleep and waking back up does not help. So far only a reboot has fixed the issue.

    System specs:

        Windows 7 x64 Pro
        Asus P8Z68-V PRO/GEN3
        128 GB Crucial m4 SSD (firmware version 0309)
        Intel Core i7 2600 3.4 GHz
        16 GB RAM

    Any ideas? Brandon
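
    Two built-in commands that may help narrow this down, run from an elevated prompt:

        rem lists which sleep states (S1/S3/hybrid) the firmware and drivers actually support
        powercfg /a

        rem reports what last woke the machine, handy for sanity-checking the wake settings
        powercfg /lastwake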


  • Does any Certificate Authority support both SAN and wildcards?

    - by nicholas a. evans
    My basic quandary is that wildcard certificates don't support subdomains of subdomains, nor do they help with alternate domain names. Basically, if my CN is example.com, I want a Subject Alternative Name field that looks roughly like so:

        DNS:example.com
        DNS:*.example.com
        DNS:*.beta.example.com
        DNS:example.net
        DNS:*.example.net
        DNS:*.beta.example.net

    Using a self-signed cert, I verified that the browsers will work just fine with this. Unfortunately, none of the Certificate Authorities that I looked into (Thawte, GoDaddy, Verisign, Digicert) seemed to support both wildcard certs and Subject Alternative Name (sometimes referred to as "Multiple Domain UCC"). I even called up GoDaddy tech support to confirm. Is there a CA (trusted by 99% of browsers) that supports wildcards in the Subject Alternative Name?

    One little restriction: I'm saddled with Amazon EC2's single Elastic IP per instance limitation. Here are what I see as my backup plans:

    - Set up three extra EC2 instances, each configured for a different IP address and cert, and nginx reverse proxy from three of them into the app server(s). This introduces latency(?), and even the cheapest EC2 instance isn't that cheap.

    - Instead of dedicated reverse proxy instances, set up four or more almost identical EC2 app servers, with nginx using the port to determine which cert to deliver, and use haproxy to distribute the traffic amongst themselves. Complicated to configure and manage? I'm not using the cheapest EC2 instance type for my app servers, so if I don't need 4+ app servers for the load, it raises the cost.

    - Set up an external server (outside of EC2) that doesn't have EC2's Elastic IP address restrictions, set up all of the alternate IP addresses and certificates on that server, and nginx reverse proxy from that server into the EC2 app servers. Extra IP addresses are almost free (you still need to pay for the server of course), but don't come with the robust "elasticity" that Amazon's Elastic IPs provide, and this has even more latency than the first scenario.

    Are these approaches crazy or reasonable? Do you have another one to suggest?
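
    For anyone wanting to reproduce the self-signed test, a rough sketch with openssl; all names are the placeholders from the question, and the config-file route works on older OpenSSL versions that lack -addext:

        cat > san.cnf <<'EOF'
        [req]
        distinguished_name = dn
        x509_extensions    = v3_req
        prompt             = no
        [dn]
        CN = example.com
        [v3_req]
        subjectAltName = DNS:example.com, DNS:*.example.com, DNS:*.beta.example.com, DNS:example.net, DNS:*.example.net, DNS:*.beta.example.net
        EOF

        # one self-signed cert carrying both wildcard and alternate-domain SAN entries
        openssl req -x509 -new -nodes -newkey rsa:2048 -days 365 \
            -keyout san.key -out san.crt -config san.cnf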


  • Why does bash sometimes think my $HOME isn't the correct directory?

    - by Adam Yanalunas
    Like the title says, it seems that bash sometimes misidentifies my $HOME. This cropped up after a seemingly unique series of events that I will now replay in broad strokes:

    - Running OS X 10.6 with a normal, local account
    - Work binds my account to Active Directory
    - Much time passes with no issues
    - Set up rvm to manage Ruby installs (this becomes important later)
    - Upgraded to OS X 10.7 a few days ago
    - After a successful install, attempted to log in; was presented with a "Must reset password" dialog that never allowed a password to be reset. It would simply shake the box after the new password was entered.
    - Much googling was done. Much more googling was done. Swearing was had.
    - Logged in as root, created a new account, set it as admin, deleted /Users/[new account], renamed /Users/[old account] to /Users/[new account]
    - Logged out of root, logged into the new account with no issues
    - After OS X asked for my account password a few times to update Keychain and other system-level stuff, it was back to business as usual.

    Opened Terminal, cd to a project folder, tried "rails server" and was presented with:

        /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find rails (>= 0) amongst [] (Gem::LoadError)
            from /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec'
            from /usr/local/lib/ruby/1.9.1/rubygems.rb:1210:in `gem'
            from /usr/local/bin/rails:18:in `<main>'

    Ran through a few exercises, decided to rm -rf ~/.rvm and reinstall. Running a --trace on the rvm installer shows it dies on this line:

        mkdir: /Users/[old account]: Permission denied

    Scrolling back through the --trace log I see many more mentions of /Users/[old account]. When inspecting the install script, the offending line is looking at "${HOME}/.rvm" as it tries to run the mkdir. To my confusion, I also see mentions of /Users/[new account] in the log. I've tried exporting a new HOME in my .bash_profile, with no luck. Can anyone guess why /Users/[old account] would still be kicking around?
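
    Two places worth interrogating when OS X shells disagree about $HOME; dscl reads what Directory Services hands to every new session (the account name is a placeholder), and the grep looks for a stale path baked into shell startup files:

        # what Directory Services believes the home directory is
        dscl . -read /Users/myaccount NFSHomeDirectory

        # what the current shell actually inherited
        echo "$HOME"

        # look for the old path hard-coded in startup files
        grep -H "Users/" ~/.bash_profile ~/.bashrc ~/.profile ~/.rvmrc 2>/dev/null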


  • Perl TDS character sets

    - by skiphoppy
    I'm using the FreeTDS driver with DBD::Sybase, connecting to an MS SQL Server. When I query certain values of certain records, I get this error:

        DBD::Sybase::st fetchrow_arrayref failed: OpenClient message: LAYER = (0) ORIGIN = (0) SEVERITY = (9) NUMBER = (99)
        Server , database
        Message String: WARNING! Some character(s) could not be converted into client's character set. Unconverted bytes were changed to question marks ('?').

    This seems to happen for records that contain special Windows character-set characters, such as curly quotes, copied and pasted from people's Outlook and Word messages. Unfortunately, I do not have any control over this database; sanitizing the input on the way in is obviously the way to go, but is not available to me. What FreeTDS settings do I need to change to be able to successfully query these records?

    Additional information: the query works fine from tsql. I only get this error through Perl's DBD::Sybase interface. (Should I test through something else? I don't have the expertise yet to install PHP or Python. I've got jTDS and can use it, but I think that's a completely different implementation, not an interface to FreeTDS.) Adding client charset = UTF-8 to my freetds.conf file results in "Out of memory!" printed to STDERR.
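
    In case it helps, a freetds.conf sketch showing where the charset setting lives; the 'tds version' line is an assumption worth testing, since the very old 4.2 protocol negotiates character sets poorly and has been associated with odd failures when UTF-8 is requested:

        # /etc/freetds.conf (or ~/.freetds.conf)
        [global]
            tds version = 8.0
            client charset = UTF-8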


  • memcached append() php ubuntu - bad protocol

    - by awongh
    I am running Ubuntu Gutsy (7.10), PHP 5, and I am trying to get memcached running locally. I installed everything as per the docs: the memcached daemon, the PHP PECL extension, libevent, etc. But now I can only run half of the example script for memcached append():

        <?php
        $m = new Memcached();
        $m->addServer('localhost', 11211);
        $m->setOption(Memcached::OPT_COMPRESSION, false);
        $m->set('foo', 'abc');
        $m->append('foo', 'def');
        var_dump($m->get('foo'));
        ?>

    The script terminates at append() with an RES_BAD_PROTOCOL error message. It still runs the get(). I don't know why memcached would otherwise be working fine (connect, set, get, with the correct value of 'abc') and not work for append. It also doesn't work with prepend. I believe I have the setup correct, but I am not sure. Maybe there are compatibility problems between the versions of the dependencies? Thanks much.
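
    To rule the PHP extension in or out, append can be exercised against the daemon directly over the text protocol; a sketch with netcat, mirroring the key above. (One hedge worth noting: if memory serves, append/prepend only arrived around memcached 1.2.4, so a daemon old enough to ship with Gutsy may simply not support them, which this test would reveal as an ERROR reply.)

        # raw protocol: append <key> <flags> <exptime> <bytes>
        # expect STORED, STORED, then VALUE foo 0 6 / abcdef
        printf 'set foo 0 0 3\r\nabc\r\nappend foo 0 0 3\r\ndef\r\nget foo\r\nquit\r\n' | nc localhost 11211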


  • Can a wifi AP act as a client, and a server at the same time?

    - by nbolton
    I feel this is SF-worthy (as opposed to SU) as I go into a bit of detail on gateways/routing. Here's my ideal setup (if possible): there is a wifi network (let's call it bob's) which I want access to, but I have a few other computers on my network which I want to keep behind a firewall. So I was thinking of buying a wireless access point so that I could set it up to connect to bob's network from the AP, and then, from my server, connect to the AP via ethernet. So that's the first bit. The second part is that I want to have my own private wifi network off the back of this: can I then tell the AP to serve a new network called foobar? When I say private network, I mean that my server is actually a Debian Linux install with routing configured (on which I also do some QoS stuff, etc). So ideally, I'd like all the clients on the private network to be behind the server in terms of routing. However, if the private clients connect to the server via wifi, then aren't they exposed to the "public" network? That is, if someone is savvy enough to scan for my IP range. Also, to do routing I'd need to connect two ethernet cables between the server and the AP (because you can't do routing/QoS on virtual devices), which isn't a problem really, but I'm not sure whether the AP will allow me to separate the public and private LANs. Or, as well as the AP, am I better off getting a wifi-to-ethernet adapter for the server? I could use a USB wifi adapter, but this can be tricky to set up on headless Linux, plus the signal strength is a bit lousy. If this question is a bit vague/spurious in places, please comment and I will explain in more detail.
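
    On the routing side, the Debian box can already do the NAT/firewall part with a few iptables rules; a minimal sketch, assuming eth0 faces the AP/public side and eth1 the private LAN (interface names are placeholders):

        # enable forwarding for this boot (persist it via /etc/sysctl.conf)
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # masquerade private clients out through the public-facing interface
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # allow private -> public, and only established/related traffic back in
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT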


  • SQL Server 2008 unique problem bringing a DB online...

    - by Nai
    This is the error I am facing:

        TITLE: Microsoft.SqlServer.Smo
        Set offline failed for Database 'Go3D_Retailer'
        ------------------------------
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\ftrow_Go3D_catalog.ndf".
        Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)".
        Database 'Go3D_Retailer' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
        ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5120)

    Background to this error: I've been trying to move my log shipping destination database to another physical server for analysis purposes. Because I do not have Active Directory set up, I had to hack my process by using the same username/password on both the source and destination servers to get the process to work. Following that, I used this guy's solution to move the destination database to another server. However, this error occurs when I try to bring the database back online. I don't have an E drive on my server and I have no idea why it's trying to open a file from the E drive. I have over 100 GB left on my hard disk, so it's definitely not a space issue. This sounds like a bug... Any ideas? I'm running SQL Server 2008 Enterprise edition on Windows Server 2008 R2 64-bit.
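
    Purely as a hedged sketch (the logical file name and the new path are guesses inferred from the error text): if the full-text catalog's .ndf kept a path from the old machine, the file location can be repointed while the database is offline, then the database brought online:

        rem repoint the catalog file to a path that exists on this server
        sqlcmd -S . -E -Q "ALTER DATABASE [Go3D_Retailer] MODIFY FILE (NAME = N'ftrow_Go3D_catalog', FILENAME = N'D:\Data\ftrow_Go3D_catalog.ndf');"

        rem then try bringing it online again
        sqlcmd -S . -E -Q "ALTER DATABASE [Go3D_Retailer] SET ONLINE;"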


  • Oracle 10g for Windows does not start up on system boot

    - by Mike Dimmick
    We have an Oracle 10g Enterprise Edition installation (10.2.0.1.0) on a Windows Server 2003 virtual machine. It was initially created with Virtual Server 2005 R2 SP1 but has now been migrated to Windows Server 2008 Hyper-V. The services start on system boot, but the instance does not start up. This problem was actually occurring on Virtual Server after a migration from one server to another, but I managed to fix it then with:

        oradim -edit -sid ORCL -startmode auto

    However, this now has no effect. oradim.log (in %OracleHome%\database\oradim.log) says:

        Thu Jun 10 14:14:48 2010
        C:\oracle\product\10.2.0\db_3\bin\oradim.exe -startup -sid orcl -usrpwd * -log oradim.log -nocheck 0
        Thu Jun 10 14:14:48 2010
        ORA-12560: TNS:protocol adapter error

    sqlnet.log in the same folder has:

        Fatal NI connect error 12560, connecting to:
        (DESCRIPTION=(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=oracle)(ARGV0=oracleorcl)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'))(CONNECT_DATA=(SID=orcl)(CID=(PROGRAM=C:\oracle\product\10.2.0\db_3\bin\oradim.exe)(HOST=ORACLE-VM)(USER=SYSTEM))))

        VERSION INFORMATION:
        TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
        Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 10.2.0.1.0 - Production
        Time: 10-JUN-2010 14:14:48
        Tracing not turned on.
        Tns error struct:
        ns main err code: 12560
        TNS-12560: TNS:protocol adapter error
        ns secondary err code: 0
        nt main err code: 530
        TNS-00530: Protocol adapter error
        nt secondary err code: 2
        nt OS err code: 0

    The ORA_ORCL_AUTOSTART registry value is set to TRUE, so it should be auto-starting, and you can see that it's trying to. The problem also occurs when stopping and restarting the OracleServiceORCL service. I've enabled SQL*Net tracing, which shows:

        [10-JUN-2010 15:09:33.919] snlpcss: entry
        [10-JUN-2010 15:09:34.419] snlpcss: Unable to spawn Oracle oracle (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) orcl, error 2.
        [10-JUN-2010 15:09:34.419] snlpcall: exit

    On a hunch that error 2 is Windows error 2 (file not found), I tried restarting the service with Process Monitor watching oradim.exe, but this appears to delay things just enough that it always works. Right now I have a horrible hack where I've created a Scheduled Task to run oradim -startup -sid ORCL when the Administrator account logs on, and set the VM to auto-logon. I'd still like to work out why it's not working.
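
    For completeness, the logon-time workaround described above can be recreated from the command line; a sketch (the task name is arbitrary, and this remains the hack, not a fix):

        rem fire oradim at Administrator logon, mirroring the Scheduled Task hack
        schtasks /create /tn "Start ORCL" /tr "oradim -startup -sid ORCL" /sc onlogon /ru Administrator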


  • IIS 7 - The virtual path 'null' maps to another application, which is not allowed

    - by Miro
    I have run into an issue when setting up an IIS 7 farm for load balancing. I added 4 servers to the IIS farm with the appropriate ports (8080, 8081, 8082, 8083), and also added an inbound rule for the IIS farm. The Tomcat instances listen on these ports. When I open the URL (which I set in the inbound rule), I get the following exception:

        The virtual path 'null' maps to another application, which is not allowed.

        Source Error:
        An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [ArgumentException: The virtual path 'null' maps to another application, which is not allowed.]
        System.Web.CachedPathData.GetVirtualPathData(VirtualPath virtualPath, Boolean permitPathsOutsideApp) +8839122
        System.Web.HttpContext.GetFilePathData() +36
        System.Web.HttpContext.GetConfigurationPathData() +26
        System.Web.Configuration.RuntimeConfig.GetConfig(HttpContext context) +43
        System.Web.Configuration.CustomErrorsSection.GetSettings(HttpContext context, Boolean canThrow) +41
        System.Web.HttpResponse.ReportRuntimeError(Exception e, Boolean canThrow, Boolean localExecute) +101
        System.Web.HttpContext.ReportRuntimeErrorIfExists(RequestNotificationStatus& status) +538

    How can I solve this issue?


  • How do I prevent Pidgin from loading XMPP chat room history on join?

    - by Mr. Jefferson
    In Pidgin, when I join a chat room, it loads the chat room history. iChat on the Mac has a preference in the Accounts section to set a variable amount of history to load, or to disable loading history entirely. How do I do the same thing in Pidgin? Is there a preference somewhere that I've missed? The goal is to have the chat room start fresh each day, so I'd also be fine with disabling chat room history entirely on the server, if that's possible. But I didn't see that option either when I looked in Server Admin on the server. I found this list of XMPP room types, and it looks like creating a Temporary Room might be the best way to do this, but I don't want to have to create the room manually every morning. Right now I've got Pidgin set to auto-join the room when I log in; I want it to do that without loading history.

    EDIT: The XMPP multi-user chat spec referenced above also contains a section on managing history. And I got this to work by pulling up the XMPP Console plugin in Pidgin, copying the <presence /> stanza it sent when I joined the room, closing the room, pasting the stanza into the console, adding the <history /> element and sending it. When I opened the room again, I had no history. But it all came back the next time! So: how do I get Pidgin to send the <history /> stanza by default?
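
    For reference, this is roughly the join presence XEP-0045 describes, with the history element requesting zero lines (the room address and nick are placeholders):

        <presence to='room@conference.example.com/mynick'>
          <x xmlns='http://jabber.org/protocol/muc'>
            <history maxstanzas='0'/>
          </x>
        </presence>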


  • Getting AWStats to work in Ubuntu 12.04

    - by koogee
    I'm new to Apache and I'm trying to set up AWStats on my Ubuntu 12.04 server. I've followed the guide in the Ubuntu docs: https://help.ubuntu.com/community/AWStats. I set it up according to the instructions and AWStats is able to generate initial stats from the Apache log successfully. I placed the links to AWStats in the default virtual host file. However, when I try to run http://server-ip-address:8080/awstats/awstats.pl, I get:

        Error: SiteDomain parameter not defined in your config/domain file. You must edit it for using this version of AWStats.
        Setup ('/etc/awstats/awstats.conf' file, web server or permissions) may be wrong.
        Check config file, permissions and AWStats documentation (in 'docs' directory).

    Here is my /etc/apache2/sites-available/default file:

        <VirtualHost *:8080>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/saad/www

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/saad/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride AuthConfig
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            Alias /awstatsclasses "/usr/share/awstats/lib/"
            Alias /awstats-icon "/usr/share/awstats/icon/"
            Alias /awstatscss "/usr/share/doc/awstats/examples/css"
            ScriptAlias /awstats/ /usr/lib/cgi-bin/
            Options ExecCGI -MultiViews +SymLinksIfOwnerMatch

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    The only three variables I edited in /etc/awstats/awstats.conf are:

        LogFile="/var/log/apache2/access.log"
        SiteDomain="server-name.noip.org"
        HostAliases="localhost 127.0.0.1 server-name.no-ip.org"

    The Apache server works fine and I'm able to access other pages stored on the server. Any guidance would be welcome.
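
    One hedged thing to try: awstats.pl selects its config from the URL's config parameter (falling back on the requested host name), looking for /etc/awstats/awstats.<name>.conf before the generic awstats.conf, so the SiteDomain complaint can appear even when awstats.conf itself is fine. Naming the config explicitly is a quick way to isolate this:

        # run an update by hand with the config named explicitly
        sudo perl /usr/lib/cgi-bin/awstats.pl -config=server-name.noip.org -update

        # or request it in the browser with the parameter spelled out:
        #   http://server-ip-address:8080/awstats/awstats.pl?config=server-name.noip.org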

