Search Results

Search found 21907 results on 877 pages for 'virtual box'.

Page 479/877

  • VMware ESXi - varying CPU time (CPU reservation)

    - by Tomo
    Hello! I'm running FreeBSD 7.2 under VMware ESXi 3.5. The host has 2 physical CPUs and the BSD box is currently the only running VM, with a single virtual CPU assigned to it. When measuring the CPU time of a specific program, I get very different results from run to run; the processor usage reported by VMware changes with the system load. Is it possible to assign a constant share of a physical CPU to a specific VM? I would like the measured CPU time to be more or less constant. I tried setting a CPU reservation when configuring the VM in the VMware Infrastructure Client, but the CPU time still varies a lot. Thanks in advance!
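
    If the goal is reproducible timing, one way to pin the VM to a fixed slice of a physical CPU is to set the reservation and the limit to the same value, so the scheduler neither gives the VM more under light load nor less under contention. A sketch using VMware's PowerCLI (this assumes PowerCLI/VI Toolkit is available against this host; the VM name and MHz value are only examples):

        Get-VM "freebsd-vm" | Get-VMResourceConfiguration |
            Set-VMResourceConfiguration -CpuReservationMhz 2000 -CpuLimitMhz 2000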

    Read the article

  • SUSE 12.1 Apache startup after oci8 installation

    - by DKSan
    I have a virtual server running openSUSE 11.4 with Apache, PHP, Oracle Instant Client, and OCI8 installed through PECL. The steps it took to get it up and running on 11.4 were:

        # Install Instant Client
        rpm -Uvh oracle-instantclient11.2-basic-11.2.0.2.0.x86_64.rpm
        rpm -Uvh oracle-instantclient11.2-devel-11.2.0.2.0.x86_64.rpm
        # Install OCI8 through PECL
        pecl install oci8
        # Add oci8 to the PHP modules
        vi /etc/php5/conf.d/oci8.ini
            extension=oci8.so
        # Add LD_LIBRARY_PATH to Apache
        vi /etc/sysconfig/apache2
            # add to the bottom of the script
            export LD_LIBRARY_PATH="/usr/lib/oracle/11.2/client64/lib"
        # Restart Apache
        /etc/init.d/apache2 restart

    Repeating the same procedure on a fresh installation of openSUSE 12.1 results in Apache throwing the following message at startup:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php5/extensions/oci8.so' - libnnz11.so: cannot open shared object file: No such file or directory in Unknown on line 0

    I can't find any explanation why it works on 11.4 but stops working on 12.1. Can someone please point me in the right direction?
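
    One thing worth trying (a minimal sketch, assuming the Instant Client still lives under /usr/lib/oracle/11.2/client64/lib): register the library path system-wide with ldconfig instead of relying on LD_LIBRARY_PATH being exported by the Apache init script, since openSUSE 12.1 starts services differently and the exported variable may never reach the httpd process.

        # register the Oracle Instant Client libraries with the dynamic linker
        echo "/usr/lib/oracle/11.2/client64/lib" > /etc/ld.so.conf.d/oracle-instantclient.conf
        ldconfig
        # verify that libnnz11.so is now resolvable, then restart Apache
        ldconfig -p | grep libnnz11
        /etc/init.d/apache2 restart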

    Read the article

  • nice linux distro for vxl itona

    - by akiva_eshbal
    I'm trying to get some productivity out of my VXL Itona thin client. It is very minimalistic: a 128 MB disk, 64 MB of RAM, and a VIA Ezra processor. Out of the box it comes with GIO Linux. I could run Puppy Linux and Damn Small Linux from a USB key, but I couldn't get them installed on the SSD. More customizable distros like Arch Linux and Slax fail to boot with a kernel failure caused by the lack of the cmov instruction in the VIA processor. I would really like to make it usable, so can you please recommend a suitable distro? I'm afraid the Gentoo solution is too scary for me right now (compiling a Linux kernel on a thin client is too much). Thanks!
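
    Before picking a distro, it may help to confirm exactly what the CPU reports (a quick check, assuming a Linux environment such as Puppy is already booted from the USB key):

        # an empty result means the CPU does not advertise cmov,
        # so i686-built kernels and userlands will not run
        grep -ow cmov /proc/cpuinfo | sort -u
        # list the full flag set for reference
        grep -m1 '^flags' /proc/cpuinfo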

    Read the article

  • 64-bit Cisco VPN client (IPsec) ?

    - by mika
    The Cisco VPN Client (IPsec) does not support 64-bit Windows. Worse, Cisco does not even plan to release a 64-bit version; instead they say that "For x64 (64-bit) Windows support, you must utilize Cisco's next-generation Cisco AnyConnect VPN Client" (see the Cisco VPN Client Introduction and the Cisco VPN Client FAQ). But SSL VPN licences cost extra. For example, most new ASA firewalls come with plenty of IPsec VPN licences but only a few SSL VPN licences. What alternatives are there for 64-bit Windows? So far, I know two: the 32-bit Cisco VPN Client in a virtual machine, and the NCP Secure Entry Client on 64-bit Windows. Any other suggestions or experiences? -mika-

    Read the article

  • Oops, no RSA or DSA server certificate found for 'server.host.name:0'?

    - by Scott Warren
    I'm setting up a new web server that hosts a dozen virtual hosts on Ubuntu 12.04 using Apache 2.2.22, with one config file per site. I created all the configuration files at once and ran a2ensite * to enable them all in one go. When I reloaded the configuration it failed, and after restarting Apache I found the following error message in my error.log:

        Oops, no RSA or DSA server certificate found for 'server.host.name:0'?!

    Most of the results for this error message are years old and either don't fix the problem or refer to bugs that have since been fixed (e.g. https://issues.apache.org/bugzilla/show_bug.cgi?id=31709).
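
    A quick way to narrow this down (a sketch, assuming the stock Debian/Ubuntu layout under /etc/apache2): the error usually means one SSL-enabled vhost is missing its SSLCertificateFile/SSLCertificateKeyFile directives, or a *:443 vhost was enabled without SSLEngine on.

        # show how Apache has parsed the vhosts and which file defines each one
        apache2ctl -S
        # list every enabled site that listens on 443
        grep -rln ':443' /etc/apache2/sites-enabled/
        # list enabled sites that declare no certificate at all
        grep -L 'SSLCertificateFile' /etc/apache2/sites-enabled/*
        # syntax check before reloading
        apache2ctl configtest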

    Read the article

  • How can I open URLs on my host machine with VMware?

    - by Yanamon
    I am running a Windows 7 VM with VMware Player on a Fedora host. VMware Tools is installed and I have successfully used some of its features, like Unity mode, so it seems to be installed correctly. That said, I still cannot get URLs to open in my host machine's browsers. These are the steps I have taken:

        1. Within the VM, I set "Default Host Application" as the application for opening URLs.
        2. On my host machine, I set Chrome as my preferred application for opening URLs.
        3. Enabled Shared Folders in the VM (not sure if that really helped anything, but I saw it suggested in a forum post).

    After doing that, when I click on a link I get the following error message: "Default Host Application: Make sure the virtual machine's configuration allows the guest to open host applications." I cannot find any option like that in my VM's configuration, so I am not sure what the error message is referring to.

    Read the article

  • Problem starting Glassfish on a VPS

    - by Raydon
    I am attempting to install GlassFish v3 on my Ubuntu 8.04 VPS using Java 1.6. I initially tried starting the server using asadmin start-domain and received the following error message:

        JVM failed to start: com.sun.enterprise.admin.launcher.GFLauncherException: The server exited prematurely with exit code 1.
        Before it died, it produced the following output:
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Command start-domain failed.

    I attempted to run it again and received a different message:

        Waiting for DAS to start
        Error starting domain: domain1.
        The server exited prematurely with exit code 1.
        Before it died, it produced the following output:
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.
        Command start-domain failed.

    If I run cat /proc/meminfo I get the following (all other values are 0 kB):

        MemTotal: 1310720 kB
        MemFree:  1150668 kB
        LowTotal: 1310720 kB
        LowFree:  1150668 kB

    I have checked the contents of glassfish/glassfish/domains/domain1/config/domain.xml and the JVM heap setting is -Xmx512m. Any help on resolving this problem would be appreciated.
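
    One thing that often bites Java on VPS containers of this era (a sketch, assuming an OpenVZ/Virtuozzo-style VPS where an address-space limit rather than free RAM is the constraint): check the virtual-memory limits and try a smaller heap before digging further.

        # check the per-process address-space limit and, on OpenVZ, the memory beancounters
        ulimit -v
        grep privvmpages /proc/user_beancounters 2>/dev/null
        # try a smaller fixed heap (values are illustrative) and start again
        sed -i 's/-Xmx512m/-Xmx256m/' glassfish/glassfish/domains/domain1/config/domain.xml
        asadmin start-domain domain1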

    Read the article

  • Setting php values in php-fpm confs instead of php.ini

    - by zsero
    I'd like to set values in the php-fpm conf files that are normally set in php.ini. I'm using nginx. I've created the following settings, but I'm not sure whether this will work:

        php_value[memory_limit] = 96M
        php_value[max_execution_time] = 120
        php_value[max_input_time] = 300
        php_value[php_post_max_size] = 25M
        php_value[upload_max_filesize] = 25M

    Does this look OK? What happens when a value is set both in php.ini and in a php-fpm conf file? Does the php-fpm value override the php.ini one? Finally, isn't it a problem that this way I can set different values per virtual host? I mean, php.ini seems like a global setting, while this is host dependent. Can different hosts run with different memory limits, etc.?
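
    For reference, a per-pool sketch (the pool name and file path are hypothetical): php-fpm applies php_value/php_admin_value per pool, overriding php.ini for that pool only, so two nginx vhosts pointed at two different pools can indeed run with different limits.

        ; /etc/php5/fpm/pool.d/site-a.conf  (hypothetical path and pool name)
        [site-a]
        listen = /var/run/php5-fpm-site-a.sock
        ; overrides php.ini for this pool only; scripts can still raise it with ini_set()
        php_value[memory_limit] = 96M
        ; like php_value, but cannot be changed later by ini_set()
        php_admin_value[upload_max_filesize] = 25M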

    Read the article

  • Does Solaris have a file similar to Linux's /etc/security/limits.conf?

    - by SQL Warrior
    I'm doing a compliance check on a Solaris 10 OS. I need to verify the following parameter settings:

        core file size        (blocks, -c)  unlimited
        data seg size         (kbytes, -d)  unlimited
        file size             (blocks, -f)  unlimited
        open files            (-n)          65536
        stack size            (kbytes, -s)  unlimited
        cpu time              (seconds, -t) unlimited
        virtual memory        (kbytes, -v)  unlimited

    Sure, I could use ulimit -cH to get the display above. But I also need to find where those settings are defined. I come from Linux, where we have the /etc/security/limits.conf file to hold this kind of information. Is there an equivalent file in Solaris? TIA!
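
    On Solaris 10 these limits are generally not kept in a single file; most of them are resource controls attached to projects (plus /etc/system for a few legacy tunables). A sketch of where to look; the project name in the last command is only an example:

        # show the resource controls in effect for the current shell
        prctl $$
        # show the control behind 'open files' specifically
        prctl -n process.max-file-descriptor $$
        # list configured projects and their attributes (backed by /etc/project)
        projects -l
        # example: raise the descriptor limit persistently for the default project
        projmod -s -K "process.max-file-descriptor=(basic,65536,deny)" default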

    Read the article

  • What does it mean when ARP shows <incomplete> on eth1

    - by Geoff Dalgas
    We have been using HAProxy along with Heartbeat from the Linux-HA project. We are using two Linux instances to provide failover. Each server has its own public IP, plus a single IP which is shared between the two using a virtual interface (eth1:1) at 69.59.196.211. The virtual interface IP 69.59.196.211 is configured as the gateway for the Windows servers behind them, and we use ip_forwarding to route traffic.

    We are experiencing an occasional network outage on one of the Windows servers behind our Linux gateways. HAProxy will detect the server is offline, which we can verify by remoting to the failed server and attempting to ping the gateway:

        Pinging 69.59.196.211 with 32 bytes of data:
        Reply from 69.59.196.220: Destination host unreachable.

    Running arp -a on this failed server shows that there is no entry for the gateway address (69.59.196.211):

        Interface: 69.59.196.220 --- 0xa
          Internet Address      Physical Address      Type
          69.59.196.161         00-26-88-63-c7-80     dynamic
          69.59.196.210         00-15-5d-0a-3e-0e     dynamic
          69.59.196.212         00-21-5e-4d-45-c9     dynamic
          69.59.196.213         00-15-5d-00-b2-0d     dynamic
          69.59.196.215         00-21-5e-4d-61-1a     dynamic
          69.59.196.217         00-21-5e-4d-2c-e8     dynamic
          69.59.196.219         00-21-5e-4d-38-e5     dynamic
          69.59.196.221         00-15-5d-00-b2-0d     dynamic
          69.59.196.222         00-15-5d-0a-3e-09     dynamic
          69.59.196.223         ff-ff-ff-ff-ff-ff     static
          224.0.0.22            01-00-5e-00-00-16     static
          224.0.0.252           01-00-5e-00-00-fc     static
          225.0.0.1             01-00-5e-00-00-01     static

    On our Linux gateway instances arp -a shows:

        peak-colo-196-220.peak.org (69.59.196.220) at <incomplete> on eth1
        stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1
        peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1
        peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1
        peak-colo-196-222.peak.org (69.59.196.222) at 00:15:5d:0a:3e:09 [ether] on eth1
        peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1
        peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1

    Why would ARP occasionally leave the entry for this failed server as <incomplete>? Should we be defining our ARP entries statically? I've always left ARP alone since it works 99% of the time, but in this one instance it appears to be failing. Are there any additional troubleshooting steps we can take to help resolve this issue?

    THINGS WE HAVE TRIED

    Static ARP entry. I added a static ARP entry for testing on one of the Linux gateways, which still didn't help:

        root@haproxy2:~# arp -a
        peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1
        peak-colo-196-221.peak.org (69.59.196.221) at 00:15:5d:00:b2:0d [ether] on eth1
        stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1
        peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1
        peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1
        peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1
        peak-colo-196-220.peak.org (69.59.196.220) at 00:21:5e:4d:30:8d [ether] PERM on eth1

        root@haproxy2:~# arp -i eth1 -s 69.59.196.220 00:21:5e:4d:30:8d
        root@haproxy2:~# ping 69.59.196.220
        PING 69.59.196.220 (69.59.196.220) 56(84) bytes of data.
        --- 69.59.196.220 ping statistics ---
        7 packets transmitted, 0 received, 100% packet loss, time 6006ms

    Rebooting the Windows web server solves this issue temporarily with no other changes to the network, but our experience shows the issue will come back.

    Swapping network cards and switches. I noticed the link light on the switch port for the failed Windows server was running at 100Mb instead of 1Gb on the failed interface. I moved the cable to several other open ports and the link indicated 100Mb for each port that I tried. I also swapped the cable, with the same result. I tried changing the properties of the network card in Windows, and the server locked up and required a hard reset after clicking apply. This Windows server has two physical network interfaces, so I have swapped the cables and network settings on the two interfaces to see if the problem follows the interface. If the public interface goes down again we will know that it is not an issue with the network card. (We also tried another switch we have on hand; no change.)

    Changing network hardware driver versions. We've had the same problem with the latest Broadcom driver, as well as the built-in driver that ships with Windows Server 2008 R2.

    Replacing network cables. As a last-ditch effort we remembered another change that occurred: the replacement of all of the patch cords between our servers and switch. We had purchased two sets, one green (lengths 1ft - 3ft) for the private interfaces and another set of red cables for the public interfaces. We swapped out all of the public interface patch cables with a different brand and ran our servers without issue for a full week ... aaaaaand then the problem recurred.

    Disabling checksum offload, removing TProxy. We also tried disabling TCP/IP checksum offload in the driver; no change. We're now pulling out TProxy and moving to a more traditional X-Forwarded-For arrangement without any fancy IP address rewriting. We'll see if that helps.
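
    When this recurs, it may be worth capturing the ARP exchange itself from the gateway, to see whether the Windows box is failing to answer who-has requests or whether the replies are being lost in the switch (a small sketch; interface and address match the examples above):

        # watch ARP traffic for the problem host on the gateway
        tcpdump -n -e -i eth1 arp and host 69.59.196.220
        # actively probe it from the gateway's shared interface
        # (the interface flag is -I or -i depending on which arping implementation is installed)
        arping -I eth1 -c 5 69.59.196.220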

    Read the article

  • Urgent: The desktop currently has no desktop sources

    - by vDeepak
    Although I know there are already a large number of posts about this issue, I am having no luck so far. I get the error message below when trying to connect to a desktop with VMware View Client 4: "The desktop currently has no desktop sources." My configuration is as follows: the View Manager server runs in a VM; I have successfully deployed 10 desktop VMs through View Manager and they are in persistent mode; previously a few users were able to connect to these desktops successfully. Under desktop sources I see "Domain name\user ID", and under status it shows "Ready". Yet when any other authenticated user tries to access a desktop, they get the error message above. These other users are unique users who have never logged in before. I also tried rebooting the desktop VMs, assuming they might be locked by another user, but I still get the same error message. Please help.

    Read the article

  • IIS7 Recover Default Website

    - by rideon88
    So I deleted my Default Web Site by accident on a Windows 2008 server in IIS7. This is bad, because I have now lost my Exchange OWA virtual directories... users are not happy =(. I'm pretty sure I also lost my HTTPS certificate. I'm hoping there is an easy way to recover all that, it being Windows 2008 and all... Any help appreciated. I have no idea where to start or how to recover any of this besides creating a new site called Default Web Site and then hitting the MS Exchange manual on how to install OWA from scratch.
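
    One avenue worth checking before rebuilding OWA from scratch (a sketch, assuming the default IIS 7 configuration-history settings were still on): IIS 7 keeps periodic snapshots of applicationHost.config, and appcmd can restore one taken before the site was deleted.

        rem list manual backups and automatic configuration-history snapshots
        %windir%\system32\inetsrv\appcmd.exe list backup
        rem restore a snapshot from before the deletion (the name here is only an example)
        %windir%\system32\inetsrv\appcmd.exe restore backup "CFGHISTORY_0000000005"
        iisreset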

    Read the article

  • Windows Network Load Balancing on ESX Cluster with Dell PowerConnect stacks

    - by dunxd
    We recently switched out our Cisco 6500 core switch for a pair of Dell PowerConnect 6248 stacks. Since then, our network-load-balanced SharePoint, which runs on two virtual machines on an ESX cluster, has been behaving very poorly. The symptom is that opening and saving documents stored in SharePoint takes a very, very long time. There are no errors showing up on the SharePoint servers or the SQL server, just a lot of annoyed users. Initially I thought there was no way NLB could cause this, but as soon as we repointed the DNS records for our intranet to the IP address of one of the web front ends, the problems disappeared. We suspect there is an issue related to multicast in the Dell configs: NLB is configured for multicast, but not IGMP. Has anyone got a set-up similar to ours and fixed this sort of issue? SharePoint on VMware ESX, with Dell PowerConnect switches.

    Read the article

  • Configuring IIS7 for TLS 1.0 only

    - by tomfanning
    I have been tasked with configuring an IIS7 server to accept TLS 1.0 HTTPS connections only. I have come up with the following list of cipher suites which I have deduced are TLS 1.0:

        TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
        TLS_DHE_DSS_WITH_AES_128_CBC_SHA
        TLS_DHE_DSS_WITH_AES_256_CBC_SHA
        TLS_RSA_WITH_3DES_EDE_CBC_SHA
        TLS_RSA_WITH_AES_128_CBC_SHA
        TLS_RSA_WITH_AES_256_CBC_SHA

    I have put that list in the box in the following policy: Computer Configuration | Administrative Templates | Network | SSL Configuration Settings | SSL Cipher Suite Order. Is that sufficient? Are any of the suites in my list not TLS 1.0? Are there any other TLS 1.0 suites supported by IIS7 that aren't in the list? The server, by the way, is Windows Server 2008 R2. Thanks
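
    A note on approach (a sketch, not necessarily the intended method here): the cipher-suite order policy controls which suites are offered, but the protocol versions themselves are governed by the SCHANNEL registry keys, so restricting the server to TLS 1.0 usually also means disabling SSL 2.0 and SSL 3.0 there (on Server 2008 R2, TLS 1.1/1.2 are disabled by default on the server side).

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server" /v Enabled /t REG_DWORD /d 0 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server" /v Enabled /t REG_DWORD /d 0 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server" /v Enabled /t REG_DWORD /d 1 /f
        rem a reboot is required for SCHANNEL changes to take effect
        shutdown /r /t 0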

    Read the article

  • Multiple IP addresses on one NIC register twice in DNS server

    - by Brad B.
    Hi, we've got a build server (Windows Server 2008 SP2, 64-bit) which has one NIC and two IP addresses registered to that NIC (192.168.1.30 and 192.168.1.31). The build server is registering two identical Host (A) records for itself in our DNS server:

        buildserver.example.com = 192.168.1.30
        buildserver.example.com = 192.168.1.31

    I know that in the "Advanced TCP/IP Settings" window for the build server's NIC, under the "DNS" tab, there is a check box labeled "Register this connection's addresses in DNS". I only want ONE of the IP addresses (the one ending in .30) to be registered in DNS, not both of them. Can that be done? My best guess is to disable "Register this connection's addresses in DNS" and manually add the Host (A) record to our DNS server. Thanks for any help!
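
    If the manual route is taken, the record can also be added from the command line on the DNS server (a sketch; the zone name follows the example above, and the checkbox mentioned should be unticked first so the dynamic records stop coming back):

        rem add a static A record for the build server's primary address
        dnscmd /recordadd example.com buildserver A 192.168.1.30
        rem remove the unwanted dynamically registered record
        dnscmd /recorddelete example.com buildserver A 192.168.1.31 /f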

    Read the article

  • Charging a laptop battery without a laptop

    - by Crippledsmurf
    I have an old-ish laptop that only works on AC power, because the battery is old and no longer holds a charge. I live in Christchurch, New Zealand, where there have recently been a number of very large earthquakes. During one of these earthquakes my laptop was thrown from my desk to the floor, and it now does not respond at all when the AC adapter is connected. Given that the laptop is not responding to power, is there another way I could charge a replacement battery for it, as I don't currently have the funds to repair the AC adapter on the box? My research suggests that this isn't possible, as chargers need to take into account the specifics of the battery model being charged.

    Read the article

  • .NET Framework 4.5 remote install via PowerShell

    - by user251297
    I am trying to install .NET Framework 4.5 on a remote Win2008 R2 server via a PowerShell session in the following way (the user is in the server's Administrators group):

        $session = New-PSSession -ComputerName $server -Credential Get-Credential
        Invoke-Command -Session $session -ScriptBlock {
            Start-Process -FilePath "C:\temp\dotnetfx45_full_x86_x64.exe" -ArgumentList "/q /norestart" -Wait -PassThru
        }

    And then I get this error:

        Executable: C:\temp\dotnetfx45_full_x86_x64.exe v4.5.50709.17929
        --- logging level: standard ---
        Successfully bound to the ClusApi.dll
        Error 0x80070424: Failed to open the current cluster
        Cluster drive map: ''
        Considering drive: 'C:\'...
        Drive 'C:\' has been selected as the largest fixed drive
        Directory 'C:\aa113be049433424d2d3ca\' has been selected for file extraction
        Extracting files to: C:\aa113be049433424d2d3ca\
        Error 0x80004005: Failed to extract all files out of box container #0.
        Error 0x80004005: Failed to extract
        Exiting with result code: 0x80004005
        === Logging stopped: 2013/09/04 16:29:51 ===

    If I run the command locally on the server, it all works fine:

        Start-Process -FilePath "C:\temp\dotnetfx45_full_x86_x64.exe" `
            -ArgumentList "/q /norestart" -Wait
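
    A workaround that often helps when an installer misbehaves under WinRM (a sketch, not a confirmed fix for this exact error): have the remote session register a one-shot scheduled task that runs the installer as SYSTEM in a full local context, outside the remoting session.

        Invoke-Command -Session $session -ScriptBlock {
            # register and immediately run a one-off task as SYSTEM (the task name is arbitrary)
            schtasks /create /tn "Install-DotNet45" /ru SYSTEM /sc once /st 23:59 `
                /tr "C:\temp\dotnetfx45_full_x86_x64.exe /q /norestart" /f
            schtasks /run /tn "Install-DotNet45"
        }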

    Read the article

  • Installing OpenSSL that supports SNI along with previous version of OpenSSL

    - by gh0sT
    So I learned that to host multiple HTTPS websites on the same IP address you need an OpenSSL version that supports SNI (0.9.8f and higher). My RHEL5 box currently has 0.9.8e and Apache httpd-2.2.26-2.el5. According to a similar question here, it's not a good idea to replace the original version of OpenSSL; instead you should have a parallel installation. It doesn't explicitly mention how to achieve this, though. So my questions are: How do I set up an alternate installation of OpenSSL without breaking the system? How do I make Apache use this version of OpenSSL and not the original one? A detailed guide would be extremely helpful.
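
    A rough outline of the parallel-install approach (a sketch only; version numbers and prefixes are examples, and a custom httpd build means taking over its security updates from then on):

        # build a private copy of OpenSSL under /opt, leaving the system 0.9.8e untouched
        cd openssl-1.0.1x && ./config --prefix=/opt/openssl-1.0.1 shared && make && make install
        # build a separate Apache linked against that copy instead of the system library
        cd ../httpd-2.2.x && ./configure --prefix=/opt/httpd-sni --enable-ssl \
            --with-ssl=/opt/openssl-1.0.1 && make && make install
        # confirm mod_ssl links against the new, SNI-capable library
        ldd /opt/httpd-sni/modules/mod_ssl.so | grep -E 'libssl|libcrypto'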

    Read the article

  • Apple File Sharing fails to connect

    - by Josh
    We're running OS X Lion Server (10.7.4), and about once a week the Apple File Sharing service stops letting clients connect to its shares. On the client we see a dialog box stating "There was a problem connecting to the server". Browsing the server, we simply no longer see the shares. The clients are also running the latest OS X (10.7.4). In /var/log/system.log we see entries like the following:

        Jun 26 08:38:22 w3 AppleFileServer[20511]: received message with invalid client_id 157
        Jun 26 08:42:11 w3 AppleFileServer[20511]: received message with invalid client_id 165
        Jun 26 08:42:21 w3 AppleFileServer[20511]: received message with invalid client_id 174

    where 20511 appears to be the pid and client_id appears to increment with each failed attempt. Nothing jumps out at me from /Library/Logs/AppleFileService/AppleFileService[Access|Error].log. Restarting the service fixes the problem:

        serveradmin stop afp && serveradmin start afp

    So I added a script to do this daily using the periodic service, but we still encounter the problem about once a week.
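
    For reference, the daily restart can be wired into periodic roughly like this (a sketch of the workaround already described; the script name is arbitrary, the serveradmin path is assumed, and this does not address the underlying AFP problem):

        # /etc/periodic/daily/910.restart-afp  (make it executable with chmod +x)
        #!/bin/sh
        /usr/sbin/serveradmin stop afp
        sleep 5
        /usr/sbin/serveradmin start afp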

    Read the article

  • Differences between RHEL5 NFS and Solaris 2.6 NFS

    - by joshxdr
    I have a legacy application (LabVIEW 7.0) running on my Sun Ultra 5 workstation running Solaris 2.6. I want to use a RHEL5 server to store all files so that I am not cluttering the small HDD on the Sun. I have found that the LabVIEW file browser has a bug which prevents it from seeing some files and folders in an NFS share mounted from RHEL5, but this problem is not present when using an NFS share mounted from another Ultra 5 using Solaris 2.6. I believe in both cases NFSv3 is being used. Is there some way I can make the RHEL5 NFS behave more like Solaris 2.6? If I make a new partition on the RHEL5 box and install OpenSolaris, will this behave more like Solaris 2.6? I am locked into using this buggy LabVIEW program, so somehow I need to make it work.
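
    One experiment that may help isolate the difference (a sketch; the server and export names are placeholders, and this is an assumption rather than a known fix): force the mount from the Solaris side to a specific NFS version and see whether the LabVIEW file-browser bug follows the protocol version.

        # on the Solaris 2.6 workstation
        mount -F nfs -o vers=2,proto=udp rhel5server:/export/labview /mnt/labview
        # compare with an explicit v3 mount
        mount -F nfs -o vers=3 rhel5server:/export/labview /mnt/labview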

    Read the article

  • Port Forwarding on 80 vs. 8080

    - by Chadworthington
    I am able to access this URL (don't bother clicking on it, it's just an example): http://my.url.com/. This web site works locally: http://localhost:8080/tfs/web/. I set up my router to forward port 80 to the PC hosting the web site, and I also forward port 8080 to the same box. But when I try to access this URL I get the error "Page cannot be displayed": http://my.url.com:8080/tfs/web/. I forwarded port 8080 the same way I forwarded port 80, and I also turned off Windows Firewall in case it was blocking port 8080. Any theories why port 80 works but 8080 does not?
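
    A couple of checks that usually separate a forwarding problem from a binding problem (a sketch; run these on the PC hosting the site, and the LAN address is a placeholder):

        rem confirm the site is listening on 8080 on all interfaces (0.0.0.0), not only 127.0.0.1
        netstat -an | findstr :8080
        rem then test from another machine on the LAN before testing from outside
        telnet 192.168.x.x 8080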

    Read the article

  • GConf error and gnome does not load properly in RHEL 5.3

    - by Tim
    Hello, I am using Red Hat Enterprise Linux 5.3. I created a user oracle on the system using the following command:

        useradd -g oinstall -G dba,oper -d /home/oracle oracle

    Now, when I try to log in as the user oracle, GNOME does not load properly and I get a popup error message like the following:

        GConf error: Failed to contact configuration server; some possible causes are that you need
        to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash.
        (Details - 1: IOR file '/tmp/gconfd-cheetahman/lock/ior' not opened successfully, no gconfd
        located: Permission denied  2: IOR file '/tmp/gconfd-cheetahman/lock/ior' not opened
        successfully, no gconfd located: Permission denied)

    Any way to fix this? Thank you.
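
    A common cause on RHEL 5 is stale or wrongly-owned per-user state under /tmp, or a home directory the new user cannot write to. A sketch of the usual checks (the stale-directory names assume the oracle user is the one affected):

        # /tmp must be world-writable with the sticky bit
        ls -ld /tmp && chmod 1777 /tmp
        # remove stale gconfd/ORBit state left over from another session
        rm -rf /tmp/gconfd-oracle /tmp/orbit-oracle
        # make sure the oracle user actually owns a writable home directory
        ls -ld /home/oracle && chown -R oracle:oinstall /home/oracle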

    Read the article

  • OpenVPN slow with Firewall enabled on Zyxel ZyWall USG-100

    - by aleroot
    I have an OpenVPN server on a Linux machine. After installing a ZyWall USG-100 today, I'm experiencing extreme slowness browsing web servers on my remote LAN through the VPN connection, while accessing the web interface of the ZyWall itself is fast. I have configured everything: the virtual server for the OpenVPN server, and the same static route I had on the router the ZyWall replaced. I even added a rule to the firewall that allows connections to the OpenVPN server machine, but navigation on the LAN through the VPN is still slow. It seems that the firewall is dropping packets, since if I disable the firewall on the USG-100 everything works as fast as usual, while with the firewall enabled it is extremely slow. Why? Do I need to add some other rule to the firewall to speed things up?
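
    When a new firewall in the path slows an otherwise working OpenVPN tunnel to a crawl, fragmentation handling is a frequent culprit, so one cheap experiment (an assumption, not a confirmed diagnosis for the USG-100) is to clamp the MSS and packet size in the OpenVPN configuration:

        # append to both the server and client OpenVPN configs, then restart the tunnel
        # (fragment applies to UDP tunnels only and must match on both ends)
        mssfix 1400
        fragment 1400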

    Read the article

  • unusual backspace behavior in mac terminal

    - by Brandon
    I'm trying to figure out how to get SSH sessions to work the way I want using Terminal.app on Mac OS X. I'm used to PuTTY on Windows, where backspace means backspace. On the Mac, when I press delete/backspace it deletes the character following the cursor instead of the one before. I turned on "Delete sends Ctrl+H", and that works most of the time, but sometimes it just shows up on screen as ^H; this is typically at prompts from some custom Python scripts on the box I log into. This doesn't happen with PuTTY on Windows. By the way, I'm logging into an Ubuntu Linux server running OpenSSH. Any idea what I need to do so that backspace is consistently backspace?
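
    The usual cure is to leave Terminal.app sending the default DEL (^?) rather than Ctrl+H, and tell the remote shell which character erases (a sketch; the custom scripts may still misbehave if they read the keyboard directly):

        # on the Ubuntu server, e.g. appended to ~/.bashrc
        stty erase '^?'
        # or, if Terminal is left in "Delete sends Ctrl+H" mode
        stty erase '^H'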

    Read the article

  • SOLVED: network issue ubuntu 8.04 in vmware esx

    - by hoberion
    OK, this is really pissing me off. I have one Ubuntu 8.04 instance running on VMware (ESX) which decided after a reboot to stop resolving DNS requests. I also can't connect to it using SSH, although I can ping the server, and it's really that server (when I shut down the server the ping also stops).

    Stuff I tried:

        - reboot again :)
        - nslookup - serverip
        - setting networking to DHCP
        - offering some cute kittens to Lucifer
        - removing the virtual NIC and adding another (to get a different MAC)
        - migrating the instance to another ESX host
        - drinking 20 cups of espresso
        - stopped all services
        - running dnsmasq on another server and connecting to that DNS
        - tcpdumping
        - disabling IPv6

    Symptoms:

        - can't resolve anything; nslookup just says "no servers found...", although I can ping the servers
        - traceroute to the gateway doesn't work (even with traceroute -4 -n gatewayip)
        - colleagues laughing at me

    Any thoughts? Solved it: a colleague told me to upgrade/reinstall the VMware Tools. I did, and it solved my issue after rebooting.

    Read the article
