Search Results

Search found 15439 results on 618 pages for 'wls configuration'.


  • Mod_pagespeed, Varnish and Apache cache issues after new code pushes

    - by WerkkreW
    I have a rather strange issue. In my environment we are running a load-balanced cluster of 8 Apache servers with a master-master MySQL backend. In front of Apache we have Varnish as the cache layer. We have been running Apache mod_pagespeed for several weeks now and for the most part it has been working great.

    The issue arises when we do fresh code updates from Git and any/all of the JS/CSS assets change. The problem appears to be twofold. One, after the code push we generally take the opportunity to flush Varnish, restart Apache, and restart Varnish. In doing this, all of the mod_pagespeed combined/minified files are cleared out, ensuring that all of the new JS/CSS assets are fresh. The problem is, upon doing this the file names that mod_pagespeed creates change, but the old files appear to still be cached client side for many people, leading to very unexpected results. However, if we do not restart Apache, the changes to the files may or may not appear client side due to the cached minified assets.

    The simple solution is to disable mod_pagespeed, but I would rather not do that as it has made a fairly large impact on performance. I feel as if there must be a better way to deal with the inconsistencies in cache between the client and server, to prevent people from having to go to great lengths or perform a large number of page refreshes to see a working page. I can provide configuration snippets if anyone needs them. If you would like to inspect the site, source, headers, or anything, try the following addresses: http://wellplayed.org http://wellplayed.org/tv Thanks in advance!
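
    One common mitigation, sketched below as an assumption rather than a tested fix: mod_pagespeed's rewritten .pagespeed. assets are content-hashed and safe to cache for a long time, so the stale-client problem is usually the HTML that still references the old asset names. Keeping the HTML on a short, revalidating Cache-Control (in Varnish and client side) lets browsers pick up the new asset URLs after a push without touching the long-lived assets. A minimal Apache snippet, assuming mod_headers is loaded and .html/.php responses are the pages in question:

        <IfModule mod_headers.c>
            # HTML revalidates on every request; the hashed .pagespeed. assets stay long-lived
            <FilesMatch "\.(html|php)$">
                Header set Cache-Control "max-age=0, must-revalidate"
            </FilesMatch>
        </IfModule>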

    Read the article

  • No access to Windows 2003 admin shares

    - by ARomo
    This is the environment: several Win 2003 SP2 servers and several Win XP SP2 & SP3 clients, all in the same LAN. The firewall is disabled everywhere. No recent Windows updates or configuration changes.

    This is the problem: since last Thursday, when I log on to any other server or workstation as any regular (non-admin) user, I fail to open ADMIN SHARES ONLY (namely \\server1\c$, \\server1\e$ and \\server1\admin$). The error message is:

        "\\server1\c$ is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again."

    I can, however, open the same shares if I use the FQDN or IP address: \\server1.domain.local\c$ or \\172.0.0.1\c$. Other shares do not have this issue and open without any problem. Any ideas or suggestions would be truly appreciated. Thank you in advance.
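
    Given that the error text is the classic "multiple connections ... by the same user, using more than one user name" message, a hedged first check on an affected client (standard Windows tools, nothing specific to this environment):

        REM list existing SMB sessions -- a stale mapping under another account triggers this error
        net use
        REM clear them all, then retry \\server1\c$
        net use * /delete
        REM review "Stored User Names and Passwords" for cached credentials to server1
        rundll32.exe keymgr.dll,KRShowKeyMgr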

    Read the article

  • Unable to connect to SQL Database (can the password be reset)

    - by user45450
    I have recently joined a company which has a SQL 2005 Server running a few databases. The server looks like no one has touched it in a couple of years, and this week it ran out of disk space. After a quick hard drive scan it looks like some of the databases have become a little bloated; in particular Sharepoint_config~*~_log and WSS_Content_log.ldf have grown to about 15GB.

    I have been able to log into a couple of the other databases and use the shrinkfile command to free up disk space, but for some reason I am unable to log into the SharePoint and Microsoft#SSEE databases (which give me "cannot connect to Sharepoint, a network related or instance specific error occurred..." when I try to connect). I can see that the database is running via the SQL Surface Area Configuration, and I have made sure that the remote connection settings allow me to connect locally, but I am still unable to log in either with Windows authentication or locally.

    Is there any way to reset or recover the database login details so I can get in? (I have tried logging in with all the administrative passwords I can find, and after tracking down the company who installed it in the first place I found out that they have no idea what the password could have been.)
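
    A hedged sketch of a first step, assuming the "Microsoft#SSEE" instance is the Windows Internal Database that SharePoint/WSS installs: that instance only listens on a local named pipe, so connecting by server name will fail. From a command prompt on the server itself, as a local administrator:

        REM connect to the internal instance over its named pipe with Windows authentication
        sqlcmd -E -S \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query
        1> SELECT name FROM sys.databases;
        2> GO

    If Windows authentication on that pipe also fails, the usual fallback is restarting the instance in single-user mode and adding your own login to the sysadmin role; that is more invasive, so confirm backups first.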

    Read the article

  • install Oracle’s VirtualBox

    - by Shamith c
    I am trying to install Oracle's VirtualBox. I used:

        sudo dpkg -i virtualbox-4.2_4.2.4-81684\~Ubuntu\~quantal_i386.deb

    and get the following errors:

        (Reading database ... 226237 files and directories currently installed.)
        Preparing to replace virtualbox-4.2 4.2.4-81684~Ubuntu~quantal (using virtualbox-4.2_4.2.4-81684~Ubuntu~quantal_i386.deb) ...
        Unpacking replacement virtualbox-4.2 ...
        dpkg: dependency problems prevent configuration of virtualbox-4.2:
         virtualbox-4.2 depends on libc6 (>= 2.15); however:
          Version of libc6 on system is 2.13-20ubuntu5.
         virtualbox-4.2 depends on libqtcore4 (>= 4:4.8.0); however:
          Version of libqtcore4 on system is 4:4.7.4-0ubuntu8.1.
         virtualbox-4.2 depends on libqtgui4 (>= 4:4.8.0); however:
          Version of libqtgui4 on system is 4:4.7.4-0ubuntu8.1.
        dpkg: error processing virtualbox-4.2 (--install):
         dependency problems - leaving unconfigured
        Processing triggers for ureadahead ...
        Processing triggers for shared-mime-info ...

    How to solve it?
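
    A hedged sketch of the usual way out, on the assumption that the system is not actually running Ubuntu 12.10 (the dpkg output shows libc6 2.13 and Qt 4.7, which are older than what a ~quantal build expects):

        lsb_release -sc                 # confirm the real Ubuntu codename
        sudo apt-get -f install         # let apt try to resolve or roll back the half-configured package
        sudo dpkg -r virtualbox-4.2     # if that fails, remove it and fetch the VirtualBox .deb built
                                        # for this release (the ~<codename> in the file name should
                                        # match the output of lsb_release -sc)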

    Read the article

  • Upgrading to Java 7u65 breaks my Deployment Rule Set for Oracle applications

    - by Don Atreides
    My company uses an older version of an Oracle application that requires Java 6u45. Naturally we want to be secure, so we use a Deployment Rule Set to specify 6u45 for that internal application and let other applications use 7u60. Now that we're ready to upgrade the Java 7 half to 7u67, the Oracle application breaks with "Deployment Rule Set required version 1.6.0_45 not available." Of course it is available; it just can't find it for some reason.

    As a test, I specified that JavaTester.org should use 6u45 also, and it works fine with no issues. But when I try to use the same configuration (7u67 and 6u45) against the Oracle application it fails every time. If I downgrade to 7u60, it works. 7u65 or higher, it breaks. The Oracle application hasn't changed, so it must be something different in how 7u65+ handles Deployment Rule Sets, or pathing, or something. I'm at a complete loss.

    ruleset.xml:

        <?xml version="1.0"?>
        <ruleset version="1.0+">
          <rule>
            <id location="*.mycorp.com"/>
            <action version="1.6.0_45" permission="run"/>
          </rule>
          <rule>
            <id location="http://javatester.org"/>
            <action version="1.6.0_45" permission="run"/>
          </rule>
        </ruleset>
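
    One hedged thing to try, assuming the 7u65+ plugin has simply become stricter about matching the requested JRE: later Deployment Rule Set revisions accept a force attribute on the action element, which pins the named version even when the applet's own descriptor asks for something else. A sketch of the rule, not a confirmed fix:

        <rule>
          <id location="*.mycorp.com"/>
          <action permission="run" version="1.6.0_45" force="true"/>
        </rule>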

    Read the article

  • newbie: Allow domain users to change power-savings settings

    - by user65007
    I've just recently installed SBS 2011 on a server and added several computers to its domain. Now I've noticed that I cannot change power settings (even when logged in as a user who is in the Domain Administrator role; let's call it Admin for future reference).

    After some googling I ended up adding Admin to the local administrators group using the Group Policy Management Editor (as I have no experience in server administration I'm not sure I did it right): I went to Group Policy Management, selected Forest: xxxxx - Domains - xxxxx - Group Policy Objects - Windows SBS Client - Windows 7 and Windows Vista Policy, went to the Settings tab on the right, right-clicked and selected Edit to open the Group Policy Management Editor, then User Configuration - Preferences - Control Panel Settings - Local Users and Groups, right-clicked on it and selected New - Local Group, then set Action to "Update", Group Name to "Administrators (built-in)", and added Admin to Members. After that I was able to change the power-saving settings on client computers (when logged in as Admin).

    Now the question: what should I do to allow any domain user to change these settings? Note that I do not want to force some predefined power plan onto all computers; I want any domain user on any client computer to be able to select a different power plan and make any adjustments to the selected one. Thank you for any suggestions; just keep in mind that I'm a newbie (but not completely dumb), so please answer accordingly :)

    Read the article

  • Restrict access to one SVN repository (overwrite default)

    - by teel
    I'm trying to set up our SVN server so that by default the group developers will have access to all repositories, but I want to override that setting on certain repositories where I want to allow access only to single defined users (or separate groups). The current configuration is SVN + WebDAV on Apache2. All my repositories are located at /var/lib/svn/. In dav_svn.authz I currently have:

        [/]
        @developers = rw
        @users = r

    Now I want to add one repository (let's call it secret_repo) that would only allow access to one user, who is also a member of the developers group. I tried:

        [secret_repo:/]
        * =
        secret_user = rw

    where secret_user is the user I'd like to give access to the repository, but it doesn't seem to work. Currently the server is using Apache's LDAP module to authenticate users from our Active Directory domain, and I'd like to keep it that way if possible. Also, I seem to be able to browse all my repos freely with any web browser, which I'd like to block.

    A second problem is that I have WebSVN on the server, which is using Apache's LDAP authentication. Everyone who is a member of our domain can access it, so I'd like to hide this secret_repo from the WebSVN listing. It's configured now with parentPath("/var/lib/svn");. Do I really need to remove that and add every repository separately, except the ones I want to hide?
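
    A hedged sketch of the Apache side, since per-repository sections in the authz file only take effect when mod_authz_svn is consulted for every request, including anonymous reads. Paths and LDAP details below are assumptions based on the question, not a copy of the real config:

        <Location /svn>
            DAV svn
            SVNParentPath /var/lib/svn
            AuthType Basic
            AuthBasicProvider ldap
            # AuthLDAPURL and bind settings as already configured for WebSVN
            AuthName "Subversion"
            Require valid-user
            AuthzSVNAccessFile /etc/apache2/dav_svn.authz
        </Location>

    The anonymous-browsing symptom usually means Require valid-user is wrapped in a <LimitExcept GET PROPFIND OPTIONS REPORT> block; removing that wrapper forces authentication for reads as well, after which the [secret_repo:/] rules can take effect.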

    Read the article

  • Disable all the idiot-checking in Mac OS X

    - by Fake Name
    I am a Windows/Linux user who is learning Mac OS X out of interest in doing dev work for the iPad which I recently purchased. However, OS X is driving me nuts by trying to protect all its system files, hiding all of the important OS components I want to tweak, and generally making it impossible to do any modification to the OS in general to make it more usable.

    Therefore, is there a way to turn off all the idiot-checking in Finder? On XP, I can disable "Hide protected operating system files" and set "Show hidden files". On Linux, there really aren't many hidden files, and changing the configuration for .files is easy enough in Gnome and XFCE. How can I set up OS X in a similar way?

    I am not new to computers, and I am fully aware that deleting system files can damage or even irreparably disable an OS install. Therefore, if I intentionally try to delete a file or move something, it's probably intentional, and I am willing to accept the consequences in any case. At this point, I have fallen back to doing everything through the command line (which takes forever), because Finder is practically unusable. (As for what I am attempting to do, I also asked about GUI changes here.)
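
    A hedged starting point for the Finder half of this (a commonly cited defaults key; it only reveals dot-files and other hidden items, it does not change permissions on system files):

        # show hidden files in Finder, then relaunch Finder to apply
        defaults write com.apple.finder AppleShowAllFiles -bool true
        killall Finder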

    Read the article

  • SMF restarting service whenever there's output?

    - by Phillip Oldham
    I'm trying to add a custom service to SMF's configuration, which seems successful in that the service starts and there is a log file, but therein lies the problem: the service, on start-up, prints some logging messages to stderr. It seems that SMF is seeing those messages and, believing them to be errors, restarts the service, giving up after a number of tries and leaving the service off. Here's part of the log output:

        [ Mar 30 14:59:54 Enabled. ]
        [ Mar 30 14:59:54 Executing start method ("java server.CustomServer"). ]
        Starting server...
        [ Mar 30 15:00:04 Method or service exit timed out. Killing contract 107. ]

    Running the server directly on the command line is fine, and as far as I can see there are no errors being encountered during startup, other than the output. What would be the best way to manage this service with SMF? The logging is needed for diagnosing problems and would be problematic to disable. Is it possible to configure this service to only restart if the service exits?
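
    A hedged sketch of one likely fix, assuming the FMRI is something like svc:/site/customserver:default (placeholder name): the log shows the start method timing out, which happens when the start command stays in the foreground instead of daemonizing. Telling SMF to treat the process as a "child" (wait-model) service, or giving the start method an unlimited timeout, usually addresses it:

        # run the command as a wait-model service instead of expecting it to fork
        # (if the startd property group is missing, create it first:
        #  svccfg -s site/customserver addpg startd framework)
        svccfg -s site/customserver setprop startd/duration = astring: child
        # and/or let the start method run indefinitely (0 = no timeout)
        svccfg -s site/customserver setprop start/timeout_seconds = count: 0
        svcadm refresh site/customserver
        svcadm restart site/customserver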

    Read the article

  • file read performance degrades as number of files increases

    - by bfallik-bamboom
    We're observing poor file read IO results that we'd like to better understand. We can use fio to write 100 files with a sustained aggregate throughput of ~700MB/s. When we switch the test to read instead of write, the aggregate throughput is only ~55MB/s. The drop seems related to the number of files, since the throughput for read and write are comparable for a single file and then diverge proportionally as we increase the number of files.

    The test server has 24 CPU cores, 48GB of memory, and is running CentOS 6.0. The disk hardware is a RAID 6 array with 12 disks and a Dell H800 controller. This device is partitioned with ext4 using the default settings. Increasing the readahead (using blockdev) improves the read throughput significantly, but it still doesn't match the write speed. For instance, increasing the readahead from 128KB to 1M improved the read throughput to ~145MB/s.

    Is this a known performance issue in our OS/disk/filesystem configuration? If so, how can we tell? If not, what tools or tests can we use to further isolate the issue? Thanks.
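
    A hedged way to narrow it down (device name, mount point and sizes below are placeholders, not recommendations): sweep the readahead and re-run the same fio read job, and watch whether the 100 concurrent streams are turning sequential reads into effectively random I/O on the RAID 6 array:

        blockdev --getra /dev/sdX            # current readahead, in 512-byte sectors
        blockdev --setra 4096 /dev/sdX       # try ~2MB readahead
        fio --name=readtest --directory=/mnt/array --rw=read --bs=1M \
            --size=1G --numjobs=100 --group_reporting
        iostat -x 1                          # check avgrq-sz / await while the test runs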

    Read the article

  • Cisco AnyConnect VPN client - prevent connecting as work network

    - by Opmet
    From Windows 7 I'm using "Cisco AnyConnect Secure Mobility Client 3.0" to connect to our corporate network. Every time I establish the VPN connection, Windows will set the type as "work network". I don't want this. So I go to "Network and Sharing Center" and manually / interactively change it to "public network". But I have to repeat this for every new VPN connection. Is there any way to make Windows remember / persist this configuration? Can it be configured in the VPN client? Do our IT admins need to change something at the server end?

    Motivation: a "work network" by default uses different firewall settings that allow for stuff like "network discovery" and "file shares". But I just need "remote desktop" (mstsc).

    Additional info: our IT admins claimed this would be Windows default behaviour and there was nothing we could do about it: Windows would always initiate a VPN connection as "work network". Based on this statement I assume this is a "general" issue and went ahead posting here (at superuser.com).
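
    For what it's worth, a hedged sketch of where the category is stored, in case Network List Manager group policy isn't an option. The registry path is standard, but the profile GUID below is a placeholder you would have to look up first:

        REM list network profiles and find the one whose ProfileName matches the VPN connection
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Profiles" /s /f ProfileName
        REM Category: 0 = Public, 1 = Private (Home/Work), 2 = Domain
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Profiles\{PROFILE-GUID}" /v Category /t REG_DWORD /d 0 /f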

    Read the article

  • Can't ssh from CentOS 6.5 to SUSE LINUX 10.1

    - by Pavel Tankov
    We have a quite old installation of SUSE LINUX 10.1 (i586) in the office. The problem shortly: I can successfully ssh to it from machines in the same LAN (192.168.1.0) and not from others (that are in 10.23.0.0). The SuSE has SSH server openssh-4.2p1-18.12. I have ruled out the firewall and hosts.allow and hosts.deny files. When my ssh login attempt fails, here is what the logs say.

    On the client:

        $ ssh -vvv 192.168.1.5
        OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 192.168.1.5 [192.168.1.5] port 22.
        debug1: Connection established.
        debug1: identity file /home/nbuild/.ssh/identity type -1
        debug1: identity file /home/nbuild/.ssh/identity-cert type -1
        debug1: identity file /home/nbuild/.ssh/id_rsa type -1
        debug1: identity file /home/nbuild/.ssh/id_rsa-cert type -1
        debug1: identity file /home/nbuild/.ssh/id_dsa type -1
        debug1: identity file /home/nbuild/.ssh/id_dsa-cert type -1

    On the server:

        Aug 21 16:34:25 serverhost sshd[20736]: debug3: fd 4 is not O_NONBLOCK
        Aug 21 16:34:25 serverhost sshd[20736]: debug1: Forked child 20739.
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: entering fd = 7 config len 403
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: ssh_msg_send: type 0
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: done
        Aug 21 16:34:25 serverhost sshd[20739]: debug1: rexec start in 4 out 4 newsock 4 pipe 6 sock 7
        Aug 21 16:34:25 serverhost sshd[20739]: debug1: inetd sockets after dupping: 3, 3
        Aug 21 16:34:25 serverhost sshd[20739]: debug3: Normalising mapped IPv4 in IPv6 address
        Aug 21 16:34:25 serverhost sshd[20739]: Connection from 10.23.1.11 port 44340

    The above log on the server is when I enable DEBUG3 log level. However, with the default log level (INFO), the only thing the server logs is this:

        Aug 21 16:38:32 serverhost sshd[20749]: Did not receive identification string from 10.23.1.11

    Any hints? I feel I've tried everything already.
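
    A hedged diagnostic sketch, given that the server accepts the TCP connection but the client trace stops right after reading identity files (i.e. it apparently never receives the server's banner). Standard tools only; the interface name is an assumption:

        # from the 10.23.1.11 client: does anything come back after the TCP handshake?
        nc -v 192.168.1.5 22          # a healthy sshd prints "SSH-2.0-OpenSSH_..." immediately
        # on the SuSE box: is the banner sent, and does it make it past whatever routes between the subnets?
        tcpdump -ni eth0 port 22 and host 10.23.1.11
        # also worth checking on the server, since a hanging reverse-DNS lookup for the
        # 10.23.x.x client can delay the banner until the client gives up:
        time host 10.23.1.11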

    Read the article

  • lighttpd domains and url matching

    - by Manuel
    I'm trying to configure lighttpd so that:

    www.domain1.org/admin uses config1
    any other URL on www.domain1.org uses config2
    any URL on www.domain2.org uses config2

    So basically, domain1 and domain2 should use the same configuration except for when domain1 is accessed via a URL that starts with /admin. I have tried a number of variations so far, including this one:

        $HTTP["host"] =~ "domain1.org" {
            $HTTP["url"] =~ "^/admin" {
                # config1
                alias.url = ("/media/admin" => "/usr/share...",
                             "/static" => "/var/www/...")
                url.rewrite-once = (
                    "^(/media/admin.*)$" => "$1",
                    "^(/static.*)$" => "$1",
                    "^/favicon\.ico$" => "/media/favicon.ico",
                    "^(/.*)$" => "/application.fcgi$1",
                )
                server.document-root = "/var/www/application"
                fastcgi.debug = 1
                fastcgi.server = ( "/application.fcgi" =>
                    ( "main" => (
                        "socket" => "/var/www/application/application.sock",
                        "check-local" => "disable",
                    ) ),
                )
            }
            else $HTTP["url"] !~ "^/admin" {
                # config2
            }
        }
        $HTTP["host"] !~ "domain1.org" {
            # config2
        }

    But no matter what, accessing domain1.org/admin yields a 404. Is there anything that I am missing?
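
    A hedged observation on the host matches (the regex details are mine, not from the question): =~ "domain1.org" is an unanchored regex, so it also matches any host that merely contains that string, and the unescaped dot matches any character. Anchoring both host conditionals keeps the /admin block from being applied to, or skipped for, the wrong vhost:

        $HTTP["host"] =~ "^(www\.)?domain1\.org$" {
            $HTTP["url"] =~ "^/admin" {
                # config1
            }
            else $HTTP["url"] !~ "^/admin" {
                # config2
            }
        }
        $HTTP["host"] !~ "^(www\.)?domain1\.org$" {
            # config2
        }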

    Read the article

  • Nginx conditional not evaluating correctly

    - by cjc
    I'm running into a weird problem with nginx and how it evaluates conditionals. Here's the relevant configuration:

        set $cors FALSE;
        if ($http_origin ~* (http://example.com|http://dev.example.com:8000|http://dev2.example.com)) {
            set $cors TRUE;
        }
        if ($request_method = 'OPTIONS') {
            set $cors $cors$request_method;
        }
        if ($cors = 'TRUE') {
            add_header 'Access-Test' "$cors";
            add_header 'Access-Control-Allow-Origin' "$http_origin";
            add_header 'Access-Control-Allow-Methods' 'POST, OPTIONS';
            add_header 'Access-Control-Max-Age' '1728000';
        }
        if ($cors = 'TRUEOPTIONS') {
            add_header 'Access-Test' "$cors";
            add_header 'Access-Control-Allow-Origin' "$http_origin";
            add_header 'Access-Control-Allow-Methods' 'POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'X-Requested-With, X-Prototype-Version';
            add_header 'Access-Control-Max-Age' '1728000';
            add_header 'Content-Type' 'text/plain';
        }

    So, the conditional blocks never trigger. When I remove the conditions, I see that the "Access-Test" and "Access-Control-Allow-Origin" headers are set correctly, but, as noted, enabling the conditionals causes the headers not to be sent. I'm testing by running:

        curl -Iv -i --request "OPTIONS" -H "Origin: http://example.com" http://staging.example.com/

    Am I missing something obvious? I've tried the "if" with and without quotes, etc. This is nginx 1.2.9.
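
    A hedged alternative that sidesteps "if" entirely, since add_header inside multiple if blocks is a common source of surprises: compute the allowed origin with a map (placed in the http{} context; the regex just lists the same three origins) and rely on add_header skipping headers whose value is empty. Sketch only, not tested against this exact config:

        map $http_origin $cors_origin {
            default "";
            "~^http://(example\.com|dev\.example\.com:8000|dev2\.example\.com)$" $http_origin;
        }

        server {
            ...
            add_header 'Access-Control-Allow-Origin' "$cors_origin";
            add_header 'Access-Control-Allow-Methods' 'POST, OPTIONS';
            add_header 'Access-Control-Max-Age' '1728000';
        }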

    Read the article

  • Windows XP - Power surge on hub port

    - by Swift-Tuttle
    Hi,

    For the last few weeks I constantly get this error as a status bar balloon: "Power surge on hub port - A USB device has exceeded the power limits of its hub port." Because of this I am now unable to access any USB devices properly; they get disconnected intermittently.

    I did quite a few things to resolve this problem, firstly obviously through Windows help. I even tried all the things suggested on the Microsoft website (which essentially come down to checking and updating the driver), but in vain. One suggestion I found when I googled was to disable the USB2 controller through Device Manager, but since then at every startup the System Configuration utility comes up complaining that it has been changed, etc. (on that same site it is mentioned to ignore this message). After everything I still can't solve this problem. Any help is much appreciated. The system has Windows XP Service Pack 3 installed with all the updates up to last month. Please let me know if any other hardware info is required.

    UPDATE: My laptop is about 5 years old now; it's an HP with a Celeron 1.4G processor. Windows XP SP3 installed, all latest Windows updates installed, 2 USB ports available. The BIOS is HP 68DTD ver F.0A. Do I need to update my BIOS from somewhere, or is this a hardware problem altogether?

    Read the article

  • Connecting to network device behind NAT from local LAN using the external port and IP

    - by lumbric
    I noticed at several different LANs connected to the Internet through a NAT the following phenomenon. There is a server in the LAN, and there is a port forwarding to reach this server also from outside the LAN through the NAT. E.g. consider a LAN with the address range 192.168.0.* and an SSH server at 192.168.0.2 port 22, with a forwarding from port 2222 at the NAT 192.168.0.1 to 192.168.0.2:22. If the NAT's external IP is 44.33.22.11, one can connect to the SSH server through 44.33.22.11:2222.

    Surprisingly, this works only from outside the LAN. If one tries to connect to 44.33.22.11:2222 from behind the NAT, there is no answer. Of course one could simply use 192.168.0.2:22, but often it is simpler to use the external IP. The typical use case for me is the configuration on a laptop computer: usually the user uses any arbitrary Internet connection to connect to his home or office server, but sometimes he will also use the LAN to connect to it, and it would be annoying to have two different configurations or bookmarks.

    Why does it fail to connect from inside the LAN? Is there any good workaround?
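
    The usual explanation is missing "hairpin NAT" (NAT loopback): the gateway applies the DNAT only for connections arriving on its WAN side, so for a LAN client the server's reply goes straight back over the LAN with its private source address and the client drops it. If the NAT device happens to be a Linux box, a hedged sketch of the extra rules (addresses taken from the example above; interface and ports assumed):

        # rewrite the destination for LAN clients hitting the external IP:port
        iptables -t nat -A PREROUTING -s 192.168.0.0/24 -d 44.33.22.11 -p tcp --dport 2222 \
                 -j DNAT --to-destination 192.168.0.2:22
        # and masquerade the source so the server replies back through the gateway
        iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -d 192.168.0.2 -p tcp --dport 22 \
                 -j MASQUERADE

    On consumer routers that don't support this, split DNS (resolving the hostname to 192.168.0.2 inside the LAN) is the common workaround.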

    Read the article

  • Postfix not delivering from external senders and not logging anything

    - by simendsjo
    Some semi-recent upgrades must have broken my postfix+dovecot configuration, but I'm having problems finding out what the cause is. My domain is simendsjo.me with the MX record mail.simendsjo.me. I can send mail to both local and external recipients, and it delivers mail from internal mailboxes. The problem is that mail from external senders isn't delivered, and nothing is logged at all. The external sender also doesn't receive any errors.

    I have no idea where to even start looking, as nothing is logged at all when external mail is sent to my server. So the first issue would be: how can I turn on some debug messages for postfix? I've tried:

        debug_peer_level = 2
        debug_peer_list = simendsjo.me

    and also _level = 999 and _list = gmail.com, which is where I'm sending emails from, but nothing is logged. When sending mails from a local mailbox (but from an outside computer, not localhost), a lot is logged. I don't have any rules in iptables either. Any ideas how I can get some debug messages for postfix?
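
    A hedged first pass, based on the symptom that nothing is logged at all (which usually means the external connection never reaches smtpd, rather than postfix silently dropping mail). Standard tools only; the hostnames come from the question:

        dig +short MX simendsjo.me           # does the MX still point at mail.simendsjo.me?
        dig +short mail.simendsjo.me         # and does that name resolve to this server's public IP?
        postconf inet_interfaces mynetworks  # is smtpd bound to all interfaces, not just loopback?
        netstat -tlnp | grep ':25'           # is anything listening on port 25 at all?
        # from an external host: many ISPs/VPS providers block inbound port 25
        nc -v mail.simendsjo.me 25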

    Read the article

  • Group Policy - Published software not upgrading

    - by VokinLoksar
    I'm testing this with the Mercurial MSIs, but it's the same for other packages. I've created a new group policy and added an old version of Mercurial to User software installation as a Published package. On a Windows 7 client I install the package through Programs and Features. The installation works fine.

    Now I would like to publish an updated version of Mercurial, so I create a new Published package. Under 'Upgrades' I configure it to replace the old version (upgrade also doesn't work) and mark this upgrade as 'Required'. The old package is not removed. The Windows 7 client is then restarted. When I log back in, I see a status message saying something like 'Removing managed software Mercurial ...'. There is no message about installation of the upgrade. If I look in Programs and Features, I can see the new version of Mercurial listed; however, the actual Mercurial directory under Program Files is missing. It's as though the installation recorded information about the MSI but didn't actually install anything after removing the old version.

    As I mentioned, this isn't specific to Mercurial. I've tried other apps and have yet to find one that can be upgraded via a Published package. Using Assigned packages in Computer Configuration works without problems, but I would like this software to be optional rather than required. Ideas?
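
    A hedged way to see what the client actually decides during that logon (standard tooling; the logging switch is the documented Windows Installer policy value, the report path is just an example):

        REM confirm the client sees the upgrade relationship between the two packages
        gpresult /h C:\temp\gp-report.html
        REM turn on verbose Windows Installer logging so the removal/installation sequence is captured
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Installer" /v Logging /t REG_SZ /d voicewarmup /f
        REM the resulting MSI*.log files land in %TEMP% of the installing context after the next attempt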

    Read the article

  • Force Installing a Radeon HD 2100 on Windows 8

    - by Click Ok
    I'm trying to force-install a Radeon HD 2100 on Windows 8. I've found this link from AMD with the drivers for Windows 7: http://support.amd.com/us/gpudownload/windows/legacy/Pages/legacy-radeonaiw-vista64.aspx

    I also know that AMD has dropped support for the Radeon HD 4000 series and older: http://www.techspot.com/news/48321-amd-drops-windows-8-support-for-radeon-hd-4000-and-older.html

    Now, let's get to the problem. If I install the 12.6 driver, Windows will stick with the "basic display adapter", and this is bad for 3D games like Minecraft, which now runs really slowly compared with the previous Windows 7 installation. Force-installing the Catalyst driver could help fix this, so I follow these steps:

    1. Extract the Catalyst driver (C:\AMD\Support\12-6-legacy_vista_win7_64_dd_ccc_whql)
    2. Right-click the "basic display adapter" in Device Manager and choose "Update driver"
    3. Search on PC
    4. Choose the driver "With Disk": "C:\AMD\Support\12-6-legacy_vista_win7_64_dd_ccc_whql\Packages\Drivers\Display\W76A_INF"
    5. There is a big list of drivers, and the nearest driver to the HD 2100 is "Radeon HD 2350 Series"

    My questions: Why isn't "Radeon HD 2100 Series" listed? (Or where is it listed?) In theory it must be listed: the first link above shows "This article applies to the following configuration(s): (...) AMD Radeon HD 2000 Series". Am I doing something wrong?

    Read the article

  • Setting up a global MySQL Cluster in the cloud

    - by GregB
    I'm giving the question an overhaul to more specifically identify where I need help. I use two tools to manage a bunch of cloud servers: Puppet and Rundeck. Both of these can be configured to use a MySQL backend. I'd like to set up an instance of each application in both the U.S. and the U.K., treating the U.K. servers as hot standbys in case of failure in the U.S. I want to use a MySQL cluster so that the data is automatically replicated from the U.S. to the U.K. Because these are hot standbys, high performance is not a goal; redundancy and data integrity are most important.

    My question revolves around the setup of the MySQL cluster. I want to run three servers, each one running a data node, a SQL node, and a management node. Is this a valid configuration for MySQL Cluster? If so, could someone point me in the right direction for creating such a setup? I've downloaded the official tarball and the official Debian package, and the documentation for them contradicts many of the online tutorials. I'm installing on Ubuntu 10.04.
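
    For orientation only, a hedged sketch of what a config.ini for the NDB management node looks like (hostnames are placeholders). One constraint worth knowing while judging the three-server plan: NDB requires the number of data nodes to be a multiple of NoOfReplicas, so with the usual NoOfReplicas=2 you would run 2 or 4 data nodes rather than 3; the SQL and management processes can still live on all three machines:

        [ndbd default]
        NoOfReplicas=2
        DataMemory=512M
        IndexMemory=64M

        [ndb_mgmd]
        HostName=mgmt1.example.com

        [ndbd]
        HostName=data1.example.com

        [ndbd]
        HostName=data2.example.com

        # one [mysqld] slot per SQL node that will connect
        [mysqld]
        [mysqld]
        [mysqld]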

    Read the article

  • Outlook 2010 search not working after upgrade to windows 8

    - by Klaaz
    After upgrading my computer to Windows 8, Outlook 2010 has stopped displaying search results. Normally you can enter (part of) a word in the search box on top of the inbox list and it will show you results immediately. Even mails already visible on the screen are not found. Is anybody familiar with this issue?

    Update: maybe relevant: I use a Google Apps Pro account. All mail is synced and locally available in Outlook 2010. I did not change this in any way while upgrading; it was working perfectly before. I can scroll through all the e-mails, and new mails are coming in as expected. This morning I received two mails from a person by the name of Rosanne. When searching on her name in Outlook it gives me one (1) result, the last mail from today.

    Update 2: Rebuilding the index seemed to work, but after another day it stopped working again. No results whatsoever in Outlook search. Rebuilding indexes every day is not an option as it takes several hours. I suspect it has something to do with the fact that I use Google Apps Pro; it acts like an Exchange server to Outlook. In Indexing Options (configuration) I added the directories containing the PST from this service (mail is also synced locally).

    Read the article

  • Is there a way to use VirtualBox without using it's resource registry?

    - by Catskul
    Summary: VirtualBox seems to want everything to be "registered", which makes it much more annoying to work with on the command line. I'm attempting to create an automated script which will create, move, start, stop, and destroy virtual machines and virtual disks. Requiring registration will complicate the task for the following reasons:

    - it leaves state information around that can cause unpredicted edge cases, causing scripts to fail;
    - it creates potential namespace collisions for multiple processes creating VMs with the same name;
    - moving/copying resources on the same machine is more complicated because references in the registry need to be updated;
    - copying resources (disk + VM combination) to another machine requires reconfiguration once they reach their target machine, and requires the transfer of extra metadata to do the reconfiguration;
    - if something unexpectedly fails, and an unregister thus fails to happen, leftover configuration information can cause problems in subsequent runs.

    Use case: my specific use case is a continuous integration server which creates and destroys VMs and disk images, potentially with the same name, and would require more logic to deal with the registry's statefulness.

    Imaginary example: it seems that I should just be able to, for example (using some imaginary and/or incorrect commands):

        mkdir foobar
        customdiskimg_script ./foo/foo.vdi
        vboxmanage createvm --name "foo" --ostype Linux --basefolder ./foo/foo.xml
        vboxmanage storagectl ./foo/foo.xml --name foo --add ide
        vboxmanage storageattach --storagectl foo --medium ./foo/foo.vdi ./foo/foo.xml
        vboxmanage startvm ./foo/foo.xml

    TL;DR: Is there a way to use VirtualBox without "registering" hard disks and VMs?
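
    If registration turns out to be unavoidable, a hedged teardown sketch for the CI job, using VBoxManage subcommands that do exist (the VM name "foo" mirrors the imaginary example above):

        VBoxManage controlvm "foo" poweroff 2>/dev/null || true
        VBoxManage unregistervm "foo" --delete     # removes the VM and deletes its attached media
        VBoxManage list hdds                       # check for stale registered disks
        # any stragglers can be dropped with:
        # VBoxManage closemedium disk <uuid-or-path> --delete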

    Read the article

  • trouble executing php scripts with nginx

    - by lovesh
    My nginx config looks like this:

        server {
            listen 80;
            server_name localhost;

            location / {
                root /var/www;
                index index.php index.html;
                autoindex on;
            }

            location /folder1 {
                root /var/www/folder1;
                index index.php index.html index.htm;
                try_files $uri $uri/ index.php?$query_string;
            }

            location /folder2 {
                root /var/www/folder2;
                index index.php index.html index.htm;
                try_files $uri $uri/ index.php?$query_string;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    The problem with the above setup is that I am not able to execute php files. Now, as per my understanding of nginx config rules, when I am in my webroot (/), which is /var/www, the value of $document_root becomes /var/www, so when I request localhost/hi.php the fastcgi_param SCRIPT_FILENAME becomes /var/www/hi.php, and that is the actual path of the php script. Similarly, when I request localhost/folder1/hi.php, $document_root becomes /var/www/folder1 because this is specified as the root in folder1's location block, so again the fastcgi_param SCRIPT_FILENAME becomes /var/www/folder1/hi.php. But because the above configuration does not work, there is something wrong with my understanding. Please help?
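
    A hedged note on two things that commonly bite in configs shaped like this, plus a sketch of the usual fix. First, a request for /hi.php is matched by the regex block "location ~ \.php$", not by "location /", and no root is defined in that block, so $document_root falls back to nginx's compiled-in default rather than /var/www. Second, root is prepended to the full URI, so root /var/www/folder1 inside location /folder1 maps /folder1/hi.php to /var/www/folder1/folder1/hi.php. Moving the root up to the server level addresses both (sketch only, assuming the folders really live under /var/www):

        server {
            listen 80;
            server_name localhost;
            root /var/www;                # one root, inherited by every location
            index index.php index.html;

            location / {
                try_files $uri $uri/ /index.php?$query_string;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }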

    Read the article

  • Apache LocationMatch does not work for group

    - by dma_k
    I would like to configure Apache to proxy mldonkey running at localhost. Initially I used the following configuration:

        <IfModule mod_proxy.c>
            <LocationMatch /(mldonkey|bittorrent)/>
                ProxyPass http://localhost:4080/
                ProxyPassReverse http://localhost:4080/
            </LocationMatch>
        </IfModule>

    and it didn't work! error.log reads:

        [error] [client 192.168.1.1] File does not exist: /var/www/mldonkey

    which means that Apache does not intercept the URL. However, when I change the regexp to the following:

        <LocationMatch /mldonkey/>

    it started to work (i.e. mod_proxy functions OK). I have tried the following alternatives:

        <LocationMatch ^/(mldonkey|bittorrent)/>
        <LocationMatch ^/(mldonkey|bittorrent)/.*>
        <LocationMatch ^/(mldonkey|bittorrent)>
        <LocationMatch /(mldonkey|bittorrent)>
        <LocationMatch "^/(mldonkey|bittorrent)/">
        <LocationMatch "/(mldonkey|bittorrent)">
        <LocationMatch "/(mldonkey)">
        <LocationMatch "/(mldonkey)/">

    with no positive result. I am stuck. Please give me a hint where to look.

    P.S. Apache Server 2.2.19.
    P.P.S. I would be happy if <LocationMatch> would work, without using the heavy artillery of mod_rewrite.
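
    A hedged alternative for Apache 2.2, where ProxyPass inside <LocationMatch> is not reliably honoured (the directive has no way of knowing which part of the regex-matched URL to strip): let mod_proxy do the regex matching itself with ProxyPassMatch. Sketch only; ports and paths are taken from the question:

        <IfModule mod_proxy.c>
            ProxyPassMatch ^/(mldonkey|bittorrent)/(.*)$ http://localhost:4080/$2
            ProxyPassReverse /mldonkey/ http://localhost:4080/
            ProxyPassReverse /bittorrent/ http://localhost:4080/
        </IfModule>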

    Read the article

  • Mixing both local and nonlocal addresses on three switches

    - by klew
    I have four computers that have non-local addresses like 150.X.X.X. Now I am also getting another few computers that should only be accessible through a gateway (it will be a computing cluster), and their addresses are 10.0.0.X. I also wanted to include those four older computers in this new cluster, but I want them to remain accessible from the Internet on the non-local addresses, so I would like to set them up with both 150.X.X.X and 10.0.0.X addresses (I've set this up as interface eth0:0 since I have only one NIC).

    The new computers have their own switch and the old computers also have their own switch. Both of these are connected to another (third) switch. The problem is that the old computers see each other (I can ping them), and the new computers also see each other, but I can't ping an old computer from a new computer and vice versa. However, pinging on the non-local addresses works as expected. I looked into the switch configuration and didn't find anything useful. I have no idea what I missed here. Can somebody help? All computers run Ubuntu Server 10.04.
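
    A hedged set of checks from one of the new 10.0.0.X nodes (standard Linux tools; the interface name and target address are assumptions):

        ip addr show eth0           # on an old node: is the 10.0.0.X alias actually up on eth0:0?
        arping -I eth0 10.0.0.5     # does ARP resolve across the third switch? if not, suspect VLAN
                                    # membership or port isolation on one of the switches
        ip route                    # confirm 10.0.0.0/24 is a connected route on eth0, not being
                                    # sent to the 150.X.X.X default gateway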

    Read the article
