Search Results

Search found 15400 results on 616 pages for 'log4net configuration'.

Page 42/616 | < Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • Side-By-Side Configuration Error VC90.CRT

    - by Swiss
    I keep receiving the following error when trying to run MiKTeX 2.8 or Visual Studio 2008 on 64-bit Windows Vista. It's particularly odd because both programs were working problem-free until a few days ago.

        The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log for more detail.

    Opening the Application log provides the following information:

        Activation context generation failed for "C:\Program Files (x86)\MiKTeX 2.8\miktex\bin\texworks.exe".
        Error in manifest or policy file "C:\Program Files (x86)\MiKTeX 2.8\miktex\bin\Microsoft.VC90.CRT.MANIFEST" on line 4.
        Component identity found in manifest does not match the identity of the component requested.
        Reference is Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.30729.4148".
        Definition is Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.30729.1".
        Please use sxstrace.exe for detailed diagnosis.

    It looks like the problem is with Microsoft.VC90.CRT.MANIFEST, but I am not sure why or how to fix it. I have tried uninstalling/reinstalling Visual Studio and MiKTeX, as well as uninstalling/reinstalling Microsoft's C++ Redistributable, but nothing seems to fix the problem.
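
    A hedged diagnostic sketch, assuming a stock Vista sxstrace.exe as the event-log text suggests: capture an activation trace while reproducing the failure, then parse it to see exactly which assembly version lookup fails.

        rem capture a side-by-side activation trace while reproducing the error
        sxstrace.exe Trace -logfile:C:\Temp\sxstrace.etl
        rem (launch texworks.exe, let it fail, then press Enter to stop tracing)
        rem convert the binary trace to readable text
        sxstrace.exe Parse -logfile:C:\Temp\sxstrace.etl -outfile:C:\Temp\sxstrace.txt

    The mismatch between the referenced 9.0.30729.4148 and the installed 9.0.30729.1 suggests the manifest expects a newer VC90 CRT than is present; installing the updated Visual C++ 2008 SP1 redistributable is one commonly suggested remedy, though that is an assumption about this particular setup rather than a confirmed fix.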

    Read the article

  • PXE boot and DHCP server configuration Failing Auto Installation

    - by Harihara Vinayakaram
    I have an ISC DHCP server installed on Ubuntu 9.10. I have managed to successfully boot a PXE client, obtain a DHCP address, and load the initrd.gz file. But I am facing a vague problem when the Debian installer starts up and tries to get a DHCP lease: the client sends a DHCP request, and I verified that it is the same MAC address, but I get a DHCPDECLINE (the client declines the address). The server then offers every address in the pool, and finally there is a DHCPNAK (no more free leases). I tried using the option no-ping, and also option one-client-one-lease, but it does not help. If I set the client to use a fixed-address then the problem goes away and the installation proceeds smoothly. Can you give me any clues on what the DHCP server configuration should be? My dhcpd.conf looks like this:

        ddns-update-style none;
        option domain-name "hadoop-myorg.org";
        option domain-name-servers 192.168.3.5;
        default-lease-time 600;
        max-lease-time 7200;

        group {
            filename "pxelinux.0";
            next-server 192.168.13.184;
            host hadoop1 {
                hardware ethernet 90:e6:ba:d5:53:f8;
            }
        }

        subnet 192.168.13.0 netmask 255.255.255.0 {
            option routers 10.0.0.254;
            pool {
                option domain-name-servers 192.168.3.5;
                max-lease-time 3000;
                range 192.168.13.55 192.168.13.65;
                deny unknown-clients;
            }
        }
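
    Not a confirmed fix, but since the installation works with a fixed address, one sketch worth trying is to pin the installer's lease in the existing host declaration (the hardware address above is already known) and see whether the DHCPDECLINE goes away:

        host hadoop1 {
            hardware ethernet 90:e6:ba:d5:53:f8;
            fixed-address 192.168.13.70;   # hypothetical address outside the dynamic range
        }

    The 192.168.13.70 value is an assumption chosen outside the 192.168.13.55-65 pool. A DHCPDECLINE normally means the client's ARP check found the offered address already in use, so another device or a second DHCP server answering on that range is also worth ruling out.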

    Read the article

  • OpenVPN Bridge LAN-to-LAN Configuration?

    - by Shad Reese
    I'm trying to configure an OpenVPN bridge LAN-to-LAN setup. Currently, I have the OpenVPN bridge server/client setup up and running. On the server side my br-lan interface has tap0, eth0, and wlan0 in the bridge group. On the client side the br-lan interface has eth0 and wlan0 in the bridge group; the client tap0 is outside of the br-lan group. Currently the two bridge groups are connected via the wlan0 interfaces (the server side is the access point and the client side is the wireless client). My goal is to connect the two bridge groups with a wireless VPN pipe. My network configuration:

        Server: br-lan: 10.4.96.50
        Client: br-lan: 10.4.96.75
                tap0:   10.4.96.100  <---- issued by the VPN server

    Unfortunately, I'm stuck with using a bridge instead of a routed OpenVPN setup. My question is: how (if possible) do I add the client tap0 interface to the client bridge group, so as to ensure all traffic between the server/client bridge groups uses the VPN pipe?

    Server config file:

        config openvpn sample_server
            # Set to 1 to enable this instance:
            option enable 1
            option port 1194
            option proto udp
            option dev tap0
            option key /etc/easy-rsa/keys/server.key
            option dh /etc/easy-rsa/keys/dh1024.pem
            option ifconfig_pool_persist /tmp/ipp.txt
            option server_bridge "10.4.96.50 255.255.255.0 10.4.96.100 10.4.96.200"
            list push "redirect-gateway local def1"
            list push "dhcp-option DNS 10.4.96.14"
            option duplicate_cn 1
            option comp_lzo 1
            option max_clients 100
            option log /tmp/openvpn.log
            option verb 3

    Client config file:

        config 'openvpn' 'sample_client'
            option 'enable' '1'
            option 'client' '1'
            option 'dev' 'tap'
            option 'proto' 'udp'
            list 'remote' '10.4.96.50 1194'
            option 'status' /tmp/openvpn-status.log
            option 'log' /tmp/openvpn.log
            option 'ca' '/etc/easy-rsa/keys/ca.crt'
            option 'cert' '/etc/easy-rsa/keys/client.crt'
            option 'key' '/etc/easy-rsa/keys/client.key'
            option 'comp_lzo' '1'
            option 'verb' '5'

    Thanks in advance.
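
    One possible sketch (an assumption, not a verified OpenWrt recipe): have OpenVPN enslave tap0 into br-lan once the tunnel comes up, for example via a client-side up hook, so that bridged traffic flows through the tunnel rather than the bare wireless link. The script path and the 'up'/'script_security' options below are hypothetical additions to the client config shown above.

        #!/bin/sh
        # hypothetical /etc/openvpn/up.sh on the client, referenced from the client
        # config with "option up /etc/openvpn/up.sh" and "option script_security 2"
        brctl addif br-lan tap0     # enslave the VPN tap device into the bridge
        ifconfig tap0 0.0.0.0 up    # the bridge, not tap0, carries the IP address

    Note that with wlan0 and tap0 both in the same bridge on both ends this can create a switching loop, so the wireless client link would normally be taken out of the bridge (or blocked by STP) once the tunnel is up - that part is an assumption about the intended topology.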

    Read the article

  • phpBB configuration problem under Nginx

    - by zvikico
    Hi, I have a phpBB site running with Nginx (PHP via FastCGI). It works OK. However, some specific actions like moving or deleting a topic fail; instead, I'm redirected to the forum index. I think it is a problem with URL redirection or rewriting. My rewrite rule looks like this:

        if (!-e $request_filename) {
            rewrite ^/(.*)$ /index.php?q=$1 last;
            break;
        }

    Any help would be appreciated. My full configuration file is:

        server {
            listen      80;
            server_name forum.xxxxx.com;
            access_log  /xxxxx/access.log;
            error_log   /xxxxx/error.log;

            location = / {
                root  /xxxxx/phpBB3/;
                index index.php;
            }

            location / {
                root  /xxxxx/phpBB3/;
                index index.php index.html;
                if (!-e $request_filename) {
                    rewrite ^/(.*)$ /index.php?q=$1 last;
                    break;
                }
            }

            error_page 404 /index.php;
            error_page 403 /index.php;
            error_page 500 502 503 504 /index.php;

            # serve static files directly
            location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico)$ {
                access_log off;
                expires    30d;
                root       /xxxxx/phpBB3/;
                break;
            }

            # hide protected files
            location ~* \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ {
                deny all;
            }

            location ~ \.php$ {
                fastcgi_pass  127.0.0.1:8888;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /xxxxx/phpBB3/$fastcgi_script_name;
                fastcgi_param QUERY_STRING    $query_string;
                fastcgi_param REQUEST_METHOD  $request_method;
                fastcgi_param CONTENT_TYPE    $content_type;
                fastcgi_param CONTENT_LENGTH  $content_length;
                fastcgi_param REMOTE_ADDR     $remote_addr;
                fastcgi_param REMOTE_PORT     $remote_port;
            }
        }
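
    A minimal sketch of an alternative, assuming an nginx new enough to have try_files (0.7.27+, which may not apply to the version in use): routing missing paths through index.php without the if block also keeps the original query string, which actions such as topic moves pass along and which the q=$1 rewrite above drops.

        location / {
            root      /xxxxx/phpBB3/;
            index     index.php index.html;
            # pass unknown paths to index.php, keeping the original query string
            try_files $uri $uri/ /index.php?$args;
        }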

    Read the article

  • Windows Server 2008 SMTP & POP3 Configuration

    - by Alex Hope O'Connor
    This is the first time I have ever configured a VPS server without 3rd-party applications such as the Plesk control panel. I have got most functionality working on all my websites, except that I am very unsure how to set up email on this new server. Basically I want standard POP3 functionality: a bunch of accounts with private mailboxes, all able to send and receive email using their individual usernames and passwords. My server setup is pretty simple; it's a VPS with IIS and DNS Server running. To set up SMTP and POP3 I added the SMTP Server feature through the Server Manager console (I'm very unsure of the configuration, as the guides I found did not explain it), and I then installed a 3rd-party application called Visdeno SMTP Extender, as it claims to be a POP3 service providing account handling and the ability to communicate with email clients. That is as far as I have gotten, as I cannot seem to find much information on the subject. So can someone please tell me how to go about configuring these services in order to provide standard SMTP and POP3 functionality? Thanks, Alex.

    Read the article

  • Trouble with local id / remote id configuration of VPN

    - by Lynn Owens
    I have a NetGear UTM firewall and a Windows machine running NetGear's VPN client. The Windows machine I can put on the UTM network and take off of it. When I am cabled into the local (internal) network, the following configuration works:

        UTM:
            Local Id:  Local WAN IP: (the UTM's WAN IP address)
            Remote Id: User FQDN: utm_remote1.com
        Client:
            Local Id:  DNS: utm_remote1.com
            Remote Id: (the UTM's WAN IP address)
            Gateway authentication: pre-shared key
            Policy remote endpoint: FQDN: utm_remote1.com

    But when I'm off the UTM's internal local network and simply coming in from the internet, this does not work. It simply repeats SEND phase 1 before giving up. Since I know that the UTM WAN IP is accessible from both inside and outside the network, I figured the problem was with the client local id. So I tried the following:

        UTM:
            Local Id:  Local WAN IP: (the UTM's WAN IP address)
            Remote Id: (a DN of a self-signed certificate I created for the client and uploaded into the UTM certificates)
        Client:
            Local Id:  (the DN of the aforementioned self-signed cert)
            Remote Id: (the UTM's WAN IP address)
            Gateway authentication: (the aforementioned self-signed cert)
            Policy remote endpoint: ... er, ... my choices are IP and FQDN... not sure what to put here

    No matter what I've tried, it just keeps repeating SEND phase 1. Any ideas?

    Read the article

  • What is the optimal hardware configuration for a heavy-load LAMP application

    - by Piotr K.
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. We are currently using two HP DLG 380 servers with two 4-core processors each, which can handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster using them, or is it better to go with more high-end hardware? I am particularly curious about:

        how many servers are needed and how powerful they should be (number of processors/cores, size of RAM)
        what network equipment should be used (what kind of switches, network cards)
        any other hardware, like particular disk storage solutions, that is needed

    Another question is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).

    Read the article

  • iptables configuration to work with apache2 mod_proxy

    - by swdalex
    Hello! I have an iptables config like this:

        iptables -F INPUT
        iptables -F OUTPUT
        iptables -F FORWARD
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP
        iptables -P FORWARD DROP
        iptables -A INPUT  -p tcp --dport 22  -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 22  -j ACCEPT
        iptables -A INPUT  -p tcp --dport 80  -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 80  -j ACCEPT
        iptables -A INPUT  -p tcp --dport 443 -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 443 -j ACCEPT

    I also have an Apache virtual host:

        <VirtualHost *:80>
            ServerName wiki.myite.com
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass        / http://localhost:8901/
            ProxyPassReverse / http://localhost:8901/
            <Location />
                Order allow,deny
                Allow from all
            </Location>
        </VirtualHost>

    My primary domain www.mysite.com works well with this configuration (I don't use a proxy redirect on it), but my virtual host wiki.mysite.com is not responding. Please help me set up an iptables config that allows wiki.mysite.com to work too. I think I need to set up the iptables FORWARD options, but I don't know how.

    Update: I have one server with one IP. On the server I have Apache 2.2 on port 80. I also have Tomcat 6 on port 8901. In Apache I set up forwarding of the domain wiki.mysite.com to Tomcat (mysite.com:8901). I want to secure my server by disabling all ports except 80, 22 and 443.
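
    A hedged guess at the missing piece, not a verified fix for this box: the proxied requests to localhost:8901 travel over the loopback interface, which the DROP policies above never allow, so FORWARD rules are not actually involved. A minimal sketch of the extra rules:

        # allow Apache to reach the Tomcat backend over loopback
        iptables -A INPUT  -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT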

    Read the article

  • Installation Error on Windows Vista: "Side-by-Side configuration is incorrect"

    - by Maxim Z.
    NOTE: This is not a dupe of this other question. That question refers to a similar problem with 2 programs, while I'm only having it with 1, so the solution there doesn't apply to my situation. My relative asked me to install H&R Block 2009 on his Windows Vista 32-bit computer. I ran the installation program, which succeeded, but when I try to open the application itself, it gives me the following error:

        The application failed to start because its side-by-side configuration is incorrect. Please see the application event log for more detail.

    Here are the steps I've taken so far to try to remedy this problem:

        In an elevated command prompt, run the command: sfc /scannow
        Uninstall H&R Block 2009
        Uninstall Microsoft Visual C++ 2005 Redistributable
        Reinstall Microsoft Visual C++ 2005 Redistributable by downloading it from the MSFT website
        Reinstall H&R Block 2009

    This didn't fix it. I've searched for a long time and haven't found anything that works. The H&R Block site itself states that the way to fix this problem is to uninstall and reinstall H&R Block 2009. Has anyone run into this issue before? If so, how can I fix it? Thanks in advance.

    Read the article

  • Configure apache to reverse proxy for specific name

    - by Phrogz
    I have a working intranet server that:

        1. properly serves some content from http://hqmktgwb01/
        2. is currently properly configured to reverse proxy from http://hqmktgwb01/dashstats to a round-robin of localhost:3000 - localhost:3003
        3. also has the DNS name dashstats (pointing to the same IP)

    The current working configuration file can be found here: http://pastie.org/1426082

    I would like to modify the configuration so that:

        4. http://dashstats/ performs the same reverse proxying as http://hqmktgwb01/dashstats

    I (naively) modified the config like this: http://pastie.org/1426047 (added lines 90-98), but this is not a valid Apache config. Please help me modify the original config file to accomplish 1-4 above.
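
    Since the pastie configs are external, the following is only a sketch under assumptions (name-based virtual hosts enabled, mod_proxy and mod_proxy_balancer loaded, backend ports as described): a separate vhost answering to the dashstats name could proxy the site root to the same backends.

        # hypothetical vhost answering to the bare "dashstats" DNS name
        <VirtualHost *:80>
            ServerName dashstats
            ProxyRequests Off
            ProxyPass        / balancer://dashstats_cluster/
            ProxyPassReverse / balancer://dashstats_cluster/
            <Proxy balancer://dashstats_cluster>
                BalancerMember http://localhost:3000
                BalancerMember http://localhost:3001
                BalancerMember http://localhost:3002
                BalancerMember http://localhost:3003
            </Proxy>
        </VirtualHost>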

    Read the article

  • Debian network bridge configuration - /etc/network/interfaces

    - by Mathias
    I'm running a Lenny Xen dom0 hosting multiple virtual machines in a routed IP setup. To get an additional private subnet, I created the bridge xenbr0 in the dom0 with the following commands:

        brctl addbr xenbr0
        ifconfig xenbr0 10.0.0.1 netmask 255.255.255.0
        ifconfig xenbr0 up

    This works as expected, and domU interfaces are added to the bridge by Xen on VM start. My only problem is: how the heck do I specify this configuration in /etc/network/interfaces so that it remains permanent and the bridge is available after a reboot? I tried the following config, as found in a lot of tutorials:

        auto xenbr0
        iface xenbr0 inet static
            address 10.0.0.1
            netmask 255.255.255.0
            network 10.0.0.0
            broadcast 10.0.0.255
            bridge_stp no

    I get two different errors, depending on whether the bridge already exists or not. If it doesn't exist:

        root@dom0:~# brctl show
        bridge name     bridge id           STP enabled     interfaces
        root@dom0:~# /etc/init.d/networking restart
        Reconfiguring network interfaces...if-up.d/mountnfs[eth0]: waiting for interface xenbr0 before doing NFS mounts (warning).
        SIOCSIFADDR: No such device
        xenbr0: ERROR while getting interface flags: No such device
        SIOCSIFNETMASK: No such device
        SIOCSIFBRDADDR: No such device
        xenbr0: ERROR while getting interface flags: No such device
        xenbr0: ERROR while getting interface flags: No such device
        Failed to bring up xenbr0.
        done.

    And if it exists:

        root@dom0:~# brctl show
        bridge name     bridge id           STP enabled     interfaces
        xenbr0          8000.000000000000   no
        root@dom0:~# /etc/init.d/networking restart
        Reconfiguring network interfaces...if-up.d/mountnfs[eth0]: waiting for interface xenbr0 before doing NFS mounts (warning).
        RTNETLINK answers: File exists
        Failed to bring up xenbr0.
        done.

    Could anyone point me in the right direction, please? The bridge works fine when created manually; I just need the right config file entries. Most tutorials I found add some devices to the bridge in the config - is that maybe the reason why it is not working? I don't have any interfaces I want to add to the bridge on creation, as they get added later on VM start... Thanks, Mathias
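
    A sketch worth trying, assuming the bridge-utils package is installed (it provides the bridge_* hooks for ifupdown): declaring the bridge with no member ports usually lets ifup create the empty bridge itself at boot.

        auto xenbr0
        iface xenbr0 inet static
            address 10.0.0.1
            netmask 255.255.255.0
            bridge_ports none    # create the bridge with no interfaces enslaved
            bridge_stp no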

    Read the article

  • vhost configuration for owncloud

    - by Razer
    I'm using apache2 for hosting ownCloud. I configured a vhost file for ownCloud, but every time I go to the site my browser downloads a ruby file. Here is my vhost configuration:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName http://rsserver.fritz.box
            DocumentRoot /var/www/owncloud/

            <Directory /var/www/owncloud/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    The Apache error log tells me:

        [Sat Jun 16 20:46:04 2012] [error] [client xx.xx.xx.xx] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/owncloud/core/templates/403.php

    mod_rewrite is enabled. Where is the problem?
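
    Two hedged observations rather than a confirmed fix: ServerName normally takes a bare hostname (no http:// scheme), and the logged error usually means an .htaccess under the DocumentRoot resets Options, so FollowSymLinks has to be re-enabled wherever mod_rewrite runs. A minimal sketch of adjustments to the vhost above (an assumption about this install):

        ServerName rsserver.fritz.box

        <Directory /var/www/owncloud/>
            Options +Indexes +FollowSymLinks +MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>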

    Read the article

  • Varnish configuration, NamevirtualHosts, and IP Forwarding

    - by Brent
    I currently have a bunch of NameVirtualHost-based websites, load balanced between 3 apache2 servers using ldirectord. I would like to insert Varnish as a reverse web proxy between ldirectord and Apache in the following way:

        a request comes in to ldirectord
        it is then load balanced between the 3 apache2 servers and varnish, with a weight of 1 for the webservers and 99 for varnish (so if varnish is rebooted, the webservers will take over seamlessly)
        varnish will then load balance its requests between my apache2 servers

    However, the Varnish part is not working. I wonder whether this has to do with the fact that my Apache servers use x.x.x.x:80 for their NameVirtualHosts, instead of *:80? (They have to do this, since each server hosts multiple IP addresses.) Or perhaps it has to do with the need for IP forwarding to be set up on the Varnish server? (I did echo 1 > /proc/sys/net/ipv4/ip_forward on this server - is that sufficient?) How can I debug this problem? ldirectord doesn't produce logs of what it does with each request (and if it did, I would be overwhelmed with information, since I'm serving hundreds of requests per second). The varnish log shows the ldirectord server connecting to it every 5 seconds, but nothing else. I have set up a test site using this configuration, but it fails - no Apache access logs, no applicable Varnish logs.
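
    A minimal Varnish VCL sketch (2.x-era syntax, with hypothetical backend IPs) showing the shape of a round-robin director. Because Apache binds its NameVirtualHosts to specific x.x.x.x:80 addresses, each .host below would have to be one of those exact addresses; Varnish passes the original Host header through by default, so the name-based vhosts should still match.

        backend web1 { .host = "192.168.1.11"; .port = "80"; }
        backend web2 { .host = "192.168.1.12"; .port = "80"; }
        backend web3 { .host = "192.168.1.13"; .port = "80"; }

        director apaches round-robin {
            { .backend = web1; }
            { .backend = web2; }
            { .backend = web3; }
        }

        sub vcl_recv {
            set req.backend = apaches;
        }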

    Read the article

  • nconf deployment.ini configuration for a basic Nagios server on CentOS 6.2

    - by jshin47
    I have set up nconf and Nagios, but I cannot figure out how to configure deployment.ini to properly deploy the generated configuration to /usr/local/nagios/etc. Here are the directory listings of interest:

        [jshin@nag0 tmp]$ ls
        Default_collector  global
        [jshin@nag0 tmp]$ cd Default_collector/
        [jshin@nag0 Default_collector]$ ls
        advanced_services.cfg  hostgroups.cfg  service_dependencies.cfg  services.cfg
        host_dependencies.cfg  hosts.cfg       servicegroups.cfg
        [jshin@nag0 Default_collector]$ cd ..
        [jshin@nag0 tmp]$ cd global/
        [jshin@nag0 global]$ ls
        checkcommands.cfg  contacts.cfg        misccommands.cfg       timeperiods.cfg
        contactgroups.cfg  host_templates.cfg  service_templates.cfg
        [jshin@nag0 global]$ cd ..
        [jshin@nag0 tmp]$ cd /usr/local/nagios/etc/
        [jshin@nag0 etc]$ ls
        cgi.cfg  htpasswd.users  nagios.cfg  objects  resource.cfg
        [jshin@nag0 etc]$ cd objects/
        [jshin@nag0 objects]$ ls
        commands.cfg  localhost.cfg  switch.cfg     timeperiods.cfg
        contacts.cfg  printer.cfg    templates.cfg  windows.cfg

    Here is my deployment.ini (pretty much the default setting):

        ;; LOCAL deployment ;;
        [extract config]
        type        = local
        source_file = "/var/www/html/nconf/output/NagiosConfig.tgz"
        target_file = "/tmp/"
        action      = extract

        [copy collector config]
        type        = local
        source_file = "/tmp/Default_collector/"
        target_file = "/usr/local/nagios/etc/Default_collector/"
        action      = copy

        [copy global config]
        type        = local
        source_file = "/tmp/global/"
        target_file = "/usr/local/nagios/etc/global"
        action      = copy
        reload_command = "service nagios restart"

    What I am wondering is why the directory structure that the default deployment.ini seems to suggest, with Default_collector and global, is different from the one that Nagios has by default, with only a folder called objects. What am I missing? Or, more importantly, how does your deployment.ini look?
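
    A hedged sketch of the usual companion step (an assumption about this install, not something visible in the listings above): Nagios only reads what nagios.cfg points at, so after deployment the two new directories typically need cfg_dir entries such as:

        # additions to /usr/local/nagios/etc/nagios.cfg
        cfg_dir=/usr/local/nagios/etc/Default_collector
        cfg_dir=/usr/local/nagios/etc/global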

    Read the article

  • Laptop hardware recommendations for multi-platform development

    - by iama
    I am thinking of buying a laptop with the following configuration: Intel Core 2 Duo (or i3-330M) / 4 GB RAM / 300+ GB 7200 RPM drive. I would like to be able to run two server VMs on this laptop with Win2K8 and Ubuntu (preferably 64-bit editions). Windows 7 will be the host OS, since that is the one that ships with the laptop. I am thinking of using VMware Player to run the two server OSs. Is this laptop good enough to run the two VMs side by side, or do I need to go for a better configuration? Any suggestions? Thanks.

    Read the article

  • Apache2 configuration, .htaccess and 310 error (www redirection)

    - by allstat
    I have an Ubuntu Apache server with many websites. All my websites have the same bug (so it looks like a misconfiguration):

        http://www.2sigma.fr  <- works fine (we see "en travaux")
        http://2sigma.fr      <- doesn't work; I get a 310 error (cyclic redirection!)

    Here is my .htaccess:

        Options +FollowSymlinks
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^2sigma\.fr$
        RewriteRule ^(.*) http://www.2sigma.fr/$1 [R=301,L]

    Here is my configuration:

        <VirtualHost *:80>
            <IfModule mpm_itk_module>
                AssignUserId sigma www-data
            </IfModule>
            ServerAdmin [email protected]
            ServerName 2sigma.fr
            ServerAlias www.2sigma.fr
            DocumentRoot /home/sigma/www
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /home/sigma/www>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog /var/log/apache2/error_sigma
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access_sigma combined
            ServerSignature Off

    If I use this .htaccess it works fine:

        Options +FollowSymlinks
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^2sigma\.fr$
        RewriteRule ^(.*) http://www.google.fr/$1 [R=301,L]

    I think it is an Apache configuration problem, but I don't know how to solve it. Thanks for your help.
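
    A small debugging sketch rather than a fix, assuming curl is available: checking what Location header each hostname actually returns, both from the server itself and from outside, usually shows which hop keeps re-issuing the redirect.

        # where does the bare domain redirect to?
        curl -sI http://2sigma.fr/ | grep -i '^Location'
        # and does the www host answer 200, or redirect again?
        curl -sI http://www.2sigma.fr/ | grep -i '^HTTP\|^Location'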

    Read the article

  • Configuration Help for Sendmail Required

    - by Vinayak Mahadevan
    Hi, I need some help with sendmail configuration. The basic problem is that I have some employees working from other places, and they need access to their mail. What I have done so far is this: mail meant for them that is generated from within the company and collected by my internal mail server is bounced to an external mail server, from where the employees access it. This is done through an email id on a different domain. This was working fine until I restricted external mailing access for certain users using rulesets in sendmail.cf; once I had put that in place, only people who had external mailing rights could send mail to people outside the office. What I would like to know is: is there any way I can expose sendmail on two different IPs and thereby configure everybody's email id to point to the same internal mail server using two different IPs - one IP when inside the company and one IP outside the company? Is it possible to have one static IP configured for both internal access and external access, or is there any other way it can be done with sendmail? Can anybody help me? Sorry for the long post. Regards, Vinayak
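
    If the goal is simply to have sendmail listen on two addresses, one sketch in sendmail.mc / m4 syntax (with hypothetical addresses - whether this coexists cleanly with the ruleset-based restrictions above is an assumption) is to declare one listening daemon per IP:

        dnl listen on the internal interface
        DAEMON_OPTIONS(`Port=smtp, Addr=192.168.1.10, Name=MTA-internal')dnl
        dnl listen on the external (public) interface
        DAEMON_OPTIONS(`Port=smtp, Addr=203.0.113.10, Name=MTA-external')dnl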

    Read the article

  • Is Sql Server 2008 R2 unsupported by Operations Manager (SCOM) 2007 R2?

    - by bwerks
    Hey all, I'm performing a test configuration of System Center Operations Manager 2007 R2 on a system prepared with SQL Server 2008 R2. Unfortunately, the SCOM 2007 R2 prerequisites verification program seems to be detecting exact versions of SQL Server, and not simply a minimum version, like it claims:

        "System Center Operations Manager 2007 R2 requires SQL Server 2005 Standard or Enterprise Edition with SP1 and above or SQL Server 2008 Standard or Enterprise edition with SP1 and above. Note: Operations Manager 2007 R2 does not support a 32-bit Operations Manager Operations database, Reporting Server data warehouse or Audit Collection database on a 64-bit operating system."

    I had hoped that this was just a helper tool assisting in getting me off the ground, but unfortunately it seems to actually be used as a gate for the installation to proceed. Has anyone encountered this? If so, is there a way to fool the installer into thinking that it has a proper version, or otherwise alert it to my valid configuration?

    Read the article

  • Local DNS and Apache Server Configuration Interfering - example.com / www.example.com

    - by nicorellius
    I have a domain for my site: example.com. I am also running local DNS with these lines:

        www  IN  CNAME  server.<host_provider>.com.
        dev  IN  CNAME  server.<host_provider>.com.

    So www.example.com and dev.example.com go to the production and development sites, respectively, which are hosted by a hosting company. In my Apache configuration for the main site, I'm running a rewrite rule like this:

        RewriteEngine ON
        RewriteCond %{HTTP_HOST} ^example\.com$|!dev\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www\.%{HTTP_HOST}/$1 [R=302,L,NE]

    This rule seems to work: when you are off the network and go to example.com in the browser, you get redirected to www.example.com. The problem is when I'm on the network and I go to example.com, I get an error page saying the page can't be found. No server errors; just a page can't be found, as if the local DNS is causing it to stop looking at that point. I'm also using Nettica for DNS service and have this A record in place:

        example.com    Host (A)    Default    xxx.xx.xxx.xx

    This handles the external DNS, but my problem seems to be related to my internal DNS. For example, inside my network, I can go to servers on the network with addresses like this:

        server.example.com
        server1.example.com
        server2.example.com

    These are configured in my local DNS. I'm just not sure how to get past the "empty" subdomain and go to example.com. Adding to this since it might not be clear: if I'm outside the example.com network, on another network like example123.com, then when I go to example.com I'm redirected to www.example.com as expected, i.e. the Apache rewrite rule is working. Thanks in advance for any information.
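
    A hedged sketch of what is probably missing internally (an assumption based on the symptoms): the internal zone only defines www and dev, so the bare name example.com resolves to nothing inside the network. Adding an apex record alongside the existing CNAMEs - the zone apex itself cannot be a CNAME - would look roughly like:

        ; internal zone for example.com (sketch; the apex A record points at the same
        ; public address Nettica serves externally, or an internal equivalent)
        @    IN  A      xxx.xx.xxx.xx
        www  IN  CNAME  server.<host_provider>.com.
        dev  IN  CNAME  server.<host_provider>.com.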

    Read the article

  • RAID across SAS controller ports

    - by BlueGene
    Hi, I'm managing an HP DL360 G5 machine. It has a SAS controller with two ports: port 1 is attached to 4 drive bays, and port 2 has 2 bays. The machine currently has 3 drives connected to port 1 in a RAID 5 configuration. I'm trying to max out the bays by adding 1 disk on port 1 and 2 disks on port 2. Can I group those three disks (1 disk on port 1, 2 disks on port 2) into a RAID 5 configuration?

    Read the article

  • How to set CA cert file for LDAP backend server in smbpasswd configuration

    - by hayalci
    I am having a problem with smbpasswd, an LDAP backend server and SSL/TLS certificates. The client machine that I run smbpasswd on is a Debian Etch machine, and the LDAP server is Sun DS running on Solaris. All of the following occurs on the client. When I disable SSL by setting "ldap ssl = no" in smb.conf, the smbpasswd program works without errors. When I set "ldap ssl = start tls", the following messages are printed by smbpasswd, and there is a long timeout period before it asks for any password:

        Failed to issue the StartTLS instruction: Connect error
        Connection to LDAP server failed for the 1 try!
        ..... long delay .....
        New SMB password:
        Retype new SMB password:
        Failed to issue the StartTLS instruction: Connect error
        Connection to LDAP server failed for the 1 try!
        smbpasswd: /tmp/buildd/openldap2-2.1.30/libraries/liblber/io.c:702: ber_get_next: Assertion `0' failed.
        Aborted

    I conducted some tests with "ldapsearch -ZZ". It was not working at first, but after I added the TLS_CACERT line to /etc/ldap/ldap.conf, /etc/libnss-ldap.conf and /etc/pam_ldap.conf, it started working. So the relevant TLS sections in all those files are:

        ssl start_tls
        tls_checkpeer no
        tls_cacertfile /path/to/ca-root.pem
        TLS_CACERT /path/to/ca-root.pem

    But the smbpasswd program continued giving the error. I tried creating an /etc/smbldap-tools/smbldap.conf file with the following content (after consulting the Debian docs for the smbldap-tools package), but as far as I can see, smbpasswd comes with the samba-common package and does not use the configuration for the smbldap-tools utilities:

        verify="optional"
        cafile="/path/to/ca-root.pem"

    My question is: how can I set which SSL CA certificate is used by the smbpasswd program?

    Read the article

  • Wildcard SSL and Apache configuration

    - by Nitai
    Hi all, I'm pulling my hair out over this configuration, which is probably simple. I have a wildcard SSL certificate, which is working. I have the website set up to run on domain.com under SSL. Now, I need to run many subdomains (*.domain.com) on the same server with the same SSL certificate. Shouldn't be that hard, right? Well, I can't get it going. The point is that the first config is another Tomcat server that serves another site and listens to domain.com and www.domain.com. The other config listens to *.domain.com and pulls the content from another Tomcat server. I already tried this whole setup with mod_rewrite, but I simply don't see what I'm doing wrong. Any help very much appreciated. Here is my conf in Apache 2.2:

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile ...
            SSLCertificateKeyFile ...
            SSLCertificateChainFile ...
            ServerName domain.com
            ServerAlias www.domain.com
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPreserveHost On
            ProxyPass        / ajp://localhost:8010/
            ProxyPassReverse / ajp://localhost:8010/
        </VirtualHost>

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile ...
            SSLCertificateKeyFile ...
            SSLCertificateChainFile ...
            ServerName domain.com
            ServerAlias *.domain.com
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPreserveHost On
            ProxyPass        / ajp://localhost:8009/
            ProxyPassReverse / ajp://localhost:8009/
        </VirtualHost>

    Thanks.
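
    One hedged observation about the config above (an assumption, since the full setup isn't shown): both vhosts declare ServerName domain.com, and with name-based matching the first matching vhost wins, so the second block may never be selected for subdomain requests. A sketch giving the wildcard vhost its own distinct ServerName, with the rest of the block unchanged:

        <VirtualHost *:443>
            # hypothetical distinct name so name-based matching can reach this block;
            # the alias still catches all other subdomains
            ServerName  wildcard.domain.com
            ServerAlias *.domain.com
            ...
        </VirtualHost>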

    Read the article

  • Nginx configuration question

    - by Pockata
    Hey guys, I'm trying to make the autoindex feature run only for my IP address with this code:

        server {
            ...
            autoindex off;
            ...
            if ($remote_addr ~ ..*.*) {
                autoindex on;
            }
            ...
        }

    But it doesn't work - it gives me a 403 :/ Can someone help me? Btw, I'm using Debian Lenny and Nginx 0.6 :)

    EDIT: Here's my full configuration:

        server {
            listen      80;
            server_name site.com;
            server_name_in_redirect off;
            client_max_body_size 4M;
            server_tokens off;
            # log_subrequest on;
            autoindex off;
            # expires max;
            error_page 500 502 503 504 /var/www/nginx-default/50x.html;
            # error_page 404 /404.html;

            set $myhome /bla/bla;
            set $myroot $myhome/public;
            set $mysubd $myhome/subdomains;

            log_format new_log '$remote_addr - $remote_user [$time_local] $request '
                               '"$status" "$http_referer" '
                               '"$http_user_agent" "$http_x_forwarded_for"';

            # Star nginx :@
            access_log /bla/bla/logs/access.log new_log;
            error_log  /bla/bla/logs/error.log;

            if ($remote_addr ~ 94.156.58.138) {
                autoindex on;
            }

            # Subdomains
            if ($host ~* (.*)\.site\.org$) {
                set $myroot $mysubd/$1;
            }

            # Static files
            # location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
            #     access_log off;
            #     expires 30d;
            # }

            location / {
                root  $myroot;
                index index.php index.html index.htm;
            }

            # PHP
            location ~ \.php$ {
                fastcgi_pass  127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $myroot$fastcgi_script_name;
                include fastcgi_params;
            }

            # .htaccess
            location ~ /\.ht {
                deny all;
            }
        }

    I forgot to mention that when I add the code to remove static files from my access log, the static files cannot be accessed. I don't know if that's relevant :)
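
    A hedged workaround sketch rather than a confirmed fix: toggling autoindex per client inside an if block is unreliable at best in nginx, so one alternative (with hypothetical path names) is a separate, IP-restricted location that serves the same tree with listings enabled:

        # hypothetical: expose directory listings only under /browse/, only to one IP
        location /browse/ {
            alias     /bla/bla/public/;
            autoindex on;
            allow     94.156.58.138;
            deny      all;
        }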

    Read the article

  • How can I configure Tomcat Java options in a config file?

    - by Kip
    I'm trying to configure Java options passed into Tomcat for a 3rd-party application that I'm deploying. The instructions that the app provides are:

        Open the Tomcat configuration tool from the Windows menu at Start > All Programs > Apache Tomcat > Tomcat Configuration.
        Click Configure and select the Java tab.
        At the bottom of the Java Options field, enter the following:
            -Dexample.license.directory="C:\Program Files\example"
        Stop and restart the application server.

    However, I need to do this programmatically, so I'd like to know what config file these options can be set in. Using the GUI is impractical for deploying the app to other developers' environments. (I'm using Tomcat 6.0, if that is relevant...)
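
    A hedged sketch of the non-GUI routes (the service name and paths below are assumptions): when Tomcat 6 is started from catalina.bat/startup.bat, a bin\setenv.bat is picked up automatically; when it runs as a Windows service, the GUI settings live in the service's parameters and can be scripted through the procrun executable.

        rem %CATALINA_HOME%\bin\setenv.bat - used when starting via startup.bat/catalina.bat
        set CATALINA_OPTS=-Dexample.license.directory="C:\Program Files\example"

        rem When running as the Windows service (hypothetical service name "Tomcat6"),
        rem the same option can be appended with the procrun tool:
        tomcat6.exe //US//Tomcat6 ++JvmOptions="-Dexample.license.directory=C:\Program Files\example"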

    Read the article
