Search Results

Search found 23079 results on 924 pages for 'local variables'.

  • Which Revision Control Software to use for Personal Dropbox?

    - by wag2639
    I want to set up a sync repository that would be similar to Dropbox. Goals/requirements:

    - Free (open source very preferable)
    - Linux host (probably Ubuntu)
    - Windows/Mac/Linux clients
    - Potential for multiple users with limited access (optional)
    - Preferably easy; doesn't necessarily need to be automatic
    - Revision control very preferable

    Basically, I want to be able to use multiple computers, possibly with different OSs, and be able to access, use, and sync files across all of them. I also want a local copy of the repository for when I'm not connected to the network (if I'm working on a laptop, I want to keep a local repository so I can keep revisions and merge later with the "master" repository). For example, I'm editing a few pictures on my laptop during the day outside of my network, but when I get home, I would like to sync the changes, including incremental changes, with my desktop at home. I would also like my roommates to be able to access and use this repository, but limit their access to certain files. For example, I may want to use this to back up financial records but wouldn't want them to have access to those files. I'm a programmer and familiar with SVN, but I know that wouldn't be the most appropriate choice since it doesn't handle binaries well and doesn't keep a local repository. I know better choices exist, but I don't really know them well enough to choose the best one.
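
    Not an endorsement of any one tool, but a minimal sketch of how a distributed VCS such as Git would cover the offline-repository requirement (host name and paths are assumptions):

        # on the Ubuntu host, create a bare central repository
        git init --bare /srv/sync.git

        # on each client, clone it; every clone is a full local repository
        git clone ssh://ubuntu-host/srv/sync.git

        # work offline, committing locally...
        git commit -am "edit pictures on the laptop"
        # ...then merge with the master repository when back on the network
        git push origin master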

  • Arch Linux: eth0 no carrier - network fails at boot

    - by user905686
    The problem: my computer is connected to a network where DHCP is required, so my network configuration in /etc/rc.conf looks like

        interface=eth0
        address=
        netmask=
        broadcast=
        gateway=

    My daemons are

        DAEMONS=(!hwclock syslog-ng network netfs crond ntpd)

    With this configuration, Arch hangs at boot a long time at "Network" (it still says "[done]", but after boot I have no connection). I found two workarounds.

    Workaround 1: remove network from the daemons and run mii-tool --reset eth0 and dhcpcd eth0 after boot (somehow it does not work when placing these commands in /etc/rc.local). Then DHCP works very quickly (because of the reset!). Before executing the first command, ip link show eth0 has "NO CARRIER" in its output; afterwards, it doesn't. (Also, mii-tool first shows "no link"; afterwards it shows "eth0: 10 Mbit, half duplex, link ok".)

    Workaround 2: change the network configuration to

        interface=eth0
        address=x.y.z.21
        netmask=255.255.255.0
        broadcast=x.y.z.255
        gateway=x.y.z.254

    where x, y, z form the specific addresses of the network (though DHCP is used, I get a static IP), and add the commands mii-tool --reset eth0 and dhcpcd eth0 to /etc/rc.local. Now the network starts quickly at boot (though I don't know if successfully), the commands in /etc/rc.local are executed, and the connection is fine after login.

    What to do? The problem seems to be that dhcpcd gets stuck at "waiting for carrier" or something similar. I don't like the workarounds, because some daemons need the network (though they seem to start). What can I do to have eth0 ready for DHCP at boot? Or is there another problem?
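
    A hedged sketch of what /etc/rc.local might need for workaround 1 to survive boot; the assumption is that the PHY needs a moment to renegotiate after the reset before dhcpcd looks for a carrier:

        #!/bin/bash
        # /etc/rc.local (sketch; the sleep is an untested ordering assumption)
        mii-tool --reset eth0
        sleep 3          # give the link time to come up after the reset
        dhcpcd eth0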

  • Generalized strategy for file server virtualization in Xenserver

    - by Jamie
    I'm not shopping so much as looking for guidance on good-idea/bad-idea strategies; I'm sure I'm not in the "best practices" budget range. Currently, I have three Dell PowerEdges running XenServer in a pool. Each node has an Ubuntu file server, serving about 6 TB. One is the primary; the other two are rsync targets for backup. The 6 TB is stored on each node's local storage disks as an LVM of 3 x 2 TB virtual disks. The file server VM disks are also stored on the node-local disks. Each node also runs a smattering of lightweight VMs for web, development, Windows, and the like. Several of those VMs' disks reside on a QNAP NAS so we can play with live migration. These VMs are often clients of the primary file server (all the mail, web content, and user files are stored on the file server, not on the mail, web, and Samba VMs). This all works fine and is a major step up for us. The downside is that the QNAP is a single point of failure, and the only thing the QNAP is doing is serving migratable VM images, not client data. Someday the PowerEdge local arrays will be full, and we will have to reinvent ourselves again. Is it wise to have heavyweight VMs (like the file server, with its 6+ TB of disks) on a SAN or NAS? Would it be better to keep the VMs lightweight, keep the VM images on a SAN or NAS, and have two or more NAS units act as NFS-serving file appliances? A hybrid SAN/NAS that can serve iSCSI for images and NFS for the client VMs? It seems like live migration would be a misnomer if you have to migrate a file server with its entire 6+ TB disk. I recognize there are plenty of ways to skin the cat; we've already skinned it a few ways. What makes sense?

  • Can a Windows Domain play along with a Hosted Exchange service?

    - by benzado
    I'm setting up a computer network for a small (10-20 person) company. They are currently using a Hosted Exchange service they are totally happy with. Other than that, they are starting from scratch (the office doesn't even have furniture yet). They will need some kind of file-sharing server set up in their office. If I set up a machine as a file server and nothing more, users will have three passwords to deal with: local machine, file server, and email. If I set up a Domain Controller, identities for the local machine and file server will be the same. But what about the Hosted Exchange server? Must the users have a separate email password, or is it possible to combine the two? (I realize it might depend on the specific hosting provider, but is it possible?) If not, it seems like I have these options:

    - Deal with it: users have a separate email password.
    - Host Exchange on the local server: more than they want to manage in-house?
    - Purchase a hosted VPS, make it part of the domain, and host Exchange there. (Or can/should a VPS be a domain controller?)

    I realize I have a lot of questions in there. The main one: is there any reason to use a Hosted Exchange service if I'm setting up other Windows services?

  • Set up Linux box as WAP for MyBookLive?

    - by AcidFlask
    I inherited an old Linux box as well as a MyBookLive and would like to make the MyBookLive available over my wireless network, essentially using the Linux box as a wireless access point. I just wiped the Linux box (hostname: home) and installed Ubuntu 12.04 on it. My network setup currently looks like this:

        ISP --- wireless router --- wlan0 on home (192.168.0.12)
                (192.168.0.1              |
                 netmask 255.255.255.0)   eth0 on home --- MyBookLive
                    |
                MacBook (192.168.0.11)

    so the MyBookLive is basically a glorified external hard drive. The router does have an Ethernet port, but it is being used by my roommate's computer, so I can't plug the MyBookLive directly into it. Right now I can ping MyBookLive.local and MacBook.local from home, but I am having trouble understanding and figuring out the correct iptables commands to make my MacBook see my MyBookLive through the Bonjour network. Also, I'm not sure if I need to set up DNS to forward xxx.local Bonjour/Zeroconf addresses. I tried the following to forward my entire wired network (which has only my MyBookLive) to a single IP address:

        sysctl net.ipv4.ip_forward=1
        iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
        iptables -t nat -A PREROUTING -i eth0 -p tcp -j DNAT --to 192.168.0.66
        iptables -t nat -A PREROUTING -i eth0 -p udp -j DNAT --to 192.168.0.66

    but I can't ping this address from my MacBook. This is probably horribly wrong, but I am a complete noob at setting up this kind of network and could use some expert help with setting this up properly.
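
    A hedged sketch of the pieces usually missing from this kind of one-armed NAT: the box has to own the spare address on the wireless side, the DNAT has to match traffic arriving on wlan0 (not eth0), and replies from the wired side need to be source-NATed so they route back through home. The MyBookLive address is hypothetical; 192.168.0.66 is the spare address from the question:

        MYBOOK_IP=169.254.1.2   # assumption: whatever the drive answers on via eth0
        ip addr add 192.168.0.66/24 dev wlan0   # claim the spare address so the MacBook can reach it
        sysctl -w net.ipv4.ip_forward=1
        iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o wlan0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # hand traffic sent to 192.168.0.66 over to the MyBookLive
        iptables -t nat -A PREROUTING -i wlan0 -d 192.168.0.66 -j DNAT --to-destination $MYBOOK_IP
        # make the MyBookLive see home as the source, so replies come back here
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE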

  • Windows Server 2008 ignores any change made to firewall

    - by Maurice Courtois
    I have been trying for the last two hours to make my Windows Server 2008 machine answer pings. I have tried almost every solution I have found on the web; so far nothing has worked. My current setup: two NICs (one for the Internet connection, one for the local network), and the server acts as a VPN server, so I set the corresponding NICs as either Public or Private. I also enabled the "File and Printer Sharing (Echo Request...)" rule for all NICs and from any IPs. I have always been able to ping from the local network, or the local IP while connected to the VPN. I also tried creating a specific rule for ICMP ping and disabling the firewall for all but the public NIC. Regardless of all this, I still can't ping the server from the Internet. Any ideas or suggestions as to what could cause this? I have the impression that when you set the server up as a VPN server (I checked the box during setup to block everything other than VPN connections), changing any firewall setting through MMC is pointless!?
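
    For comparison, a minimal sketch of creating the echo-request rule directly from an elevated prompt, bypassing the MMC snap-in (the rule name is arbitrary):

        netsh advfirewall firewall add rule name="Allow ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow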

  • Generic/Text Printer on Windows 7 not prompting for file name

    - by Trevor Tippins
    Hope someone can shed some light on this. I am downloading reports from an AIX-based system by directing them to a TT printer, which the terminal emulator (MultiView 2000) intercepts and directs to the default printer on the local system. This local printer is configured as a vanilla Generic/Text printer attached to a FILE port. When I print from AIX, the output is spooled down and the local printer prompts for a file name into which to save the file... but not under Windows 7. This has worked fine for many years, on both Win2K and WinXP. However, on Windows 7 the output gets spooled as a file into spool\PRINTERS (and looks as expected), but the print job then hangs with a status of "Error - Printing" and never prompts for a file name. I have to cancel the job. The Generic/Text printer works as expected with other applications. I have tried setting the printer to print directly rather than spooling, but this only serves to hang the terminal session too. I've also tried running the emulator in Windows 2000 compatibility mode and as Administrator in case it was something like that, but with no luck. As you might expect, it does work fine in XP Mode (as long as I print to a printer defined therein and not the host's printer), but operationally that isn't going to be an option. Obviously this emulation software is a decade old (at least) and I could just upgrade all the users (at a cost), but before I do so, has anyone seen this sort of behaviour before and found some sort of fix?

    - Remote OS: AIX 5
    - Client OS: Windows 7 Pro (32-bit)
    - Printer: Generic/Text on a FILE port
    - TE Software: MultiView 2000 (32-bit)

    Thanks in advance.

  • Server 2003 and SSL Certificates

    - by Keith Stokes
    I have a Windows 2000 domain with dozens of Windows 2000 servers and a few 2003 servers. Each server runs a custom app that talks to a third party using self-signed certificates. To help with troubleshooting we've created a custom test app. The 2000 servers are able to talk within seconds. The 2003 servers take anywhere from 10-30 seconds using a domain account, and much less (usually under 5 seconds) using a local account. The only exception to the local-account performance is a new account, which is slow initially and then faster. If you leave the test app open and reconnect repeatedly, it talks in seconds. If you leave it open for somewhere between one and two hours, it reverts back to the previous 10 seconds, so obviously something is being cached. Installing the destination certificates in the local 2003 server store makes no difference. I've installed the certificates in AD, and that apparently makes domain accounts work in 9-12 seconds, versus the 30 seconds that was usual before. Manually clearing the certificate store on the 2003 server makes no difference. I'm at a loss as to where the certs might be cached and whether I'm using some sort of domain certificate store that's hiding from me.

  • Accessing a shared folder in Windows Server 2008 R2.

    - by Triztian
    Hello all. It seems my involvement with computers has grown, and I've found myself needing to access a shared folder on a server. I've read some documentation and managed to set up the folder as a share. For this I created a local group and, for now, just one local user that has access to the share; the folder is in the public user folder, and its permissions should be (and I believe are) read/write. The problem is that I can't connect from a remote machine; I mean, I don't know how it should be accessed. The server has a public IP, and we also use it as the host for our website (I don't know if that affects anything). The folder will be used as the "keeper" for the QuickBooks company files and has the Database Server Manager installed. The server has a domain name, "http://www.example.com", that redirects to our website; I am unsure if the share could be accessed that way. The share also has a location displayed when I right-click and choose Properties. Here's what I've tried:

    - Setting up a VPN connection (Windows Vista and 7): got to the point where I was asked for credentials and entered the user I created (which is not an admin), but I got a "connection failed" error 800. I suppose this is because in the domain field I entered the server's workgroup.
    - Right-click, Add Network Connection (Windows 7): went through the wizard until I reached the point of entering the location; tried many things, including the name in the share's properties (\\SOMETHING\Share), the http://www.example.com address, and the IP address.

    I'm quite unfamiliar with this, so I have my guesses: since the group and user are local, they may not have access to the folder; or the firewall on the server is blocking my connection. Anyway, any help and guidance is truly appreciated.
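
    A hedged sketch of the most direct test, assuming TCP port 445 is open on the server's firewall (the IP and account are placeholders for the server's public address and the share user):

        net use S: \\203.0.113.10\Share /user:SERVERNAME\shareuser

    If that times out, the firewall (or the hosting provider) is likely blocking SMB; exposing SMB on a public IP is generally discouraged anyway, which is a reason to keep pursuing the VPN route.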

  • "...then an error occurred during the login process" - Connection Error 233

    - by scott brunner
    We have SQL Server 2008 installed on 64-bit Windows Server 2003. When we try to connect to the local SQL Server using SQL Server Management Studio at the console, we get the error:

        A connection was successfully established with the server, but then an error occurred during the login process. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)

    When we try TCP from the same local SSMS to the local server, we get the same error, but instead of the pipe message it's something like "connection forcibly closed". Now, here is the strange part: we CAN connect to this SQL Server from any other machine on the network using SSMS - AND - we CAN'T connect to ANY SQL Server from the problem server. So it seems the SQL Server instance is fine and accepting remote connections; however, SSMS on that machine will not connect to any SQL Server, even remotely. When we try an ADO.NET connection from C# remotely, we can connect; run that same code on the console of the trouble server and we get the same errors. How can this be solved?
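
    A hedged diagnostic sketch: since remote clients connect fine, forcing each client protocol from the problem server's console with sqlcmd (if installed) can separate the local client stack from the service itself. The protocol prefixes are standard SQL Server connection syntax; a default instance is assumed:

        sqlcmd -S lpc:localhost -E
        sqlcmd -S tcp:localhost -E
        sqlcmd -S np:localhost -E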

  • Users are getting a temporary profile

    - by Serhiy
    A bit about the current setup: the AD servers are Windows 2008 R2 (all of them), and there are a couple of locations set up as Sites. Each location has DFS on its AD server. Roaming profiles are not used or configured. Users have their home folder configured as a mapped S: drive pointing to a DFS shared folder. For example, in the profile tab a user has: Home Folder - Connect - S: to \\domain.com\dc\users\%username%. We have also redirected the Desktop, Documents and Downloads folders to \\domain.com\dc\users. Everything was fine. Suddenly (today), users in most locations lost their local profiles (on both XP and W7 desktops) and got temporary profiles. Also, it looks like the local profile was created today (judging from the folder properties). I checked events on a couple of machines and there are no errors related to profiles or the logon process. I don't see issues in the event logs on the servers either. Basically, I've run out of ideas about what is wrong and why the machines lost their local profiles. PS: laptop users do not have their folders redirected, but lost their profiles as well.
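
    One standard place to look (a hedged sketch of the check, not a confirmed cause): when Windows falls back to a temporary profile, the profile's SID key under ProfileList is often renamed with a .bak suffix. From an affected machine:

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList"

    SIDs ending in .bak would suggest the local profile path was considered unusable rather than deleted.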

  • Auto Launching PHP-FPM

    - by Seth
    My plist file:

        <?xml version='1.0' encoding='UTF-8'?>
        <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN"
            "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version='1.0'>
        <dict>
            <key>Label</key><string>org.macports.php-fpm</string>
            <key>ProgramArguments</key>
            <array>
                <string>/opt/local/bin/daemondo</string>
                <string>--label=php-fpm</string>
                <string>--start-cmd</string>
                <string>/opt/local/sbin/php-fpm</string>
                <string>;</string>
                <string>--pid=fileauto</string>
                <string>--pidfile</string>
                <string>/opt/local/var/run/php-fpm/php-fpm.pid</string>
            </array>
            <key>Debug</key><false/>
            <key>Disabled</key><true/>
            <key>OnDemand</key><false/>
        </dict>
        </plist>

    After rebooting, it's not loading up automatically; I still have to start php-fpm manually. I have tried unloading, adding RunAtLoad, etc., with no luck, and have tried both of these launchctl commands:

        sudo launchctl load -F /Library/LaunchDaemons/org.macports.php-fpm.plist
        sudo launchctl load -w /Library/LaunchDaemons/org.macports.php-fpm.plist
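
    A hedged guess at where to look: the plist itself says <key>Disabled</key><true/>, and launchctl load -w records an override rather than editing the file, so a boot-time load can still see the daemon as disabled depending on the OS version. A sketch of flipping the key in place (PlistBuddy ships with OS X; ownership and permissions matter to launchd):

        sudo /usr/libexec/PlistBuddy -c "Set :Disabled false" /Library/LaunchDaemons/org.macports.php-fpm.plist
        sudo chown root:wheel /Library/LaunchDaemons/org.macports.php-fpm.plist
        sudo chmod 644 /Library/LaunchDaemons/org.macports.php-fpm.plist
        sudo launchctl load /Library/LaunchDaemons/org.macports.php-fpm.plist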

  • The suggested way to handle pip (easy_install) with Homebrew?

    - by Drake
    I know there are brew-gem and brew-pip, but it is still really easy to get confused. Let's say my Mac OS X is 10.7.2. There are at least, as far as I know, three locations for Python modules (assume 2.7):

    1. /System/Library/Frameworks/Python.framework/Versions/2.7/
    2. /Library/Python/2.7/site-packages
    3. /usr/local/lib/python2.7/site-packages/ (controlled by Homebrew)

    For some Python modules, pip installs them into 2, the so-called local/customized Python module location, and everything looks and works great; e.g., readline via easy_install (ipython suggested I install readline with easy_install instead of pip). For some, it tries to install miscellaneous files (man pages, docs, ...) into a system-wide location, which requires sudo! E.g., ipython insisted on installing man and doc into /System/Library/Frameworks/Python.framework/Versions/2.7/share/, which runs into a permission problem, and all I can do is use sudo. Python modules installed by brew are symlinked into /usr/local/lib/python2.7/site-packages/; everything seems great except that you have to remember to add this location to PYTHONPATH. I am wondering if there is a suggested, uniform way to handle this mess, or any explanation that makes it crystal clear.
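
    For the Homebrew case the question already names the fix; a minimal sketch of it for a login shell (path taken from the question; adjust for your Python version):

        # ~/.bash_profile
        export PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH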

  • Can't access server sound card when VNC'd into Ubuntu server

    - by Corey Kennedy
    I've set up my Ubuntu 10 server with Xfce, NX server, and now TightVNC server so that I can control it remotely from my Windows 7 laptop. NX is working fine for remote access, but when I run (for example) Exaile, no sound is sent through the server's sound card. I installed TightVNC server and connected, but ran into the same problem. Exaile opens, sound isn't muted, and I can see that sound cards are installed (via cat /proc/asound/cards), but I can't seem to get the remote sessions to access the server's sound card. Also, just to confirm that the sound card was working, I hooked up a monitor/keyboard to the server and opened a local Xfce session. That worked fine. While I had the local session running, I was also able to open a remote session with NX Client and start Exaile, which then successfully piped sound to the local card. After disconnecting the monitor/keyboard and moving the box back to its normal spot, though, I was not able to play sound via either an NX or VNC session. Does anyone have any suggestions? Surely it's possible to configure my remote sessions to pipe sound to the server's sound card, right? Or at least to get Xfce up and running without a monitor or keyboard, but with access to the sound card, so I can VNC into it? Thanks!

  • Spam or Exchange issue?

    - by John
    I am getting a delivery-failure message for an unknown user on my domain. I would like to know: is this just phishing spam, or was the message really sent from our domain? (I have changed our domain name to OURDOMAIN.COM below.) I have Exchange 2010 installed. The body of the email is:

        Delivery has failed to these recipients or distribution lists:

        sales
        The recipient's e-mail address was not found in the recipient's e-mail
        system. Microsoft Exchange will not try to redeliver this message for you.
        Please check the e-mail address and try resending this message, or provide
        the following diagnostic text to your system administrator.

        Sent by Microsoft Exchange Server 2007

        Diagnostic information for administrators:
        Generating server: murraygroup.local
        [email protected]
        #550 5.1.1 RESOLVER.ADR.RecipNotFound; not found ##

        Original message headers:
        Received: from ironport.mih.co.uk (10.10.29.9) by
         mih-exca-01.murraygroup.local (10.10.29.133) with Microsoft SMTP Server
         id 8.3.106.1; Fri, 29 Jun 2012 12:36:12 +0100
        Received: from glamf04.netintelligence.com (HELO mailfilter.iomart.com)
         ([62.128.193.114]) by ironport.mih.co.uk with SMTP; 29 Jun 2012 12:42:48 +0100
        Received: from glamta4.netintelligence.com(localhost.localdomain[127.0.0.1])
         by mailfilter.iomart.com ; Fri, 29 Jun 2012 12:37:18 BST
        Received: from [195.43.137.66] ([195.43.137.66]) by glamta4.netintelligence.com
         (8.13.1/8.12.8) with ESMTP id q5TBbH4j022142 for <[email protected]>;
         Fri, 29 Jun 2012 12:37:18 +0100
        Date: Fri, 29 Jun 2012 12:37:17 +0100
        Message-ID: <20120629145229.4C2A817231D8A7958044@SONW>
        From: Ines Hampton <[email protected]>
        To: sales <[email protected]>
        Reply-To: Marguerite Soto <[email protected]>
        Subject: User sales
        MIME-Version: 1.0
        Content-Type: text/plain; charset="utf-8"
        Content-Transfer-Encoding: 7bit
        Return-Path: [email protected]

        Reporting-MTA: dns;murraygroup.local
        Received-From-MTA: dns;ironport.mih.co.uk
        Arrival-Date: Fri, 29 Jun 2012 11:36:12 +0000
        Final-Recipient: rfc822;[email protected]
        Action: failed
        Status: 5.1.1
        Diagnostic-Code: smtp;550 5.1.1 RESOLVER.ADR.RecipNotFound; not found
        X-Display-Name: sales

  • Who should I run mysql as, on a personal computer?

    - by user664833
    I just installed MySQL via Homebrew (with brew install mysql, on Mac OS X Mountain Lion, recently installed from scratch). Following the installation, there is a "caveats" section with options around further necessary actions to take:

        ==> Caveats
        Set up databases to run AS YOUR USER ACCOUNT with:
            unset TMPDIR
            mysql_install_db --verbose --user=`whoami` --basedir="$(brew --prefix mysql)" --datadir=/usr/local/var/mysql --tmpdir=/tmp

        To set up base tables in another folder, or use a different user to run
        mysqld, view the help for mysqld_install_db:
            mysql_install_db --help

        and view the MySQL documentation:
          * http://dev.mysql.com/doc/refman/5.5/en/mysql-install-db.html
          * http://dev.mysql.com/doc/refman/5.5/en/default-privileges.html

        To run as, for instance, user "mysql", you may need to `sudo`:
            sudo mysql_install_db ...options...

        Start mysqld manually with:
            mysql.server start

            Note: if this fails, you probably forgot to run the first two steps up above

        A "/etc/my.cnf" from another install may interfere with a Homebrew-built
        server starting up correctly.

        To connect:
            mysql -uroot

        To launch on startup:
        * if this is your first install:
            mkdir -p ~/Library/LaunchAgents
            cp /usr/local/Cellar/mysql/5.5.27/homebrew.mxcl.mysql.plist ~/Library/LaunchAgents/
            launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist

        * if this is an upgrade and you already have the homebrew.mxcl.mysql.plist loaded:
            launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
            cp /usr/local/Cellar/mysql/5.5.27/homebrew.mxcl.mysql.plist ~/Library/LaunchAgents/
            launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist

        You may also need to edit the plist to use the correct "UserName".

    On previous versions of Mac OS X I ran MySQL as the mysql user, but now I am confronted with the idea of running it as myself. I am the only one who uses this computer (which happens to be my laptop), and I do programming for work and for pleasure. What are the pros and cons, or best practices, around choosing whether to run MySQL as your user account, as mysql, or as something else entirely?

  • Weekly cron job executing twice

    - by yes123
    Hi guys, I have placed a .sh file in /etc/cron.weekly that runs a PHP script. The script should run only once a week, but every Sunday it runs at:

        1:30 am
        6:50 am

    Any way to fix this?

    Addendum 1 - /etc/cron.weekly/cronweek:

        #!/bin/bash
        /usr/bin/php -f /home/my/path/to/script/cronweek.php

    Addendum 2 - crontab file:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *  * * *  root  cd / && run-parts --report /etc/cron.hourly
        25 6  * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6  * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6  1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        #
        */1 * * * *  root  /usr/local/rtm/bin/rtm 35 > /dev/null 2> /dev/null
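
    A hedged diagnostic sketch: neither observed run time matches the 6:47 cron.weekly entry above, which suggests something else is also scheduling the script, e.g. another crontab, or anacron (whose presence disables the entries above via the test -x guard and runs the job on its own schedule). Finding every reference is usually quicker than guessing:

        grep -r cronweek /etc/cron* /var/spool/cron 2>/dev/null
        cat /etc/anacrontab   # does anacron own a weekly job too?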

  • Possible to have different SSLCACertificateFiles under different Locations in Apache (client-side SSL certs)

    - by Mikko Ohtamaa
    I am setting up Apache to do smartcard authentication. The smartcard login is based on client-side SSL certificates handled by an OS driver. I currently have just one smartcard provider, but in the future there will potentially be several. I am not sure how Apache 2.2 handles client-side certificates per Location. I did some quick testing, and it somehow seemed that only the last SSLCACertificateFile directive was effective, which doesn't sound right. Is it possible to have a different SSLCACertificateFile per Location in Apache (2.2, 2.4) as described below, or does the SSL protocol somehow limit you to one SSLCACertificateFile per IP? Below is an example of the config I have in mind for handling several SSLCACertificateFiles on the same server, to allow users to log in with different smartcard providers:

        <VirtualHost 127.0.0.1:443>

            # Real men use mod_proxy
            DocumentRoot "/nowhere"
            ServerName local-apache
            ServerAdmin [email protected]

            SSLEngine on
            SSLOptions +StdEnvVars +ExportCertData

            # Server-side HTTPS configuration
            SSLCertificateFile /etc/apache2/certificate-test/server.crt
            SSLCertificateKeyFile /etc/apache2/certificate-test/server.key

            # Normal SSL site traffic does not require verify client
            SSLVerifyClient none
            SSLVerifyDepth 999

            # Provider 1
            <Location /@@smartcard-login>
                SSLVerifyClient require
                SSLCACertificateFile /etc/apache2/certificate-test/ca.crt

                # Apache does not natively pass forward headers
                # created by SSLOptions +StdEnvVars,
                # so we pass them forward to Python using RequestHeader
                # from mod_headers
                RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
                RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e
            </Location>

            # Provider 2
            <Location /@@smartcard-login-provider-2>
                # For real
                SSLVerifyClient require
                SSLCACertificateFile /etc/apache2/certificate-test/provider2.crt

                # Apache does not natively pass forward headers
                # created by SSLOptions +StdEnvVars,
                # so we pass them forward to Python using RequestHeader
                # from mod_headers
                RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
                RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e
            </Location>

            # Connect to Plone ZEO client1 running on fg
            ProxyPass / http://localhost:8080/VirtualHostBase/https/local-apache:443/folder_sits/sitsngta/VirtualHostRoot/
            ProxyPassReverse / http://localhost:8080/VirtualHostBase/https/local-apache:443/folder_sits/sitsngta/VirtualHostRoot/

        </VirtualHost>
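
    A hedged note on the observed behaviour: mod_ssl documents SSLCACertificateFile for server-config and virtual-host context only, which would explain why only one directive appears to take effect. A common workaround (a sketch, untested here) is to concatenate the provider CAs into one bundle at the vhost level and keep only SSLVerifyClient per Location:

        cat /etc/apache2/certificate-test/ca.crt \
            /etc/apache2/certificate-test/provider2.crt \
            > /etc/apache2/certificate-test/all-providers.crt

        # in the vhost, outside any Location:
        #   SSLCACertificateFile /etc/apache2/certificate-test/all-providers.crt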

  • Apache load balancer with https real servers and client certificates

    - by Jack Scheible
    Our network requirements state that ALL network traffic must be encrypted. The network configuration looks like this:

                                           /-- https --> | server 1 |
        |--------|                        /
        | Client | --- https --> | Load Balancer | --- https --> | server 2 |
        |--------|                        \
                                           \-- https --> | server 3 |

    And it has to pass client certificates. I've got a config that can do load balancing with in-the-clear real servers:

        <VirtualHost *:8666>
            DocumentRoot "/usr/local/apache/ssl_html"
            ServerName vmbigip1
            ServerAdmin [email protected]
            DirectoryIndex index.html

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            SSLEngine on
            SSLProxyEngine On
            SSLCertificateFile /usr/local/apache/conf/server.crt
            SSLCertificateKeyFile /usr/local/apache/conf/server.key

            <Proxy balancer://mycluster>
                BalancerMember http://1.2.3.1:80
                BalancerMember http://1.2.3.2:80

                # technically we aren't blocking anyone, but could here
                Order Deny,Allow
                Deny from none
                Allow from all

                # Load Balancer Settings
                # A simple Round Robin load balancer.
                ProxySet lbmethod=byrequests
            </Proxy>

            # balancer-manager
            # This tool is built into the mod_proxy_balancer module and allows you
            # to do simple mods to the balanced group via a GUI web interface.
            <Location /balancer-manager>
                SetHandler balancer-manager
                Order deny,allow
                Allow from all
            </Location>

            ProxyRequests Off
            ProxyPreserveHost On

            # Point of Balance
            # Allows you to explicitly name the location in the site to be
            # balanced; here we balance "/", or everything in the site.
            ProxyPass /balancer-manager !
            ProxyPass / balancer://mycluster/ stickysession=JSESSIONID
        </VirtualHost>

    What I need is for the servers in my load balancer to be

        BalancerMember https://1.2.3.1:443
        BalancerMember https://1.2.3.2:443

    but that does not work; I get SSL negotiation errors. Even when I do get that to work, I will need to pass client certificates. Any help would be appreciated.
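
    Two hedged notes, neither specific to this setup. First, HTTPS balancer members usually need the proxy's own SSL client side configured; a sketch for self-signed backend certificates (directive availability varies by Apache version):

        SSLProxyEngine On
        SSLProxyVerify none
        <Proxy balancer://mycluster>
            BalancerMember https://1.2.3.1:443
            BalancerMember https://1.2.3.2:443
            ProxySet lbmethod=byrequests
        </Proxy>

    Second, a protocol caveat: a proxy that terminates TLS cannot re-present the client's certificate to the backend (it does not hold the client's private key), so "passing client certificates" typically means verifying them at the balancer and forwarding the details in headers (SSLOptions +ExportCertData plus RequestHeader), or load-balancing at the TCP level instead.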

  • How to determine if my AWS/EC2 server has been compromised / resolution?

    - by ElHaix
    I have recently seen an increase in network in/out activity on my server and am trying to determine whether my AWS/EC2 instance has been compromised, and if so, how to resolve it. In my security group I have:

        Inbound:  80 (HTTP)   0.0.0.0/0
        Outbound: 80 (HTTP)   0.0.0.0/0
                  443 (HTTPS) 0.0.0.0/0

    Using TCP-UDP Endpoint Viewer, I see a lot of w3wp.exe TCP processes with varying local ports (http and numbered) as well as varying remote ports. Some processes go red/yellow/green on updates. The remote address for most w3wp processes is my EC2 instance; however, I am seeing several to *.deploy.akamaitechnologies.com and *.deploy.static.akamaitechnologies.com with received bytes varying between 4 and 11 MB. I also see:

    - Ec2Config.exe, remote address 169.254.169.254
    - a System Process with remote address fetcher4-4.p.mail.ru (how can I get rid of this one?!), local port http, remote port 33432
    - some System Processes from 114.216-244-93-rdns.wowrack.com, protocol TCP, local port http, varying remote ports
    - some baiduspider "System Process" connections

    I'm afraid that my system may have been compromised, and I'm wondering if these results are any indication of that. If so, how can I eliminate these possible threats? I have MS Security Essentials installed.
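
    A hedged sketch for a second opinion without the GUI viewer: Windows' netstat can list connections with owning process names from an elevated prompt:

        netstat -bano | more

    For context (not a verdict on this server): connections to 169.254.169.254 are the EC2 metadata service and are expected, and inbound crawlers hitting port 80 (the mail.ru fetcher, baiduspider) are ordinary web traffic for anything exposed on 0.0.0.0/0 rather than evidence of compromise by themselves.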

  • What's the difference between pulling from a branch into master and pushing that branch onto master?

    - by Justin808
    In TortoiseGit, on the repository, I right-click and select Sync. At the top of the dialog there are options for Local Branch and Remote Branch.

    - If the local branch is named DeveloperA and the remote branch is master and I do a push, what happens?
    - If the local branch is master and the remote branch is DeveloperA and I pull, what happens?
    - If I am on the master branch, right-click, select Merge, and change the From to be my DeveloperA branch, what happens?

    If I try to push from master to the remote master and the remote has been updated, git stops and tells me to pull. It seems that if I push from DeveloperA to master, it doesn't stop; it just clobbers. Is that correct? We're having an issue using git where the remote master branch gets clobbered at times, and we are trying to figure out why. For example, there is a developer working on his DeveloperA branch. He'll pull from master to get any updates, then push to master to push out his changes. But there are times when the push lists more files in the outgoing commit list than he's edited. The odd thing is he can't revert those files, as git says they are up to date and have not been modified. Yet when he pushes, git pushes the files out. The problem is that if there are changes between his pull and his push, those changes get clobbered.
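
    For reference, a hedged sketch of what those dialog choices map to on the command line (branch names taken from the question):

        # push local DeveloperA to the remote master branch
        git push origin DeveloperA:master

        # pull remote DeveloperA into the current local branch (fetch + merge)
        git pull origin DeveloperA

        # merge local DeveloperA into the checked-out master
        git checkout master
        git merge DeveloperA

    One caution, hedged because TortoiseGit adds its own options on top: a plain git push refuses non-fast-forward updates to master rather than clobbering them, so a push that silently overwrites remote commits usually involves a force flag somewhere in the client's configuration.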

  • Safe to remove Python2.6 files?

    - by darkfeline
    I'm using Linux Mint 11 (will upgrade soon), and I've noticed that, even though I don't have any python2.6 packages installed with apt, there are a bunch of residual python2.6 files scattered around my drive, including, but not limited to, dist-packages in /usr/lib/python2.6 and various /usr/share stuff. Is there any way to test whether these files are still being used? I'm tempted to sudo rm -rf the lot of them, but I'm scared it'll break stuff. Also, does anyone have any idea where these files could have come from? I believe I had python2.6 installed once upon a time, but I made sure to --purge it, so there shouldn't be any trace of it left, right?

    EDIT: after using a quick script to check all of the files, it appears most of them belong to important packages, so I won't try weeding out the few which I know are probably useless. Although I am curious why so many packages have python2.6 files when I don't even have it installed. These files are not associated with any packages, and I'm not sure if they are safe to remove:

        /usr/bin/ipython2.6
        /usr/lib/python2.6/dist-packages/distribute-0.6.15.egg-info
        /usr/lib/python2.6/dist-packages/easy_install.py
        /usr/lib/python2.6/dist-packages/IPython
        /usr/lib/python2.6/dist-packages/ipython-0.10.1.egg-info
        /usr/lib/python2.6/dist-packages/setuptools
        /usr/lib/python2.6/dist-packages/setuptools.egg-info
        /usr/lib/python2.6/dist-packages/setuptools.pth
        /usr/lib/python2.6/dist-packages/site.py
        /usr/lib/python2.6/dist-packages/wx.pth
        /usr/local/lib/python2.6
        /usr/local/lib/python2.6/dist-packages
        /usr/local/lib/python2.6/site-packages
        /usr/share/man/man1/ipython2.6.1.gz
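
    A hedged sketch of the ownership check the "quick script" presumably did, for anyone repeating it: dpkg can say whether any installed package claims a file:

        dpkg -S /usr/lib/python2.6/dist-packages/setuptools.pth
        # "no path found matching pattern ..." means no package owns it

    Files under /usr/local and unowned *.egg-info or *.pth entries typically come from easy_install or pip runs rather than from apt, which would explain why --purge left them behind.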

  • Relative path incorrect in the view layer when hosting a Rails 3 app in a subdirectory using Passenger and Apache

    - by Saifis
    I want to host multiple Rails apps on a single server using sub-directories, and have encountered some relative-path problems. I have made a symbolic link to the app's public directory and placed it in /var/www/html:

        /var/www/html/
            test_app  (symbolic link to the public folder of test_app)

    and set up Apache like so:

        LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12/ext/apache2/mod_passenger.so
        PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12
        PassengerRuby /usr/local/bin/ruby

        <VirtualHost *:80>
            ServerName test.com
            DocumentRoot /var/www/html
            Options Indexes FollowSymLinks -MultiViews
            RailsBaseURI /test_app
        </VirtualHost>

    The links in the app itself work just fine; all the links acknowledge the test_app/ directory. However, when it comes to showing images from the public directory in the view, the relative path goes wrong. Say I have /system/files/1/aaa.png; the app goes looking for it in /var/www/html/system/files/1/aaa.png rather than /var/www/html/test_app/system/files/1/aaa.png. As far as I understand, this is an Apache setting problem rather than something to be done in Rails. If possible, I would prefer to have it contained in the Apache conf file rather than having to alter the code.
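
    A hedged sketch of the Rails-side counterpart, in case the Apache-only route stalls. Rails builds those /system/... URLs itself and prefixes them only when it knows its mount point; the application class name here is hypothetical and the setting is untested against this exact stack:

        # config/environments/production.rb (Rails 3 sketch)
        TestApp::Application.configure do
          # assumption: asset/url helpers consult this when composing paths
          config.relative_url_root = "/test_app"
        end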

  • Postfix: How to configure Postfix with virtual Dovecot mailboxes?

    - by user75247
    I have configured a Postfix mail server for two domains: domain1.com and domain2.com. In my configuration, domain1 has both virtual users with Maildirs and aliases to forward mail to local users (e.g. root, webmaster), plus some small mailing lists. It also has some virtual mappings to non-local domains. Domain2, on the other hand, has only virtual alias mappings, mainly to corresponding 'users' at domain1 (e.g. mail to [email protected] should be forwarded to [email protected]). My problem is that Postfix currently accepts mail even for those users that don't exist in the system; mail to existing users and /etc/aliases works fine. The Postfix documentation states that the same domain should never be specified in both mydestination and virtual_mailbox_maps, but if I specify mydestination as blank, then Postfix validates recipients against virtual_mailbox_maps but rejects mail for local aliases of domain1.com.

    /etc/postfix/main.cf:

        myhostname = domain1.com
        mydomain = domain1.com
        mydestinations = $myhostname, localhost.$mydomain, localhost
        virtual_mailbox_domains = domain1.com
        virtual_mailbox_maps = hash:/etc/postfix/vmailbox
        virtual_mailbox_base = /home/vmail/domains
        virtual_alias_domains = domain2.com
        virtual_alias_maps = hash:/etc/postfix/virtual
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        virtual_transport = dovecot

    /etc/postfix/virtual:

        domain1.com right-hand-content-does-not-matter
        firstname.lastname user1
        [more aliases..]
        domain2.com right-hand-content-does-not-matter
        @domain2.com @domain1.com

    /etc/postfix/vmailbox:

        [email protected] user1/Maildir
        [email protected] user2/Maildir

    /etc/aliases:

        root: :include:/etc/postfix/aliases/root
        webmaster: :include:/etc/postfix/aliases/webmaster
        [etc..]

    Is this approach correct, or is there some other way to configure Postfix with Dovecot (virtual) Maildirs and Postfix aliases?
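
    Two hedged observations on the config as quoted. First, the Postfix parameter is spelled mydestination; a mydestinations line would be ignored, leaving the built-in default (which includes $myhostname, i.e. domain1.com) in effect. Second, a common layout for this goal keeps domain1.com out of mydestination entirely and routes role accounts through virtual aliases to the local machine; a sketch, where the localhost targets are assumptions:

        # /etc/postfix/main.cf
        mydestination = localhost.$mydomain, localhost
        virtual_mailbox_domains = domain1.com

        # /etc/postfix/virtual (additions)
        [email protected]       root@localhost
        [email protected]  webmaster@localhost

    With that, unknown users at domain1.com are rejected against virtual_mailbox_maps, while the role addresses still resolve through /etc/aliases on the localhost side.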

  • Join Domain from VM

    - by Adis
    I have two VMs running on VMware Player, using NAT adapter settings. The host machine for the VMs is on a corporate network. The first VM has a domain controller running, and I can log in on that machine using domain credentials. I named the domain wm.local. When I run ipconfig on this machine:

        IP:          192.168.87.132
        Def Gateway: 192.168.87.2
        DNS server:  192.168.87.2
        DHCP server: 192.168.87.254

    The second VM cannot join the domain. When I try it with domain WM, I'm prompted for credentials; I enter the Administrator credentials, it waits for some time, and I get the response "The specified domain either does not exist or could not be contacted". If I type wm.local as the domain when trying to join, it does not prompt me to log in but just shows "An Active Directory Domain Controller (AD DC) for the domain wm.local could not be contacted", and here it takes no time to get this error message. Ipconfig on this machine:

        IP:          192.168.87.134
        Def Gateway: 192.168.87.2
        DNS server:  192.168.87.2
        DHCP server: 192.168.87.254

    I can ping the second VM from the first one, and I have disabled the firewalls on both machines. Any ideas? Is there a manual for this?
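
    A hedged guess grounded in the ipconfig output above: both VMs use 192.168.87.2 (the VMware NAT resolver) for DNS, which knows nothing about the SRV records for wm.local, so the join cannot locate a DC. A sketch of pointing the second VM at the domain controller instead (the interface name varies per machine):

        netsh interface ip set dns name="Local Area Connection" static 192.168.87.132
        ipconfig /flushdns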
