Search Results

Search found 22277 results on 892 pages for 'multiple addresses'.

  • Running multiple services on Port 443, Tunnel SSH over HTTPS

    - by lajuette
    Situation: I want to tunnel SSH sessions through HTTPS, because I am behind a very restrictive firewall/proxy which only allows HTTP, FTP and HTTPS traffic.

    What works: setting up a tunnel through the proxy to a remote Linux box that has an sshd listening on port 443.

    The problem: I have to have a web server (lighty) running on port 443, and HTTPS traffic to other ports is forbidden by the proxy.

    Ideas so far: set up a virtual host and proxy all incoming requests to a local port (e.g. 22):

        $HTTP["host"] == "tunnel.mylinux.box" {
            proxy.server = ( "" => (("host" => "127.0.0.1", "port" => 22)) )
        }

    Unfortunately this won't work. Am I doing something wrong, or is there a reason that this can't work?
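    A point worth making explicit: lighty's proxy.server forwards parsed HTTP requests, not raw TCP, so it can never carry the SSH protocol; that is why the virtual-host idea fails. The usual fix is a protocol multiplexer such as sslh in front of both services. A sketch, assuming sslh is installed and lighty is moved to a local port such as 8443:

        # sslh owns port 443 and routes by protocol: SSH to sshd, TLS to lighty
        sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:8443

    On the client side, the tunnel through the proxy is typically built with a ProxyCommand helper such as corkscrew in ~/.ssh/config (proxy.internal and 8080 are hypothetical placeholders for the local proxy):

        Host tunnel.mylinux.box
            Port 443
            ProxyCommand corkscrew proxy.internal 8080 %h %p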

  • Windows file locks allowing multiple users to write to an open file over the network

    - by JPbuntu
    I have six Windows computers (XP, Vista, 7) that need to access a Samba share (Ubuntu 12.04). I am trying to make it so only one client can open a file at a given time. I thought this was pretty standard behaviour of file locks, but I can't get it to work. The way it is right now, a file can be open by two users, and changed and saved by either one of them; the last file saved overwrites whatever changes the other user made.

    At first I thought this was a Samba configuration problem, but I get this behaviour even between two Windows machines. So far I have only tested:

        Windows XP << Windows Vista
        Windows XP << Samba << Windows Vista

    and both give the same behaviour. When I tested the Samba configuration, I had set strict locking = yes and got errors logged like this:

        close_remove_share_mode: Could not get share mode lock for file _prod/part_number_list_COPY.xlsx

    Eventually all of the files are going to be moved onto the Samba share, so that is the configuration I am most concerned about fixing. Any ideas? Thanks in advance.

    EDIT: I tested an Excel file again, and it is now working properly in both cases mentioned above; I am also no longer getting the error above. I don't know what happened; perhaps a restart fixed it? (It also works with strict locking = no.) I still need to find a solution for the CAD/CAM files we use, though: the software is Vector and it does not seem to use file locks. Is there any software that I can use to manage these files, so two people can't open/edit them at a time? Maybe a Windows application that forces file locks? Or a dirt-simple version control system? (The only ones I have seen are too complicated for our needs.)
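    For reference, a minimal sketch of the kind of share definition being tested, with the locking options the question mentions (the path and share name are hypothetical):

        [prod]
            path = /srv/samba/prod
            writeable = yes
            # honour clients' byte-range locks strictly (the setting under test)
            strict locking = yes
            # disable oplocks so a second open is not served from a client-side cache
            oplocks = no
            level2 oplocks = no

    Note that locking only helps when the application actually requests locks; software like the CAD package described here opens files without them, and no smb.conf setting can force that.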

  • multiple wildcard entries

    - by Murali
    My client has around 300,000 domains, and they just have a wildcard for all of them:

        *    A    12.12.12.12

    Now they want to create a subdomain that points to a different IP and still have the flexibility of the wildcard, something like:

        ww1.*    A    24.24.24.24
        *        A    12.12.12.12

    It looks like in BIND the lower "*" is a catch-all that takes over every query, hence ww1 is not working. One of the solutions offered by the IT folks was to create 300K separate zones just for "ww1" and leave the "*" wildcard. Are there any other DNS servers that can achieve this task more easily? Any other ways to deal with it?
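    If each of the 300,000 domains is its own zone (as it usually would be), BIND can point every zone at one shared file, so a single template carries both the wildcard and the ww1 record for all of them. A sketch with hypothetical names:

        ; template.zone, shared by every customer domain
        $TTL 3600
        @       IN SOA  ns1.hoster.example. hostmaster.hoster.example. (
                        1 3600 900 1209600 300 )
                IN NS   ns1.hoster.example.
        *       IN A    12.12.12.12
        ww1     IN A    24.24.24.24

    and in named.conf:

        zone "customer1.example" { type master; file "template.zone"; };
        zone "customer2.example" { type master; file "template.zone"; };

    Within a single zone an explicit name always beats the wildcard, so ww1.customer1.example gets 24.24.24.24 while every other name keeps resolving to 12.12.12.12. The catch is that all domains then share one SOA and record set, which is already implied by the existing wildcard setup.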

  • Is this normal? Multiple httpd processes

    - by ilcreatore
    I'm testing a new server. This isn't really a peak time for my server (2 pm), but it's still running a bit slow. I was checking the ESTABLISHED connections using the following command:

        # netstat -ntu | grep :80 | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n

    (screenshot of the output: http://i.stack.imgur.com/cZuvP.jpg)

    My MaxClients is set to 50. As you can see in the picture, only 10 people are eating most of my RAM. I have a server with 4 GB of RAM (2.7 GB free for Apache), but each Apache process is eating 53 MB, which means I can only allow about 50 processes. KeepAlive is Off, but I notice those connections aren't closing fast enough. Is that normal?
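    For context, the knobs in question live in the prefork MPM section of httpd.conf. A sketch matching the numbers above (the values are the asker's, not recommendations):

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients           50      # 50 workers x ~53 MB fits the 2.7 GB available
            MaxRequestsPerChild 500      # recycle children so leaked memory is returned
        </IfModule>
        KeepAlive Off
        Timeout   30                     # default is 300 s; stuck connections linger that long

    Lingering ESTABLISHED connections with KeepAlive off usually point at slow clients or a generous Timeout rather than anything unusual in Apache itself.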

  • Make Apache listen on multiple IPs

    - by Enrique Becerra
    Hi, I'm in a big LAN, which is behind a proxy/firewall. I'm working with an Apache/PHP/MySQL application, which is hosted on a small server beside my workstation. This server is also connected to the LAN and sits behind the proxy.

    The server has a local IP assigned: 10.64.x.x. It also has a public IP assigned (or redirected from within the proxy/firewall), which is 200.41.x.x. I can't access the public IP from the LAN, but I can ping the public IP from outside the building.

    How should I configure Apache to listen for the public IP as well and open port 80 for people accessing from outside the building? It is currently set to Listen 10.64.x.x:80. Thanks a lot in advance,
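    A sketch of the two usual options (the x.x addresses are the question's own placeholders; substitute the real ones):

        # Option 1: listen on every interface the host owns
        Listen 80

        # Option 2: one Listen per address; this only works if 200.41.x.x is
        # actually bound to an interface on the server, not just NATed at the
        # firewall
        Listen 10.64.x.x:80
        Listen 200.41.x.x:80

    If the public address lives on the firewall and is merely forwarded, Option 1 (or keeping the LAN-only Listen and fixing the forward) is the one that works, since Apache can only bind addresses the host actually owns.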

  • GNU/Linux package manager, multiple versions, no root privilege

    - by user744629
    I'm looking for a GNU/Linux package manager that can help to package, distribute and install software (actually scientific libraries) like this:

    - does not require root privileges; installs, for example, into $HOME/opt or $HOME/local
    - manages multiple versions of a library, for example with this directory organisation:

          $HOME/opt/somelib/4.2.1/lib/
          $HOME/opt/somelib/4.2.1/include/
          $HOME/opt/somelib/4.2.1/bin/
          $HOME/opt/somelib/4.2.2/lib/
          $HOME/opt/somelib/4.2.2/include/
          $HOME/opt/somelib/4.2.2/bin/
          $HOME/opt/anotherlib/1.0.0/lib/
          $HOME/opt/anotherlib/1.0.0/include/
          $HOME/opt/anotherlib/1.0.0/bin/

    - packages contain source files, not binaries; the build is performed during install
    - support for Mac OS X would be good too

    Then it's up to the user to manage their LD_LIBRARY_PATH, or compile with -L/good/path, etc. Does it exist?
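    The "up to the user" part is easy to sketch; a per-version activation snippet like the one below (names taken from the layout above) is what a modulefile or wrapper script would generate:

        # select somelib 4.2.1 for this shell session
        ver=4.2.1
        prefix="$HOME/opt/somelib/$ver"
        export PATH="$prefix/bin:$PATH"
        export LD_LIBRARY_PATH="$prefix/lib:$LD_LIBRARY_PATH"
        export CPATH="$prefix/include:$CPATH"   # picked up by gcc/clang

    Tools in this space that match most of the list include GNU Stow (symlink-farm management of $HOME/opt-style trees) and, for source-built scientific stacks specifically, Spack or EasyBuild; all of them run without root.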

  • Running multiple services on different servers with IPv6 and a FQDN

    - by Mark Henderson
    One of the things NAT has permitted us to do in the past decade is split physical services onto different servers whilst hiding behind a single interface. For example, I have example.com behind a NAT on 192.0.2.10. I port-forward :80 and :443 to my web server. I also port-forward :25 to my mail server, :3389 to a terminal server, and :8080 to the web interface of my computer that downloads torrents, and the story goes on. So I have 5 port forwardings going to 4 different computers on example.com.

    Then, I go and get me some neat IPv6. I assign example.com an IPv6 address of 2001:db8:88:200::10. That's great for my websites, but I want to go to example.com:8080 to get to my torrents, or example.com:3389 to log on to my terminal server. How can I do this with IPv6, as there is no NAT? Sure, I could create a bunch of new DNS entries for each new service, but then I have to update all my clients, who are used to just typing example.com to get to either the website or the terminal server. My users are dumber than two bricks, so they won't remember to connect to rdp.example.com. What options do I have for keeping NAT-style functionality with IPv6?

    In case you haven't figured it out, the above scenario is not a real scenario for me, or perhaps anyone yet, but it's bound to happen eventually. You know, with devops and all.
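    NAT-style port redirection does exist for IPv6 on Linux (the kernel gained IPv6 NAT in 3.7), so the single-address habit can be reproduced on the router with something like this sketch, using the question's documentation prefix plus a hypothetical backend at ::20:

        # redirect port 3389 arriving at the public address to the terminal server
        ip6tables -t nat -A PREROUTING -d 2001:db8:88:200::10 -p tcp --dport 3389 \
            -j DNAT --to-destination [2001:db8:88:200::20]:3389

    The cleaner IPv6-native answer remains one AAAA record per service, since addresses are no longer scarce; the rule above just preserves the old behaviour for clients that only know example.com.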

  • How to push configurations to multiple Cisco Switches

    - by nixda
    Assume I have around 50 Cisco IE2000 switches connected together and I want to reconfigure some settings, the same settings for every switch. Normally I would open a command-line session via PuTTY and paste the commands, but as the number of switches grows, even this method takes its time. I am aware of Kiwi CatTools; unfortunately it's not free, so I'm wondering whether there are other efficient ways to configure a large number of Cisco switches.
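    The paste-into-PuTTY workflow is straightforward to script. A rough sketch, assuming the switches accept SSH with shared credentials (the file names and user are hypothetical):

        # replay the same command file on every switch listed in switches.txt
        while read -r sw; do
            echo "=== $sw ==="
            sshpass -p "$PASSWORD" ssh -o StrictHostKeyChecking=no \
                admin@"$sw" < commands.txt
        done < switches.txt

    Purpose-built free tools exist as well; RANCID and Ansible's Cisco IOS modules are the usual suggestions for pushing identical configuration at this scale.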

  • Apache doesn't run multiple requests

    - by Reinderien
    I'm currently running this simple Python CGI script to test rudimentary IPC:

        #!/usr/bin/python -u
        import cgi, errno, fcntl, os, os.path, sys, time

        print("""Content-Type: text/html; charset=utf-8

        <!doctype html>
        <html lang="en">
        <head>
            <meta charset="utf-8" />
            <title>IPC test</title>
        </head>
        <body>
        """)

        ftempname = '/tmp/ipc-messages'
        master = not os.path.exists(ftempname)
        if master:
            fmode = 'w'
        else:
            fmode = 'r'

        print('<p>Opening file</p>')
        sys.stdout.flush()
        ftemp = open(ftempname, fmode)
        print('<p>File opened</p>')

        if master:
            print('<p>Operating as master</p>')
            sys.stdout.flush()
            for i in range(10):
                print('<p>' + str(i) + '</p>')
                sys.stdout.flush()
                time.sleep(1)
            ftemp.close()
            os.remove(ftempname)
        else:
            print('<p>Operating as a slave</p>')
            ftemp.close()

        print("""
        </body>
        </html>""")

    The 'server-push' portion works; that is, for the first request I do see piecewise updates. However, while the first request is being serviced, subsequent requests are not started, only to be started after the first request has finished. Any ideas on why, and how to fix it?

    Edit: I see the same non-concurrent behaviour with vanilla PHP, running this:

        <!doctype html>
        <html lang="en">
        <!-- $Id: $ -->
        <head>
            <meta charset="utf-8" />
            <title>IPC test</title>
        </head>
        <body>
        <p>
        <?php
        function echofl($str) {
            echo $str . "</b>\n";
            ob_flush();
            flush();
        }

        define('tempfn', '/tmp/emailsync');

        if (file_exists(tempfn))
            $perms = 'r+';
        else
            $perms = 'w';

        assert($fsync = fopen(tempfn, $perms));
        assert(chmod(tempfn, 0600));

        if (!flock($fsync, LOCK_EX | LOCK_NB, $wouldblock)) {
            assert($wouldblock);
            $master = false;
        }
        else
            $master = true;

        if ($master) {
            echofl('Running as master.');
            assert(fwrite($fsync, 'content') != false);
            assert(sleep(5) == 0);
            assert(flock($fsync, LOCK_UN));
        }
        else {
            echofl('Running as slave.');
            echofl(fgets($fsync));
        }

        assert(fclose($fsync));
        echofl('Done.');
        ?>
        </p>
        </body>
        </html>

  • how to run multiple shell scripts in parallel

    - by tom smith
    I've got a few test scripts, each of which runs a test PHP app, and each script runs forever. So cat.sh, dog.sh and foo.sh each run a PHP script in a loop, sleeping after each run. I'm trying to figure out how to run the scripts in parallel and, at the same time, see the output of the PHP apps in the stdout/terminal window. I thought simply doing something like this in a shell script would be sufficient:

        foo.sh >&2
        dog.sh >&2
        cat.sh >&2

    but it's not working:

        foo.sh runs foo.php once, and it runs correctly
        dog.sh runs dog.php in a never-ending loop; it runs as expected
        cat.sh runs cat.php in a never-ending loop: this never runs!

    It appears that the shell script never gets to run cat.sh. If I run cat.sh by itself in a separate window/terminal, it runs as expected. Thoughts/comments?
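    Running the scripts one after another means the shell waits for each to exit before starting the next, so the first never-ending loop (dog.sh) blocks everything behind it. A minimal sketch of the usual fix: background each job with & so all three run concurrently while still writing to the terminal:

        #!/bin/sh
        ./foo.sh &
        ./dog.sh &
        ./cat.sh &
        wait    # keep the wrapper alive until the jobs exit (Ctrl-C stops them all)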

  • multiple folder structures/views

    - by Sojourner
    Newbie here, setting up a server for a law firm. I want to set up the folder structure as follows:

        Client 1 Name
            Matter 1 (i.e. setting up corporation)
            Matter 2 (i.e. divorce)
            Matter 3 (i.e. setting up trust)
        Client 2 Name
            Matter 1
        Client 3 Name
            Matter 1

    and so on. But the attorneys prefer navigating a folder structure based on the case type:

        Civil
            Client 1 Name (i.e. Smythe)
            Client 2 Name (i.e. Jones)
            Client 3 Name (i.e. Johson)
            Landlord/Tenant
                Client 1 Name (i.e. Jones)
                Client 2 Name (i.e. Johson)
            Class Action Suits
                Suit 1
                Suit 2
        Personal Injury
            Client 1 Name
            Client 2 Name
            Client 3 Name
        Criminal
            Client 1 Name (i.e. Smythe)

    I'd like to know if it's possible to set up the server with the first folder structure (it's more organized and easier to employ scripts), while having the second folder structure available for users who find it easier to deal with the same types of cases grouped together.
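    On a Unix-like file server this is commonly done by keeping the by-client tree as the real data and building the by-case-type tree out of symbolic links. A sketch with hypothetical paths:

        # canonical storage
        mkdir -p /srv/firm/by-client/Smythe/Matter1

        # second "view" for the attorneys: links only, no duplicated data
        mkdir -p /srv/firm/by-type/Criminal
        ln -s /srv/firm/by-client/Smythe/Matter1 /srv/firm/by-type/Criminal/Smythe

    If clients browse over SMB, check that the share is configured to follow symlinks; on a Windows file server the equivalent trick is directory junctions (mklink /D).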

  • Mod_Perl configuration for multiple domains

    - by daliaessam
    Reading the mod_perl module documentation: can we configure it on a per-domain basis? What I mean is, can we configure it to run on every domain, or on a specific domain only? What I see in the docs is:

        Registry Scripts

        To enable registry scripts add to httpd.conf:

            Alias /perl/ /home/httpd/2.0/perl/
            <Location /perl/>
                SetHandler perl-script
                PerlResponseHandler ModPerl::Registry
                PerlOptions +ParseHeaders
                Options +ExecCGI
            </Location>

        and now assuming that we have the following script:

            #!/usr/bin/perl
            print "Content-type: text/plain\n\n";
            print "mod_perl 2.0 rocks!\n";

        saved in /home/httpd/httpd-2.0/perl/rock.pl. Make the script executable and readable by everybody:

            % chmod a+rx /home/httpd/httpd-2.0/perl/rock.pl

        Of course the path to the script should be readable by the server too. In the real world you probably want to have tighter permissions, but for the purpose of testing that things are working, this is just fine.

    From what I understand from the above, we can run Perl scripts only from the one specific folder that we put the directive on. So the question again: can we make this directive per-domain, either for all domains or for a specific set of domains?
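    A sketch of the per-domain variant, assuming name-based virtual hosts: the same Location block is legal inside a VirtualHost, where it applies to that domain only (the paths and names here are hypothetical):

        <VirtualHost *:80>
            ServerName example.com
            Alias /perl/ /home/httpd/example.com/perl/
            <Location /perl/>
                SetHandler perl-script
                PerlResponseHandler ModPerl::Registry
                PerlOptions +ParseHeaders
                Options +ExecCGI
            </Location>
        </VirtualHost>

    Left at the top level of httpd.conf, as in the docs, the block applies to every virtual host that doesn't override it.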

  • How to set up matlabpool for multiple processors?

    - by JohnIdol
    I just set up an Extra Large Heavy Computation EC2 instance to throw at my genetic algorithms problem, hoping to speed things up. This instance has 8 Intel Xeon processors (around 2.4 GHz each) and 7 GB of RAM. On my machine I have an Intel Core Duo, and MATLAB is able to work with my two cores just fine by running:

        matlabpool open 2

    On the EC2 instance, though, MATLAB is only capable of detecting 1 out of 8 processors, and if I try running:

        matlabpool open 8

    I get an error saying that the ClusterSize is 1, since there's only 1 core on my CPU. True, there is only 1 core on each CPU, but I have 8 CPUs on the given EC2 instance! So the difference between my machine and the EC2 instance is that I have my 2 cores on a single processor locally, while the EC2 instance has 8 distinct processors. My question is, how do I get MATLAB to work with those 8 processors? I found this paper, but it seems related to setting up MATLAB with multiple EC2 instances (not multiple processors on the same instance, EC2 or not), which is not my problem. Any help appreciated!
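    In the MATLAB releases of that era the local scheduler capped its worker count at the detected core count, and the usual workaround was to raise ClusterSize by hand before opening the pool. A sketch, untested here, and the relevant API was replaced by parcluster in later releases:

        % query the local scheduler and lift its worker limit to 8
        sched = findResource('scheduler', 'type', 'local');
        set(sched, 'ClusterSize', 8);
        matlabpool('open', 'local', 8);

    Note also that matlabpool enforced a licence-based ceiling on local workers (8 in the releases of that period), independent of how many CPUs the instance reports.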

  • Excel: how to get the average of a column for rows that meet multiple criteria

    - by Jess
    I would like to know the average number of days between open and close dates for items with a close date in a particular month. From the example below, in Jan 2013 items 2, 5 and 6 were closed ("closed" can be RESOLVED or CANCELLED status); each was open for 26, 9 and 6 days respectively. So the jobs that have a close date in Jan 2013 (between 01/01/13 and 31/01/13) have an average open time (between open and close date) of 13.67 days to 2 d.p. I have tried a few ways to get this to work, and I think the issue I am having is with the AVERAGE function. First time using a forum, so apologies if my question is unclear. I was unable to post an image, so the data is comma-separated below:

        Item_ID,Open_Date,Status,Close_Date
        1,1/06/2012,RESOLVED,16/07/2012
        2,20/12/2012,RESOLVED,16/01/2013
        3,2/01/2013,IN PROGRESS,
        4,3/01/2013,CANCELLED,7/05/2013
        5,3/01/2013,RESOLVED,12/01/2013
        6,4/01/2013,RESOLVED,10/01/2013
        7,1/02/2013,RESOLVED,15/02/2013
        8,2/02/2013,OPEN,
        9,7/02/2013,CANCELLED,26/02/2013
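    AVERAGEIFS cannot average a computed quantity such as the difference between two columns, which is probably where the plain AVERAGE attempts fell apart. A sketch that averages the day difference for rows whose close date falls in January 2013, assuming the data sits in A1:D10 with headers in row 1; enter it as an array formula with Ctrl+Shift+Enter:

        =AVERAGE(IF((D2:D10>=DATE(2013,1,1))*(D2:D10<=DATE(2013,1,31)),D2:D10-B2:B10))

    Rows with a blank close date evaluate to 0 and fail the date test, so open items drop out automatically. An equivalent SUMPRODUCT form avoids the array-entry step:

        =SUMPRODUCT((D2:D10>=DATE(2013,1,1))*(D2:D10<=DATE(2013,1,31))*(D2:D10-B2:B10))/SUMPRODUCT((D2:D10>=DATE(2013,1,1))*(D2:D10<=DATE(2013,1,31)))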

  • Make a flash drive bootable with multiple Windows installers

    - by alexander7567
    I have seen many ways to make a USB drive an installer for just about any single Windows OS. But how can I (using GRUB or something like that) make one drive boot the installers for all editions of Windows XP and 7? I have tried Googling it and researching, and I've even tried to do it, but I don't understand much about GRUB or Linux at all. Please keep in mind that I am not very good with Linux, so please use as many details as possible.

  • Git version control with multiple users

    - by ignatius
    Hello, I am a little bit lost with this issue; let me explain my problem. I want to set up a Git repository; three or four users will contribute, so they need to download the code and shall be able to upload their changes to the server or update their branch with the latest modifications. So I set up a Linux machine, installed Git, set up the repository, then added the users in order to enable access through SSH. Now my question is: what's next? The Git documentation is a little bit confusing. For example, when I try to clone the repository from a dummy user account I get:

        xxx@xxx-desktop:~/Documentos/git/test$ git clone -v ssh://[email protected]/pub.git
        Initialized empty Git repository in /home/xxx/Documentos/git/test/pub/.git/
        [email protected]'s password:
        fatal: '/pub.git' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    Is that a problem of privileges? Do I need any special configuration? I want to avoid using git-daemon or gitosis. Sorry, maybe my question sounds silly; Git is powerful but, I admit, not so user-friendly. Thanks, Br
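    The error itself is mostly a path problem: ssh://host/pub.git points at /pub.git in the filesystem root, not at wherever the repository actually lives. A sketch of a minimal multi-user setup, with hypothetical paths and group name:

        # on the server: a bare, group-shared repository
        sudo git init --bare --shared=group /srv/git/pub.git
        sudo chgrp -R devs /srv/git/pub.git

        # on each client: clone using the full path to the repository
        git clone ssh://user@server/srv/git/pub.git

    --shared=group makes Git keep the repository files group-writable, so every user in the devs group can push over plain SSH, with no daemon or gitosis involved.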

  • SharePoint: Multiple Alternate Access Mapping Collections for a Single Web Application

    - by Russ Giddings
    Hi all, we have a SharePoint MOSS 2007 installation which has two different external hostnames. When inspecting the setup, I noticed that there are two Alternate Access Mapping collections mapped to the same web application; each AAM collection contains one URL mapped to the default zone. I can't see how AAM collections are mapped to web apps, or even how to create a new AAM collection. I had always thought that there was a simple one-to-one mapping between web apps and AAM collections. Does anyone have any idea how you would create such a situation? Cheers, Russell

  • Exchange 2010, multiple accepted domains, UCC and outside webhosts

    - by westbadger
    We have an Exchange 2010 server configured to send and receive mail on several accepted domains for Outlook Anywhere, with a UCC cert covering each mail.domain.com and autodiscover.domain.com, mail.otherplace.com, etc. This worked fine until an SSL domain-validation cert for one of the additional domains, where the www.otherplace.com site is hosted outside our org, expired. Now Exchange users in mail.otherplace.com get an expired-cert warning for otherplace.com when connecting to our mail.domain.com portal. They still get mail, but with a repeated popup in Outlook 2007 and 2010.

    If I understand it correctly, Outlook autodiscover connects by first polling otherplace.com/autodiscover, which is the outside www server with the expired cert, before continuing on to autodiscover.otherplace.com, which is where the MX record points to our in-house Exchange UCC. I'm trying to find out if we should: 1) turn down all mail functions on the outside webserver, 2) delete the expired (useless for an informational site) cert on the outside webserver, 3) renew the cert for otherplace.com on the outside webserver, or something completely different? Many thanks in advance for your thoughts.
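    For reference, Outlook's probe order is roughly https://otherplace.com/autodiscover/autodiscover.xml, then https://autodiscover.otherplace.com/..., then an SRV lookup. One documented way to skip the broken first hop is an autodiscover SRV record that sends clients straight to the Exchange host; a sketch using the question's names:

        ; in the otherplace.com zone
        _autodiscover._tcp.otherplace.com. 3600 IN SRV 0 0 443 mail.domain.com.

    Combined with disabling HTTPS (or removing the stale binding) on the outside web server, this keeps the informational site untouched while Outlook stops ever seeing the expired certificate.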

  • Multiple subnets on isc-dhcp-server using DDNS with BIND9

    - by legioxi
    On my network I have two subnets:

        10.100.1.0/24 - wired/wireless
        10.100.7.0/24 - VPN

    Both subnets are served by isc-dhcp-server running on a Debian VM. This same VM runs BIND9 for my DNS. isc-dhcp-server is configured to use DDNS and update BIND9 with hosts/IPs. Everything runs great until a device drops off the wired/wireless network and pops onto the VPN. When connecting on the VPN, a DHCP lease is handed out on the new subnet, but DDNS does not update BIND9. Since the device has A/TXT/PTR records, it appears isc-dhcp-server won't switch them to the new IP. The logs show:

    Connect to wireless:

        Nov  6 20:55:13 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': adding an RR at 'demo-iphone.internal.mydomain.com' A
        Nov  6 20:55:13 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': adding an RR at 'demo-iphone.internal.mydomain.com' TXT
        Nov  6 20:55:13 core-server dhcpd: DHCPACK on 10.100.1.160 to FF:FF:FF:FF:FF:FF (demo-iphone) via eth0
        Nov  6 20:55:13 core-server dhcpd: Added new forward map from demo-iphone.internal.mydomain.com to 10.100.1.160
        Nov  6 20:55:13 core-server dhcpd: Added reverse map from 160.49.21.172.in-addr.arpa. to demo-iphone.internal.mydomain.com

    Switch to VPN:

        Nov  6 20:56:34 core-server dhcpd: DHCPOFFER on 10.100.7.101 to BB:BB:BB:BB:BB:BB (demo-iphone) via 10.100.7.0
        Nov  6 20:56:34 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': update unsuccessful: demo-iphone.internal.mydomain.com: 'name not in use' prerequisite not satisfied (YXDOMAIN)
        Nov  6 20:56:34 core-server dhcpd: DHCPREQUEST for 10.100.7.101 (10.100.1.2) from BB:BB:BB:BB:BB:BB (demo-iphone) via 10.100.7.0
        Nov  6 20:56:34 core-server dhcpd: DHCPACK on 10.100.7.101 to BB:BB:BB:BB:BB:BB (demo-iphone) via 10.100.7.0
        Nov  6 20:56:34 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': update unsuccessful: demo-iphone.internal.mydomain.com/TXT: 'RRset exists (value dependent)' prerequisite not satisfied (NXRRSET)
        Nov  6 20:56:34 core-server dhcpd: Forward map from demo-iphone.internal.mydomain.com to 10.100.7.101 FAILED: Has an address record but no DHCID, not mine.

    One thing to note is that the MAC of the device when connecting via VPN is the MAC of my Cisco ASA5512X and not the actual device; the ASA is relaying the DHCP request from the VPN client to the VM running isc-dhcp-server. Is there a way to get DDNS working in this scenario?
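    The failing log line is the giveaway: dhcpd signs its DDNS entries with a DHCID/TXT record derived from the client identity, and because the relayed VPN request arrives with the ASA's MAC, the signature no longer matches ("no DHCID, not mine"), so the server refuses to replace the record. One knob aimed at exactly this, a sketch for dhcpd.conf (isc-dhcp-server 4.1 and later):

        # allow the server to overwrite existing A records whose DHCID
        # does not match the requesting client
        update-conflict-detection false;

    The trade-off is that any client can then claim a name that is already taken, which is usually acceptable on a closed network like this one.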

  • Domino nchronos.exe multiple instances causing server to die, and Sametime problems

    - by Kevin
    I've had this problem for a few months now. I thought it started when I installed the Traveler software on the server to add ActiveSync support, but I removed that and the problem still persists. Basically, new instances of "nchronos.exe" keep spawning (and not ending), so over a period of a few days the server eventually gets drowned in nchronos.exe processes, stops responding, and I need to kill Domino. My process count the last time was up at about 330, and when I killed it and restarted Domino my process count went to 160. I'm running Domino 8.5.1 with Fix Pack 2.

    I don't know if it's relevant, but my Domino server was also acting as a Sametime server. At around the same time that nchronos started playing up, Sametime also stopped working: none of my users can connect to Sametime, and the Domino log keeps telling me "stpolicy.exe" has terminated. I've googled that and tried a few things, but nothing seems to make Sametime work again. Any thoughts?? Cheers, Kevin

  • How to prevent Outlook from receiving multiple copies of the same email

    - by martani_net
    This question might have been asked already, but it's different from this one. My boss has Outlook 2003. When he synchronizes his email while connected through the local server (using Exchange, I guess) he gets his emails normally. Once he is outside (not connected to our LAN) he gets each email duplicated 3 or 4 times. Has anyone experienced this before, and how can we fix it? Please, no links to FAQ pages. For info, we are using Kaspersky antivirus and Windows Server 2003, with Windows XP clients.

    [Update] Actually we have a bunch of 5 or 6 email accounts in Outlook, and only one of them receives duplicated copies of the same email; all the others are fine. Furthermore, all these accounts use the same service, Gmail for example.

    [Update 2] I just found out that Outlook is already configured to remove emails from the server. Also, some emails exceed 5 copies! Thank you.

  • What would multiple serial ports be used for?

    - by jasondavis
    I saw someone using something like the card linked below in their system, for some networking gear on their PC. I am very curious what a person would need 8 serial ports for. What kind of equipment uses this?

        http://www.newegg.com/Product/Product.aspx?Item=N82E16815124041&cm_re=serial_card--15-124-041--Product

  • pound: multiple domains

    - by niklassaers
    Hi guys, I've been using Pound to run mydomain.dk. Now I've bought some other domains and SSL certificates for mydomain.no, mydomain.se and mydomain.eu. My old config looked roughly like this:

        ListenHTTPS
            Address 81.19.246.120
            Port    443
            Cert "/usr/local/etc/pound.keys/mydomain.dk.pem"
            Service
                BackEnd
                    Address 10.0.10.10
                    Port    8080
                End
            End
        End

    In places like here I've seen that I can use HeadRequire in the Service part, but I want the Host header to go together with the Cert, ideally something like:

        ListenHTTPS
            Address 81.19.246.120
            Port    443
            HostAndCert "mydomain.dk" "/usr/local/etc/pound.keys/mydomain.dk.pem"
            HostAndCert "mydomain.se" "/usr/local/etc/pound.keys/mydomain.se.pem"
            HostAndCert "mydomain.no" "/usr/local/etc/pound.keys/mydomain.no.pem"
            HostAndCert "mydomain.eu" "/usr/local/etc/pound.keys/mydomain.eu.pem"
            Service
                BackEnd
                    Address 10.0.10.10
                    Port    8080
                End
            End
        End

    Any suggestions or clues as to how I can accomplish this? Cheers, Nik
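    There is no HostAndCert directive, but newer Pound versions (2.6 and later) support SNI by accepting several Cert lines in one listener and choosing the certificate that matches the requested hostname. A sketch, assuming such a version (clients must also support SNI):

        ListenHTTPS
            Address 81.19.246.120
            Port    443
            Cert "/usr/local/etc/pound.keys/mydomain.dk.pem"
            Cert "/usr/local/etc/pound.keys/mydomain.se.pem"
            Cert "/usr/local/etc/pound.keys/mydomain.no.pem"
            Cert "/usr/local/etc/pound.keys/mydomain.eu.pem"
            Service
                BackEnd
                    Address 10.0.10.10
                    Port    8080
                End
            End
        End

    On older Pound builds the fallback is one ListenHTTPS block per certificate, each bound to its own IP address.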

  • Apache httpd.conf: handling multiple domains that run the same application

    - by John Stewart
    What we are looking for is the ability to do the following. We have an application that can load certain settings based on the domain it is being accessed from: if you come from xyz.com we show one logo, and if you come from abc.com we show a different logo. The code is the same, running on the same server; it just detects the domain at runtime.

    Now we want to get a dedicated server (any suggestions?) that will enable us to point all the domains that we want at this server (we change the DNS for the domains to that of our server), so that when a user goes to any of those domains they run the same application. As far as I can understand, we will need to create a "VirtualHost" in Apache to handle this. Can we create a wildcard VirtualHost that catches all the domains? I am not an expert with Apache at all, so please forgive me if this turns out to be a silly question. Any detailed help would be great. Thanks
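    A single catch-all VirtualHost does this. A sketch, with a hypothetical DocumentRoot; the application then branches on the Host header (HTTP_HOST) exactly as described:

        # the first (or only) vhost acts as the default and catches every domain
        <VirtualHost *:80>
            ServerName app.example.com
            ServerAlias *
            DocumentRoot /var/www/app
        </VirtualHost>

    Because this vhost is defined first, any request whose Host header matches no other vhost lands here, so adding a new domain needs nothing more than a DNS change.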
