Search Results

Search found 22756 results on 911 pages for 'cisco vpn client'.


  • Why do my proxy settings keep changing?

    - by tjrobinson
    I am running Windows 7 and, a few times a day, the "Use a proxy server for your LAN (These settings will not apply to dial-up or VPN connections)" checkbox under "Local Area Network (LAN) Settings" becomes checked, seemingly on its own. Am I accidentally pressing some hidden keyboard shortcut? What sort of applications might alter this setting, and why? Unfortunately, as this is a developer box there is a lot of software that could be interfering, e.g. Visual Studio 2010, the Windows Azure SDK and various browsers. Has anyone else come across this?
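
    That checkbox mirrors a registry value, so one way to catch the culprit is to watch the value directly. A minimal sketch (this is the standard WinINET key; watching it with Process Monitor would show which process writes it):

        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable
        rem 0x0 = unchecked, 0x1 = checked; Process Monitor filtered on
        rem this key reveals which process flips it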

    Read the article

  • 500 Internal Server Error when setting up Apache on localhost

    - by Martin Hoe
    I downloaded and installed XAMPP, and to keep my projects nicely separated I want to create a VirtualHost for each one based on its future domain name. For example, for my first project (we'll say it's project.com) I've put this in my Apache configuration: NameVirtualHost 127.0.0.1 <VirtualHost 127.0.0.1:80> DocumentRoot C:/xampp/htdocs/ ServerName localhost ServerAdmin admin@localhost </VirtualHost> <VirtualHost 127.0.0.1:80> DocumentRoot C:/xampp/htdocs/sub/ ServerName sub.project.com ServerAdmin [email protected] </VirtualHost> <VirtualHost 127.0.0.1:80> DocumentRoot C:/xampp/htdocs/project/ ServerName project.com ServerAdmin [email protected] </VirtualHost> And this in my hosts file: # development 127.0.0.1 localhost 127.0.0.1 project.com 127.0.0.1 sub.project.com When I go to project.com in my browser, the project loads up successfully. Same if I go to sub.project.com. But if I navigate to http://project.com/register (one of my site pages) I get this error: Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. The error log shows this: [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/ Any idea what config items I got wrong, or how to get this working? It happens on any page that's not in the root directory of project.com. Thanks.
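
    An internal-redirect loop like this usually means a rewrite rule in .htaccess keeps matching its own output. A minimal guard, assuming the project uses a front-controller index.php (file names are hypothetical):

        RewriteEngine On
        RewriteBase /
        # stop rewriting once the request maps to a real file or directory
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^ index.php [L]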

    Read the article

  • nginx & php-fpm and custom header

    - by nixer
    I would like to pass a custom header (ACCESS_TOKEN) from a client RESTful application (JS) to the application server (php-fpm). I had read that nginx passes all HTTP headers to PHP, but somehow this one never reaches my PHP: I can see it in Firebug (http://o7.no/N6DM7q) but it simply does not exist in the $_SERVER array. I'm thinking that I need to pass it manually. My config currently looks like this: location @php-fpm { include /etc/nginx/fastcgi_params; fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_param REQUEST_URI /index.php$request_uri; fastcgi_param SCRIPT_FILENAME /htdocs/index.php; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param DOCUMENT_ROOT /htdocs; } But when I add a new line to the location definition: fastcgi_param ACCESS_TOKEN $http_access_token; (or even add it to the fastcgi_params file) it does not help: in PHP the value is empty. How can I pass a custom header from the client to the backend (PHP) via nginx?
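
    One likely culprit is the underscore in the header name: by default nginx treats header names containing underscores as invalid and silently drops them before FastCGI ever sees them. A sketch of the usual fix, assuming the header really is sent as ACCESS_TOKEN:

        server {
            # allow header names containing underscores through
            underscores_in_headers on;

            location @php-fpm {
                include /etc/nginx/fastcgi_params;
                # exposes the header to PHP as $_SERVER['ACCESS_TOKEN']
                fastcgi_param ACCESS_TOKEN $http_access_token;
            }
        }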

    Read the article

  • Use subpath internal proxy for subdomains, but redirect external clients if they ask for that subpath?

    - by HostileFork
    I have a VirtualHost that I'd like to have several subdomains on. (For the sake of clarity, let's say my domain is example.com and I'm just trying to get started by making foo.example.com work, and build from there.) The simplest way I found for a subdomain to work non-invasively with the framework I have was to proxy to a sub-path via mod_rewrite. Thus paths would appear in the client's URL bar as http://foo.example.com/(whatever) while they'd actually be served http://foo.example.com/foo/(whatever) under the hood. I've managed to do that inside my VirtualHost config file like this: ServerAlias *.example.com RewriteEngine on RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC] # <--- RewriteCond %{REQUEST_URI} !^/foo/.*$ [NC] # AND is implicit with above RewriteRule ^/(.*)$ /foo/$1 [PT] (Note: It was surprisingly hard to find that particular working combination. Specifically, the [PT] seemed to be necessary on the RewriteRule. I could not get it to work with examples I saw elsewhere like [L] or trying just [P]. It would either not show anything or get in loops. Also some browsers seemed to cache the response pages for the bad loops once they got one... a page reload after fixing it wouldn't show it was working! Feedback welcome—in any case—if this part can be done better.) Now I'd like to make what http://foo.example.com/foo/(whatever) provides depend on who asked. If the request came from outside, I'd like the client to be permanently redirected by Apache so they get the URL http://foo.example.com/(whatever) in their browser. If it came internally from the mod_rewrite, I want the request to be handled by the web framework...which is unaware of subdomains. Is something like that possible?
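
    One common way to tell the two cases apart is %{THE_REQUEST}, which holds the original request line from the client and is not changed by internal rewrites. A sketch along those lines (untested, same hostnames as above):

        # an external client explicitly asked for /foo/... -> redirect it away
        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ /foo/ [NC]
        RewriteRule ^/foo/(.*)$ http://foo.example.com/$1 [R=301,L]
        # internal mapping as before
        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]
        RewriteCond %{REQUEST_URI} !^/foo/ [NC]
        RewriteRule ^/(.*)$ /foo/$1 [PT]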

    Read the article

  • Ubuntu PPTP connections from the command line

    - by Ian R
    I'm using a lot of VPN connections daily for my work and I want to write a PPTP dialer in Python to ease my job a little by automating things. I usually use network-manager-pptp to set up my connections, but I would like to skip this GUI tool and do it from a script, something like a dialer. My question is: can PPTP connections be established using command-line tools only? Also, where does network-manager-pptp save its files, so I can take a look at what configs it generates? Any help is much appreciated.
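
    PPTP tunnels can indeed be brought up without the GUI; the pptp-linux package ships a helper for exactly this, and NetworkManager typically keeps its profiles under /etc/NetworkManager/system-connections/. A sketch (server name and credentials are placeholders):

        # create and start a tunnel named "work" (pptp-linux package)
        sudo pptpsetup --create work --server vpn.example.com \
             --username alice --password secret --encrypt --start
        # afterwards the usual ppp helpers apply
        sudo pon work
        sudo poff work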

    Read the article

  • What are the best proxy servers for a Mikrotik router?

    - by niren
    I want to set up a proxy server for my Mikrotik router. There is a built-in web proxy on the Mikrotik, but I can only extend it as far as a transparent proxy. We need a high-anonymity proxy so that we can hide our LAN static IPs (we don't have private IPs) from outside intruders/hackers. I also know I can set up a NAT rule to hide an IP (only a private IP, not a public/static one) as per this link, but I can't hide a static/public IP. Essentially, I want to hide our public/static IPs (every system in our company has one) from the outside Internet. To achieve this I guess I need other software besides the Mikrotik router gateway setup. Can anyone suggest other software that would achieve this? I know about the Squid proxy, but I'm not sure whether it can hide our static/public IPs. Note: we have assigned public/static IPs to all systems in our company because we need to access them from anywhere, via a VPN connection from dedicated, company-issued laptops.
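
    For what it's worth, the usual RouterOS pattern for hiding internal addresses is source NAT on the outgoing interface, which rewrites every LAN source address to the router's own public one. A sketch only (the WAN interface name is assumed):

        /ip firewall nat
        add chain=srcnat out-interface=ether1-wan action=masquerade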

    Read the article

  • Hiding an HTTP auth realm by sending 404 to unknown IPs?

    - by zhenech
    I have an Apache (2.2) serving a web app on example.com. That web app has a debug page reachable via example.com/debug. /debug is currently protected with HTTP basic auth. As there is only a very small user base with access to the debug page, I would like to hide it based on IP address and return 404 to clients not connecting from our VPN. Serving a 404 based on IP address alone is easy and is described in http://serverfault.com/a/13071, but as soon as I add authentication, those users see a 401 instead of a 404. Basically, what I need is: if ($REMOTE_ADDR ~ 10.11.12.*): do_basic_auth (aka return 401) else: return 404
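
    In Apache 2.2 the pseudocode above can be approximated with mod_rewrite, since a rewrite-issued 404 short-circuits before authentication runs. A sketch, assuming the VPN range really is 10.11.12.0/24 and the htpasswd path is hypothetical:

        # outsiders get a plain 404 before auth is ever attempted
        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.11\.12\.
        RewriteRule ^/debug - [R=404,L]

        <Location /debug>
            AuthType Basic
            AuthName "debug"
            AuthUserFile /etc/apache2/htpasswd-debug
            Require valid-user
        </Location>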

    Read the article

  • Simple way to set up port knocking on Linux?

    - by Ace Paus
    There are well-known benefits to port-knocking utilities when used in combination with firewall iptables modification. Port knocking is best used to provide an additional layer of security over tools such as the OpenSSH server. I would like some help setting it up on an Ubuntu server. I looked at some port-knocking implementations here: PORTKNOCKING - A system for stealthy authentication across closed ports: http://www.portknocking.org/view/implementations fwknop looked good; I found an Android client for it, and fwknop (both client and server) is in the Ubuntu repos. Unfortunately, setting it up (on the server) looks difficult. I do not have iptables set up, and my proficiency with it is limited (though I understand the basics). I'm looking for a series of simple steps to set it up; I only want the SSH port opened in response to a valid knock. Alternatively, I would consider other port-knocking implementations if they are much simpler to set up and Linux and Android clients are available.
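
    For reference, a minimal fwknop setup is mostly two pieces: a default-drop rule for SSH, and an access stanza telling fwknopd what a valid knock may open. A sketch only; the key material is a placeholder:

        # /etc/fwknop/access.conf
        SOURCE              ANY
        KEY_BASE64          <output of: fwknopd --key-gen>
        OPEN_PORTS          tcp/22
        FW_ACCESS_TIMEOUT   30

        # iptables: hide SSH unless fwknopd has inserted an ACCEPT rule
        iptables -A INPUT -p tcp --dport 22 -j DROP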

    Read the article

  • Network connection keeps dropping - bad hardware?

    - by Bill Sambrone
    Hello all, I've run into a bit of a wall with a client of mine. In an office of 20 people, he is the only one who experiences broken connections to his mapped network drives. I have everyone set up with about 6 mapped drives, all pointing to the same server (no DFS), and everyone else can access them lightning fast. The environment consists of a mix of Windows 7 and XP machines, all 32-bit. The server holding the data everyone maps to is running Server 2008 R2 and is a domain controller. We recently swapped out their old 10/100 switch for a shiny new Dell PowerConnect gigabit switch, and have also replaced an old dying SonicWall with a new one. Everything runs on an ESX host except for the DC, which is where everyone gets their data. In my client's office, we have done the following: swapped out his computer (Win7 and XP boxes), swapped out the desktop switch in his office, removed the desktop switch in his office, changed out the network cable going to the wall, run 'net config server /autodisconnect:-1' on the server, and disabled remote differential compression on his current Win7 box. When we swapped out his network cable, everything seemed fine for about 4 days. Normally I get a phone call a couple of times per day letting me know that Outlook has crashed (there is a 9GB PST living on the server he is always connected to), or that the software he runs from his L: drive has crashed. I almost thought I had this solved, but after we rebooted the DC the other night he suddenly couldn't stay connected to his mapped network drives for more than 10 minutes. When I ran 'net use' from the command prompt, it listed all the network drives, which were randomly in a state of 'OK', 'Disconnected', or 'Reconnecting'. What else should I try? Maybe there is bad wiring in the wall, the patch panel, or a bad port in the new switch in the server room?
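
    One elimination step frequently suggested when a single Win7 box keeps dropping SMB sessions is to rule out TCP auto-tuning and offload on the affected client. A sketch only, not a guaranteed fix (both settings are reversible with =normal and =enabled):

        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp set global chimney=disabled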

    Read the article

  • mod_fcgid produces random 500 errors

    - by DmitrySemenov
    PHP 5.4.7 via mod_fcgid. When I run the site it sometimes works and sometimes crashes with a 500 Internal Error. This is what I see in error.log every time the script fails: [Mon Sep 24 18:50:43 2012] [warn] [client 68.231.194.198] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server [Mon Sep 24 18:50:43 2012] [error] [client 68.231.194.198] Premature end of script headers: api.php Any ideas? vhost config: <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/home/www/sites/test.com/html/development" ServerName test.com ServerAlias www.test.com ErrorLog "/home/www/sites/test.com/logs/error_log" CustomLog "/home/www/sites/test.com/logs/access_log" common <IfModule mod_fcgid.c> <Directory /home/www/sites/test.com/html/development> Options +ExecCGI AllowOverride All AddHandler fcgid-script .php FCGIWrapper /home/www/php-fcgi-scripts/php-fcgi-starter .php Order allow,deny Allow from all </Directory> FcgidMaxRequestLen 1073741824 </IfModule> </VirtualHost> fcgid.conf: LoadModule fcgid_module modules/mod_fcgid.so # Use FastCGI to process .fcg .fcgi & .fpl scripts AddHandler fcgid-script fcg fcgi fpl # Sane place to put sockets and shared memory file FcgidIPCDir /var/run/mod_fcgid FcgidProcessTableFile /var/run/mod_fcgid/fcgid_shm IdleTimeout 300 BusyTimeout 300 ProcessLifeTime 7200 IPCConnectTimeout 300 IPCCommTimeout 7200 PHP_Fix_Pathinfo_Enable 1 php-fcgi-starter: #!/bin/sh PHP_CGI=/usr/local/php547/bin/php-cgi PHP_INI=/etc/php547-fastcgi.ini export PHP_FCGI_TIMEOUT=1200 #export PHP_FCGI_CHILDREN=6 export PHP_FCGI_MAX_REQUESTS=1000 exec $PHP_CGI -c $PHP_INI
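
    One classic source of exactly this pair of log lines is a mismatch between how many requests PHP will serve before exiting (PHP_FCGI_MAX_REQUESTS above) and how many mod_fcgid thinks it may still send to that process. Keeping the module's limit at or below PHP's is the usual hedge; a sketch:

        # mod_fcgid should never reuse a process PHP has already retired
        FcgidMaxRequestsPerProcess 1000   # <= PHP_FCGI_MAX_REQUESTS above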

    Read the article

  • VLAN on DD-WRT v24: file sharing without internet

    - by user148888
    I'm setting up a VLAN for file sharing without internet access. It mostly works: I managed to block the internet on it, but I cannot share files. I can ping from 172.168.1.x to 172.168.2.x, but not in the other direction. Can you help me, please? This is my config: WRT54G with DD-WRT v24. vlan0 = eth 1,2,3, 172.168.1.1/24; vlan1 = WAN; vlan2 = eth 4, 172.168.2.1/24 (no internet wanted here, just a LAN connection). My PC: 172.168.1.10/24, gateway 172.168.1.1 (with internet). Ubiquiti NanoStation Loco M2 (AP) at 172.168.2.20/24, connected to eth 4 on vlan2 (no internet wanted). Ubiquiti NanoStation Loco M2 (client) at 172.168.2.21/24 (no internet wanted). Friend's PC: 172.168.2.115/24, connected through the client (no internet wanted). Any help, please. (The original post included screenshots of the LAN map, the VLAN configuration and the internet-blocking command, which are no longer available.)
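
    Asymmetric ping like this often just means the FORWARD chain lets new connections through in one direction only. A sketch of rules that let the two LANs talk while keeping vlan2 off the WAN; the interface names are assumed from the config above and may differ on DD-WRT (the LAN side is often bridged as br0):

        # let established traffic flow back, allow both LANs to initiate
        iptables -I FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i vlan0 -o vlan2 -j ACCEPT
        iptables -A FORWARD -i vlan2 -o vlan0 -j ACCEPT
        # keep vlan2 away from the internet (vlan1 = WAN)
        iptables -A FORWARD -i vlan2 -o vlan1 -j DROP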

    Read the article

  • Remote mouse pointer not visible in VNC

    - by aef
    I have used VNC desktops as a kind of collaboration server, a shared planning and pair-programming environment, for a long time. My latest iteration uses a KVM guest running Fedora 17 "Beefy Miracle", the Cinnamon desktop environment and an X11VNC server. The X11VNC server is started automatically with the desktop environment using the following command: x11vnc -localhost -many -shared -display :0 -bg My problem is that, depending on the VNC client, the remote system's mouse pointer is not shown through VNC. I really need this, so I can see what my partner is doing on the desktop. When using Vinagre 3.2.1 on Ubuntu Oneiric Ocelot (11.10) or Vinagre 2.3.0.3 on Debian Squeeze (6.0), if my local mouse pointer is not inside the VNC view I cannot see the remote system's pointer or its movement at all. When using TightVNC on Windows 7, I can recognize a faint pointer trace for a very short moment after moving the mouse, but it is not clearly visible. Using UltraVNC on Windows 7, the pointer is clearly visible all the time. With Gnome 2 I never had any problems with remote pointer synchronization, using exactly the same clients. I suspect this could have something to do with Cinnamon's dependency on 3D acceleration; on the other hand, starting Cinnamon's fallback environment, Cinnamon 2D, doesn't change anything. Update: same effect when I use Gnome 3.
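
    x11vnc can draw the pointer into the framebuffer itself instead of relying on client-side cursor-shape updates, which is exactly what a shared "watch my desktop" session needs. A sketch of the adjusted launch line (flags as documented for x11vnc 0.9.x):

        x11vnc -localhost -many -shared -display :0 -bg \
               -nocursorshape -cursor most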

    Read the article

  • Postfix - Block email from non-existent local addresses

    - by Kelso.b
    My question is very similar to this one, but for Postfix. We keep getting emails from addresses like "[email protected]" delivered to other "@ourdomain.com" addresses. From my research, I understand it might not be practical to verify that the email originated from our IP or VPN (although this would be ideal, so if you can think of a way to do it, let me know), but in most of these cases the sender address (e.g. "accounting") is not even a valid account. I imagine there must be a way to make sure that a local account exists before delivering the message.
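
    Postfix has a restriction for precisely this case: reject_unlisted_sender refuses mail whose envelope sender claims one of your own domains but does not exist as a local recipient. A minimal main.cf sketch:

        # main.cf (sketch)
        smtpd_sender_restrictions =
            reject_unlisted_sender,
            permit_mynetworks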

    Read the article

  • Server doesn't produce SYN-ACK

    - by steve
    I have a small program that takes packets from nfqueue, changes ip.dst to my server's address (and the TTL), recalculates the checksum and returns the packet to nfqueue. The server and the client are both Linux; an Apache web server runs on the server and listens on port 80. I open a telnet connection from the client to a fake IP on port 80. The packet is changed by my program and sent to the server, but the target server (the new destination IP) gets the SYN yet does not generate a SYN-ACK (the server also belongs to me, so I can see that it receives the SYN with a correct checksum but does not reply). If I do the same with the real server IP as the destination, the TCP handshake completes correctly (in this case I only change the TTL and checksum; changing the TTL is just a test to confirm my checksum calculation is OK). I compared the SYNs but didn't find any difference. Any ideas? P.S. I saw this topic: Server not sending a SYN/ACK packet in response to a SYN packet, and I set all the flags the same, but that didn't help. Thank you.
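
    Before digging deeper it may be worth ruling out reverse-path filtering on the target server: a SYN whose source address does not route back through the receiving interface is dropped silently, after the packet sniffer has already seen it. A sketch of the test:

        # on the target server: log martians and relax rp_filter for testing
        sysctl -w net.ipv4.conf.all.log_martians=1
        sysctl -w net.ipv4.conf.all.rp_filter=0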

    Read the article

  • How could one archive all emails sent from employees?

    - by Schnapple
    My client runs a small business with a small number of employees. They are currently hosted through GoDaddy for web and email. For legal reasons the client would like to archive emails sent by their employees. Currently the email is all done through POP3, so all of it is basically housed in files on individual machines (remember, small business). It has been proposed that an inexpensive solution would be to have all emails BCC'd to a main account, so that conversations with the outside world could be archived and tracked. I have not investigated it personally, but apparently GoDaddy can do something along these lines for all incoming email, yet not for outgoing email. Is there a way to set up email accounts for a particular domain so that a specified admin user is copied on all outgoing email? UPDATE: I've modified the title to say employees rather than users. The goal of this is to archive sent emails for legal reasons; this is something the employees will be cognizant of and on board with. The bottom line is to emulate a feature of a larger-class platform on a smaller, cheaper platform. If the answer is "can't do it, buy an Exchange license", that's fine. My apologies for phrasing this so poorly; I understand why there was so much confusion.
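
    For reference, if the outbound mail ever routes through a server they control (e.g. a small Postfix relay instead of GoDaddy's hosted POP3), per-sender archiving is a two-line affair. A sketch with placeholder names:

        # main.cf
        sender_bcc_maps = hash:/etc/postfix/sender_bcc

        # /etc/postfix/sender_bcc  (then run: postmap /etc/postfix/sender_bcc)
        @ourdomain.com    [email protected]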

    Read the article

  • Failure to connect to admin share pops up a dialog

    - by Jan
    I'm having an issue with a curious error message when accessing the administrative share on a remote machine. Specifically, the client is logged in as the domain administrator on machine A and runs some code that tries to access the admin share on B (a domain member). The access is done in .NET, along these lines (though I am not sure whether the method of access makes a difference): string path = @"\\B\admin$"; if (Directory.Exists(path)) { path += @"\temp\"; if (!Directory.Exists(path)) { Directory.CreateDirectory(path); } path += "myfile_remote"; File.Copy("myfile", path); } Now, on some machines this fails. That is not a big problem, as we have a fallback; I'd like to know why, but it is not the real issue. The problem is that running this piece of code causes a dialog box to pop up for the logged-in user on B, saying "network error trying to access \\B\admin$\temp\myfile_remote. Contact the network administrator and ask for the correct permissions". Unfortunately it is a foreign-language Windows installation, so I'll spare you all a screenshot; it is skinned like a standard Windows dialog box. Why exactly is that dialog box popping up for the user, and is there anything I can do about it? Edit: B is a Windows 7 Enterprise installation. The client is not aware of any GPO policies being installed. There is AV from Trend Micro installed.
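
    If the dialog turns out to be the classic Windows "hard error" popup, its display is governed by the ErrorMode registry value on machine B; setting it to 2 suppresses such popups system-wide. A sketch, and a judgment call, since it hides all hard-error dialogs, not just this one:

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Windows" ^
            /v ErrorMode /t REG_DWORD /d 2
        rem 0 = show all errors (default), 2 = never display hard-error popups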

    Read the article

  • Queries passed to SQL Server are getting corrupted

    - by adrianbanks
    We are experiencing a bizarre error with our application at a customer site. We have managed to narrow it down to the point where we can replicate the behaviour using just Management Studio and SQL Server. We have two machines, A and B:

        +------------+                +--------------------+
        |    [A]     |                |        [B]         |
        | Management | -------------- | SQL Server 2008 R2 |
        |   Studio   |                |   Enterprise x64   |
        +------------+                +--------------------+

    We are running a SQL script in Management Studio on machine A against the SQL Server instance on machine B. We are not actually executing the script, just parsing it. Most of the time, the parse operation works fine. Occasionally (seemingly randomly), it fails with a syntax error. The error message shows the part of the script in error, which appears as some SQL from the original script that has been truncated and has random characters appended to it. An example. The original SQL: SELECT DISTINCT ST.TABLE_NAME as TableName FROM INFORMATION_SCHEMA.TABLES AS ST INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = ST.TABLE_NAME WHERE ST.TABLE_TYPE = 'BASE TABLE' AND SC.COLUMN_NAME = 'Identity' AND ST.TABLE_NAME != 'dtproperties' ORDER BY ST.TABLE_NAME The SQL that is in error (as reported by SQL Server): SELECT DISTINCT ST.TABLE_NAME as TableName FROM INFORMATION_SCHEMA.TABLES AS ST INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = Sa? The above example shows how the query is being corrupted. It doesn't always happen, and it is not always the same bit of SQL that causes the error. Parsing this script against another SQL Server instance produces no errors, showing that the script itself is fine. It appears that something is corrupting the SQL that is being received by the server, which leads me to think the problem lies either at the client end or in the transmission of the SQL from client to server. I have a SQL trace from a period where an error occurs, which shows the SQL is already corrupted when SQL Server receives it. We have been unable to track down any possible cause of this behaviour, and so cannot find a fix. Because the errors occur seemingly randomly, it is also very hard to produce reproduction steps for a bug report. Any ideas?
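
    A frequently reported cause of corrupted TDS streams between a client and a remote instance is TCP offloading on one of the NICs. Disabling it is reversible (=enabled and =enabled restore the defaults), so it makes a reasonable experiment on Server 2008 R2; a sketch:

        netsh int tcp set global chimney=disabled
        netsh int tcp set global rss=disabled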

    Read the article

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment [closed]

    - by lmttag
    Possible Duplicate: How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment We have an ASP.NET Web API server that serves up a SQL Server data-driven website. The API uses JSON to transfer data from SQL Server to the front end. We need to move it to an internal production environment (nothing will be exposed on the public Internet) and we're having problems - or just not understanding what needs to be done. There are two domains: the corporate domain, where all users log in normally, and the process domain, which contains the database the Web API needs to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having direct access into the process domain. The ideal configuration is: corp domain (end users) <–> firewall (open port 80) <–> DMZ (web server running IIS) <–> firewall (open port 80 or 1433????) <–> process domain (IIS for Web API and SQL Server) We don't really understand how to deploy our browser/Web API application in this scenario. Do we need to break up our application so that all the client code is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain? Does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data? From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to http://server/appname/api/getitems? In the second firewall, between the DMZ and the process domain, would you have to open port 1433, or just port 80 since the Web API is an HTTP endpoint? Or is there some better way of deploying this (i.e., how are ASP.NET Web API single-page applications written entirely in HTML5 and JavaScript supposed to be deployed to production environments?)? NB: The servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.
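
    One common shape for this is to keep the static client files on the DMZ box and let IIS there reverse-proxy only /api to the process domain over port 80, so the second firewall never needs 1433 open. With URL Rewrite and Application Request Routing (ARR) installed on the DMZ server, a web.config sketch (the host name webapi.process.local is hypothetical):

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="proxy-api" stopProcessing="true">
                <match url="^api/(.*)" />
                <action type="Rewrite"
                        url="http://webapi.process.local/appname/api/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>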

    Read the article

  • How to stop NAT dropping idle connections?

    - by WGH
    I have a TCP connection that can be idle for many hours. The traffic flows from the server to the client only; one might say it's a kind of push notification. My home router, however, tends to drop the connection silently after 20 minutes (the value of /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established). The server detects the loss once it tries to send anything (I assume it receives an RST from the router itself); as the client never sends anything, it never detects the loss. RFC 5382, "NAT Behavioral Requirements for TCP", states the following: "A NAT can check if an endpoint for a session has crashed by sending a TCP keep-alive packet and receiving a TCP RST packet in response." That makes sense: it's much more effective than sending keep-alives from the host itself (as only the NAT knows its own timeout), and probably not hard to implement. Are there any NAT implementations that do this? It would be great if there were a way to enable it in iptables.
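
    Absent a NAT that probes idle sessions, the practical workarounds sit at either end of the connection. A sketch of both (values chosen to stay under the 20-minute conntrack timeout):

        # on the router, if it is accessible:
        sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=86400

        # on the client, send kernel keepalives every 15 minutes
        # (the application socket must also enable SO_KEEPALIVE)
        sysctl -w net.ipv4.tcp_keepalive_time=900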

    Read the article

  • DNS not responding

    - by Born2win
    I have a D-Link DIR-615 router that I've had for around six months. My internet connection is VPN (PPTP) based, i.e. I have been given a username and password by my ISP and my IP address is dynamic. For the past few days I have been experiencing a serious problem: the router connects normally (I can see the yellow light) but my computer gives a "DNS not responding" error. I have tried everything (reset, reboot, etc.) but with no success. Any help would be appreciated :)
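
    A quick way to separate "the router cannot resolve" from "the upstream DNS is down" is to query a public resolver directly; 8.8.8.8 is used here purely as an example:

        nslookup example.com 8.8.8.8
        rem if this resolves while plain "nslookup example.com" fails,
        rem point the client or router at a different DNS server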

    Read the article

  • How does anti-virus on the host machine affect the performance of virtual machines?

    - by Ladislav Mrnka
    I'm diagnosing an issue with Oracle VirtualBox where a virtual machine sometimes performs terribly slowly (much slower than a notebook with a worse configuration): Notebook: i7 (2 cores with HT = 4 logical CPUs), 4GB RAM, 5400 rpm disk, Win 7 64-bit. Virtual machine host: i7 (4 cores with HT = 8 logical CPUs), 12 GB RAM, system runs from SSD, virtual machine from a 7200 rpm disk, Win 7 64-bit. Virtual machine: 4 cores assigned, 8 GB RAM assigned, Win 2008 R2 Enterprise (64-bit). The virtual machine uses a bridge to a separate network interface (the machine has two) and VPN for network communication. No other virtual machine runs on the host. The host has ESET Smart Security installed, and all software is updated to the latest version. My question is whether anti-virus on the host machine can somehow affect the performance of the virtual machine, and if so, how can I turn that off without turning off the anti-virus itself?

    Read the article

  • Iptables rules, forward between two interfaces

    - by Marco
    I am having some difficulty configuring my Ubuntu server firewall. My situation is this: eth0 - internet, eth1 - lan1, eth2 - lan2. I want clients from lan1 to be unable to communicate with clients from lan2, except for some specific services; e.g. clients in lan1 may ssh into a client in lan2, but only that - any other communication is forbidden. So I added these rules to iptables: #Block all traffic between lans, but permit traffic to internet iptables -I FORWARD -i eth1 -o ! eth0 -j DROP iptables -I FORWARD -i eth2 -o ! eth0 -j DROP # Accept ssh traffic from lan1 to client 192.168.20.2 in lan2 iptables -A FORWARD -i eth1 -o eth2 -p tcp --dport 22 -d 192.168.20.2 -j ACCEPT This didn't work. Running iptables -L FORWARD -v I see: Chain FORWARD (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 33 144 DROP all -- eth1 !eth0 anywhere anywhere 0 0 DROP all -- eth2 !eth0 anywhere anywhere 23630 20M ACCEPT all -- any any anywhere anywhere state RELATED,ESTABLISHED 0 0 ACCEPT all -- eth1 any anywhere anywhere 175 9957 ACCEPT all -- eth1 any anywhere anywhere 107 6420 ACCEPT all -- eth2 any anywhere anywhere 0 0 ACCEPT all -- pptp+ any anywhere anywhere 0 0 ACCEPT all -- tun+ any anywhere anywhere 0 0 ACCEPT tcp -- eth1 eth2 anywhere server2.lan tcp dpt:ssh All packets are dropped, and the packet count for the last rule is 0. How do I have to modify my configuration? Thank you. Regards, Marco
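
    Two details in the listing above stand out: the SSH ACCEPT was appended after the DROPs, so it can never match, and nothing lets reply packets travel back from lan2 to lan1. A reordered sketch:

        # replies first, then the one allowed service, then the blocks
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth2 -p tcp --dport 22 \
                 -d 192.168.20.2 -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth2 -j DROP
        iptables -A FORWARD -i eth2 -o eth1 -j DROP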

    Read the article

  • Install MS Office on Windows Server 2008 - in support of Quickbooks RemoteApp

    - by steampowered
    How can I install MS Office on Windows Server 2008? The purpose is to enable QuickBooks to export to Excel. QuickBooks is set up to run as a RemoteApp in a Terminal Server environment. The QuickBooks application senses whether Excel is installed and will not allow the user to create an Excel report unless Excel is actually installed on the client running QuickBooks. Since the client and the server are the same machine in a Terminal Server environment, Excel must be installed on Windows Server for QuickBooks' Excel-export feature to work in this setup. There is no need to actually use Excel in the Terminal Services environment; we only need to generate the Excel files on the server, and can then use an installed copy of Excel on a regular Windows 7 machine to work with them. MS Office does not normally install on Windows Server. Is there any way to buy a special license? Could we somehow fool QuickBooks into thinking Excel is installed, if that would work?

    Read the article

  • What is the fastest way to clone an INNODB table within the same server?

    - by Vic
    Our development server is a replication slave of our production server. We have a script that developers use when they want to run their applications/bug fixes against fresh data. That script looks like this: dbs=( analytics auth logs users ) server=localhost conn="-h ${server} -u ${username} --password=${password}" # Stop the replication client so we don't encounter weird data. echo "STOP SLAVE" | mysql ${conn} # Bunch of bulk insert optimizations echo "SET autocommit=0" | mysql ${conn} echo "SET unique_checks=0" | mysql ${conn} echo "SET foreign_key_checks=0" | mysql ${conn} # Restore all databases and tables. for sourcedb in ${dbs[*]} do destdb=${prefix}${sourcedb} echo "Dropping database ${destdb}..." echo "DROP DATABASE IF EXISTS ${destdb}" | mysql ${conn} echo "CREATE DATABASE ${destdb}" | mysql ${conn} # First, all the tables. for table in `echo "SHOW FULL TABLES WHERE Table_type <> 'VIEW'" | mysql $conn $sourcedb | tail -n +2`; do if [[ "${table}" != 'BASE' && "${table}" != 'TABLE' && "${table}" != 'VIEW' ]] ; then createTable=`echo "SHOW CREATE TABLE ${table}"|mysql -B -r $conn $sourcedb|tail -n +2|cut -f 2-` echo "Restoring ${destdb}/${table}..." echo "$createTable ;" | mysql $conn $destdb insertData="INSERT INTO ${destdb}.${table} SELECT * FROM ${sourcedb}.${table}" echo "$insertData" | mysql $conn $destdb fi done done echo "SET foreign_key_checks=1" | mysql ${conn} echo "SET unique_checks=1" | mysql ${conn} echo "COMMIT" | mysql ${conn} # Restart the replication client echo "START SLAVE" | mysql ${conn} All of these operations are, as I mentioned, within the same server. Is there a faster way to clone the tables that I'm not seeing? They're all InnoDB tables. Thanks!
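
    If the server is MySQL 5.6 or later, transportable tablespaces can be markedly faster than INSERT ... SELECT for big InnoDB tables, since the .ibd file is copied instead of re-inserted row by row. A sketch for one table (table and database names are hypothetical):

        -- on the source database
        FLUSH TABLES analytics.events FOR EXPORT;
        -- copy events.ibd and events.cfg at the filesystem level, then:
        UNLOCK TABLES;

        -- on the destination (same CREATE TABLE definition)
        ALTER TABLE dev_analytics.events DISCARD TABLESPACE;
        -- move the copied .ibd/.cfg into the destination datadir, then:
        ALTER TABLE dev_analytics.events IMPORT TABLESPACE;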

    Read the article

  • Generalized strategy for file server virtualization in Xenserver

    - by Jamie
    I'm not shopping so much as looking for guidance on good-idea/bad-idea strategies; I'm sure I'm not in the "best practices" budget range. Currently, I have 3 Dell PowerEdges running XenServer in a pool. Each node has an Ubuntu file server, serving about 6TB. One is the primary; the other two are rsync targets for backup. The 6TB is stored on each node's local storage disks as an LVM of 3x2TB virtual disks, and the file-server VM disks are also stored on the nodes' local disks. Each node also runs a smattering of lightweight VMs for web, development, Windows, and the like. Several of those VMs' disks reside on a QNAP NAS so we can play with live migration. Those VMs are often clients of the primary file server (all the mail, web content and user files are stored on the file server, not on the mail, web and Samba VMs). This all works fine and is a major step up for us. The downside is that the QNAP is a single point of failure, and the only thing the QNAP is doing is serving migratable VM images, not client data. Someday the PowerEdge local arrays will be full and we will have to reinvent ourselves again. Is it wise to have heavyweight VMs (like the file server, with its 6+ TB of disks) on a SAN or NAS? Would it be better to keep the VMs lightweight, keep the VM images on a SAN or NAS, and have 2 or more NAS units act as NFS-serving file appliances? Or a hybrid SAN/NAS that can serve iSCSI for images and NFS for the client VMs? It seems like live migration would be a misnomer if you have to migrate a file server along with its entire 6+ TB disk. I recognize there are plenty of ways to skin this cat - we've already skinned it a few ways. What makes sense?

    Read the article
