Search Results

Search found 4275 results on 171 pages for 'accept'.

  • Postfix: Second sender address for a single mailbox

    - by Bastian Born
    I have two domains: a.com and b.com. Currently all mail to [email protected] is delivered to the mailbox "a". I have configured in /etc/postfix/vmailbox that all mail to [email protected] is forwarded to [email protected]. Now I want to send mail from b.com as well, but Postfix only accepts SMTP requests from [email protected]. How can I add a new SMTP sender address without creating a new "dummy" mailbox? Is there any way to create an alias for outgoing accounts? I'm using ISPConfig 3.0.4.2 as the configuration backend for Postfix.
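
    If the rejection comes from Postfix's sender/login mismatch check, one possible direction is to let the existing login also use the b.com address via smtpd_sender_login_maps. This is only a sketch: the local parts and the login name "a" below are placeholders (the real addresses are obfuscated above), and ISPConfig normally manages these maps through MySQL, so adding the address as an alias inside ISPConfig itself may be the cleaner route.

        # /etc/postfix/sender_login (sketch) -- the existing login may send as both addresses
        user@a.com   a
        user@b.com   a

        # main.cf
        smtpd_sender_login_maps = hash:/etc/postfix/sender_login

    Run postmap /etc/postfix/sender_login and reload Postfix after editing the map.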

    Read the article

  • Security issue on Linux with Netbeans

    - by WebDevHobo
    In order to edit some files in NetBeans, I had to chmod 777 the parent folder; anything else resulted in NetBeans refusing to accept the folder because it could not be written to. Is there another way to do this besides a chmod 777? I'm on Ubuntu 9.10, using NetBeans 6.7.1. On top of that, I then have to give each file the needed rights manually. There should be an easier way, I just don't know it. EDIT: I am running XAMPP and the files I'm trying to edit are in the htdocs folder. I'm running NetBeans as my local user account, which is how it starts when launched from the applications menu.
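
    A possible alternative, sketched under the assumption that XAMPP lives in its default location /opt/lampp: make your own user the owner of htdocs so NetBeans can write there, instead of opening the tree to everyone.

        sudo chown -R $USER /opt/lampp/htdocs    # your user owns the project files
        chmod -R u+rwX /opt/lampp/htdocs         # capital X sets the execute bit on directories only

    Apache only needs read access to serve the files, so this avoids 777 entirely.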

    Read the article

  • Can I nohup/screen an already-started process?

    - by ojrac
    I'm doing some test-runs of long-running data migration scripts, over SSH. Let's say I start running a script around 4 PM; now, 6 PM rolls around, and I'm cursing myself for not doing this all in screen. Is there any way to "retroactively" nohup a process, or do I need to leave my computer online all night? If it's not possible to attach screen to/nohup a process I've already started, then why? Something to do with how parent/child processes interact? (I won't accept a "no" answer that doesn't at least address the question of 'why' -- sorry ;) )
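
    For reference, a minimal sketch of the usual bash-side answer for a job started from the still-open shell (the job number %1 is an assumption):

        # press Ctrl-Z to suspend the running script, then:
        bg            # resume it in the background
        disown -h %1  # drop it from the shell's job table so logout won't send it SIGHUP

    Output keeps going to the old terminal; tools like reptyr (run `reptyr <pid>` from inside a new screen session) can additionally pull an already-running process into screen.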

    Read the article

  • Problem in transferring file from server to client using C sockets

    - by coolrockers2007
    I want to ask why I cannot transfer a file from the server to the client. When the server starts sending the file, the client-side program runs into a problem. I have spent some time checking the code, but I still cannot find the cause. Can anyone point out the problem for me?

    CLIENTFILE.C

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>
        #include <netinet/in.h>
        #include <fcntl.h>
        #include <sys/types.h>
        #include <string.h>
        #include <stdarg.h>

        #define PORT 5678
        #define MLEN 1000

        int main(int argc, char *argv[])
        {
            int sockfd;
            int number, message;
            char outbuff[MLEN], inbuff[MLEN];
            //char PWD_buffer[_MAX_PATH];
            struct sockaddr_in servaddr;
            FILE *fp;
            int numbytes;
            char buf[2048];

            if (argc != 2)
                fprintf(stderr, "error");
            if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
                fprintf(stderr, "socket error");

            memset(&servaddr, 0, sizeof(servaddr));
            servaddr.sin_family = AF_INET;
            servaddr.sin_port = htons(PORT);

            if (connect(sockfd, (struct sockaddr *) &servaddr, sizeof(servaddr)) < 0)
                fprintf(stderr, "connect error");

            if ((fp = fopen("/home/na/nall9047/write.txt", "w")) == NULL) {
                perror("fopen");
                exit(1);
            }
            printf("Still NO PROBLEM!\n");

            //Receive file from server
            while (1) {
                numbytes = read(sockfd, buf, sizeof(buf));
                printf("read %d bytes, ", numbytes);
                if (numbytes == 0) {
                    printf("\n");
                    break;
                }
                numbytes = fwrite(buf, sizeof(char), numbytes, fp);
                printf("fwrite %d bytes\n", numbytes);
            }
            fclose(fp);
            close(sockfd);
            return 0;
        }

    SERVERFILE.C

        #include <stdio.h>
        #include <fcntl.h>
        #include <stdlib.h>
        #include <time.h>
        #include <string.h>
        #include <netinet/in.h>
        #include <errno.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <stdarg.h>

        #define PORT 5678
        #define MLEN 1000

        int main(int argc, char *argv[])
        {
            int listenfd, connfd;
            int number, message, numbytes;
            int h, i, j, alen;
            int nread;
            struct sockaddr_in servaddr;
            struct sockaddr_in cliaddr;
            FILE *in_file, *out_file, *fp;
            char buf[4096];

            listenfd = socket(AF_INET, SOCK_STREAM, 0);
            if (listenfd < 0)
                fprintf(stderr, "listen error");

            memset(&servaddr, 0, sizeof(servaddr));
            servaddr.sin_family = AF_INET;
            servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
            servaddr.sin_port = htons(PORT);

            if (bind(listenfd, (struct sockaddr *) &servaddr, sizeof(servaddr)) < 0)
                fprintf(stderr, "bind error");

            alen = sizeof(struct sockaddr);
            connfd = accept(listenfd, (struct sockaddr *) &cliaddr, &alen);
            if (connfd < 0)
                fprintf(stderr, "error connecting");
            printf("accept one client from %s!\n", inet_ntoa(cliaddr.sin_addr));

            fp = fopen("/home/na/nall9047/read.txt", "r"); // open file stored in server
            if (fp == NULL) {
                printf("\nfile NOT exist");
            }

            //Sending file
            while (!feof(fp)) {
                numbytes = fread(buf, sizeof(char), sizeof(buf), fp);
                printf("fread %d bytes, ", numbytes);
                numbytes = write(connfd, buf, numbytes);
                printf("Sending %d bytes\n", numbytes);
            }
            fclose(fp);
            close(listenfd);
            close(connfd);
            return 0;
        }

    Read the article

  • Installing Linux kernel 2.6.25.14 on RHEL 5.4

    - by aaron
    I have to install Linux kernel version 2.6.25.14 on a RHEL 5.4 server because of drive compatibility issues. I followed the Red Hat "Building a Custom Kernel" instructions, running the following:

        $ make mrproper
        $ make xconfig
        $ make clean
        $ make bzImage
        $ make modules
        $ make modules_install
        $ make install

    But I get a bunch of warnings like this:

        WARNING: No module ehci-hcd found for kernel 2.6.25.24, continuing anyway

    When I try to boot this kernel it is unable to mount the hard drive and kernel panics on startup. As far as I can tell I'm using a standard configuration (I just accept the defaults and save a .config file). Is there something I'm missing? Thanks.
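
    A hedged sketch of the approach that usually avoids an unbootable result: start from the distribution's known-good config (so the disk controller and filesystem drivers stay enabled) and build an initrd for the new version. Paths assume the kernel source tree is the current directory.

        cp /boot/config-$(uname -r) .config
        make oldconfig                        # answer only the prompts for new options
        make bzImage modules
        make modules_install install
        mkinitrd -v /boot/initrd-2.6.25.14.img 2.6.25.14

    If `make install` already generated an initrd and a GRUB entry, the explicit mkinitrd step is redundant.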

    Read the article

  • How to retrieve all MySQL settings?

    - by Max Kielland
    I have a configured MySQL server (MySQL 5.1.47-community) that works perfectly. I installed a second server (MySQL 5.5.15-community) to see if the new version of MySQL would work with my application before upgrading. When I run the application against the new server it behaves differently; against the old server (MySQL 5.1.47-community) everything works perfectly. I remember that I set some parameters through the MySQL prompt to accept larger result sets and some other things, but now I can't remember what I did. So my question is: is there a way to transfer all the MySQL settings from one server to another? Thanks.
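
    One way to see exactly what differs, sketched with placeholder host names:

        mysql -h old-server -u root -p -e "SHOW GLOBAL VARIABLES" > old-vars.txt
        mysql -h new-server -u root -p -e "SHOW GLOBAL VARIABLES" > new-vars.txt
        diff old-vars.txt new-vars.txt

    Settings changed at the prompt with SET GLOBAL (for example max_allowed_packet) are lost on restart unless they are also written into my.cnf, which is likely why they are hard to reconstruct now.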

    Read the article

  • Perform action based on load avg

    - by sfx
    I'm running some web applications on a Debian server and sometimes have to struggle with DDoS attacks. They eat up all my resources and I can no longer SSH into the server. An idea was to drop new connections while the load average is too high, so that there are still resources left for me, and to accept connections again once the load average is low enough. Since this has to work under heavy load, I'm afraid a cron job would be too slow or take too many resources. tl;dr: Is there a way to configure behaviour that kicks in when the load average goes above a specific threshold?
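
    For what it's worth, a minimal cron-style sketch of the idea (the threshold of 20 and the port are assumptions; it sheds only new HTTP connections, so SSH stays reachable):

        #!/bin/sh
        # run every minute: block new port-80 connections while the 1-minute load average is high
        LOAD=$(cut -d. -f1 /proc/loadavg)
        if [ "$LOAD" -ge 20 ]; then
            iptables-save | grep -q loadshed || \
                iptables -I INPUT -p tcp --dport 80 --syn -m comment --comment loadshed -j DROP
        else
            iptables-save | grep -q loadshed && \
                iptables -D INPUT -p tcp --dport 80 --syn -m comment --comment loadshed -j DROP
        fi

    A per-source rate limit (e.g. the iptables connlimit or hashlimit matches) reacts faster than any load-average check and is usually the better first line of defence.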

    Read the article

  • Post-compromise security scan; anything else?

    - by IVR Avenger
    Hi, all. My girlfriend checked her Gmail yesterday morning, and then found, later on in the day, that it would no longer accept her password. She also found that this happened to her Hotmail and Yahoo! accounts. She's only checked these accounts from her work and home PC, and I've spent the day checking the home PC for problems. A full AVG scan revealed a couple of installers for her webcam software that had questionable security signatures, and a full Windows Defender scan brought back nothing. Assuming that her home PC was compromised, somehow, is there anything else I should use to check it for some sort of lingering malicious app before I tell her it's okay to login to her accounts, again? Furthermore, she's going through the GMail "account recovery" process as the account appears to have been disabled. Does anyone know if this actually works? Thanks so much. IVR Avenger

    Read the article

  • VNC application/terminal server

    - by sebastian nielsen
    Which software should I use if I want to set up a Linux VNC terminal server that works this way: The VNC server should accept up to X simultaneous connections on the same port 5900. The VNC server should use 640x480 at 8- or 16-bit color. When the VNC server receives a connection, it should start a new "session" for that user and auto-launch a specific Linux application for them. If the application is killed, crashes, or exits in any way, the user should be disconnected (kicked) from the server. If the user disconnects, the application should be killed in a "graceful way" that allows it to clean up. (There should be no way to "pick up" an old session.) Any ideas?
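
    A rough sketch of one common set of building blocks, with heavy caveats: the paths, user and session wiring are assumptions, and launching exactly one application per connection still needs a display manager or custom session script behind the -query.

        # /etc/xinetd.d/vncapp -- spawn one Xvnc per incoming connection on 5900
        service vncapp
        {
            type        = UNLISTED
            port        = 5900
            socket_type = stream
            protocol    = tcp
            wait        = no
            user        = nobody
            instances   = 10                 # cap on simultaneous sessions
            server      = /usr/bin/Xvnc
            server_args = -inetd -once -geometry 640x480 -depth 16 -query localhost
        }

    -once tears the X server down when the session ends, which covers the "kick the user when the application exits" requirement.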

    Read the article

  • Android: HttpURLConnection not working properly

    - by giorgiline
    I'm trying to get the cookies from a website after sending user credentials through a POST request, and it seems that it doesn't work this way on Android. Am I doing something wrong? Please help. I've searched different posts here but there's no useful answer. The curious thing is that this runs perfectly in a desktop Java implementation but fails on the Android platform, with exactly the same code, specifically when calling HttpURLConnection.getHeaderFields(); it also happens with other member methods. It's simple code and I don't know why it isn't working.

    DESKTOP CODE: This goes just in the main()

        HttpURLConnection connection = null;
        OutputStream out = null;
        try {
            URL url = new URL("http://www.XXXXXXXX.php");
            String charset = "UTF-8";
            String postback = "1";
            String user = "XXXXXXXXX";
            String password = "XXXXXXXX";
            String rememberme = "on";
            String query = String.format("postback=%s&user=%s&password=%s&rememberme=%s"
                    , URLEncoder.encode(postback, charset)
                    , URLEncoder.encode(user, charset)
                    , URLEncoder.encode(password, charset)
                    , URLEncoder.encode(rememberme, charset));
            connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("POST");
            connection.setRequestProperty("Accept-Charset", charset);
            connection.setDoOutput(true);
            connection.setFixedLengthStreamingMode(query.length());
            out = connection.getOutputStream();
            out.write(query.getBytes(charset));
            if (connection.getHeaderFields() == null) {
                System.out.println("Header null");
            } else {
                for (String cookie : connection.getHeaderFields().get("Set-Cookie")) {
                    System.out.println(cookie.split(";", 2)[0]);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try { out.close(); } catch (IOException e) { e.printStackTrace(); }
            connection.disconnect();
        }

    So the output is:

        login_key=20ad8177db4eca3f057c14a64bafc2c9
        FASID=cabf20cc471fcacacdc7dc7e83768880
        track=30c8183e4ebbe8b3a57b583166326c77
        client-data=%7B%22ism%22%3Afalse%2C%22showm%22%3Afalse%2C%22ts%22%3A1349189669%7D

    ANDROID CODE: This goes inside the doInBackground AsyncTask body

        HttpURLConnection connection = null;
        OutputStream out = null;
        try {
            URL url = new URL("http://www.XXXXXXXXXXXXXX.php");
            String charset = "UTF-8";
            String postback = "1";
            String user = "XXXXXXXXX";
            String password = "XXXXXXXX";
            String rememberme = "on";
            String query = String.format("postback=%s&user=%s&password=%s&rememberme=%s"
                    , URLEncoder.encode(postback, charset)
                    , URLEncoder.encode(user, charset)
                    , URLEncoder.encode(password, charset)
                    , URLEncoder.encode(rememberme, charset));
            connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("POST");
            connection.setRequestProperty("Accept-Charset", charset);
            connection.setDoOutput(true);
            connection.setFixedLengthStreamingMode(query.length());
            out = connection.getOutputStream();
            out.write(query.getBytes(charset));
            if (connection.getHeaderFields() == null) {
                Log.v(TAG, "Header null");
            } else {
                for (String cookie : connection.getHeaderFields().get("Set-Cookie")) {
                    Log.v(TAG, cookie.split(";", 2)[0]);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try { out.close(); } catch (IOException e) { e.printStackTrace(); }
            connection.disconnect();
        }

    And here there is no output; it seems that connection.getHeaderFields() doesn't return a result. It takes at least 30 seconds to show the log:

        10-02 16:56:25.918: V/class com.giorgi.myproject.activities.HomeActivity(2596): Header null

    Read the article

  • Perl not closing TCP sockets if clients are no longer connected?

    - by LM
    The purpose of the application is to listen for a specific UDP multicast and then to forward the data to any TCP clients connected to the server. The code works fine, but I have a problem with the sockets not closing after a TCP client disconnects. A socket-sniffer utility shows that the sockets remain open and all the UDP data continues to be forwarded to the clients. The problem, I believe, is with the "if ($write->connected())" block, as it always returns true, even if the TCP client is no longer connected. I use standard Windows Telnet to connect to the server and to see the data. When I close Telnet, the TCP socket is supposed to close on the server. Any reason why connected() shows the connections as active even if they are not? Also, what alternative should I use then? Code:

        #!/usr/bin/perl
        use IO::Socket::Multicast;
        use IO::Socket;
        use IO::Select;

        my $tcp_port = "4550";
        my $tcp_socket = IO::Socket::INET->new(
            Listen    => SOMAXCONN,
            LocalAddr => '0.0.0.0',
            LocalPort => $tcp_port,
            Proto     => 'tcp',
            ReuseAddr => 1,
        );

        use Socket qw(IPPROTO_TCP TCP_NODELAY);
        setsockopt($tcp_socket, IPPROTO_TCP, TCP_NODELAY, 1);

        use constant GROUP => '239.2.0.81';
        use constant PORT  => '6550';
        my $udp_socket = IO::Socket::Multicast->new(Proto => 'udp', LocalPort => PORT);
        $udp_socket->mcast_add(GROUP) || die "Couldn't set group: $!\n";

        my $read_select  = IO::Select->new();
        my $write_select = IO::Select->new();
        $read_select->add($tcp_socket);
        $read_select->add($udp_socket);

        ## Loop forever, reading data from the UDP socket and writing it to the
        ## TCP socket(s).
        while (1) {
            ## No timeout specified (see docs for IO::Select). This will block until a TCP
            ## client connects or we have data.
            my @read = $read_select->can_read();
            foreach my $read (@read) {
                if ($read == $tcp_socket) {
                    ## Handle connect from TCP client. Note that UDP connections are
                    ## stateless (no accept necessary)...
                    my $new_tcp = $read->accept();
                    $write_select->add($new_tcp);
                }
                elsif ($read == $udp_socket) {
                    ## Handle data received from UDP socket...
                    my $recv_buffer;
                    $udp_socket->recv($recv_buffer, 1024, undef);
                    ## Write the data read from UDP out to the TCP client(s). Again, no
                    ## timeout. This will block until a TCP socket is writable.
                    my @write = $write_select->can_write();
                    foreach my $write (@write) {
                        ## Make sure the socket is still connected before writing.
                        if ($write->connected()) {
                            $write->send($recv_buffer);
                        }
                        else {
                            $write_select->remove($write);
                            close $write;
                        }
                    }
                }
            }
        }

    Read the article

  • Apache Prepending Header Information to ALL FILES

    - by Michael Robinson
    We're in the middle of setting up new servers, and have been having some odd problems with Apache. Apache is prepending text that looks like this to all files:

        $15plðI‚‚?E?ðA™@?@??yeÔ|~Ÿ²?PγZ" zS€?8i³?? ,ÀŠ{ÿBHTTP/1.1 200 OK
        Date: Mon, 02 Feb 2009 22:28:05 GMT
        Server: Apache/2.2.3 (CentOS)
        Last-Modified: Mon, 02 Feb 2009 22:28:05 GMT
        ETag: W/"1238007d-2224e-fe617f40"
        Accept-Ranges: bytes
        Content-Length: 139854
        Connection: close
        Content-Type: application/x-javascript

    The file I copied the above text from is the Prototype library JS file, as loaded from our server. I've searched but couldn't find much about this problem; maybe I don't know what to search for. Anyway, if anyone has seen this behaviour before, could they please let me know either 1) how to fix it so that this content is not prepended to all files, or 2) where to look for further help. Thanks

    Read the article

  • Bridge and OpenVPN with shorewall

    - by Javier Martinez
    I have this scenario and everything is working OK, but I want to configure my Shorewall and I can't do it. My interfaces are:

        br0   (bridge of eth0)
        tun0  (OpenVPN)
        vnet* (one per bridged interface, with public IPs)

        Public main IP: 188.165.X.Y
        OpenVPN IPs:    172.28.0.x
        Bridge:         public IPs

    So I have the following configuration for Shorewall:

        /etc/shorewall/zones
        #ZONE   TYPE       OPTIONS   IN        OUT
        #                            OPTIONS   OPTIONS
        fw      firewall
        inet    ipv4
        road    ipv4

        /etc/shorewall/interfaces
        #ZONE   INTERFACE   BROADCAST   OPTIONS
        inet    br0         detect      routeback
        road    tun+        detect      routeback

        /etc/shorewall/policy
        #SOURCE   DEST   POLICY   LOG     LIMIT:   CONNLIMIT:
        #                         LEVEL   BURST    MASK
        $FW       all    ACCEPT
        inet      $FW    DROP     info
        road      all    DROP
        inet      road   DROP

        /etc/shorewall/tunnels
        #TYPE                ZONE   GATEWAY     GATEWAY
        #                                       ZONE
        openvpnserver:1194   inet   0.0.0.0/0

    The problem is that even with Shorewall running I am still able to ping and connect to the virtual machines behind the bridge.
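
    A hedged note on why the rules may simply never see that traffic: iptables (and therefore Shorewall) only filters frames crossing a Linux bridge when bridge-netfilter is enabled. A sketch of the knobs involved (module name as in mainline kernels; on older kernels the sysctl appears once the bridge module is loaded):

        modprobe br_netfilter 2>/dev/null || true
        sysctl -w net.bridge.bridge-nf-call-iptables=1

    On the Shorewall side, the bridged interface can also be marked with the bridge option in /etc/shorewall/interfaces (e.g. `inet  br0  detect  bridge,routeback`) so that Shorewall generates rules for bridged traffic.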

    Read the article

  • Host wildcard subdomains using postfix.

    - by Jack M.
    I'm trying to work out how I can get Postfix to accept email for any sub-domain of my main site. I don't have virtual domains, just a long list of sub-domains for local delivery. Specifically, I'm feeding python@*.mydomain.com into a Python script using the alias file:

        python: |/www/proc_email.py

    The Python script can handle delivery from there. I envision this looking something along the lines of:

        mydestination = encendio, localhost.localdomain, localhost, *.mydomain.com

    I'm running the latest version of Postfix on Ubuntu (not entirely sure how to check the version). Thanks in advance.
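
    A possible direction, sketched under the assumption that the pcre map type is available (package postfix-pcre on Ubuntu): mydestination itself does not take wildcards, but regexp/pcre tables can match any sub-domain and funnel the mail to the existing alias. The file names below are illustrative.

        # main.cf
        virtual_alias_domains = pcre:/etc/postfix/virtual_domains.pcre
        virtual_alias_maps    = pcre:/etc/postfix/virtual_aliases.pcre

        # /etc/postfix/virtual_domains.pcre -- accept mail for every sub-domain
        /^.+\.mydomain\.com$/    OK

        # /etc/postfix/virtual_aliases.pcre -- route it to the local alias that pipes to the script
        /^python@.+\.mydomain\.com$/    python@localhost

    `postconf mail_version` prints the installed Postfix version, which helps when checking which map types are supported.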

    Read the article

  • set up re-direct for lan clients to local t&c page

    - by tb2571989
    Hi, I'm trying to set something up on my network so that when users connect and try to use the internet they are redirected to a locally-hosted terms-and-conditions and policy page. Once they click "accept" they will be passed through to their homepage; otherwise, if they decline, the window will close or show them an error message. I've spent a while looking into this and am wondering if it's possible to do without having to set up or add to a firewall. Otherwise let me know what my options are and I can pass it on. Many thanks, Tom
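
    For reference, the usual pattern does involve the gateway's firewall (a small captive portal); a hedged sketch with iptables and ipset, where the LAN interface and portal address are assumptions:

        # clients that have clicked "accept" (entries expire after a day)
        ipset create accepted hash:ip timeout 86400

        # everyone else's web traffic gets sent to the local T&C page
        iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
            -m set ! --match-set accepted src \
            -j DNAT --to-destination 192.168.1.10:80

        # the "accept" button's server-side handler then runs:
        ipset add accepted <client-ip>

    Without any firewall involvement the remaining options are narrower, e.g. forcing all clients through a proxy that serves the T&C page first.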

    Read the article

  • Windows folder has readonly attribute that cannot be removed

    - by drasto
    I have a zip file containing an Eclipse workspace. When I unzip it, all the extracted folders have the read-only attribute set. When I uncheck "Read-only" in the properties dialog and click Apply, the attribute seems to be removed, but when I reopen the properties dialog it is there again. Eclipse will not accept a folder that is read-only as a workspace, so I need the attribute removed. Some points I'd like to make:
        - Only folders have the read-only attribute, files don't.
        - I use Windows 7; for unzipping I used 7-Zip.
        - I have tried to remove the attribute from the command line like this: attrib -r -s c:\pathToFolder. It did not work.
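
    Two hedged notes: on folders, the "Read-only" checkbox in Explorer does not actually report the attribute (Windows keeps it filled regardless), so its reappearing is expected and may not be what blocks Eclipse. To clear the attribute recursively anyway, the /S and /D switches extend the command already tried above (the path is an example):

        attrib -R -S /S /D "C:\path\to\workspace"

    If Eclipse still refuses the folder, the effective NTFS permissions (Properties -> Security) are the more likely culprit.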

    Read the article

  • SQL Server Installer Closes Silently Without Errors

    - by ashes999
    When I run the SQL Server 2008 R2 Express installer (32-bit or 64-bit), nothing happens. It gets to the screen where it asks me to accept the license terms and send anonymous feedback; I check both boxes and click Next, and it automatically starts installing the Support Files. And then the window disappears. I looked through the log files but didn't see any errors. I've tried:

        - x64 SQL Server R2 Express
        - x86 SQL Server R2 Express
        - x64 SQL Server R2 (full)
        - x64 SQL Server Management Studio

    Of these, only Management Studio installed correctly; the two Express editions failed silently, and the full version gave me different errors. I also tried running in Administrator mode (despite being logged in as an administrator user), but again, no difference.

    Read the article

  • make IIS 7.5 cache static content files over different pages

    - by Achilles
    On a Windows 2008 R2 server, using DNS and IIS, I've set up my development test server; i.e. I have a web application that I can browse at http://test.dev. I've moved all the static content files (images, JS files and CSS files) into another application which is visible at http://cdn.test.dev, and test.dev uses cdn.test.dev URLs like http://cdn.test.dev/js/jquery.js to load JS, CSS and images. When I first load "~/" of test.dev, all files load with a response code of 200; when I press F5 in Firefox, all files except "~/default.aspx" load with a 304 response code; but pressing Ctrl+F5 loads them again with a 200 code. If I browse another URL like "~/pages/" on test.dev, all of those static files reload with a 200 code... Is this normal, or am I doing something wrong? Actually, I'm looking for behaviour like this: I want the client to load http://cdn.test.dev/js/jquery.js only once, and I want the client's browser to use that jquery.js file, from cache, on all other pages of test.dev. Is this possible? This is the web.config file I have in the root directory of cdn.test.dev:

        <configuration>
          <system.webServer>
            <caching>
              <profiles>
                <add extension=".png" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".gif" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".jpg" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".js" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".css" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".axd" kernelCachePolicy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
              </profiles>
            </caching>
            <httpProtocol allowKeepAlive="true">
              <customHeaders>
                <add name="Cache-Control" value="public, max-age=31536000" />
              </customHeaders>
            </httpProtocol>
            <validation validateIntegratedModeConfiguration="false" />
            <modules runAllManagedModulesForAllRequests="true">
              <remove name="RadUploadModule" />
              <remove name="RadCompression" />
              <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" preCondition="integratedMode" />
              <add name="RadCompression" type="Telerik.Web.UI.RadCompression" preCondition="integratedMode" />
            </modules>
            <handlers>
              <remove name="ChartImage_axd" />
              <remove name="Telerik_Web_UI_SpellCheckHandler_axd" />
              <remove name="Telerik_Web_UI_DialogHandler_aspx" />
              <remove name="Telerik_RadUploadProgressHandler_ashx" />
              <remove name="Telerik_Web_UI_WebResource_axd" />
              <add name="ChartImage_axd" path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_Web_UI_SpellCheckHandler_axd" path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_Web_UI_DialogHandler_aspx" path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_RadUploadProgressHandler_ashx" path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_Web_UI_WebResource_axd" path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" preCondition="integratedMode" />
            </handlers>
            <security>
              <requestFiltering>
                <requestLimits maxAllowedContentLength="10485760" />
              </requestFiltering>
            </security>
            <staticContent>
              <clientCache cacheControlMode="UseExpires" httpExpires="Wed, 01 Jan 2020 00:00:00 GMT" />
            </staticContent>
          </system.webServer>
          <appSettings />
          <system.web>
            <compilation debug="false" targetFramework="4.0" />
            <pages>
              <controls>
                <add tagPrefix="telerik" namespace="Telerik.Web.UI" assembly="Telerik.Web.UI" />
              </controls>
            </pages>
            <httpHandlers>
              <add path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" validate="false" />
              <add path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" validate="false" />
              <add path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" validate="false" />
              <add path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" validate="false" />
              <add path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" validate="false" />
            </httpHandlers>
            <httpModules>
              <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" />
              <add name="RadCompression" type="Telerik.Web.UI.RadCompression" />
            </httpModules>
            <httpRuntime maxRequestLength="10240" />
          </system.web>
        </configuration>

    and this is the resulting response header for http://cdn.test.dev/css/global.css:

        Cache-Control: private,public, max-age=31536000
        Content-Type: text/css
        Content-Encoding: gzip
        Expires: Wed, 01 Jan 2020 00:00:00 GMT
        Last-Modified: Mon, 06 Sep 2010 08:53:06 GMT
        Accept-Ranges: bytes
        Etag: "0454eca04dcb1:0"
        Vary: Accept-Encoding
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        Date: Mon, 06 Sep 2010 14:57:08 GMT
        Content-Length: 4495

    Read the article

  • Remote Desktop fails after VPN connection.

    - by Samet Sorgut
    The remote computer is connected to with Remote Desktop. When the remote computer connects to a VPN, the Remote Desktop session freezes, and it is not possible to connect to the remote computer again via Remote Desktop. What can be done to connect to this remote computer after it establishes a VPN connection? The only thing that comes to my mind is to install a second NIC and configure Remote Desktop to accept connections from this NIC while the VPN works over the other... What do you suggest?

    Read the article

  • Can't ping other machines at Linux VPN PPTP server's local lan from outside

    - by Marco Sanchez
    Before anything else, hello guys, this is the first time I ask for something here, so I hope someone can give me a hand. Please look at the following network diagram:

        ---------------------------------------------------------------
        VPN Server (SuSE SLES11)          Webserver
                 |                            |
                 |                            |
                 ---------- VPN LAN -----------
                               |
                 Router with unique IP
                 (with port forwarding rules set and VPN pass-through enabled)
                               |
                 PPTP connection over Internet
                               |
                 Workstation (PC or laptop with Windows)
        ---------------------------------------------------------------

    So the idea is for the workstation to connect to the PPTP server and then be able to access a web application on the webserver. Right now I have the PPTP server configured and the VPN works: I can connect to the SLES11 server from the workstation with no problems, I can ping it, and everything works fine. But if I try to ping the webserver from the workstation, I can't reach it. I'm making a mistake somewhere but I don't see where. Please note that I'm not a network expert, so I'd greatly appreciate some specific guidance. Here is some info related to the IPs:

        ---------------------------------------------------------------
        *** SLES11 VPN Server has 2 network cards:
        -- eth0 (Internal Network)  IP: 192.168.210.5    MASK: 255.55.255.0
        -- eth1 (External Network)  IP: 192.168.1.105    MASK: 255.55.255.0

        *** Webserver has 1 network card:
        -- eth0 (Internal Network)  IP: 192.168.210.221  MASK: 255.55.255.0

        *** Workstation -- IP info once the connection has been established:

        PPP adapter Test VPN Connection:
           Connection-specific DNS Suffix . :
           Description . . . . . . . . . . . : Test VPN Connection
           Physical Address. . . . . . . . . :
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : 192.168.210.110(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.255
           Default Gateway . . . . . . . . . : 0.0.0.0
           DNS Servers . . . . . . . . . . . : 189.209.208.181 (defined as part of the PPTP server options config script)
                                               189.209.127.244
           Primary WINS Server . . . . . . . : 192.168.210.220 (defined as part of the PPTP server options config script)
           NetBIOS over Tcpip. . . . . . . . : Enabled
        ---------------------------------------------------------------

    I also defined the following within iptables:

        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -A INPUT -i eth0 -p tcp --dport 1723 -j ACCEPT
        iptables -A INPUT -i eth0 -p gre -j ACCEPT

    If you need any piece of information from the PPTP server scripts please let me know. The thing is that I can actually connect to the VPN server and access its services and everything, but after that I can't reach any other computer on that LAN. Any help would be greatly appreciated, and thanks in advance.
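
    A hedged observation, since the client address 192.168.210.110 sits inside the internal LAN subnet: the VPN server must both forward packets and answer ARP for that address on eth0, otherwise the webserver never learns where to send its replies. A minimal sketch (interface names as above; the pppd option "proxyarp" in the PPTP options file achieves the same as the second sysctl):

        sysctl -w net.ipv4.ip_forward=1
        sysctl -w net.ipv4.conf.eth0.proxy_arp=1
        iptables -A FORWARD -i ppp+ -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o ppp+ -j ACCEPT

    These are runtime settings; persisting them means /etc/sysctl.conf and the firewall scripts.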

    Read the article

  • Unable to log iptables

    - by ActuatedCrayon
    I'm having trouble getting iptables to log to any file. My iptables looks like:

        Chain INPUT (policy ACCEPT 1366 packets, 433582 bytes)
         pkts bytes target  prot opt in      out   source       destination
          869 60656 LOG     icmp --  venet0  *     0.0.0.0/0    0.0.0.0/0     LOG flags 0 level 7

    Syslogd is the only log helper running. The default syslog.conf didn't work, so I tried adding "kern.=debug -/var/log/iptables.log". But the file already has "kern.* -/var/log/kern.log". There are recent syslog entries, so it's not a permissions thing. I'm running Ubuntu 12.04.1 with 2.6.32-042stab061.2.
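
    A hedged sketch of one way to get a separate file, assuming rsyslog (the stock syslog daemon on Ubuntu 12.04): give the LOG rule a prefix to match on, then filter on that prefix rather than on the facility/priority.

        # add a recognisable prefix to the existing rule (rule 1 in INPUT)
        iptables -R INPUT 1 -i venet0 -p icmp -j LOG --log-prefix "IPT: " --log-level 7

        # /etc/rsyslog.d/10-iptables.conf
        :msg, contains, "IPT: " -/var/log/iptables.log
        & ~

    The "& ~" line discards the message afterwards so it doesn't also land in kern.log; restart rsyslog after adding the file. Note that inside an OpenVZ container (the 042stab kernel suggests one) netfilter log messages may only be visible in the host's kernel log.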

    Read the article

  • How much packet loss is normal?

    - by Fabian
    I started monitoring our network using SmokePing. Users occasionally complain about bad network connections, but the problems usually go away after a few minutes. I now wanted to get some more quantitative information about those problems. SmokePing regularly pings servers inside our network, in a connected network, and outside hosts. I only have a limited amount of control over our internal network and none at all over the connection to the outside and to the second network. I now see quite often (2-4 times a day) that packets to the second network and the outside are dropped. Most of the time it is 1-2 packets out of 20, sometimes more. Inside our internal network no packets are dropped. Is this an expected amount of packet loss, or does it indicate that something is wrong? I'm mainly wondering whether I should bother the university IT department about it, or just accept it as it is.
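
    One way to turn this into something the IT department can act on is a per-hop loss report; a small sketch, with the target host as a placeholder:

        # 100 probes per hop, summarised as a report; loss that starts at one hop and
        # persists to the destination points at that hop rather than at ICMP de-prioritisation
        mtr --report --report-cycles 100 host.example.org

    Loss of 1-2 packets out of 20 (5-10%) during an event is enough to hurt interactive traffic, so it is worth reporting together with the SmokePing graphs.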

    Read the article

  • Declaring multiple ports for the same VirtualHosts

    - by user65567
    I want to declare multiple ports for the same VirtualHost:

        SSLStrictSNIVHostCheck off
        # Apache setup which will listen for and accept SSL connections on port 443.
        Listen 443
        # Listen for virtual host requests on all IP addresses
        NameVirtualHost *:443
        <VirtualHost *:443>
          ServerName domain.localhost
          DocumentRoot "/Users/<my_user_name>/Sites/domain/public"
          <Directory "/Users/<my_user_name>/Sites/domain/public">
            Order allow,deny
            Allow from all
          </Directory>
          # SSL Configuration
          SSLEngine on
          ...
        </VirtualHost>

    How can I declare a new port ('Listen', ServerName, ...) for 'domain.localhost'? If I add the following code, Apache also answers (which is too much) for all other subdomains of 'domain.localhost' (subdomain1.domain.localhost, subdomain2.domain.localhost, ...):

        <VirtualHost *:80>
          ServerName pjtmain.localhost:80
          DocumentRoot "/Users/Toto85/Sites/pjtmain/public"
          RackEnv development
          <Directory "/Users/Toto85/Sites/pjtmain/public">
            Order allow,deny
            Allow from all
          </Directory>
        </VirtualHost>
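
    A hedged explanation plus sketch: with name-based virtual hosts, any request whose Host: header matches no ServerName/ServerAlias is served by the first vhost defined for that address:port, which is why every sub-domain lands in the new *:80 vhost. Declaring the port explicitly and putting a deliberate catch-all first usually contains it (the paths and catch-all name are placeholders; omit the Listen/NameVirtualHost lines if they already exist elsewhere in the config):

        Listen 80
        NameVirtualHost *:80

        # catch-all: swallows sub-domains that match nothing else
        <VirtualHost *:80>
          ServerName catchall.localhost
          DocumentRoot "/Users/<my_user_name>/Sites/empty"
        </VirtualHost>

        <VirtualHost *:80>
          ServerName domain.localhost
          DocumentRoot "/Users/<my_user_name>/Sites/domain/public"
        </VirtualHost>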

    Read the article

  • Intercept Apache communication

    - by Nathan Adams
    I am looking to develop a solution that eliminates potential spammers. The way this system will work is that it will watch connections and requests. Going into the specifics is more a matter for Stack Overflow, but what I am interested in is whether it is possible to tell Apache to pass the request to my application first and give it the ability to accept or deny the request. Sure, it will make requests slower, but I think that is a trade-off I am willing to take. I still want Apache to run the request through any interpreters (such as PHP), however. The idea is that one wouldn't have to implement anti-spam measures on a per-app basis but have an "umbrella" of spam protection.
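
    One hedged way to get such a hook without touching each application is mod_rewrite's external-program map; Apache keeps the checker running, consults it per request, and still hands accepted requests to PHP as usual. The script path and the DENY convention below are placeholders.

        RewriteEngine On
        # Apache starts this program once; it reads one key per line on stdin
        # and must write one answer per line on stdout
        RewriteMap spamcheck prg:/usr/local/bin/spamcheck
        RewriteCond ${spamcheck:%{REMOTE_ADDR}} ^DENY$
        RewriteRule ^ - [F]

    mod_security is the heavier-weight alternative if the decision needs to look at headers and bodies rather than just the client address.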

    Read the article
