Search Results

Search found 963 results on 39 pages for 'peer pressure'.

Page 6/39

  • Does anyone know of a code change management tool that can highlight code changes in Visual Studio?

    - by Leejo
    Hey all, I am trying to find a tool that can highlight code changes in Visual Studio so they can be easily found and reviewed. Below are some requirements for what we are looking for... Identify and use a difference highlighting tool that meets the following criteria: • can highlight areas that need to be reviewed • there is a place to enter comments • retains line numbering from code • preference for doing within IDE Issue addressed: Hard to see what was changed in code - changes not identified. Coders do not provide administrators diffs. No tool that does a nice job to identify differences. Daunting/time consuming to provide a good diff. When highlighting differences was provided, loss of line numbers was a substantial issue (was worse).

    Read the article

  • How to remove blank lines in .txt file

    - by Brant
    I want to change the text file format as follows, but don't know how to do it: (2) 5. The function of the condenser is to: a) vapourise the liquid refrigerant b) change high pressure refrigerant vapour to liquid c) pressurise low pressure refrigerant vapour d) vent off vapourised refrigerant e) lower the liquid refrigerant pressure (2) 6. One tonne of refrigeration is: a) 13958 kJ per day b) 100 kJ per minute c) 233 kJ per minute d) 13958 J per hour e) 335 J per second (2) 5. The function of the condenser is to: a) vapourise the liquid refrigerant b) change high pressure refrigerant vapour to liquid c) pressurise low pressure refrigerant vapour d) vent off vapourised refrigerant e) lower the liquid refrigerant pressure (2) 6. One tonne of refrigeration is: a) 13958 kJ per day b) 100 kJ per minute c) 233 kJ per minute d) 13958 J per hour e) 335 J per second
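
    On the title question itself, a minimal sketch of stripping blank (or whitespace-only) lines from a text file; the file names are placeholders, and the further quiz reformatting hinted at above is not attempted here:

      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.nio.file.Paths;
      import java.util.List;
      import java.util.stream.Collectors;

      public class RemoveBlankLines {
          public static void main(String[] args) throws IOException {
              Path in = Paths.get("input.txt");      // placeholder input file
              Path out = Paths.get("output.txt");    // placeholder output file
              List<String> kept = Files.readAllLines(in).stream()
                      .filter(line -> !line.trim().isEmpty())   // drop blank/whitespace-only lines
                      .collect(Collectors.toList());
              Files.write(out, kept);
          }
      }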

    Read the article

  • Scuttlebutt Reconciliation from "Efficient Reconciliation and Flow Control for Anti-Entropy Protocols"

    - by Maus
    This question might be more suited to math.stackexchange.com, but here goes: Their Version Reconciliation takes two parts-- first the exchange of digests, and then an exchange of updates. I'll first paraphrase the paper's description of each step. To exchange digests, two peers send one another a set of pairs-- (peer, max_version) for each peer in the network, and then each one responds with a set of deltas. The deltas look like: (peer, key, value, version), for all tuples for which peer's state maps the key to the given value and version, and the version number is greater than the maximum version number peer has seen. This seems to require that each node remember the state of each other node, and the highest version number and ID each node has seen. Question Why must we iterate through all peers to exchange information between p and q?
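
    To make the digest/delta exchange concrete, here is a rough sketch under the paper's definitions as paraphrased above (Java 16+ for the record; peer and key names are illustrative, and this is not the authors' actual data layout or flow control):

      import java.util.ArrayList;
      import java.util.List;
      import java.util.Map;

      public class ScuttlebuttSketch {
          // One mapping entry in a peer's state: origin peer, key -> (value, version).
          record Entry(String peer, String key, String value, long version) {}

          // q's digest: for each origin peer r, the highest version q has seen from r.
          static List<Entry> deltasFor(Map<String, Long> digestFromQ, List<Entry> localState) {
              List<Entry> deltas = new ArrayList<>();
              for (Entry e : localState) {
                  long qHasSeen = digestFromQ.getOrDefault(e.peer(), -1L);
                  if (e.version() > qHasSeen) {      // q is missing this update
                      deltas.add(e);
                  }
              }
              return deltas;
          }

          public static void main(String[] args) {
              List<Entry> pState = List.of(
                      new Entry("r", "load", "0.7", 12),
                      new Entry("r", "disk", "0.2", 9),
                      new Entry("s", "load", "0.1", 4));
              Map<String, Long> qDigest = Map.of("r", 10L, "s", 4L);
              System.out.println(deltasFor(qDigest, pState));   // only the (r, load, 0.7, 12) entry
          }
      }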

    Read the article

  • When should NTPd broadcast/broadcastclient be used instead of client/server or peer modes?

    - by Luke404
    The NTP daemon is often used in its simplest mode, which is client/server: you specify one or more server directives in your ntp.conf and your clients will use those servers. In addition to that, when you run your own NTP servers, it is good practice to peer them together, so if one of them loses connectivity to its upstream servers, it will get time from its peers. But NTPd can also work with broadcast and/or multicast distribution of time data, with the documentation stating: "broadcast and multicast modes are intended for configurations involving one or a few servers and a possibly very large client population". The documentation also says elsewhere: "It is possible and frequently useful to configure a host as both broadcast client and broadcast server. A number of hosts configured this way and sharing a common broadcast address will automatically organize themselves in an optimum configuration based on stratum and synchronization distance." I can see one obvious administrative benefit: you don't have to manually specify and update your list of NTP servers in the clients' ntp.conf, so to me it looks tempting to use broadcast mode even for a small client population (say 5+ clients with 3~4 servers). I expect network traffic to be a little higher with broadcasts instead of client/server associations, but given the usual gigabit ethernet LAN the impact should be negligible unless you have a very very large number of hosts in the same broadcast domain. At the end of the day, when should broadcast mode be used or avoided? Are there pros and cons I haven't seen?
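
    For reference, a minimal ntp.conf sketch of the broadcast setup under discussion might look like this (the address is a placeholder, and authentication, which broadcast associations normally want, is left out):

      # On each NTP server (keep the usual upstream server/peer lines as well):
      broadcast 192.168.0.255          # announce time on the local broadcast address

      # On each client:
      broadcastclient                  # accept broadcasts from servers on this subnet
      broadcastdelay 0.008             # optional fallback delay estimate, in seconds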

    Read the article

  • Sync Framework Peer Data Sharing Unable to Catch Conflicts, Source Provider Always wins!

    - by Belliez
    Hi, I am using SQL Server 2008 and the WebSharingAppDemo-SqlProviderEndToEnd sample, and this is almost perfect for my needs; however, I am unable to detect conflicts. By default ConflictResolutionPolicy is set to ApplicationDefined. I have tried setting the ResolutionPolicy to SourceWins, DestinationWins or ApplicationDefined and I always get the same result: the Destination Provider Proxy (Peer) always wins. Can someone please provide a sample of how I can detect a conflict and act upon it, or point me in the right direction where I can start to look? I am not sure what to do and where I can create events for detecting collisions. Been staring at this too long now and going around in circles! Thanks in advance. Paul

    Read the article

  • How does an NTP host switch among the various modes?

    - by James A. Rosen
    The NTPv3 RFC describes five operating modes: Symmetric Active (1): A host operating in this mode sends periodic messages regardless of the reachability state or stratum of its peer. By operating in this mode the host announces its willingness to synchronize and be synchronized by the peer. Symmetric Passive (2): This type of association is ordinarily created upon arrival of a message from a peer operating in the symmetric active mode and persists only as long as the peer is reachable and operating at a stratum level less than or equal to the host; otherwise, the association is dissolved. However, the association will always persist until at least one message has been sent in reply. By operating in this mode the host announces its willingness to synchronize and be synchronized by the peer. Client (3): A host operating in this mode sends periodic messages regardless of the reachability state or stratum of its peer. By operating in this mode the host, usually a LAN workstation, announces its willingness to be synchronized by, but not to synchronize the peer. Server (4): This type of association is ordinarily created upon arrival of a client request message and exists only in order to reply to that request, after which the association is dissolved. By operating in this mode the host, usually a LAN time server, announces its willingness to synchronize, but not to be synchronized by the peer. Broadcast (5): A host operating in this mode sends periodic messages regardless of the reachability state or stratum of the peers. By operating in this mode the host, usually a LAN time server operating on a high-speed broadcast medium, announces its willingness to synchronize all of the peers, but not to be synchronized by any of them. It seems to me, though, that any host except a leaf node would probably be in several modes. For example, I might have a local area network with three NTP servers, each in Symmetric Active (1) mode with respect to one another. They would also each be clients (3) of one of the many public stratum two time servers. Lastly, they would all server as servers (4) to the many local clients. Is the point that they're only in a given mode for a moment during the synchronization? If so, how does a host know to switch? I'm only looking for enough depth here to discuss the issue in an educated manner, not to write a custom time server.
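
    Roughly speaking, a host does not switch a single global mode at all: each association gets its mode from how it was configured (or from the packet that created it), so one ntp.conf can put the same host into several modes at once. An illustrative sketch (addresses are placeholders):

      server 0.pool.ntp.org iburst     # we act as a client (mode 3) of this upstream; it answers in server mode (4)
      peer   ntp2.example.local        # symmetric active (mode 1) with a local partner server
      broadcast 192.168.0.255          # optionally also announce to the LAN in broadcast mode (5)
      # Answering our own LAN clients in server mode (4), or going symmetric passive (2)
      # towards an unconfigured symmetric-active peer, needs no directive at all.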

    Read the article

  • How can I diagnose "Cannot determine peer address" in my Perl TCP script?

    - by MadBoy
    I have this little script which does its job pretty well, but sometimes it tends to fail. It fails in 2 cases: (1) with the error "send: Cannot determine peer address at ./tcp-new.pl line 52"; (2) with no output or anything, it just fails to deliver what it got to the connected TCP client. Usually it happens after I disconnect from the server, go home and connect to it again. To fix this a restart is required and it starts working. Sometimes this problem is followed by the problem mentioned in point 1. Note: it's not a problem when I disconnect and reconnect to it again within a short amount of time (unless error nr 1 happens). So can anyone help me make this code a bit more stable so I don't have to restart it every day? #!/usr/bin/perl use strict; use warnings; use IO::Socket; use IO::Select; my $tcp_port = "10008"; my $udp_port = "2099"; my $tcp_socket = IO::Socket::INET->new( Listen => SOMAXCONN, LocalPort => $tcp_port, Proto => 'tcp', ReuseAddr => 1, ); my $udp_socket = IO::Socket::INET->new( LocalPort => $udp_port, Proto => 'udp', ); my $read_select = IO::Select->new(); my $write_select = IO::Select->new(); $read_select->add($tcp_socket); $read_select->add($udp_socket); while (1) { my @read = $read_select->can_read(); foreach my $read (@read) { if ($read == $tcp_socket) { my $new_tcp = $read->accept(); $write_select->add($new_tcp); } elsif ($read == $udp_socket) { my $recv_buffer; $udp_socket->recv($recv_buffer, 1024, undef); my @write = $write_select->can_write(); foreach my $write (@write) { $write->send($recv_buffer); } } } }

    Read the article

  • Too many open connections... Did I mess up something?

    - by Legend
    I am developing a plugin (for a Bittorrent client named Azureus programmed in Java) that lets me add peers into the current list from which it is downloading. Everything was working fine until yesterday when I started getting these weird errors: DEBUG::Thu Apr 15 10:45:40 CET 2010::org.gudy.azureus2.core3.peer.impl.control.PEPeerControlImpl::addPeer::795: Injected peer 90.35.126.126:33064 was not added - Too many connections DEBUG::Thu Apr 15 10:48:40 CET 2010::org.gudy.azureus2.core3.peer.impl.control.PEPeerControlImpl::addPeer::795: Injected peer 80.25.126.126:33064 was not added - Too many connections DEBUG::Thu Apr 15 10:52:40 CET 2010::org.gudy.azureus2.core3.peer.impl.control.PEPeerControlImpl::addPeer::795: Injected peer 90.35.126.126:33064 was not added - Too many connections I am thinking I messed up something with the TCP sockets. Does anyone know why this could be happening?

    Read the article

  • Call subclass constructor from abstract class in Java

    - by Joel
    public abstract class Parent { private Parent peer; public Parent() { peer = new ??????("to call overloaded constructor"); } public Parent(String someString) { } } public class Child1 extends Parent { } public class Child2 extends Parent { } When I construct an instance of Child1, I want a "peer" to automatically be constructed which is also of type Child1, and be stored in the peer property. Likewise for Child2, with a peer of type Child2. The problem is with the assignment of the peer property in the parent class. I can't construct a new Child class by calling new Child1() because then it wouldn't work for Child2. How can I do this? Is there a keyword that I can use that would refer to the child class? Something like new self()?
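
    There is no new self() in Java, but one hedged way to get this effect is reflection on getClass(), which returns the runtime subclass. This is only a sketch and assumes every subclass declares the String constructor:

      public abstract class Parent {
          private Parent peer;

          public Parent() {
              try {
                  // getClass() is the concrete runtime type (Child1 or Child2), so the
                  // peer gets the same type as 'this'. The String constructor is used
                  // so that the peer does not recursively create its own peer.
                  peer = getClass().getDeclaredConstructor(String.class).newInstance("peer");
              } catch (ReflectiveOperationException e) {
                  throw new IllegalStateException("could not construct peer", e);
              }
          }

          public Parent(String someString) { }

          public static void main(String[] args) {
              System.out.println(new Child1().peer.getClass().getSimpleName());   // prints Child1
          }
      }

      class Child1 extends Parent {
          public Child1() { }
          public Child1(String s) { super(s); }   // required for the reflective call above
      }

      class Child2 extends Parent {
          public Child2() { }
          public Child2(String s) { super(s); }
      }

    A reflection-free alternative is a protected abstract factory method (e.g. protected abstract Parent makePeer();) that each subclass implements, at the cost of a little boilerplate per subclass.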

    Read the article

  • smtp.gmail.com from bash gives "Error in certificate: Peer's certificate issuer is not recognized."

    - by ndasusers
    I needed my script to email admin if there is a problem, and the company only uses Gmail. Following a few posts instructions I was able to set up mailx using a .mailrc file. there was first the error of nss-config-dir I solved that by copying some .db files from a firefox directory. to ./certs and aiming to it in mailrc. A mail was sent. However, the error above came up. By some miracle, there was a Google certificate in the .db. It showed up with this command: ~]$ certutil -L -d certs Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI GeoTrust SSL CA ,, VeriSign Class 3 Secure Server CA - G3 ,, Microsoft Internet Authority ,, VeriSign Class 3 Extended Validation SSL CA ,, Akamai Subordinate CA 3 ,, MSIT Machine Auth CA 2 ,, Google Internet Authority ,, Most likely, it can be ignored, because the mail worked anyway. Finally, after pulling some hair and many googles, I found out how to rid myself of the annoyance. First, export the existing certificate to a ASSCII file: ~]$ certutil -L -n 'Google Internet Authority' -d certs -a > google.cert.asc Now re-import that file, and mark it as a trusted for SSL certificates, ala: ~]$ certutil -A -t "C,," -n 'Google Internet Authority' -d certs -i google.cert.asc After this, listing shows it trusted: ~]$ certutil -L -d certs Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI ... Google Internet Authority C,, And mailx sends out with no hitch. ~]$ /bin/mailx -A gmail -s "Whadda ya no" [email protected] ho ho ho EOT ~]$ I hope it is helpful to someone looking to be done with the error. Also, I am curious about somethings. How could I get this certificate, if it were not in the mozilla database by chance? Is there for instance, something like this? ~]$ certutil -A -t "C,," \ -n 'gmail.com' \ -d certs \ -i 'http://google.com/cert/this...'
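
    On the closing question: one way to fetch the peer certificate without relying on a browser database is a short program that performs the TLS handshake and writes the leaf certificate out as PEM, which can then be imported with certutil -A as above. A sketch, assuming implicit-TLS SMTP on port 465 and a hypothetical output file name:

      import java.io.FileWriter;
      import java.security.cert.Certificate;
      import java.util.Base64;
      import javax.net.ssl.SSLSocket;
      import javax.net.ssl.SSLSocketFactory;

      public class FetchPeerCert {
          public static void main(String[] args) throws Exception {
              SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
              try (SSLSocket sock = (SSLSocket) factory.createSocket("smtp.gmail.com", 465)) {
                  sock.startHandshake();                                 // forces the certificate exchange
                  Certificate[] chain = sock.getSession().getPeerCertificates();
                  try (FileWriter out = new FileWriter("google.cert.asc")) {
                      out.write("-----BEGIN CERTIFICATE-----\n");
                      out.write(Base64.getMimeEncoder(64, "\n".getBytes())
                                      .encodeToString(chain[0].getEncoded()));   // leaf certificate
                      out.write("\n-----END CERTIFICATE-----\n");
                  }
              }
          }
      }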

    Read the article

  • How can I peer into a Windows user's RDP session for support, where I initiate the support session?

    - by David Bullock
    I've used both WebEx and GoToAssist, but neither of them have a story to tell for 'unattended' access of a user's desktop unless the user is using the machine's primary console. Unattended in the sense that they phone me and I then appear in their session, rather than they visit a website and enter their details and wait for me. This is a common use-case, since the users' machine is a virtual desktop, and the session broker is connecting the user via RDP. They never have a session with their desktop unless it's a remote desktop session. At the moment, if I use either of the said products to get an unattended support session going, all I can see is the login screen of the physical console, telling me that a remote session is in progress. Are there alternative tools which will make me happy?

    Read the article

  • Component must be a valid peer (when I remove frame.add(Component);)

    - by boyd
    i have this code here for creating and drawing array of pixels into an image import javax.swing.JFrame; import java.awt.Canvas; import java.awt.Graphics; import java.awt.image.BufferStrategy; import java.awt.image.BufferedImage; import java.awt.image.DataBufferInt; public class test extends Canvas implements Runnable{ private static final long serialVersionUID = 1L; public static int WIDTH = 800; public static int HEIGHT = 600; public boolean running=true; public int[] pixels; public BufferedImage img; public static JFrame frame; private Thread thread; public static void main(String[] arg) { test wind = new test(); frame = new JFrame("WINDOW"); frame.add(wind); frame.setVisible(true); frame.setSize(WIDTH, HEIGHT); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); wind.init(); } public void init(){ thread=new Thread(this); thread.start(); img=new BufferedImage(WIDTH, HEIGHT,BufferedImage.TYPE_INT_RGB); pixels=((DataBufferInt)img.getRaster().getDataBuffer()).getData(); } public void run(){ while(running){ render(); try { thread.sleep(55); } catch (InterruptedException e) { e.printStackTrace(); } } } public void render(){ BufferStrategy bs=this.getBufferStrategy(); if(bs==null){ createBufferStrategy(4); return; } drawRect(0,0,150,150); Graphics g= bs.getDrawGraphics(); g.drawImage(img, 0, 0, WIDTH, HEIGHT, null); g.dispose(); bs.show(); } private void drawRect(int x, int y, int w, int h) { for(int i=x;i<w;i++) for(int j=x;j<h;j++) pixels[i+j*WIDTH]=346346; } } Why i get "Component must be a valid peer" error when i remove the line: frame.add(wind); Why I want to remove it? Because I want to create a frame using a class object(from another file) and use the code Window myWindow= new Window() to do exactly the same thing BTW: who knows Java and understands what i wrote please send me a message with your skype or yahoo messenger id.I want to cooperate with you for a project (graphics engine for games)
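
    The error usually means createBufferStrategy() was called on a Canvas that has no native peer yet, i.e. it was never added to a container that is showing on screen. A minimal sketch of a guard (the class and frame names below are placeholders, not the poster's planned Window class):

      import java.awt.Canvas;
      import java.awt.Graphics;
      import java.awt.image.BufferStrategy;
      import javax.swing.JFrame;

      public class PeerSafeCanvas extends Canvas {
          public void render() {
              if (!isDisplayable()) {
                  return;                      // no peer yet: skip this frame instead of crashing
              }
              BufferStrategy bs = getBufferStrategy();
              if (bs == null) {
                  createBufferStrategy(2);     // double buffering is usually enough
                  return;
              }
              Graphics g = bs.getDrawGraphics();
              g.fillRect(0, 0, 150, 150);
              g.dispose();
              bs.show();
          }

          public static void main(String[] args) {
              JFrame frame = new JFrame("demo");
              PeerSafeCanvas canvas = new PeerSafeCanvas();
              frame.add(canvas);               // without some equivalent of this add, isDisplayable() stays false
              frame.setSize(800, 600);
              frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
              frame.setVisible(true);
              canvas.render();
          }
      }

    If the goal is to avoid frame.add(wind) in main, whatever wrapper class ends up owning the Canvas still has to add it to a displayable, visible container before the render loop starts.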

    Read the article

  • extrapolating object state based on updates

    - by user494461
    I have a networked multi-user collaborative application. To maintain a consistent virtual world, I send updates for objects from a master peer to a guest peer. The update state contains the x,y,z coordinates of the object center and its rotation matrix (the CHAI3D API uses a 3x3 matrix), at a 30Hz frequency. I want to send with a reduced update rate, so I want a predictor on both peers. When the predicted value differs by more than, say, a 10% error from the master peer object's original state, the master peer triggers a state update. Now, for position I used velocity and position updates so that the guest peer can extrapolate position. Like velocity for position, what parameter should I use for rotation extrapolation?
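
    The usual counterpart of velocity here is angular velocity, with the orientation kept as a quaternion rather than a 3x3 matrix so it can be integrated and renormalized cheaply. A small sketch of that extrapolation step, assuming a first-order integration and an angular velocity expressed in the world frame:

      public class RotationExtrapolation {
          // Quaternions stored as (w, x, y, z); Hamilton product.
          static double[] multiply(double[] a, double[] b) {
              return new double[] {
                  a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
                  a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
                  a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
                  a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]
              };
          }

          // q(t+dt) ~= normalize(q + dt * 0.5 * (0, w) * q), the rotational analogue
          // of p(t+dt) ~= p + dt * v used for position dead reckoning.
          static double[] extrapolate(double[] q, double[] angularVelocity, double dt) {
              double[] omega = { 0.0, angularVelocity[0], angularVelocity[1], angularVelocity[2] };
              double[] dq = multiply(omega, q);
              double[] out = new double[4];
              for (int i = 0; i < 4; i++) {
                  out[i] = q[i] + 0.5 * dt * dq[i];
              }
              double norm = Math.sqrt(out[0]*out[0] + out[1]*out[1] + out[2]*out[2] + out[3]*out[3]);
              for (int i = 0; i < 4; i++) {
                  out[i] /= norm;               // keep it a unit quaternion
              }
              return out;
          }

          public static void main(String[] args) {
              double[] q = { 1, 0, 0, 0 };                     // identity orientation
              double[] w = { 0, 0, Math.PI };                  // half a turn per second about z
              System.out.println(java.util.Arrays.toString(extrapolate(q, w, 1.0 / 30)));
          }
      }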

    Read the article

  • Is it illegal to forward copyrighted content? [closed]

    - by Mike
    Ok, this may be a strange question, but let's start: If I illegally download a movie (for example...) from a HTTP Web Server, there are many routers between me and the Web Server which are forwarding the data to my PC. As I understand, the owners of the routers are not legally responsible for the data they forward (please correct if I'm wrong). What if I would install a client of a peer-to-peer network on my PC and this client (peer) would forward copyrighted content received from peers to other peers? Hope someone understand what I mean ;-) Any answer or comment would be highly appreciated. Mike Update 1: I'm asking this question because I want to develop a p2p-application and try to figure out how to prevent illegal content sharing/distribution (if forwarding content is really illegal...) Update 2: What if the data forwarded by my peer is encrypted, so I'm technically not able to read and check it?

    Read the article

  • Exploring TCP throughput with DTrace (2)

    - by user12820842
    Last time, I described how we can use the overlap in distributions of unacknowledged byte counts and send window to determine whether the peer's receive window may be too small, limiting throughput. Let's combine that comparison with a comparison of congestion window and slow start threshold, all on a per-port/per-client basis. This will help us Identify whether the congestion window or the receive window are limiting factors on throughput by comparing the distributions of congestion window and send window values to the distribution of outstanding (unacked) bytes. This will allow us to get a visual sense for how often we are thwarted in our attempts to fill the pipe due to congestion control versus the peer not being able to receive any more data. Identify whether slow start or congestion avoidance predominate by comparing the overlap in the congestion window and slow start distributions. If the slow start threshold distribution overlaps with the congestion window, we know that we have switched between slow start and congestion avoidance, possibly multiple times. Identify whether the peer's receive window is too small by comparing the distribution of outstanding unacked bytes with the send window distribution (i.e. the peer's receive window). I discussed this here. # dtrace -s tcp_window.d dtrace: script 'tcp_window.d' matched 10 probes ^C cwnd 80 10.175.96.92 value ------------- Distribution ------------- count 1024 | 0 2048 | 4 4096 | 6 8192 | 18 16384 | 36 32768 |@ 79 65536 |@ 155 131072 |@ 199 262144 |@@@ 400 524288 |@@@@@@ 798 1048576 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 3848 2097152 | 0 ssthresh 80 10.175.96.92 value ------------- Distribution ------------- count 268435456 | 0 536870912 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543 1073741824 | 0 unacked 80 10.175.96.92 value ------------- Distribution ------------- count -1 | 0 0 | 1 1 | 0 2 | 0 4 | 0 8 | 0 16 | 0 32 | 0 64 | 0 128 | 0 256 | 3 512 | 0 1024 | 0 2048 | 4 4096 | 9 8192 | 21 16384 | 36 32768 |@ 78 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5391 131072 | 0 swnd 80 10.175.96.92 value ------------- Distribution ------------- count 32768 | 0 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543 131072 | 0 Here we are observing a large file transfer via http on the webserver. Comparing these distributions, we can observe: That slow start congestion control is in operation. The distribution of congestion window values lies below the range of slow start threshold values (which are in the 536870912+ range), so the connection is in slow start mode. Both the unacked byte count and the send window values peak in the 65536-131071 range, but the send window value distribution is narrower. This tells us that the peer TCP's receive window is not closing. The congestion window distribution peaks in the 1048576 - 2097152 range while the receive window distribution is confined to the 65536-131071 range. Since the cwnd distribution ranges as low as 2048-4095, we can see that for some of the time we have been observing the connection, congestion control has been a limiting factor on transfer, but for the majority of the time the receive window of the peer would more likely have been the limiting factor. However, we know the window has never closed as the distribution of swnd values stays within the 65536-131071 range. So all in all we have a connection that has been mildly constrained by congestion control, but for the bulk of the time we have been observing it neither congestion or peer receive window have limited throughput. 
    Here's the script:

        #!/usr/sbin/dtrace -s

        tcp:::send
        / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
            @cwnd["cwnd", args[4]->tcp_sport, args[2]->ip_daddr] = quantize(args[3]->tcps_cwnd);
            @ssthresh["ssthresh", args[4]->tcp_sport, args[2]->ip_daddr] = quantize(args[3]->tcps_cwnd_ssthresh);
            @unacked["unacked", args[4]->tcp_sport, args[2]->ip_daddr] = quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
            @swnd["swnd", args[4]->tcp_sport, args[2]->ip_daddr] = quantize((args[4]->tcp_window) * (1 << args[3]->tcps_snd_ws));
        }

    One surprise here is that slow start is still in operation - one would assume that for a large file transfer, acknowledgements would push the congestion window up past the slow start threshold over time. The slow start threshold is in fact still close to its initial (very high) value, so that would suggest we have not experienced any congestion (the slow start threshold is adjusted when congestion occurs). Also, the above measurements were taken early in the connection lifetime, so the congestion window did not get a chance to be bumped up to the level of the slow start threshold. A good strategy when examining these sorts of measurements for a given service (such as a webserver) would be to start by examining the distributions above aggregated by port number only, to get an overall feel for service performance, i.e. is congestion control or peer receive window size an issue, or are we unconstrained to fill the pipe? From there, the overlap of distributions will tell us whether to drill down into specific clients. For example, if the send window distribution has multiple peaks, we may want to examine whether particular clients show issues with their receive window.

    Read the article

  • AT&T patents a technology for tracing downloads of illegal content on P2P networks

    Could this really be the end of the peer-to-peer network? AT&T has just patented a technology for tracing downloads of illegal content on P2P networks. P2P networks, adored by a very large community of users and loathed by intellectual-property protection organizations, are once again drawing media attention. This time, P2P is being pushed into the spotlight by AT&T, which has just patented a technology that makes the content of peer-to-peer networks easy to trace. The main targets of this technology are the torrent files that users access via RSS feeds and the search engines of ...

    Read the article

  • boolean in Java: what am I doing wrong?

    - by Cheesegraterr
    Hello, I am trying to make my boolean value work. I am new at programming java so I must be missing something simple. I am trying to make it so that if one of the tire pressures is below 35 or over 45 the system outputs "bad inflation" For class me must use a boolean which is what I tried. I cant figure out why this isnt working. No matter what I do the boolean is always true. Any tips? public class tirePressure { private static double getDoubleSystem1 () //Private routine to simply read a double in from the command line { String myInput1 = null; //Store the string that is read form the command line double numInput1 = 0; //Used to store the converted string into an double BufferedReader mySystem; //Buffer to store input mySystem = new BufferedReader (new InputStreamReader (System.in)); // creates a connection to system files or cmd try { myInput1 = mySystem.readLine (); //reads in data from console myInput1 = myInput1.trim (); //trim command cuts off unneccesary inputs } catch (IOException e) //checks for errors { System.out.println ("IOException: " + e); return -1; } numInput1 = Double.parseDouble (myInput1); //converts the string to an double return numInput1; //return double value to main program } static public void main (String[] args) { double TireFR; //double to store input from console double TireFL; double TireBR; double TireBL; boolean goodPressure; goodPressure = false; System.out.println ("Tire Pressure Checker"); System.out.println (" "); System.out.print ("Enter pressure of front left tire:"); TireFL = getDoubleSystem1 (); //read in an double from the user if (TireFL < 35 || TireFL > 45) { System.out.println ("Pressure out of range"); goodPressure = false; } System.out.print ("Enter pressure of front right tire:"); TireFR = getDoubleSystem1 (); //read in an double from the user if (TireFR < 35 || TireFR > 45) { System.out.println ("Pressure out of range"); goodPressure = false; } if (TireFL == TireFR) System.out.print (" "); else System.out.println ("Front tire pressures do not match"); System.out.println (" "); System.out.print ("Enter pressure of back left tire:"); TireBL = getDoubleSystem1 (); //read in an double from the user if (TireBL < 35 || TireBL > 45) { System.out.println ("Pressure out of range"); goodPressure = false; } System.out.print ("Enter pressure of back right tire:"); TireBR = getDoubleSystem1 (); //read in an double from the user if (TireBR < 35 || TireBR > 45) { System.out.println ("Pressure out of range"); goodPressure = false; } if (TireBL == TireBR) System.out.print (" "); else System.out.println ("Back tire pressures do not match"); if (goodPressure = true) System.out.println ("Inflation is OK."); else System.out.println ("Inflation is BAD."); System.out.println (goodPressure); } //mainmethod } // tirePressure Class
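
    For what it's worth, a small sketch of the flag logic in isolation. The two likely culprits in the posted code are that goodPressure starts as false and is never set to true, and that if (goodPressure = true) assigns rather than compares; the console-reading part is left out here:

      public class PressureFlagSketch {
          public static void main(String[] args) {
              double[] pressures = { 36.0, 44.0, 30.0, 40.0 };   // sample readings FL, FR, BL, BR
              boolean goodPressure = true;                        // start out true ...
              for (double p : pressures) {
                  if (p < 35 || p > 45) {
                      System.out.println("Pressure out of range");
                      goodPressure = false;                       // ... and any bad reading flips it
                  }
              }
              if (goodPressure) {            // plain boolean test; '=' here would assign, not compare
                  System.out.println("Inflation is OK.");
              } else {
                  System.out.println("Inflation is BAD.");
              }
          }
      }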

    Read the article

  • ssh + tinyproxy: poor performance

    - by Paul
    I am currently in China and I would like to still visit some blocked websites (facebook, youtube). I have VPS in the USA and I have installed tinyproxy on it. I log in on my VPS with SSH port-forwarding and I have configured my browser appropriately. Everything works more or less: I can surf to those websites but everything is inusually slow and sometimes data transfer stops abruptly. This probably has to do with the fact that I see some errors in my shell on the VPS like : channel 6: open failed: connect failed: Also in the log-file of tinyproxy I see some bad things: ERROR Sep 06 14:52:14 [28150]: getpeer_information: getpeername() error: Transport endpoint is not connected ERROR Sep 06 14:52:15 [28153]: writebuff: write() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:15 [28168]: readbuff: recv() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:15 [28151]: readbuff: recv() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:15 [28143]: readbuff: recv() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:17 [28147]: writebuff: write() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:23 [28137]: writebuff: write() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:26 [28168]: getpeer_information: getpeername() error: Transport endpoint is not connected ERROR Sep 06 14:52:27 [28186]: read_request_line: Client (file descriptor: 7) closed socket before read. ERROR Sep 06 14:52:31 [28160]: getpeer_information: getpeername() error: Transport endpoint is not connected

    Read the article

  • Peer review of maatkit's mk-parallel-dump and mk-parallel-restore usage?

    - by Brent
    Hiya, I'm trying to make use of maatkit as a means of dumping a database and then restoring it to another database. For dumps: mk-parallel-dump --user abc --password xyz --databases $db --base-dir /tmp/dump For restore: mk-parallel-restore --create-databases --user abc --password xyz --database devdb /tmp/dump My question is: is my logic and understanding correct, and would it be OK to do it like this? Kind Regards, Brent

    Read the article

  • Java: Cleaning up connection reset (but not by peer).

    - by Zombies
    There seems to be some confusion, as well as contradicting statements, on various SO answers: http://stackoverflow.com/questions/585599/whats-causing-my-java-net-socketexception-connection-reset . You can see there that the accepted answer states that the connection was closed by the other side. But this is not true: closing a connection doesn't cause a connection reset. It is caused by "an underlying TCP/IP error." What I want to know is what a SocketException: Connection reset really means besides "underlying TCP/IP error." What really causes this? I doubt it has anything to do with the connection being closed (since closing a connection isn't an exception-worthy flag, and reading from a closed connection is, but that isn't an "underlying TCP/IP error"). My hypothesis is that this Connection reset is caused by a server's failure to acknowledge an ACK packet (either wholly or just improperly, as per TCP/IP), and that a SocketTimeoutException is generated only when no data is generated to be read (since this is thrown during a read after a certain duration, and read is waiting for data, but is not concerned with ACK packets). In other words, read() throws SocketTimeoutException if it didn't read any bytes of actual data (DATA LAYER) in its allotted time.
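
    A small sketch to make the distinction in that last paragraph concrete: with SO_TIMEOUT set, a quiet-but-healthy connection surfaces as SocketTimeoutException on read(), while a transport-level failure such as a reset surfaces as SocketException (the host and request below are placeholders):

      import java.io.IOException;
      import java.io.InputStream;
      import java.net.Socket;
      import java.net.SocketException;
      import java.net.SocketTimeoutException;

      public class ResetVsTimeout {
          public static void main(String[] args) throws IOException {
              try (Socket s = new Socket("example.org", 80)) {
                  s.setSoTimeout(5000);        // read() may block for at most 5 seconds
                  s.getOutputStream().write("GET / HTTP/1.0\r\nHost: example.org\r\n\r\n".getBytes("US-ASCII"));
                  InputStream in = s.getInputStream();
                  byte[] buf = new byte[4096];
                  int n;
                  while ((n = in.read(buf)) != -1) {
                      System.out.write(buf, 0, n);
                  }
              } catch (SocketTimeoutException e) {
                  // No data arrived within SO_TIMEOUT; the connection itself may still be fine.
                  System.err.println("read timed out: " + e.getMessage());
              } catch (SocketException e) {
                  // Includes "Connection reset": the transport layer reported a failure (e.g. an RST).
                  System.err.println("transport error: " + e.getMessage());
              }
          }
      }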

    Read the article

  • Problem with VPN setup on Ubuntu Server 12.04

    - by Yozone W.
    I have a problem with setup VPN server on my Ubuntu VPS, here is my server environments: Ubuntu Server 12.04 x86_64 xl2tpd 1.3.1+dfsg-1 pppd 2.4.5-5ubuntu1 openswan 1:2.6.38-1~precise1 After install software and configuration: ipsec verify Checking your system to see if IPsec got installed and started correctly: Version check and ipsec on-path [OK] Linux Openswan U2.6.38/K3.2.0-24-virtual (netkey) Checking for IPsec support in kernel [OK] SAref kernel support [N/A] NETKEY: Testing XFRM related proc values [OK] [OK] [OK] Checking that pluto is running [OK] Pluto listening for IKE on udp 500 [OK] Pluto listening for NAT-T on udp 4500 [OK] Checking for 'ip' command [OK] Checking /bin/sh is not /bin/dash [WARNING] Checking for 'iptables' command [OK] Opportunistic Encryption Support [DISABLED] /var/log/auth.log message: Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [RFC 3947] method set to=115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] meth=114, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-08] meth=113, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-07] meth=112, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-06] meth=111, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-05] meth=110, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-04] meth=109, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: ignoring Vendor ID payload [FRAGMENTATION 80000000] Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [Dead Peer Detection] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: responding to Main Mode from unknown peer [My IP Address] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1 Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R1: sent MR1, expecting MI2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): peer is NATed Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R2: sent MR2, expecting MI3 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: ignoring informational payload, type 
IPSEC_INITIAL_CONTACT msgid=00000000 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: Main mode peer ID is ID_IPV4_ADDR: '192.168.12.52' Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: switched from "L2TP-PSK-NAT" to "L2TP-PSK-NAT" Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0} Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: new NAT mapping for #5, was [My IP Address]:2251, now [My IP Address]:2847 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024} Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: Dead Peer Detection (RFC 3706): enabled Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: the peer proposed: [My Server IP Address]/32:17/1701 -> 192.168.12.52/32:17/0 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: NAT-Traversal: received 2 NAT-OA. using first, ignoring others Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: responding to Quick Mode proposal {msgid:8579b1fb} Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: us: [My Server IP Address]<[My Server IP Address]>:17/1701 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: them: [My IP Address][192.168.12.52]:17/65280===192.168.12.52/32 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: Dead Peer Detection (RFC 3706): enabled Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x08bda158 <0x4920a374 xfrm=AES_256-HMAC_SHA1 NATOA=192.168.12.52 NATD=[My IP Address]:2847 DPD=enabled} Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA(0x08bda158) payload: deleting IPSEC State #6 Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: ERROR: netlink XFRM_MSG_DELPOLICY response for flow eroute_connection delete included errno 2: No such file or directory Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received and ignored informational message Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA payload: deleting ISAKMP State #5 Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address]: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0} Oct 16 06:51:16 vpn pluto[3963]: packet from [My IP Address]:2847: received and ignored informational message xl2tpd -D message: xl2tpd[4289]: Enabling IPsec SAref processing for L2TP transport mode SAs xl2tpd[4289]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes xl2tpd[4289]: setsockopt recvref[30]: Protocol not available xl2tpd[4289]: This binary does not 
support kernel L2TP. xl2tpd[4289]: xl2tpd version xl2tpd-1.3.1 started on vpn.netools.me PID:4289 xl2tpd[4289]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc. xl2tpd[4289]: Forked by Scott Balmos and David Stipp, (C) 2001 xl2tpd[4289]: Inherited by Jeff McAdams, (C) 2002 xl2tpd[4289]: Forked again by Xelerance (www.xelerance.com) (C) 2006 xl2tpd[4289]: Listening on IP address [My Server IP Address], port 1701 Then it just stopped here, and have no any response. I can't connect VPN on my mac client, the /var/log/system.log message: Oct 16 15:17:36 azone-iMac.local configd[17]: SCNC: start, triggered by SystemUIServer, type L2TP, status 0 Oct 16 15:17:36 azone-iMac.local pppd[3799]: pppd 2.4.2 (Apple version 596.13) started by azone, uid 501 Oct 16 15:17:38 azone-iMac.local pppd[3799]: L2TP connecting to server 'vpn.netools.me' ([My Server IP Address])... Oct 16 15:17:38 azone-iMac.local pppd[3799]: IPSec connection started Oct 16 15:17:38 azone-iMac.local racoon[359]: Connecting. Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 started (Initiated by me). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 1). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 2). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 3). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 4). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 5). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 AUTH: success. (Initiator, Main-Mode Message 6). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 6). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 Initiator: success. (Initiator, Main-Mode). Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 established (Initiated by me). Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 started (Initiated by me). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Quick-Mode message 2). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode). Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 established (Initiated by me). Oct 16 15:17:39 azone-iMac.local pppd[3799]: IPSec connection established Oct 16 15:17:59 azone-iMac.local pppd[3799]: L2TP cannot connect to the server Oct 16 15:17:59 azone-iMac.local racoon[359]: IPSec disconnecting from server [My Server IP Address] Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA). Anyone help? Thanks a million!

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.   Part 2: SQL Server Memory Management As with any Windows process, sqlserver.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the Working Set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:   1. Conventional Memory Model   The Conventional model is the default SQL Server memory model and has the following properties: - Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions. - OS uses 4K pages – (not to be confused with SQL Server “pages” which are 8K regions of committed memory).- Pageable - Can be paged out to disk by the operating system.   2. Locked Page Model The locked page memory model is set when SQL Server is started with "Lock Pages in Memory" privilege*. It has the following characteristics: - Dynamic - can grow or shrink its working set in the same way as the Conventional model.- OS uses 4K pages - Non-Pageable – When memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system. A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model.   This is an important consideration when we look at Hyper-V Dynamic Memory – “locked” memory works perfectly well with “dynamic” memory.   * Note in “Denali” (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions) the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-Bit standard edition it also requires trace flag 845 to be set, in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.   3. Large Page Model The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by: - Static - memory is allocated at startup and does not change. - OS uses large (>2MB) pages - Non-Pageable The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.   When does SQL Server grow? For “dynamic” configurations (Conventional and Locked memory models), the sqlservr.exe process grows – allocates and commits memory from the OS – in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:   - SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. 
In SQL Server 2008 this value represents single page allocations, and in “Denali” it represents any size page allocations and also managed CLR procedure allocations.   - Memory signals from OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.   To summarize, for SQL Server to grow you need three conditions: a workload, max server memory setting higher than the current allocation, high memory signals from the OS.    When does SQL Server shrink caches? SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into “internal” and “external”.   - External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and it will attempt to free memory until the signals stop.   To free memory SQL Server does the following: ·         Frees unused memory. ·         Notifies Memory Manager Clients to release memory o   Caches – Free unreferenced cache objects. o   Buffer pool - Based on oldest access times.   The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.    - Internal memory pressure occurs when the size of different caches and allocations increase but the SQL Server process needs to keep its total memory within a target value. For example if max server memory is set and certain caches are growing large, it will cause SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.    Memory pressure handling has not changed much since SQL 2005 and it was described in detail in a blog post by Slava Oks.   Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more “desktop” friendly. It will empty its working set after a period of inactivity.   How does SQL Server respond to changing OS memory?    In SQL Server 2005 support for Hot-Add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory when the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-add physical memory to the guest. What this means is that thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more “physical” memory is granted to a guest VM by Hyper-V dynamic memory.   SQL Server checks OS memory every second and dynamically adjusts its “target” (based on available OS memory and max server memory) accordingly.   In “Denali” Standard Edition will also have sqlserver.exe support for hot-add memory when running virtualized (i.e. 
In "Denali", Standard Edition will also have sqlservr.exe support for hot-add memory when running virtualized (i.e. detecting and acting on Hyper-V Dynamic Memory allocations).

How does a SQL Server workload in a guest VM impact Hyper-V Dynamic Memory scheduling?

When a SQL workload causes the sqlservr.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up.

How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?

If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), the Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see the cache shrinking description above).

This raises another question. Can we make SQL Server release memory more readily and hence behave more "dynamically" without compromising performance? In certain circumstances where the application workload is predictable, it may be possible to have a job which varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicability, but it is something we're looking into.

What Memory Management changes are there in SQL Server "Denali"?

In SQL Server "Denali" (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes any size page allocations and managed CLR procedure allocations, so it represents a closer approximation to total sqlservr.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings.

Another important change is that there is no more AWE or hot-add support for the 32-bit edition. This means if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).

In part 3 we'll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/
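Going back to the idea above of a job that varies "max server memory" according to need, a minimal sketch of such an adjustment is shown below. The specific values, and the assumption that it would be driven by a scheduled job, are illustrative only and not a recommendation from the article:

    -- Illustrative only: lower max server memory during a known quiet period so the
    -- instance generates internal memory pressure and releases memory back to the OS,
    -- then raise it again before the busy period. The MB values are placeholders.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Quiet period: shrink the target.
    EXEC sp_configure 'max server memory (MB)', 2048;
    RECONFIGURE;

    -- Before peak load: allow the buffer pool to grow again.
    EXEC sp_configure 'max server memory (MB)', 4096;
    RECONFIGURE;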

    Read the article

  • AIX Grid Control 10.2.0.5 Communication and Monitoring Issue since 31-DEC-2010

    - by jayatheertha.rao(at)oracle.com
    Detailed symptoms for Oracle Management Server (OMS) 10.2.0.5 on AIX

Oracle Management Service 10.2.0.5 instances on AIX 5L remain active and functional, but the OMS instances fail to communicate with the Grid Control Management Agents. An SSLPeerUnverified exception will be reported in the file $ORACLE_HOME/sysman/log/emoms.trc when OMS attempts to connect with an Agent:

javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
    at com.sun.net.ssl.internal.ssl.SSLSessionImpl.getPeerCertificateChain(DashoA12275)
    at oracle.sysman.emSDK.emd.comm.EMDClient.authenticateHTTPConnection(EMDClient.java:2002)
    at oracle.sysman.emSDK.emd.comm.EMDClient.getConnection(EMDClient.java:1877)
    at oracle.sysman.emSDK.emd.comm.EMDClient.getConnection(EMDClient.java:1810)
    at oracle.sysman.emSDK.emd.comm.EMDClient.verifyHttpConnection(EMDClient.java:2540)
    at oracle.sysman.emSDK.emd.comm.EMDClient.getResponseForRequest(EMDClient.java:2323)
    at oracle.sysman.emSDK.emd.comm.EMDClient.getUploadManagerStatus(EMDClient.java:4853)
    at oracle.sysman.eml.admin.rep.emdConfig.EmdConfigTargetsData.getEmdUploadData(EmdConfigTargetsData.java:1640)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

This error may be reported when:
- Accessing the Agent home page in Grid Control
- Setting preferred credentials for a target monitored by the Agent
- Managing metrics for a target monitored by the Agent

The jobs scheduled to be run by Agents can become non-responsive. The OMS log file $ORACLE_HOME/sysman/log/emoms.trc can show:

2010-12-31 00:06:58,204 [JobWorker 430:Thread-34] DEBUG emSDK.comm getStreamResponse.4015 - oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
    at oracle.sysman.emSDK.emd.comm.EMDClient.getStreamResponse_(EMDClient.java:4088)
    at oracle.sysman.emSDK.emd.comm.EMDClient.getStreamResponse(EMDClient.java:4009)
    at oracle.sysman.emSDK.emd.comm.EMDClient.remoteOperation(EMDClient.java:3404)
    at oracle.sysman.emdrep.jobs.CommandManager.requestRemoteCommand(CommandManager.java:765)
    at oracle.sysman.emdrep.jobs.commands.RemoteOp.executeCommand(RemoteOp.java:434)
    at oracle.sysman.emdrep.jobs.commands.RemoteOp.executeCommand(RemoteOp.java:491)
    at oracle.sysman.emdrep.jobs.BaseJobWorker.runStep(BaseJobWorker.java:614)
    at oracle.sysman.emdrep.jobs.BaseJobWorker.doOneOperation(BaseJobWorker.java:738)
    at oracle.sysman.emdrep.jobs.JobWorker.doOneOperation(JobWorker.java:306)
    at oracle.sysman.emdrep.jobs.JobWorker.run(JobWorker.java:288)
    at oracle.sysman.util.threadPoolManager.WorkerThread.run(Worker.java:261)

Detailed symptoms for Grid Control Management Agent 10.2.0.5 on AIX

Beginning 31-DEC-2010 00:00:00, 10.2.0.5 Management Agents running on the AIX 5L operating system will fail to monitor Oracle Application Server targets. As a result, the Availability Status for the Oracle Application Server targets will be in the "Metric Error" state.

NOTE: The 10.2.0.5.0 Agents would experience these errors regardless of the version/platform of the OMS.

The following metric error is seen in the console for the Oracle Application Server targets monitored by a Grid Control Management Agent 10.2.0.5 installed on AIX and experiencing a Root Certificate Authority issue:

Message: oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated

The Grid Control Management Agent log file $ORACLE_HOME/sysman/log/emagentfetchlet.log (or $ORACLE_HOME/hostname/sysman/log/emagentfetchlet.log for a clustered Agent) includes the following errors:

2010-12-31 00:01:03,626 [nmefmgr_getJNIFetchlet] ERROR ias.ResponseMetric getResponseMetric.154 - Unable to compute application server status
oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
    at oracle.sysman.ias.ias.ResponseMetric.getResponseMetric(ResponseMetric.java:108)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:618)
    at oracle.sysman.emd.fetchlets.JavaWrapperFetchlet.getMetric(JavaWrapperFetchlet.java:217)
    at oracle.sysman.emd.fetchlets.FetchletWrapper.getMetric(FetchletWrapper.java:382)

Beginning 31-DEC-2010, 10.2.0.5 Management Agents on the AIX 5L platform will fail to secure or re-secure with Oracle Management Service (OMS). This failure will cause installation of 10.2.0.5 Agents on the AIX 5L platform to fail.

NOTE: The 10.2.0.5.0 Agents would experience these errors regardless of the version/platform of the OMS.

The "emctl secure agent" command will fail with the following error, which will be written to the $ORACLE_HOME/sysman/log/secure.log file (or $ORACLE_HOME/hostname/sysman/log/secure.log for a clustered Agent):

2011-01-03 21:06:11,941 [main] ERROR agent.SecureAgentCmd main.207 - Failed to secure the Agent:
javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
    at com.sun.net.ssl.internal.ssl.SSLSessionImpl.getPeerCertificateChain(DashoA6275)
    at oracle.sysman.emctl.secure.agent.SecureAgentCmd.checkUpload(SecureAgentCmd.java:478)
    at oracle.sysman.emctl.secure.agent.SecureAgentCmd.secureAgent(SecureAgentCmd.java:249)
    at oracle.sysman.emctl.secure.agent.SecureAgentCmd.main(SecureAgentCmd.java:200)

For the solution, refer to AIX Grid Control 10.2.0.5 SSL Communication and Monitoring Issue since 31-DEC-2010 (Doc ID 1275070.1)

    Read the article
