Search Results

Search found 3315 results on 133 pages for 'magic packet'.

  • Creating a UIImage from a rotated UIImageView...

    - by Magic Bullet Dave
    I have a UIImageView with an image in it. I have rotated the image prior to display by setting the transform property of the UIImageView to CGAffineTransformMakeRotation(angle), where angle is the angle in radians. I want to be able to create another UIImage that corresponds to the rotated version that I can see in my view. I am almost there; by rotating the image context I get a rotated image:

        - (UIImage *)rotatedImageFromImageView:(UIImageView *)imageView
        {
            UIImage *rotatedImage;

            // Get the image width and height of the bounding rectangle
            CGRect boundingRect = [self getBoundingRectAfterRotation:imageView.bounds byAngle:angle];

            // Create a graphics context the size of the bounding rectangle
            UIGraphicsBeginImageContext(boundingRect.size);
            CGContextRef context = UIGraphicsGetCurrentContext();

            // Rotate and translate the context
            CGAffineTransform ourTransform = CGAffineTransformIdentity;
            ourTransform = CGAffineTransformConcat(ourTransform, CGAffineTransformMakeRotation(angle));
            CGContextConcatCTM(context, ourTransform);

            // Draw the image into the context
            CGContextDrawImage(context, CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height), imageView.image.CGImage);

            // Get an image from the context
            rotatedImage = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];

            // Clean up
            UIGraphicsEndImageContext();
            return rotatedImage;
        }

    However, the image is not rotated about its centre. I have tried all kinds of transforms concatenated with my rotation to get it to rotate around the centre, but to no avail. Am I missing a trick? Is this even possible, since I am rotating the context and not the image? Getting desperate to make this work now, so any help would be appreciated. Dave
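
    A minimal sketch of the usual fix, written in plain C against the Core Graphics API (the function and its name are illustrative, not from the question): translate the context to the centre of the bounding box, rotate there, then draw the image offset by half its own size so that it spins around its centre rather than the (0,0) corner.

        #include <CoreGraphics/CoreGraphics.h>

        /* Sketch: render a rotated copy of an image, rotating about its centre. */
        CGImageRef rotatedCopy(CGImageRef image, CGSize boundingSize, CGFloat angle)
        {
            CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
            CGContextRef ctx = CGBitmapContextCreate(NULL,
                (size_t)boundingSize.width, (size_t)boundingSize.height,
                8, 0, space, kCGImageAlphaPremultipliedLast);

            /* Rotate about the centre of the new context, not its corner. */
            CGContextTranslateCTM(ctx, boundingSize.width / 2, boundingSize.height / 2);
            CGContextRotateCTM(ctx, angle);

            /* Draw the image centred on the rotation point. */
            CGFloat w = CGImageGetWidth(image), h = CGImageGetHeight(image);
            CGContextDrawImage(ctx, CGRectMake(-w / 2, -h / 2, w, h), image);

            CGImageRef result = CGBitmapContextCreateImage(ctx);
            CGContextRelease(ctx);
            CGColorSpaceRelease(space);
            return result;   /* caller releases */
        }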

  • No method found compiler warning

    - by Magic Bullet Dave
    I create an instance of a class from a string, check that it is valid, and then check whether it responds to a particular method. If it does, I call the method. It all works fine, except I get an annoying compiler warning: "warning: no '-setCurrentID:' method found". Am I doing something wrong here? Is there any way to tell the compiler all is OK and stop it reporting a warning? Here is the code:

        // Create an instance of the class
        id viewController = [[NSClassFromString(class) alloc] init];

        // Check the class supports the methods to set the row and section
        if ([viewController respondsToSelector:@selector(setCurrentID:)])
        {
            [viewController setCurrentID:itemID];
        }

        // Push the view controller onto the tab bar stack
        [self.navigationController pushViewController:viewController animated:YES];
        [viewController release];

    Cheers Dave
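
    The idiomatic way to silence this is to send the message dynamically, e.g. [viewController performSelector:@selector(setCurrentID:) withObject:itemID], which the compiler does not type-check. For illustration, here is the same dynamic dispatch expressed at the C runtime level (a sketch; it assumes itemID is an object):

        #include <objc/message.h>
        #include <objc/runtime.h>

        /* Sketch: dispatch through the Objective-C C runtime. The selector is
           looked up at run time, so the compiler has nothing to warn about. */
        void set_current_id(id viewController, id itemID)
        {
            SEL sel = sel_registerName("setCurrentID:");
            if (class_respondsToSelector(object_getClass(viewController), sel))
                ((void (*)(id, SEL, id))objc_msgSend)(viewController, sel, itemID);
        }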

  • How do I make my multicast program work between computers on different networks?

    - by George
    I made a little chat applet using multicast. It works fine between computers on the same network, but fails if the computers are on different networks. Why is this?

        import java.io.*;
        import java.net.*;
        import java.awt.*;
        import java.awt.event.*;
        import javax.swing.*;

        public class ClientA extends JApplet implements ActionListener, Runnable
        {
            JTextField tf;
            JTextArea ta;
            MulticastSocket socket;
            InetAddress group;
            String name = "";

            public void start()
            {
                try
                {
                    socket = new MulticastSocket(7777);
                    group = InetAddress.getByName("233.0.0.1");
                    socket.joinGroup(group);
                    socket.setTimeToLive(255);
                    Thread th = new Thread(this);
                    th.start();
                    name = JOptionPane.showInputDialog(null, "Please enter your name.", "What is your name?", JOptionPane.PLAIN_MESSAGE);
                    tf.grabFocus();
                }
                catch (Exception e) { e.printStackTrace(); }
            }

            public void init()
            {
                JPanel p = new JPanel(new BorderLayout());
                ta = new JTextArea();
                ta.setEditable(false);
                ta.setLineWrap(true);
                JScrollPane sp = new JScrollPane(ta);
                p.add(sp, BorderLayout.CENTER);
                JPanel p2 = new JPanel();
                tf = new JTextField(30);
                tf.addActionListener(this);
                p2.add(tf);
                JButton b = new JButton("Send");
                b.addActionListener(this);
                p2.add(b);
                p.add(p2, BorderLayout.SOUTH);
                add(p);
            }

            public void actionPerformed(ActionEvent ae)
            {
                String message = name + ":" + tf.getText();
                tf.setText("");
                tf.grabFocus();
                byte[] buf = message.getBytes();
                DatagramPacket packet = new DatagramPacket(buf, buf.length, group, 7777);
                try { socket.send(packet); }
                catch (Exception e) {}
            }

            public void run()
            {
                while (true)
                {
                    byte[] buf = new byte[256];
                    String received = "";
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    try
                    {
                        socket.receive(packet);
                        received = new String(packet.getData()).trim();
                    }
                    catch (Exception e) {}
                    ta.append(received + "\n");
                    ta.setCaretPosition(ta.getDocument().getLength());
                }
            }
        }

  • Creating a mask from a graphics context

    - by Magic Bullet Dave
    I want to be able to create a greyscale image with no alpha from a PNG in the app bundle. This works, and I get an image created:

        // Create graphics context the size of the overlapping rectangle
        UIGraphicsBeginImageContext(rectangleOfOverlap.size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();

        // More stuff
        CGContextDrawImage(ctx, drawRect2, [UIImage imageNamed:@"Image 01.png"].CGImage);

        // Create the new UIImage from the context
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();

    However, the resulting image is 32 bits per pixel and has an alpha channel, so when I use CGImageCreateWithMask it doesn't work. I've tried creating a bitmap context thus:

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGContextRef ctx = CGBitmapContextCreate(nil, rectangleOfOverlap.size.width, rectangleOfOverlap.size.height, 8, rectangleOfOverlap.size.width, colorSpace, kCGImageAlphaNone);

    but UIGraphicsGetImageFromCurrentImageContext returns zero and the resulting image is not created. Am I doing something dumb here? Any help would be greatly appreciated. Regards Dave
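
    A sketch of one likely missing step (an assumption about the failure, not confirmed by the post): a context made with CGBitmapContextCreate is not the "current" UIKit context, so UIGraphicsGetImageFromCurrentImageContext returns nil for it; the image has to be pulled straight out of the bitmap context instead.

        #include <CoreGraphics/CoreGraphics.h>

        /* Draw a source image into an 8-bit greyscale, no-alpha bitmap context
           and extract a CGImage usable with CGImageCreateWithMask. */
        CGImageRef grayscaleCopy(CGImageRef source, CGSize size)
        {
            CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
            CGContextRef ctx = CGBitmapContextCreate(NULL,
                (size_t)size.width, (size_t)size.height,
                8, 0, gray, kCGImageAlphaNone);   /* bytesPerRow 0 = computed */

            CGContextDrawImage(ctx, CGRectMake(0, 0, size.width, size.height), source);

            /* Pull the image from the bitmap context itself. */
            CGImageRef result = CGBitmapContextCreateImage(ctx);
            CGContextRelease(ctx);
            CGColorSpaceRelease(gray);
            return result;   /* caller releases */
        }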

  • What is the best approach to 2D collision detection on the iPhone?

    - by Magic Bullet Dave
    Been working on this problem of collision detection, and there appear to be three main approaches I could take:

    1. Sprite and mask approach: AND the overlap of the sprites and check for a non-zero number in the resulting sprite pixel data.
    2. Bounding circles, rectangles or polygons: create one or more shapes that enclose the sprites and do the basic maths to check for overlaps.
    3. Use an existing sprite library.

    The first approach, even though it would have been the way I would have done it in the old days of 16x16 sprite blocks, appears impractical because there just isn't an easy way of getting at the individual image pixel data and/or alpha channel within Quartz (or OpenGL, for that matter). Detecting the overlap of the bounding box is easy, but then creating a third image from the overlap and testing it for pixels is complicated, and my gut feel is that even if we could get it to work it would be slow. Am I missing something neat here?

    The second approach involves dividing up our sprites into several polygons and testing them for overlaps. The more polygons, the more accurate the collision detection. The benefit is that it is fast and can be accurate. The downside is that it makes sprite creation more complicated, i.e. we have to create the polygons for each sprite. For speed the best approach is to create a tree of polygons.

    The third approach I'm not sure about, as it involves buying code (or using an open source licence). I am not sure what the best library to use is, or whether this would make life easier or give us a problem integrating it into our app.

    So in short I am favouring the polygon and tree approach, and would appreciate your views on this before I go and write lots of code. Best regards Dave
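
    For reference, the bounding-circle variant of the second approach is only a few lines of C: two sprites overlap when the distance between their centres is no more than the sum of their radii, and comparing squared distances avoids the square root (a sketch; the Circle type is illustrative):

        #include <stdbool.h>

        typedef struct { float x, y, radius; } Circle;

        /* True if the circles overlap or touch. */
        bool circlesOverlap(Circle a, Circle b)
        {
            float dx = a.x - b.x;
            float dy = a.y - b.y;
            float r  = a.radius + b.radius;
            return dx * dx + dy * dy <= r * r;   /* squared-distance test, no sqrt */
        }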

  • Using the contents of an array to set individual pixels in a Quartz bitmap context

    - by Magic Bullet Dave
    I have an array that contains the RGB colour values for each pixel in a 320 x 180 display. I would like to be able to set individual pixel values in a bitmap context of the same size offscreen, then display the bitmap context in a view. It appears that I have to create 1x1 rects and either put a stroke on them or draw a line of length 1 at the point in question. Is that correct? I'm looking for a very efficient way of getting the array data onto the graphics context; as you can imagine, this is going to be an image buffer that cycles at 25 frames per second, and drawing in this way seems inefficient. I guess the other question is: should I use OpenGL ES instead? Thoughts/best practice would be much appreciated. Regards Dave

    OK, have come a short way, but I can't make the final hurdle and I am not sure why this isn't working:

        - (void)displayContentsOfArray1UsingBitmap:(CGContextRef)context
        {
            long bitmapData[WIDTH * HEIGHT];

            // Build bitmap
            int i, j, h;
            for (i = 0; i < WIDTH; i++)
            {
                for (j = 0; j < HEIGHT; j++)
                {
                    h = frameBuffer01[i][j];
                    bitmapData[i * j] = h;
                }
            }

            // Blit the bitmap to the context
            CGDataProviderRef providerRef = CGDataProviderCreateWithData(NULL, bitmapData, 4 * WIDTH * HEIGHT, NULL);
            CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
            CGImageRef imageRef = CGImageCreate(WIDTH, HEIGHT, 8, 32, WIDTH * 4, colorSpaceRef, kCGImageAlphaFirst, providerRef, NULL, YES, kCGRenderingIntentDefault);
            CGContextDrawImage(context, CGRectMake(0.0, HEIGHT, WIDTH, HEIGHT), imageRef);
            CGImageRelease(imageRef);
            CGColorSpaceRelease(colorSpaceRef);
            CGDataProviderRelease(providerRef);
        }
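
    A sketch of the likely culprit (an assumption, not confirmed by the post): bitmapData[i * j] collapses many (i, j) pairs onto the same slot: (2, 3) and (3, 2) both land on index 6, and all of row 0 and column 0 land on index 0. A row-major layout needs x + y * WIDTH. A fixed-width element type also matters here: long is 8 bytes on 64-bit builds, while the CGImage parameters above promise 4 bytes per pixel.

        #include <stdint.h>

        #define WIDTH  320
        #define HEIGHT 180

        /* Copy a column-indexed frame buffer into a row-major 32-bit bitmap. */
        void buildBitmap(uint32_t bitmapData[WIDTH * HEIGHT],
                         uint32_t frameBuffer[WIDTH][HEIGHT])
        {
            for (int x = 0; x < WIDTH; x++)
                for (int y = 0; y < HEIGHT; y++)
                    bitmapData[y * WIDTH + x] = frameBuffer[x][y];
        }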

  • How to route a URL from one domain to another

    - by Magic
    Hello, I am a C# ASP.NET developer. I am trying to route URLs from one domain to another using a GoDaddy IIS virtual dedicated server or dedicated server. For example, I have a web application called A_Application on my server. An example URL: www.myserver.com/A_Application/product/bear/?productid=1, or using a pretty URL, www.myserver.com/A_Application/product/bear/1. I would like to set things up so that my client can point to A_Application using his/her own domain. My client's example URL would be: www.hisserver.com/product/bear/?productid=1, or using a pretty URL, www.hisserver.com/product/bear/1. Thanks!

  • iOS 4 UISplitViewController in Portrait Orientation with RootViewController showing like Landscape

    - by magic-c0d3r
    In iOS 3.2 I was able to display my UISplitViewController side by side, as in landscape mode. In iOS 4.2 the RootViewController (master view) is not showing up in portrait mode. Does anyone know if we need to display the root view controller in a popover? Can we display it side by side, like how it is in landscape mode? I want to avoid having to click on a button to show the master view when in portrait mode.

  • iPhone 4.0 Screen Resolution and writing robust code...

    - by Magic Bullet Dave
    Does anyone know what will happen with existing apps when they run on iPhone 4.0 in terms of the new screen resolution? I am assuming, just like developing for the iPad, that there should be no hard-coded screen resolutions in your code. I'd also like advice on the best way of writing robust code that works well on any device. For instance, detecting the screen resolution is not enough: on the iPad the screen is physically bigger, so you can display more items on it. On the new iPhone the screen is the same physical size but higher resolution, so the likely thing is that you won't want to display more items, just higher-resolution versions of them. Any help would be useful. Regards Dave

    EDIT: I have read the other similar posts. I guess what I really would like to know is the recommended way to write code for all App Store devices in a robust way, so they (a) all work and (b) make best use of the device.

  • UILabel to render partial character using clip

    - by magic-c0d3r
    I want a UILabel to render a partial character by setting the lineBreakMode to clip, but it is clipping the entire character. Is there a different way to clip a word so that only a partial character is displayed? Let's say I have a string like "Hello World", and that string is in myLabel with a width that fits only 5 characters and part of the "W". I want it still to render part of the "W" and not drop it from the render.

  • Removing a UIView from its superView and expanding its frame to full screen

    - by Magic Bullet Dave
    I have an object that is a subclass of UIView that can be added to a view hierarchy as a subview. I want to be able to remove the UIView from its superview, add it as a subview of the main window, and then expand it to full screen. Something along the lines of:

        // Remove from superview and add to the main window
        [self retain];
        [self removeFromSuperview];
        [mainWindow addSubview:self];

        // Animate to full screen
        [UIView beginAnimations:@"expandToFullScreen" context:nil];
        [UIView setAnimationDuration:1.0];
        self.frame = [[UIScreen mainScreen] applicationFrame];
        [UIView commitAnimations];

        [self release];

    Firstly, am I on the right lines? Secondly, is there an easy way for the object to get a pointer to the main window? Thanks Dave

  • Feedback on Optimizing C# NET Code Block

    - by Brett Powell
    I just spent quite a few hours reading up on TCP servers and the protocol I was trying to implement, and finally got everything working great. I noticed the code looks like absolute bollocks (is that the correct usage? I'm not a Brit) and would like some feedback on optimizing it, mostly for reuse and readability. The packet format is always int, int, int, string, string.

        try
        {
            BinaryReader reader = new BinaryReader(clientStream);
            int packetsize = reader.ReadInt32();
            int requestid = reader.ReadInt32();
            int serverdata = reader.ReadInt32();
            Console.WriteLine("Packet Size: {0} RequestID: {1} ServerData: {2}", packetsize, requestid, serverdata);

            List<byte> str = new List<byte>();
            byte nextByte = reader.ReadByte();
            while (nextByte != 0)
            {
                str.Add(nextByte);
                nextByte = reader.ReadByte();
            }

            // Password Sent to be Authenticated
            string string1 = Encoding.UTF8.GetString(str.ToArray());
            str.Clear();

            nextByte = reader.ReadByte();
            while (nextByte != 0)
            {
                str.Add(nextByte);
                nextByte = reader.ReadByte();
            }

            // NULL string
            string string2 = Encoding.UTF8.GetString(str.ToArray());
            Console.WriteLine("String1: {0} String2: {1}", string1, string2);

            // Reply to Authentication Request
            MemoryStream stream = new MemoryStream();
            BinaryWriter writer = new BinaryWriter(stream);
            writer.Write((int)(1));         // Packet Size
            writer.Write((int)(requestid)); // Mirror RequestID if Authenticated, -1 if Failed
            byte[] buffer = stream.ToArray();
            clientStream.Write(buffer, 0, buffer.Length);
            clientStream.Flush();
        }

    I am going to be dealing with other packet types as well that are formatted the same way (int/int/int/str/str) but with different values. I could probably create a packet class, but this is a bit outside my scope of knowledge for how to apply it to this scenario. If it makes any difference, this is the protocol I am implementing: http://developer.valvesoftware.com/wiki/Source_RCON_Protocol

  • Perl parsing ps fwaux output

    - by Magic Hat
    I am trying to figure out the child processes of a given parent from ps fwaux (there may very well be a better way to do this). Basically, I have daemons running that may or may not have a child process running at any given time. In another script I want to check whether there are any child processes, and if so do something; if not, error out. ps fwaux | grep will show me the tree, but I'm not exactly sure what to do with it. Any suggestions would be great.

  • XNA Multiplayer Games and Networking

    - by JoshReuben
    XNA communication must by default be lightweight. If you are syncing game state between players from the Game.Update method, you must minimize traffic: that game loop may be firing 60 times a second, and player 5 needs to know if his tank has collided with player 3 and the angle of that gun turret. There are no WCF ServiceContract / DataContract niceties here, but at the same time the XNA networking stack simplifies the details. The payload must be simplistic: just an ordered set of numbers that you map to meaningful enum values upon deserialization.

    Overview
    - XNA allows you to create and join multiplayer game sessions, to manage game state across clients, and to interact with the friends list.
    - Dependency on Gamer Services: needed to receive notifications such as sign-in status changes and game invitations.
    - Two types of online multiplayer games: system link game sessions (LAN) and LIVE sessions (WAN).
    - Minimum dev requirements: 1 Xbox 360 console + Creators Club membership to test network code; run one instance of the game on the Xbox 360 and one on a Windows-based computer.

    Network Sessions
    - A network session is made up of the players in a game, plus up to 8 arbitrary integer properties describing the session.
    - Create custom enums (e.g. GameMode, SkillLevel) as keys in the NetworkSessionProperties collection.
    - Player states: lobby, in-play.

    Session Types
    - Local session: for split-screen gaming; requires no network traffic.
    - System link session: connects multiple gaming machines over a local subnet.
    - Xbox LIVE multiplayer session: occurs on the Internet; ranked or unranked.

    Session Updates
    - The NetworkSession class Update method must be called once per frame. It performs the following actions:
      - Sends the network packets.
      - Changes the session state.
      - Raises the managed events for any significant state changes.
      - Returns the incoming packet data.
    - Synchronizing the session via the packet-received and state-change events means no threading issues.

    Session Config
    - Session host: the gaming machine that creates the session. XNA handles host migration.
    - NetworkSession properties: AllowJoinInProgress, AllowHostMigration.
    - NetworkSession groups: AllGamers, LocalGamers, RemoteGamers.

    Subscribe to NetworkSession events
    - GamerJoined
    - GamerLeft
    - GameStarted
    - GameEnded (use to return to the lobby)
    - SessionEnded (use to return to the title screen)

    Create a Session

        session = NetworkSession.Create(
            NetworkSessionType.SystemLink,
            maximumLocalPlayers,
            maximumGamers,
            privateGamerSlots,
            sessionProperties);

    Start a Session

        if (session.IsHost)
        {
            if (session.IsEveryoneReady)
            {
                session.StartGame();
                foreach (var gamer in SignedInGamer.SignedInGamers)
                {
                    gamer.Presence.PresenceMode =
                        GamerPresenceMode.InCombat;

    Find a Network Session

        AvailableNetworkSessionCollection availableSessions = NetworkSession.Find(
            NetworkSessionType.SystemLink,
            maximumLocalPlayers,
            networkSessionProperties);
        availableSessions.AllowJoinInProgress = true;

    Join a Network Session

        NetworkSession session = NetworkSession.Join(
            availableSessions[selectedSessionIndex]);

    Sending Network Data

        var packetWriter = new PacketWriter();
        foreach (LocalNetworkGamer gamer in session.LocalGamers)
        {
            // Get the tank associated with this player.
            Tank myTank = gamer.Tag as Tank;

            // Write the data.
            packetWriter.Write(myTank.Position);
            packetWriter.Write(myTank.TankRotation);
            packetWriter.Write(myTank.TurretRotation);
            packetWriter.Write(myTank.IsFiring);
            packetWriter.Write(myTank.Health);

            // Send it to everyone.
            gamer.SendData(packetWriter, SendDataOptions.None);
        }

    Receiving Network Data

        foreach (LocalNetworkGamer gamer in session.LocalGamers)
        {
            // Keep reading while packets are available.
            while (gamer.IsDataAvailable)
            {
                NetworkGamer sender;

                // Read a single packet.
                gamer.ReceiveData(packetReader, out sender);

                if (!sender.IsLocal)
                {
                    // Get the tank associated with this packet.
                    Tank remoteTank = sender.Tag as Tank;

                    // Read the data and apply it to the tank.
                    remoteTank.Position = packetReader.ReadVector2();
                    ...

    End a Session

        if (session.AllGamers.Count == 1)
        {
            session.EndGame();
            session.Update();
        }

    Performance
    - Aim to minimize the payload and the use of reliable, in-order messages.
    - Send data options:
      - Unreliable, out of order (SendDataOptions.None)
      - Unreliable, in order (SendDataOptions.InOrder)
      - Reliable, out of order (SendDataOptions.Reliable)
      - Reliable, in order (SendDataOptions.ReliableInOrder)
      - Chat data (SendDataOptions.Chat)
    - Simulate network conditions: NetworkSession.SimulatedLatency, NetworkSession.SimulatedPacketLoss.
    - Voice support: NetworkGamer properties HasVoice, IsTalking, IsMutedByLocalUser.

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means that each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or, with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also in the CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc.). Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:

    - Announcing the arrival or departure of a member
    - Updating partition assignment maps across the cluster
    - Creating or destroying a NamedCache
    - Invalidating a cache entry from a large number of client-side near caches
    - Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
    - Invoking clear() on a NamedCache

    The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc.). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation.

    In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical.

    For the non-operational concerns (near caches, queries, etc.), the application itself will determine how much load is placed on the cluster. Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly, since even with a constant request rate the underlying workload will increase at roughly the same rate as the underlying resources are added.

    Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.

  • linux routing bug?

    - by Balázs Pozsár
    I have been struggling with this not-easily-reproducible issue for a while. I am using Linux kernel v3.1.0, and sometimes routing to a few IP addresses does not work. What seems to happen is that instead of sending the packet to the gateway, the kernel treats the destination address as local and tries to get its MAC address via ARP. For example, my current IP address is 172.16.1.104/24 and the gateway is 172.16.1.254:

        # ifconfig eth0
        eth0      Link encap:Ethernet  HWaddr 00:1B:63:97:FC:DC
                  inet addr:172.16.1.104  Bcast:172.16.1.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:230772 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:171013 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:191879370 (182.9 Mb)  TX bytes:47173253 (44.9 Mb)
                  Interrupt:17

        # route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         172.16.1.254    0.0.0.0         UG    0      0        0 eth0
        172.16.1.0      0.0.0.0         255.255.255.0   U     1      0        0 eth0

    I can ping a few addresses, but not 172.16.0.59:

        # ping -c1 172.16.1.254
        PING 172.16.1.254 (172.16.1.254) 56(84) bytes of data.
        64 bytes from 172.16.1.254: icmp_seq=1 ttl=64 time=0.383 ms
        --- 172.16.1.254 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms

        root@pozsybook:~# ping -c1 172.16.0.1
        PING 172.16.0.1 (172.16.0.1) 56(84) bytes of data.
        64 bytes from 172.16.0.1: icmp_seq=1 ttl=63 time=5.54 ms
        --- 172.16.0.1 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 5.545/5.545/5.545/0.000 ms

        root@pozsybook:~# ping -c1 172.16.0.2
        PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
        64 bytes from 172.16.0.2: icmp_seq=1 ttl=62 time=7.92 ms
        --- 172.16.0.2 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 7.925/7.925/7.925/0.000 ms

        root@pozsybook:~# ping -c1 172.16.0.59
        PING 172.16.0.59 (172.16.0.59) 56(84) bytes of data.
        From 172.16.1.104 icmp_seq=1 Destination Host Unreachable
        --- 172.16.0.59 ping statistics ---
        1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

    When trying to ping 172.16.0.59, I can see in tcpdump that an ARP request was sent:

        # tcpdump -n -i eth0 | grep ARP
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
        15:25:16.671217 ARP, Request who-has 172.16.0.59 tell 172.16.1.104, length 28

    and /proc/net/arp has an incomplete entry for 172.16.0.59:

        # grep 172.16.0.59 /proc/net/arp
        172.16.0.59    0x1    0x0    00:00:00:00:00:00    *    eth0

    Please note that 172.16.0.59 is accessible from this LAN from other computers. Does anyone have any idea of what's going on? Thanks.

    Update: replies to the comments below:
    - There are no interfaces besides eth0 and lo.
    - The ARP request cannot be seen on the other end, but that's how it should work; the main problem is that an ARP request should not even be sent in the first place.
    - The problem persists even if I add an explicit route with the command "route add -host 172.16.0.59 gw 172.16.1.254 dev eth0".

  • How to log kernel panics without KVM

    - by Spacedust
    My server is crashing and I can't find an answer why. It all started after my datacenter upgraded the RAM from 16 GB to 32 GB. I also found logs like these in dmesg; they started to show up just before the first kernel panic:

        EXT4-fs error (device md2): ext4_ext_find_extent: bad header/extent in inode #97911179: invalid magic - magic 5f69, entries 28769, max 26988(0), depth 24939(0)
        EXT4-fs error (device md2): ext4_ext_remove_space: bad header/extent in inode #97911179: invalid magic - magic 5f69, entries 28769, max 26988(0), depth 24939(0)
        EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 20974: 8589 blocks in bitmap, 54896 in gd
        JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        EXT4-fs error (device md2): ext4_ext_split: inode #97911179: (comm pdflush) eh_entries 28769 != eh_max 26988!
        EXT4-fs (md2): delayed block allocation failed for inode 97911179 at logical offset 1039 with max blocks 1 with error -5
        This should not happen!! Data will be lost
        EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 21731: 5 blocks in bitmap, 60762 in gd
        JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.

    My system is CentOS 5.8 64-bit with the latest kernel, 2.6.18-308.20.1.el5. How can I check what the reason for the kernel panic is without having access to KVM? I have told my datacenter admins to check the memory in the server.

  • Cablemodem (SBG6580) firewall denying some outbound traffic? Why? Not configured [migrated]

    - by lairdb
    I finally got around to turning on the syslog for my cable modem (Motorola Surfboard SBG6580), and I'm seeing about the expected amount of inbound attacks being blocked:

        2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:56 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,4500 --> 66.27.xx.xx,61459 DENY:Firewall interface [IP Fragmented Packet] attack
        2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:56 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 17.172.232.109,5223 --> 66.27.xx.xx,53814 DENY:Firewall interface access request
        2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:57 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,443 --> 66.27.xx.xx,53385 DENY: Firewall interface [IP Fragmented Packet] attack
        2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:57 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,4500 --> 66.27.xx.xx,61459 DENY:Firewall interface [IP Fragmented Packet] attack
        2014-05-30 21:59:10 Local0.Alert 192.168.111.1 May 31 04:59:04 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,443 --> 66.27.xx.xx,59960 DENY: Firewall interface [IP Fragmented Packet] attack
        2014-05-30 21:59:10 Local0.Alert 192.168.111.1 May 31 04:59:04 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,4500 --> 66.27.xx.xx,61459 DENY:Firewall interface [IP Fragmented Packet] attack

    ...and that's great. (Sad, but great.) But I'm also seeing a HUGE amount of what appears to be denied outbound connectivity:

        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58969 --> 38.81.66.127,443 DENY: Inbound or outbound access request
        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58969 --> 38.81.66.127,443 DENY: Inbound or outbound access request
        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58965 --> 162.222.41.13,443 DENY: Inbound or outbound access request
        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58965 --> 162.222.41.13,443 DENY: Inbound or outbound access request
        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58964 --> 38.81.66.179,443 DENY: Inbound or outbound access request
        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58964 --> 38.81.66.179,443 DENY: Inbound or outbound access request

    Spot-checking suggests that it's all legitimate traffic (opening connections to CrashPlan, etc.). I have no restrictions configured in the modem, so I don't see why it should be blocking anything. Am I misreading the log entry, and it's not actually being denied? (Seems unlikely.) Is the ISP (TWC) pushing deny tables that are not exposed in the UI? (Tinfoil hat too tight.) I'm confused. (The good news, such as it is, is that AFAIK I'm not experiencing any actual issues... but maybe I am; it's tough to tell.) Thanks.

  • Need help modifying C++ application to accept continuous piped input in Linux

    - by GreeenGuru
    The goal is to mine packet headers for URLs visited, using tcpdump. So far, I can save a packet header to a file using:

        tcpdump "dst port 80 and tcp[13] & 0x08 = 8" -A -s 300 | tee -a ./Desktop/packets.txt

    And I've written a program to parse through the header and extract the URL when given the following command:

        cat ~/Desktop/packets.txt | ./packet-parser.exe

    But what I want to be able to do is pipe tcpdump directly into my program, which will then log the data:

        tcpdump "dst port 80 and tcp[13] & 0x08 = 8" -A -s 300 | ./packet-parser.exe

    Here is the program as it is. The question is: how do I need to change it to support continuous input from tcpdump?

        #include <boost/regex.hpp>
        #include <fstream>
        #include <cstdio>   // Needed to define ios::app
        #include <string>
        #include <iostream>

        int main()
        {
            // Make sure to open the file in append mode
            std::ofstream file_out("/var/local/GreeenLogger/url.log", std::ios::app);
            if (not file_out)
                std::perror("/var/local/GreeenLogger/url.log");
            else
            {
                std::string text;

                // Get multiple lines of input -- raw
                std::getline(std::cin, text, '\0');

                const boost::regex pattern("GET (\\S+) HTTP.*?[\\r\\n]+Host: (\\S+)");
                boost::smatch match_object;
                bool match = boost::regex_search(text, match_object, pattern);
                if (match)
                {
                    std::string output;
                    output = match_object[2] + match_object[1];
                    file_out << output << '\n';
                    std::cout << output << std::endl;
                }
                file_out.close();
            }
        }

    Thank you ahead of time for the help!
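
    The core change is to process input one line at a time in a loop, instead of a single getline to EOF; with the '\0' delimiter, the program blocks until tcpdump exits. A minimal C sketch of the idea (the C++/Boost version is analogous; matching only the Host header here is a simplification):

        #include <regex.h>
        #include <stdio.h>

        int main(void)
        {
            char line[1024];
            regex_t re;

            /* POSIX ERE: capture the value of a "Host:" header line. */
            regcomp(&re, "Host: ([^ \r\n]+)", REG_EXTENDED);

            /* fgets blocks until a full line arrives, so this keeps up with
               tcpdump as packets are captured instead of waiting for EOF. */
            while (fgets(line, sizeof line, stdin) != NULL)
            {
                regmatch_t m[2];
                if (regexec(&re, line, 2, m, 0) == 0)
                {
                    printf("%.*s\n", (int)(m[1].rm_eo - m[1].rm_so), line + m[1].rm_so);
                    fflush(stdout);   /* don't sit in stdio's buffer */
                }
            }
            regfree(&re);
            return 0;
        }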

  • Converting a size_t into an integer (c++)

    - by JeanOTF
    Hello, I've been trying to make a for loop that will iterate based on the length of a network packet. In the API there is a size_t variable, event.packet->dataLength. I want to iterate from 0 to event.packet->dataLength - 7, increasing i by 10 each time, but I am having a world of trouble. I looked for solutions but have been unable to find anything useful. I tried converting the size_t to an unsigned int and doing the arithmetic with that, but unfortunately it didn't work. Basically all I want is this:

        for (int i = 0; i < event.packet->dataLength - 7; i += 10)
        {
        }

    Though every time I do something like this, or attempt my conversions, the i < # part is a huge number. They gave a printf statement in a tutorial for the API which used "%u" to print the actual number, however when I convert it to an unsigned int it is still incorrect. I'm not sure where to go from here. Any help would be greatly appreciated :)
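
    The huge number is almost certainly unsigned wraparound: dataLength is a size_t, so when it is smaller than 7 the expression dataLength - 7 wraps to an enormous positive value instead of going negative. Doing the arithmetic in a signed type avoids this (a sketch, assuming the length fits in an int):

        #include <stddef.h>
        #include <stdio.h>

        int main(void)
        {
            size_t dataLength = 3;   /* hypothetical packet shorter than 7 bytes */

            /* (int)dataLength - 7 is -4 here, so the loop body never runs.
               The unsigned dataLength - 7 would wrap to a huge value instead. */
            for (int i = 0; i < (int)dataLength - 7; i += 10)
                printf("i = %d\n", i);

            return 0;
        }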

  • Understanding byte order and functions like CFSwapInt32HostToBig

    - by Typeoneerror
    I've got an enumeration in my game. A simple string message with a prepended PacketType is being sent with the message (so the peer knows what to do with it) over a GameKit Wi-Fi connection. I used Apple's GKRocket sample code as a starting point. The code itself is working fantastically; I just want to understand what the line with CFSwapInt32HostToBig is doing. What on earth does that do, and why does it need to do it? My guess is that it's making sure the PacketType value can be converted to an unsigned integer so it can be sent reliably, but that doesn't sound all that correct to me. The documentation states "Converts a 32-bit integer from big-endian format to the host's native byte order.", but I don't really understand what that means.

        typedef enum
        {
            PacketTypeStart,          // packet to notify games to start
            PacketTypeRequestSetup,   // server wants client info
            PacketTypeSetup,          // send client info to server
            PacketTypeSetupComplete,  // round trip made for completion
            PacketTypeTurn,           // packet to notify game that a turn is up
            PacketTypeRoll,           // packet to send roll to players
            PacketTypeEnd             // packet to end game
        } PacketType;

        // ....

        - (void)sendPacket:(NSData *)data ofType:(PacketType)type
        {
            NSLog(@"sendPacket:ofType(%d)", type);

            // Create the data with enough space for a uint
            NSMutableData *newPacket = [NSMutableData dataWithCapacity:([data length] + sizeof(uint32_t))];

            // Data is prefixed with the PacketType so the peer knows what to do with it.
            uint32_t swappedType = CFSwapInt32HostToBig((uint32_t)type);

            // Add uint to data
            [newPacket appendBytes:&swappedType length:sizeof(uint32_t)];

            // Add the rest of the data
            [newPacket appendData:data];

            // Send data checking for success or failure
            NSError *error;
            BOOL didSend = [_gkSession sendDataToAllPeers:newPacket withDataMode:GKSendDataReliable error:&error];
            if (!didSend)
            {
                NSLog(@"error in sendDataToPeers: %@", [error localizedDescription]);
            }
        }
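
    A small C illustration of what the swap is for: different CPUs store a 32-bit integer's bytes in different orders (little-endian vs big-endian), so multi-byte values are conventionally converted to big-endian ("network order") before going on the wire, and converted back on receipt. CFSwapInt32HostToBig performs the same host-to-big-endian conversion that POSIX htonl does:

        #include <arpa/inet.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint32_t type = 5;              /* e.g. some PacketType value */
            uint32_t wire = htonl(type);    /* host order -> big-endian */
            unsigned char *b = (unsigned char *)&wire;

            /* On a little-endian host, 5 sits in memory as 05 00 00 00;
               after the swap it is 00 00 00 05: the same bytes on every
               machine, which is the whole point. */
            printf("%02x %02x %02x %02x\n", b[0], b[1], b[2], b[3]);
            return 0;
        }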

  • recvfrom returns invalid argument when *from* is passed

    - by Aditya Sehgal
    I am currently writing a small UDP server program in Linux. The UDP server will receive packets from two different peers and will perform different operations based on which peer it received the packet from. I am trying to determine the source from which I receive the packet. However, when select returns and recvfrom is called, recvfrom fails with an error of Invalid Argument. If I pass NULL as the second-to-last argument, recvfrom succeeds. I have tried declaring fromAddr as struct sockaddr_storage, struct sockaddr_in, and struct sockaddr, without any success. Is there something wrong with this code? Is this the correct way to determine the source of the packet? The code snippet follows:

        /* TODO: update for TCP. use recv */
        if ((pkInfo->rcvLen = recvfrom(psInfo->sockFd,
                                       pkInfo->buffer,
                                       MAX_PKTSZ,
                                       0,
                                       /* (struct sockaddr *)&fromAddr, */
                                       NULL,
                                       &(addrLen))) < 0)
        {
            perror("RecvFrom failed\n");
        }
        else
        {
            /* Apply Filter */
        #if 0
            struct sockaddr_in *tmpAddr;
            tmpAddr = (struct sockaddr_in *)&fromAddr;
            printf("Received Msg From %s\n", inet_ntoa(tmpAddr->sin_addr));
        #endif
            printf("Packet Received of len = %d\n", pkInfo->rcvLen);
        }
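
    One classic cause of EINVAL here (an assumption; the snippet doesn't show the declaration) is that addrLen is not initialized before the call: recvfrom's last parameter is a value-result argument and must hold the size of the address buffer going in. A minimal sketch of the correct pattern:

        #include <netinet/in.h>
        #include <stdio.h>
        #include <sys/socket.h>
        #include <sys/types.h>

        /* Receive one datagram and capture its source address. */
        ssize_t receive_one(int sockFd, char *buf, size_t bufLen,
                            struct sockaddr_in *from)
        {
            socklen_t addrLen = sizeof *from;   /* must be set BEFORE the call */
            ssize_t n = recvfrom(sockFd, buf, bufLen, 0,
                                 (struct sockaddr *)from, &addrLen);
            if (n < 0)
                perror("recvfrom");
            return n;
        }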

  • TCP/IP Implementation General Questions

    - by user2971023
    I've implemented the concepts shown here: http://wiki.unity3d.com/index.php/Simple_TCP/IP_Client_-_Server outside of Unity, and it works (though I had to create the TCPIPServerApp from scratch, as I could not find the base project anywhere). I have some general questions on how to use TCP/IP properly, however. I've done some research on TCP/IP itself but I'm still a little confused.

    It seems like using the method above doesn't guarantee that I'll see the message (res); it just checks on every update whether there is a different message in res. What if multiple messages are sent and the program lags or something? Will I miss the earlier packet(s)? Should I instead use an array so it stores the last X messages? How do I know the data was received? Do I need to add a message ID and build my own ack into the data? Should I check whether the port is in use before setting up a connection? Sorry for all the questions. This is all new to me, but I enjoy this very much!

    ... Below already answered by Anton, thanks:

    It sounds like TCP uses its own packet numbering to ensure the packets end up in the right order on the other side. What if a packet is missed? Are the subsequent packets thrown away? Or is this numbering and packet ordering only for handling data that is broken out into multiple packets? TCP will automatically break the data into multiple packets if necessary, right?
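
    Since TCP is a byte stream, nothing is "missed" as long as the receiver keeps reading; but the stream has no message boundaries, so messages need framing, and recv() may return fewer bytes than asked for. A C sketch of the standard length-prefix pattern (the function names here are illustrative):

        #include <arpa/inet.h>
        #include <stdint.h>
        #include <sys/socket.h>
        #include <sys/types.h>

        /* Keep calling recv until exactly len bytes have arrived. */
        static int recv_all(int fd, void *buf, size_t len)
        {
            char *p = buf;
            while (len > 0)
            {
                ssize_t n = recv(fd, p, len, 0);
                if (n <= 0) return -1;    /* error, or peer closed */
                p += n;
                len -= (size_t)n;
            }
            return 0;
        }

        /* Read one length-prefixed message: the sender wrote a 4-byte length
           in network order, then the body. */
        int read_message(int fd, char *body, size_t max)
        {
            uint32_t netLen;
            if (recv_all(fd, &netLen, sizeof netLen) < 0) return -1;
            uint32_t len = ntohl(netLen);
            if (len > max) return -1;     /* refuse oversized frames */
            return recv_all(fd, body, len);
        }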

  • recvfrom() return values in Stop-and-Wait UDP?

    - by mavErick
    I am trying to implement a stop-and-wait UDP client-server socket program in C. As is known, there are basically three possible scenarios for stop-and-wait flow control. After transmitting a packet, the sender: (1) receives a correct ACK and thus starts transmitting the next packet; (2) receives an incorrect ACK and thus retransmits this packet; (3) receives no ACK within a TIMEOUT and thus retransmits this packet.

    My idea is to differentiate these three scenarios by the return value of recvfrom() on the sender side. For scenarios 1 and 2, recvfrom() just returns the length of the received ACK. Since in my implementation the incorrect ACK is of the same length as the correct one, I will have to go deeper and check the contents of the ACK. It's not a big deal; I know how to do that. Problems come when I am trying to recognize scenario 3, where no ACK is received. What confuses me is that my recvfrom() is within a while loop, so recvfrom() will be called constantly. What will it return when the receiver is not actually sending the sender an ACK? Is it 0 or 1?
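
    Neither: with a blocking socket, recvfrom() simply never returns while no datagram arrives, and a return of 0 just means a zero-length datagram was received. The usual way to get scenario 3 is a receive timeout, after which recvfrom() returns -1 with errno set to EAGAIN/EWOULDBLOCK. A sketch, assuming the TIMEOUT is implemented with SO_RCVTIMEO:

        #include <errno.h>
        #include <sys/socket.h>
        #include <sys/time.h>
        #include <sys/types.h>

        /* Returns 1 if an ACK arrived, 0 on timeout (retransmit), -1 on error. */
        int wait_for_ack(int sockFd, char *ack, size_t ackLen)
        {
            struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };   /* 1 s timeout */
            setsockopt(sockFd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

            ssize_t n = recvfrom(sockFd, ack, ackLen, 0, NULL, NULL);
            if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
                return 0;    /* scenario 3: no ACK within the timeout */
            if (n < 0)
                return -1;   /* real error */
            return 1;        /* got an ACK; caller checks its contents */
        }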
