Search Results

Search found 4304 results on 173 pages for 'bytes'.

Page 72/173 | < Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >

  • Strange behavior of move with strings

    - by Umair Ahmed
    I am testing some enhanced string-related functions in which I am trying to use Move as a way to copy strings around faster and more efficiently without delving into pointers. While testing a function for making a delimited string from a TStringList, I encountered a strange issue: while the result string was empty, indexing into it seemed to reference the bytes it contained, but once a string had been copied into it with Move, the index seemed to reference characters instead. Here is a small, stripped-down code sample: unit UI; interface uses System.SysUtils, System.Types, System.UITypes, System.Rtti, System.Classes, System.Variants, FMX.Types, FMX.Controls, FMX.Forms, FMX.Dialogs, FMX.Layouts, FMX.Memo; type TForm1 = class(TForm) Results: TMemo; procedure FormCreate(Sender: TObject); end; var Form1: TForm1; implementation {$R *.fmx} function StringListToDelimitedString ( const AStringList: TStringList; const ADelimiter: String ): String; var Str : String; Temp1 : NativeInt; Temp2 : NativeInt; DelimiterSize : Byte; begin Result := ' '; Temp1 := 0; DelimiterSize := Length ( ADelimiter ) * 2; for Str in AStringList do Temp1 := Temp1 + Length ( Str ); SetLength ( Result, Temp1 ); Temp1 := 1; for Str in AStringList do begin Temp2 := Length ( Str ) * 2; // Here Index references bytes in Result Move ( Str [1], Result [Temp1], Temp2 ); // From here the index seems to address characters instead of bytes in Result Temp1 := Temp1 + Temp2; Move ( ADelimiter [1], Result [Temp1], DelimiterSize ); Temp1 := Temp1 + DelimiterSize; end; end; procedure TForm1.FormCreate(Sender: TObject); var StrList : TStringList; Str : String; begin // Test 1 : StringListToDelimitedString StrList := TStringList.Create; Str := ''; StrList.Add ( 'Hello1' ); StrList.Add ( 'Hello2' ); StrList.Add ( 'Hello3' ); StrList.Add ( 'Hello4' ); Str := StringListToDelimitedString ( StrList, ';' ); Results.Lines.Add ( Str ); StrList.Free; end; end. Please suggest a solution and, if possible, some explanation. Alternatives are welcome too.

    Read the article

  • Truncate a UTF-8 string to fit a given byte count in PHP

    - by fsb
    Say we have a UTF-8 string $s and we need to shorten it so it can be stored in N bytes. Blindly truncating it to N bytes could mess it up. But decoding it to find the character boundaries is a drag. Is there a tidy way? [Edit 20100414] In addition to S.Mark’s answer: mb_strcut(), I recently found another function to do the job: grapheme_extract($s, $n, GRAPHEME_EXTR_MAXBYTES); from the intl extension. Since intl is an ICU wrapper, I have a lot of confidence in it.
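
    The same boundary-safe idea is easy to see outside PHP; the snippet below is a Python illustration (not from the question) of what mb_strcut() does in spirit: cut at the byte limit, then drop any partial trailing sequence when decoding. Unlike grapheme_extract(), it can still split a user-perceived character made of several code points (e.g. combining accents).

        # Python illustration only; the question itself is about PHP.
        # Cutting the UTF-8 bytes and re-decoding with errors="ignore"
        # silently drops a partial multi-byte sequence at the end.
        def truncate_utf8(s, max_bytes):
            return s.encode("utf-8")[:max_bytes].decode("utf-8", "ignore")

        print(truncate_utf8("héllo wörld", 9))   # -> 'héllo w' (partial 'ö' dropped)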

    Read the article

  • Haskell optimization of a function looking for a bytestring terminator

    - by me2
    Profiling of some code showed that about 65% of the time I was inside the following code. What it does is use the Data.Binary.Get monad to walk through a bytestring looking for the terminator. If it detects 0xff, it checks if the next byte is 0x00. If it is, it drops the 0x00 and continues. If it is not 0x00, then it drops both bytes and the resulting list of bytes is converted to a bytestring and returned. Any obvious ways to optimize this? I can't see it. parseECS = f [] False where f acc ff = do b <- getWord8 if ff then if b == 0x00 then f (0xff:acc) False else return $ L.pack (reverse acc) else if b == 0xff then f acc True else f (b:acc) False

    Read the article

  • Creating and writing file from a FileOutputStream in Java

    - by Althane
    Okay, so I'm working on a project where I use a Java program to initiate a socket connection between two classes (a FileSender and FileReceiver). My basic idea was that the FileSender would look like this: try { writer = new DataOutputStream(connect.getOutputStream()); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } //While we have bytes to send while(filein.available() >0){ //We write them out to our buffer writer.write(filein.read(outBuffer)); writer.flush(); } //Then close our filein filein.close(); //And then our socket; connect.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); The constructor contains code that checks to see if the file exists, and that the socket is connected, and so on. Inside my FileReader is this though: input = recvSocket.accept(); BufferedReader br = new BufferedReader(new InputStreamReader(input.getInputStream())); FileOutputStream fOut= new FileOutputStream(filename); String line = br.readLine(); while(line != null){ fOut.write(line.getBytes()); fOut.flush(); line = br.readLine(); } System.out.println("Before RECV close statements"); fOut.close(); input.close(); recvSocket.close(); System.out.println("After RECV clsoe statements"); All inside a try-catch block. So, what I'm trying to do is have the FileSender reading in the file, converting to bytes, sending and flushing it out. FileReceiver, then reads in the bytes, writes to the fileOut, flushes, and continues waiting for more. I make sure to close all the things that I open, so... here comes the weird part. When I try and open the created text file in Eclipse, it tells me "An SWT error has occured ... recommended to exit the workbench... see .log for more details.". Another window pops up saying "Unhandled event loop exception, (no more handles)". However, if I try to open the sent text file in notepad2, I get ThisIsASentTextfile Which is good (well, minus the fact that there should be line breaks, but I'm working on that...). Does anyone know why this is happening? And while we're checking, how to add the line breaks? (And is this a particularly bad way to transfer files over java without getting some other libraries?)

    Read the article

  • How does sizeof calculate the size of structures

    - by Gearoid Murphy
    I know that a struct containing a char and an int comes out as 8 bytes on 32-bit architectures due to alignment, but I recently came across a situation where a structure with 3 shorts was reported as being 6 bytes by the sizeof operator. Code is as follows: #include <iostream> using namespace std ; struct IntAndChar { int a ; unsigned char b ; }; struct ThreeShorts { unsigned short a ; unsigned short b ; unsigned short c ; }; int main() { cout<<sizeof(IntAndChar)<<endl; // outputs '8' cout<<sizeof(ThreeShorts)<<endl; // outputs '6', I expected this to be '8' return 0 ; } Compiler : g++ (Debian 4.3.2-1.1) 4.3.2. This really puzzles me: why isn't alignment enforced for the structure containing 3 shorts?
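
    As an aside (not part of the original question): a struct's overall alignment is normally the largest alignment required by any of its members, so three 2-byte shorts only need 2-byte alignment and no tail padding, whereas the int in IntAndChar forces 4-byte alignment and 3 bytes of padding after the char. Python's ctypes lays structures out with the platform ABI, so it reports the same numbers g++ does; a quick sketch:

        # ctypes follows the native ABI, so these sizes match the C++ output
        # on a typical platform (int alignment 4, short alignment 2).
        import ctypes

        class IntAndChar(ctypes.Structure):
            _fields_ = [("a", ctypes.c_int), ("b", ctypes.c_ubyte)]

        class ThreeShorts(ctypes.Structure):
            _fields_ = [("a", ctypes.c_ushort), ("b", ctypes.c_ushort), ("c", ctypes.c_ushort)]

        print(ctypes.sizeof(IntAndChar), ctypes.alignment(IntAndChar))    # typically 8 4
        print(ctypes.sizeof(ThreeShorts), ctypes.alignment(ThreeShorts))  # typically 6 2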

    Read the article

  • My realtime network receiving time differs a lot, can anyone help?

    - by sguox002
    I wrote a program using TCP/IP sockets to send commands to a device and receive the data from the device. The data size is around 200 KB to 600 KB. The computer is directly connected to the device over a 100 Mb/s network. I found that the sent packets always arrive at the computer at the full 100 Mb/s line speed (I have debugging information on the unit and I also verified this using some network monitoring software), but the receiving time varies a lot, from 40 ms to 250 ms, even when the size is the same (I have a receive buffer of about 700 KB and a receive window of 8092 bytes, and changing the window size does not change anything). The behavior also differs between computers, but on the same computer the problem is very consistent. For example, receiving 300 KB takes 40 ms on one computer but may take 200 ms on another. I have disabled the firewall, antivirus, and all network protocols other than TCP/IP. Can any experts give me some hints?
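
    One thing worth ruling out (an assumption, not a diagnosis from the information above) is the interaction between Nagle's algorithm and delayed ACKs, which can add tens to hundreds of milliseconds to small request/response exchanges and behaves differently per TCP stack. A cheap experiment is to disable Nagle on the sending side; a minimal Python sketch, with a placeholder host, port, and command:

        # Quick experiment only: disable Nagle's algorithm on the sender and
        # see whether the 40-250 ms spread in receive times goes away.
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # push small writes out immediately
        sock.connect(("192.168.0.10", 5000))   # hypothetical device address
        sock.sendall(b"GET_DATA\r\n")          # hypothetical command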

    Read the article

  • Fatal error with PHP code

    - by basma
    Hello, I have a problem in my PHP code that uses recursion: <?php solveTowers(5, "A", "B", "C"); function solveTowers($count, $src, $dest, $spare) { if (count == 1) { echo "Move a disk from ".$src." to ".$dest ; } else { solveTowers($count - 1, $src, $spare, $dest); solveTowers(1, $src, $dest, $spare); solveTowers($count - 1, $spare, $dest, $src); } } ?> But it doesn't run! This error occurs: Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 261904 bytes) in C:\xampp\htdocs\cap492\towers.php on line 13. Line 13 is the first call to the function in the else statement. Can you please help me with that?

    Read the article

  • How do I configure the Python logging module in Django?

    - by mipadi
    I'm trying to configure logging for a Django app using the Python logging module. I have placed the following bit of configuration code in my Django project's settings.py file: import logging import logging.handlers import os date_fmt = '%m/%d/%Y %H:%M:%S' log_formatter = logging.Formatter(u'[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)', datefmt=date_fmt) log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app") log_name = os.path.join(log_dir, "nyrb.log") bytes = 1024 * 1024 # 1 MB if not os.path.exists(log_dir): os.makedirs(log_dir) handler = logging.handlers.RotatingFileHandler(log_name, maxBytes=bytes, backupCount=7) handler.setFormatter(log_formatter) handler.setLevel(logging.DEBUG) logging.getLogger().setLevel(logging.DEBUG) logging.getLogger().addHandler(handler) logging.getLogger(__name__).info("Initialized logging subsystem") At startup, I get a couple Django-related messages, as well as the "Initialized logging subsystem", in the log files, but then all the log messages end up going to the web server logs (/var/log/apache2/error.log, since I'm using Apache), and use the standard log format (not the formatter I designated). Am I configuring logging incorrectly?
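
    One common explanation (an assumption here, not something the question confirms) is that some other module calls logging.basicConfig() or adds its own StreamHandler to the root logger; anything written to stderr under Apache ends up in error.log, and a handler attached later can shadow the one configured in settings.py. A minimal sketch for checking what the root logger actually holds at runtime, with a hypothetical log path:

        # Attach the rotating handler to the root logger explicitly, then list
        # the handlers the root logger ends up with once the app is running.
        import logging
        import logging.handlers

        root = logging.getLogger()
        root.setLevel(logging.DEBUG)

        handler = logging.handlers.RotatingFileHandler(
            "/var/log/my_app/nyrb.log",           # hypothetical path
            maxBytes=1024 * 1024, backupCount=7)
        handler.setFormatter(logging.Formatter(
            u'[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)',
            datefmt='%m/%d/%Y %H:%M:%S'))
        root.addHandler(handler)

        for h in root.handlers:                   # a stray StreamHandler here explains the Apache log entries
            print(h)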

    Read the article

  • 8bpp Bitmap format on the Compact Framework

    - by Kieran
    Hi friends. I am messing around with Conway's Game of Life - http://en.wikipedia.org/wiki/Conway's_Game_of_Life I started out coding algorithms for WinForms and now want to port my work onto Windows Mobile 6.1 (Compact Framework). I came across an article by Jon Skeet where he compared several different algorithms for calculating next generations in the game. He used an array of bytes to store a cell's state (alive or dead) and then he would copy this array to an 8bpp bitmap. For each new generation, he works out the state of each byte, then copies the array to a bitmap, then draws that bitmap to a PictureBox. void CreateInitialImage() { bitmap = new Bitmap(Width, Height, PixelFormat.Format8bppIndexed); ColorPalette palette = bitmap.Palette; palette.Entries[0] = Color.Black; palette.Entries[1] = Color.White; bitmap.Palette = palette; } public Image Render() { Rectangle rect = new Rectangle(0, 0, Width, Height); BitmapData bmpData = bitmap.LockBits(rect, ImageLockMode.ReadWrite, bitmap.PixelFormat); Marshal.Copy(Data, 0, bmpData.Scan0, Data.Length); bitmap.UnlockBits(bmpData); return bitmap; } His code above is beautifully simple and very fast to render. Jon is using Windows Forms, but now I want to port my own version of this onto Windows Mobile 6.1 (Compact Framework), and there is no way to create an 8bpp bitmap in the CF. Can anyone suggest a way of rendering an array of bytes to a drawable image in the CF? This array is created in code on the fly (it is NOT loaded from an image file on disk). I basically need to store an array of cells represented by bytes (each either alive or dead) and then draw that array as an image. The game is particularly slow on the CF, so I need to implement clever, optimised algorithms, but I also need to render as fast as possible, and the above solution would be pretty much perfect if only it were available on the Compact Framework. Many thanks for any help. Any suggestions?

    Read the article

  • From string to hex MD5 hash and back

    - by Pablo Fernandez
    I have this pseudo-code in java: bytes[] hash = MD5.hash("example"); String hexString = toHexString(hash); //This returns something like a0394dbe93f bytes[] hexBytes = hexString.getBytes("UTF-8"); Now, hexBytes[] and hash[] are different. I know I'm doing something wrong since hash.length() is 16 and hexBytes.length() is 32. Maybe it has something to do with java using Unicode for chars (just a wild guess here). Anyways, the question would be: how to get the original hash[] array from the hexString. The whole code is here if you want to look at it (it's ~ 40 LOC) http://gist.github.com/434466 The output of that code is: 16 [-24, 32, -69, 74, -70, 90, -41, 76, 90, 111, -15, -84, -95, 102, 65, -10] 32 [101, 56, 50, 48, 98, 98, 52, 97, 98, 97, 53, 97, 100, 55, 52, 99, 53, 97, 54, 102, 102, 49, 97, 99, 97, 49, 54, 54, 52, 49, 102, 54] Thanks a lot!
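
    The two arrays differ because getBytes() encodes the hexadecimal text (32 characters, hence 32 bytes), not the 16-byte value that text represents; recovering the original array means hex-decoding the string, not re-encoding it. For illustration only, the same round trip in Python (the question itself is Java):

        # A 32-character hex string is the textual form of 16 raw bytes;
        # encode() and fromhex() are not inverses of each other.
        import hashlib

        digest = hashlib.md5(b"example").digest()    # 16 raw bytes
        hex_string = digest.hex()                    # 32 hex characters

        print(len(hex_string.encode("utf-8")))       # 32 -- bytes of the text
        print(bytes.fromhex(hex_string) == digest)   # True -- decoding recovers the hash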

    Read the article

  • Xcode 3.2.1 Alien Leak

    - by Mark
    Code used in the entire project: - (void)applicationDidFinishLaunching:(UIApplication *)application { UITabBarController *tb = [[UITabBarController alloc] initWithNibName:nil bundle:nil]; [window addSubview:tb.view]; [tb release]; [window setBackgroundColor:[UIColor blackColor]]; [window makeKeyAndVisible]; } When the UITabBarController is added to the window view, the following leak is detected by Instruments: Leaked Object: Malloc 128 Bytes Address : 0x391ee70 Size : 128 Bytes Responsible Library : CoreGraphics Responsible Frame : open_handle_to_dylib_path This same issue occurs with UINavigationController, but does not appear with UIViewController. Specs: Mac OS X 10.6.2 Xcode 3.2.1 Instruments 2.0.1 Compiled for iPhone Simulator 3.1.3 | Debug

    Read the article

  • How can I detect whether an image is a PNG or APNG format?

    - by perlit
    APNG is backwards compatible with PNG. I opened up an apng and png file in a hex editor and the first few bytes look identical. So if a user uploads either of these formats, how do I detect what the format really is? I've seen this done on some sites that block apng. I'm guessing the ImageMagick library makes this easy, but what if I were to do the detect without the use of an image processing library (for learning purposes)? Can I look for specific bytes that tell me if the file is apng? Solutions in any language is welcome.
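
    Both formats share the same 8-byte PNG signature; what distinguishes an APNG is an acTL (animation control) chunk that appears before the first IDAT chunk, so scanning the chunk list is one way to tell them apart without an imaging library. A minimal sketch in Python, assuming a well-formed file:

        # Walk the PNG chunk list: 4-byte big-endian length, 4-byte type,
        # data, 4-byte CRC. Report APNG if acTL appears before the first IDAT.
        import struct

        def is_apng(path):
            with open(path, "rb") as f:
                if f.read(8) != b"\x89PNG\r\n\x1a\n":
                    return False                 # not a PNG at all
                while True:
                    header = f.read(8)
                    if len(header) < 8:
                        return False
                    length, ctype = struct.unpack(">I4s", header)
                    if ctype == b"acTL":         # animation control chunk -> APNG
                        return True
                    if ctype in (b"IDAT", b"IEND"):
                        return False             # plain PNG
                    f.seek(length + 4, 1)        # skip chunk data + CRC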

    Read the article

  • Worst side effects from char signedness (explanation of signedness effects on chars and casts)

    - by JustSmith
    I frequently work with libraries that use char when working with bytes in C++. The alternative is to define a "Byte" as unsigned char, but that is not the convention they decided to use. I frequently pass bytes from C# into the C++ DLLs and cast them to char to work with the library. When casting ints to chars, or chars to other simple types, what are some of the side effects that can occur? Specifically, when has this broken code that you have worked on, and how did you find out it was because of the char signedness? Luckily I haven't run into this in my own code; I only used a char signedness casting trick back in an embedded systems class in school. I'm looking to better understand the issue since I feel it is relevant to the work I am doing.

    Read the article

  • How to marshal structs with unknown length string fields in C#

    - by Ofir
    Hi all, I get an array of bytes and I need to unmarshal it into a C# struct. I know the type of the struct; it has some string fields. The strings in the byte array appear as follows: the first two bytes are the length of the string, then the string itself. I don't know the length of the strings. I do know that it's Unicode! [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public class User { int Id;//should be 1 String UserName;//should be OFIR String FullName;//should be OFIR } The byte array looks like this: 00,00,01,00, 00,00,08,00, 4F,00,46,00,49,00,52,00, 00,00,08,00, 4F,00,46,00,49,00,52,00, I also found this link with the same problem unsolved: loading binary data into a structure Thank you all, Ofir

    Read the article

  • Bind can only work for the DNS server inside zone

    - by Bob
    I got a big problem when I added a new zone to my current Bind configuration. ===============/etc/named.conf=============== include "/etc/rndc.key"; controls { inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndckey"; }; }; acl "trusted" { 127.0.0.1; 208.43.81.157; 69.4.236.88; }; options { directory "/var/named"; allow-query { any; }; recursion yes; allow-recursion { trusted; }; }; zone "." { type hint; file "root.hints"; }; zone "2comu.com" { type master; file "2comu.com.db"; allow-update { none; }; }; zone "usa-diamond.com" { type master; file "usa-diamond.com.db"; allow-update { none; }; }; ===============/var/named/2comu.com.db=============== $TTL 86400 @ IN SOA ns1.2comu.com. root.2comu.com. ( 2011011101 3600 300 3600000 3600 ) IN NS ns1.2comu.com. IN NS ns2.2comu.com. IN MX 10 email.2comu.com. ns1.2comu.com. IN A 208.43.81.157 ns2.2comu.com. IN A 69.4.236.88 www.2comu.com. IN A 208.43.81.157 ftp.2comu.com. IN A 208.43.81.157 email.2comu.com. IN A 208.43.81.157 ===============/var/named/usa-diamond.com=============== $TTL 86400 @ IN SOA ns1.2comu.com. root.usa-diamond.com. ( 2011011115 3600 300 3600000 3600 ) IN NS ns1.2comu.com. IN NS ns2.2comu.com. www.usa-diamond.com. IN A 208.43.81.157 ================================================================ All of the configurations inside domain 2comu.com work well. But when www.usa-diamond.com doesn't work at all. When I tried "dig +trace www.usa-diamond.com", I got the following message ================================================================ ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_4.2 <<>> +trace usa-diamond.com ;; global options: printcmd . 517603 IN NS c.root-servers.net. . 517603 IN NS d.root-servers.net. . 517603 IN NS e.root-servers.net. . 517603 IN NS f.root-servers.net. . 517603 IN NS g.root-servers.net. . 517603 IN NS h.root-servers.net. . 517603 IN NS i.root-servers.net. . 517603 IN NS j.root-servers.net. . 517603 IN NS k.root-servers.net. . 517603 IN NS l.root-servers.net. . 517603 IN NS m.root-servers.net. . 517603 IN NS a.root-servers.net. . 517603 IN NS b.root-servers.net. ;; Received 500 bytes from 208.43.81.157#53(208.43.81.157) in 0 ms com. 172800 IN NS j.gtld-servers.net. com. 172800 IN NS d.gtld-servers.net. com. 172800 IN NS e.gtld-servers.net. com. 172800 IN NS i.gtld-servers.net. com. 172800 IN NS f.gtld-servers.net. com. 172800 IN NS m.gtld-servers.net. com. 172800 IN NS b.gtld-servers.net. com. 172800 IN NS k.gtld-servers.net. com. 172800 IN NS l.gtld-servers.net. com. 172800 IN NS c.gtld-servers.net. com. 172800 IN NS h.gtld-servers.net. com. 172800 IN NS a.gtld-servers.net. com. 172800 IN NS g.gtld-servers.net. ;; Received 505 bytes from 192.33.4.12#53(c.root-servers.net) in 3 ms usa-diamond.com. 172800 IN NS ns1.2comu.com. usa-diamond.com. 172800 IN NS ns2.2comu.com. ;; Received 107 bytes from 192.48.79.30#53(j.gtld-servers.net) in 177 ms ;; Received 33 bytes from 208.43.81.157#53(ns1.2comu.com) in 0 ms ========================================================================= It seems I can't get any answer from ns1.2comu.com. Can anyone give some suggestions? Thanks a lot. Bob

    Read the article

  • _CrtDumpMemoryLeaks truncated output??

    - by Marin
    Hello, I am trying to use Visual Studio's capability to detect memory leaks, but I keep getting truncated output, like: Dumping objects -> {174} normal block at 0x0099ADB8, 48 bytes long. Data: <h:\najnovije\tru> 68 3A 5C 6E 61 6A 6E 6F 76 69 6A 65 5C 74 72 75 {170} normal block at 0x0099AD58, 32 bytes long. Data: <h:\najnovije\tru> 68 3A 5C 6E 61 6A 6E 6F 76 69 6A 65 5C 74 72 75 Object dump complete. What am I doing wrong? I added #define _CRTDBG_MAP_ALLOC #include <stdlib.h> #include <crtdbg.h> to the beginning of my code. Thank you.

    Read the article

  • iPhone Image Resources, ICO vs PNG, app bundle filesize

    - by Jasarien
    My application has a collection of around 1940 icons that are used throughout. They're currently in ICO and new images provided to me come in ICO format too. I have noticed that they contain a 16x16 and 32x32 representation of each icon in one file. Each file is roughly 4KB in filesize (as reported by finder, but ls reports that they vary from being ~1000 bytes to 5000 bytes) A very small number of these icons only contain the 32x32 representation, and as a result are only around 700 bytes in size. Currently I am bundling these icons with my application and they are inflating the size of the app a bit more than I would like. Altogether, the images total just about 25.5MB. Xcode must do some kind of compression because the resulting app bundle is about 12.4MB. Compressing this further into a ZIP (as it would be when submitted to the App Store), results in a final file of 5.8MB. I'm aware that the maximum limit for over the air App Store downloads has been raised to 20MB since the introduction of the iPad (I'm not sure if that extends to iPhone apps as well as iPad apps though, if not the limit would be 10MB). My worry is that new icons are going to be added (sometimes up to 10 icons per week), and will continue to inflate the app bundle over time. What is the best way to distribute these icons with my app? Things I've tried and not had much success with: Converting the icons from ICO to PNG: I tried this in the hopes that the pngcrush utility would help out with the filesize. But it appears that it doesn't make much of a difference between a normal PNG and a crushed png (I believe it just optimises the image for display on the iPhone's GPU rather than compress it's size). Also in going from ICO to PNG actually increased the size of the icon file... Zipping the images, and then uncompressing them on first run. While this did reduce the overall image sizes, I found that the effort needed to unzip them, copy them to the documents folder and ensure that duplication doesn't happen on upgrades was too much hassle to be worth the benefit. Also, on original and 3G iPhones unzipping and copying around 25MB of images takes too long and creates a bad experience... Things I've considered but not yet tried: Instead of distributing the icons within the app bundle, host them online, and download each icon on demand (it depends on the user's data as to which icons will actually be displayed and when). Issues with this is that bandwidth costs money, and image downloads will be bandwidth intensive. However, my app currently has a small userbase of around 5,500 users (of which I estimate around 1500 to be active based on Flurry stats), and I have a huge unused bandwidth allowance with my current hosting package. So I'm open to thoughts on how to solve this tricky issue.

    Read the article

  • An image from byte to optimized web page presentation

    - by blgnklc
    I get the data of an image stored in the database as a byte[] array; then I convert it to a System.Drawing.Image with the code shown below: public System.Drawing.Image CreateImage(byte[] bytes) { System.IO.MemoryStream memoryStream = new System.IO.MemoryStream(bytes); System.Drawing.Image image = System.Drawing.Image.FromStream(memoryStream); return image; } (*) On the other hand, I am planning to show a list of images on ASP.NET pages as the client scrolls down the page; the further the user scrolls, the more photos he/she sees. This means fast page loads and a rich user experience. (You can see what I mean on www.mashable.com; just notice how new photos load as you scroll down.) Moreover, how can I show the Image object returned by the method above in a loop, dynamically, under the (*) conditions above? Regards, bk

    Read the article

  • "Streaming" MJPG using python.

    - by tyler
    I have a webcam that I want to do some image processing on using Python. It's coming through as a Motion-JPEG. I want to try to process the stuff "live," but really what I want to do is this: Open the URL, start data streaming to some buffer... Read x bytes (where x is image size) to an image Process that image Display in result panel Return to number 2 The problem is that, while I do have the resolution, I have no idea how many bytes to read. I've tried googling the M-JPEG specification but can't find anything on if the images are separated by some header or what. Anybody have any ideas?
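
    One approach that avoids knowing the byte count up front is to buffer the stream and cut frames at the JPEG markers: each frame starts with FF D8 and ends with FF D9 (many MJPEG servers also send multipart headers with a Content-Length you could use instead). A minimal Python 3 sketch, with a hypothetical camera URL:

        # Scan the buffered stream for JPEG start/end markers and yield one
        # complete JPEG frame at a time.
        import urllib.request

        def mjpeg_frames(url):
            stream = urllib.request.urlopen(url)
            buf = b""
            while True:
                chunk = stream.read(4096)
                if not chunk:
                    break                               # stream ended
                buf += chunk
                start = buf.find(b"\xff\xd8")           # JPEG start-of-image
                end = buf.find(b"\xff\xd9", start + 2)  # JPEG end-of-image
                if start != -1 and end != -1:
                    yield buf[start:end + 2]            # one complete frame
                    buf = buf[end + 2:]

        for frame in mjpeg_frames("http://camera.local/video.mjpg"):  # hypothetical URL
            print(len(frame), "bytes")                  # hand the frame to the processing step
            break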

    Read the article

  • SQL Server virtual memory usage and performance

    - by user365035
    Hello, I have a very large DB used mostly for analytics. The performance overall is very sluggish. I just noticed that when running the query below, the amount of virtual memory reported greatly exceeds the amount of physical memory available. Currently, physical memory is 10 GB (10238 MB as reported by the query), whereas the virtual memory column returns significantly more: 8388607 MB. That seems really wrong, but I'm at a bit of a loss on how to proceed. USE [master]; GO select cpu_count , hyperthread_ratio , physical_memory_in_bytes / 1048576 as 'mem_MB' , virtual_memory_in_bytes / 1048576 as 'virtual_mem_MB' , max_workers_count , os_error_mode , os_priority_class from sys.dm_os_sys_info

    Read the article

  • Python BOM error in ASCII file

    - by Intosia
    I have a weird, annoying problem with Python 2.6. I am trying to run this file (and the other) on my embedded Linux ARM board: http://svn.tuxisalive.com/software_suite_v3/smart-core/smart-server/trunk/TDSService.py I get this error: File "tuxhttpserver.py", line 1 SyntaxError: encoding problem: with BOM I know that error is about the BOM bytes, etc. BUT there are NO BOM bytes; it's plain ASCII. I checked with a hex editor, and the Linux file command says it's ASCII. I'm freaking out here... The code worked fine on my SheevaPlug (also an ARM-based system).
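
    Whatever the cause turns out to be, it is worth checking what the interpreter actually sees in the first few bytes of the file, independent of what the file command reports; a tiny sketch:

        # Dump the first bytes of the suspect file and check for a UTF-8 BOM.
        import codecs

        with open("tuxhttpserver.py", "rb") as f:   # filename from the traceback
            head = f.read(16)

        print(repr(head))
        print("UTF-8 BOM present:", head.startswith(codecs.BOM_UTF8))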

    Read the article

  • MATLAB - Delete elements of binary files without loading entire file

    - by Doresoom
    This may be a stupid question, but Google and MATLAB documentation have failed me. I have a rather large binary file (10 GB) that I need to open and delete the last forty million bytes or so. Is there a way to do this without reading the entire file to memory in chunks and printing it out to a new file? It took 6 hours to generate the file, so I'm cringing at the thought of re-reading the whole thing. EDIT: The file is 14,440,000,000 bytes in size. I need to chop it to 14,400,000,000.
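
    This is not MATLAB, but it shows the underlying idea: the operating system can truncate a file in place, so nothing needs to be copied. A Python sketch with a hypothetical filename and the byte counts from the question; MATLAB can reach the same kind of facility through its Java bridge (e.g. java.io.RandomAccessFile and its setLength method), though treat that as a suggestion to verify rather than a recipe:

        # Ask the OS to cut the file down in place instead of rewriting 10 GB.
        import os

        path = "capture.bin"                 # hypothetical filename
        os.truncate(path, 14400000000)       # keep the first 14,400,000,000 bytes
        print(os.path.getsize(path))         # -> 14400000000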

    Read the article

  • Java how to copy part of a file

    - by user3479074
    I have to read a file and, depending on the content of the last lines, copy most of its content into a new file. Unfortunately I didn't find a way to copy the first n lines or chars of a file in Java. The only way I found is copying the file using NIO FileChannels, where I can specify the length in bytes. However, for that I would need to know how many bytes the content I read occupies in the source file. Does anyone know a solution for either of these problems?

    Read the article

  • How do I prevent TCP connection freezes over an OpenVPN network?

    - by Jason R
    New details added at the end of this question; it's possible that I'm zeroing in on the cause. I have a UDP OpenVPN-based VPN set up in tap mode (I need tap because I need the VPN to pass multicast packets, which doesn't seem to be possible with tun networks) with a handful of clients across the Internet. I've been experiencing frequent TCP connection freezes over the VPN. That is, I will establish a TCP connection (e.g. an SSH connection, but other protocols have similar issues), and at some point during the session, it seems that traffic will cease being transmitted over that TCP session. This seems to be related to points at which large data transfers occur, such as if I execute an ls command in an SSH session, or if I cat a long log file. Some Google searches turn up a number of answers like this previous one on Server Fault, indicating that the likely culprit is an MTU issue: that during periods of high traffic, the VPN is trying to send packets that get dropped somewhere in the pipes between the VPN endpoints. The above-linked answer suggests using the following OpenVPN configuration settings to mitigate the problem: fragment 1400 mssfix This should limit the MTU used on the VPN to 1400 bytes and fix the TCP maximum segment size to prevent the generation of any packets larger than that. This seems to mitigate the problem a bit, but I still frequently see the freezes. I've tried a number of sizes as arguments to the fragment directive: 1200, 1000, 576, all with similar results. I can't think of any strange network topology between the two ends that could trigger such a problem: the VPN server is running on a pfSense machine connected directly to the Internet, and my client is also connected directly to the Internet at another location. One other strange piece of the puzzle: if I run the tracepath utility, then that seems to band-aid the problem. A sample run looks like: [~]$ tracepath -n 192.168.100.91 1: 192.168.100.90 0.039ms pmtu 1500 1: 192.168.100.91 40.823ms reached 1: 192.168.100.91 19.846ms reached Resume: pmtu 1500 hops 1 back 64 The above run is between two clients on the VPN: I initiated the trace from 192.168.100.90 to the destination of 192.168.100.91. Both clients were configured with fragment 1200; mssfix; in an attempt to limit the MTU used on the link. The above results would seem to suggest that tracepath was able to detect a path MTU of 1500 bytes between the two clients. I would assume that it would be somewhat smaller due to the fragmentation settings specified in the OpenVPN configuration. I found that result somewhat strange. Even stranger, however: if I have a TCP connection in the stalled state (e.g. an SSH session with a directory listing that froze in the middle), then executing the tracepath command shown above causes the connection to start up again! I can't figure out any reasonable explanation for why this would be the case, but I feel like this might be pointing toward a solution to ultimately eradicate the problem. Does anyone have any recommendations for other things to try? Edit: I've come back and looked at this a bit further, and have found only more confounding information: I set the OpenVPN connection to fragment at 1400 bytes, as shown above. Then, I connected to the VPN from across the Internet and used Wireshark to look at the UDP packets that were sent to the VPN server while the stall occurred. None were greater than the specified 1400 byte count, so the fragmentation seems to be functioning properly. 
To verify that even a 1400-byte MTU would be sufficient, I pinged the VPN server using the following (Linux) command: ping <host> -s 1450 -M do This (I believe) sends a 1450-byte packet with fragmentation disabled (I at least verified that it didn't work if I set it to an obviously-too-large value like 1600 bytes). These seem to work just fine; I get replies back from the host with no issue. So, maybe this isn't an MTU issue at all. I'm just confused as to what else it might be! Edit 2: The rabbit hole just keeps getting deeper: I've now isolated the problem a bit more. It seems to be related to the exact OS that the VPN client uses. I have successfully duplicated the problem on at least three Ubuntu machines (versions 12.04 through 13.04). I can reliably duplicate an SSH connection freeze within a minute or so by just cat-ing a large log file. However, if I do the same test using a CentOS 6 machine as a client, then I don't see the problem! I've tested using the exact same OpenVPN client version as I was using on the Ubuntu machines. I can cat log files for hours without seeing the connection freeze. This seems to provide some insight as to the ultimate cause, but I'm just not sure what that insight is. I have examined the traffic over the VPN using Wireshark. I'm not a TCP expert, so I'm not sure what to make of the gory details, but the gist is that at some point, a UDP packet gets dropped due to the limited bandwidth of the Internet link, causing TCP retransmissions inside the VPN tunnel. On the CentOS client, these retransmissions occur properly and things move on happily. At some point with the Ubuntu clients, though, the remote end starts retransmitting the same TCP segment over and over (with the transmit delay increasing between each retransmission). The client sends what looks like a valid TCP ACK to each retransmission, but the remote end still continues to transmit the same TCP segment periodically. This extends ad infinitum and the connection stalls. My question here would be: Does anyone have any recommendations for how to troubleshoot and/or determine the root cause of the TCP issue? It's as if the remote end isn't accepting the ACK messages sent by the VPN client. One common difference between the CentOS node and the various Ubuntu releases is that Ubuntu has a much more recent Linux kernel version (from 3.2 in Ubuntu 12.04 to 3.8 in 13.04). A pointer to some new kernel bug maybe? I'm assuming that if that were so, then I wouldn't be the only one experiencing the problem; I don't think this seems like a particularly exotic setup.

    Read the article

< Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >