Search Results

Search found 27946 results on 1118 pages for 'output buffer empty'.


  • Reading binary data from serial port using Dejan TComport Delphi component

    - by johnma
    Apologies for this question but I am a bit of a noob with Delphi. I am using the Dejan TComport component to get data from a serial port. A box of equipment connected to the port sends about 100 bytes of binary data to the serial port. What I want to do is extract the bytes as numerical values into an array so that I can perform calculations on them. TComport has a method Read(Buffer, Count) which reads data from the input buffer: function Read(var Buffer; Count: Integer): Integer; The help says the Buffer variable must be large enough to hold Count bytes, but does not provide any example of how to use this function. I can see that the Count variable holds the number of bytes received, but I can't find a way to access the bytes in Buffer. TComport also has a method ReadStr which reads data from the input buffer into a STRING variable: function ReadStr(var Str: String; Count: Integer): Integer; Again the Count variable shows the number of bytes received, and I can use Memo1.Text := Str to display some information, but obviously Memo1 has problems displaying the control characters. I have tried various ways to extract the byte data from Str, but so far without success. I am sure it must be easy. Here's hoping.
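
    The untyped Buffer parameter simply expects a caller-allocated block of bytes; in Delphi the usual idiom is to pass an array of Byte, which you can then index numerically. The same memory layout, sketched in C purely to illustrate the idea (read_port is a hypothetical stand-in for TComport.Read):

        #include <stdio.h>

        /* Hypothetical stand-in for TComport.Read: fills a caller-supplied
           block of bytes and returns how many bytes actually arrived. */
        static int read_port(unsigned char *buffer, int count)
        {
            for (int i = 0; i < count; i++)
                buffer[i] = (unsigned char)i;   /* fake data for the sketch */
            return count;
        }

        int main(void)
        {
            unsigned char buffer[256];          /* must hold at least Count bytes */
            int count = read_port(buffer, 100);

            long sum = 0;
            for (int i = 0; i < count; i++)
                sum += buffer[i];               /* each element is a number 0..255 */
            printf("received %d bytes, checksum %ld\n", count, sum);
            return 0;
        }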


  • How to find the cause of a SocketException saying an established connection was aborted

    - by cdpnet
    Hi all, I know a similar question may have been asked many times, but I want to describe the behavior I'm seeing and find out if somebody can predict its cause. I am writing a Windows service which connects to another Windows service over TCP. There are 100 user entities, with 5 connections each. These users perform their tasks using their individual connections. The application runs without showing this problem for 1 or 2 days, or sometimes shows it right after starting (rarely). The best run I had was 4 to 5 days without this exception, and after that the application died or I had to stop it for various reasons. I want to know what can be causing this. Here is the stack trace: System.IO.IOException: Unable to write data to the transport connection: An established connection was aborted by the software in your host machine. ---> System.Net.Sockets.SocketException: An established connection was aborted by the software in your host machine at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size) --- End of inner exception stack trace --- at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Security._SslStream.StartWriting(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security._SslStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslStream.Write(Byte[] buffer, Int32 offset, Int32 count)


  • Why is setting HTML5's CanvasPixelArray values ridiculously slow and how can I do it faster?

    - by Nixuz
    I am trying to do some dynamic visual effects using the HTML5 canvas' pixel manipulation, but I am running into a problem where setting pixels in the CanvasPixelArray is ridiculously slow. For example, if I have code like: imageData = ctx.getImageData(0, 0, 500, 500); for (var i = 0; i < imageData.data.length; i += 4){ imageData.data[i] = buffer[i]; imageData.data[i + 1] = buffer[i + 1]; imageData.data[i + 2] = buffer[i + 2]; } ctx.putImageData(imageData, 0, 0); Profiling with Chrome reveals it runs 44% slower than the following code, where the CanvasPixelArray is not used: tempArray = new Array(500 * 500 * 4); imageData = ctx.getImageData(0, 0, 500, 500); for (var i = 0; i < imageData.data.length; i += 4){ tempArray[i] = buffer[i]; tempArray[i + 1] = buffer[i + 1]; tempArray[i + 2] = buffer[i + 2]; } ctx.putImageData(imageData, 0, 0); My guess is that the slowdown is due to the conversion between JavaScript doubles and the internal unsigned 8-bit integers used by the CanvasPixelArray. Is this guess correct? Is there any way to reduce the time spent setting values in the CanvasPixelArray?


  • C# PInvoke VerQueryValue throws OutOfMemoryException?

    - by Bopha
    Below is a code sample which I got from an online resource; it's supposed to work on the full framework, but when I build it for a C# smart device it throws an exception saying it's out of memory. Does anybody know how I can fix it for the Compact Framework? The out-of-memory exception happens when I make the second call to VerQueryValue, which is the last one. Thanks. [DllImport("coredll.dll")] public static extern bool VerQueryValue(byte[] buffer, string subblock, out IntPtr blockbuffer, out uint len); [DllImport("coredll.dll")] public static extern bool VerQueryValue(byte[] pBlock, string pSubBlock, out string pValue, out uint len); // private static void GetAssemblyVersion() { string filename = @"\Windows\MyLibrary.dll"; if (File.Exists(filename)) { try { int handle = 0; Int32 size = 0; size = GetFileVersionInfoSize(filename, out handle); if (size > 0) { bool retValue; byte[] buffer = new byte[size]; retValue = GetFileVersionInfo(filename, handle, size, buffer); if (retValue == true) { bool success = false; IntPtr blockbuffer = IntPtr.Zero; uint len = 0; //success = VerQueryValue(buffer, "\\", out blockbuffer, out len); success = VerQueryValue(buffer, @"\VarFileInfo\Translation", out blockbuffer, out len); if(success) { int p = (int)blockbuffer; //Reads a 16-bit signed integer from unmanaged memory int j = Marshal.ReadInt16((IntPtr)p); p += 2; //Reads a 16-bit signed integer from unmanaged memory int k = Marshal.ReadInt16((IntPtr)p); string sb = string.Format("{0:X4}{1:X4}", j, k); string spv = @"\StringFileInfo\" + sb + @"\ProductVersion"; string versionInfo; VerQueryValue(buffer, spv, out versionInfo, out len); } } } } catch (Exception err) { string error = err.Message; } } }
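
    For comparison, here is what the same call sequence looks like against the native API in plain C; the key detail is that VerQueryValue hands back a pointer into the version block the caller already allocated, not a separately allocated string (a minimal sketch with most error handling omitted):

        #include <windows.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            DWORD handle = 0;
            DWORD size = GetFileVersionInfoSizeA("MyLibrary.dll", &handle);
            if (size == 0) return 1;

            BYTE *block = malloc(size);
            if (!GetFileVersionInfoA("MyLibrary.dll", handle, size, block)) return 1;

            WORD *translation;                /* pairs of (language, codepage) */
            UINT len = 0;
            if (VerQueryValueA(block, "\\VarFileInfo\\Translation",
                               (LPVOID *)&translation, &len) && len >= 4) {
                char subblock[64];
                sprintf(subblock, "\\StringFileInfo\\%04x%04x\\ProductVersion",
                        translation[0], translation[1]);

                char *version;                /* points INTO block; never free it */
                if (VerQueryValueA(block, subblock, (LPVOID *)&version, &len))
                    printf("ProductVersion: %s\n", version);
            }
            free(block);
            return 0;
        }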


  • Possible to Inspect Innards of Core C# Functionality

    - by Nick Babcock
    I was struck today with the inclination to compare the innards of Buffer.BlockCopy and Array.CopyTo. I am curious to see if Array.CopyTo calls Buffer.BlockCopy behind the scenes. There is no practical purpose behind this, I just want to further my understanding of C# and how it is implemented. Don't jump the gun and accuse me of micro-optimization, but you can accuse me of being curious! When I ran ILDasm on mscorlib.dll I received this for Array.CopyTo: .method public hidebysig newslot virtual final instance void CopyTo(class System.Array 'array', int32 index) cil managed { // Code size 0 (0x0) } // end of method Array::CopyTo and this for Buffer.BlockCopy: .method public hidebysig static void BlockCopy(class System.Array src, int32 srcOffset, class System.Array dst, int32 dstOffset, int32 count) cil managed internalcall { .custom instance void System.Security.SecuritySafeCriticalAttribute::.ctor() = ( 01 00 00 00 ) } // end of method Buffer::BlockCopy Which, frankly, baffles me. I've never run ILDasm on a dll/exe I didn't create. Does this mean that I won't be able to see how these functions are implemented? Searching around only revealed a Stack Overflow question in which Marc Gravell said that [Buffer.BlockCopy] is basically a wrapper over a raw mem-copy. While insightful, it doesn't answer my question of whether Array.CopyTo calls Buffer.BlockCopy. I'm specifically interested in whether I'm able to see how these two functions are implemented, and whether, if I have future questions about the internals of C#, it is possible for me to investigate them. Or am I out of luck?


  • Display POST data

    - by Comii
    I am trying to post data from a VB.NET application to an asmx web service located on a server. For posting the data I am using this code: Public Function Post(ByVal url As String, ByVal data As String) As String Dim vystup As String = Nothing Try 'Our postvars Dim buffer As Byte() = Encoding.ASCII.GetBytes(data) 'Initialisation, we use localhost, change if appliable Dim WebReq As HttpWebRequest = DirectCast(WebRequest.Create(url), HttpWebRequest) 'Our method is post, otherwise the buffer (postvars) would be useless WebReq.Method = "POST" 'We use form contentType, for the postvars. WebReq.ContentType = "application/x-www-form-urlencoded" 'The length of the buffer (postvars) is used as contentlength. WebReq.ContentLength = buffer.Length 'We open a stream for writing the postvars Dim PostData As Stream = WebReq.GetRequestStream() 'Now we write, and afterwards, we close. Closing is always important! PostData.Write(buffer, 0, buffer.Length) PostData.Close() 'Get the response handle, we have no true response yet! Dim WebResp As HttpWebResponse = DirectCast(WebReq.GetResponse(), HttpWebResponse) 'Let's show some information about the response Console.WriteLine(WebResp.StatusCode) Console.WriteLine(WebResp.Server) 'Now, we read the response (the string), and output it. Dim Answer As Stream = WebResp.GetResponseStream() Dim _Answer As New StreamReader(Answer) 'Congratulations, you just requested your first POST page, you 'can now start logging into most login forms, with your application 'Or other examples. vystup = _Answer.ReadToEnd() Catch ex As Exception MessageBox.Show(ex.Message) End Try Return vystup.Trim() & vbLf End Function Now how can I retrieve this data in the asmx service?



  • Help on EJB stateless, datagram and message-driven beans

    - by Kemmal
    I have a client that's sending a message to the EJB server using UDP. I want the server (a stateless bean) to echo this message back to the client, but I can't seem to do this. Or can I implement the same logic using JMS? Please help and enlighten. This is just a test; in the end I want a MIDP client to send the message to the EJB using datagrams. Here is my code: @Stateless public class SessionFacadeBean implements SessionFacadeRemote { public SessionFacadeBean() { } public static void main(String[] args) { DatagramSocket aSocket = null; byte[] buffer = null; try { while(true) { DatagramPacket request = new DatagramPacket(buffer, buffer.length); aSocket.receive(request); DatagramPacket reply = new DatagramPacket(request.getData(), request.getLength(), request.getAddress(), request.getPort()); aSocket.send(reply); } } catch (SocketException e) { System.out.println("Socket: " + e.getMessage()); } catch (IOException e) { System.out.println("IO: " + e.getMessage()); } finally { if(aSocket != null) aSocket.close(); } } } and the client: public static void main(String[] args) { DatagramSocket aSocket = null; try { aSocket = new DatagramSocket(); byte [] m = "Test message!".getBytes(); InetAddress aHost = InetAddress.getByName("localhost"); int serverPort = 6789; DatagramPacket request = new DatagramPacket(m, m.length, aHost, serverPort); aSocket.send(request); byte[] buffer = new byte[1000]; DatagramPacket reply = new DatagramPacket(buffer, buffer.length); aSocket.receive(reply); System.out.println("Reply: " + new String(reply.getData())); } catch (SocketException e) { System.out.println("Socket: " + e.getMessage()); } catch (IOException e) { System.out.println("IO: " + e.getMessage()); } finally { if(aSocket != null) aSocket.close(); } } Please help.
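
    Setting EJB aside, a datagram echo loop needs a socket that is actually created and bound before the receive loop, and a real receive buffer; in the server code above, aSocket is never constructed and buffer is never allocated. A minimal sketch of the loop in C (the port number 6789 is taken from the client code):

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdio.h>
        #include <sys/socket.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* create the socket first */
            struct sockaddr_in addr = {0};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(6789);
            if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
                perror("bind");
                return 1;
            }

            char buf[1000];                            /* allocated, unlike the null Java buffer */
            for (;;) {
                struct sockaddr_in peer;
                socklen_t plen = sizeof peer;
                ssize_t n = recvfrom(fd, buf, sizeof buf, 0,
                                     (struct sockaddr *)&peer, &plen);
                if (n < 0) { perror("recvfrom"); break; }
                sendto(fd, buf, (size_t)n, 0,          /* echo back to the sender */
                       (struct sockaddr *)&peer, plen);
            }
            close(fd);
            return 0;
        }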


  • How to hide the user/password in an ssh expect script

    - by raindrop18
    In my Perl CGI script I have the user/password in clear text and want to hide them, or have the user enter the credentials interactively. Is that possible? Here is my code; please, any help, I am very new to Perl. #!/usr/local/bin/expect ####################################################################################################### # Input: It will handle two arguments -> a device and a show command. ####################################################################################################### # ######### Start of Script ###################### # #### Set up Timeouts - Debugging Variables log_user 0 set timeout 10 set userid "USER" set password "PASS" # ############## Get two arguments - (1) Device (2) Command to be executed set device [lindex $argv 0] set command [lindex $argv 1] spawn /usr/local/bin/ssh -l $userid $device match_max [expr 32 * 1024] expect { -re "RSA key fingerprint" {send "yes\r"} timeout {puts "Host is known"} } expect { -re "username: " {send "$userid\r"} -re "(P|p)assword: " {send "$password\r"} -re "Warning:" {send "$password\r"} -re "Connection refused" {puts "Host error -> $expect_out(buffer)";exit} -re "Connection closed" {puts "Host error -> $expect_out(buffer)";exit} -re "no address.*" {puts "Host error -> $expect_out(buffer)";exit} timeout {puts "Timeout error. Is device down or unreachable?? ssh_expect";exit} } expect { -re "\[#>]$" {send "term len 0\r"} timeout {puts "Error reading prompt -> $expect_out(buffer)";exit} } expect { -re "\[#>]$" {send "$command\r"} timeout {puts "Error reading prompt -> $expect_out(buffer)";exit} } expect -re "\[#>]$" set output $expect_out(buffer) send "exit\r" puts "$output\r\n"
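
    One common approach is to prompt for the credentials with terminal echo turned off instead of storing them in the script (expect itself can do this with stty -echo around a gets). The underlying mechanism, sketched in C with termios:

        #include <stdio.h>
        #include <string.h>
        #include <termios.h>
        #include <unistd.h>

        /* Prompt for a secret on the terminal with echo disabled. */
        static int prompt_secret(const char *prompt, char *buf, size_t buflen)
        {
            struct termios saved, noecho;
            if (tcgetattr(STDIN_FILENO, &saved) != 0) return -1;
            noecho = saved;
            noecho.c_lflag &= ~ECHO;                     /* turn terminal echo off */
            tcsetattr(STDIN_FILENO, TCSAFLUSH, &noecho);

            fputs(prompt, stderr);
            if (fgets(buf, (int)buflen, stdin) == NULL) buf[0] = '\0';
            buf[strcspn(buf, "\n")] = '\0';              /* strip trailing newline */

            tcsetattr(STDIN_FILENO, TCSAFLUSH, &saved);  /* restore echo */
            fputc('\n', stderr);
            return 0;
        }

        int main(void)
        {
            char pw[128];
            if (prompt_secret("Password: ", pw, sizeof pw) == 0)
                fprintf(stderr, "(got %zu characters)\n", strlen(pw));
            return 0;
        }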


  • About redirected stdout in System.Diagnostics.Process

    - by sforester
    I've recently been working on a program that converts flac files to mp3 in C# using flac.exe and lame.exe; here is the code that does the job: ProcessStartInfo piFlac = new ProcessStartInfo( "flac.exe" ); piFlac.CreateNoWindow = true; piFlac.UseShellExecute = false; piFlac.RedirectStandardOutput = true; piFlac.Arguments = string.Format( flacParam, SourceFile ); ProcessStartInfo piLame = new ProcessStartInfo( "lame.exe" ); piLame.CreateNoWindow = true; piLame.UseShellExecute = false; piLame.RedirectStandardInput = true; piLame.RedirectStandardOutput = true; piLame.Arguments = string.Format( lameParam, QualitySetting, ExtractTag( SourceFile ) ); Process flacp = null, lamep = null; byte[] buffer = BufferPool.RequestBuffer(); flacp = Process.Start( piFlac ); lamep = new Process(); lamep.StartInfo = piLame; lamep.OutputDataReceived += new DataReceivedEventHandler( this.ReadStdout ); lamep.Start(); lamep.BeginOutputReadLine(); int count = flacp.StandardOutput.BaseStream.Read( buffer, 0, buffer.Length ); while ( count != 0 ) { lamep.StandardInput.BaseStream.Write( buffer, 0, count ); count = flacp.StandardOutput.BaseStream.Read( buffer, 0, buffer.Length ); } Here I set the command-line parameters to tell lame.exe to write its output to stdout, and make use of the Process.OutputDataReceived event to gather the output data, which is mostly binary; but DataReceivedEventArgs.Data is of type "string" and I have to convert it to byte[] before putting it in the cache. I think this is ugly, and I tried this approach but the result is incorrect. Is there any way I can read the raw redirected stdout stream, either synchronously or asynchronously, bypassing the OutputDataReceived event? PS: the reason why I don't have lame write to disk directly is that I'm trying to convert several files in parallel, and direct writing to disk would cause severe fragmentation. Thanks a lot!
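
    The direction that avoids the string conversion is to read lamep.StandardOutput.BaseStream directly (as the code above already does for flacp) instead of subscribing to OutputDataReceived, which is line-oriented and text-decoded. The shape of that raw-byte loop, sketched in C over a POSIX popen pipe (the lame command line here is only illustrative):

        #include <stdio.h>

        int main(void)
        {
            /* "lame --quiet - -" is an illustrative command line:
               read audio from stdin, write the mp3 to stdout. */
            FILE *child = popen("lame --quiet - -", "r");
            if (!child) return 1;

            FILE *out = fopen("out.mp3", "wb");
            if (!out) return 1;

            unsigned char buf[4096];
            size_t n;
            while ((n = fread(buf, 1, sizeof buf, child)) > 0)
                fwrite(buf, 1, n, out);    /* raw bytes, no line framing or decoding */

            fclose(out);
            pclose(child);
            return 0;
        }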


  • Help with the GetGlyphOutline function (WinAPI)

    - by user146780
    I want to use this function to get contours and, within these contours, cubic Bezier curves. I think I have to call it with GGO_BEZIER. What puzzles me is how the return buffer works. "A glyph outline is returned as a series of one or more contours defined by a TTPOLYGONHEADER structure followed by one or more curves. Each curve in the contour is defined by a TTPOLYCURVE structure followed by a number of POINTFX data points. POINTFX points are absolute positions, not relative moves. The starting point of a contour is given by the pfxStart member of the TTPOLYGONHEADER structure. The starting point of each curve is the last point of the previous curve or the starting point of the contour. The count of data points in a curve is stored in the cpfx member of TTPOLYCURVE structure. The size of each contour in the buffer, in bytes, is stored in the cb member of TTPOLYGONHEADER structure. Additional curve definitions are packed into the buffer following preceding curves and additional contours are packed into the buffer following preceding contours. The buffer contains as many contours as fit within the buffer returned by GetGlyphOutline." I'm really not sure how to access the contours. I know that I can cast a pointer to another type of pointer, but I'm not sure how to get at the contours based on this documentation. Thanks
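
    A sketch of how that buffer layout translates into pointer arithmetic, in C (error handling trimmed; GGO_BEZIER requires a TrueType font, and with it the TTPOLYCURVE records are TT_PRIM_CSPLINE cubics):

        #include <windows.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            HDC dc = GetDC(NULL);
            SelectObject(dc, GetStockObject(DEFAULT_GUI_FONT));
            GLYPHMETRICS gm;
            MAT2 identity = { {0,1}, {0,0}, {0,0}, {0,1} };   /* 16.16 identity matrix */

            /* First call with a null buffer asks for the required size. */
            DWORD total = GetGlyphOutlineW(dc, L'A', GGO_BEZIER, &gm, 0, NULL, &identity);
            if (total == GDI_ERROR || total == 0) return 1;

            BYTE *buf = (BYTE *)malloc(total);
            GetGlyphOutlineW(dc, L'A', GGO_BEZIER, &gm, total, buf, &identity);

            DWORD offset = 0;
            while (offset < total) {                      /* one TTPOLYGONHEADER per contour */
                TTPOLYGONHEADER *hdr = (TTPOLYGONHEADER *)(buf + offset);
                BYTE *curve = (BYTE *)(hdr + 1);          /* first TTPOLYCURVE follows it */
                BYTE *end   = buf + offset + hdr->cb;     /* cb = size of this whole contour */
                printf("contour starts at %d,%d\n",
                       hdr->pfxStart.x.value, hdr->pfxStart.y.value);
                while (curve < end) {
                    TTPOLYCURVE *pc = (TTPOLYCURVE *)curve;
                    /* pc->wType is TT_PRIM_LINE or, with GGO_BEZIER, TT_PRIM_CSPLINE */
                    for (WORD i = 0; i < pc->cpfx; i++)
                        printf("  pt %d,%d\n",
                               pc->apfx[i].x.value, pc->apfx[i].y.value);
                    /* TTPOLYCURVE already contains one POINTFX, hence cpfx - 1 */
                    curve += sizeof(TTPOLYCURVE) + (pc->cpfx - 1) * sizeof(POINTFX);
                }
                offset += hdr->cb;                        /* next contour is packed right after */
            }
            free(buf);
            ReleaseDC(NULL, dc);
            return 0;
        }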


  • Need an explanation of this code - C#

    - by ltech
    I am getting familiar with C# day by day and I came across this piece of code: public static void CopyStreamToStream( Stream source, Stream destination, Action<Stream,Stream,Exception> completed) { byte[] buffer = new byte[0x1000]; AsyncOperation asyncOp = AsyncOperationManager.CreateOperation(null); Action<Exception> done = e => { if (completed != null) asyncOp.Post(delegate { completed(source, destination, e); }, null); }; AsyncCallback rc = null; rc = readResult => { try { int read = source.EndRead(readResult); if (read > 0) { destination.BeginWrite(buffer, 0, read, writeResult => { try { destination.EndWrite(writeResult); source.BeginRead( buffer, 0, buffer.Length, rc, null); } catch (Exception exc) { done(exc); } }, null); } else done(null); } catch (Exception exc) { done(exc); } }; source.BeginRead(buffer, 0, buffer.Length, rc, null); } from this article: Article. What I fail to follow is how the delegate gets notified that the copy is done. Say after the copy is done I want to perform an operation on the copied file.


  • PDP-11 assembly (simulator)

    - by lego69
    I have this code on pdp-11 tks = 177560 tkb = 177562 tps = 177564 tpb = 177566 lcs = 177546 . = torg + 2000 main: mov #main, sp mov #kb_int, @#60 mov #200, @#62 mov #101, @#tks mov #clock, @#100 mov #300, @#102 mov #100, @#lcs loop: mov @#tks,r2 aslb r2 bmi loop halt clock: tst bufferg beq clk_end mov #msg,-(sp) jsr pc, print_str tst (sp)+ clr bufferg bic #100,@#tks clr @#lcs clk_end:rti kb_int: mov r1,-(sp) jsr pc, read_char movb r1,@buff_ptr inc buff_ptr bis #1,@#tks cmpb r1,#'q bne next_if mov #0, @#tks next_if:cmpb r1,#32. bne end_kb_int clrb @buff_ptr mov #buffer,-(sp) jsr pc, print_str tst (sp)+ mov #buffer,buff_ptr end_kb_int: mov (sp)+,r1 rti ;############################# read_char: tstb @#tks bpl read_char movb @#tkb, r1 rts pc ;############################# print_char: tstb @#tps bpl print_char movb r1, @#tpb rts pc ;############################# print_str: mov r1,-(sp) mov r2,-(sp) mov 6(sp),r2 str_loop: movb (r2)+,r1 beq pr_str_end jsr pc, print_char br str_loop pr_str_end: mov (sp)+,r2 mov (sp)+,r1 rts pc . = torg + 3000 msg:.ascii<Something is wrong!> .byte 0 .even buff_ptr: .word buffer buffer: .blkw 3 bufferg: .word 0 Can somebody please explain how this part is working, thanks in advance movb r1,@buff_ptr inc buff_ptr bis #1,@#tks cmpb r1,#'q bne next_if mov #0, @#tks next_if:cmpb r1,#32. bne end_kb_int clrb @buff_ptr mov #buffer,-(sp) jsr pc, print_str tst (sp)+ mov #buffer,buff_ptr


  • Handling user interface in a multi-threaded application (or being forced to have a UI-only main thread)

    - by Patrick
    In my application, I have a 'logging' window, which shows all the logging, warnings and errors of the application. Last year my application was still single-threaded, so this worked quite well. Now I am introducing multithreading. I quickly noticed that it's not a good idea to update the logging window from different threads. Reading some articles on keeping the UI in the main thread, I made a communication buffer, into which the other threads add their logging messages, and from which the main thread takes the messages and shows them in the logging window (this is done in the message loop). Now, in one part of my application, memory usage increases dramatically, because the separate threads generate lots of logging messages and the main thread cannot empty the communication buffer quickly enough. After a while the memory decreases again (once the other threads have finished their work and the main thread gradually empties the communication buffer). I solved this problem by putting a maximum size on the communication buffer, but then I run into a problem in the following situation: the main thread has to perform a complex action; it takes some parts of the action and lets separate threads execute them; while the separate threads are executing their logic, the main thread processes their results and continues with its work once the other threads are finished. The problem is that in this situation, if the other threads perform logging, there is no UI message loop running, so the communication buffer fills up but is never emptied. I see two solutions to this problem: require the main thread to poll the communication buffer regularly, or perform only user-interface logic in the main thread (no other logic). I think the second solution seems best, but it may not be that easy to introduce in a big application (in my case it performs mathematical simulations). Are there any other solutions or tips? Or is one of the two proposed solutions the best, easiest, most pragmatic one? Thanks, Patrick
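
    For the first solution, the usual shape is a bounded blocking queue: producers wait when it is full, which is exactly the backpressure that caps memory, and the main thread drains it both from the message loop and from polling points inside long computations. A minimal sketch with POSIX threads (a real UI thread would use a timed or non-blocking variant of lq_pop so it never stalls):

        #include <pthread.h>

        #define QCAP 1024   /* hard bound: producers block instead of growing memory */

        typedef struct {
            const char *items[QCAP];
            int head, tail, count;
            pthread_mutex_t lock;
            pthread_cond_t not_full, not_empty;
        } log_queue;

        static log_queue q = {
            .lock      = PTHREAD_MUTEX_INITIALIZER,
            .not_full  = PTHREAD_COND_INITIALIZER,
            .not_empty = PTHREAD_COND_INITIALIZER,
        };

        /* Called from worker threads: waits while the queue is full. */
        void lq_push(const char *msg)
        {
            pthread_mutex_lock(&q.lock);
            while (q.count == QCAP)
                pthread_cond_wait(&q.not_full, &q.lock);
            q.items[q.tail] = msg;
            q.tail = (q.tail + 1) % QCAP;
            q.count++;
            pthread_cond_signal(&q.not_empty);
            pthread_mutex_unlock(&q.lock);
        }

        /* Called from the main thread's message loop or its polling points. */
        const char *lq_pop(void)
        {
            pthread_mutex_lock(&q.lock);
            while (q.count == 0)
                pthread_cond_wait(&q.not_empty, &q.lock);
            const char *msg = q.items[q.head];
            q.head = (q.head + 1) % QCAP;
            q.count--;
            pthread_cond_signal(&q.not_full);
            pthread_mutex_unlock(&q.lock);
            return msg;
        }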


  • How to handle image/gif type response on client side using GWT

    - by user200340
    Hi all, I have a question about how to handle an image/gif type response on the client side; any suggestion would be great. There is a service which is responsible for retrieving an image (only one at a time at the moment) from a database. The code is something like: JDBC Connection Construct MYSQL query. Execute query If has ResultSet, retrieve first one { //save image into Blob image, "img" is the only entity in the image table. image = rs.getBlob("img"); } response.setContentType("image/gif"); //set response type InputStream in = image.getBinaryStream(); //output Blob image to InputStream int bufferSize = 1024; //buffer size byte[] buffer = new byte[bufferSize]; //initial buffer int length =0; //read length data from inputstream and store into buffer while ((length = in.read(buffer)) != -1) { out.write(buffer, 0, length); //write into ServletOutputStream } in.close(); out.flush(); //write out The code on the client side: .... imgform.setAction(GWT.getModuleBaseURL() + "serviceexample/ImgRetrieve"); .... ClickListener { OnClick, then imgform.submit(); } formHandler { onSubmit, form validation onSubmitComplete ??????? //handle response, and display image } Here is my question: I tried Image img = new Image(GWT.getHostPageBaseURL() + "serviceexample/ImgRetrieve"); img.setSize("300", "300"); imgpanel.add(img); but I only got a non-displayed image of size 300x300. So, how should I handle the response in this case? Thanks,


  • How to start matching and saving matched text from an exact point in a text

    - by yuliya
    I have a text and I am writing a parser for it using regular expressions and Perl. I can match what I need on two empty lines (I use a regexp), because there is a pattern that allows recognizing blocks of text after two empty lines. But the problem is that the whole text has an introduction part and some text at the end that I do not need. Here is the code, which writes a new block file whenever it finds two empty lines: #!/usr/bin/perl use strict; use warnings; my $file = 'first'; open(my $fh, '<', $file); my $empty = 0; my $block_num = 1; open(OUT, '>', $block_num . '.txt'); while (my $line = <$fh>) { chomp ($line); if ($line =~ /^\s*$/) { $empty++; } elsif ($empty == 2) { close(OUT); open(OUT, '>', ++$block_num . '.txt'); $empty = 0; } else { $empty = 0;} print OUT "$line\n"; } close(OUT); This is an example of the text I need (it's really small :)) this is file example I think that I need to iterate over the text until it finds the word LOREM IPSUM, with a regexp like /^LOREM IPSUM/, because that is the point where the needed text starts (and save the text to a file once I reach the word), and I need to stop iterating, or start saving to a separate file, when the word INDEX is found. How could I implement this? Should I use the next function to skip lines, or what? BR, Yuliya
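
    The usual structure for "start at LOREM IPSUM, stop at INDEX" is a state flag checked on every line, independent of the two-empty-lines block logic. Sketched in C for concreteness (the same if/flag arrangement maps one-to-one onto the Perl while loop):

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            FILE *in = fopen("first", "r");
            if (!in) return 1;

            char line[4096];
            int in_body = 0;                         /* 0 until the start marker is seen */
            while (fgets(line, sizeof line, in)) {
                if (!in_body && strncmp(line, "LOREM IPSUM", 11) == 0)
                    in_body = 1;                     /* start processing from this line on */
                if (in_body && strncmp(line, "INDEX", 5) == 0)
                    break;                           /* stop before the trailing section */
                if (in_body)
                    fputs(line, stdout);             /* block-splitting logic would go here */
            }
            fclose(in);
            return 0;
        }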


  • Problem with reading and writing to binary file in C++

    - by Reem
    I need to make a file that contains a "name", which is a string (an array of char), and "data", which is an array of bytes (an array of char in C++). The first problem I faced is how to separate the "name" from the "data". A newline character could work in this case (assuming I don't have "\n" in the name), but since I could have special characters in the "data" part there's no way to know where it ends, so I'm putting an int value in the file before the data, holding the size of the "data". I tried to do this with code as follows: if((fp = fopen("file.bin","wb")) == NULL) { return false; } char buffer[] = "first data\n"; fwrite( buffer ,1,sizeof(buffer),fp ); int number[1]; number[0]=10; fwrite( number ,1,1, fp ); char data[] = "1234567890"; fwrite( data , 1, number[0], fp ); fclose(fp); but I didn't know if the "int" part was right, so I tried many other variants, including this one: char buffer[] = "first data\n"; fwrite( buffer ,1,sizeof(buffer),fp ); int size=10; fwrite( &size ,sizeof size,1, fp ); char data[] = "1234567890"; fwrite( data , 1, size, fp ); I see 4 "NULL" characters in the file when I open it instead of seeing an integer. Is that normal? The other problem I'm facing is reading that data back from the file. The code I tried didn't work at all :( I tried it with "fread", but I'm not sure if I should use "fseek" with it or just read the items one after another. Forgive me but I'm a beginner :(
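
    On the "NULL" characters: that is expected; the int 10 written with fwrite is stored as the 4 raw bytes 0A 00 00 00 on a little-endian machine, which a text editor shows as control characters rather than the digits "10". A minimal round-trip sketch in C, writing the newline-terminated name, the 4-byte length prefix, then the data, and reading them back in the same order:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            /* --- write: name line, then a length-prefixed data block --- */
            FILE *fp = fopen("file.bin", "wb");
            if (!fp) return 1;
            const char *name = "first data\n";
            fwrite(name, 1, strlen(name), fp);       /* name, terminated by '\n' */
            const char data[] = "1234567890";
            int len = (int)(sizeof data - 1);
            fwrite(&len, sizeof len, 1, fp);         /* 4-byte length prefix */
            fwrite(data, 1, (size_t)len, fp);        /* raw bytes, may contain anything */
            fclose(fp);

            /* --- read it back in the same order --- */
            fp = fopen("file.bin", "rb");
            if (!fp) return 1;
            char namebuf[256];
            if (!fgets(namebuf, sizeof namebuf, fp)) return 1;  /* reads up to '\n' */
            namebuf[strcspn(namebuf, "\n")] = '\0';             /* strip the separator */
            int rlen = 0;
            fread(&rlen, sizeof rlen, 1, fp);
            char *payload = malloc((size_t)rlen);
            fread(payload, 1, (size_t)rlen, fp);
            printf("name=%s, data length=%d\n", namebuf, rlen);
            free(payload);
            fclose(fp);
            return 0;
        }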


  • Gtk: How can I get a part of a file in a textview with scrollbars relating to the full file

    - by badgerman1
    I'm trying to make a very large file editor (where the editor only stores part of the buffer in memory at a time), but I'm stuck while building my textview object. Basically, I know that I have to update the textview buffer dynamically, and I don't know how to get the scrollbars to relate to the full file while the textview contains only a small buffer of it. I've played with Gtk.Adjustment on a Gtk.ScrolledWindow and scrollbars, but though I can extend the range of the scrollbars, they still apply to the range of the buffer and not the file size (which I try to set via the Gtk.Adjustment parameters) when I load into the textview. I need a widget that "knows" it is looking at part of a file, and can load/unload buffers as necessary to view different parts of it. So far, I believe I'll respond to "change_view" to detect when I'm off, or about to run off, the current buffer and need to load the next one, but I don't know how to make the top of the scrollbar range relate to the beginning of the file and the bottom to the end of the file, rather than to the loaded buffer in the textview. Any help would be greatly appreciated, thanks!


  • Why does mmap() fail with ENOMEM on a 1TB sparse file?

    - by metadaddy
    I've been working with large sparse files on openSUSE 11.2 x86_64. When I try to mmap() a 1TB sparse file, it fails with ENOMEM. I would have thought that the 64 bit address space would be adequate to map in a terabyte, but it seems not. Experimenting further, a 1GB file works fine, but a 2GB file (and anything bigger) fails. I'm guessing there might be a setting somewhere to tweak, but an extensive search turns up nothing. Here's some sample code that shows the problem - any clues? #include <errno.h> #include <fcntl.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/mman.h> #include <sys/types.h> #include <unistd.h> int main(int argc, char *argv[]) { char * filename = argv[1]; int fd; off_t size = 1UL << 40; // 30 == 1GB, 40 == 1TB fd = open(filename, O_RDWR | O_CREAT | O_TRUNC, 0666); ftruncate(fd, size); printf("Created %ld byte sparse file\n", size); char * buffer = (char *)mmap(NULL, (size_t)size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); if ( buffer == MAP_FAILED ) { perror("mmap"); exit(1); } printf("Done mmap - returned 0x0%lx\n", (unsigned long)buffer); strcpy( buffer, "cafebabe" ); printf("Wrote to start\n"); strcpy( buffer + (size - 9), "deadbeef" ); printf("Wrote to end\n"); if ( munmap(buffer, (size_t)size) < 0 ) { perror("munmap"); exit(1); } close(fd); return 0; }


  • iPhone AES encryption issue

    - by Dilshan
    Hi, I use the following code to encrypt using AES: - (NSData*)AES256EncryptWithKey:(NSString*)key theMsg:(NSData *)myMessage { // 'key' should be 32 bytes for AES256, will be null-padded otherwise char keyPtr[kCCKeySizeAES256 + 1]; // room for terminator (unused) bzero(keyPtr, sizeof(keyPtr)); // fill with zeroes (for padding) // fetch key data [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding]; NSUInteger dataLength = [myMessage length]; //See the doc: For block ciphers, the output size will always be less than or //equal to the input size plus the size of one block. //That's why we need to add the size of one block here size_t bufferSize = dataLength + kCCBlockSizeAES128; void* buffer = malloc(bufferSize); size_t numBytesEncrypted = 0; CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding, keyPtr, kCCKeySizeAES256, NULL /* initialization vector (optional) */, [myMessage bytes], dataLength, /* input */ buffer, bufferSize, /* output */ &numBytesEncrypted); if (cryptStatus == kCCSuccess) { //the returned NSData takes ownership of the buffer and will free it on deallocation return [NSData dataWithBytesNoCopy:buffer length:numBytesEncrypted]; } free(buffer); //free the buffer; return nil; } However, the encryptmessage variable below comes back null when I try to print it. The same thing applies to decryption as well. What am I doing wrong here? NSData *encrData = [self AES256EncryptWithKey:theKey theMsg:myMessage]; NSString *encryptmessage = [[NSString alloc] initWithData:encrData encoding:NSUTF8StringEncoding]; Thank you


  • Storing "binary" data type in C program

    - by puchu
    I need to create a program that converts one number system to other number systems. I used itoa in Windows (Dev-C++) and my only problem is that I do not know how to convert binary numbers to the other number systems. All the other number-system conversions work correctly. Does this involve something like storing the input to be converted using %? Here is a snippet of my work: case 2: { printf("\nEnter a binary number: "); scanf("%d", &num); itoa(num,buffer,8); printf("\nOctal %s",buffer); itoa(num,buffer,10); printf("\nDecimal %s",buffer); itoa(num,buffer,16); printf("\nHexadecimal %s \n",buffer); break; } For decimal I used %d, for octal I used %o and for hexadecimal I used %x. What could be the correct one for binary? Thanks for future answers!
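
    Going the other way (parsing binary input) is the part itoa cannot do; strtol with a radix of 2 can, provided the digits are read as text rather than with %d. A minimal sketch in C:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            char input[65];
            printf("Enter a binary number: ");
            if (scanf("%64s", input) != 1)     /* read the digits as a string, not %d */
                return 1;

            long num = strtol(input, NULL, 2); /* parse with radix 2 */

            printf("Octal %lo\n", num);
            printf("Decimal %ld\n", num);
            printf("Hexadecimal %lx\n", num);
            return 0;
        }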


  • Canvas draw calls are rendering out of sequence

    - by Tom Murray
    I have the following code for writing draw calls to a "back buffer" canvas, then placing those on the main canvas using drawImage. This is for optimization purposes and to ensure all images get placed in sequence. Before placing the buffer canvas on top of the main one, I'm using fillRect to create a dark-blue background on the main canvas. However, the blue background renders after the sprites. This is unexpected, as I make its fillRect call first. Here is my code: render: function() { this.buffer.clearRect(0,0,this.w,this.h); this.context.fillStyle = "#000044"; this.context.fillRect(0,0,this.w,this.h); for (var i in this.renderQueue) { for (var ii = 0; ii < this.renderQueue[i].length; ii++) { sprite = this.renderQueue[i][ii]; // Draw it! this.buffer.fillStyle = "green"; this.buffer.fillRect(sprite.x, sprite.y, sprite.w, sprite.h); } } this.context.drawImage(this.bufferCanvas,0,0); } This also happens when I use fillRect on the buffer canvas instead of the main one. Changing the globalCompositeOperation between 'source-over' and 'destination-over' (for both contexts) does nothing to change this. Paradoxically, if I instead place the blue fillRect inside the nested for loops with the other draw calls, it works as expected... Thanks in advance!


  • VB Change Calculator

    - by BlueBeast
    I am creating a VB 2008 change calculator as an assignment. The program is to use the amount paid - the amount due to calculate the total.(this is working fine). After that, it is to break that amount down into dollars, quarters, dimes, nickels, and pennies. The problem I am having is that sometimes the quantity of pennies, nickels or dimes will be a negative number. For example $2.99 = 3 Dollars and -1 Pennies. SOLVED Thanks to the responses, here is what I was able to make work with my limited knowledge. Option Explicit On Option Strict Off Option Infer Off Public Class frmMain Private Sub btnClear_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnClear.Click 'Clear boxes lblDollarsAmount.Text = String.Empty lblQuartersAmount.Text = String.Empty lblDimesAmount.Text = String.Empty lblNickelsAmount.Text = String.Empty lblPenniesAmount.Text = String.Empty txtOwed.Text = String.Empty txtPaid.Text = String.Empty lblAmountDue.Text = String.Empty txtOwed.Focus() End Sub Private Sub btnExit_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnExit.Click 'Close application' Me.Close() End Sub Private Sub btnCalculate_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnCalculate.Click ' Find Difference between Total Price and Total Received lblAmountDue.Text = Val(txtPaid.Text) - Val(txtOwed.Text) Dim intChangeAmount As Integer = lblAmountDue.Text * 100 'Declare Integers Dim intDollarsBack As Integer Dim intQuartersBack As Integer Dim intDimesBack As Integer Dim intNickelsBack As Integer Dim intPenniesBack As Integer ' Change Values Const intDollarValue As Integer = 100 Const intQuarterValue As Integer = 25 Const intDimeValue As Integer = 10 Const intNickelValue As Integer = 5 Const intPennyValue As Integer = 1 'Dollars intDollarsBack = CInt(Val(intChangeAmount \ intDollarValue)) intChangeAmount = intChangeAmount - Val(Val(intDollarsBack) * intDollarValue) lblDollarsAmount.Text = intDollarsBack.ToString 'Quarters intQuartersBack = CInt(Val(intChangeAmount \ intQuarterValue)) intChangeAmount = intChangeAmount - Val(Val(intQuartersBack) * intQuarterValue) lblQuartersAmount.Text = intQuartersBack.ToString 'Dimes intDimesBack = CInt(Val(intChangeAmount \ intDimeValue)) intChangeAmount = intChangeAmount - Val(Val(intDimesBack) * intDimeValue) lblDimesAmount.Text = intDimesBack.ToString 'Nickels intNickelsBack = CInt(Val(intChangeAmount \ intNickelValue)) intChangeAmount = intChangeAmount - Val(Val(intNickelsBack) * intNickelValue) lblNickelsAmount.Text = intNickelsBack.ToString 'Pennies intPenniesBack = CInt(Val(intChangeAmount \ intPennyValue)) intChangeAmount = intChangeAmount - Val(Val(intPenniesBack) * intPennyValue) lblPenniesAmount.Text = intPenniesBack.ToString End Sub End Class
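
    For reference, the classic cause of results like "3 Dollars and -1 Pennies" is doing the breakdown in floating point: an amount such as 2.99 is not exact in binary, so naive truncation of amount * 100 can land one cent off. Converting once to integer cents with rounding, as the corrected code above effectively does (VB rounds on the implicit conversion to Integer), keeps the divide/modulo chain exact. A small C illustration of the difference:

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double due = 2.99;
            int cents_bad  = (int)(due * 100);        /* may truncate 298.999... to 298 */
            int cents_good = (int)lround(due * 100);  /* round to the nearest cent first */
            printf("truncated: %d, rounded: %d\n", cents_bad, cents_good);

            int denoms[] = {100, 25, 10, 5, 1};       /* dollars .. pennies, in cents */
            int cents = cents_good;
            for (int i = 0; i < 5; i++) {
                printf("%d coin(s) of %d cents\n", cents / denoms[i], denoms[i]);
                cents %= denoms[i];                   /* carry the remainder down */
            }
            return 0;
        }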


  • Unix sort keys cause performance problems

    - by KenFar
    My data: It's a 71 MB file with 1.5 million rows. It has 6 fields. All six fields combine to form a unique key - so that's what I need to sort on. Sort statement: sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o output.csv input.csv The problem: If I sort without keys, it takes 30 seconds. If I sort with keys, it takes 660 seconds. I need to sort with keys to keep this generic and useful for other files that have non-key fields as well. The 30 second timing is fine, but the 660 is a killer. More details using unix time: sort input.csv -o output.csv = 28 seconds sort -t ',' -k1 input.csv -o output.csv = 28 seconds sort -t ',' -k1,1 input.csv -o output.csv = 64 seconds sort -t ',' -k1,1 -k2,2 input.csv -o output.csv = 194 seconds sort -t ',' -k1,1 -k2,2 -k3,3 input.csv -o output.csv = 328 seconds sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 input.csv -o output.csv = 483 seconds sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 input.csv -o output.csv = 561 seconds sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 input.csv -o output.csv = 660 seconds I could theoretically move the temp directory to SSD, and/or split the file into 4 parts, sort them separately (in parallel) then merge the results, etc. But I'm hoping for something simpler, since it looks like sort is just picking a bad algorithm. Any suggestions? Testing improvements using buffer size: With 2 keys I got a 5% improvement with 8, 20 and 24 MB, a best improvement of 8% with 16 MB, but 6% worse with 128 MB. With 6 keys I got a 5% improvement with 8, 20 and 24 MB, and a best improvement of 9% with 16 MB. Testing improvements using dictionary order (just 1 run each): sort -d --buffer-size=8M -t ',' -k1,1 -k2,2 input.csv -o output.csv = 235 seconds (21% worse) sort -d --buffer-size=8M -t ',' -k1,1 -k2,2 input.csv -o output.csv = 232 seconds (21% worse) Conclusion: it makes sense that this would slow the process down; not useful. Testing with a different file system on SSD: I can't do this on this server right now. Testing with code to consolidate adjacent keys: def consolidate_keys(key_fields, key_types): """ Inputs: - key_fields - a list of numbers in quotes: ['1','2','3'] - key_types - a list of types of the key_fields: ['integer','string','integer'] Outputs: - key_fields - a consolidated list: ['1,2','3'] - key_types - a list of types of the consolidated list: ['string','integer'] """ assert(len(key_fields) == len(key_types)) def get_min(val): vals = val.split(',') assert(len(vals) <= 2) return vals[0] def get_max(val): vals = val.split(',') assert(len(vals) <= 2) return vals[len(vals)-1] i = 0 while True: try: if ( (int(get_max(key_fields[i])) + 1) == int(key_fields[i+1]) and key_types[i] == key_types[i+1]): key_fields[i] = '%s,%s' % (get_min(key_fields[i]), key_fields[i+1]) key_types[i] = key_types[i] key_fields.pop(i+1) key_types.pop(i+1) continue i = i+1 except IndexError: break # last entry return key_fields, key_types While this code is just a work-around that only applies to cases in which I've got a contiguous set of keys, it speeds up the code by 95% in my worst-case scenario.


  • Pixel Perfect Collision Detection in Cocos2d-x

    - by Happybirthday
    I am trying to port the pixel perfect collision detection to Cocos2d-x; the original version was made for Cocos2D and can be found here: http://www.cocos2d-iphone.org/forums/topic/pixel-perfect-collision-detection-using-color-blending/ Here is my code for the Cocos2d-x version: bool CollisionDetection::areTheSpritesColliding(cocos2d::CCSprite *spr1, cocos2d::CCSprite *spr2, bool pp, CCRenderTexture* _rt) { bool isColliding = false; CCRect intersection; CCRect r1 = spr1->boundingBox(); CCRect r2 = spr2->boundingBox(); intersection = CCRectMake(fmax(r1.getMinX(), r2.getMinX()), fmax(r1.getMinY(), r2.getMinY()), 0, 0); intersection.size.width = fmin(r1.getMaxX(), r2.getMaxX()) - intersection.getMinX(); intersection.size.height = fmin(r1.getMaxY(), r2.getMaxY()) - intersection.getMinY(); // Look for simple bounding box collision if ( (intersection.size.width > 0) && (intersection.size.height > 0) ) { // If we're not checking for pixel perfect collisions, return true if (!pp) { return true; } unsigned int x = intersection.origin.x; unsigned int y = intersection.origin.y; unsigned int w = intersection.size.width; unsigned int h = intersection.size.height; unsigned int numPixels = w * h; //CCLog("Intersection X and Y %d, %d", x, y); //CCLog("Number of pixels %d", numPixels); // Draw into the RenderTexture _rt->beginWithClear(0, 0, 0, 0); // Render both sprites: first one in RED and second one in GREEN glColorMask(1, 0, 0, 1); spr1->visit(); glColorMask(0, 1, 0, 1); spr2->visit(); glColorMask(1, 1, 1, 1); // Get color values of intersection area ccColor4B *buffer = (ccColor4B *)malloc( sizeof(ccColor4B) * numPixels ); glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buffer); _rt->end(); // Read buffer unsigned int step = 1; for (unsigned int i = 0; i < numPixels; i += step) { ccColor4B color = buffer[i]; if (color.r > 0 && color.g > 0) { isColliding = true; break; } } // Free buffer memory free(buffer); } return isColliding; } My code works perfectly if I pass the "pp" parameter as false, that is, if I do only a bounding-box collision, but I am not able to get it working correctly when I need pixel-perfect collision. I think the OpenGL masking code is not working as I intended. Here is the code for "_rt": _rt = CCRenderTexture::create(visibleSize.width, visibleSize.height); _rt->setPosition(ccp(origin.x + visibleSize.width * 0.5f, origin.y + visibleSize.height * 0.5f)); this->addChild(_rt, 1000000); _rt->setVisible(true); //For testing I think I am making a mistake with the implementation of this CCRenderTexture. Can anyone tell me what I am doing wrong? Thank you for your time :)

