Search Results

Search found 4759 results on 191 pages for 'depth buffer'.


  • File Format DOS/Unix/MAC code sample

    - by mac
    I have written the following method to determine whether the file in question is formatted with DOS, MAC, or UNIX line endings. I see at least one obvious issue: I am hoping that I will get the EOL on the first run, say within the first 1000 bytes. This may or may not happen. I ask you to review this and suggest improvements which will harden the code and make it more generic. Thank you. new FileFormat().discover(fileName, 0, 1000); and then public void discover(String fileName, int offset, int depth) throws IOException { BufferedInputStream in = new BufferedInputStream(new FileInputStream(fileName)); FileReader a = new FileReader(new File(fileName)); byte[] bytes = new byte[(int) depth]; in.read(bytes, offset, depth); a.close(); in.close(); int thisByte; int nextByte; boolean isDos = false; boolean isUnix = false; boolean isMac = false; for (int i = 0; i < (bytes.length - 1); i++) { thisByte = bytes[i]; nextByte = bytes[i + 1]; if (thisByte == 10 && nextByte != 13) { isDos = true; break; } else if (thisByte == 13) { isUnix = true; break; } else if (thisByte == 10) { isMac = true; break; } } if (!(isDos || isMac || isUnix)) { discover(fileName, offset + depth, depth + 1000); } else { // do something clever } }
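
    For comparison, here is a minimal sketch of one way such a detector could be written, scanning byte by byte for the first CR/LF and mapping CRLF to DOS, a lone LF to UNIX and a lone CR to classic MAC. This is only an illustration under those assumptions; the class and method names are made up and it is not the poster's code.

        import java.io.BufferedInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        // Hypothetical sketch: classify a file by the first end-of-line sequence found.
        public class LineEndingSniffer {
            public enum Kind { DOS, UNIX, MAC, UNKNOWN }

            public static Kind sniff(String fileName) throws IOException {
                InputStream in = new BufferedInputStream(new FileInputStream(fileName));
                try {
                    int prev = -1;
                    int b;
                    while ((b = in.read()) != -1) {
                        if (prev == '\r') {
                            return (b == '\n') ? Kind.DOS : Kind.MAC; // CRLF vs lone CR
                        }
                        if (b == '\n') {
                            return Kind.UNIX; // lone LF
                        }
                        prev = b;
                    }
                    return (prev == '\r') ? Kind.MAC : Kind.UNKNOWN; // CR at EOF, or no EOL at all
                } finally {
                    in.close();
                }
            }
        }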

    Read the article

  • Wrapping variable width text in emacs lisp

    - by Jonathan Arkell
    I am hacking up a tagging application for emacs. I have got a tag cloud/weighted list successfully displaying in a buffer, but I am running into a snag. I need to be able to properly word-wrap the buffer, but I haven't a clue where to start. The font I am using is a variable width font. On top of that, each tag is going to be a different size, depending on how many times it shows up in the buffer. Finally, the window that displays the tag cloud could be 200 pixels wide, or the full screen width. I really have no idea where to start. I tried longlines mode on the tag cloud buffer, but that didn't work. Source code is at: http://emacswiki.org/cgi-bin/emacs/free-tagging.el

    Read the article

  • How do I judge when the NetworkStream finishes when using the .NET TcpClient to communicate

    - by Hwasin
    I try to use stream.DataAvailable to judge if it is finished, but sometimes the value is false and after a little while it is true again, so I have to set a counter and judge the end by the symbol '>', like this: int connectCounter = 0; while (connectCounter < 1200) { if (stream.DataAvailable) { while (stream.DataAvailable) { byte[] buffer = new byte[bufferSize]; int flag = stream.Read(buffer, 0, buffer.Length); string strReadXML_t = System.Text.Encoding.Default.GetString(buffer); strReadXML = strReadXML + strReadXML_t.Replace("\0", string.Empty); } if (strReadXML.Substring(strReadXML.Length - 1, 1).Equals(">")) { break; } } Thread.Sleep(100); connectCounter++; } Is there any good method to deal with it? Thank you!
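
    A common alternative to scanning for a terminator character is to frame every message with a length prefix, so the reader knows exactly how many bytes to wait for. The question is about C#, but the idea is language-independent; here is a small Java sketch of it (the helper names are made up for illustration):

        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.io.IOException;
        import java.net.Socket;

        // Sketch: each message is a 4-byte big-endian length followed by the payload.
        public final class LengthPrefixedFraming {
            public static void sendMessage(Socket socket, byte[] payload) throws IOException {
                DataOutputStream out = new DataOutputStream(socket.getOutputStream());
                out.writeInt(payload.length); // length prefix
                out.write(payload);
                out.flush();
            }

            public static byte[] receiveMessage(Socket socket) throws IOException {
                DataInputStream in = new DataInputStream(socket.getInputStream());
                int length = in.readInt();    // blocks until the 4-byte prefix arrives
                byte[] payload = new byte[length];
                in.readFully(payload);        // blocks until exactly 'length' bytes arrive
                return payload;
            }
        }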

    Read the article

  • ffmpeg(libavcodec). memory leaks in avcodec_encode_video

    - by gavlig
    I'm trying to transcode a video with the help of libavcodec. On transcoding big video files (an hour or more) I get huge memory leaks in avcodec_encode_video. I have tried to debug it, but with different video files different functions produce leaks, and I have got a little bit confused about that :). [Here](http://stackoverflow.com/questions/4201481/ffmpeg-with-qt-memory-leak) is the same issue that I have, but I have no idea how that person solved it. QtFFmpegwrapper seems to do the same as I do (or I missed something). My method is below. I take care of aFrame and aPacket outside with av_free and av_free_packet. int Videocut::encode( AVStream *anOutputStream, AVFrame *aFrame, AVPacket *aPacket ) { AVCodecContext *outputCodec = anOutputStream->codec; if (!anOutputStream || !aFrame || !aPacket) { return 1; /* NOTREACHED */ } uint8_t * buffer = (uint8_t *)malloc( sizeof(uint8_t) * _DefaultEncodeBufferSize ); if (NULL == buffer) { return 2; /* NOTREACHED */ } int packetSize = avcodec_encode_video( outputCodec, buffer, _DefaultEncodeBufferSize, aFrame ); if (packetSize < 0) { free(buffer); return 1; /* NOTREACHED */ } aPacket->data = buffer; aPacket->size = packetSize; return 0; }

    Read the article

  • Convert a byte array to a class containing a byte array in C#

    - by Mathijs
    I've got a C# function that converts a byte array to a class, given its type: IntPtr buffer = Marshal.AllocHGlobal(rawsize); Marshal.Copy(data, 0, buffer, rawsize); object result = Marshal.PtrToStructure(buffer, type); Marshal.FreeHGlobal(buffer); I use sequential structs: [StructLayout(LayoutKind.Sequential)] public new class PacketFormat : Packet.PacketFormat { } This worked fine, until I tried to convert to a struct/class containing a byte array. [StructLayout(LayoutKind.Sequential)] public new class PacketFormat : Packet.PacketFormat { public byte header; public byte[] data = new byte[256]; } Marshal.SizeOf(type) returns 16, which is too low (it should be 257) and causes Marshal.PtrToStructure to fail with the following error: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. I'm guessing that using a fixed array would be a solution, but can it also be done without having to resort to unsafe code?

    Read the article

  • Using ServletOutputStream to write very large files in a Java servlet without memory issues

    - by Martin
    I am using IBM WebSphere Application Server v6 and Java 1.4 and am trying to write large CSV files to the ServletOutputStream for a user to download. Files range from 50-750MB at the moment. The smaller files aren't causing too much of a problem, but with the larger files it appears that the data is being written into the heap, which then causes an OutOfMemory error and brings down the entire server. These files can only be served out to authenticated users over https, which is why I am serving them through a Servlet instead of just sticking them in Apache. The code I am using is (some fluff removed around this): resp.setHeader("Content-length", "" + fileLength); resp.setContentType("application/vnd.ms-excel"); resp.setHeader("Content-Disposition","attachment; filename=\"export.csv\""); FileInputStream inputStream = null; try { inputStream = new FileInputStream(path); byte[] buffer = new byte[1024]; int bytesRead = 0; do { bytesRead = inputStream.read(buffer, offset, buffer.length); resp.getOutputStream().write(buffer, 0, bytesRead); } while (bytesRead == buffer.length); resp.getOutputStream().flush(); } finally { if(inputStream != null) inputStream.close(); } The FileInputStream doesn't seem to be causing a problem, as if I write to another file or just remove the write completely the memory usage doesn't appear to be a problem. What I am thinking is that the resp.getOutputStream().write is being stored in memory until the data can be sent through to the client. So the entire file might be read and stored in the resp.getOutputStream(), causing my memory issues and crashing! I have tried buffering these streams and also tried using Channels from java.nio, none of which seems to make any bit of difference to my memory issues. I have also flushed the outputstream once per iteration of the loop and after the loop, which didn't help.
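
    For reference, a copy loop that reads until end-of-stream (rather than until the first short read) and only ever holds one chunk in memory might look like the sketch below. This is a generic illustration with made-up names; it does not by itself address container-level response buffering, which may still need to be tuned separately.

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        // Sketch: stream a file to an already-opened output stream in 8 KB chunks.
        public final class StreamCopy {
            public static void copy(String path, OutputStream out) throws IOException {
                InputStream in = new FileInputStream(path);
                try {
                    byte[] buffer = new byte[8 * 1024];
                    int bytesRead;
                    while ((bytesRead = in.read(buffer)) != -1) { // -1 signals end of stream
                        out.write(buffer, 0, bytesRead);          // write only what was actually read
                    }
                    out.flush();
                } finally {
                    in.close();
                }
            }
        }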

    Read the article

  • C# Begin/EndReceive - how do I read large data?

    - by ryeguy
    When reading data in chunks of say, 1024, how do I continue to read from a socket that receives a message bigger than 1024 bytes until there is no data left? Should I just use BeginReceive to read a packet's length prefix only, and then once that is retrieved, use Receive() (in the async thread) to read the rest of the packet? Or is there another way? edit: I thought Jon Skeet's link had the solution, but there is a bit of a speedbump with that code. The code I used is: public class StateObject { public Socket workSocket = null; public const int BUFFER_SIZE = 1024; public byte[] buffer = new byte[BUFFER_SIZE]; public StringBuilder sb = new StringBuilder(); } public static void Read_Callback(IAsyncResult ar) { StateObject so = (StateObject) ar.AsyncState; Socket s = so.workSocket; int read = s.EndReceive(ar); if (read > 0) { so.sb.Append(Encoding.ASCII.GetString(so.buffer, 0, read)); if (read == StateObject.BUFFER_SIZE) { s.BeginReceive(so.buffer, 0, StateObject.BUFFER_SIZE, 0, new AyncCallback(Async_Send_Receive.Read_Callback), so); return; } } if (so.sb.Length > 0) { //All of the data has been read, so displays it to the console string strContent; strContent = so.sb.ToString(); Console.WriteLine(String.Format("Read {0} byte from socket" + "data = {1} ", strContent.Length, strContent)); } s.Close(); } Now this corrected code works fine most of the time, but it fails when the packet's size is a multiple of the buffer size. The reason for this is that if the buffer gets filled on a read it is assumed there is more data; but the same problem happens as before. A 2 byte buffer, for example, gets filled twice on a 4 byte packet, and assumes there is more data. It then blocks because there is nothing left to read. The problem is that the receive function doesn't know when the end of the packet is. This got me thinking of two possible solutions: I could either have an end-of-packet delimiter or I could read the packet header to find the length and then receive exactly that amount (as I originally suggested). There are problems with each of these, though. I don't like the idea of using a delimiter, as a user could somehow work that into a packet in an input string from the app and screw it up. It also just seems kind of sloppy to me. The length header sounds OK, but I'm planning on using protocol buffers - I don't know the format of the data. Is there a length header? How many bytes is it? Would this be something I implement myself? Etc. What should I do?

    Read the article

  • Help needed with AES between Java and Objective-C (iPhone)....

    - by Simon Lee
    I am encrypting a string in Objective-C and also encrypting the same string in Java using AES, and am seeing some strange issues. The first part of the result matches up to a certain point, but then it is different, hence when I go to decode the result from Java on the iPhone it can't decrypt it. I am using a source string of "Now then and what is this nonsense all about. Do you know?" Using a key of "1234567890123456" The Objective-C code to encrypt is the following: NOTE: it is an NSData category, so assume that the method is called on an NSData object and 'self' contains the byte data to encrypt. - (NSData *)AESEncryptWithKey:(NSString *)key { char keyPtr[kCCKeySizeAES128+1]; // room for terminator (unused) bzero(keyPtr, sizeof(keyPtr)); // fill with zeroes (for padding) // fetch key data [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding]; NSUInteger dataLength = [self length]; //See the doc: For block ciphers, the output size will always be less than or //equal to the input size plus the size of one block. //That's why we need to add the size of one block here size_t bufferSize = dataLength + kCCBlockSizeAES128; void *buffer = malloc(bufferSize); size_t numBytesEncrypted = 0; CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding, keyPtr, kCCKeySizeAES128, NULL /* initialization vector (optional) */, [self bytes], dataLength, /* input */ buffer, bufferSize, /* output */ &numBytesEncrypted); if (cryptStatus == kCCSuccess) { //the returned NSData takes ownership of the buffer and will free it on deallocation return [NSData dataWithBytesNoCopy:buffer length:numBytesEncrypted]; } free(buffer); //free the buffer; return nil; } And the Java encryption code is... public byte[] encryptData(byte[] data, String key) { byte[] encrypted = null; Security.addProvider(new org.bouncycastle.jce.provider.BouncyCastleProvider()); byte[] keyBytes = key.getBytes(); SecretKeySpec keySpec = new SecretKeySpec(keyBytes, "AES"); try { Cipher cipher = Cipher.getInstance("AES/ECB/PKCS7Padding", "BC"); cipher.init(Cipher.ENCRYPT_MODE, keySpec); encrypted = new byte[cipher.getOutputSize(data.length)]; int ctLength = cipher.update(data, 0, data.length, encrypted, 0); ctLength += cipher.doFinal(encrypted, ctLength); } catch (Exception e) { logger.log(Level.SEVERE, e.getMessage()); } finally { return encrypted; } } The hex output of the Objective-C code is - 7a68ea36 8288c73d f7c45d8d 22432577 9693920a 4fae38b2 2e4bdcef 9aeb8afe 69394f3e 1eb62fa7 74da2b5c 8d7b3c89 a295d306 f1f90349 6899ac34 63a6efa0 and the Java output is - 7a68ea36 8288c73d f7c45d8d 22432577 e66b32f9 772b6679 d7c0cb69 037b8740 883f8211 748229f4 723984beb 50b5aea1 f17594c9 fad2d05e e0926805 572156d As you can see everything is fine up to - 7a68ea36 8288c73d f7c45d8d 22432577 I am guessing I have some of the settings different but can't work out what. I tried changing between ECB and CBC on the Java side and it had no effect. Can anyone help? Please....
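
    One explanation consistent with that output is that the Objective-C side is effectively running CBC with an all-zero IV (which is what CommonCrypto uses when no IV is supplied), while the Java side is running ECB; with a zero IV the first 16-byte block encrypts identically in both modes, which is why only the first block matches. Below is a sketch of the Java side configured as CBC with an explicit zero IV, purely as an illustration of matching the two modes; a fixed all-zero IV is not a secure choice in practice.

        import javax.crypto.Cipher;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.SecretKeySpec;

        // Sketch: AES in CBC mode with an all-zero IV, mirroring CommonCrypto's
        // behaviour when no IV is passed. Do not use a fixed zero IV in real code.
        public class AesCbcZeroIvSketch {
            public static byte[] encrypt(byte[] data, String key) throws Exception {
                SecretKeySpec keySpec = new SecretKeySpec(key.getBytes("UTF-8"), "AES");
                IvParameterSpec zeroIv = new IvParameterSpec(new byte[16]); // 16 zero bytes
                Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
                cipher.init(Cipher.ENCRYPT_MODE, keySpec, zeroIv);
                return cipher.doFinal(data);
            }
        }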

    Read the article

  • jQuery performance

    - by jAndy
    Hi Folks, imagine you have to do a lot of DOM manipulation (in my case, it's kind of a dynamic list). Look at this example: var $buffer = $('<ul/>', { 'class': '.custom-example', 'css': { 'position': 'absolute', 'top': '500px' } }); $.each(pages[pindex], function(i, v){ $buffer.append(v); }); $buffer.insertAfter($root); "pages" is an array which holds LI elements as jQuery objects. "$root" is a UL element. What happens after this code is that both ULs are animated (scrolling) and finally, within the callback of animate, this code is executed: $root.detach(); $root = $buffer; $root.css('top', '0px'); $buffer = null; This works very well; the only thing I'm pi**ed off about is the performance. I do cache all DOM elements I'm laying a hand on. Without looking too deep into jQuery's source code, is there a chance that my performance issues are located there? Does jQuery use DocumentFragments to append things? If you create a new DOM element with var new = $('<div/>') it is only stored in memory at this point, isn't it?

    Read the article

  • F# working with DataReader

    - by mamu
    let getBytesData x = let len = reader.GetBytes(1, int64 0, null, 0, 0); // Create a buffer to hold the bytes, and then // read the bytes from the DataTableReader. let buffer : byte array = [||] reader.GetBytes(1, int64 0, buffer, 0, int32 len) |> ignore buffer let retVal = List [ while reader.Read() do yield (reader.GetString(0), getBytesData reader, reader.GetDateTime(2)) ] I have the above code to read a byte[] from a DataReader. The getBytesData function takes the reader and returns a byte[] from it. Everything works fine, but the getBytesData function is written in a very non-functional way. Is there any way I can optimize it in F#? Sorry for this kind of question, but I have started a new project in F# to squeeze all the juice out of it, so I am trying to write each line in the most optimal way.

    Read the article

  • How to send a file via HTTP with Python

    - by ep45
    Hello, I have a problem. I use Apache with mod_wsgi and webpy, and when I send a file over HTTP, a lot of packets are lost. This is my code: web.header('Content-Type','video/x-flv') web.header('Content-length',sizeFile) f = file(FILE_PATH, 'rb') while True: buffer = f.read(4*1024) if buffer : yield buffer else : break f.close() What is wrong in my code? Thanks.

    Read the article

  • Reading a string with spaces with sscanf

    - by SDLFunTimes
    For a project I'm trying to read an int and a string from a string. The only problem is that sscanf appears to stop reading a %s when it sees a space. Is there any way to get around this limitation? Here's an example of what I'm trying to do: #include<stdio.h> #include<stdlib.h> int main(int argc, char** argv) { int age; char* buffer; buffer = malloc(200 * sizeof(char)); sscanf("19 cool kid", "%d %s", &age, buffer); printf("%s is %d years old\n", buffer, age); return 0; } What it prints is: "cool is 19 years old" where I need "cool kid is 19 years old". Does anyone know how to fix this?

    Read the article

  • Sending multipart response for downloads in Zend Framework

    - by takeshin
    I'm sending files in an action helper for downloads (in parts if needed) like this: ... $response->sendHeaders(); $chunksize = 1 * (1024 * 1024); $bytesSent = 0; if ($httpRange) { fseek($file, $range); } while(!feof($file) && (!connection_aborted() && ($bytesSent < $newLength)) ) { $buffer = fread($file, $chunksize); // $response->appendBody($buffer); // this would be better print($buffer); flush(); $bytesSent += strlen($buffer); } fclose($file); I suspect that a better way would be to make use of the $response object instead of print. What is the recommended way to send big response objects using Zend Framework?

    Read the article

  • Unable to capture standard output of process using Boost.Process

    - by Chris Kaminski
    Currently I am using Boost.Process from the Boost sandbox, and am having issues getting it to capture my standard output properly; I am wondering if someone can give me a second pair of eyeballs to see what I might be doing wrong. I'm trying to take thumbnails out of RAW camera images using DCRAW (latest version), and capture them for conversion to Qt QImages. The process launch function: namespace bf = ::boost::filesystem; namespace bp = ::boost::process; QImage DCRawInterface::convertRawImage(string path) { // commandline: dcraw -e -c <srcfile> -> piped to stdout. if ( bf::exists( path ) ) { std::string exec = "bin\\dcraw.exe"; std::vector<std::string> args; args.push_back("-v"); args.push_back("-c"); args.push_back("-e"); args.push_back(path); bp::context ctx; ctx.stdout_behavior = bp::capture_stream(); bp::child c = bp::launch(exec, args, ctx); bp::pistream &is = c.get_stdout(); ofstream output("C:\\temp\\testcfk.jpg"); streamcopy(is, output); } return (NULL); } inline void streamcopy(std::istream& input, std::ostream& out) { char buffer[4096]; int i = 0; while (!input.eof() ) { memset(buffer, 0, sizeof(buffer)); int bytes = input.readsome(buffer, sizeof buffer); out.write(buffer, bytes); i++; } } Invoking the converter: DCRawInterface DcRaw; DcRaw.convertRawImage("test/CFK_2439.NEF"); The goal is to simply verify that I can copy the input stream to an output file. Currently, if I comment out the following line: args.push_back("-c"); then the thumbnail is written by DCRAW to the source directory with a name of CFK_2439.thumb.jpg, which proves to me that the process is getting invoked with the right arguments. What's not happening is connecting to the output pipe properly. FWIW: I'm performing this test on Windows XP under Eclipse 3.5/latest MinGW (GCC 4.4).

    Read the article

  • Put message after long idle time does not work

    - by Sydney
    I wrote a simple Java client using MQ v7 libraries (no JMS). I try to put a message in a queue using the following pattern: put a message, wait for x minutes, put a message again. It works, but if the idle time is too long (between 5-7 minutes), I get the following error: MQJE001: An MQException occurred: Completion Code 2, Reason 2195 MQJE007: IO error reading message data Error occured during API call - reason code0 MQJE001: Completion Code 2, Reason 2009 MQJE001: An MQException occurred: Completion Code 2, Reason 2009 MQJE003: IO error transmitting message buffer MQJE001: Completion Code 2, Reason 2009 MQJE001: An MQException occurred: Completion Code 2, Reason 2009 MQJE003: IO error transmitting message buffer MQJE001: Completion Code 2, Reason 2009 MQJE001: An MQException occurred: Completion Code 2, Reason 2009 MQJE003: IO error transmitting message buffer MQJE001: Completion Code 2, Reason 2009 MQJE001: An MQException occurred: Completion Code 2, Reason 2009 MQJE003: IO error transmitting message buffer MQJE001: An MQException occurred: Completion Code 2, Reason 2009 MQJE003: IO error transmitting message buffer MQJE001: Completion Code 2, Reason 2009 An MQSeries error occurred : Completion code 2 Reason code 2009 com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2009 at com.ibm.mq.MQQueue.put(MQQueue.java:1511) After reading several threads on the subject, it seems this error usually produces FDC dumps, but I have nothing in the system or queue manager logs. The channel is a SVRCONN channel.
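
    Reason code 2009 (MQRC_CONNECTION_BROKEN) after a few idle minutes is typically a connection dropped by a firewall or channel timeout; one common client-side mitigation is to catch it, reconnect, and retry the put. A rough sketch of that pattern is below; the reconnect helper is hypothetical and the queue-manager details are omitted, so treat it only as an outline.

        import com.ibm.mq.MQException;
        import com.ibm.mq.MQMessage;
        import com.ibm.mq.MQPutMessageOptions;
        import com.ibm.mq.MQQueue;

        // Sketch: retry a put once after reconnecting when the connection was broken (2009).
        public class ResilientPut {
            private static final int MQRC_CONNECTION_BROKEN = 2009;

            public void putWithRetry(MQQueue queue, MQMessage message) throws MQException {
                try {
                    queue.put(message, new MQPutMessageOptions());
                } catch (MQException e) {
                    if (e.reasonCode == MQRC_CONNECTION_BROKEN) {
                        queue = reconnectAndReopenQueue(); // hypothetical helper: rebuild the MQQueueManager and queue
                        queue.put(message, new MQPutMessageOptions());
                    } else {
                        throw e;
                    }
                }
            }

            private MQQueue reconnectAndReopenQueue() {
                // Placeholder: re-create the MQQueueManager and re-open the queue here.
                throw new UnsupportedOperationException("reconnect logic omitted in this sketch");
            }
        }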

    Read the article

  • Android Dev Help: Saving an image from res/raw or the assets folder to the SD card

    - by Lucy
    Android Development Query. Hello, I wonder if anyone could help me. I am trying to save an image (jpg or png) from the res/raw or assets folder to the SD card location (/sdcard/DCIM/). I have been following a tutorial which can save an image from a URL to the SD card root, but I have looked everywhere for how to save from the res/raw or assets folder instead, and to a different location on the SD card (/sdcard/DCIM/). Here is the code; can anyone show me how to do the above from this? Thanks, Lucy public class home extends Activity { private File file; private String imgNumber; private Button btnDownload; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); btnDownload=(Button)findViewById(R.id.btnDownload); btnDownload.setOnClickListener(new OnClickListener() { public void onClick(View v) { btnDownload.setText("Download is in Progress."); String savedFilePath=Download("http://www.domain.com/android1.png"); Toast.makeText(getApplicationContext(), "File is Saved in "+savedFilePath, 1000).show(); if(savedFilePath!=null) { btnDownload.setText("Download Completed."); } } }); } public String Download(String Url) { String filepath=null; try { //set the download URL, a url that points to a file on the internet //this is the file to be downloaded URL url = new URL(Url); //create the new connection HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection(); //set up some things on the connection urlConnection.setRequestMethod("GET"); urlConnection.setDoOutput(true); //and connect! urlConnection.connect(); //set the path where we want to save the file //in this case, going to save it on the root directory of the //sd card. File SDCardRoot = Environment.getExternalStorageDirectory(); //create a new file, specifying the path, and the filename //which we want to save the file as. String filename= "download_"+System.currentTimeMillis()+".png"; // you can download to any type of file ex:.jpeg (image) ,.txt(text file),.mp3 (audio file) Log.i("Local filename:",""+filename); file = new File(SDCardRoot,filename); if(file.createNewFile()) { file.createNewFile(); } //this will be used to write the downloaded data into the file we created FileOutputStream fileOutput = new FileOutputStream(file); //this will be used in reading the data from the internet InputStream inputStream = urlConnection.getInputStream(); //this is the total size of the file int totalSize = urlConnection.getContentLength(); //variable to store total downloaded bytes int downloadedSize = 0; //create a buffer... byte[] buffer = new byte[1024]; int bufferLength = 0; //used to store a temporary size of the buffer //now, read through the input buffer and write the contents to the file while ( (bufferLength = inputStream.read(buffer)) > 0 ) { //add the data in the buffer to the file in the file output stream (the file on the sd card fileOutput.write(buffer, 0, bufferLength); //add up the size so we know how much is downloaded downloadedSize += bufferLength; //this is where you would do something to report the prgress, like this maybe Log.i("Progress:","downloadedSize:"+downloadedSize+"totalSize:"+ totalSize) ; btnDownload.setText("download Status:"+downloadedSize+" / "+totalSize); } //close the output stream when done fileOutput.close(); if(downloadedSize==totalSize) filepath=file.getPath(); //catch some possible errors... } catch (MalformedURLException e) { e.printStackTrace(); } catch (IOException e) { filepath=null; btnDownload.setText("Internet Connection Failed.\n"+e.getMessage()); e.printStackTrace(); } Log.i("filepath:"," "+filepath) ; return filepath; } }
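
    Since the source image is bundled with the app rather than fetched over HTTP, the same copy loop works with an InputStream opened from the raw resources or assets instead of a URLConnection. A minimal sketch is below; the raw resource ID and output file name are made up for illustration.

        import android.content.Context;
        import android.os.Environment;
        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        // Sketch: copy a bundled raw resource (or asset) to /sdcard/DCIM.
        public class RawToSdCard {
            public static File copyRawToDcim(Context context, int rawResId, String fileName) throws IOException {
                File dcimDir = new File(Environment.getExternalStorageDirectory(), "DCIM");
                if (!dcimDir.exists()) {
                    dcimDir.mkdirs();
                }
                File outFile = new File(dcimDir, fileName);

                InputStream in = context.getResources().openRawResource(rawResId);
                // For an asset instead: InputStream in = context.getAssets().open("android1.png");
                OutputStream out = new FileOutputStream(outFile);
                try {
                    byte[] buffer = new byte[1024];
                    int length;
                    while ((length = in.read(buffer)) > 0) {
                        out.write(buffer, 0, length);
                    }
                } finally {
                    in.close();
                    out.close();
                }
                return outFile;
            }
        }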

    Read the article

  • Read lines and change the lines that do not contain certain words and do not end with a dot

    - by igo
    I want to read some text files in a folder line by line. For example, one txt: Fast and Effective Text Mining Using Linear-time Document Clustering Bjornar Larsen WORD2 Chinatsu Aone SRA International AK, Inc. 4300 Fair Lakes Cow-l Fairfax, VA 22033 {bjornar-larsen, WORD1 I want to remove lines that do not contain the words = word, word2, word3 and do not end with a dot '.'. So, from the example, the result will be: Bjornar Larsen WORD2 Chinatsu Aone SRA International, Inc. {bjornar-larsen, WORD1 I am confused: how do I remove the line? Is that possible? Or can we replace them with a space? Here's the code: $url = glob($savePath.'*.txt'); foreach ($url as $file => $files) { $handle = fopen($files, "r") or die ('can not open file'); $ori_content= file_get_contents($files); foreach(preg_split("/((\r?\n)|(\r\n?))/", $ori_content) as $buffer){ $pos1 = stripos($buffer, $word1); $pos2 = stripos($buffer, $word2); $pos3 = stripos($buffer, $word3); $last = $str[strlen($buffer)-1];//read the las character if (true !== $pos1 OR true !== $pos2 OR true !==$pos3 && $last != '.'){ //how to remove } } } Please help me, thank you so much :)

    Read the article

  • Reading a string in TASM x86 assembly

    - by I_S_W
    Hi all, I am trying to read a string from the user in TASM assembly. I know I need a buffer to hold the input, the maximum length and the actual length, but I seem to have forgotten how exactly we declare such a buffer. My attempt was something like this: Buffer db 80 ;max length db ? ;actual length db 80 dup(0) ;I think here is my problem but can't remember the right format Thanks in advance

    Read the article

  • Iterating through String word at a time in Python

    - by AlgoMan
    I have a string buffer of a huge text file. I have to search for given words/phrases in the string buffer. What's the efficient way to do it? I tried using matches from the re module, but as I have a huge text corpus to search through, this is taking a large amount of time. Given a dictionary of words and phrases, I iterate through each file, read it into a string, search for all the words and phrases in the dictionary and increment the count in the dictionary if the keys are found. One small optimization that we thought of was to sort the dictionary of phrases/words from the maximum number of words to the lowest, and then compare each word start position from the string buffer against the list of words. If one phrase is found, we don't search for the other phrases (as it matched the longest phrase, which is what we want). Can someone suggest how to go word by word through the string buffer (iterate the string buffer word by word)? Also, is there any other optimization that can be done on this?

    Read the article

  • Why is it assumed that send may return with less than requested data transmitted on a blocking socket?

    - by Ernelli
    The standard method to send data on a stream socket has always been to call send with a chunk of data to write, check the return value to see if all data was sent and then keep calling send again until the whole message has been accepted. For example this is a simple example of a common scheme: int send_all(int sock, unsigned char *buffer, int len) { int nsent; while(len > 0) { nsent = send(sock, buffer, len, 0); if(nsent == -1) // error return -1; buffer += nsent; len -= nsent; } return 0; // ok, all data sent } Even the BSD manpage mentions that ...If no messages space is available at the socket to hold the message to be transmitted, then send() normally blocks... Which indicates that we should assume that send may return without sending all data. Now I find this rather broken, but even W. Richard Stevens assumes this in his standard reference book about network programming, not in the beginning chapters, but the more advanced examples use his own writen (write all data) function instead of calling write. Now I consider this still to be more or less broken, since if send is not able to transmit all data or accept the data in the underlying buffer and the socket is blocking, then send should block and return when the whole send request has been accepted. I mean, in the code example above, what will happen if send returns with less data sent is that it will be called right again with a new request. What has changed since the last call? At most a few hundred CPU cycles have passed, so the buffer is still full. If send now accepts the data, why couldn't it accept it before? Otherwise we will end up with an inefficient loop where we are trying to send data on a socket that cannot accept data and keep trying, or else? So it seems like the workaround, if needed, results in heavily inefficient code, and in those circumstances blocking sockets should be avoided entirely and non-blocking sockets together with select should be used instead.
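
    The same partial-write behaviour shows up in other APIs whenever the kernel's socket buffer cannot take the whole request at once; for example, Java's SocketChannel.write may consume only part of a ByteBuffer in non-blocking mode, and the conventional idiom is the same kind of drain loop. A small illustration follows (my own sketch, unrelated to the C code above; a real non-blocking design would wait for writability with a Selector rather than spin):

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.SocketChannel;

        // Sketch: keep calling write until the whole buffer has been drained.
        public final class WriteFully {
            public static void writeFully(SocketChannel channel, ByteBuffer buffer) throws IOException {
                while (buffer.hasRemaining()) {
                    channel.write(buffer); // may write fewer bytes than buffer.remaining()
                }
            }
        }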

    Read the article

  • Serial port : Read data problem, not reading complete data

    - by Anuj Mehta
    Hi, I have an application where I am sending data via a serial port from PC1 (Java app) and reading that data on PC2 (C++ app). The problem that I am facing is that my PC2 (C++ app) is not able to read the complete data sent by PC1, i.e. from PC1 I am sending 190 bytes but PC2 is able to read only close to 140 bytes, though I am trying to read in a loop. Below is a code snippet of my C++ app. Open the connection to the serial port: serialfd = open( serialPortName.c_str(), O_RDWR | O_NOCTTY | O_NDELAY); if (serialfd == -1) { /* * Could not open the port. */ TRACE << "Unable to open port: " << serialPortName << endl; } else { TRACE << "Connected to serial port: " << serialPortName << endl; fcntl(serialfd, F_SETFL, 0); } Configure the serial port parameters: struct termios options; /* * Get the current options for the port... */ tcgetattr(serialfd, &options); /* * Set the baud rates to 9600... */ cfsetispeed(&options, B38400); cfsetospeed(&options, B38400); /* * 8N1 * Data bits - 8 * Parity - None * Stop bits - 1 */ options.c_cflag &= ~PARENB; options.c_cflag &= ~CSTOPB; options.c_cflag &= ~CSIZE; options.c_cflag |= CS8; /* * Enable hardware flow control */ options.c_cflag |= CRTSCTS; /* * Enable the receiver and set local mode... */ options.c_cflag |= (CLOCAL | CREAD); // Flush the earlier data tcflush(serialfd, TCIFLUSH); /* * Set the new options for the port... */ tcsetattr(serialfd, TCSANOW, &options); Now I am reading data: const int MAXDATASIZE = 512; std::vector<char> m_vRequestBuf; char buffer[MAXDATASIZE]; int totalBytes = 0; fcntl(serialfd, F_SETFL, FNDELAY); while(1) { bytesRead = read(serialfd, &buffer, MAXDATASIZE); if(bytesRead == -1) { //Sleep for some time and read again usleep(900000); } else { totalBytes += bytesRead; //Add data read to vector for(int i =0; i < bytesRead; i++) { m_vRequestBuf.push_back(buffer[i]); } int newBytesRead = 0; //Now keep trying to read more data while(newBytesRead != -1) { //clear contents of buffer memset((void*)&buffer, 0, sizeof(char) * MAXDATASIZE); newBytesRead = read(serialfd, &buffer, MAXDATASIZE); totalBytes += newBytesRead; for(int j = 0; j < newBytesRead; j++) { m_vRequestBuf.push_back(buffer[j]); } }//inner while break; } //while

    Read the article

  • Strcpy and malloc issues

    - by mrblippy
    Hi, I am having trouble getting a method relating to a linked list working. I get the errors: "assignment makes pointer from integer without a cast" and "passing argument 1 of 'strcpy' makes pointer from integer without a cast". I have tried to include all the relevant code, but let me know if you need more info. Thanks. struct unit { char code[5]; char *name; node_ptr students; }; typedef struct node *node_ptr; struct node { int student_id; char *studentname; node_ptr next; }; void enrol_student(struct unit u[], int n) { int i, p; int student_id = 0; char code_to_enrol[7]; char buffer[100]; node_ptr studentslist; scanf("%s\n", code_to_enrol); for(i=0; i <= n; i++) { studentslist = u[i].students; if(strcmp(u[i].code ,code_to_enrol)<=0) { scanf("enter student details %d %s\n", &studentID, buffer); p = (char *) malloc (strlen(buffer)+1); strcpy(p, buffer); insert_in_order(student_id, buffer, studentslist); } } } void insert_in_order(int n, char *i, node_ptr list) { node_ptr before = list; node_ptr students = (node_ptr) malloc(sizeof(struct node)); students->ID = n; students->name = *i; while(before->next && (before->next->ID < n)) { before = before->next; } students->next = before->next; before->next = students; }

    Read the article

  • File transfer eating a lot of CPU

    - by Dan C.
    I'm trying to transfer a file over an IHttpHandler; the code is pretty simple. However, when I start a single transfer it uses about 20% of the CPU. If I were to scale this to 20 simultaneous transfers, the CPU usage would be very high. Is there a better way I can be doing this to keep the CPU lower? The client code just sends over chunks of the file 64KB at a time. public void ProcessRequest(HttpContext context) { if (context.Request.Params["secretKey"] == null) { } else { accessCode = context.Request.Params["secretKey"].ToString(); } if (accessCode == "test") { string fileName = context.Request.Params["fileName"].ToString(); byte[] buffer = Convert.FromBase64String(context.Request.Form["data"]); string fileGuid = context.Request.Params["smGuid"].ToString(); string user = context.Request.Params["user"].ToString(); SaveFile(fileName, buffer, user); } } public void SaveFile(string fileName, byte[] buffer, string user) { string DirPath = @"E:\Filestorage\" + user + @"\"; if (!Directory.Exists(DirPath)) { Directory.CreateDirectory(DirPath); } string FilePath = @"E:\Filestorage\" + user + @"\" + fileName; FileStream writer = new FileStream(FilePath, File.Exists(FilePath) ? FileMode.Append : FileMode.Create, FileAccess.Write, FileShare.ReadWrite); writer.Write(buffer, 0, buffer.Length); writer.Close(); }

    Read the article

  • How to continuously send data without blocking?

    - by Donal Rafferty
    I am trying to send RTP audio data from my Android application. I currently can send one RTP packet with the code below, and I also have another class that extends Thread that listens for and receives RTP packets. My question is: how do I continuously send my updated buffer through the packet payload without blocking the receiving thread? public void run() { isRecording = true; android.os.Process.setThreadPriority (android.os.Process.THREAD_PRIORITY_URGENT_AUDIO); int buffersize = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT); Log.d("BUFFERSIZE","Buffer size = " + buffersize); arec = new AudioRecord(MediaRecorder.AudioSource.MIC, 8000, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, buffersize); short[] readBuffer = new short[80]; byte[] buffer = new byte[160]; arec.startRecording(); while(arec.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING){ int frames = arec.read(readBuffer, 0, 80); @SuppressWarnings("unused") int lenghtInBytes = codec.encode(readBuffer, 0, buffer, frames); RtpPacket rtpPacket = new RtpPacket(); rtpPacket.setV(2); rtpPacket.setX(0); rtpPacket.setM(0); rtpPacket.setPT(0); rtpPacket.setSSRC(123342345); rtpPacket.setPayload(buffer, 160); try { rtpSession2.sendRtpPacket(rtpPacket); } catch (UnknownHostException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (RtpException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } So when I send on one device and receive on another I get decent audio, but when I send and receive on both I get broken sound, like it's taking turns to send and receive audio. I have a feeling it could be to do with the while loop? It could be looping around in there and not letting anything else run?
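
    One way to keep the two directions from starving each other is to give sending and receiving their own threads and their own sockets, so the capture/send loop never waits on the receive loop. The sketch below shows only that structure with plain DatagramSockets; the packet building, codec calls, ports and helper names are placeholders, not the RtpSession API from the question.

        import java.io.IOException;
        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;

        // Sketch: independent send and receive threads so neither loop blocks the other.
        public class FullDuplexAudioSketch {
            private volatile boolean running = true;

            public void start(final InetAddress peer, final int sendPort, final int receivePort) throws IOException {
                final DatagramSocket sendSocket = new DatagramSocket();
                final DatagramSocket receiveSocket = new DatagramSocket(receivePort);

                new Thread(new Runnable() {
                    public void run() {
                        while (running) {
                            try {
                                byte[] payload = captureAndEncodeFrame(); // placeholder for AudioRecord + codec
                                sendSocket.send(new DatagramPacket(payload, payload.length, peer, sendPort));
                            } catch (IOException e) {
                                e.printStackTrace();
                            }
                        }
                    }
                }).start();

                new Thread(new Runnable() {
                    public void run() {
                        byte[] buffer = new byte[1500];
                        while (running) {
                            try {
                                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                                receiveSocket.receive(packet); // blocks this thread only
                                decodeAndPlay(packet.getData(), packet.getLength()); // placeholder for codec + AudioTrack
                            } catch (IOException e) {
                                e.printStackTrace();
                            }
                        }
                    }
                }).start();
            }

            public void stop() { running = false; }

            private byte[] captureAndEncodeFrame() { return new byte[160]; } // stub: would capture and encode audio
            private void decodeAndPlay(byte[] data, int length) { }          // stub: would decode and play audio
        }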

    Read the article
