Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.

Page 124 of 238

  • Push notification is successfully sent, but the device does not receive it (occasionally)

    - by ashiina
    I have been having a problem since yesterday where some devices do not receive a push notification. The certificate and device token appear to be correct, since the devices used to receive push notifications successfully until yesterday. On the server side there are no errors or connection refusals, and the push notification appears to be sent successfully every time. Still, there are many occasions where the device does not receive the push. Some surrounding information:

    - I am doing this in the production environment.
    - There are no errors or connection refusals on the server side.
    - I am sending exactly the same JSON every time.
    - Two of our devices have not received the push notification at all since yesterday.
    - One of our devices receives push notifications at a lower success rate (about 70%) than yesterday.
    - One or two of our devices still receive push notifications successfully even now.

    All of the above devices were able to receive push notifications properly in the production environment until yesterday. There is no difference in the server-side result between a push that succeeds and one the device never receives, so it has been virtually impossible to identify the problem. This is the server-side PHP code I am using:

        $ctx = stream_context_create();
        stream_context_set_option($ctx, 'ssl', 'local_cert', $this->apnsData[$development]['certificate']);
        $fp = stream_socket_client($this->apnsData[$development]['ssl'], $error, $errorString, 100,
            (STREAM_CLIENT_CONNECT | STREAM_CLIENT_PERSISTENT), $ctx);
        if (!$fp) {
            $this->_pushFailed($pid);
            $this->_triggerError("Failed to connect to APNS: {$error} {$errorString}.");
        } else {
            // Legacy "simple" binary frame: command 0, token length, token, payload length, payload.
            $msg = chr(0) . pack("n", 32) . pack('H*', $token) . pack("n", strlen($message)) . $message;
            $fwrite = fwrite($fp, $msg);
            if (!$fwrite) {
                error_log("[APNS] push failed...");
                $this->_pushFailed($pid);
                $this->_triggerError("Failed writing to stream.", E_USER_ERROR);
            } else {
                error_log("[APNS] push successful! ::: $token -> $message ($fwrite bytes)");
            }
            fclose($fp);
        }

    The log tells me that the push was successful (cutting out the token for privacy):

        [Wed Dec 12 11:42:00 2012] [error] [client 10.161.6.177] [APNS] push successful! ::: aa4f******44 -> {"aps":{"alert":{"body":"\\u300casdfasdf\\u300d","action-loc-key":"OK"},"badge":4,"sound":"chime"}} (134 bytes)

    Is there any way I can get help with this problem, or is anybody else having the same issue? Please help! I am getting complaints from some users about this.
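
    One thing the simple (command 0) frame in the code above cannot tell you is why a push that was written successfully never arrives: with the legacy APNS binary interface, an invalid device token makes Apple close the connection and silently discard every notification written after the bad one, which looks exactly like "sent but never received". The enhanced (command 1) frame adds an identifier and lets you read a 6-byte error response. Below is a sketch of just the frame layout and the error read, in Java for illustration (the question's code is PHP); TLS socket setup and a read timeout are omitted, and the binary interface has since been replaced by the HTTP/2 provider API.

        import java.io.ByteArrayOutputStream;
        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.io.InputStream;
        import java.nio.charset.StandardCharsets;

        public class ApnsEnhancedFrame {

            // Enhanced-format frame (command 1): identifier and expiry precede the
            // token, so APNS can report back which notification it rejected.
            static byte[] build(int identifier, int expirySeconds, byte[] token, String payloadJson) throws Exception {
                byte[] payload = payloadJson.getBytes(StandardCharsets.UTF_8);
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                DataOutputStream out = new DataOutputStream(buf);
                out.writeByte(1);              // command: enhanced notification
                out.writeInt(identifier);      // arbitrary id, echoed back on error
                out.writeInt(expirySeconds);   // UNIX time after which APNS may drop it
                out.writeShort(token.length);  // 32 for a standard device token
                out.write(token);
                out.writeShort(payload.length);
                out.write(payload);
                return buf.toByteArray();
            }

            // APNS answers only on failure: 6 bytes (command 8, status, identifier),
            // then it closes the connection; anything written after the rejected
            // notification on that connection is dropped. This call blocks, so real
            // code should use a socket read timeout.
            static void checkError(InputStream in) throws Exception {
                DataInputStream din = new DataInputStream(in);
                int command = din.readUnsignedByte();  // should be 8
                int status = din.readUnsignedByte();   // e.g. 8 = invalid token
                int identifier = din.readInt();
                System.err.println("APNS rejected notification " + identifier
                        + " (command " + command + ", status " + status + ")");
            }
        }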

    Read the article

  • Super fast getimagesize in php

    - by Sir Lojik
    Hi all, I'm trying to get the image size (dimensions) of hundreds of remote images, and getimagesize is way too slow. I've done some reading and found that the quickest way would be to use file_get_contents to read a certain amount of bytes from each image and examine the dimensions within the binary data. Has anyone attempted this before? How would I examine the different formats? Has anyone seen a library for this? Please let me know.
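
    The question is about PHP, but the approach it describes (download only the first bytes and read the dimensions straight out of the image header) is language-neutral. Below is a minimal sketch of that idea in Java, handling only PNG and GIF; the class and method names are made up for illustration, and a full version would also need the JPEG SOFn-marker scan.

        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class RemoteImageSize {

            // Fetch at most maxBytes from the URL; the Range header asks the server
            // to send only the beginning of the file (many servers honour it).
            static byte[] fetchHead(String url, int maxBytes) throws Exception {
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                conn.setRequestProperty("Range", "bytes=0-" + (maxBytes - 1));
                try (InputStream in = conn.getInputStream()) {
                    byte[] buf = new byte[maxBytes];
                    int off = 0, n;
                    while (off < maxBytes && (n = in.read(buf, off, maxBytes - off)) != -1) {
                        off += n;
                    }
                    byte[] head = new byte[off];
                    System.arraycopy(buf, 0, head, 0, off);
                    return head;
                } finally {
                    conn.disconnect();
                }
            }

            // Parse width/height from the first bytes of a PNG or GIF header.
            static int[] dimensions(byte[] h) {
                if (h.length >= 24 && (h[0] & 0xFF) == 0x89 && h[1] == 'P' && h[2] == 'N' && h[3] == 'G') {
                    int w  = ((h[16] & 0xFF) << 24) | ((h[17] & 0xFF) << 16) | ((h[18] & 0xFF) << 8) | (h[19] & 0xFF);
                    int ht = ((h[20] & 0xFF) << 24) | ((h[21] & 0xFF) << 16) | ((h[22] & 0xFF) << 8) | (h[23] & 0xFF);
                    return new int[] { w, ht };
                }
                if (h.length >= 10 && h[0] == 'G' && h[1] == 'I' && h[2] == 'F') {
                    int w  = (h[6] & 0xFF) | ((h[7] & 0xFF) << 8);   // GIF stores these little-endian
                    int ht = (h[8] & 0xFF) | ((h[9] & 0xFF) << 8);
                    return new int[] { w, ht };
                }
                return null; // unknown or unsupported format
            }

            public static void main(String[] args) throws Exception {
                byte[] head = fetchHead(args[0], 1024);  // 1 KB is plenty for PNG/GIF headers
                int[] d = dimensions(head);
                System.out.println(d == null ? "unknown format" : d[0] + " x " + d[1]);
            }
        }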

    Read the article

  • How to convert from base-256 to base-N, where N is higher than 16?

    - by mark
    Dear ladies and sirs. I need to convert an array of bytes to another base, namely 85. In mathematical terms, the question is: how do I convert from base-256 to base-85 in the most efficient way? This question is inspired by my previous question - http://stackoverflow.com/questions/2827627/what-is-the-most-efficient-way-to-encode-an-arbitrary-guid-into-readable-ascii-3 Thanks.
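
    The original context is .NET, but the conversion itself is language-agnostic: treat the byte array as one big base-256 number and repeatedly divide by 85, collecting the remainders as base-85 digits. Here is a sketch of that in Java using BigInteger (the alphabet shown is the RFC 1924 set, but any 85 distinct symbols work; note that leading zero bytes are not preserved by this simple numeric scheme):

        import java.math.BigInteger;

        public class Base85 {
            // 85 symbols: 0-9, A-Z, a-z, then 23 punctuation characters.
            private static final String ALPHABET =
                "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz!#$%&()*+-;<=>?@^_`{|}~";

            // Interpret the bytes as a big-endian unsigned number and repeatedly
            // divide by 85; each remainder is one base-85 digit.
            static String toBase85(byte[] bytes) {
                BigInteger value = new BigInteger(1, bytes);   // 1 = treat as non-negative
                if (value.signum() == 0) return String.valueOf(ALPHABET.charAt(0));
                BigInteger base = BigInteger.valueOf(85);
                StringBuilder sb = new StringBuilder();
                while (value.signum() > 0) {
                    BigInteger[] qr = value.divideAndRemainder(base);
                    sb.append(ALPHABET.charAt(qr[1].intValue()));
                    value = qr[0];
                }
                return sb.reverse().toString();                // most significant digit first
            }

            public static void main(String[] args) {
                byte[] sample = new byte[] { 0x12, 0x34, (byte) 0xAB, (byte) 0xCD };
                System.out.println(toBase85(sample));
            }
        }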

    Read the article

  • Returning the size of available virtual memory at run-time in C++

    - by Greenhouse Gases
    In C++, is there a predefined library function that will return the amount of RAM currently available on the computer the program is running on, at run-time? For instance, if an object is 4 bytes, can we divide the available virtual memory by 4 bytes to get an estimate of how many more objects the program could safely store? I have used the sizeof operator to return the size of objects within my program. Thanks

    Read the article

  • which text writing classes are the most efficient for writing lots of small files?

    - by Robert H
    I have to write 300+ files to a server share on an hourly basis. A quick implementation using CreateText takes approximately 1.4 seconds per file. I know there is a better way to do this, but I am unsure which way is actually the quickest/most efficient; hence my question: which text-writing class is the most efficient for writing hundreds of small files (336 bytes on average) to a server share?
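
    The question is presumably about .NET's File.CreateText, but the same consideration applies in any language: for files this small, the time is dominated by the round trips to the server share (open, write, close), not by the choice of writer class, so it usually helps to write each file with a single call and to overlap the latency by writing a few files in parallel. A rough sketch of that idea in Java (the UNC path, file names and pool size are illustrative assumptions):

        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class SmallFileWriter {

            // Write one ~336-byte file in a single call: one create, one write, one close.
            static void writeFile(Path dir, String name, String content) {
                try {
                    Files.write(dir.resolve(name), content.getBytes(StandardCharsets.UTF_8));
                } catch (Exception e) {
                    System.err.println("failed: " + name + " (" + e + ")");
                }
            }

            public static void main(String[] args) throws Exception {
                Path share = Paths.get("\\\\server\\share\\hourly");    // hypothetical UNC path
                ExecutorService pool = Executors.newFixedThreadPool(8); // overlap network latency
                for (int i = 0; i < 300; i++) {
                    final String name = "file_" + i + ".txt";
                    final String content = "...336 bytes of text...";
                    pool.submit(() -> writeFile(share, name, content));
                }
                pool.shutdown();
                pool.awaitTermination(5, TimeUnit.MINUTES);
            }
        }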

    Read the article

  • I got the address of a large managed object in WinDbg, what next?

    - by Mahen
    I created a high-memory-utilization dump, and using !dumpheap -stat and !dumpheap -mt I got the addresses of two large generic string lists of 30 MB each. I want to know more about these lists: what they contain, and which piece of code is using them. Is there a way to find that out?

        0:000> !do 2b370038
        Name: System.Object[]
        MethodTable: 71e240bc
        EEClass: 71c0da54
        Size: 33554448(0x2000010) bytes
        Array: Rank 1, Number of elements 8388608, Type CLASS
        Element Type: System.Collections.Generic.List`1[[System.String, mscorlib]]
        Fields:
        None

    Read the article

  • How do I read and write to a file using threads in java?

    - by WarmWaffles
    I'm writing an application where I need to read blocks from a single file; each block is roughly 512 bytes. I also need to write blocks simultaneously. One of the ideas I had was a BlockReader that implements Runnable, a BlockWriter that implements Runnable, and a BlockManager that manages both the reader and the writer. The problem I am seeing with most examples I have found is locking problems and potential deadlock situations. Any ideas on how to implement this?
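
    One way to sidestep the deadlock worry is to keep a single RandomAccessFile behind the BlockManager and guard both block operations with the same lock, so the reader and writer threads can run concurrently but a seek can never interleave with another thread's read or write. A minimal sketch of that idea (the class names follow the question; everything else, including the pre-sized file, is an assumption made for the demo):

        import java.io.RandomAccessFile;

        public class BlockManager {
            public static final int BLOCK_SIZE = 512;
            private final RandomAccessFile file;

            // Pre-size the file so a reader never runs past the end of it.
            public BlockManager(String path, int blockCount) throws Exception {
                this.file = new RandomAccessFile(path, "rw");
                file.setLength((long) blockCount * BLOCK_SIZE);
            }

            // A single intrinsic lock guards seek+read and seek+write, so the two
            // can never interleave; one lock also means there is nothing to deadlock on.
            public synchronized byte[] readBlock(long blockIndex) throws Exception {
                byte[] buf = new byte[BLOCK_SIZE];
                file.seek(blockIndex * BLOCK_SIZE);
                file.readFully(buf);
                return buf;
            }

            public synchronized void writeBlock(long blockIndex, byte[] data) throws Exception {
                file.seek(blockIndex * BLOCK_SIZE);   // data is expected to be BLOCK_SIZE bytes
                file.write(data, 0, BLOCK_SIZE);
            }

            public static void main(String[] args) throws Exception {
                BlockManager mgr = new BlockManager("blocks.dat", 100);

                Thread writer = new Thread(() -> {    // plays the role of BlockWriter
                    try {
                        byte[] block = new byte[BLOCK_SIZE];
                        for (int i = 0; i < 100; i++) mgr.writeBlock(i, block);
                    } catch (Exception e) { e.printStackTrace(); }
                });
                Thread reader = new Thread(() -> {    // plays the role of BlockReader
                    try {
                        for (int i = 0; i < 100; i++) mgr.readBlock(i % 10);
                    } catch (Exception e) { e.printStackTrace(); }
                });

                writer.start();
                reader.start();
                writer.join();
                reader.join();
            }
        }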

    Read the article

  • How to send stream data via Bluetooth from an iPhone/iPod Touch to a Windows C++ application?

    - by PLinhol
    Hello, I need to develop an iPhone/iPod Touch application that creates a server to send a data stream (characters or bytes) to a Windows C++ application via Bluetooth. I'm thinking of creating a TCP connection, but I don't know where to start. What iPhone API should I use to do something like this? Does anyone know of code examples I could use? And on the Windows side, what should I use to support this kind of communication? Thanks

    Read the article

  • How to do event-based serial port reading in C?

    - by moon
    I want to read the serial port only when there is data present, i.e. read it on the event when data arrives instead of continuously polling the port. I have this code for continuously reading the port; how can I make it event based? Thanks in advance.

        while (1)
        {
            bReadRC = ReadFile(m_hCom, &byte, 6, &iBytesRead, NULL);
            printf("Data received through the serial port; number of bytes received is %d", iBytesRead);
        }

    Read the article

  • HttpClient not returning entire response

    - by whakojacko
    Using HttpClient 4.0, I'm having an issue where the response I get from the ResponseHandler is only about half of what the actual page content should be (~61k bytes in the string vs. ~125k in the page returned to a browser). I can't seem to find any setting that would impose a limit like this. Any ideas?
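
    A hedged guess, since the question doesn't show the handler: if the ResponseHandler reads the entity's InputStream with a single read() call, or converts it without the right charset, the resulting string can come out short. Letting EntityUtils drain the stream is the usual approach with HttpClient 4.0; a minimal sketch (the URL is a placeholder):

        import org.apache.http.HttpEntity;
        import org.apache.http.HttpResponse;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.util.EntityUtils;

        public class FullResponse {
            public static void main(String[] args) throws Exception {
                HttpClient client = new DefaultHttpClient();
                try {
                    HttpGet get = new HttpGet("http://example.com/page");  // placeholder URL
                    HttpResponse response = client.execute(get);
                    HttpEntity entity = response.getEntity();

                    // EntityUtils.toString keeps reading until the stream is exhausted,
                    // so a partial socket read cannot silently truncate the body.
                    String body = (entity == null) ? "" : EntityUtils.toString(entity, "UTF-8");
                    System.out.println("length in characters: " + body.length());
                } finally {
                    client.getConnectionManager().shutdown();
                }
            }
        }

    Also worth noting: String.length() counts characters, not bytes, so a page with many multi-byte UTF-8 characters will look smaller than the byte count a browser reports.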

    Read the article

  • Extending the RADIUS Protocol

    - by vijay.j
    I am using the RADIUS protocol to send some values from a client to a server. Within that, I am using vendor-specific attribute-value pairs and defining our own types. However, the value length for vendor-specific data is limited to 255 bytes, and our data is longer than that. Can anyone please tell me how to incorporate data longer than 255 bytes?
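
    A common way around the one-octet length field, offered here as a sketch rather than a spec reference: split the value across several instances of the same vendor-specific attribute, sent in order, and concatenate the fragments on the server, much as EAP-Message data is carried across multiple attributes. The chunking itself is trivial; in Java it might look like the following (the 246-byte chunk size and the class name are illustrative assumptions, since the exact limit depends on your attribute header overhead):

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        public class VsaFragmenter {

            // A RADIUS attribute's Length field is one octet, so the usable payload
            // per vendor-specific attribute is small once the Type/Length/Vendor-Id
            // and vendor sub-header are subtracted. Send one attribute per chunk.
            static List<byte[]> fragment(byte[] value, int maxChunk) {
                List<byte[]> chunks = new ArrayList<>();
                for (int off = 0; off < value.length; off += maxChunk) {
                    chunks.add(Arrays.copyOfRange(value, off, Math.min(off + maxChunk, value.length)));
                }
                return chunks;
            }

            public static void main(String[] args) {
                byte[] longValue = new byte[700];               // e.g. a 700-byte payload
                List<byte[]> parts = fragment(longValue, 246);  // 246 is an illustrative limit
                System.out.println(parts.size() + " vendor-specific attributes needed");
            }
        }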

    Read the article

  • Indexing on only part of a field in MongoDB

    - by Rob Hoare
    Is there a way to create an index on only part of a field in MongoDB, for example on the first 10 characters? I couldn't find it documented (or asked about on here). The MySQL equivalent would be CREATE INDEX part_of_name ON customer (name(10));.

    Reason: I have a collection with a single field that varies in length from a few characters up to over 1000 characters, average 50 characters. As there are a hundred million or so documents, it's going to be hard to fit the full index in memory (testing with 8% of the data, the index is already 400MB, according to stats). Indexing just the first part of the field would reduce the index size by about 75%. In most cases the search term is quite short; it's not a full-text search.

    A work-around would be to add a second field of 10 (lowercased) characters for each item, index that, then add logic to filter the results if the search term is over ten characters (and that extra field is probably needed anyway for case-insensitive searches, unless anybody has a better way). It seems like an ugly way to do it, though.

    [added later] I tried adding the second field, containing the first 12 characters from the main field, lowercased. It wasn't a big success. Previously, the average object size was 50 bytes, but I forgot that includes the _id and other overheads, so my main field length (there was only one) averaged nearer to 30 bytes than 50. Then, the second field's index contains the _id and other overheads as well. Net result (for my 8% sample): the index on the main field is 415MB and the one on the 12-byte field is 330MB - only a 20% saving in space, not worthwhile. I could duplicate the entire field (to work around the case-insensitive search problem), but realistically it looks like I should reconsider whether MongoDB is the right tool for the job (or just buy more memory and use twice as much disk space).

    [added even later] This is a typical document, with the source field and the short lowercased field:

        { "_id" : ObjectId("505d0e89f56588f20f000041"), "q" : "Continental Airlines", "f" : "continental " }

    Indexes:

        db.test.ensureIndex({q:1});
        db.test.ensureIndex({f:1});

    The 'f' index, working on a shorter field, is 80% of the size of the 'q' index. I didn't mean to imply I included the _id in the index, just that the index needs to store it somewhere to show which document each entry points to, so it's an overhead that probably helps explain why a shorter key makes so little difference. Access to the index will be essentially random; no part of it is more likely to be accessed than any other. The total index size for the full file will likely be 5GB, so it's not extreme for that one index. Adding some other fields for other search cases, and their associated indexes, and copies of data for lower case, does start to add up, which is why I started looking into a more concise index.

    Read the article
