Search Results

Search found 4304 results on 173 pages for 'bytes'.

Page 40/173 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • Is writing a reference atomic on 64bit VMs

    - by Steffen Heil
    Hi. The Java memory model mandates that writing an int is atomic: that is, if you write a value (consisting of 4 bytes) in one thread and read it in another, you will get all of the bytes or none of them, but never 2 new bytes and 2 old bytes or some such mix. This is not guaranteed for long. Here, writing 0x1122334455667788 to a variable previously holding 0 could result in another thread reading 0x1122334400000000 or 0x0000000055667788. Now, the specification does not mandate that object references be either int- or long-sized. For type-safety reasons I suspect they are guaranteed to be written atomically, but on a 64-bit VM these references could very well be 64-bit values (merely memory addresses). So here are my questions: Are there any memory model specs covering this (that I haven't found)? Are long writes supposed to be atomic on 64-bit VMs? Are VMs forced to map references to 32 bits? Regards, Steffen
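
    A minimal Java sketch of the distinction being asked about (class and field names invented for illustration). Per JLS 17.7, reference writes and volatile long/double writes are always atomic, while plain long/double writes may be split into two 32-bit halves on some VMs:

        // plainCounter may tear into two 32-bit writes on some (mostly 32-bit) VMs;
        // safeCounter and ref are guaranteed to be written atomically (JLS 17.7).
        public class AtomicWriteSketch {
            private long plainCounter;          // non-volatile long: atomicity not guaranteed
            private volatile long safeCounter;  // volatile long: write is atomic everywhere
            private Object ref;                 // reference: write is atomic everywhere

            public void update(long value, Object o) {
                plainCounter = value;  // another thread could observe a half-written value
                safeCounter = value;   // observed either fully written or not at all
                ref = o;               // never observed as a torn reference
            }
        }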

    Read the article

  • smallest filesize for transparent gif

    - by zaf
    I'm looking for the smallest (in terms of filesize) transparent 1 pixel image. Currently I have a gif of 49 bytes which seems to be the most popular. But I remember many years ago having one which was less than 40 bytes. Could have been 32 bytes. Can anyone do better? Graphics format is no concern as long as modern web browsers can display it and respect the transparency.

    Read the article

  • BinaryFormatter with MemoryStream Question

    - by Changeling
    I am testing BinaryFormatter to see how it will work for me and I have a simple question: When using it with the string HELLO, and I convert the MemoryStream to an array, it gives me 29 dimensions, with five of them being the actual data towards the end of the dimensions: BinaryFormatter bf = new BinaryFormatter(); MemoryStream ms = new MemoryStream(); byte[] bytes; string originalData = "HELLO"; bf.Serialize(ms, originalData); ms.Seek(0, 0); bytes = ms.ToArray(); returns - bytes {Dimensions:[29]} byte[] [0] 0 byte [1] 1 byte [2] 0 byte [3] 0 byte [4] 0 byte [5] 255 byte [6] 255 byte [7] 255 byte [8] 255 byte [9] 1 byte [10] 0 byte [11] 0 byte [12] 0 byte [13] 0 byte [14] 0 byte [15] 0 byte [16] 0 byte [17] 6 byte [18] 1 byte [19] 0 byte [20] 0 byte [21] 0 byte [22] 5 byte [23] 72 byte [24] 69 byte [25] 76 byte [26] 76 byte [27] 79 byte [28] 11 byte Is there a way to only return the data encoded as bytes without all the extraneous information?

    Read the article

  • Calculation of charged traffic in GPRS network

    - by TyBoer
    I am working with a distributed application communicating over GPRS. I use UDP packets to send business data and ICMP pings to verify connectivity. Now I have a problem with calculating the traffic for which I will be charged by the provider. I have to consider the following factors: UDP payload: that is obvious. UDP overhead: UDP header + IP header = 8 + 20 bytes. ICMP echo request without data: IP header + ICMP header = 28 bytes. ICMP echo reply: as in 3. The above means that for every data packet I am charged for payload + 28 bytes, and for every ping 56 bytes. Am I right, or am I missing/misunderstanding something?
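
    A small Java sketch of that billing arithmetic, with made-up packet counts; it assumes the operator charges at the IP layer in both directions, with no retransmissions and no IP options:

        public class GprsTrafficSketch {
            static final int IP_HEADER = 20;     // bytes, no IP options
            static final int UDP_HEADER = 8;     // bytes
            static final int ICMP_HEADER = 8;    // bytes, echo request/reply with no payload

            // charged bytes = data packets * (payload + 28) + pings * 56
            static long chargedBytes(int udpPackets, int avgUdpPayload, int pings) {
                long data = (long) udpPackets * (avgUdpPayload + UDP_HEADER + IP_HEADER);
                long icmp = (long) pings * 2 * (ICMP_HEADER + IP_HEADER); // request + reply
                return data + icmp;
            }

            public static void main(String[] args) {
                // e.g. 1000 packets of 50-byte payload plus 100 pings
                System.out.println(chargedBytes(1000, 50, 100)); // 1000*78 + 100*56 = 83600
            }
        }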

    Read the article

  • Should the DMA buffer size be the same as the UART FIFO size?

    - by ddpd
    I have written a driver for a UART on an OMAP4460 Panda board running on the Linux platform. I have enabled DMA in FIFO mode in the UART. My user application transfers 100 bytes of data from user space to the kernel buffer (DMA buffer). As soon as the DMA channel is enabled, data from the DMA buffer is copied to the FIFO, which is then transmitted to the TSR of the UART. Since my FIFO size is 64 bytes, only 64 bytes are transmitted to the TSR. What should I do to transfer the remaining bytes from the DMA buffer to the FIFO? Is there any overflow occurring?

    Read the article

  • Multi-part gzip file random access (in Java)

    - by toluju
    This may fall in the realm of "not really feasible" or "not really worth the effort", but here goes. I'm trying to randomly access records stored inside a multi-part gzip file. Specifically, the files I'm interested in are compressed Heritrix ARC files. (In case you aren't familiar with multi-part gzip files, the gzip spec allows multiple gzip streams to be concatenated in a single gzip file. They do not share any dictionary information; it is simple binary appending.) I'm thinking it should be possible to do this by seeking to a certain offset within the file, then scanning for the gzip magic header bytes (i.e. 0x1f8b, as per the RFC), and attempting to read the gzip stream from the following bytes. The problem with this approach is that those same bytes can appear inside the actual data as well, so scanning for those bytes can lead to an invalid position to start reading a gzip stream from. Is there a better way to handle random access, given that the record offsets aren't known a priori?
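
    A rough Java sketch of the "seek, scan for 0x1f8b, then validate by actually decoding" idea from the question; class and method names are invented here, and this is a probe, not a full ARC reader. A false hit on the magic bytes will normally fail fast with a ZipException from the gzip header or CRC checks:

        import java.io.*;
        import java.util.zip.GZIPInputStream;
        import java.util.zip.ZipException;

        public class GzipMemberProbe {
            /** Decode gzip data starting exactly at offset, or return null if it isn't a real member. */
            static byte[] tryReadFrom(File file, long offset) throws IOException {
                try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
                    raf.seek(offset);   // position the shared file descriptor before wrapping it
                    try (GZIPInputStream gz = new GZIPInputStream(
                            new BufferedInputStream(new FileInputStream(raf.getFD())))) {
                        ByteArrayOutputStream out = new ByteArrayOutputStream();
                        byte[] buf = new byte[8192];
                        int n;
                        while ((n = gz.read(buf)) > 0) {
                            out.write(buf, 0, n);
                        }
                        return out.toByteArray();   // decoded data starting at this member
                    } catch (ZipException | EOFException e) {
                        return null;                // the 0x1f8b at this offset was a false positive
                    }
                }
            }
        }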

    Read the article

  • Ab Initio - Formatting a number in Left alignment

    - by Veera
    I have a requirement in Ab Initio to format a number as left-aligned. I shouldn't be using string conversion (as strings are left-aligned by default), as it might cause compatibility problems on the other end. For example, if my field has a length of 7 bytes and I'm getting only two digits as my input, then these two digits should go into the first two bytes of my field (left-aligned), instead of the last two bytes. So, is there any built-in function in Ab Initio that can format a number as left-aligned?

    Read the article

  • Efficiently storing a list of prime numbers

    - by eSKay
    This article says: Every prime number can be expressed as 30k±1, 30k±7, 30k±11, or 30k±13 for some k. That means we can use eight bits per thirty numbers to store all the primes; a million primes can be compressed to 33,334 bytes. Regarding "That means we can use eight bits per thirty numbers to store all the primes": this "eight bits per thirty numbers" would be for k, correct? But each k value will not necessarily take up just one bit. Shouldn't it be eight k values instead? Regarding "a million primes can be compressed to 33,334 bytes": I am not sure how this is true. We need to indicate two things: the VALUE of k (which can be arbitrarily large) and the STATE, one of the eight states (-13, -11, -7, -1, 1, 7, 11, 13). I am not following how 33,334 bytes was arrived at, but I can say one thing: as the prime numbers become larger and larger in value, we will need more space to store the value of k. How, then, can we fix it at 33,334 bytes?
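
    A minimal Java sketch of the scheme the article appears to describe (class and method names invented here): k is never stored, it is simply the byte index, and the eight bits of byte k flag which of the eight candidate residues 30k ± {1, 7, 11, 13} in that block of thirty numbers are prime. The 33,334-byte figure then corresponds to marking all primes below one million (1,000,000 / 30, rounded up), i.e. it bounds the values covered rather than the count of primes:

        // One byte covers the thirty numbers [30k, 30k+29]; 2, 3 and 5 are special-cased.
        public class Wheel30Bitmap {
            // residues mod 30 coprime to 30 -- the same set as 30k +/- {1, 7, 11, 13}
            private static final int[] RESIDUES = {1, 7, 11, 13, 17, 19, 23, 29};
            private final byte[] bits;

            Wheel30Bitmap(int limit) {
                bits = new byte[limit / 30 + 1];   // limit = 1,000,000 -> 33,334 bytes
            }

            void markPrime(int n) {
                int bit = bitIndex(n % 30);
                if (bit >= 0) bits[n / 30] |= (1 << bit);
            }

            boolean isMarkedPrime(int n) {
                if (n == 2 || n == 3 || n == 5) return true;
                int bit = bitIndex(n % 30);
                return bit >= 0 && (bits[n / 30] & (1 << bit)) != 0;
            }

            private static int bitIndex(int residue) {
                for (int i = 0; i < RESIDUES.length; i++) {
                    if (RESIDUES[i] == residue) return i;
                }
                return -1;   // divisible by 2, 3 or 5: never prime (beyond 2, 3, 5 themselves)
            }
        }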

    Read the article

  • How do chains work in Rainbow tables?

    - by James Moore
    Hello, I was wondering if someone could explain in detail how chains work in rainbow tables, as you would to a complete novice, but with relevance to programming. I understand that a chain is 16 bytes long: 8 bytes mark the starting point and 8 mark the end point. I also understand that in the filename we have the chain length, i.e. 2400. Does that mean that between our starting point and end point, in just 16 bytes, we have 2400 possible clear texts? What? How does that work? In those 16 bytes, how do I get my 2400 hashes and clear texts, or am I misunderstanding this? Your help is greatly appreciated. Thanks. P.S. I have read the related papers and googled this topic a fair bit. I think I'm just missing something important to make these gears turn.
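
    A toy Java sketch (not a real rainbow-table implementation; the hash choice and reduction function are invented for illustration) of why only 16 bytes per chain are needed: the table file stores just the start and end points, and the 2400 intermediate plaintexts are recomputed on demand by re-running hash-then-reduce from the start point:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;

        public class ChainSketch {
            static final int CHAIN_LENGTH = 2400;

            // hypothetical reduction: map a hash back into the plaintext space (8 lowercase letters),
            // varying with the position so different links use different reductions
            static String reduce(byte[] hash, int position) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < 8; i++) {
                    sb.append((char) ('a' + Math.floorMod(hash[i] + position, 26)));
                }
                return sb.toString();
            }

            // walk the whole chain; only its first and last plaintexts end up in the table file
            static String endOfChain(String startPlaintext) throws Exception {
                MessageDigest md = MessageDigest.getInstance("MD5");
                String plain = startPlaintext;
                for (int i = 0; i < CHAIN_LENGTH; i++) {
                    byte[] hash = md.digest(plain.getBytes(StandardCharsets.UTF_8));
                    plain = reduce(hash, i);
                }
                return plain;
            }
        }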

    Read the article

  • DSA signature verification input

    - by calccrypto
    What is the data inputted into DSA when PGP signs a message? From RFC4880, i found A Signature packet describes a binding between some public key and some data. The most common signatures are a signature of a file or a block of text, and a signature that is a certification of a User ID. im not sure if it is the entire public key, just the public key packet, or some other derivative of a pgp key packet. whatever it is, i cannot get the DSA signature to verify here is a sample im testing my program on: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 abcd -----BEGIN PGP SIGNATURE----- Version: BCPG v1.39 iFkEARECABkFAk0z65ESHGFiYyAodGVzdCBrZXkpIDw+AAoJEC3Jkh8+bnkusO0A oKG+HPF2Qrsth2zS9pK+eSCBSypOAKDBgC2Z0vf2EgLiiNMk8Bxpq68NkQ== =gq0e -----END PGP SIGNATURE----- Dumped from pgpdump.net Old: Signature Packet(tag 2)(89 bytes) Ver 4 - new Sig type - Signature of a canonical text document(0x01). Pub alg - DSA Digital Signature Algorithm(pub 17) Hash alg - SHA1(hash 2) Hashed Sub: signature creation time(sub 2)(4 bytes) Time - Mon Jan 17 07:11:13 UTC 2011 Hashed Sub: signer's User ID(sub 28)(17 bytes) User ID - abc (test key) <> Sub: issuer key ID(sub 16)(8 bytes) Key ID - 0x2DC9921F3E6E792E Hash left 2 bytes - b0 ed DSA r(160 bits) - a1 be 1c f1 76 42 bb 2d 87 6c d2 f6 92 be 79 20 81 4b 2a 4e DSA s(160 bits) - c1 80 2d 99 d2 f7 f6 12 02 e2 88 d3 24 f0 1c 69 ab af 0d 91 -> hash(DSA q bits) and the public key for it is: -----BEGIN PGP PUBLIC KEY BLOCK----- Version: BCPG v1.39 mOIETTPqeBECALx+i9PIc4MB2DYXeqsWUav2cUtMU1N0inmFHSF/2x0d9IWEpVzE kRc30PvmEHI1faQit7NepnHkkphrXLAoZukAoNP3PB8NRQ6lRF6/6e8siUgJtmPL Af9IZOv4PI51gg6ICLKzNO9i3bcUx4yeG2vjMOUAvsLkhSTWob0RxWppo6Pn6MOg dMQHIM5sDH0xGN0dOezzt/imAf9St2B0HQXVfAAbveXBeRoO7jj/qcGx6hWmsKUr BVzdQhBk7Sku6C2KlMtkbtzd1fj8DtnrT8XOPKGp7/Y7ASzRtBFhYmMgKHRlc3Qg a2V5KSA8PohGBBMRAgAGBQJNM+p5AAoJEC3Jkh8+bnkuNEoAnj2QnqGtdlTgUXCQ Fyvwk5wiLGPfAJ4jTGTL62nWzsgrCDIMIfEG2shm8bjMBE0z6ngQAgCUlP7AlfO4 XuKGVCs4NvyBpd0KA0m0wjndOHRNSIz44x24vLfTO0GrueWjPMqRRLHO8zLJS/BX O/BHo6ypjN87Af0VPV1hcq20MEW2iujh3hBwthNwBWhtKdPXOndJGZaB7lshLJuW v9z6WyDNXj/SBEiV1gnPm0ELeg8Syhy5pCjMAgCFEc+NkCzcUOJkVpgLpk+VLwrJ /Wi9q+yCihaJ4EEFt/7vzqmrooXWz2vMugD1C+llN6HkCHTnuMH07/E/2dzciEYE GBECAAYFAk0z6nkACgkQLcmSHz5ueS7NTwCdED1P9NhgR2LqwyS+AEyqlQ0d5joA oK9xPUzjg4FlB+1QTHoOhuokxxyN =CTgL -----END PGP PUBLIC KEY BLOCK----- the public key packet of the key is mOIETTPqeBECALx+i9PIc4MB2DYXeqsWUav2cUtMU1N0inmFHSF/2x0d9IWEpVzEkRc30PvmEHI1faQi t7NepnHkkphrXLAoZukAoNP3PB8NRQ6lRF6/6e8siUgJtmPLAf9IZOv4PI51gg6ICLKzNO9i3bcUx4ye G2vjMOUAvsLkhSTWob0RxWppo6Pn6MOgdMQHIM5sDH0xGN0dOezzt/imAf9St2B0HQXVfAAbveXBeRoO 7jj/qcGx6hWmsKUrBVzdQhBk7Sku6C2KlMtkbtzd1fj8DtnrT8XOPKGp7/Y7ASzR in radix 64 i have tried many different combinations of sha1(< some data + 'abcd'),but the calculated value v never equals r, of the signature i know that the pgp implementation i used to create the key and signature is correct. i also know that my DSA implementation and PGP key data extraction program are correct. thus, the only thing left is the data to hash. what is the correct data to be hashed?

    Read the article

  • FIFOs implementation

    - by nunos
    Consider the following code: writer.c mkfifo("/tmp/myfifo", 0660); int fd = open("/tmp/myfifo", O_WRONLY); char *foo, *bar; ... write(fd, foo, strlen(foo)*sizeof(char)); write(fd, bar, strlen(bar)*sizeof(char)); reader.c int fd = open("/tmp/myfifo", O_RDONLY); char buf[100]; read(fd, buf, ??); My question is: since it's not known beforehand how many bytes foo and bar will have, how can I know how many bytes to read in reader.c? Because if I, for example, read 10 bytes in the reader and foo and bar together are less than 10 bytes, I will have them both in the same variable, and that I do not want. Ideally I would have one read call for every variable, but again I don't know beforehand how many bytes the data will have. I thought about adding another write instruction in writer.c, between the writes for foo and bar, with a separator, and then I would have no problem decoding it in reader.c. Is this the way to go about it? Thanks.
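
    A separator only works if the separator byte can never occur in the data; the more robust convention is to prefix each message with its length. A minimal sketch of that framing in Java (names invented here; the same four-byte-length-then-payload layout can be produced by the C writer above with one extra write() for the length):

        import java.io.*;
        import java.nio.charset.StandardCharsets;

        public class Framing {
            // writer side: length prefix first, then exactly that many payload bytes
            static void writeMessage(DataOutputStream out, String msg) throws IOException {
                byte[] payload = msg.getBytes(StandardCharsets.UTF_8);
                out.writeInt(payload.length);   // 4-byte big-endian length prefix
                out.write(payload);
                out.flush();
            }

            // reader side: read the prefix, then read exactly that many bytes -- no guessing
            static String readMessage(DataInputStream in) throws IOException {
                int length = in.readInt();
                byte[] payload = new byte[length];
                in.readFully(payload);
                return new String(payload, StandardCharsets.UTF_8);
            }
        }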

    Read the article

  • How to use CLEAR USB internet connection in Ubuntu (host) and WindowsXP (guest) using VirtualBox

    - by bithacker
    I'm trying to use CLEAR Motorola WiMax USB in Ubuntu as there is no support for linux as yet. I've installed windowsxp as guest in ubuntu and the version I'm using is 3.2.2. USB is connecting fine in WindowsXP but I can't use internet in Ubuntu. Can you please tell me how to do it. Here is the configuration that could help you guys. Thanks in advance. I'm using Two Network Adapters. Network Adapter 1: PCnet-FAST III (NAT) Adapter 2: PCnet-FAST III (Host-only adapter, 'vboxnet0') ipconfig [on Guest windowsXP] Windows IP Configuration Ethernet adapter Local Area Connection: PCnet-FAST III (NAT) Connection-specific DNS Suffix . : IP Address. . . . . . . . . . . . : 10.0.2.15 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 10.0.2.2 Ethernet adapter Local Area Connection 3: PCnet-FAST III (Host-only adapter, 'vboxnet0') Connection-specific DNS Suffix . : IP Address. . . . . . . . . . . . : 192.168.56.101 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : Ethernet adapter Local Area Connection 2: Connection-specific DNS Suffix . : CLEAR Motorola USB IP Address. . . . . . . . . . . . : 10.168.242.33 Subnet Mask . . . . . . . . . . . : 255.255.192.0 Default Gateway . . . . . . . . . : 10.168.192.2 IFCONFIG [on Host Ubuntu] (Ethernet) eth0 Link encap:Ethernet HWaddr 00:14:22:b9:9d:76 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:16 eth1 (Wireless) Link encap:Ethernet HWaddr 00:13:ce:f0:9b:0d inet6 addr: fe80::213:ceff:fef0:9b0d/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:1 errors:0 dropped:5 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:84 (84.0 B) Interrupt:17 Base address:0xe000 Memory:dfcff000-dfcfffff lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:2292 errors:0 dropped:0 overruns:0 frame:0 TX packets:2292 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:171952 (171.9 KB) TX bytes:171952 (171.9 KB) vboxnet0 Link encap:Ethernet HWaddr 0a:00:27:00:00:00 inet addr:192.168.56.1 Bcast:192.168.56.255 Mask:255.255.255.0 inet6 addr: fe80::800:27ff:fe00:0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:137 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:21174 (21.1 KB)

    Read the article

  • HTTP Data chunks over multiple packets?

    - by myforwik
    What is the correct way for an HTTP server to send data over multiple packets? For example, I want to transfer a file; the first packet I send is: HTTP/1.1 200 OK Content-type: application/force-download Content-Type: application/download Content-Type: application/octet-stream Content-Description: File Transfer Content-disposition: attachment; filename=test.dat Content-Transfer-Encoding: chunked 400 <first 1024 bytes here> 400 <next 1024 bytes here> 400 <next 1024 bytes here> Now I need to make a new packet. If I just send: 400 <next 1024 bytes here> all the clients close their connections on me and the files are cut short. What headers do I put in a second packet to continue on with the data stream?
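
    For reference, chunked bodies need no per-packet headers at all: the body is announced once with Transfer-Encoding: chunked, each chunk is framed as a hex size line plus the data plus CRLF, chunk boundaries are independent of TCP packets, and the body ends with a zero-size chunk (RFC 2616 section 3.6.1). A minimal Java sketch of the writer-side framing, with the output stream supplied by the caller:

        import java.io.IOException;
        import java.io.OutputStream;
        import java.nio.charset.StandardCharsets;

        public class ChunkedWriter {
            // one chunk: "<size in hex>\r\n" + data + "\r\n"
            static void writeChunk(OutputStream out, byte[] data, int off, int len) throws IOException {
                out.write(Integer.toHexString(len).getBytes(StandardCharsets.US_ASCII));
                out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
                out.write(data, off, len);
                out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
            }

            // the body is terminated by a zero-size chunk followed by a blank line
            static void finish(OutputStream out) throws IOException {
                out.write("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
                out.flush();
            }
        }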

    Read the article

  • NetworkStream.Read delay .Net

    - by Gilbes
    I have a class that inherits from TcpClient. In that class I have a method to process responses. In that method I get the NetworkStream with MyBase.GetStream and call Read on it. This works fine, except the first call to Read blocks too long. And by too long I mean that the socket has received plenty of data, but won't read it until some arbitrary limit is reached. I can see that it has received plenty of data using the packet sniffer Wireshark. I have set the receive buffer to small amounts, and very small amounts (like just a few bytes), to no avail. I have done the same with the buffer byte array I pass to the Read method, and it still delays. Or to put it another way: I am downloading 600k. The download takes 5 seconds (at a little over 100k/second connection to the server, which makes sense). The initial Read call takes 2-3 seconds and tells me only 256 bytes are available (256 is the receive buffer and the size of the array I read into). Then, magically, the other few hundred thousand bytes can be read in 256-byte chunks in only a few process ticks each. Using a packet sniffer, I know that during those initial 2-3 seconds the socket has received much more than just 256 bytes. My connection wasn't .25k/second for 3 seconds and then 400k for 2 seconds. How do I get the bytes from a socket as they come in?

    Read the article

  • Why does Git.pm on Cygwin complain about 'Out of memory during "large" request'?

    - by Charles Ma
    Hi, I'm getting this error while doing a git svn rebase in Cygwin: Out of memory during "large" request for 268439552 bytes, total sbrk() is 140652544 bytes at /usr/lib/perl5/site_perl/Git.pm line 898, <GEN1> line 3. 268439552 is 256MB. Cygwin's maximum memory size is set to 1024MB, so I'm guessing that it has a different maximum memory size for Perl? How can I increase the maximum memory size that Perl programs can use? Update: This is where the error occurs (in Git.pm): while (1) { my $bytesLeft = $size - $bytesRead; last unless $bytesLeft; my $bytesToRead = $bytesLeft < 1024 ? $bytesLeft : 1024; my $read = read($in, $blob, $bytesToRead, $bytesRead); //line 898 unless (defined($read)) { $self->_close_cat_blob(); throw Error::Simple("in pipe went bad"); } $bytesRead += $read; } I've added a print before line 898 to print out $bytesToRead and $bytesRead, and the result was 1024 for $bytesToRead and 134220800 for $bytesRead, so it's reading 1024 bytes at a time and has already read 128MB. Perl's 'read' function must be out of memory and is trying to request double its memory size... Is there a way to specify how much memory to request? Or is that implementation-dependent? UPDATE 2: While testing memory allocation in Cygwin, this C program's output was 1536MB: int main() { unsigned int bit=0x40000000, sum=0; char *x; while (bit > 4096) { x = malloc(bit); if (x) sum += bit; bit >>= 1; } printf("%08x bytes (%.1fMb)\n", sum, sum/1024.0/1024.0); return 0; } While this Perl program crashed if the file size was greater than 384MB (but succeeded if the file size was less): open(F, "<400") or die("can't read\n"); $size = -s "400"; $read = read(F, $s, $size); The error is similar: Out of memory during "large" request for 536875008 bytes, total sbrk() is 217088 bytes at mem.pl line 6.

    Read the article

  • False sense of security with `snprintf_s`

    - by xtofl
    MSVC's "secure" sprintf functions have a template version that 'knows' the size of the target buffer. However, this code happily paints 567890 over the stack after the end of bytes... char bytes[5]; _snprintf_s( bytes, _TRUNCATE, "%s", "1234567890" ); Any idea what I'm doing wrong, or is this a known bug? (I'm working in VS2005 - didn't test in 2008 or 2010)

    Read the article

  • Best practice PHP Form Action

    - by Rob
    Hi there, I've built a new script (from scratch, not a CMS) and I've done a lot of work on reducing memory usage and the time it takes for the page to be displayed (caching HTML, etc.). There's one thing that I'm not sure about, though. Take a simple example of an article with a comments section. If the comment form posts to another page that then redirects back to the article page, I won't have the problem of people clicking refresh and resending the information. However, if I do it that way, I have to load up my script twice, use twice as much memory, and it takes twice as long, whilst I'm still only displaying the page once. Here's an example from my load log. The first load of the article is from the cache; the second rebuilds the page after the comment is posted. Example 1 0 queries using 650856 bytes of memory in 0.018667 - domain.com/article/1/my_article.html 9 queries using 1325723 bytes of memory in 0.075825 - domain.com/article/1/my_article/newcomment.html 0 queries using 650856 bytes of memory in 0.029449 - domain.com/article/1/my_article.html Example 2 0 queries using 650856 bytes of memory in 0.023526 - domain.com/article/1/my_article.html 9 queries using 1659096 bytes of memory in 0.060032 - domain.com/article/1/my_article.html Obviously the time fluctuates, so you can't really compare that. But as you can see, with the first method I use more memory and it takes longer to load. BUT the first method avoids the refresh problem. Does anyone have any suggestions for the best approach, or for alternative ways to avoid the extra load (admittedly minimal, but I'd still like to avoid it) whilst also avoiding the refresh problem?

    Read the article

  • When should I use indexed arrays of OpenGL vertices?

    - by Tartley
    I'm trying to get a clear idea of when I should be using indexed arrays of OpenGL vertices, drawn with gl[Multi]DrawElements and the like, versus when I should simply use contiguous arrays of vertices, drawn with gl[Multi]DrawArrays. (Update: The consensus in the replies I got is that one should always be using indexed vertices.) I have gone back and forth on this issue several times, so I'm going to outline my current understanding, in the hopes someone can either tell me I'm now finally more or less correct, or else point out where my remaining misunderstandings are. Specifically, I have three conclusions, in bold. Please correct them if they are wrong. One simple case is if my geometry consists of meshes to form curved surfaces. In this case, the vertices in the middle of the mesh will have identical attributes (position, normal, color, texture coord, etc) for every triangle which uses the vertex. This leads me to conclude that: 1. For geometry with few seams, indexed arrays are a big win. Follow rule 1 always, except: For geometry that is very 'blocky', in which every edge represents a seam, the benefit of indexed arrays is less obvious. To take a simple cube as an example, although each vertex is used in three different faces, we can't share vertices between them, because for a single vertex, the surface normals (and possible other things, like color and texture co-ord) will differ on each face. Hence we need to explicitly introduce redundant vertex positions into our array, so that the same position can be used several times with different normals, etc. This means that indexed arrays are of less use. e.g. When rendering a single face of a cube: 0 1 o---o |\ | | \ | | \| o---o 3 2 (this can be considered in isolation, because the seams between this face and all adjacent faces mean than none of these vertices can be shared between faces) if rendering using GL_TRIANGLE_FAN (or _STRIP), then each face of the cube can be rendered thus: verts = [v0, v1, v2, v3] colors = [c0, c0, c0, c0] normal = [n0, n0, n0, n0] Adding indices does not allow us to simplify this. From this I conclude that: 2. When rendering geometry which is all seams or mostly seams, when using GL_TRIANGLE_STRIP or _FAN, then I should never use indexed arrays, and should instead always use gl[Multi]DrawArrays. (Update: Replies indicate that this conclusion is wrong. Even though indices don't allow us to reduce the size of the arrays here, they should still be used because of other performance benefits, as discussed in the comments) The only exception to rule 2 is: When using GL_TRIANGLES (instead of strips or fans), then half of the vertices can still be re-used twice, with identical normals and colors, etc, because each cube face is rendered as two separate triangles. Again, for the same single cube face: 0 1 o---o |\ | | \ | | \| o---o 3 2 Without indices, using GL_TRIANGLES, the arrays would be something like: verts = [v0, v1, v2, v2, v3, v0] normals = [n0, n0, n0, n0, n0, n0] colors = [c0, c0, c0, c0, c0, c0] Since a vertex and a normal are often 3 floats each, and a color is often 3 bytes, that gives, for each cube face, about: verts = 6 * 3 floats = 18 floats normals = 6 * 3 floats = 18 floats colors = 6 * 3 bytes = 18 bytes = 36 floats and 18 bytes per cube face. (I understand the number of bytes might change if different types are used, the exact figures are just for illustration.) 
With indices, we can simplify this a little, giving: verts = [v0, v1, v2, v3] (4 * 3 = 12 floats) normals = [n0, n0, n0, n0] (4 * 3 = 12 floats) colors = [c0, c0, c0, c0] (4 * 3 = 12 bytes) indices = [0, 1, 2, 2, 3, 0] (6 shorts) = 24 floats + 12 bytes, and maybe 6 shorts, per cube face. See how in the latter case, vertices 0 and 2 are used twice, but only represented once in each of the verts, normals and colors arrays. This sounds like a small win for using indices, even in the extreme case of every single geometry edge being a seam. This leads me to conclude that: 3. When using GL_TRIANGLES, one should always use indexed arrays, even for geometry which is all seams. Please correct my conclusions in bold if they are wrong.

    Read the article

  • RijndaelManaged Padding when data matches block size

    - by trampster
    If I use PKCS7 padding in RijndaelManaged with 16 bytes of data, then I get 32 bytes of data output. It appears that for PKCS7, when the data size matches the block size, it adds a whole extra block of padding. If I use Zeros padding for 16 bytes of data, I get 16 bytes of data out. So for Zeros padding, if the data matches the block size, then it doesn't pad. I have searched through the documentation and it says nothing about this difference in padding behavior. Can someone please point me to some kind of documentation which specifies what the padding behavior should be for the different padding modes when the data size matches the block size?
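
    This is the defined PKCS#7 behavior (RFC 5652 section 6.3, RFC 2315 section 10.3): the padding length must be between 1 and the block size, so plaintext that is already a multiple of the block size gets a full extra block of padding (sixteen 0x10 bytes for a 16-byte block). That is what makes the padding unambiguously removable, which zero-padding is not if the data itself may end in zero bytes. A short demonstration of the same rule using Java/JCE (PKCS5Padding in the JCE is the PKCS#7 scheme applied to AES's 16-byte blocks); the key and IV here are throwaway values:

        import javax.crypto.Cipher;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;
        import javax.crypto.spec.IvParameterSpec;

        public class Pkcs7Demo {
            public static void main(String[] args) throws Exception {
                SecretKey key = KeyGenerator.getInstance("AES").generateKey();
                Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
                c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(new byte[16]));
                byte[] ciphertext = c.doFinal(new byte[16]);   // exactly one block of plaintext
                System.out.println(ciphertext.length);         // prints 32: data block + full padding block
            }
        }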

    Read the article

  • Hard crash when drawing content for CALayer using quartz

    - by Lukasz
    I am trying to figure out why iOS crash my application in the harsh way (no crash logs, immediate shudown with black screen of death with spinner shown for a while). It happens when I render content for CALayer using Quartz. I suspected the memory issue (happens only when testing on the device), but memory logs, as well as instruments allocation logs looks quite OK. Let me past in the fatal function: - (void)renderTiles{ if (rendering) { //NSLog(@"====== RENDERING TILES SKIP ======="); return; } rendering = YES; CGRect b = tileLayer.bounds; CGSize s = b.size; CGFloat imageScale = [[UIScreen mainScreen] scale]; s.height *= imageScale; s.width *= imageScale; dispatch_async(queue, ^{ NSLog(@""); NSLog(@"====== RENDERING TILES START ======="); NSLog(@"1. Before creating context"); report_memory(); CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); NSLog(@"2. After creating color space"); report_memory(); NSLog(@"3. About to create context with size: %@", NSStringFromCGSize(s)); CGContextRef ctx = CGBitmapContextCreate(NULL, s.width, s.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast); NSLog(@"4. After creating context"); report_memory(); CGAffineTransform flipTransform = CGAffineTransformMake(1.0, 0.0, 0.0, -1.0, 0.0, s.height); CGContextConcatCTM(ctx, flipTransform); CGRect tileRect = CGRectMake(0, 0, tileImageScaledSize.width, tileImageScaledSize.height); CGContextDrawTiledImage(ctx, tileRect, tileCGImageScaled); NSLog(@"5. Before creating cgimage from context"); report_memory(); CGImageRef cgImage = CGBitmapContextCreateImage(ctx); NSLog(@"6. After creating cgimage from context"); report_memory(); dispatch_sync(dispatch_get_main_queue(), ^{ tileLayer.contents = (id)cgImage; }); NSLog(@"7. After asgning tile layer contents = cgimage"); report_memory(); CGColorSpaceRelease(colorSpace); CGContextRelease(ctx); CGImageRelease(cgImage); NSLog(@"8. After releasing image and context context"); report_memory(); NSLog(@"====== RENDERING TILES END ======="); NSLog(@""); rendering = NO; }); } Here are the logs: ====== RENDERING TILES START ======= 1. Before creating context Memory in use (in bytes): 28340224 / 519442432 (5.5%) 2. After creating color space Memory in use (in bytes): 28340224 / 519442432 (5.5%) 3. About to create context with size: {6324, 5208} 4. After creating context Memory in use (in bytes): 28344320 / 651268096 (4.4%) 5. Before creating cgimage from context Memory in use (in bytes): 153649152 / 651333632 (23.6%) 6. After creating cgimage from context Memory in use (in bytes): 153649152 / 783159296 (19.6%) 7. After asgning tile layer contents = cgimage Memory in use (in bytes): 153653248 / 783253504 (19.6%) 8. After releasing image and context context Memory in use (in bytes): 21688320 / 651288576 (3.3%) ====== RENDERING TILES END ======= Application crashes in random places. Sometimes when reaching en of the function and sometime in random step. Which direction should I look for a solution? Is is possible that GDC is causing the problem? Or maybe the context size or some Core Animation underlying references?

    Read the article

  • Memory efficient int-int dict in Python

    - by Bolo
    Hi, I need a memory efficient int-int dict in Python that would support the following operations in O(log n) time: d[k] = v # replace if present v = d[k] # None or a negative number if not present I need to hold ~250M pairs, so it really has to be tight. Do you happen to know a suitable implementation (Python 2.7)? EDIT Removed impossible requirement and other nonsense. Thanks, Craig and Kylotan! To rephrase. Here's a trivial int-int dictionary with 1M pairs: >>> import random, sys >>> from guppy import hpy >>> h = hpy() >>> h.setrelheap() >>> d = {} >>> for _ in xrange(1000000): ... d[random.randint(0, sys.maxint)] = random.randint(0, sys.maxint) ... >>> h.heap() Partition of a set of 1999530 objects. Total size = 49161112 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 1 0 25165960 51 25165960 51 dict (no owner) 1 1999521 100 23994252 49 49160212 100 int On average, a pair of integers uses 49 bytes. Here's an array of 2M integers: >>> import array, random, sys >>> from guppy import hpy >>> h = hpy() >>> h.setrelheap() >>> a = array.array('i') >>> for _ in xrange(2000000): ... a.append(random.randint(0, sys.maxint)) ... >>> h.heap() Partition of a set of 14 objects. Total size = 8001108 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 1 7 8000028 100 8000028 100 array.array On average, a pair of integers uses 8 bytes. I accept that 8 bytes/pair in a dictionary is rather hard to achieve in general. Rephrased question: is there a memory-efficient implementation of int-int dictionary that uses considerably less than 49 bytes/pair?

    Read the article

  • ORA-01157 / Can't connect to database

    - by Tom
    Hi everyone, this is a follow-up to this question. Let me start by saying that I am NOT a DBA, so I'm really, really lost with this. A few weeks ago, we lost contact with one of our SIDs. All the other services are working, but this one in particular is not. What we got was this message when trying to connect: ORA-01033: ORACLE initialization or shutdown in progress An attempt to alter database open ended up in ORA-01157: cannot identify/lock data file 6 - see DBWR trace file ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf' I tried to shut down / restart the database, but got this message: Total System Global Area 566231040 bytes Fixed Size 1220604 bytes Variable Size 117440516 bytes Database Buffers 444596224 bytes Redo Buffers 2973696 bytes Database mounted. ORA-01157: cannot identify/lock data file 6 - see DBWR trace file ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf' When all continued the same, I erased the dbf files (rm xxx_data.dbf xxx_index.dbf) and recreated them using touch xxx_data.dbf. I also tried to recreate the tablespaces using `CREATE TABLESPACE DATA DATAFILE XXX_DATA.DBF` and got Database not open. As I said, I don't know how bad this is, or how far I am from gaining access to my database (well, to this SID at least; the others are working). I would imagine that a last resort would be to throw everything away and recreate it, but I don't know how to, and I was hoping there's a less destructive solution. Any help will be greatly appreciated. Thanks in advance.

    Read the article

  • Add two 32-bit integers in Assembler for use in VB6

    - by Emtucifor
    I would like to come up with the byte code in assembler (assembly?) for Windows machines to add two 32-bit longs and throw away the carry bit. I realize the "Windows machines" part is a little vague, but I'm assuming that the bytes for ADD are pretty much the same in all modern Intel instruction sets. I'm just trying to abuse VB a little and make some things faster. So... if the string "8A4C240833C0F6C1E075068B442404D3E0C20800" is the assembly code for SHL that can be "injected" into a VB6 program for a fast SHL operation expecting two Long parameters (we're ignoring here that 32-bit longs in VB6 are signed, just pretend they are unsigned), what is the hex string of bytes representing assembler instructions that will do the same thing to return the sum? The hex code above for SHL is, according to the author: mov eax, [esp+4] mov cl, [esp+8] shl eax, cl ret 8 I spit those bytes into a file and tried unassembling them in a windows command prompt using the old debug utility, but I figured out it's not working with the newer instruction set because it didn't like EAX when I tried assembling something but it was happy with AX. I know from comments in the source code that SHL EAX, CL is D3E0, but I don't have any reference to know what the bytes are for instruction ADD EAX, CL or I'd try it. I tried flat assembler and am not getting anything I can figure out how to use. I used it to assemble the original SHL code and got a very different result, not the same bytes. Help?

    Read the article

  • retrieving multiple versions through the HBase API

    - by sammy
    Hello, this is a continuation of my previous question, where I'd used the HBase shell: http://stackoverflow.com/questions/3024417/facing-problems-while-updating-rows-in-hbase I tried the same with the API. I'm not able to figure out how to retrieve all versions, iterate over them, and print their values for a specific row. I've spent hours reading; please help me out. Scan s = new Scan(Bytes.toBytes("row1")); s.addColumn(Bytes.toBytes("column"),Bytes.toBytes("address")); // SETTING RANGE FOR THE VERSIONS s.setTimeRange(0L,6L); ResultScanner scanner = table.getScanner(s); for (Result r : scanner) { for(KeyValue kv : r.sorted()) { System.out.println("To"+kv.getTimestamp()); System.out.println("from "+Bytes.toString(kv.getKey())); System.out.println("To "+Bytes.toString(kv.getValue())); } scanner.close(); } Here I'm intending to print all versions of the column, but it gives only the most recent one. I'm stuck here.
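
    A sketch of the same scan with versioning enabled; by default a Scan (or Get) returns only the newest cell per column, so setMaxVersions() (all stored versions) or setMaxVersions(n) has to be requested explicitly. The class and method names below follow the 0.20/0.90-era client API that the question appears to use, and the HTable parameter stands for the table already opened in the question's code:

        import java.io.IOException;
        import org.apache.hadoop.hbase.KeyValue;
        import org.apache.hadoop.hbase.client.HTable;
        import org.apache.hadoop.hbase.client.Result;
        import org.apache.hadoop.hbase.client.ResultScanner;
        import org.apache.hadoop.hbase.client.Scan;
        import org.apache.hadoop.hbase.util.Bytes;

        public class AllVersionsScan {
            static void printAllVersions(HTable table) throws IOException {
                Scan s = new Scan(Bytes.toBytes("row1"));
                s.addColumn(Bytes.toBytes("column"), Bytes.toBytes("address"));
                s.setTimeRange(0L, 6L);   // note: the upper timestamp bound is exclusive
                s.setMaxVersions();       // without this, only the latest version comes back
                ResultScanner scanner = table.getScanner(s);
                try {
                    for (Result r : scanner) {
                        for (KeyValue kv : r.raw()) {
                            System.out.println(kv.getTimestamp() + " -> " + Bytes.toString(kv.getValue()));
                        }
                    }
                } finally {
                    scanner.close();      // close once, after iterating all rows
                }
            }
        }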

    Read the article
