Search Results

Search found 2468 results on 99 pages for 'splattered bits'.


  • Handling mach exceptions in 64bit OS X application

    - by Brad S
    I have been able to register my own Mach port to capture Mach exceptions in my applications, and it works beautifully when I target 32-bit. However, when I target 64-bit, my exception handler catch_exception_raise() gets called, but the exception codes passed to the handler are only 32 bits wide. This is expected in a 32-bit build but not in a 64-bit one. In the case where I catch EXC_BAD_ACCESS, the first code is the error number and the second code should be the address of the fault. Since the second code is 32 bits wide, the high 32 bits of the 64-bit fault address are truncated. I found a flag in <mach/exception_types.h> I can pass to task_set_exception_ports() called MACH_EXCEPTION_CODES, which from looking at the Darwin sources appears to control the size of the codes passed to the handler. It looks like it is meant to be ORed with the behavior passed in to task_set_exception_ports(). However, when I do this and trigger an exception, my Mach port gets notified, I call exc_server(), but my handler never gets called, and when the reply message is sent back to the kernel I get the default exception behavior. I am targeting the 10.6 SDK. I really wish Apple would document this stuff better. Anyone have any ideas?
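
    A minimal registration sketch (not from the poster; EXC_MASK_BAD_ACCESS and MACHINE_THREAD_STATE are illustrative choices). The assumption, based on the open-source Darwin/mig sources, is that once MACH_EXCEPTION_CODES is set the kernel uses the mach_exception_raise message family, so the reply must be dispatched with mach_exc_server() (generated by running mig on mach_exc.defs) and handled in catch_mach_exception_raise(), whose code array is 64-bit mach_exception_data_t; plain exc_server()/catch_exception_raise() will not match the message, which would explain the fall-through to the default behavior.

        #include <mach/mach.h>

        // Sketch: ask for 64-bit ("wide") exception codes on the current task.
        kern_return_t install_handler_port(mach_port_t *out_port) {
            mach_port_t port = MACH_PORT_NULL;
            kern_return_t kr = mach_port_allocate(mach_task_self(),
                                                  MACH_PORT_RIGHT_RECEIVE, &port);
            if (kr != KERN_SUCCESS) return kr;
            kr = mach_port_insert_right(mach_task_self(), port, port,
                                        MACH_MSG_TYPE_MAKE_SEND);
            if (kr != KERN_SUCCESS) return kr;
            kr = task_set_exception_ports(mach_task_self(),
                                          EXC_MASK_BAD_ACCESS,
                                          port,
                                          EXCEPTION_DEFAULT | MACH_EXCEPTION_CODES,
                                          MACHINE_THREAD_STATE);
            if (kr == KERN_SUCCESS) *out_port = port;
            return kr;
        }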

    Read the article

  • Boost Mersenne Twister: how to seed with more than one value?

    - by Eamon Nerbonne
    I'm using the boost mt19937 implementation for a simulation. The simulation needs to be reproducible, and that means storing and potentially reusing the RNG seeds later. I'm using the windows crypto api to generate the seed values because I need an external source for the seeds and not because of any particular guarantees of randomness. The output of any simulation run will have a note including the RNG seed - so the seed needs to be reasonably short. On the other hand, as part of the analysis of the simulation, I'll be comparing several runs - but to be sure that these runs are actually different, I'll need to use different seeds - so the seed needs to be long enough to avoid accidental collisions. I've determined that 64-bits of seeding should suffice; the chance of a collision will reach 50% after about 2^32 runs - that probability is low enough that the average error caused by it is negligible to me. Using just 32-bits of seed is tricky; the chance of a collision reaches 50% already after 2^16 runs; and that's a little too likely for my tastes. Unfortunately, the boost implementation either seeds with a full state vector - which is far, far too long - or a single 32-bit unsigned long - which isn't ideal. How can I seed the generator with more than 32-bits but less than a full state vector? I tried just padding the vector or repeating the seeds to fill the state vector, but even a cursory glance at the results shows that that generates poor results.
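
    One way around the all-or-nothing seeding (a sketch, not from the original thread): split the 64-bit seed into two 32-bit words and feed them through seed_seq, which stretches them over the whole state vector. The sketch uses the C++11 standard library; newer Boost.Random releases ship an equivalent boost::random::seed_seq, but whether the Boost version in use has it is an assumption.

        #include <cstdint>
        #include <random>

        // Build a mt19937 from a 64-bit seed by splitting it into two 32-bit words.
        std::mt19937 make_generator(uint64_t seed64) {
            std::seed_seq seq{static_cast<uint32_t>(seed64 & 0xFFFFFFFFu),
                              static_cast<uint32_t>(seed64 >> 32)};
            return std::mt19937(seq);
        }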

    Read the article

  • How do I Emit Escaped XML representation of a Node in my XSLT's HTML Output

    - by Emmanuel
    I'm transforming XML to an HTML document. In this document I want to embed XML markup for a node that was just transformed (the HTML document is a technical spec). For example, if my XML was this: <transform-me> <node id="1"> <stuff type="floatsam"> <details>Various bits</details> </stuff> </node> </transform-me> I'd want my XSLT output to look something like this: <h1>Node 1</h1> <h2>Stuff (floatsam)</h2> Various bits <h2>The XML</h2> &lt;stuff type=&quot;floatsam&quot;&gt; &lt;details&gt;Various bits&lt;/details&gt; &lt;/stuff&gt; I'm hoping there is an XSLT function that I can call in my <stuff> template to which I can pass the current node (.) and get back escaped XML markup for <stuff> and all its descendants. I have a feeling unparsed-text() might be the way to go but can't get it to work.

    Read the article

  • DSA signature verification input

    - by calccrypto
    What data is fed into DSA when PGP signs a message? From RFC 4880, I found: "A Signature packet describes a binding between some public key and some data. The most common signatures are a signature of a file or a block of text, and a signature that is a certification of a User ID." I'm not sure if it is the entire public key, just the public key packet, or some other derivative of a PGP key packet. Whatever it is, I cannot get the DSA signature to verify. Here is a sample I'm testing my program on:
    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA1

    abcd
    -----BEGIN PGP SIGNATURE-----
    Version: BCPG v1.39

    iFkEARECABkFAk0z65ESHGFiYyAodGVzdCBrZXkpIDw+AAoJEC3Jkh8+bnkusO0A
    oKG+HPF2Qrsth2zS9pK+eSCBSypOAKDBgC2Z0vf2EgLiiNMk8Bxpq68NkQ==
    =gq0e
    -----END PGP SIGNATURE-----
    Dumped from pgpdump.net:
    Old: Signature Packet(tag 2)(89 bytes)
    Ver 4 - new
    Sig type - Signature of a canonical text document(0x01).
    Pub alg - DSA Digital Signature Algorithm(pub 17)
    Hash alg - SHA1(hash 2)
    Hashed Sub: signature creation time(sub 2)(4 bytes)
    Time - Mon Jan 17 07:11:13 UTC 2011
    Hashed Sub: signer's User ID(sub 28)(17 bytes)
    User ID - abc (test key) <>
    Sub: issuer key ID(sub 16)(8 bytes)
    Key ID - 0x2DC9921F3E6E792E
    Hash left 2 bytes - b0 ed
    DSA r(160 bits) - a1 be 1c f1 76 42 bb 2d 87 6c d2 f6 92 be 79 20 81 4b 2a 4e
    DSA s(160 bits) - c1 80 2d 99 d2 f7 f6 12 02 e2 88 d3 24 f0 1c 69 ab af 0d 91
    -> hash(DSA q bits)
    and the public key for it is:
    -----BEGIN PGP PUBLIC KEY BLOCK-----
    Version: BCPG v1.39

    mOIETTPqeBECALx+i9PIc4MB2DYXeqsWUav2cUtMU1N0inmFHSF/2x0d9IWEpVzE
    kRc30PvmEHI1faQit7NepnHkkphrXLAoZukAoNP3PB8NRQ6lRF6/6e8siUgJtmPL
    Af9IZOv4PI51gg6ICLKzNO9i3bcUx4yeG2vjMOUAvsLkhSTWob0RxWppo6Pn6MOg
    dMQHIM5sDH0xGN0dOezzt/imAf9St2B0HQXVfAAbveXBeRoO7jj/qcGx6hWmsKUr
    BVzdQhBk7Sku6C2KlMtkbtzd1fj8DtnrT8XOPKGp7/Y7ASzRtBFhYmMgKHRlc3Qg
    a2V5KSA8PohGBBMRAgAGBQJNM+p5AAoJEC3Jkh8+bnkuNEoAnj2QnqGtdlTgUXCQ
    Fyvwk5wiLGPfAJ4jTGTL62nWzsgrCDIMIfEG2shm8bjMBE0z6ngQAgCUlP7AlfO4
    XuKGVCs4NvyBpd0KA0m0wjndOHRNSIz44x24vLfTO0GrueWjPMqRRLHO8zLJS/BX
    O/BHo6ypjN87Af0VPV1hcq20MEW2iujh3hBwthNwBWhtKdPXOndJGZaB7lshLJuW
    v9z6WyDNXj/SBEiV1gnPm0ELeg8Syhy5pCjMAgCFEc+NkCzcUOJkVpgLpk+VLwrJ
    /Wi9q+yCihaJ4EEFt/7vzqmrooXWz2vMugD1C+llN6HkCHTnuMH07/E/2dzciEYE
    GBECAAYFAk0z6nkACgkQLcmSHz5ueS7NTwCdED1P9NhgR2LqwyS+AEyqlQ0d5joA
    oK9xPUzjg4FlB+1QTHoOhuokxxyN
    =CTgL
    -----END PGP PUBLIC KEY BLOCK-----
    The public key packet of the key, in radix-64, is:
    mOIETTPqeBECALx+i9PIc4MB2DYXeqsWUav2cUtMU1N0inmFHSF/2x0d9IWEpVzEkRc30PvmEHI1faQi
    t7NepnHkkphrXLAoZukAoNP3PB8NRQ6lRF6/6e8siUgJtmPLAf9IZOv4PI51gg6ICLKzNO9i3bcUx4ye
    G2vjMOUAvsLkhSTWob0RxWppo6Pn6MOgdMQHIM5sDH0xGN0dOezzt/imAf9St2B0HQXVfAAbveXBeRoO
    7jj/qcGx6hWmsKUrBVzdQhBk7Sku6C2KlMtkbtzd1fj8DtnrT8XOPKGp7/Y7ASzR
    I have tried many different combinations of sha1(< some data + 'abcd'), but the calculated value v never equals r of the signature. I know that the PGP implementation I used to create the key and signature is correct. I also know that my DSA implementation and PGP key data extraction program are correct. Thus, the only thing left is the data to hash. What is the correct data to be hashed?

    Read the article

  • Compressibility Example

    - by user285726
    From my algorithms textbook: The annual county horse race is bringing in three thoroughbreds who have never competed against one another. Excited, you study their past 200 races and summarize these as probability distributions over four outcomes: first (“first place”), second, third, and other.
    Outcome   Aurora   Whirlwind   Phantasm
    first     0.15     0.30        0.20
    second    0.10     0.05        0.30
    third     0.70     0.25        0.30
    other     0.05     0.40        0.20
    Which horse is the most predictable? One quantitative approach to this question is to look at compressibility. Write down the history of each horse as a string of 200 values (first, second, third, other). The total number of bits needed to encode these track-record strings can then be computed using Huffman's algorithm. This works out to 290 bits for Aurora, 380 for Whirlwind, and 420 for Phantasm (check it!). Aurora has the shortest encoding and is therefore in a strong sense the most predictable. How did they get 420 for Phantasm? I keep getting 400 bits, as follows: combine first and other = 0.4, combine second and third = 0.6, and end up with 2 bits encoding each position. Is there something I've misunderstood about the Huffman encoding algorithm? Textbook available here: http://www.cs.berkeley.edu/~vazirani/algorithms.html (page 156).
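
    A quick way to check the textbook numbers is to sum the merged node probabilities while running Huffman's algorithm (that sum equals the expected code length in bits per symbol). The sketch below is illustrative, not from the book; it reproduces 290 and 380 but gives 400 bits for Phantasm's 0.20/0.30/0.30/0.20 distribution, which matches the poster's calculation rather than the quoted 420.

        #include <functional>
        #include <iostream>
        #include <queue>
        #include <vector>

        // Expected Huffman code length (bits per symbol): repeatedly merge the two
        // smallest probabilities and accumulate each merged weight.
        double expected_bits(const std::vector<double>& p) {
            std::priority_queue<double, std::vector<double>, std::greater<double>> q(p.begin(), p.end());
            double total = 0.0;
            while (q.size() > 1) {
                double a = q.top(); q.pop();
                double b = q.top(); q.pop();
                total += a + b;
                q.push(a + b);
            }
            return total;
        }

        int main() {
            std::cout << 200 * expected_bits({0.15, 0.10, 0.70, 0.05}) << "\n";  // Aurora    -> 290
            std::cout << 200 * expected_bits({0.30, 0.05, 0.25, 0.40}) << "\n";  // Whirlwind -> 380
            std::cout << 200 * expected_bits({0.20, 0.30, 0.30, 0.20}) << "\n";  // Phantasm  -> 400
            return 0;
        }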

    Read the article

  • Java: inserting special characters with PreparedStatement fails

    - by phill
    I am using an HTML form which sends <input type=hidden name=longdesc value='SMARTNET%^" 8X5XNBD'> this is done by the following javascript code: function masinsert(id) { var currentTime=new Date(); var button = document.getElementById("m"+id); button.onclick=""; button.value="Inserting"; var itemdescription = document.getElementById("itemdescription"+id).value; function handleHttpResponse() { if (http.readyState == 4) { button.value="Item Added"; } } var http = getHTTPObject(); // We create the HTTP Object var tempUrl = "\AInsert"; tempUrl += "itemdescription="+itemdescription+"&"+"itemshortdescription="+itemdescription.substring(0,37)+; alert(tempUrl); http.open("GET", tempUrl, true); http.onreadystatechange = handleHttpResponse; http.send(null); } to a java servlet. AInsert.java in the AInsert.java file, I do a String itemdescription = request.getParameter("longdesc"); which then sends the value to a preparedstatement to run an insert query. In the query, there are sometimes special characters which throw it off. For example, when I run the following insert into itemdescription (longdesc) values ('SMARTNET%^" 8X5XNBD') here is the actual snippet: PreparedStatement ps = conn.prepareStatement("INSERT INTO itemdescription (longdesc) values(?)"); ps.setString(1, itemdescription); ps.executeUpdate(); It will produce an error saying : Cannot insert the value NULL into column 'LongDesc', table 'App.dbo.itemdescription'; column does not allow nulls. Insert fails I have tried urlencode/urldecode String encodedString = URLEncoder.encode(longdesc, "UTF-8"); String decitemdescription = URLDecoder.decode(itemdescription, "UTF-8"); and i've also tried these functions //BEGIN URL Encoder final static String[] hex = { "%00", "%01", "%02", "%03", "%04", "%05", "%06", "%07", "%08", "%09", "%0a", "%0b", "%0c", "%0d", "%0e", "%0f", "%10", "%11", "%12", "%13", "%14", "%15", "%16", "%17", "%18", "%19", "%1a", "%1b", "%1c", "%1d", "%1e", "%1f", "%20", "%21", "%22", "%23", "%24", "%25", "%26", "%27", "%28", "%29", "%2a", "%2b", "%2c", "%2d", "%2e", "%2f", "%30", "%31", "%32", "%33", "%34", "%35", "%36", "%37", "%38", "%39", "%3a", "%3b", "%3c", "%3d", "%3e", "%3f", "%40", "%41", "%42", "%43", "%44", "%45", "%46", "%47", "%48", "%49", "%4a", "%4b", "%4c", "%4d", "%4e", "%4f", "%50", "%51", "%52", "%53", "%54", "%55", "%56", "%57", "%58", "%59", "%5a", "%5b", "%5c", "%5d", "%5e", "%5f", "%60", "%61", "%62", "%63", "%64", "%65", "%66", "%67", "%68", "%69", "%6a", "%6b", "%6c", "%6d", "%6e", "%6f", "%70", "%71", "%72", "%73", "%74", "%75", "%76", "%77", "%78", "%79", "%7a", "%7b", "%7c", "%7d", "%7e", "%7f", "%80", "%81", "%82", "%83", "%84", "%85", "%86", "%87", "%88", "%89", "%8a", "%8b", "%8c", "%8d", "%8e", "%8f", "%90", "%91", "%92", "%93", "%94", "%95", "%96", "%97", "%98", "%99", "%9a", "%9b", "%9c", "%9d", "%9e", "%9f", "%a0", "%a1", "%a2", "%a3", "%a4", "%a5", "%a6", "%a7", "%a8", "%a9", "%aa", "%ab", "%ac", "%ad", "%ae", "%af", "%b0", "%b1", "%b2", "%b3", "%b4", "%b5", "%b6", "%b7", "%b8", "%b9", "%ba", "%bb", "%bc", "%bd", "%be", "%bf", "%c0", "%c1", "%c2", "%c3", "%c4", "%c5", "%c6", "%c7", "%c8", "%c9", "%ca", "%cb", "%cc", "%cd", "%ce", "%cf", "%d0", "%d1", "%d2", "%d3", "%d4", "%d5", "%d6", "%d7", "%d8", "%d9", "%da", "%db", "%dc", "%dd", "%de", "%df", "%e0", "%e1", "%e2", "%e3", "%e4", "%e5", "%e6", "%e7", "%e8", "%e9", "%ea", "%eb", "%ec", "%ed", "%ee", "%ef", "%f0", "%f1", "%f2", "%f3", "%f4", "%f5", "%f6", "%f7", "%f8", "%f9", "%fa", "%fb", "%fc", "%fd", "%fe", "%ff" }; /** * Encode a string 
to the "x-www-form-urlencoded" form, enhanced * with the UTF-8-in-URL proposal. This is what happens: * * <ul> * <li><p>The ASCII characters 'a' through 'z', 'A' through 'Z', * and '0' through '9' remain the same. * * <li><p>The unreserved characters - _ . ! ~ * ' ( ) remain the same. * * <li><p>The space character ' ' is converted into a plus sign '+'. * * <li><p>All other ASCII characters are converted into the * 3-character string "%xy", where xy is * the two-digit hexadecimal representation of the character * code * * <li><p>All non-ASCII characters are encoded in two steps: first * to a sequence of 2 or 3 bytes, using the UTF-8 algorithm; * secondly each of these bytes is encoded as "%xx". * </ul> * * @param s The string to be encoded * @return The encoded string */ public static String encode(String s) { StringBuffer sbuf = new StringBuffer(); int len = s.length(); for (int i = 0; i < len; i++) { int ch = s.charAt(i); if ('A' <= ch && ch <= 'Z') { // 'A'..'Z' sbuf.append((char)ch); } else if ('a' <= ch && ch <= 'z') { // 'a'..'z' sbuf.append((char)ch); } else if ('0' <= ch && ch <= '9') { // '0'..'9' sbuf.append((char)ch); } else if (ch == ' ') { // space sbuf.append('+'); } else if (ch == '-' || ch == '_' // unreserved || ch == '.' || ch == '!' || ch == '~' || ch == '*' || ch == '\'' || ch == '(' || ch == ')') { sbuf.append((char)ch); } else if (ch <= 0x007f) { // other ASCII sbuf.append(hex[ch]); } else if (ch <= 0x07FF) { // non-ASCII <= 0x7FF sbuf.append(hex[0xc0 | (ch >> 6)]); sbuf.append(hex[0x80 | (ch & 0x3F)]); } else { // 0x7FF < ch <= 0xFFFF sbuf.append(hex[0xe0 | (ch >> 12)]); sbuf.append(hex[0x80 | ((ch >> 6) & 0x3F)]); sbuf.append(hex[0x80 | (ch & 0x3F)]); } } return sbuf.toString(); } //end encode and //decode url private static String unescape(String s) { StringBuffer sbuf = new StringBuffer () ; int l = s.length() ; int ch = -1 ; int b, sumb = 0; for (int i = 0, more = -1 ; i < l ; i++) { /* Get next byte b from URL segment s */ switch (ch = s.charAt(i)) { case '%': ch = s.charAt (++i) ; int hb = (Character.isDigit ((char) ch) ? ch - '0' : 10+Character.toLowerCase((char) ch) - 'a') & 0xF ; ch = s.charAt (++i) ; int lb = (Character.isDigit ((char) ch) ? ch - '0' : 10+Character.toLowerCase ((char) ch)-'a') & 0xF ; b = (hb << 4) | lb ; break ; case '+': b = ' ' ; break ; default: b = ch ; } /* Decode byte b as UTF-8, sumb collects incomplete chars */ if ((b & 0xc0) == 0x80) { // 10xxxxxx (continuation byte) sumb = (sumb << 6) | (b & 0x3f) ; // Add 6 bits to sumb if (--more == 0) sbuf.append((char) sumb) ; // Add char to sbuf } else if ((b & 0x80) == 0x00) { // 0xxxxxxx (yields 7 bits) sbuf.append((char) b) ; // Store in sbuf } else if ((b & 0xe0) == 0xc0) { // 110xxxxx (yields 5 bits) sumb = b & 0x1f; more = 1; // Expect 1 more byte } else if ((b & 0xf0) == 0xe0) { // 1110xxxx (yields 4 bits) sumb = b & 0x0f; more = 2; // Expect 2 more bytes } else if ((b & 0xf8) == 0xf0) { // 11110xxx (yields 3 bits) sumb = b & 0x07; more = 3; // Expect 3 more bytes } else if ((b & 0xfc) == 0xf8) { // 111110xx (yields 2 bits) sumb = b & 0x03; more = 4; // Expect 4 more bytes } else /*if ((b & 0xfe) == 0xfc)*/ { // 1111110x (yields 1 bit) sumb = b & 0x01; more = 5; // Expect 5 more bytes } /* We don't test if the UTF-8 encoding is well-formed */ } return sbuf.toString() ; } but the decoding doesn't change it back to the original special characters. Any ideas? 
    Thanks in advance. UPDATE: I tried adding these two statements to grab the request: String itemdescription = URLDecoder.decode(request.getParameter("itemdescription"), "UTF-8"); String itemshortdescription = URLDecoder.decode(request.getParameter("itemshortdescription"), "UTF-8"); System.out.println("processRequest | short descrip "); and this is failing as well, if that helps. UPDATE 2: I created an HTML form and did a direct insert with the encoded itemdescription such as and the insertion works correctly with the special characters and everything. I guess there is something going on with my JavaScript submit. Any ideas on this?

    Read the article

  • Is this a correct port of java.util.Random to Objective-C?

    - by dipu
    I have ported the code inside the java.util.Random class to Objective-C. I want to have an identical random number generator so that it stays in sync with the server app running on Java. Now, is this a safe port, and if not, is there a way to mimic AtomicLong as it is found in Java? Please see my code below.
    static long long multiplier = 0x5DEECE66DL;
    static long addend = 0xBL;
    static long long mask = (0x1000000000000001L << 48) - 1;

    -(void) initWithSeed:(long long) seed1 {
        [self setRandomSeed: 0L]; // = new AtomicLong(0L);
        [self setSeed: seed1];
    }

    -(int) next:(int)bits {
        long long oldseed, nextseed;
        long long seed1 = [self.randomSeed longLongValue]; //AtomicLong
        //do {
        oldseed = seed1;
        nextseed = (oldseed * multiplier + addend) & mask;
        //} while (!seed.compareAndSet(oldseed, nextseed));
        [self setRandomSeed: [NSNumber numberWithLongLong:nextseed]];
        ///int ret = (int)(nextseed >>> (48 - bits));
        int ret = (unsigned int)(nextseed >> (48 - bits));
        return ret;
    }

    -(void) setSeed:(long long) seed1 {
        seed1 = (seed1 ^ multiplier) & mask;
        [self setRandomSeed: [NSNumber numberWithLongLong:seed1]];
    }
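
    For comparison, here is a plain C++ transcription of the algorithm the java.util.Random documentation specifies (a 48-bit linear congruential generator with multiplier 0x5DEECE66D and addend 0xB). It is a sketch for checking the arithmetic against a port, not the poster's code; the AtomicLong compare-and-set loop only provides thread safety and is omitted.

        #include <cstdint>

        class JavaRandom {
            uint64_t seed;                                   // only the low 48 bits are used
            static constexpr uint64_t kMultiplier = 0x5DEECE66DULL;
            static constexpr uint64_t kAddend     = 0xBULL;
            static constexpr uint64_t kMask       = (1ULL << 48) - 1;
        public:
            explicit JavaRandom(int64_t s) { setSeed(s); }
            void setSeed(int64_t s) { seed = (static_cast<uint64_t>(s) ^ kMultiplier) & kMask; }
            int32_t next(int bits) {
                seed = (seed * kMultiplier + kAddend) & kMask;
                // Java's ">>>" is an unsigned shift; shifting the uint64_t matches it.
                return static_cast<int32_t>(seed >> (48 - bits));
            }
            int32_t nextInt() { return next(32); }
        };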

    Read the article

  • How to get the actual address of a pointer in C?

    - by Airjoe
    BACKGROUND: I'm writing a single-level cache simulator in C for a homework assignment, and I've been given code that I must work from. In our discussions of the cache, we were told that the way a small cache can hold large addresses is by splitting the large address into the position in the cache and an identifying tag. That is, if you had an 8-slot cache but wanted to store something with an address larger than 8, you take the 3 (because 2^3 = 8) rightmost bits and put the data in that position; so if you had address 22, for example, binary 10110, you would take those 3 rightmost bits, 110, which is decimal 6, and put it in slot 6 of the cache. You would also store in this position the tag, which is the remaining bits, 10. One function, cache_load, takes a single argument, an integer pointer. So effectively, I'm being given this int* addr which is an actual address and points to some value. In order to store this value in the cache, I need to split the addr. However, the compiler doesn't like it when I try to work with the pointer directly. So, for example, I try to get the position by doing: npos = addr % num_slots The compiler gets angry and gives me errors. I tried casting to an int, but this actually got me the value that the pointer was pointing to, not the numerical address itself. Any help is appreciated, thanks!
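
    A sketch of the cast the question is after: converting the pointer through uintptr_t yields the numeric address, so the modulo/shift arithmetic is legal (dereferencing, by contrast, gives the pointed-to value). The 8 slots and the slot/tag split are taken from the question; the function body itself is illustrative.

        #include <cstdint>
        #include <cstdio>

        void cache_load(int *addr) {
            const unsigned num_slots = 8;                             // 2^3 slots
            uintptr_t a   = reinterpret_cast<uintptr_t>(addr);        // the address itself
            unsigned  pos = static_cast<unsigned>(a % num_slots);     // low 3 bits -> slot
            uintptr_t tag = a / num_slots;                            // remaining bits -> tag
            std::printf("addr=%#lx slot=%u tag=%#lx\n",
                        static_cast<unsigned long>(a), pos,
                        static_cast<unsigned long>(tag));
        }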

    Read the article

  • URL shortening: using inode as short name?

    - by Licky Lindsay
    The site I am working on wants to generate its own shortened URLs rather than rely on a third party like tinyurl or bit.ly. Obviously I could keep a running count of new URLs as they are added to the site and use that to generate the short URLs. But I am trying to avoid that if possible, since it seems like a lot of work just to make this one thing work. As the things that need short URLs are all real physical files on the webserver, my current solution is to use their inode numbers, as those are already generated for me, ready to use and guaranteed to be unique.
    function short_name($file) {
        $ino = @fileinode($file);
        $s = base_convert($ino, 10, 36);
        return $s;
    }
    This seems to work. The question is, what can I do to make the short URL even shorter? On the system where this is being used, the inodes for newly added files are in a range that makes the function above return a string 7 characters long. Can I safely throw away some (half?) of the bits of the inode? And if so, should it be the high bits or the low bits? I thought of using the crc32 of the filename, but that actually makes my short names longer than using the inode. Would something like this have any risk of collisions? I've been able to get down to single digits by picking the right value of "$referencefile".
    function short_name($file) {
        $ino = @fileinode($file);
        // arbitrarily selected pre-existing file,
        // as all newer files will have higher inodes
        $ino = $ino - @fileinode($referencefile);
        $s = base_convert($ino, 10, 36);
        return $s;
    }
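
    For reference, a sketch of the same base-36 step outside PHP, with the reference-inode subtraction from the second function (the names come from the question; the C++ itself is illustrative). Subtracting a reference inode, or dropping high bits that never vary on your filesystem, is the safer direction: dropping low bits would collide for nearby inode numbers, and the remaining collision risk is an inode being reused after a file is deleted.

        #include <cstdint>
        #include <string>

        // Equivalent of PHP's base_convert($n, 10, 36).
        std::string base36(uint64_t n) {
            static const char digits[] = "0123456789abcdefghijklmnopqrstuvwxyz";
            if (n == 0) return "0";
            std::string s;
            while (n > 0) { s.insert(s.begin(), digits[n % 36]); n /= 36; }
            return s;
        }

        // Shorter name: encode the distance from an arbitrarily chosen older inode.
        std::string short_name(uint64_t inode, uint64_t reference_inode) {
            return base36(inode - reference_inode);   // assumes inode >= reference_inode
        }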

    Read the article

  • Screen information while Windows system is locked (.NET)

    - by Matt
    We have a nightly process that updates applications on a user's pc, and that requires bringing the application down and back up again (not looking to get into changing that process). The problem is that we are building a Windows AppBar on launch which requires a valid screen, and when the system is locked there isn't one in the Screen class. So none of the visual effects are enabled and it shows up real ugly. The only way we currently have around this is to detect a locked screen and just spin and wait until the user unlocks the desktop, then continue launching. Leaving it down isn't an option, as this is a key part of our user's workflow, and they expect it to be up and running if they left it that way the night before. Any ideas?? I can't seem to find the display information anywhere, but it has to be stored off someplace, since the user is still logged in. The contents of the Screen.AllScreens array: ** When Locked: Device Name : DISPLAY Primary : True Bits Per Pixel : 0 Bounds : {X=-1280,Y=0,Width=2560,Height=1024} Working Area : {X=0,Y=0,Width=1280,Height=1024} ** When Unlocked: Device Name : \\.\DISPLAY1 Primary : True Bits Per Pixel : 32 Bounds : {X=0,Y=0,Width=1280,Height=1024} Working Area : {X=0,Y=0,Width=1280,Height=994} Device Name : \\.\DISPLAY2 Primary : False Bits Per Pixel : 32 Bounds : {X=-1280,Y=0,Width=1280,Height=1024} Working Area : {X=-1280,Y=0,Width=1280,Height=964}

    Read the article

  • Is there "good" PRNG generating values without hidden state?

    - by actual
    I need a good pseudo-random number generator that can be computed like a pure function from its previous output, without any state hiding. Under "good" I mean:
    1. I must be able to parametrize the generator in such a way that running it for 2^n iterations with any parameters covers all or almost all values between 0 and 2^n - 1, where n is the number of bits in the output value.
    2. The combined generator output of n + p bits must cover all or almost all values between 0 and 2^(n + p) - 1 if I run it for 2^n iterations for every possible combination of its parameters, where p is the number of bits in the parameters.
    For example, an LCG can be computed like a pure function and it can meet the first condition, but it cannot meet the second one. Say we have a 32-bit generator, m = 2^32 and it is constant, and our p = 64 (two 32-bit parameters a and c), so n + p = 96 and we must take the output three ints at a time to meet the second condition. Unfortunately, that condition cannot be met because of the strictly alternating sequence of odd and even ints in the output. To overcome this, hidden state must be introduced, but that makes the function not pure and breaks the first condition (the period becomes much longer). Am I asking for too much?
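
    One well-known construction that comes close (a sketch, not from the thread): the splitmix64 finalizer. Its mixing function is a bijection on 64-bit values, so walking the inputs seed, seed + GAMMA, seed + 2*GAMMA, ... (GAMMA odd) visits every 64-bit output exactly once over 2^64 steps; nothing is hidden, since the "state" is just the previous input and that input is recoverable because the mix is invertible. Whether it satisfies the second, parametrized-coverage condition exactly as stated would need separate analysis.

        #include <cstdint>

        static const uint64_t GAMMA = 0x9E3779B97F4A7C15ULL;

        // splitmix64 finalizer: an invertible mixing of a 64-bit value.
        uint64_t mix(uint64_t z) {
            z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
            z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
            return z ^ (z >> 31);
        }

        // i-th output for a given seed: a pure function of (seed, i).
        uint64_t output_at(uint64_t seed, uint64_t i) {
            return mix(seed + i * GAMMA);
        }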

    Read the article

  • Using unions to simplify casts

    - by Steven Lu
    I realize that what I am trying to do isn't safe. But I am just doing some testing and image processing, so my focus here is on speed. Right now this code gives me the corresponding bytes for a 32-bit pixel value type:
    struct Pixel { unsigned char b,g,r,a; };
    I wanted to check if I have a pixel that is under a certain value (e.g. r, g, b <= 0x10). I figured I wanted to just conditional-test the bit-and of the bits of the pixel with 0x00E0E0E0 (I could have the wrong endianness here) to get the dark pixels. Rather than using this ugly mess (*((uint32_t*)&pixel)) to get the 32-bit unsigned int value, I figured there should be a way for me to set it up so I can just use pixel.i, while keeping the ability to reference the green byte using pixel.g. Can I do this? This won't work:
    struct Pixel { unsigned char b,g,r,a; };
    union Pixel_u { Pixel p; uint32_t bits; };
    I would need to edit my existing code to say pixel.p.g to get the green color byte. The same happens if I do this:
    union Pixel { unsigned char c[4]; uint32_t bits; };
    This would work too, but I still need to change everything to index into c, which is a bit ugly, but I can make it work with a macro if I really needed to.
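
    A well-defined alternative to the union (a sketch, assuming a little-endian layout and that Pixel has no padding): copy the object representation with memcpy, which both C and C++ permit, and which compilers typically fold into a single 32-bit load, so the speed goal is not compromised.

        #include <cstdint>
        #include <cstring>

        struct Pixel { unsigned char b, g, r, a; };

        // Read the 32-bit value without type punning through a union or cast.
        static inline uint32_t pixel_bits(const Pixel& p) {
            uint32_t v;
            std::memcpy(&v, &p, sizeof v);   // usually optimized to one load
            return v;
        }

        bool is_dark(const Pixel& p) {
            // Assumes little-endian: b is the lowest byte, a the highest.
            return (pixel_bits(p) & 0x00E0E0E0u) == 0;
        }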

    Read the article

  • Memory interleaving

    - by Tim Green
    Hello, I have this question that has me rather confused. Suppose that a 1G x 32-bit main memory is built using 256M x 4-bit RAM chips and that this memory is byte-addressable. I have deduced that one would require 4 * 1G = 2^2 * 2^30 = 2^32 bytes - so 32 bits to address the full memory. My problem now comes with the following: say you had memory (byte) address "14"; determine which memory module this would go into. (There would have to be 8 chips per module to make the 32-bit-wide memory, and 4 modules overall, giving 32 chips in total. Modules are numbered from 0.) In high-order interleave, it appears trivial that it's the first (0) memory module, given that the first few bits are all 0. However, low-order interleave has me stumped. I can't figure out (for sure) how many bits are used to determine a memory module (possibly 2, given there are 4 in total?). The given solution is Module 3. This is not homework in the same sense, so I will not be tagging it as such.
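
    A sketch of the low-order case, assuming the interleave unit is one 32-bit (4-byte) word spread across the 4 modules: the word address is the byte address divided by 4, and its low 2 bits pick the module, which for byte address 14 gives module 3, matching the quoted solution. A high-order interleave would use the top address bits instead, which is why that case looks like module 0.

        #include <cstdio>

        int main() {
            unsigned byte_addr = 14;
            unsigned word_addr = byte_addr >> 2;        // 14 / 4 = 3 (word index)
            unsigned module    = word_addr & 0x3;       // low 2 bits: 3 % 4 = 3
            unsigned row       = word_addr >> 2;        // remaining bits: row within the module
            std::printf("byte %u -> word %u -> module %u, row %u\n",
                        byte_addr, word_addr, module, row);
            return 0;
        }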

    Read the article

  • Compression Program in C

    - by Delandilon
    I want to compress a series of characters. For example, if I type Input: FFFFFBBBBBBBCCBBBAABBGGGGGSSS (27 x 8 bits = 216 bits) the output should be: F5B7C2B3A2B2G5S3 (14 x 8 bits = 112 bits). So far this is what I have; I can count the number of characters in the array, but the most important task is to count them in the same sequence, and I can't seem to figure that out :( I've started doing C just a few weeks back; I have knowledge of arrays, pointers and ASCII values, but in any case I can't seem to count these characters in a sequence. I've tried a bit of everything. This approach is no good, but it is the closest I came to it.
    #include <stdio.h>
    #include <conio.h>

    int main() {
        int charcnt=0,dotcnt=0,commacnt=0,blankcnt=0,i, countA, countB;
        char str[125];
        printf("*****String Manipulations*****\n\n");
        printf("Enter a string\n\n");
        scanf("%[^'\n']s",str);
        printf("\n\nEntered String is \" %s \" \n",str);
        for(i=0;str[i]!='\0';i++) {
            // COUNTING EXCEPTION CHARS
            if(str[i]==' ') blankcnt++;
            if(str[i]=='.') dotcnt++;
            if(str[i]==',') commacnt++;
            if (str[i]=='A' || str[i]=='a') countA++;
            if (str[i]=='B' || str[i]=='b') countA++;
        }
        //PRINT RESULT OF COUNT
        charcnt=i;
        printf("\n\nTotal Characters : %d",charcnt);
        printf("\nTotal Blanks : %d",blankcnt);
        printf("\nTotal Full stops : %d",dotcnt);
        printf("\nTotal Commas : %d\n\n",commacnt);
        printf("A%d\n", countA);
    }
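
    A run-length-encoding sketch of the counting-in-sequence part (illustrative, and written with the C++ standard library rather than the question's plain C): walk the string once, find where the current run of identical characters ends, and emit the character followed by the run length.

        #include <iostream>
        #include <string>

        std::string rle(const std::string& in) {
            std::string out;
            for (size_t i = 0; i < in.size(); ) {
                size_t j = i;
                while (j < in.size() && in[j] == in[i]) ++j;   // end of the current run
                out += in[i];
                out += std::to_string(j - i);                   // run length
                i = j;
            }
            return out;
        }

        int main() {
            std::cout << rle("FFFFFBBBBBBBCCBBBAABBGGGGGSSS") << "\n";  // F5B7C2B3A2B2G5S3
            return 0;
        }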

    Read the article

  • Red Gate Coder interviews: Robin Hellen

    - by Michael Williamson
    Robin Hellen is a test engineer here at Red Gate, and is also the latest coder I’ve interviewed. We chatted about debugging code, the roles of software engineers and testers, and why Vala is currently his favourite programming language. How did you get started with programming?It started when I was about six. My dad’s a professional programmer, and he gave me and my sister one of his old computers and taught us a bit about programming. It was an old Amiga 500 with a variant of BASIC. I don’t think I ever successfully completed anything! It was just faffing around. I didn’t really get anywhere with it.But then presumably you did get somewhere with it at some point.At some point. The PC emerged as the dominant platform, and I learnt a bit of Visual Basic. I didn’t really do much, just a couple of quick hacky things. A bit of demo animation. Took me a long time to get anywhere with programming, really.When did you feel like you did start to get somewhere?I think it was when I started doing things for someone else, which was my sister’s final year of university project. She called up my dad two days before she was due to submit, saying “We need something to display a graph!”. Dad says, “I’m too busy, go talk to your brother”. So I hacked up this ugly piece of code, sent it off and they won a prize for that project. Apparently, the graph, the bit that I wrote, was the reason they won a prize! That was when I first felt that I’d actually done something that was worthwhile. That was my first real bit of code, and the ugliest code I’ve ever written. It’s basically an array of pre-drawn line elements that I shifted round the screen to draw a very spikey graph.When did you decide that programming might actually be something that you wanted to do as a career?It’s not really a decision I took, I always wanted to do something with computers. And I had to take a gap year for uni, so I was looking for twelve month internships. I applied to Red Gate, and they gave me a job as a tester. And that’s where I really started having to write code well. To a better standard that I had been up to that point.How did you find coming to Red Gate and working with other coders?I thought it was really nice. I learnt so much just from other people around. I think one of the things that’s really great is that people are just willing to help you learn. Instead of “Don’t you know that, you’re so stupid”, it’s “You can just do it this way”.If you could go back to the very start of that internship, is there something that you would tell yourself?Write shorter code. I have a tendency to write massive, many-thousand line files that I break out of right at the end. And then half-way through a project I’m doing something, I think “Where did I write that bit that does that thing?”, and it’s almost impossible to find. I wrote some horrendous code when I started. Just that principle, just keep things short. Even if looks a bit crazy to be jumping around all over the place all of the time, it’s actually a lot more understandable.And how do you hold yourself to that?Generally, if a function’s going off my screen, it’s probably too long. That’s what I tell myself, and within the team here we have code reviews, so the guys I’m with at the moment are pretty good at pulling me up on, “Doesn’t that look like it’s getting a bit long?”. It’s more just the subjective standard of readability than anything.So you’re an advocate of code review?Yes, definitely. Both to spot errors that you might have made, and to improve your knowledge. 
The person you’re reviewing will say “Oh, you could have done it that way”. That’s how we learn, by talking to others, and also just sharing knowledge of how your project works around the team, or even outside the team. Definitely a very firm advocate of code reviews.Do you think there’s more we could do with them?I don’t know. We’re struggling with how to add them as part of the process without it becoming too cumbersome. We’ve experimented with a few different ways, and we’ve not found anything that just works.To get more into the nitty gritty: how do you like to debug code?The first thing is to do it in my head. I’ll actually think what piece of code is likely to have caused that error, and take a quick look at it, just to see if there’s anything glaringly obvious there. The next thing I’ll probably do is throw in print statements, or throw some exceptions from various points, just to check: is it going through the code path I expect it to? A last resort is to actually debug code using a debugger.Why is the debugger the last resort?Probably because of the environments I learnt programming in. VB and early BASIC didn’t have much of a debugger, the only way to find out what your program was doing was to add print statements. Also, because a lot of the stuff I tend to work with is non-interactive, if it’s something that takes a long time to run, I can throw in the print statements, set a run off, go and do something else, and look at it again later, rather than trying to remember what happened at that point when I was debugging through it. So it also gives me the record of what happens. I hate just sitting there pressing F5, F5, continually. If you’re having to find out what your code is doing at each line, you’ve probably got a very wrong mental model of what your code’s doing, and you can find that out just as easily by inspecting a couple of values through the print statements.If I were on some codebase that you were also working on, what should I do to make it as easy as possible to understand?I’d say short and well-named methods. The one thing I like to do when I’m looking at code is to find out where a value comes from, and the more layers of indirection there are, particularly DI [dependency injection] frameworks, the harder it is to find out where something’s come from. I really hate that. I want to know if the value come from the user here or is a constant here, and if I can’t find that out, that makes code very hard to understand for me.As a tester, where do you think the split should lie between software engineers and testers?I think the split is less on areas of the code you write and more what you’re designing and creating. The developers put a structure on the code, while my major role is to say which tests we should have, whether we should test that, or it’s not worth testing that because it’s a tiny function in code that nobody’s ever actually going to see. So it’s not a split in the code, it’s a split in what you’re thinking about. Saying what code we should write, but alternatively what code we should take out.In your experience, do the software engineers tend to do much testing themselves?They tend to control the lowest layer of tests. And, depending on how the balance of people is in the team, they might write some of the higher levels of test. Or that might go to the testers. 
I’m the only tester on my team with three other developers, so they’ll be writing quite a lot of the actual test code, with input from me as to whether we should test that functionality, whereas on other teams, where it’s been more equal numbers, the testers have written pretty much all of the high level tests, just because that’s the best use of resource.If you could shuffle resources around however you liked, do you think that the developers should be writing those high-level tests?I think they should be writing them occasionally. It helps when they have an understanding of how testing code works and possibly what assumptions we’ve made in tests, and they can say “actually, it doesn’t work like that under the hood so you’ve missed this whole area”. It’s one of those agile things that everyone on the team should be at least comfortable doing the various jobs. So if the developers can write test code then I think that’s a very good thing.So you think testers should be able to write production code?Yes, although given most testers skills at coding, I wouldn’t advise it too much! I have written a few things, and I did make a few changes that have actually gone into our production code base. They’re not necessarily running every time but they are there. I think having that mix of skill sets is really useful. In some ways we’re using our own product to test itself, so being able to make those changes where it’s not working saves me a round-trip through the developers. It can be really annoying if the developers have no time to make a change, and I can’t touch the code.If the software engineers are consistently writing tests at all levels, what role do you think the role of a tester is?I think on a team like that, those distinctions aren’t quite so useful. There’ll be two cases. There’s either the case where the developers think they’ve written good tests, but you still need someone with a test engineer mind-set to go through the tests and validate that it’s a useful set, or the correct set for that code. Or they won’t actually be pure developers, they’ll have that mix of test ability in there.I think having slightly more distinct roles is useful. When it starts to blur, then you lose that view of the tests as a whole. The tester job is not to create tests, it’s to validate the quality of the product, and you don’t do that just by writing tests. There’s more things you’ve got to keep in your mind. And I think when you blur the roles, you start to lose that end of the tester.So because you’re working on those features, you lose that holistic view of the whole system?Yeah, and anyone who’s worked on the feature shouldn’t be testing it. You always need to have it tested it by someone who didn’t write it. Otherwise you’re a bit too close and you assume “yes, people will only use it that way”, but the tester will come along and go “how do people use this? How would our most idiotic user use this?”. I might not test that because it might be completely irrelevant. But it’s coming in and trying to have a different set of assumptions.Are you a believer that it should all be automated if possible?Not entirely. So an automated test is always better than a manual test for the long-term, but there’s still nothing that beats a human sitting in front of the application and thinking “What could I do at this point?”. The automated test is very good but they follow that strict path, and they never check anything off the path. 
The human tester will look at things that they weren’t expecting, whereas the automated test can only ever go “Is that value correct?” in many respects, and it won’t notice that on the other side of the screen you’re showing something completely wrong. And that value might have been checked independently, but you always find a few odd interactions when you’re going through something manually, and you always need to go through something manually to start with anyway, otherwise you won’t know where the important bits to write your automation are.When you’re doing that manual testing, do you think it’s important to do that across the entire product, or just the bits that you’ve touched recently?I think it’s important to do it mostly on the bits you’ve touched, but you can’t ignore the rest of the product. Unless you’re dealing with a very, very self-contained bit, you’re almost always encounter other bits of the product along the way. Most testers I know, even if they are looking at just one path, they’ll keep open and move around a bit anyway, just because they want to find something that’s broken. If we find that your path is right, we’ll go out and hunt something else.How do you think this fits into the idea of continuously deploying, so long as the tests pass?With deploying a website it’s a bit different because you can always pull it back. If you’re deploying an application to customers, when you’ve released it, it’s out there, you can’t pull it back. Someone’s going to keep it, no matter how hard you try there will be a few installations that stay around. So I’d always have at least a human element on that path. With websites, you could probably automate straight out, or at least straight out to an internal environment or a single server in a cloud of fifty that will serve some people. But I don’t think you should release to everyone just on automated tests passing.You’ve already mentioned using BASIC and C# — are there any other languages that you’ve used?I’ve used a few. That’s something that has changed more recently, I’ve become familiar with more languages. Before I started at Red Gate I learnt a bit of C. Then last year, I taught myself Python which I actually really enjoyed using. I’ve also come across another language called Vala, which is sort of a C#-like language. It’s basically a pre-processor for C, but it has very nice syntax. I think that’s currently my favourite language.Any particular reason for trying Vala?I have a completely Linux environment at home, and I’ve been looking for a nice language, and C# just doesn’t cut it because I won’t touch Mono. So, I was looking for something like C# but that was useable in an open source environment, and Vala’s what I found. C#’s got a few features that Vala doesn’t, and Vala’s got a few features where I think “It would be awesome if C# had that”.What are some of the features that it’s missing?Extension methods. And I think that’s the only one that really bugs me. I like to use them when I’m writing C# because it makes some things really easy, especially with libraries that you can’t touch the internals of. It doesn’t have method overloading, which is sometimes annoying.Where it does win over C#?Everything is non-nullable by default, you never have to check that something’s unexpectedly null.Also, Vala has code contracts. 
This is starting to come in C# 4, but the way it works in Vala is that you specify requirements in short phrases as part of your function signature and they stick to the signature, so that when you inherit it, it has exactly the same code contract as the base one, or when you inherit from an interface, you have to match the signature exactly. Just using those makes you think a bit more about how you’re writing your method, it’s not an afterthought when you’ve got contracts from base classes given to you, you can’t change it. Which I think is a lot nicer than the way C# handles it. When are those actually checked?They’re checked both at compile and run-time. The compile-time checking isn’t very strong yet, it’s quite a new feature in the compiler, and because it compiles down to C, you can write C code and interface with your methods, so you can bypass that compile-time check anyway. So there’s an extra runtime check, and if you violate one of the contracts at runtime, it’s game over for your program, there’s no exception to catch, it’s just goodbye!One thing I dislike about C# is the exceptions. You write a bit of code and fifty exceptions could come from any point in your ten lines, and you can’t mentally model how those exceptions are going to come out, and you can’t even predict them based on the functions you’re calling, because if you’ve accidentally got a derived class there instead of a base class, that can throw a completely different set of exceptions. So I’ve got no way of mentally modelling those, whereas in Vala they’re checked like Java, so you know only these exceptions can come out. You know in advance the error conditions.I think Raymond Chen on Old New Thing says “the only thing you know when you throw an exception is that you’re in an invalid state somewhere in your program, so just kill it and be done with it!”You said you’ve also learnt bits of Python. How did you find that compared to Vala and C#?Very different because of the dynamic typing. I’ve been writing a website for my own use. I’m quite into photography, so I take photos off my camera, post-process them, dump them in a file, and I get a webpage with all my thumbnails. So sort of like Picassa, but written by myself because I wanted something to learn Python with. There are some things that are really nice, I just found it really difficult to cope with the fact that I’m not quite sure what this object type that I’m passed is, I might not ever be sure, so it can randomly blow up on me. But once I train myself to ignore that and just say “well, I’m fairly sure it’s going to be something that looks like this, so I’ll use it like this”, then it’s quite nice.Any particular features that you’ve appreciated?I don’t like any particular feature, it’s just very straightforward to work with. It’s very quick to write something in, particularly as you don’t have to worry that you’ve changed something that affects a different part of the program. If you have, then that part blows up, but I can get this part working right now.If you were doing a big project, would you be willing to do it in Python rather than C# or Vala?I think I might be willing to try something bigger or long term with Python. We’re currently doing an ASP.NET MVC project on C#, and I don’t like the amount of reflection. There’s a lot of magic that pulls values out, and it’s all done under the scenes. 
It’s almost managed to put a dynamic type system on top of C#, which in many ways destroys the language to me, whereas if you’re already in a dynamic language, having things done dynamically is much more natural. In many ways, you get the worst of both worlds. I think for web projects, I would go with Python again, whereas for anything desktop, command-line or GUI-based, I’d probably go for C# or Vala, depending on what environment I’m in.It’s the fact that you can gain from the strong typing in ways that you can’t so much on the web app. Or, in a web app, you have to use dynamic typing at some point, or you have to write a hell of a lot of boilerplate, and I’d rather use the dynamic typing than write the boilerplate.What do you think separates great programmers from everyone else?Probably design choices. Choosing to write it a piece of code one way or another. For any given program you ask me to write, I could probably do it five thousand ways. A programmer who is capable will see four or five of them, and choose one of the better ones. The excellent programmer will see the largest proportion and manage to pick the best one very quickly without having to think too much about it. I think that’s probably what separates, is the speed at which they can see what’s the best path to write the program in. More Red Gater Coder interviews

    Read the article

  • SQL Navigator startup error: Unhandled Exception at startup - Cannot find OCI DLL: oci.dll

    - by Imageree
    I am using 64-bit Windows 7. The Oracle development tool SQL Navigator 5.5 was installed on my computer. When I try to start the program I get this error: "Unhandled Exception at startup - Cannot find OCI DLL: oci.dll" Then I get this error: "Access violation at address 0101916B in module 'SQLNav5.exe'. Read of address 00000000" and then the program is terminated. Any ideas what the problem is? Update: I am trying to install the Oracle client (SQL Navigator) - not sure if the server is 64-bit or not.

    Read the article

  • Windows Server 2003 Terminal Server with installed licenses will not go beyond 2 connections

    - by Erwin Blonk
    I installed the Terminal Server role in Windows Server 2003 Standard (64-bit). Still, only 2 connections are allowed. The License Manager says that there are 10 Device CALs available, which is correct, and that none are given out. For good measure I let the server reboot, to no effect. Before this, there was another server (same Windows, except that it is 32-bit) active as a licensing server. I removed the role first and then added it to the new server. I then removed the Terminal Server Licensing Server component from the old one and added it to the new one. After that, I added the licenses. When that didn't give the required result, I rebooted the new server. Still, the new server, with licenses and all, acts as if it has only the 2 default RDP connections. The servers are all stand-alone; there is no Active Directory set up. Both servers are in different workgroups.

    Read the article

  • httpd keeps crashing without any reference to why in the logs

    - by Fred
    I have the logs set to debug in the hopes of tracking down what's causing the crash, but I can't find anything. Here is the error_log. [Thu Jan 06 10:27:35 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 19999 for (*) [Thu Jan 06 14:47:04 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Thu Jan 06 14:47:04 2011] [info] Init: Seeding PRNG with 256 bytes of entropy [Thu Jan 06 14:47:04 2011] [info] Init: Generating temporary RSA private keys (512/1024 bits) [Thu Jan 06 14:47:04 2011] [info] Init: Generating temporary DH parameters (512/1024 bits) [Thu Jan 06 14:47:04 2011] [info] Init: Initializing (virtual) servers for SSL [Thu Jan 06 14:47:04 2011] [info] Server: Apache/2.2.3, Interface: mod_ssl/2.2.3, Library: OpenSSL/0.9.8e-fips-rhel5 [Thu Jan 06 14:47:04 2011] [notice] Digest: generating secret for digest authentication ... [Thu Jan 06 14:47:04 2011] [notice] Digest: done [Thu Jan 06 14:47:04 2011] [debug] util_ldap.c(2021): LDAP merging Shared Cache conf: shm=0xb9dc2480 rmm=0xb9dc24b0 for VHOST: server.fredfinn.com [Thu Jan 06 14:47:04 2011] [info] APR LDAP: Built with OpenLDAP LDAP SDK [Thu Jan 06 14:47:04 2011] [info] LDAP: SSL support available [Thu Jan 06 14:47:05 2011] [info] Init: Seeding PRNG with 256 bytes of entropy [Thu Jan 06 14:47:05 2011] [info] Init: Generating temporary RSA private keys (512/1024 bits) [Thu Jan 06 14:47:05 2011] [info] Init: Generating temporary DH parameters (512/1024 bits) [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(374): shmcb_init allocated 512000 bytes of shared memory [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(554): entered shmcb_init_memory() [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(576): for 512000 bytes, recommending 4266 indexes [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(619): shmcb_init_memory choices follow [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(621): division_mask = 0x1F [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(623): division_offset = 64 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(625): division_size = 15998 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(627): queue_size = 1604 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(629): index_num = 133 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(631): index_offset = 8 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(633): index_size = 12 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(635): cache_data_offset = 8 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(637): cache_data_size = 14386 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(650): leaving shmcb_init_memory() [Thu Jan 06 14:47:05 2011] [info] Shared memory session cache initialised [Thu Jan 06 14:47:05 2011] [info] Init: Initializing (virtual) servers for SSL [Thu Jan 06 14:47:05 2011] [info] Server: Apache/2.2.3, Interface: mod_ssl/2.2.3, Library: OpenSSL/0.9.8e-fips-rhel5 [Thu Jan 06 14:47:05 2011] [warn] pid file /etc/httpd/run/httpd.pid overwritten -- Unclean shutdown of previous Apache run? 
[Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26527 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26527 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26528 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26528 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26529 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26529 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26530 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26530 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26532 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26532 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26533 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26533 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26534 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26534 for (*) [Thu Jan 06 14:47:05 2011] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations [Thu Jan 06 14:47:05 2011] [info] Server built: Aug 30 2010 12:32:08 [Thu Jan 06 14:47:05 2011] [debug] prefork.c(991): AcceptMutex: sysvsem (default: sysvsem) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26531 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26531 for (*) The logs are setup as: ErrorLog logs/error_log LogLevel debug LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent CustomLog logs/access_log common CustomLog logs/access_log combined ServerSignature On

    Read the article

  • Basic networking problem with Ubuntu 9.04 on Acer Extensa 5635Z laptop

    - by sapporo
    I just installed Ubuntu 9.04 on a brand new Acer Extensa 5635Z laptop, but Ethernet networking doesn't work (wireless doesn't work either, but I'd be happy with Ethernet for now). eth0 isn't listed in /etc/network/interfaces:
    $ cat /etc/network/interfaces
    auto lo
    iface lo inet loopback
    lshw does show the NIC, but I can't make much sense out of the information:
    $ sudo lshw -class network -sanitize
    *-network DISABLED description: Wireless interface product: AR928X Wireless Network Adapter (PCI-Express) vendor: Atheros Communications Inc. physical id: 0 bus info: pci@0000:07:00.0 logical name: wmaster0 version: 01 serial: [REMOVED] width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix bus_master cap_list logical ethernet physical wireless configuration: broadcast=yes driver=ath9k latency=0 module=ath9k multicast=yes wireless=IEEE 802.11bgn
    *-network UNCLAIMED description: Ethernet controller product: Attansic Technology Corp. vendor: Attansic Technology Corp. physical id: 0 bus info: pci@0000:09:00.0 version: c0 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd cap_list configuration: latency=0
    *-network DISABLED description: Ethernet interface physical id: 1 logical name: pan0 serial: [REMOVED] capabilities: ethernet physical configuration: broadcast=yes driver=bridge driverversion=2.3 firmware=N/A link=yes multicast=yes
    Thanks for your help!

    Read the article

  • How-to: determine 64-bitness of Windows? [closed]

    - by warren
    Possible Duplicate: Tell the version of Windows XP (64-bits or 32-bits) Is it possible to determine whether a given installation of Windows is 32- or 64-bit? From right-clicking on My Computer, and selecting Properties, it appears that such information is not readily available. Typing ver at the command prompt also doesn't seem to return anything about the nature of the platform in which it is installed. Under Linux, I'd use uname -a to find out what kernel was running. Is there an analog on Windows?
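
    If a programmatic check is acceptable, a small Win32 sketch (illustrative, not the only way): GetNativeSystemInfo reports the real processor architecture even when called from a 32-bit process running under WOW64, which is exactly the distinction the Properties dialog hides here.

        #include <windows.h>
        #include <iostream>

        int main() {
            SYSTEM_INFO si;
            GetNativeSystemInfo(&si);   // reports the native architecture, not the emulated one
            bool is64 = (si.wProcessorArchitecture == PROCESSOR_ARCHITECTURE_AMD64 ||
                         si.wProcessorArchitecture == PROCESSOR_ARCHITECTURE_IA64);
            std::cout << (is64 ? "64-bit Windows" : "32-bit Windows") << "\n";
            return 0;
        }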

    Read the article

  • What does "cpuid level" mean?

    - by ogzylz
    As an example, here is the output for just one core of a 16-core machine. What does a "cpuid level" of "6" mean? Also, what do a "bogomips" value of "5992.10" and a "clflush size" of "64" mean?
    processor       : 0
    vendor_id       : GenuineIntel
    cpu family      : 15
    model           : 6
    model name      : Intel(R) Xeon(TM) CPU 3.00GHz
    stepping        : 8
    cpu MHz         : 2992.689
    cache size      : 4096 KB
    physical id     : 0
    siblings        : 4
    core id         : 0
    cpu cores       : 2
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 6
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm
    bogomips        : 5992.10
    clflush size    : 64
    cache_alignment : 128
    address sizes   : 40 bits physical, 48 bits virtual
    power management:

    Read the article

  • Can I grant permissions on files in windows 7 using a security identifier from another machine

    - by Thomas
    I have an external hard drive, and I wish to grant permissions on some files to users from 2 different computers without having to hook it up to the 2 different computers. I know the SID of the user on the other computer, I'd like to know if and how I can grant permissions to files using the SID. I'm running Windows 7 Professional 64 bits, and "The Other" computer Win 7 Home Premium 64 bits, they are not in a domain, but separate computers on a home network (not even same homegroup). Note: Duplicated question with: Is there a way to give NTFS file permissions to users from other Windows installations?

    Read the article

  • Why does a 1 Gbit card have its output limited to 80 MiB?

    - by Gallus
    I'm trying to utilize the maximum bandwidth provided by my 1 Gbit network card, but it's always limited to 80 MiB/s (real megabytes per second). What could be the reason? Card description (lshw output):
    description: Ethernet interface
    product: DGE-530T Gigabit Ethernet Adapter (rev 11)
    vendor: D-Link System Inc
    physical id: 0
    bus info: pci@0000:03:00.0
    logical name: eth1
    version: 11
    serial: 00:22:b0:68:70:41
    size: 1GB/s
    capacity: 1GB/s
    width: 32 bits
    clock: 66MHz
    capabilities: pm vpd bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
    The card is placed in the following PCI slot:
    *-pci:2
    description: PCI bridge
    product: 82801 PCI Bridge
    vendor: Intel Corporation
    physical id: 1e
    bus info: pci@0000:00:1e.0
    version: 92
    width: 32 bits
    clock: 33MHz
    capabilities: pci subtractive_decode bus_master cap_list
    That's conventional PCI, not PCI Express, right? It's a legacy PCI slot, so maybe this is the reason? The OS is Linux.
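
    As a rough sanity check (not from the original exchange): conventional PCI at 32 bits x 33 MHz has a theoretical peak of 32 x 33,000,000 = 1,056 Mbit/s, about 132 MB/s, and that bandwidth is shared by every device behind the bridge. Protocol overhead typically leaves sustained rates in the 80-100 MB/s range, while a gigabit link needs roughly 125 MB/s to run flat out, so the slot rather than the card is the plausible ceiling here.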

    Read the article
