Search Results

Search found 2353 results on 95 pages for 'hex bob omb'.


  • What is the precise definition of programming paradigm?

    - by Kazark
    Wikipedia defines programming paradigm thus: "a fundamental style of computer programming", which is echoed in the descriptive text of the paradigms tag on this site. I find this a disappointing definition. Anyone who knows the words programming and paradigm could do about that well without knowing anything else about it. There are many styles of computer programming at many levels of abstraction; within any given programming paradigm, multiple styles are possible. For example, Bob Martin says in Clean Code (13), "Consider this book a description of the Object Mentor School of Clean Code. The techniques and teachings within are the way that we practice our art. We are willing to claim that if you follow these teachings, you will enjoy the benefits that we have enjoyed, and you will learn to write code that is clean and professional. But don't make the mistake of thinking that we are somehow "right" in any absolute sense." Thus Bob Martin is not claiming to have the correct style of Object-Oriented programming, even though he, if anyone, might have some claim to doing so. But even within his school of programming, we might have different styles of formatting the code (K&R, etc.). There are many styles of programming at many levels. So how can we define programming paradigm rigorously, to distinguish it from other categories of programming styles? Fundamental is somewhat helpful, but not specific. How can we define the phrase in a way that will communicate more than the separate meanings of each of the two words—in other words, how can we define it in a way that will provide additional meaning for someone who speaks English but isn't familiar with a variety of paradigms?

    Read the article

  • Mapping XFCE4/xRDP sessions to users

    - by garrilla
    I have Ubuntu 13.10 with Xubuntu Desktop - XFCE4. I'm trying to use xRDP to allow MS Windows users to log in to the machine with their own user. I've been all around the houses with this! I've found two half-way solutions, but can't get them to work as I'd like... 1) in /etc/xrdp/xrdp.ini I set the port to -1 [xrdp1] name=sesman-Xvnc lib=libvnc.so username=ask password=ask ip=127.0.0.1 port=-1 each time any user logs on they get a new session - they can never go back to their original session 2) in /etc/xrdp/xrdp.ini I set the port to 5912 (e.g.) [xrdp1] name=sesman-Xvnc lib=libvnc.so username=ask password=ask ip=127.0.0.1 port=5912 each time any user logs on they always log on to the same session irrespective of their logon details ??) I found a mid-way solution, to create a lot of sessions by adding additional options in the xrdp.ini e.g. [xrdp8] name=Bob's Logon lib=libvnc.so username=ask password=ask ip=127.0.0.1 port=5913 [xrdp9] name=Jill's Logon lib=libvnc.so username=ask password=ask ip=127.0.0.1 port=5914 and so on, but the problem with this is that Jill can still log into Bob's remote session ??? Is it possible to do what I'm trying to do? Maybe I have to use different tools?

    Read the article

  • How does Google Analytics aggregate the Count of Visits (Frequency & Recency Report)?

    - by Brian Dant
    Here's my simple understanding of Count of Visits: Each person that comes to my site gets one "count" for each visit. They are put into a bucket of people with the same number of total counts -- if you visit twice, you are in the two bucket, if you visit six times, you are in the six bucket. From there, a report (Frequency & Recency) makes a line for each bucket and reaches into the bucket and totals the number of people in that bucket, putting that total in the second column. My Question: Will a two-month report automatically put someone into two buckets, and put them on two separate lines in the Count of Visits table? This explanation makes it seem like a two-month-long report will put the same person into a bucket twice, one bucket for each month. The two-month report will then show that person's visits on two different lines, instead of aggregating them. Example for Clarification: Bob comes to my site three times in January and seven times in February. I run a report for Jan 1 -- Feb 28. Will Bob be on both the Three Count line and the Seven Count line, or will he be on the Ten Count line?

    Read the article

  • Binary to Ascii and back again

    - by rross
    I'm trying to interface with a hardware device via the serial port. When I use software like Portmon to see the messages they look like this: 42 21 21 21 21 41 45 21 26 21 29 21 26 59 5F 41 30 21 2B 21 27 42 21 21 21 21 41 47 21 27 21 28 21 27 59 5D 41 32 21 2A 21 28 When I run them through a hex to ASCII converter the commands don't make sense. Are these messages in fact something different than hex? My hope was to see the messages the device is passing and emulate them using C#. What can I do to find out exactly what the messages are?
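
    A quick way to eyeball a capture like this is to print each byte as hex, decimal, and (where printable) ASCII side by side; note that both messages share the prefix 42 21 21 21 21 41 and both contain 59 .. 41, which suggests binary framing rather than ASCII text. A minimal sketch (Java purely for illustration; the capture string is the first message above):

        public class DumpCapture {
            public static void main(String[] args) {
                String capture = "42 21 21 21 21 41 45 21 26 21 29 21 26 59 5F 41 30 21 2B 21 27";
                for (String tok : capture.split("\\s+")) {
                    int b = Integer.parseInt(tok, 16);                  // each token is one byte in hex
                    char c = (b >= 0x20 && b < 0x7F) ? (char) b : '.';  // '.' for non-printable bytes
                    System.out.printf("0x%02X %3d %c%n", b, b, c);
                }
            }
        }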

    Read the article

  • Hexagonal Grid Coordinates To Pixel Coordinates

    - by CaptnCraig
    I am working with a hexagonal grid. I have chosen to use this coordinate system because it is quite elegant. This question talks about generating the coordinates themselves, and is quite useful. My issue now is in converting these coordinates to and from actual pixel coordinates. I am looking for a simple way to find the center of a hexagon with coordinates x,y,z. Assume (0,0) in pixel coordinates is at (0,0,0) in hex coords, and that each hexagon has an edge of length s. It seems to me like x, y, and z should each move my coordinate a certain distance along an axis, but they are interrelated in an odd way that I can't quite wrap my head around. Bonus points if you can go the other direction and convert any (x,y) point in pixel coordinates to the hex that point belongs in.
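
    For what it's worth, here is how that conversion usually comes out (a sketch, assuming pointy-top hexagons of edge length s and the common cube-to-axial mapping q = x, r = z; a flat-top layout swaps the roles of the two pixel axes):

        public class HexToPixel {
            // Center of the hex at cube coords (x, y, z), where x + y + z == 0.
            static double[] center(int x, int y, int z, double s) {
                double px = s * Math.sqrt(3) * (x + z / 2.0);
                double py = s * 1.5 * z;
                return new double[] { px, py };
            }

            // Bonus direction: pixel -> fractional axial coords, rounded in cube space.
            static int[] hexAt(double px, double py, double s) {
                double q = (Math.sqrt(3) / 3.0 * px - py / 3.0) / s;
                double r = (2.0 / 3.0 * py) / s;
                return cubeRound(q, -q - r, r);
            }

            static int[] cubeRound(double x, double y, double z) {
                long rx = Math.round(x), ry = Math.round(y), rz = Math.round(z);
                double dx = Math.abs(rx - x), dy = Math.abs(ry - y), dz = Math.abs(rz - z);
                if (dx > dy && dx > dz) rx = -ry - rz;      // reset the component with the
                else if (dy > dz)       ry = -rx - rz;      // largest rounding error so the
                else                    rz = -rx - ry;      // coordinates still sum to zero
                return new int[] { (int) rx, (int) ry, (int) rz };
            }
        }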

    Read the article

  • JPG segment length encoding

    - by Blorgbeard
    I'm trying to write some code to extract Exif information from a JPG. Exif is stored in the APP1 segment of a JPG file. According to the Exif spec, the format of the APP1 segment is supposed to start like this:

        FF E1  // APP1 segment marker
        nn nn  // Length of segment
        45     // 'E'
        78     // 'x'
        69     // 'i'
        66     // 'f'

    And it goes until there is an FF followed by something other than FF or 00. Looking at a JPG in a hex editor, I can see FF E1 and the Exif string, but I'm having trouble decoding the length bytes. An example: in one jpg, my hex editor tells me the APP1 segment is 686 bytes long, but the length bytes are F7 C8. How should I use those bytes to come up with 686 decimal?
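
    For what it's worth, JPEG segment lengths are stored big-endian and include the two length bytes themselves, so a 686-byte APP1 segment would store 02 AE (0x02AE = 686); if the two bytes right after FF E1 read F7 C8, the hex editor's cursor is probably not on the length field. A minimal sketch of the decoding:

        public class SegmentLength {
            // Big-endian 16-bit length, as used by JPEG segment headers.
            static int segmentLength(byte hi, byte lo) {
                return ((hi & 0xFF) << 8) | (lo & 0xFF);
            }

            public static void main(String[] args) {
                System.out.println(segmentLength((byte) 0x02, (byte) 0xAE)); // prints 686
            }
        }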

    Read the article

  • UART speed possibly wrong

    - by Mike
    My brain is fried, so I thought I would pass this one to the community. When sending 1 character to my embedded system, it consistently thinks it receives 2 characters. The first received character seems to map to the transmitted character (in some unknown way) and the second received character is always 0xff. Here is what I observed (Tx character in hex, then Rx character in hex; I left out the second byte, which is always ff):

        Tx  Rx
        31  9D
        32  9B
        33  99
        61  3D
        62  3B
        63  39
        64  37
        65  35
        41  7D
        42  7B
        43  79

    I have checked my clocks and they seem to be ok. The only difference between this non-working version and the previous version is that I am now using an RS485 chip. I have traced the signal all the way up to the MCU and it looks fine (confirmed the bit value on the rx pin).
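
    One pattern worth checking against that table: every Rx byte equals the bitwise complement of the Tx byte shifted left by one, i.e. rx == ~(tx << 1) & 0xFF. That is the shape an inverted line would tend to produce (e.g. swapped A/B wires on the RS485 transceiver), where the start bit leaks into the data; it would also fit the constant trailing 0xff. Offered as a hypothesis to verify, not a diagnosis. A quick consistency check:

        public class UartCheck {
            public static void main(String[] args) {
                int[][] pairs = { {0x31, 0x9D}, {0x32, 0x9B}, {0x33, 0x99}, {0x61, 0x3D},
                                  {0x62, 0x3B}, {0x63, 0x39}, {0x64, 0x37}, {0x65, 0x35},
                                  {0x41, 0x7D}, {0x42, 0x7B}, {0x43, 0x79} };
                for (int[] p : pairs) {
                    int predicted = ~(p[0] << 1) & 0xFF;  // complement of (tx shifted left by one)
                    System.out.printf("tx 0x%02X -> rx 0x%02X, ~(tx<<1) = 0x%02X %s%n",
                            p[0], p[1], predicted, predicted == p[1] ? "match" : "MISMATCH");
                }
            }
        }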

    Read the article

  • How to populate List<string> with Datarow values from single columns...

    - by James
    Hi, I'm still learning (baby steps). I'm messing about with a function and hoping to find a tidier way to deal with my datatables. For the more commonly used tables throughout the life of the program, I'll dump them to datatables and query those instead. What I'm hoping to do is query the datatables for, say, column x = "this", and convert the values of column "y" directly to a List to return to the caller: private List<string> LookupColumnY(string hex) { List<string> stringlist = new List<string>(); DataRow[] rows = tblDataTable.Select("Columnx = '" + hex + "'", "Columny ASC"); foreach (DataRow row in rows) { stringlist.Add(row["Columny"].ToString()); } return stringlist; } Anyone know a slightly simpler method? I guess this is easy enough, but I'm wondering whether, if I do enough of these, iterating via a foreach loop will become a performance hit. TIA!
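
    For comparison, the same filter-sort-project idea expressed with Java streams (a rough analog rather than the poster's C#; the Row record and its accessors are hypothetical stand-ins for the DataRow API):

        import java.util.Comparator;
        import java.util.List;
        import java.util.stream.Collectors;

        // Hypothetical row type standing in for a DataRow. Requires Java 16+ for records.
        record Row(String columnX, String columnY) {}

        class LookupDemo {
            // Filter on columnX, sort by columnY ascending, project columnY into a List.
            static List<String> lookupColumnY(List<Row> table, String hex) {
                return table.stream()
                        .filter(r -> r.columnX().equals(hex))
                        .sorted(Comparator.comparing(Row::columnY))
                        .map(Row::columnY)
                        .collect(Collectors.toList());
            }
        }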

    Read the article

  • Code Golf: Seven Segments

    - by LiraNuna
    The challenge
    The shortest code by character count to generate seven segment display representation of a given hex number.
    Input
    Input is made out of digits [0-9] and hex characters in both lower and upper case [a-fA-F] only. There is no need to handle special cases.
    Output
    Output will be the seven segment representation of the input, using those ASCII faces:

         _       _   _       _   _   _   _   _   _       _       _   _
        | |   |  _|  _| |_| |_  |_    | |_| |_| |_| |_  |    _| |_  |_
        |_|   | |_   _|   |  _| |_|   | |_|  _| | | |_| |_  |_| |_  |

    Restrictions
    The use of the following is forbidden: eval, exec, system, figlet, toilet and external libraries.
    Test cases:
    Input: deadbeef
    Output:

            _  _        _  _  _
         _||_ |_| _||_ |_ |_ |_
        |_||_ | ||_||_||_ |_ |

    Input: 4F790D59
    Output:

            _  _  _  _     _  _
        |_||_   ||_|| | _||_ |_|
          ||    | _||_||_| _| _|

    Code count includes input/output (i.e. full program).

    Read the article

  • PHP: why uniqid returned value is only 13 digits long

    - by Marco Demaio
    The uniqid() function returns a 13-digit-long hexadecimal number. According to the spec on the php.net site, the function uses microtime to generate the unique value. But microtime returns numbers in string format like the following one: "0.70352700 12689396875", which is basically the microseconds and the seconds elapsed since 1970. This is a 9+11 digit decimal number. Converting a 20-digit decimal number into hex would result in a 16-digit hexadecimal, NOT a 13-digit one. I also thought to take out the "0." part, which seems to never change, and the last two digits of the microsec part, which seem to remain always "00". Doing this the decimal number would be only 9+11-3 digits long, but a 17-digit decimal number converted into hex would still result in a 14-digit hexadecimal number, NOT 13. You probably think I'm crazy for asking such a thing, but I'm concerned about using uniqid: unique values are important to be unique, and a duplicated value could screw up an entire application.
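
    The missing piece may be that the two parts are not converted as one decimal number: the usual description of uniqid() is seconds formatted as 8 hex digits concatenated with microseconds formatted as 5 hex digits (treat the exact PHP internals as an assumption here). Seconds since 1970 fit in 8 hex digits until 2106, and microseconds stay below 16^5 = 1048576, so 8 + 5 = 13. A quick demonstration of the arithmetic:

        public class UniqidWidth {
            public static void main(String[] args) {
                long seconds = System.currentTimeMillis() / 1000;       // < 2^32, so 8 hex digits
                long micros  = (System.nanoTime() / 1000) % 1_000_000;  // 0..999999 < 16^5, so 5 hex digits
                String id = String.format("%08x%05x", seconds, micros);
                System.out.println(id + " -> " + id.length() + " hex digits"); // 13
            }
        }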

    Read the article

  • How to use ORDER BY, LOWER .. in SQL SERVER 2008 with non-unicode data

    - by hgulyan
    Hi, The question is about Armenian. I'm using SQL Server 2005, collation SQL_Latin1_General_CP1_CI_AS, the data is mostly in Armenian and we can't use Unicode. I tested on MS SQL 2008 with a Windows collation for the Armenian language ( Cyrillic_General_100_ ), which I found here ( http://msdn.microsoft.com/en-us/library/ms188046.aspx ), but it didn't help. I have a function that orders hex values, and a lower function, which takes each char in each string and converts it to its lower form, but it's not an acceptable solution: it works really slow, calling those functions on every column of a huge table. Is there any solution for this issue without using Unicode and without working with hex values manually?

    Read the article

  • Is it possible to use Regex through Hexadecimal to find email addresses

    - by LukeJenx
    Not sure if this is even possible, but I have been looking at using Regex to get an email address that is in hex. Basically this is to build up some of my automated forensic tools, but I am having problems making a suitable Regex algorithm. Regex for email: /^([a-z0-9_.-]+)@([\da-z.-]+).([a-z.]{2,6})$/ Hex values: @ = 40, . = 2E, .com = 636f6d, _ = 5f, A/a = 41/61 [1], Z/z = 5a/7a, - = 2d. This is what I have got at the moment (it only takes into account lower case and .com), but it doesn't work! Have I messed something simple up? "/^([61-7a]+)40([61-7a]+)23(636f6d)$/" [1] I know email can only be lower case but I need to take uppercase into account too.
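
    An alternative that sidesteps hex ranges inside the pattern (character classes like [61-7a] match characters, not byte values): decode the dump to text first, then run an ordinary email regex over the result. A minimal sketch, where the hex string is a made-up sample that decodes to "bob@example.com":

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class HexEmailScan {
            public static void main(String[] args) {
                String hex = "626f62406578616d706c652e636f6d"; // "bob@example.com"
                StringBuilder text = new StringBuilder();
                for (int i = 0; i + 1 < hex.length(); i += 2) {
                    // decode each pair of hex digits to one character
                    text.append((char) Integer.parseInt(hex.substring(i, i + 2), 16));
                }
                Pattern email = Pattern.compile("[a-zA-Z0-9._-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,6}");
                Matcher m = email.matcher(text);
                while (m.find()) {
                    System.out.println(m.group()); // bob@example.com
                }
            }
        }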

    Read the article

  • Best calculator software to help programmers

    - by RHaguiuda
    As an embedded systems programmer I always need to make lots of base conversions (dec to hex, hex to bin and so on...), and I must admit: the Windows 7 calculator is a good calc, but too limited in my point of view. I work a lot with communications protocols and it's common to need some base conversion in this field. I'm looking for calculator software (not a hardware one) to help with base conversions, but it must also support scientific calculation. Can anyone help with this? Since this subject is intended to help programmers, I did not ask this on SuperUser.com. Thanks.

    Read the article

  • using bash: write bit representation of integer to file

    - by theseion
    Hullo First, I want to use bash for this and the script should run on as many systems as possible (I don't know if the target system will have python or whatever installed). Here's the problem: I have a file with binary data and I need to replace a few bytes in a certain position. I've come up with the following to direct bash to the offset and show me that it found the place I want: dd bs=1 if=file iseek=24 conv=block cbs=2 | hexdump Now, to use "file" as the output: echo anInteger | dd bs=1 of=hextest.txt oseek=24 conv=block cbs=2 This seems to work just fine, I can review the changes made in a hex editor. Problem is, "anInteger" will be written as the ASCII representation of that integer (which makes sense) but I need to write the binary representation. How do I tell the command to convert the input to binary (possibly from a hex)?

    Read the article

  • Why doesn't an octal literal as a string cast to a number?

    - by Andy E
    In JavaScript, why does an octal number string cast to a decimal number? I can cast a hex literal string using Number() or +, so why not an octal? For instance: 1000 === +"1000" // -> true 0xFF === +"0xFF" // -> true 0100 === +"0100" // -> false - +"0100" gives 100, not 64 I know I can parse with parseInt("0100" [, 8]), but I'd like to know why casting doesn't work like it does with hex and dec numbers. Also, does anyone know why octal literals were dropped from ECMAScript 5th Edition in strict mode?

    Read the article

  • How to match ColdFusion encryption with Java 1.4.2?

    - by JohnTheBarber
    * sweet - thanks to Edward Smith for the CF Technote that indicated the key from ColdFusion was Base64 encoded. See generateKey() for the 'fix' My task is to use Java 1.4.2 to match the results a given ColdFusion code sample for encryption. Known/given values: A 24-byte key A 16-byte salt (IVorSalt) Encoding is Hex Encryption algorithm is AES/CBC/PKCS5Padding A sample clear-text value The encrypted value of the sample clear-text after going through the ColdFusion code Assumptions: Number of iterations not specified in the ColdFusion code so I assume only one iteration 24-byte key so I assume 192-bit encryption Given/working ColdFusion encryption code sample: <cfset ThisSalt = "16byte-salt-here"> <cfset ThisAlgorithm = "AES/CBC/PKCS5Padding"> <cfset ThisKey = "a-24byte-key-string-here"> <cfset thisAdjustedNow = now()> <cfset ThisDateTimeVar = DateFormat( thisAdjustedNow , "yyyymmdd" )> <cfset ThisDateTimeVar = ThisDateTimeVar & TimeFormat( thisAdjustedNow , "HHmmss" )> <cfset ThisTAID = ThisDateTimeVar & "|" & someOtherData> <cfset ThisTAIDEnc = Encrypt( ThisTAID , ThisKey , ThisAlgorithm , "Hex" , ThisSalt)> My Java 1.4.2 encryption/decryption code swag: package so.example; import java.security.*; import javax.crypto.Cipher; import javax.crypto.spec.IvParameterSpec; import javax.crypto.spec.SecretKeySpec; import org.apache.commons.codec.binary.*; public class SO_AES192 { private static final String _AES = "AES"; private static final String _AES_CBC_PKCS5Padding = "AES/CBC/PKCS5Padding"; private static final String KEY_VALUE = "a-24byte-key-string-here"; private static final String SALT_VALUE = "16byte-salt-here"; private static final int ITERATIONS = 1; private static IvParameterSpec ivParameterSpec; public static String encryptHex(String value) throws Exception { Key key = generateKey(); Cipher c = Cipher.getInstance(_AES_CBC_PKCS5Padding); ivParameterSpec = new IvParameterSpec(SALT_VALUE.getBytes()); c.init(Cipher.ENCRYPT_MODE, key, ivParameterSpec); String valueToEncrypt = null; String eValue = value; for (int i = 0; i < ITERATIONS; i++) { // valueToEncrypt = SALT_VALUE + eValue; // pre-pend salt - Length > sample length valueToEncrypt = eValue; // don't pre-pend salt Length = sample length byte[] encValue = c.doFinal(valueToEncrypt.getBytes()); eValue = Hex.encodeHexString(encValue); } return eValue; } public static String decryptHex(String value) throws Exception { Key key = generateKey(); Cipher c = Cipher.getInstance(_AES_CBC_PKCS5Padding); ivParameterSpec = new IvParameterSpec(SALT_VALUE.getBytes()); c.init(Cipher.DECRYPT_MODE, key, ivParameterSpec); String dValue = null; char[] valueToDecrypt = value.toCharArray(); for (int i = 0; i < ITERATIONS; i++) { byte[] decordedValue = Hex.decodeHex(valueToDecrypt); byte[] decValue = c.doFinal(decordedValue); // dValue = new String(decValue).substring(SALT_VALUE.length()); // when salt is pre-pended dValue = new String(decValue); // when salt is not pre-pended valueToDecrypt = dValue.toCharArray(); } return dValue; } private static Key generateKey() throws Exception { // Key key = new SecretKeySpec(KEY_VALUE.getBytes(), _AES); // this was wrong Key key = new SecretKeySpec(new BASE64Decoder().decodeBuffer(keyValueString), _AES); // had to un-Base64 the 'known' 24-byte key. return key; } } I cannot create a matching encrypted value nor decrypt a given encrypted value. My guess is it's something to do with how I'm handling the initial vector/salt. 
I'm not very crypto-savvy but I'm thinking I should be able to take the sample clear-text and produce the same encrypted value in Java as ColdFusion produced. I am able to encrypt/decrypt my own data with my Java code (so I'm consistent) but I cannot match nor decrypt the ColdFusion sample encrypted value. I have access to a local webservice that can test the encrypted output. The given ColdFusion output sample passes/decrypts fine (of course). If I try to decrypt the same sample with my Java code (using the actual key and salt) I get a "Given final block not properly padded" error. I get the same net result when I pass my attempt at encryption (using the actual key and salt) to the test webservice. Any Ideas?

    Read the article

  • Exception - Illegal Block size during decryption(Android)

    - by Vamsi
    I am writing an application which encrypts and decrypts the user notes based on the user set password. i used the following algorithms for encryption/decryption 1. PBEWithSHA256And256BitAES-CBC-BC 2. PBEWithMD5And128BitAES-CBC-OpenSSL e_Cipher = Cipher.getInstance(PBEWithSHA256And256BitAES-CBC-BC); d_Cipher = Cipher.getInstance(PBEWithSHA256And256BitAES-CBC-BC); e_Cipher.init() d_Cipher.init() encryption is working well, but when trying to decrypt it gives Exception - Illegal Block size after encryption i am converting the cipherText to HEX and storing it in a sqlite database. i am retrieving correct values from the sqlite database during decyption but when calling d_Cipher.dofinal() it throws the Exception. I thought i missed to specify the padding and tried to check what are the other available cipher algorithms but i was unable to found. so request you to please give the some knowledge on what are the cipher algorithms and padding that are supported by Android? if the algorithm which i used can be used for padding, how should i specify the padding mechanism? I am pretty new to Encryption so tried a couple of algorithms which are available in BouncyCastle.java but unsuccessful. As requested here is the code public class CryptoHelper { private static final String TAG = "CryptoHelper"; //private static final String PBEWithSHA256And256BitAES = "PBEWithSHA256And256BitAES-CBC-BC"; //private static final String PBEWithSHA256And256BitAES = "PBEWithMD5And128BitAES-CBC-OpenSSL"; private static final String PBEWithSHA256And256BitAES = "PBEWithMD5And128BitAES-CBC-OpenSSLPBEWITHSHA1AND3-KEYTRIPLEDES-CB"; private static final String randomAlgorithm = "SHA1PRNG"; public static final int SALT_LENGTH = 8; public static final int SALT_GEN_ITER_COUNT = 20; private final static String HEX = "0123456789ABCDEF"; private Cipher e_Cipher; private Cipher d_Cipher; private SecretKey secretKey; private byte salt[]; public CryptoHelper(String password) throws InvalidKeyException, NoSuchAlgorithmException, NoSuchPaddingException, InvalidAlgorithmParameterException, InvalidKeySpecException { char[] cPassword = password.toCharArray(); PBEKeySpec pbeKeySpec = new PBEKeySpec(cPassword); PBEParameterSpec pbeParamSpec = new PBEParameterSpec(salt, SALT_GEN_ITER_COUNT); SecretKeyFactory keyFac = SecretKeyFactory.getInstance(PBEWithSHA256And256BitAES); secretKey = keyFac.generateSecret(pbeKeySpec); SecureRandom saltGen = SecureRandom.getInstance(randomAlgorithm); this.salt = new byte[SALT_LENGTH]; saltGen.nextBytes(this.salt); e_Cipher = Cipher.getInstance(PBEWithSHA256And256BitAES); d_Cipher = Cipher.getInstance(PBEWithSHA256And256BitAES); e_Cipher.init(Cipher.ENCRYPT_MODE, secretKey, pbeParamSpec); d_Cipher.init(Cipher.DECRYPT_MODE, secretKey, pbeParamSpec); } public String encrypt(String cleartext) throws IllegalBlockSizeException, BadPaddingException { byte[] encrypted = e_Cipher.doFinal(cleartext.getBytes()); return convertByteArrayToHex(encrypted); } public String decrypt(String cipherString) throws IllegalBlockSizeException { byte[] plainText = decrypt(convertStringtobyte(cipherString)); return(new String(plainText)); } public byte[] decrypt(byte[] ciphertext) throws IllegalBlockSizeException { byte[] retVal = {(byte)0x00}; try { retVal = d_Cipher.doFinal(ciphertext); } catch (BadPaddingException e) { Log.e(TAG, e.toString()); } return retVal; } public String convertByteArrayToHex(byte[] buf) { if (buf == null) return ""; StringBuffer result = new StringBuffer(2*buf.length); for (int i = 0; i < buf.length; i++) { 
appendHex(result, buf[i]); } return result.toString(); } private static void appendHex(StringBuffer sb, byte b) { sb.append(HEX.charAt((b>>4)&0x0f)).append(HEX.charAt(b&0x0f)); } private static byte[] convertStringtobyte(String hexString) { int len = hexString.length()/2; byte[] result = new byte[len]; for (int i = 0; i < len; i++) { result[i] = Integer.valueOf(hexString.substring(2*i, 2*i+2), 16).byteValue(); } return result; } public byte[] getSalt() { return salt; } public SecretKey getSecretKey() { return secretKey; } public static SecretKey createSecretKey(char[] password) throws NoSuchAlgorithmException, InvalidKeySpecException { PBEKeySpec pbeKeySpec = new PBEKeySpec(password); SecretKeyFactory keyFac = SecretKeyFactory.getInstance(PBEWithSHA256And256BitAES); return keyFac.generateSecret(pbeKeySpec); } } I will call mCryptoHelper.decrypt(String str) then this results in Illegal block size exception My Env: Android 1.6 on Eclipse

    Read the article

  • changing output in objective-c app

    - by Zack
    // // RC4.m // Play5 // // Created by svp on 24.05.10. // Copyright 2010 __MyCompanyName__. All rights reserved. // #import "RC4.h" @implementation RC4 @synthesize txtLyrics; @synthesize sbox; @synthesize mykey; - (IBAction) clicked: (id) sender { NSData *asciidata1 = [@"4875" dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES]; NSString *asciistr1 = [[NSString alloc] initWithData:asciidata1 encoding:NSASCIIStringEncoding]; //[txtLyrics setText:@"go"]; NSData *asciidata = [@"sdf883jsdf22" dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES]; NSString *asciistr = [[NSString alloc] initWithData:asciidata encoding:NSASCIIStringEncoding]; //RC4 * x = [RC4 alloc]; [txtLyrics setText:[self decrypt:asciistr1 andKey:asciistr]]; } - (NSMutableArray*) hexToChars: (NSString*) hex { NSMutableArray * arr = [[NSMutableArray alloc] init]; NSRange range; range.length = 2; for (int i = 0; i < [hex length]; i = i + 2) { range.location = 0; NSString * str = [[hex substringWithRange:range] uppercaseString]; unsigned int value; [[NSScanner scannerWithString:str] scanHexInt:&value]; [arr addObject:[[NSNumber alloc] initWithInt:(int)value]]; } return arr; } - (NSString*) charsToStr: (NSMutableArray*) chars { NSString * str = @""; for (int i = 0; i < [chars count]; i++) { str = [NSString stringWithFormat:@"%@%@",[NSString stringWithFormat:@"%c", [chars objectAtIndex:i]],str]; } return str; } //perfect except memory leaks - (NSMutableArray*) strToChars: (NSString*) str { NSData *asciidata = [str dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES]; NSString *asciistr = [[NSString alloc] initWithData:asciidata encoding:NSASCIIStringEncoding]; NSMutableArray * arr = [[NSMutableArray alloc] init]; for (int i = 0; i < [str length]; i++) { [arr addObject:[[NSNumber alloc] initWithInt:(int)[asciistr characterAtIndex:i]]]; } return arr; } - (void) initialize: (NSMutableArray*) pwd { sbox = [[NSMutableArray alloc] init]; mykey = [[NSMutableArray alloc] init]; int a = 0; int b; int c = [pwd count]; int d = 0; while (d < 256) { [mykey addObject:[pwd objectAtIndex:(d % c)]]; [sbox addObject:[[NSNumber alloc] initWithInt:d]]; d++; } d = 0; while (d < 256) { a = (a + [[sbox objectAtIndex:d] intValue] + [[mykey objectAtIndex:d] intValue]) % 256; b = [[sbox objectAtIndex:d] intValue]; [sbox replaceObjectAtIndex:d withObject:[sbox objectAtIndex:a]]; [sbox replaceObjectAtIndex:a withObject:[[NSNumber alloc] initWithInt:b]]; d++; } } - (NSMutableArray*) calculate: (NSMutableArray*) plaintxt andPsw: (NSMutableArray*) psw { [self initialize:psw]; int a = 0; int b = 0; NSMutableArray * c = [[NSMutableArray alloc] init]; int d; int e; int f; int g = 0; while (g < [plaintxt count]) { a = (a + 1) % 256; b = (b + [[sbox objectAtIndex:a] intValue]) % 256; e = [[sbox objectAtIndex:a] intValue]; [sbox replaceObjectAtIndex:a withObject:[sbox objectAtIndex:b]]; [sbox replaceObjectAtIndex:b withObject:[[NSNumber alloc] initWithInt:e]]; int h = ([[sbox objectAtIndex:a]intValue] + [[sbox objectAtIndex:b]intValue]) % 256; d = [[sbox objectAtIndex:h] intValue]; f = [[plaintxt objectAtIndex:g] intValue] ^ d; [c addObject:[[NSNumber alloc] initWithInt:f]]; g++; } return c; } - (NSString*) decrypt: (NSString*) src andKey: (NSString*) key { NSMutableArray * plaintxt = [self hexToChars:src]; NSMutableArray * psw = [self strToChars:key]; NSMutableArray * chars = [self calculate:plaintxt andPsw:psw]; NSData *asciidata = [[self charsToStr:chars] dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES]; 
NSString *asciistr = [[NSString alloc] initWithData:asciidata encoding:NSUTF8StringEncoding]; return asciistr; } @end This is supposed to decrypt a hex string with an ASCII string, using RC4 decryption. I'm converting my Java application to Objective-C. The output keeps changing every time I run it.

    Read the article

  • Convert void* representation of a dword to wstring

    - by graham.reeds
    I am having a dumb Monday, so my apologies for posting such a newbie-like question. I am using CRegKey.QueryValue to return a dword value from the registry. QueryValue writes the value into void* pData and the length into ULONG* pnBytes. Now there should be a way of getting it from pData into a wstring, probably via a stringstream. The closest I came was getting the result as a hex string. I was about to work on converting the hex representation to a dword and then from there to a wstring, when I decided that was just dumb and to ask on here instead of wasting another hour of my life on the problem.

    Read the article

  • Why does Color.IsNamedColor not work when I create a color using Color.FromArgb()?

    - by Jon B
    In my app I allow the user to build a color, and then show him the name or value of the color later on. If the user picks red (full red, not red-ish), I want to show him "red". If he picks some strange color, then the hex value would be just fine. Here's sample code that demonstrates the problem: static string GetName(int r, int g, int b) { Color c = Color.FromArgb(r, g, b); // Note that specifying a = 255 doesn't make a difference if (c.IsNamedColor) { return c.Name; } else { // return hex value } } Even with very obvious colors like red IsNamedColor never returns true. Looking at the ARGB values for my color and Color.Red, I see no difference. However, calling Color.Red.GetHashCode() returns a different hash code than Color.FromArgb(255, 0, 0).GetHashCode(). How can I create a color using user specified RGB values and have the Name property come out right?

    Read the article

  • How to parse a binary file using Javascript and Ajax

    - by Alex Jeffery
    I am trying to use jQuery to pull a binary file from a webserver, parse it in JavaScript and display the contents. I can get the file ok and parse some of the file correctly. However, I am running into trouble with one byte not coming out as expected. I am parsing the file a byte at a time; it is correct until I get to the hex value B6, where I am getting FD instead of B6. Function to read a byte: data.charCodeAt(0) & 0xff; File as hex: 02 00 00 00 55 4C 04 00 B6 00 00 00 The format I want to parse the file out into: short: 0002 short: 0000 string: UL short: 0004 long: 0000B6 Any hints as to why the last value is incorrect?
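
    One hedged observation: FD is suspicious because it is the low byte of U+FFFD, the Unicode replacement character, so the byte B6 is likely being mangled by text decoding on the way in; requesting the file with a binary-safe charset (e.g. XMLHttpRequest's overrideMimeType('text/plain; charset=x-user-defined')) is the classic workaround. For reference, here is the little-endian layout above parsed in Java rather than JavaScript, just to pin down the expected values:

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        public class ParseHeader {
            public static void main(String[] args) {
                byte[] data = { 0x02, 0x00, 0x00, 0x00, 0x55, 0x4C,
                                0x04, 0x00, (byte) 0xB6, 0x00, 0x00, 0x00 };
                ByteBuffer buf = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN);
                short first  = buf.getShort();                           // 0x0002
                short second = buf.getShort();                           // 0x0000
                String tag   = "" + (char) buf.get() + (char) buf.get(); // "UL"
                short third  = buf.getShort();                           // 0x0004
                int value    = buf.getInt();                             // 0x000000B6 = 182
                System.out.printf("%d %d %s %d %d%n", first, second, tag, third, value);
            }
        }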

    Read the article

  • Evaluation of jQuery function variable value during definition of that function

    - by thesnail
    I have a large number of rows in a table, within which I wish to attach a unique colorpicker (jQuery plugin) to each cell in a particular column, identified by unique ids. Given this, I want to automate the generation of instances of the colorpicker as follows: var myrows={"a","b","c",.....} var mycolours={"ffffff","fcdfcd","123123"...} for (var i=0;i<myrows.length;i++) { $("#"+myrows[i]+"colour").ColorPicker({flat: false, color: mycolours[i], onChange: function (hsb, hex, rgb) { $("#"+myrows[i]+"currentcolour").css('backgroundColor', '#' + hex); } }); Now this doesn't work because the evaluation of the $("#"+myrows[i]+"currentcolour") component occurs at the time the function is called, not when it is defined (which is what I need). Given that this plugin's javascript appends its code to the <body> level and not to the underlying DOM component that I am accessing above, so it can't derive what id this pertains to, how can I evaluate the variable during function declaration/definition? Thanks for any help/insight anyone can give. Brian.

    Read the article

  • How is a h264 idea bitstream organized? / header start codes

    - by Wolax
    I was trying to learn a bit about h264 by looking at the bitstream of a video file with a hex editor. I found here the start codes for video object planes (0x000001b6) and for i-frames (0x000001b600). But I can't find many of those bytes in video files. Most of the time those start codes appear at the beginning of a file with only a few bytes in between. I expected them to show up very regularly, at equal distances all over the file!? Is it even ok to look at a file with a hex editor this way? What other start codes exist and how is a h264 file organised?
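
    One likely source of the confusion: 0x000001b6 is the VOP start code of MPEG-4 Part 2, not H.264. A raw H.264 (Annex B) bitstream is a sequence of NAL units, each introduced by a 0x000001 or 0x00000001 start code, with the low five bits of the following byte giving the NAL unit type (7 = SPS, 8 = PPS, 5 = IDR slice, 1 = non-IDR slice); inside MP4/MKV containers the start codes are replaced by length prefixes, which is one reason scanning a file for them can come up nearly empty. A minimal scanner sketch ("video.h264" is a placeholder path for a raw Annex B stream):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class NalScan {
            public static void main(String[] args) throws IOException {
                byte[] b = Files.readAllBytes(Paths.get("video.h264"));
                for (int i = 0; i + 3 < b.length; i++) {
                    // 3-byte start code (a 4-byte 00 00 00 01 code also matches at its last 3 bytes)
                    if (b[i] == 0 && b[i + 1] == 0 && b[i + 2] == 1) {
                        int type = b[i + 3] & 0x1F; // low 5 bits = nal_unit_type
                        System.out.printf("offset %d: NAL unit type %d%n", i, type);
                    }
                }
            }
        }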

    Read the article
