Search Results

Search found 796 results on 32 pages for 'hex'.

Page 27 of 32

  • BN_hex2bn magically segfaults in OpenSSL

    - by xunil154
    Greetings, this is my first post on Stack Overflow, and I'm sorry if it's a bit long. I'm trying to build a handshake protocol for my own project and am having issues with the server converting the client's RSA public key to a BIGNUM. It works in my client code, but the server segfaults when attempting to convert the hex value of the client's public RSA key to a BIGNUM. I have already checked that there is no garbage before or after the RSA data, and have looked online, but I'm stuck. Header segment: typedef struct KEYS { RSA *serv; char* serv_pub; int pub_size; RSA *clnt; } KEYS; KEYS keys; Initializing function: // Generates and validates the server's key /* code for generating server RSA left out, it's working */ // Set client exponent keys.clnt = 0; keys.clnt = RSA_new(); BN_dec2bn(&keys.clnt->e, RSA_E_S); // RSA_E_S contains the public exponent Problem code (in Network::server_handshake): // *Received an encrypted message from the network and decrypted it into 'buffer' (1024 bytes long)* cout << "Assigning clients RSA" << endl; // I have verified that 'buffer' contains the proper key if (BN_hex2bn(&keys.clnt->n, buffer) < 0) { Error("ERROR reading server RSA"); } cout << "clients RSA has been assigned" << endl; The program segfaults at BN_hex2bn(&keys.clnt->n, buffer) with this error (valgrind output): Invalid read of size 8 at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8) by 0x40F23E: Network::server_handshake() (Network.cpp:177) by 0x40EF42: Network::startNet() (Network.cpp:126) by 0x403C38: main (server.cpp:51) Address 0x20 is not stack'd, malloc'd or (recently) free'd Process terminating with default action of signal 11 (SIGSEGV) Access not within mapped region at address 0x20 at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8) And I don't know why: I'm using the exact same code in the client program, and it works just fine. Any input is greatly appreciated!

    Read the article

  • Character encoding issues when generating MD5 hash cross-platform

    - by rogueprocess
    This is a general question about character encoding when using MD5 libraries in various languages. My concern is: suppose I generate an MD5 hash using a native Python string object, like this: message = "hello world" m = md5() m.update(message) Then I take a hex version of that MD5 hash using: m.hexdigest() and send the message & MD5 hash via a network, let's say a JMS message or an HTTP request. Now I get this message in a Java program in the form of a native Java string, along with the checksum. Then I generate an MD5 hash using Java, like this (using the Commons Codec library): String md5 = org.apache.commons.codec.digest.DigestUtils.md5Hex(s) My feeling is that this is wrong because I have not specified a character encoding at either end. So the original hash will be based on the bytes of the Python version of the string; the Java one will be based on the bytes of the Java version of the string, and these two byte sequences will often not be the same - is that right? So really I need to specify "UTF-8" or whatever at both ends, right? (I am actually getting an intermittent error in my code where the MD5 checksum fails, and I suspect this is the reason - but because it's intermittent, it's difficult to say if changing this fixes it or not.) Thank you!

    Read the article
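
    For the question above, a minimal Python sketch (hashlib from the standard library; the sample string is an arbitrary example) of why pinning the character encoding matters: hashing the same text under two different encodings yields two different digests, which is exactly the intermittent-mismatch risk described.

        import hashlib

        message = "héllo wörld"   # arbitrary example containing non-ASCII characters

        digest_utf8 = hashlib.md5(message.encode("utf-8")).hexdigest()
        digest_latin1 = hashlib.md5(message.encode("latin-1")).hexdigest()

        # The digests differ, so both ends must agree on the byte encoding.
        print(digest_utf8)
        print(digest_latin1)
        print(digest_utf8 == digest_latin1)   # False

    On the Java side, the matching move would be to hash s.getBytes("UTF-8") (DigestUtils.md5Hex also accepts a byte array) rather than relying on the platform default charset.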

  • Text editor capable of viewing invisibles?

    - by Timo
    A recent problem* left me wondering whether there is a text editor out there that lets you see every single character of the file, even the invisible ones? Specifically, I'm not looking for hex editing capabilities; I am interested in a text editor that'll show me all of the invisible characters (not just the common whitespace / line break characters). The BOM marker is just one example; others are e.g. mathematical invisibles or possibly unsupported characters. I'm not looking for a text editor that simply supports a large variety of text encodings / translations between encodings. All text editors I've come across treat the invisible characters correctly, i.e. leave them invisible (or they simply get removed in the translation, as in the case of the BOM marker). I'm asking this mostly out of academic interest, so I'm not particular about any specific OS. I can easily test Linux and OS X solutions, but if you recommend a Windows editor, I would appreciate it if you include a description of how the editor handles invisibles other than whitespace / line breaks. *The incident that led me to this question: I wrote a Perl script using TextWrangler and managed to change the encoding to UTF-8 BOM, which inserts the BOM marker at the start of the file. Perl (or rather the operating system) promptly misses the #! and mayhem ensues. It then took me the better part of an afternoon to figure this out, since most text editors do not show the BOM marker even with various "show invisibles" options turned on. Now I've learned my lesson and will use less immediately :-).

    Read the article
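
    Related to the BOM incident described above, a tiny Python sketch (standard library only; the file name is a placeholder) that inspects the first raw bytes of a script to see whether an invisible UTF-8 BOM sits in front of the #! line.

        # Check whether a file starts with a UTF-8 BOM (EF BB BF), which is invisible
        # in most editors but breaks shebang ("#!") detection as described above.
        BOM = b"\xef\xbb\xbf"

        def has_utf8_bom(path):
            with open(path, "rb") as fh:
                return fh.read(3) == BOM

        print(has_utf8_bom("myscript.pl"))   # placeholder file name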

  • Upload image from J2ME client to a Servlet

    - by Akash
    I want to send an image from a J2ME client to a Servlet. I am able to get a byte array of the image and send it using HTTP POST. conn = (HttpConnection) Connector.open(url, Connector.READ_WRITE, true); conn.setRequestMethod(HttpConnection.POST); conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded"); os.write(bytes, 0, bytes.length); // bytes = byte array of image This is the Servlet code: String line; BufferedReader r1 = new BufferedReader(new InputStreamReader(in)); while ((line = r1.readLine()) != null) { System.out.println("line=" + line); buf.append(line); } String s = buf.toString(); byte[] img_byte = s.getBytes(); The problem I found is that when I send the bytes from the J2ME client, some bytes are lost. Their values are 0A and 0D hex: exactly the carriage return and line feed characters. Thus, either the POST method or readLine() is not able to accept the 0A and 0D values. Does anyone have any idea how to do this, or how to use another method?

    Read the article
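
    A small Python illustration of the loss described above (standard library only): line-oriented reading consumes the 0A/0D terminator bytes, so binary data that happens to contain them comes out shorter than the original.

        import io

        # Fake "image" bytes that happen to contain CR (0x0D) and LF (0x0A).
        original = bytes([0xFF, 0xD8, 0x0D, 0x0A, 0x41, 0x0A, 0x42, 0xFF, 0xD9])

        # Line-oriented reading, analogous to BufferedReader.readLine() in the servlet:
        # the 0x0A / 0x0D terminators are consumed and never appended to the buffer.
        reassembled = b"".join(line.rstrip(b"\r\n") for line in io.BytesIO(original))

        print(len(original), len(reassembled))   # 9 6 -- the 0x0D and 0x0A bytes are gone
        print(original == reassembled)           # False

    The usual remedy on the servlet side is to read the raw InputStream into a byte buffer with read() instead of wrapping it in a line-based reader.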

  • Percent-Encoded Percent in URI

    - by Lukas
    In our application, it is possible for a user to upload files then download them later. We don't restrict them from having any special characters in the file name. The problem comes in when we create the link for the user to download the file. I use the Java URL encoder to encode the file name that gets put into the href of the link, but I'm still having problems with percent (%) signs. For example, if the user uploads a file named fi%le.jpg, the href that gets generated is fi%25le.jpg, and everything is fine. The problem is when the percent sign is right before the period (i.e., file%.jpg, which gets converted to file%25.jpg). When the user clicks on the link, they get a 404 (Not Found) error. The strange thing is that it is not a problem if the two characters following the percent sign are hex characters.... Weird, eh? Any help is appreciated. I am using Tomcat/Struts. Could the built-in URL decoder have anything to do with this problem?

    Read the article
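
    A short Python sketch of the round trip discussed above, using urllib.parse from the standard library; the file names are the question's own examples. It also shows how an extra decode pass silently alters an already-decoded name, which is the kind of double-decoding behaviour the 404 hints at.

        from urllib.parse import quote, unquote

        name = "file%.jpg"                   # example from the question above
        encoded = quote(name)                # 'file%25.jpg'
        print(encoded)

        once = unquote(encoded)              # 'file%.jpg'  -- correct after one decode
        twice = unquote(once)                # 'file%.jpg'  -- '%.j' is not a valid escape, left alone here
        print(once, twice)

        hexy = unquote(unquote(quote("fi%25le.jpg")))
        print(hexy)                          # 'fi%le.jpg' -- a second decode silently changes the name

    Whether a particular container rejects or quietly passes a malformed %-sequence after a second decode differs by implementation, so this is only an illustration of the failure mode, not a diagnosis of Tomcat.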

  • C#: how to read parts of a file? (DICOM)

    - by Xaisoft
    I would like to read a DICOM file in C#. I don't want to do anything fancy; for now I would just like to know how to read in the elements, but first I would actually like to know how to read the header to see if it is a valid DICOM file. It consists of binary data elements. The first 128 bytes are unused (set to zero), followed by the string 'DICM'. This is followed by header information, which is organized into groups. A sample DICOM header: first 128 bytes unused, followed by the characters 'D','I','C','M', followed by extra header information such as: 0002,0000, File Meta Elements Groups Len: 132 0002,0001, File Meta Info Version: 256 0002,0010, Transfer Syntax UID: 1.2.840.10008.1.2.1. 0008,0000, Identifying Group Length: 152 0008,0060, Modality: MR 0008,0070, Manufacturer: MRIcro In the above example, the header is organized into groups. The group 0002 hex is the file meta information group, which contains 3 elements: one defines the group length, one stores the file version and the third stores the transfer syntax. Questions: How do I read the header and verify that it is a DICOM file by checking for the 'D','I','C','M' characters after the 128-byte preamble? How do I continue to parse the file, reading the other parts of the data?

    Read the article
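
    A minimal Python sketch of the validity check described above (standard library only; the file path is a placeholder): skip the 128-byte preamble, then test for the ASCII bytes 'DICM'.

        def looks_like_dicom(path):
            # Per the description above: 128 unused preamble bytes, then the magic 'DICM'.
            with open(path, "rb") as fh:
                preamble = fh.read(128)
                magic = fh.read(4)
            return len(preamble) == 128 and magic == b"DICM"

        print(looks_like_dicom("sample.dcm"))   # placeholder path

    Parsing the elements that follow means reading (group, element) tag pairs plus length fields, whose exact layout depends on the transfer syntax announced in the 0002 group.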

  • Problem with XML encoding of database contents with Latin characters

    - by user89691
    I have an ASP Access database that contains strings in various European languages. The database was populated earlier by agents in the respective countries. It contains entries with accented and other special characters, as you would expect. If I open the database with MS Access these characters show up fine. For example, the German equivalent of "Open" shows as "Öffnen" (hopefully you can see an "O" with 2 dots above it!). I have ASP code that reads the database and returns records in XML. The text is passed to XMLEncode to construct the XML, but that only seems to deal with the 5 specials like "<", "&", etc. If I dump the XML, the accented characters are unchanged. <English>Open</English> <German>Öffnen</German> If I look at the raw packets with Wireshark I see that the "Ö" byte is hex D6, which appears to be its Unicode code point and ISO 8859-1 value. The problem starts when I try to parse the XML in client-side JS. I get "An invalid character was found in text content" from IE. FF and Chrome happily accept the XML without hiccup, but the browser shows the "Ö" character as a diamond with a question mark inside. http://www.validome.org/xml/validate/ reports "encoding error." http://www.w3schools.com/dom/dom_validate.asp thinks it is fine. The XML is UTF-8 encoded. What do I need to do to have IE accept my XML without complaint? What do I need to do to have the browsers display the stuff correctly?

    Read the article
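
    A small Python demonstration of the mismatch described above: the single byte D6 is 'Ö' in ISO 8859-1, but it is not a valid byte sequence in a document that claims to be UTF-8, which is consistent with a strict parser rejecting it.

        o_umlaut = "Öffnen"

        latin1_bytes = o_umlaut.encode("iso-8859-1")   # b'\xd6ffnen' -- what the wire capture shows
        utf8_bytes = o_umlaut.encode("utf-8")          # b'\xc3\x96ffnen' -- what a UTF-8 declaration promises

        print(latin1_bytes.hex(), utf8_bytes.hex())

        try:
            latin1_bytes.decode("utf-8")               # what a strict UTF-8 parser attempts
        except UnicodeDecodeError as exc:
            print("invalid character in UTF-8 stream:", exc)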

  • Send an image from J2ME to a servlet

    - by Akash
    Hi, I want to send an image from J2ME to a servlet. I am able to convert the image into a byte array and send it by HTTP POST. I have coded it as follows. From the mobile client: conn = (HttpConnection)Connector.open(url,Connector.READ_WRITE,true); conn.setRequestMethod(HttpConnection.POST); conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded"); os.write(bytes, 0, bytes.length); // bytes = byte array of image At the servlet: String line; BufferedReader r1 = new BufferedReader(new InputStreamReader(in)); while ((line = r1.readLine()) != null) { System.out.println("line=" + line); buf.append(line); } String s = buf.toString(); byte[] img_byte = s.getBytes(); Now the problem I found is that when I send the bytes from the mobile app, some bytes are lost, namely those whose values are 0A and 0D hex: exactly CR (carriage return) and LF (line feed). It means the POST method or readLine() is not able to accept the 0A and 0D values, and so the lost bytes are the 0A and 0D occurrences in the image's byte array. Does anyone have any idea how to do this, or how to use another method? Thanks, Akash

    Read the article

  • How do I handle a POST request in Perl and FastCGI?

    - by Peterim
    Unfortunately, I'm not familiar with Perl, so I'm asking here. I'm actually using FCGI with Perl. I need to: 1. accept a POST request; 2. send it via POST to another URL; 3. get the results; 4. return the results to the first POST request (4 steps). To accept a POST request (step 1) I use the following code (found somewhere on the Internet): $ENV{'REQUEST_METHOD'} =~ tr/a-z/A-Z/; if ($ENV{'REQUEST_METHOD'} eq "POST") { read(STDIN, $buffer, $ENV{'CONTENT_LENGTH'}); } else { print ("some error"); } @pairs = split(/&/, $buffer); foreach $pair (@pairs) { ($name, $value) = split(/=/, $pair); $value =~ tr/+/ /; $value =~ s/%(..)/pack("C", hex($1))/eg; $FORM{$name} = $value; } The content of $name (it's a string) is the result of the first step. Now I need to send $name via a POST request to some_url (step 2), which returns another result (step 3), which I have to return as the result to the very first POST request (step 4). Any help with this would be greatly appreciated. Thank you.

    Read the article
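
    For the four steps above, a language-agnostic sketch in Python (urllib from the standard library; the target URL is a placeholder), since the flow itself is the same in Perl: decode the form body, forward the value with a second POST, and hand the response back.

        from urllib.parse import parse_qs, urlencode
        from urllib.request import urlopen

        def handle(raw_body):
            # Step 1: decode the url-encoded POST body (the pack/hex loop above does this by hand).
            form = {k: v[0] for k, v in parse_qs(raw_body).items()}
            name = form.get("name", "")

            # Steps 2 and 3: forward the value via POST to another URL and read the result.
            payload = urlencode({"name": name}).encode("ascii")
            with urlopen("http://example.com/other", data=payload) as resp:   # placeholder URL
                result = resp.read()

            # Step 4: return the result to the original caller.
            return result

        # handle("name=hello+world&x=1")   # example invocation

    In Perl itself, the forwarding step is usually done with LWP::UserAgent's post method.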

  • Video-codec rater by image comparison algorithm?

    - by Andreas Hornig
    Perhaps someone knows if this is possible. Comparing image quality is almost impossible to describe without subjective influences. When someone rates an image's quality as good, there is at least one person who doesn't think so; human preferences are always different. So, I would like to know if there is a way to "rate" the image quality with an algorithm that compares the original image to the produced one on the following issues: colour change (difference pixel by pixel), blur rate, artifacts and macroblocking. The first one would be the easiest, because you could check just the difference in colours and give three +/- values for each hex value. For both of the last ones I don't know if this is possible, but the blocking could perhaps be detected by edge-finding. And the king's quest would be to do that for more than just one image, because video is made of several frames. Perhaps you expert programmers could tell me if such an automated algorithm can be written, to bring some objective measurement device into rating image quality. This could perhaps calm down some of the "h.264 is better than x264 and better than vp8" people :) Andreas (first posted here: http://www.hdtvtotal.com/index.php?name=PNphpBB2&file=viewtopic&p=9705)

    Read the article
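
    A rough Python sketch of the first, "easiest" criterion above: a per-pixel colour difference between an original frame and the encoded one, reduced to a single mean-error number. Pixels are plain RGB tuples here; real frames would come from a decoder or an imaging library.

        def mean_channel_error(original, produced):
            """Average absolute per-channel difference between two equally sized RGB frames."""
            assert len(original) == len(produced)
            total = 0
            for (r1, g1, b1), (r2, g2, b2) in zip(original, produced):
                total += abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)
            return total / (3 * len(original))

        # Toy 2x2 "frames": the encoded frame has slightly shifted colours.
        frame_a = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (128, 128, 128)]
        frame_b = [(250, 5, 0), (0, 250, 0), (3, 0, 255), (128, 120, 128)]

        print(mean_channel_error(frame_a, frame_b))   # small number = small colour change

    Established objective metrics such as PSNR and SSIM are refinements of this same idea, extended over whole frame sequences.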

  • Write a compiler for a language that looks ahead and multiple files?

    - by acidzombie24
    In my language I can use a class variable in my method when the definition appears below the method. It can also call methods defined below my method, and so on. There are no 'headers'. Take this C# example. class A { public void callMethods() { print(); B b; b.notYetSeen(); } public void print() { Console.Write("v = {0}", v); } int v=9; } class B { public void notYetSeen() { Console.Write("notYetSeen()\n"); } } How should I compile that? What I was thinking is: pass 1: convert everything to an AST; pass 2: go through all classes and build a list of defined classes/variables/etc.; pass 3: go through the code, check whether there are any errors such as undefined variables or wrong usage, and create my output. But it seems like for this to work I have to do passes 1 and 2 for ALL files before doing pass 3. It also feels like a lot of work to do before I find a syntax error (other than the obvious ones that can be caught at parse time, such as forgetting to close a brace or writing 0xLETTERS instead of a hex value). My gut says there is some other way. Note: I am using bison/flex to generate my compiler.

    Read the article
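
    A toy Python sketch of the pass-2/pass-3 split described above: first collect every declared name from all files, then check references against that table. The tuple shapes stand in for an AST and are made up purely for illustration.

        # Each "file" is a list of (kind, name, references) tuples standing in for an AST.
        file_a = [("class", "A", ["print", "B", "notYetSeen", "v"])]
        file_b = [("class", "B", ["notYetSeen"]),
                  ("method", "notYetSeen", []),
                  ("method", "print", []),
                  ("field", "v", [])]

        def collect_symbols(files):
            # Pass 2: walk every AST and record declared names before checking anything.
            return {name for ast in files for (_kind, name, _refs) in ast}

        def check_references(files, symbols):
            # Pass 3: only now can forward references ("definition appears below the use") be resolved.
            errors = []
            for ast in files:
                for _kind, name, refs in ast:
                    errors += [f"{name}: undefined reference to {r}" for r in refs if r not in symbols]
            return errors

        symbols = collect_symbols([file_a, file_b])
        print(check_references([file_a, file_b], symbols))   # [] -- everything resolves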

  • [C#] Improving method to read signed 8-bit integers from hexadecimal.

    - by JYelton
    Scenario: I have a string of hexadecimal characters which encode 8-bit signed integers. Each two characters represent a byte which employ the leftmost (MSB) bit as the sign (rather than two's complement). I am converting these to signed ints within a loop and wondered if there's a better way to do it. There are too many conversions and I am sure there's a more efficient method that I am missing. Current Code: string strData = "FFC000407F"; // example input data, encodes: -127, -64, 0, 64, 127 int v; for (int x = 0; x < strData.Length/2; x++) { v = HexToInt(strData.Substring(x * 2, 2)); Console.WriteLine(v); // do stuff with v } private int HexToInt(string _hexData) { string strBinary = Convert.ToString(Convert.ToInt32(_hexData, 16), 2).PadLeft(_hexData.Length * 4, '0'); int i = Convert.ToInt32(strBinary.Substring(1, 7), 2); i = (strBinary.Substring(0, 1) == "0" ? i : -i); return i; } Question: Is there a more streamlined and direct approach to reading two hex characters and converting them to an int when they represent a signed int (-127 to 127) using the leftmost bit as the sign?

    Read the article
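
    For comparison, a compact Python version of the same sign-magnitude decoding (since this is not two's complement, a plain signed-byte cast would give the wrong answer): mask off the sign bit and negate the 7-bit magnitude.

        def hex_to_signed_magnitude(hex_pair):
            # One byte, MSB = sign, low 7 bits = magnitude (not two's complement).
            value = int(hex_pair, 16)
            magnitude = value & 0x7F
            return -magnitude if value & 0x80 else magnitude

        data = "FFC000407F"   # sample input from the question
        print([hex_to_signed_magnitude(data[i:i + 2]) for i in range(0, len(data), 2)])
        # -> [-127, -64, 0, 64, 127]

    The same two bit operations translate directly to C# on the result of Convert.ToInt32(hexPair, 16), with no intermediate binary string.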

  • Game login authentication and security.

    - by Charles
    First off, I will say I am completely new to security in coding. I am currently helping a friend develop a small game (in Python) which will have a login server. I don't have much knowledge regarding security, but I know many games do have issues with it: everything from 3rd party applications (bots) to WPE packet manipulation. Considering how small this game will be and the limited user base, I doubt we will have serious issues, but I would like to try our best to limit problems. I am not sure where to start, what methods I should use, or what's worth it. For example, sending data to the server such as the login name and password: I was told this information should be encrypted when sending, so that in case someone was viewing it (by whatever means), they couldn't get into the account. However, if someone is able to capture the encrypted string, wouldn't this string always work, since it's decrypted server side? In other words, someone could just capture the packet, reuse it, and still gain access to the account? The main goal I am really looking for is to make sure the players are logging into the game with the client we provide, and to make sure it's 'secure' (broad, I know). I have looked around at different methods such as public and private key encryption, which I am sure any hex editor could eventually find. There are many other methods that seem way over my head at the moment and leave the impression of overkill. I realize nothing is 100% secure. I am just looking for any input or reading material (links) to accomplish the main goal stated above. Would appreciate any help, thanks.

    Read the article

  • Draw an Inset NSShadow and Inset Stroke

    - by Alexsander Akers
    I have an NSBezierPath and I want to draw an inset shadow (similar to Photoshop) inside the path. Is there any way to do this? Also, I know you can -stroke paths, but can you stroke inside a path (similar to Stroke Inside in Photoshop)? Update: This is the code I'm using. The first part makes a white shadow downwards. The second part draws the gray gradient. The third part draws the black inset shadow. Assume path is an NSBezierPath instance and that clr(...) returns an NSColor from a hex string. NSShadow * shadow = [NSShadow new]; [shadow setShadowColor: [NSColor colorWithDeviceWhite: 1.0f alpha: 0.5f]]; [shadow setShadowBlurRadius: 0.0f]; [shadow setShadowOffset: NSMakeSize(0, 1)]; [shadow set]; [shadow release]; NSGradient * gradient = [[NSGradient alloc] initWithColorsAndLocations: clr(@"#262729"), 0.0f, clr(@"#37383a"), 0.43f, clr(@"#37383a"), 1.0f, nil]; [gradient drawInBezierPath: path angle: 90.0f]; [gradient release]; [NSGraphicsContext saveGraphicsState]; [path setClip]; shadow = [NSShadow new]; [shadow setShadowColor: [NSColor redColor]]; [shadow setShadowBlurRadius: 0.0f]; [shadow setShadowOffset: NSMakeSize(0, -1)]; [shadow set]; [shadow release]; [path stroke]; [NSGraphicsContext restoreGraphicsState]; Here you can see a gradient fill, a white drop shadow downwards, and a black inner shadow downwards.

    Read the article

  • has_many :through formtastic multi-select field

    - by Tristan O'Neil
    I'm trying to set up a many-to-many relationship using the has_many :through method and then use a multi-select field to set up the relationships. I'm following this tutorial: http://asciicasts.com/episodes/185-formtastic-part-2 However, for some reason the form displays a strange hex number which changes on each page refresh, and I'm not exactly sure what I'm doing wrong. Below is my model/view code. company.rb has_many :classifications has_many :sics, :through => :classifications sic.rb has_many :classifications has_many :companies, :through => :classifications classification.rb belongs_to :company belongs_to :sic _form.html.erb <% semantic_form_for @company do |f| %> <% f.inputs do %> <%= f.input :company %> <%= f.input :sics %> <% end %> <%= f.buttons %> <% end %> Also, the form looks like it's showing the correct number of entries for the field, but it is clearly not showing the correct name for the relationship.

    Read the article

  • Take most significant 8 bytes of the MD5 hash of a string as a long (in Ruby)

    - by Nate Murray
    Hey friends, I'm trying to implement a Java "hash" function in Ruby. Here's the Java side: import java.nio.charset.Charset; import java.security.MessageDigest; /** * @return most significant 8 bytes of the MD5 hash of the string, as a long */ protected long hash(String value) { byte[] md5hash; md5hash = md5Digest.digest(value.getBytes(Charset.forName("UTF8"))); long hash = 0L; for (int i = 0; i < 8; i++) { hash = hash << 8 | md5hash[i] & 0x00000000000000FFL; } return hash; } So far, my best guess in Ruby is: # WRONG - doesn't work properly. #!/usr/bin/env ruby -wKU require 'digest/md5' require 'pp' md5hash = Digest::MD5.hexdigest("0").unpack("U*") pp md5hash hash = 0 0.upto(7) do |i| hash = hash << 8 | md5hash[i] & 0x00000000000000FF end pp hash The problem is, this Ruby code doesn't match the Java output. For reference, the above Java code, given these strings, returns the corresponding longs: "00038c53790ecedfeb2f83102e9115a522475d73" => -2059313900129568948 "0" => -3473083983811222033 "001211e8befc8ac22dd265ecaa77f8c227d0007f" => 3234260774580957018 Thoughts: I'm having problems getting the UTF-8 bytes from the Ruby string; in Ruby I'm using hexdigest and I suspect I should be using just digest instead; the Java code is taking the MD5 of the UTF-8 bytes, whereas my Ruby code is taking the bytes of the MD5 (as hex). Any suggestions on how to get the exact same output in Ruby?

    Read the article
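
    A Python cross-check of the "thoughts" above (the same fix applies in Ruby): use the raw digest bytes rather than the hex digest, then fold the first 8 bytes into a signed 64-bit value just as the Java shift loop does.

        import hashlib
        import struct

        def top8_md5_as_signed_long(value):
            md5_bytes = hashlib.md5(value.encode("utf-8")).digest()   # raw bytes, not hexdigest
            # Big-endian signed 64-bit integer from the first 8 bytes, matching the Java shift loop.
            return struct.unpack(">q", md5_bytes[:8])[0]

        print(top8_md5_as_signed_long("0"))   # -3473083983811222033, matching the Java output quoted above

    In Ruby the analogous change is Digest::MD5.digest (raw bytes) instead of hexdigest, with the first 8 bytes unpacked as a signed big-endian 64-bit integer.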

  • How do you find a function's virtual call address in assembly?

    - by Daniel
    I've googled around, but I'm not sure I am asking the right question, and I couldn't find much regardless; perhaps a link would be helpful. I made a C++ program that shows a message box, then I opened it up with OllyDbg and went to the part where it calls MessageBoxW. The call address of MessageBoxW changes each time I run the app, as Windows is updating my imports table to have the correct address of MessageBoxW. So my question is: how do I find the virtual address of MessageBoxW in my imports table, and how can I use this in OllyDbg? Basically I'm trying to make a code cave in assembly to call MessageBoxW again. I got fairly close once by searching the executable with a hex editor: I found the position of the call, and I think I found the virtual address. But when I called that virtual address in Olly and saved it to the executable, the next time I opened it the call was replaced with a bunch of DB xyz (which looked like the virtual address), but why did the call get removed? Sorry if my terminology is off; I'm new to this, so I'm not quite sure what to call things.

    Read the article

  • Vista 64-bit development tools

    - by Workshop Alex
    Well, okay. There's Visual Studio 2008 and Embarcadero Delphi/Studio, which are both able to create 64-bit .NET applications for Vista. And of course a lot of 32-bit applications will run on 64-bit Vista. If not, it's always possible to install VMware to create a virtual 32-bit Windows XP system to run 32-bit applications. So, plenty of options. But what I would like to see is a list of true 64-bit applications for Windows Vista and better. So if you know any useful 64-bit product, please share! (Especially compilers that generate native 64-bit code.) Tools would basically be anything that would make development a bit easier: debugging tools, image processing tools to create icons and bitmaps, hex editors to check the contents of binary files, XML editors to change XML files, etc. The tools from SysInternals, for example, seem to provide 64-bit versions or even support 64-bit systems natively. But how about all those other editors, viewers, browsers and other tools that we developers like to use? A 64-bit version of Norton Commander/Midnight Commander or some other file manager would be nice too. And as for compilers, how about COBOL/Fortran/Ada/Smalltalk/Lisp/whatever compilers/languages for Vista? I would just like to see a complete list of anything useful for 64-bit development.

    Read the article

  • PostgreSQL: BYTEA vs OID+Large Object?

    - by mlaverd
    I started an application with Hibernate 3.2 and PostgreSQL 8.4. I have some byte[] fields that were mapped as @Basic (= PG bytea) and others that got mapped as @Lob (= PG Large Object). Why the inconsistency? Because I was a Hibernate noob. Now, those fields are at most 4 KB (the average is 2-3 KB). The PostgreSQL documentation mentioned that LOs are good when the fields are big, but I didn't see what 'big' meant. I have upgraded to PostgreSQL 9.0 with Hibernate 3.6, and I had to change the annotation to @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType"). This bug has brought forward a potential compatibility issue, and I eventually found out that Large Objects are a pain to deal with compared to a normal field. So I am thinking of changing all of it to bytea. But I am concerned that bytea fields are encoded in hex, so there is some overhead in encoding and decoding, and this would hurt performance. Are there good benchmarks on the performance of both of these? Has anybody made the switch and seen a difference?

    Read the article
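
    A quick Python illustration of the overhead being asked about: the hex text form of a value is roughly twice its raw size, which bounds what the encode/decode step costs for the 2-4 KB fields described above (PostgreSQL 9.0's default text output for bytea is this hex form); whether that matters in practice still needs a real benchmark.

        import os

        raw = os.urandom(3 * 1024)          # a typical 3 KB field from the question
        hex_text = raw.hex()                # the same bytes in hex text representation

        print(len(raw), len(hex_text))      # 3072 6144 -- the text form is twice the raw size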

  • UITabBarControllerDelegate compare value of viewController

    - by T9
    I have a tab bar with 4 tabs on it, and I want to perform some action when a specific tab is selected, so I have uncommented the UITabBarControllerDelegate method in xxAppDelegate.m. I also wanted to see the value that was being sent logged in the console, in order to test my "if" statement. However, this is where I got stumped. // Optional UITabBarControllerDelegate method - (void)tabBarController:(UITabBarController *)tabBarController didSelectViewController:(UIViewController *)viewController { NSLog(@"%@", viewController); } The console dutifully logged whichever controller had been selected, but in this particular format: <MyViewController: 0x3b12950> Now, I wasn't expecting the brackets, the colon or the hex. So my question is: how do I format my if statement? This is what I thought would work, but I get the error mentioned further down. // Optional UITabBarControllerDelegate method - (void)tabBarController:(UITabBarController *)tabBarController didSelectViewController:(UIViewController *)viewController { NSLog(@"%@", viewController); if (viewController == MyViewController) { //do something nice here … }; } ... The error is "Expected expression before 'MyViewController'". Does anyone know how I should be doing this?

    Read the article

  • Office Automation: What is destroying my encoding?

    - by Filburt
    I'm facing a problem with a Word Mail Merge automation controlled by our CRM system. The setup: the base for the Mail Merge is a Word .dot template which fires a macro on Document.New. Inside this macro I create a .Net component registered for COM. Set myCOMObject = CreateObject("MyCOMObject") The component pulls some data from a database and hands back string values which are assigned to Word DocumentVariables. Set someClass = myCOMObject.GetSomeClass(123) ActiveDocument.Variables("docaddress") = someClass.GetSenderAddress(456) All string values returned from the component are encoded in UTF-8 (codepage 1200). What happens: the problem arises when the CRM system calls Word to perform the Mail Merge: the string values from the component are turned into UTF-8 encoded strings. All the static text inside the template and the data pulled for the Mail Merge stay nicely encoded in UTF-16; for example, the umlaut ü inside my DocumentVariables is turned into c3 b0 while it stays fc for the rest of the document (checked the file in a hex editor). If I create a document from a template with the same macro functionality but without performing a Mail Merge, all strings are fine, i.e. encoded in UTF-16. What changed: according to the CRM software vendor, the encoding of the Mail Merge data export was changed to UTF-16 with the new version we're currently testing. I found out that MS states that you'll experience issues when the document and the Mail Merge data file encodings don't match. What I tried: since I'm assuming the merge uses UTF-16 encoded data, I added the following lines to my macro: ActiveDocument.TextEncoding = msoEncodingWestern ActiveDocument.SaveEncoding = msoEncodingUnicodeLittleEndian This is what the Mail Merge data document specifies in its document properties.

    Read the article

  • Am I correctly extracting JPEG binary data from this mysqldump?

    - by Glenn
    I have a very old .sql backup of a vBulletin site that I ran around 8 years ago. I am trying to see the file attachments that are stored in the DB. The script below extracts them all, and the output is verified to be JPEG by hex dumping and checking the SOI (start of image) and EOI (end of image) bytes (FFD8 and FFD9, respectively) according to the JPEG wiki page. But when I try to open them with evince, I get the message "Error interpreting JPEG image file (JPEG datastream contains no image)". What could be going on here? Some background info: the sqldump is around 8 years old; vBulletin 2.x was the software that stored the info; most likely PHP 4 was used; most likely MySQL 4.0, possibly even 3.x; the column datatype these attachments are stored in is mediumtext. My Python 3.1 script: #!/usr/bin/env python3.1 import re trim_l = re.compile(b"""^INSERT INTO attachment VALUES\('\d+', '\d+', '\d+', '(.+)""") trim_r = re.compile(b"""(.+)', '\d+', '\d+'\);$""") extractor = re.compile(b"""^(.*(?:\.jpe?g|\.gif|\.bmp))', '(.+)$""") with open('attachments.sql', 'rb') as fh: for line in fh: data = trim_l.findall(line)[0] data = trim_r.findall(data)[0] data = extractor.findall(data) if data: name, data = data[0] try: filename = 'files/%s' % str(name, 'UTF-8') ah = open(filename, 'wb') ah.write(data) except UnicodeDecodeError: continue finally: ah.close() fh.close() Update: the JPEG wiki page says FF bytes are section markers, with the next byte indicating the section type. I see some that are not listed in the wiki page (specifically, I see a lot of 5C bytes, so FF5C). But the list is of "common markers", so I'm trying to find a more complete list. Any guidance here would also be appreciated.

    Read the article
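
    A minimal Python version of the SOI/EOI test described above (the path is a placeholder). Passing it only proves the first and last two bytes are right; everything in between must still be a valid JPEG stream.

        def has_jpeg_markers(path):
            with open(path, "rb") as fh:
                data = fh.read()
            # SOI = FF D8 at the start, EOI = FF D9 at the end.
            return data[:2] == b"\xff\xd8" and data[-2:] == b"\xff\xd9"

        print(has_jpeg_markers("files/example.jpg"))   # placeholder path

    If the dump was produced with mysqldump's usual escaping, bytes such as 0A, 0D, 27 and 5C appear backslash-escaped in the .sql text, which would also fit the many 5C (backslash) bytes mentioned in the update; they would need unescaping before the extracted data is really a JPEG.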

  • Munging non-printable characters to dots using string.translate()

    - by Jim Dennis
    So I've done this before, and it's a surprisingly ugly bit of code for such a seemingly simple task. The goal is to translate any non-printable character into a . (dot). For my purposes "printable" does exclude the last few characters from string.printable (new-lines, tabs, and so on). This is for printing things like the old MS-DOS debug "hex dump" format ... or anything similar to that (where additional whitespace will mangle the intended dump layout). I know I can use string.translate() and, to use that, I need a translation table. So I use string.maketrans() for that. Here's the best I could come up with: filter = string.maketrans( string.translate(string.maketrans('',''), string.maketrans('',''),string.printable[:-5]), '.'*len(string.translate(string.maketrans('',''), string.maketrans('',''),string.printable[:-5]))) ... which is an unreadable mess (though it does work). From there you can use something like: for each_line in sometext: print string.translate(each_line, filter) ... and be happy. (So long as you don't look under the hood.) Now it is more readable if I break that horrid expression into separate statements: ascii = string.maketrans('','') # The whole ASCII character set nonprintable = string.translate(ascii, ascii, string.printable[:-5]) # Optional delchars argument filter = string.maketrans(nonprintable, '.' * len(nonprintable)) And it's tempting to do that just for legibility. However, I keep thinking there has to be a more elegant way to express this!

    Read the article
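
    For comparison, a shorter Python 3 formulation of the same munging, using a plain comprehension instead of a translation table (the post's code is Python 2, where string.maketrans and string.translate are module-level functions).

        import string

        KEEP = set(string.printable) - set("\t\n\r\v\f")   # printable minus the whitespace controls

        def munge(text):
            # Replace anything outside the kept set with a dot, preserving the hex-dump layout.
            return "".join(ch if ch in KEEP else "." for ch in text)

        print(munge("ab\x00\x07cd\tef"))   # 'ab..cd.ef'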
