Search Results

Search found 796 results on 32 pages for 'hex'.

Page 28/32 | < Previous Page | 24 25 26 27 28 29 30 31 32  | Next Page >

  • As a newbie, where should I go if I want to create a small GUI program?

    - by jimbmk
    Hello, I'm a newbie with a little experience writing in BASIC, Python and, of all things, a smidgen of assembler (as part of a videogame ROM hack). I want to create a small tool, with a GUI, for modifying the hex values at particular points in a particular file. What I'm looking for is the ability to create a small GUI program that I can distribute as an EXE (or at least as a standalone directory). I'm not keen on the .NET languages, because I don't want to force people to download a massive .NET framework package. I currently have Python with IDLE and Boa Constructor set up, and the application runs there. I've tried looking up information on compiling a Python app that relies on wxWidgets, but the information I've found has been confusing or just completely incomprehensible. My questions are: Is Python a good language for this sort of project? If I use Py2Exe, will wxWidgets already be included, or will my users have to somehow install wxWidgets on their machines? Am I right in thinking that Py2Exe just produces a standalone directory, 'dist', with the files needed for the user to just double-click and run the application? If the program relies only on Tkinter for the GUI, will that be included in the EXE Py2Exe produces? If so, are there any 'visual' GUI builders / IDEs for Python with only Tkinter? Thank you for your time, JBMK
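
    A minimal Py2Exe sketch for the bundling question above (the script name and option values are hypothetical; Py2Exe typically traces the imports and copies the pure-Python modules and extension DLLs it finds, wxPython's included, into dist, so end users do not install anything separately):

        # setup.py -- hypothetical Py2Exe configuration for a small GUI tool
        from distutils.core import setup
        import py2exe  # registers the 'py2exe' command with distutils

        setup(
            windows=['hextool.py'],                    # hypothetical script name; 'windows' = GUI app, no console window
            options={'py2exe': {'bundle_files': 1}},   # try to fold everything into as few files as possible
        )

    Running python setup.py py2exe then produces the dist directory described in the question; whether bundle_files=1 succeeds depends on the extensions involved, so treat this as a starting point rather than a guarantee.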

    Read the article

  • Saving NSBitmapImageRep as NSBMPFileType file. Wrong BMP headers and bitmap content

    - by niko34
    I save an NSBitmapImageRep to a BMP file (Snow Leopard). It seems OK when I open it on Mac OS, but it causes an error on my multimedia device (which can show any BMP file from the internet). I cannot figure out what is wrong, but when I look inside the file (with the Hex Fiend app on Mac OS), two things look wrong: the header has a wrong value for the biHeight parameter, 4294966216 (hex C8FBFFFF), while the biWidth parameter is correct, 1920; and the first pixel in the bitmap content (after the 54-byte header of the BMP format) corresponds to the upper-left corner of the original image, whereas in the original BMP file, and as specified by the BMP format, the bottom-left corner pixel should come first. To explain the full workflow in my app: I have an NSImageView onto which I can drag a BMP image. This view is bound to an NSImage. After a drag & drop, an action saves this image (with some text drawn over it) to a BMP file. Here's the code for saving the new BMP file: CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB); CGContextRef context = CGBitmapContextCreate(NULL, (int)1920, (int)1080, 8, 4*(int)1920, colorSpace, kCGImageAlphaNoneSkipLast); [duneScreenView drawBackgroundWithDuneFolder:self inContext:context inRect:NSMakeRect(0,0,1920,1080) needScale:NO]; if(folderType==DXFolderTypeMovie) { [duneScreenView drawSynopsisContentWithDuneFolder:self inContext:context inRect:NSMakeRect(0,0,1920,1080) withScale:1.0]; } CGImageRef backgroundImageRef = CGBitmapContextCreateImage(context); NSBitmapImageRep *bitmapBackgroundImageRef = [[NSBitmapImageRep alloc] initWithCGImage:backgroundImageRef]; NSData *data = [destinationBitmap representationUsingType:NSBMPFileType properties:nil]; [data writeToFile:[NSString stringWithFormat:@"%@/%@", folderPath, backgroundPath] atomically:YES]; The duneScreenView drawSynopsisContentWithDuneFolder method uses CGContextDrawImage to draw the image, and the duneScreenView drawSynopsis method uses Core Text to draw some text in the same context. Do you know what's wrong?
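
    A quick check of that biHeight value (a language-neutral sketch in Python; the bytes are the ones quoted above): read as a signed little-endian 32-bit integer, C8 FB FF FF is -1080, and a negative biHeight marks a top-down DIB, which also explains why the first stored pixel is the upper-left corner.

        import struct

        raw = bytes.fromhex('C8FBFFFF')           # the biHeight bytes as they appear in the file
        print(struct.unpack('<I', raw)[0])        # 4294966216 -- unsigned reading (what the hex editor shows)
        print(struct.unpack('<i', raw)[0])        # -1080      -- signed reading: a top-down DIB, 1080 rows

    Devices that only accept the classic bottom-up layout may refuse such a file, so re-saving with a positive height and flipped rows is one direction to investigate.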

    Read the article

  • Can't get data with spaces into the database from Ajax POST request

    - by J Jones
    I have a real simple form with a textbox and a button, and my goal is to have an asynchronous request (jQuery: $.ajax) send the text to the server (PHP/mysql a la Joomla) so that it can be added to a database table. Here's the javascript that is sending the data from the client: var value= $('#myvalue').val(); $.ajax( { type: "POST", url: "/administrator/index.php", data: { option: "com_mycomponent", task: "enterValue", thevalue: value, format: "raw"}, dataType: "text", success: reportSavedValue } ); The problem arises when the user enters text with a space in it. The $_POST variable that I receive has all the spaces stripped out, so that if the user enters "This string has spaces", the server gets the value "Thisstringhasspaces". I have been googling around, and have found lots of references that I need to use encodeURIComponent. So I have tried it, but now the value that I get from $_POST is "This20string20has20spaces". So it appears to be encoding it the way I would expect, only to have the percent signs stripped instead of the spaces, and leaving the hex numbers. I'm really confused. It seems that this sort of question is asked and answered everywhere on the web, and everywhere encodeURIComponent is hailed as the silver bullet. But apparently I'm fighting a different breed of lycanthrope. Does anyone have any suggestions?
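
    The encode-then-strip behaviour described above is easy to reproduce outside the browser; here is a language-neutral sketch using Python's urllib (the PHP/Joomla side is not modelled):

        from urllib.parse import quote, unquote

        encoded = quote('This string has spaces')
        print(encoded)                    # This%20string%20has%20spaces
        print(unquote(encoded))           # This string has spaces
        print(encoded.replace('%', ''))   # This20string20has20spaces -- what a stripped '%' looks like

    Getting "This20string20has20spaces" on the server therefore suggests something between the POST body and $_POST is removing '%' (and, without encoding, the spaces), rather than encodeURIComponent doing anything wrong.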

    Read the article

  • Getting "" at the beginning of my XML File after save()

    - by Remy
    I'm opening an existing XML file with c# and I replace some nodes in there. All works fine. Just after I save it, I get the following characters at the beginning of the file:  (EF BB BF in HEX) The whole first line: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> The rest of the file looks like a normal XML file. The simplified code is here: XmlDocument doc = new XmlDocument(); doc.Load(xmlSourceFile); XmlNode translation = doc.SelectSingleNode("//trans-unit[@id='127']"); translation.InnerText = "testing"; doc.Save(xmlTranslatedFile); I'm using a C# WinForm Application. With .NET 4.0. Any ideas? Why would it do that? Can we disable that somehow? It's for Adobe InCopy and it does not open it like this. UPDATE: Alternative Solution: Saving it with the XmlTextWriter works too: XmlTextWriter writer = new XmlTextWriter(inCopyFilename, null); doc.Save(writer);

    Read the article

  • Encryption puzzle / How to create a PassStub for a Remote Assistance ticket

    - by Jon Clegg
    I am trying to create a ticket for Remote Assistance. Part of that requires creating a PassStub parameter. From the documentation (http://msdn.microsoft.com/en-us/library/cc240115(PROT.10).aspx): PassStub: The encrypted novice computer's password string. When the Remote Assistance Connection String is sent as a file over e-mail, to provide additional security, a password is used.<16> In section 16 they detail how to create a PassStub: In Windows XP and Windows Server 2003, when a password is used, it is encrypted using the PROV_RSA_FULL predefined cryptographic provider with MD5 hashing and CALG_RC4, the RC4 stream encryption algorithm. The PassStub looks like this in the file: PassStub="LK#6Lh*gCmNDpj". If you want to generate one yourself, run msra.exe in Vista or run the Remote Assistance tool in Windows XP. The documentation says this stub is the result of the function CryptEncrypt, with the key derived from the password and encrypted with the session id (those are also in the ticket file). The problem is that CryptEncrypt produces binary output way larger than the 15-byte PassStub. Also, the PassStub isn't encoded in any way I've seen before. Some interesting things about the PassStub encoding: after doing statistical analysis, the 3rd character is always one of !#$&()+-=@^. The only symbols seen everywhere are * and _; otherwise the valid characters are 0-9, a-z and A-Z. There are a total of 75 valid characters, and the stubs are always 15 bytes. Running msra.exe with the same password always generates a different PassStub, indicating that it is not a direct hash but includes the rasessionid, as the documentation says. Another idea I've had is that it is not the direct result of CryptEncrypt but the result of including the rasessionid in the MD5 hash. In MS-RA (http://msdn.microsoft.com/en-us/library/cc240013(PROT.10).aspx) the "PassStub Novice" is simply hex encoded, and it looks to be the right length. The problem is I have no idea how to get from any hash to the way the PassStub looks.
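
    For experimenting with the primitives the documentation names (MD5 plus CALG_RC4), here is a language-neutral sketch in Python. The key derivation, what exactly gets encrypted, and the final mapping onto the 75-character alphabet are assumptions, which is precisely the part the question says is undocumented:

        import hashlib

        def rc4(key, data):
            # Plain RC4 (the cipher CALG_RC4 selects): key scheduling, then keystream XOR.
            S = list(range(256))
            j = 0
            for i in range(256):
                j = (j + S[i] + key[i % len(key)]) % 256
                S[i], S[j] = S[j], S[i]
            out, i, j = [], 0, 0
            for b in data:
                i = (i + 1) % 256
                j = (j + S[i]) % 256
                S[i], S[j] = S[j], S[i]
                out.append(b ^ S[(S[i] + S[j]) % 256])
            return bytes(out)

        # Hypothetical derivation: RC4 key from MD5 of the password, ciphertext of the session id.
        key = hashlib.md5('NovicePassword'.encode('utf-16-le')).digest()
        print(rc4(key, b'RASessionID-from-the-ticket').hex())

    The output is raw binary of the same length as the input, so whatever produces the printable 15-character stub must add a truncation/encoding step that this sketch does not capture.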

    Read the article

  • How do I read text from a serial port?

    - by user2164
    I am trying to read data off of a Windows serial port through Java. I have the javax.comm libraries and am able to get some data but not correct data. When I read the port into a byte array and convert it to text I get a series of characters but no real text string. I have tried to specify the byte array as being both "UTF-8" and "US-ASCII". Does anyone know how to get real text out of this? Here is my code: while (inputStream.available() > 0) { int numBytes = inputStream.read(readBuffer); System.out.println("Reading from " + portId.getName() + ": "); System.out.println("Read " + numBytes + " bytes"); } System.out.println(new String(readBuffer)); System.out.println(new String(readBuffer, "UTF-8")); System.out.println(new String(readBuffer, "US-ASCII")); the output of the first three lines will not let me copy and paste (I assume because they are not normal characters). Here is the output of the Hex: 78786000e67e9e60061e8606781e66e0869e98e086f89898861878809e1e9880 I am reading from a Hollux GPS device which does output in string format. I know this for sure because I did it through C#. The settings that I am using for communication which I know are right from the work in the C# app are: Baud Rate: 9600 Databits: 8 Stop bit: 1 parity: none

    Read the article

  • BN_hex2bn magically segfaults in openSSL

    - by xunil154
    Greetings, this is my first post on Stack Overflow, and I'm sorry if it's a bit long. I'm trying to build a handshake protocol for my own project and am having issues with the server converting the client's RSA public key to a BIGNUM. It works in my client code, but the server segfaults when attempting to convert the hex value of the client's public RSA key to a BIGNUM. I have already checked that there is no garbage before or after the RSA data, and I have looked online, but I'm stuck. Header segment: typedef struct KEYS { RSA *serv; char* serv_pub; int pub_size; RSA *clnt; } KEYS; KEYS keys; Initializing function: // Generates and validates the server's key /* code for generating the server RSA left out, it's working */ //Set client exponent keys.clnt = 0; keys.clnt = RSA_new(); BN_dec2bn(&keys.clnt->e, RSA_E_S); // RSA_E_S contains the public exponent Problem code (in Network::server_handshake): // *Received an encrypted message from the network and decrypted it into 'buffer' (1024 bytes long)* cout << "Assigning clients RSA" << endl; // I have verified that 'buffer' contains the proper key if (BN_hex2bn(&keys.clnt->n, buffer) < 0) { Error("ERROR reading server RSA"); } cout << "clients RSA has been assigned" << endl; The program segfaults at BN_hex2bn(&keys.clnt->n, buffer) with this error (valgrind output): Invalid read of size 8 at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8) by 0x40F23E: Network::server_handshake() (Network.cpp:177) by 0x40EF42: Network::startNet() (Network.cpp:126) by 0x403C38: main (server.cpp:51) Address 0x20 is not stack'd, malloc'd or (recently) free'd Process terminating with default action of signal 11 (SIGSEGV) Access not within mapped region at address 0x20 at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8) And I don't know why: I'm using the exact same code in the client program, and it works just fine. Any input is greatly appreciated!

    Read the article

  • PHP Unicode character questions

    - by user271619
    Here's a link I found, which even has a character I need to play with for other projects of mine: http://www.fileformat.info/info/unicode/char/2446/index.htm There is a box titled "Encodings" on that page, and I am wondering about some of its rows. I obviously need a course on this sort of thing, but I'm wondering what the difference is between "HTML Entity (decimal)" and "HTML Entity (hex)". The funny thing, which confuses me, is that I throw those characters onto a web page and they display fine, but I haven't specified any UTF-8 encoding in the PHP page. <?php $string1 = '&#x2446;'; $string2 = '&#9286;'; echo $string1; echo '<br>'; echo $string2; ?> Does the browser know how to display both automatically? And to make it weirder, I can only see those characters on my Mac, in Firefox. My Windows box doesn't want to show them; I've tested it in Chrome and Firefox. Do I need to tell the browsers to display them correctly? Or is it an operating system issue?
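
    On the decimal-versus-hex part: both entities name the same Unicode code point, just written in different bases (0x2446 = 9286), so browsers treat them identically. A quick language-neutral check in Python:

        import html

        print(int('2446', 16))                                          # 9286
        print(html.unescape('&#x2446;') == html.unescape('&#9286;'))    # True -- same character either way

    Whether the character actually shows up is then a font-coverage question, which is likely why it renders on one machine and not on another.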

    Read the article

  • Writing a JavaScript zip code validation function

    - by mkoryak
    I would like to write a JavaScript function that validates a zip code by checking whether the zip code actually exists. Here is a list of all zip codes: http://www.census.gov/tiger/tms/gazetteer/zips.txt (I only care about the 2nd column). This is really a compression problem, and I would like to do this for fun. OK, now that's out of the way, here is a list of optimizations over a straight hashtable that I can think of; feel free to add anything I have not thought of: Break the zip code into 2 parts, the first 2 digits and the last 3 digits. Make a giant if-else statement, first checking the first 2 digits, then checking ranges within the last 3 digits. Or convert the zips into hex, and see if I can do the same thing using smaller groups. Find out whether, within the range of all valid zip codes, there are more valid zip codes than invalid ones, and write the above code targeting the smaller group. Break up the hash into separate files and load them via Ajax as the user types the zip code: perhaps break it into 2 parts, the first for the first 2 digits, the second for the last 3. Lastly, I plan to generate the JavaScript files using another program, not by hand. Edit: performance matters here. I do want to use this, if it doesn't suck: performance of the JavaScript code execution plus download time. Edit 2: JavaScript-only solutions please. I don't have access to the application server; plus, that would make this into a whole other problem =)

    Read the article

  • Umlaute from JSP-page are misinterpreted

    - by Karin
    I'm getting input from a JSP page that can contain umlauts (e.g. Ä, Ö, Ü, ä, ö, ü, ß). Whenever an umlaut is entered in the input field, an incorrect value gets passed on. For example, if an "ä" (U+00E4) is entered in the input field, the String that is extracted from the argument is "Ã¤" (U+00C3 followed by U+00A4). It seems to me as if the UTF-8 byte encoding (which is C3 A4 for an "ä") is being used for the conversion. How can I retrieve the correct value? Here are snippets from the current implementation. The JSP page passes the input value "pk" on to the processing logic: <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> ... <input type="text" name="<%=pk.toString()%>" value="<%=value%>" size="70"/> <button type="submit" title="Edit" value='Save' onclick="action.value='doSave';pk.value='<%=pk.toString()%>'"><img src="icons/run.png"/>Save</button> The value gets retrieved from args and converted to a string: UUID pk = UUID.fromString(args.get("pk")); //$NON-NLS-1$ String value = args.get(pk.toString()); Note: umlauts that are saved in the database get displayed correctly on the page.
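
    This is the classic mojibake pattern: the browser sends the UTF-8 bytes C3 A4 and something on the server decodes them as ISO-8859-1, producing two characters. A language-neutral illustration in Python (the Java-side fix, typically setting the request character encoding to UTF-8 before reading parameters, is not shown):

        text = 'ä'
        garbled = text.encode('utf-8').decode('latin-1')    # UTF-8 bytes read as ISO-8859-1
        print(garbled)                                      # Ã¤ -- exactly the U+00C3 U+00A4 pair seen above
        print(garbled.encode('latin-1').decode('utf-8'))    # ä  -- the same bytes decoded as UTF-8 again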

    Read the article

  • Why execution of a portion of code loaded from external file is not halted by the OS?

    - by menjaraz
    I've harnessed a project released on internet a long time ago. Here comes the details, all irrelevant things being stripped off for sake of concision and clarity. A binary file whose content is descibed below HEX DUMP: 55 89 E5 83 EC 08 C7 45 FC 00 00 00 00 8B 45 FC 3B 45 10 72 02 EB 19 8B 45 FC 8B 55 0C 01 C2 8B 45 FC 03 45 08 8A 00 88 02 8D 45 FC FF 00 EB DD C6 45 FA 00 83 7D 10 01 76 6C 80 7D FA 00 74 02 EB 64 C6 45 FA 01 C7 45 FC 00 00 00 00 8B 45 10 48 39 45 FC 72 02 EB E2 8B 45 FC 8B 4D 0C 01 C1 8B 45 FC 03 45 0C 8D 50 01 8A 01 3A 02 73 30 8B 45 FC 03 45 0C 8A 00 88 45 FB 8B 45 FC 8B 55 0C 01 C2 8B 45 FC 03 45 0C 40 8A 00 88 02 8B 45 FC 03 45 0C 8D 50 01 8A 45 FB 88 02 C6 45 FA 00 8D 45 FC FF 00 EB A7 C9 C2 0C 00 90 90 90 90 90 90 is loaded into memory and executed using the following method snippet var MySrcArray, MyDestArray: array [1 .. 15] of Byte; // ... MyBuffer: Pointer; TheProc: procedure; SortIt: procedure(ASrc, ADest: Pointer; ASize: LongWord); stdcall; begin // Initialization of MySrcArray with random Bytes and display here ... // Instructions of loading of the binary file into MyBuffer using merely **GetMem** here ... @SortIt := MyBuffer; try SortIt(@MySrcArray, @MyDestArray, 15); // Display of MyDestArray (The outcome of the processing !) except // Invalid code error handling end; // Cleaning code here ... end; works like a charm on my box. My Question: How comes it works without using VirtualAlloc and/or VirtualProtect?

    Read the article

  • git crlf configuration in mixed environment

    - by Jonas Byström
    I'm running a mixed environment and keep a central, bare repository where I pull and push most of my stuff. This centralized repository runs on Linux, and I check out on Windows XP/7, Mac and Linux. In all repositories I put the following in my .git/config: [core] autocrlf = true I don't have the flag safecrlf=true anywhere. The first time I modify stuff on one Windows machine (XP) there is no problem, and when I look at the diff it looks fine. But when I do the same on the other Windows machine (7), all lines are shown as changed, although the local line endings are \r\n as expected (when checked in a hex editor). The same applies to the Mac OS X machine. Sometimes I get the feeling that the different systems wrestle over line endings, but I can't be sure (I'm losing track of all the times I change specific files). I didn't always have autocrlf set, but I set the flag many months back. Could that be causing my current problems? Do I need to clone everything again to lose some old baggage? Or are there other things that need configuring too? I tried git checkout -- . about a million times, but with no success.

    Read the article

  • Compile time float packing/punning

    - by detly
    I'm writing C for the PIC32MX, compiled with Microchip's PIC32 C compiler (based on GCC 3.4). My problem is this: I have some reprogrammable numeric data that is stored either on EEPROM or in the program flash of the chip. This means that when I want to store a float, I have to do some type punning: typedef union { int intval; float floatval; } IntFloat; unsigned int float_as_int(float fval) { IntFloat intf; intf.floatval = fval; return intf.intval; } // Stores an int of data in whatever storage we're using void StoreInt(unsigned int data, unsigned int address); void StoreFPVal(float data, unsigned int address) { StoreInt(float_as_int(data), address); } I also include default values as an array of compile time constants. For (unsigned) integer values this is trivial, I just use the integer literal. For floats, though, I have to use this Python snippet to convert them to their word representation to include them in the array: import struct hex(struct.unpack("I", struct.pack("f", float_value))[0]) ...and so my array of defaults has these indecipherable values like: const unsigned int DEFAULTS[] = { 0x00000001, // Some default integer value, 1 0x3C83126F, // Some default float value, 0.005 } (These actually take the form of X macro constructs, but that doesn't make a difference here.) Commenting is nice, but is there a better way? It's be great to be able to do something like: const unsigned int DEFAULTS[] = { 0x00000001, // Some default integer value, 1 COMPILE_TIME_CONVERT(0.005), // Some default float value, 0.005 } ...but I'm completely at a loss, and I don't even know if such a thing is possible. Notes Obviously "no, it isn't possible" is an acceptable answer if true. I'm not overly concerned about portability, so implementation defined behaviour is fine, undefined behaviour is not (I have the IDB appendix sitting in front of me). As fas as I'm aware, this needs to be a compile time conversion, since DEFAULTS is in the global scope. Please correct me if I'm wrong about this.
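
    The struct trick in that Python snippet is the whole conversion; a slightly expanded, runnable version of it (the helper name is hypothetical) for generating such constants:

        import struct

        def float_bits(value):
            # Reinterpret an IEEE-754 single-precision float as its 32-bit pattern.
            return struct.unpack('<I', struct.pack('<f', value))[0]

        print(hex(float_bits(1.0)))      # 0x3f800000
        print(hex(float_bits(0.005)))    # the word to paste into DEFAULTS[]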

    Read the article

  • Can I get consistent CSS colors across browsers?

    - by Trevor Burnham
    I'm testing a new site, and I have a div with background-color: #bbf6bb; That seems innocuous enough to me. And yet, on my MacBook Pro, the color looks very different in Firefox 3.6 vs. Safari 4. In Safari, it's the color I'd expect from the hex value: a pale green. In Firefox, there's a definite bluish tint, making the color turquoise. I'm aware of color inconsistencies that result from different treatment of images across browsers, but in pure CSS? Really? I'm guessing that Firefox trying to correct for my display in hopes of delivering better consistency with print, but I'd much rather have my site look the same hue to my users regardless of their choice of browser. Any ideas? Can someone confirm that Firefox is the culprit here? [Update: This seems to have been a fluke. Specifically, it's a narrow issue with Firefox—see my answer below. I'm puzzled, but relieved.]

    Read the article

  • Does anyone have documentation on SHGetSysColor?

    - by Paulo Santos
    I'm trying to find any reference for this function, but I haven't found anything. All I have is an obscure KB article from Microsoft noting that a programmer made a boo-boo when coding part of Windows Mobile 6: where he should have called SHGetSysColor he instead called GetSysColor, which gives a completely different color for the same spec. From what I could gather, the function reads a color value from the registry, from HKEY_LOCAL_MACHINE\Software\Microsoft\Color\SHColor or HKEY_LOCAL_MACHINE\Software\Microsoft\Color\DefSHColor, and returns the color according to the index. In that registry key I have the following value on a standard Windows Mobile 6.5 device: "DefSHColor"=hex:\ ff,00,00,00,00,00,00,00,dd,dd,dd,00,ff,ff,cc,00,ff,ff,ff,00,15,af,bc,00,15,\ af,bc,00,c9,e7,e9,00,14,9c,a7,00,ff,ff,ff,00,14,9c,a7,00,14,9c,a7,00,14,9c,\ a7,00,15,af,bc,00,14,9c,a7,00,ff,ff,ff,00,c9,e7,e9,00,37,c7,d3,00,37,c7,d3,\ 00,ff,ff,ff,00,00,b7,c9,00,14,9c,a7,00,ff,ff,ff,00,15,af,bc,00,84,84,c3,00,\ 15,af,bc,00,14,9c,a7,00,ff,ff,ff,00,ff,ff,ff,00,00,00,00,00,ff,ff,ff,00,00,\ 00,00,00,ff,ff,ff,00,2e,44,4f,00,00,14,3c,00,00,f0,ff,00,ff,ff,ff,00,c9,e7,\ e9,00,14,9c,a7,00,ff,ff,ff,00,14,9c,a7,00 And I realized that every four bytes represent a different color (RR,GG,BB,AA; the AA is my assumption, as every color there has that byte as 00, which would mean a solid color). What I can't get a fix on is what each index means, as there are 41 different colors in there. Googling for SHGetSysColor gives me only 7 matches: two of them are the KB article from Microsoft (one in English, the other in French), one is from a Russian site (which I don't read), another two are from freepascal.org, and one from Koders.com describes the commctl.def file. I went through commctl.h trying to find a reference to this function, and found absolutely nothing. No search on MSDN, whether from Google, Bing, or the default MSDN search, gave me any result. So, does anyone know which indexes we are talking about here?
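
    A small sketch of the four-bytes-per-entry reading described above (language-neutral Python; the RR,GG,BB,AA interpretation is the question's own assumption, and what each index means is exactly what is being asked):

        blob = "ff,00,00,00,00,00,00,00,dd,dd,dd,00,ff,ff,cc,00"   # first entries of DefSHColor, truncated
        vals = [int(v, 16) for v in blob.split(",")]
        colors = [tuple(vals[i:i + 4]) for i in range(0, len(vals), 4)]
        for index, entry in enumerate(colors):
            print(index, entry)    # 0 (255, 0, 0, 0), 1 (0, 0, 0, 0), 2 (221, 221, 221, 0), ...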

    Read the article

  • Boost binding a function taking a reference

    - by Jamie Cook
    Hi all, I am having problems compiling the following snippet int temp; vector<int> origins; vector<string> originTokens = OTUtils::tokenize(buffer, ","); // buffer is a char[] array // original loop BOOST_FOREACH(string s, originTokens) { from_string(temp, s); origins.push_back(temp); } // I'd like to use this to replace the above loop std::transform(originTokens.begin(), originTokens.end(), origins.begin(), boost::bind<int>(&FromString<int>, boost::ref(temp), _1)); where the function in question is // the third parameter should be one of std::hex, std::dec or std::oct template <class T> bool FromString(T& t, const std::string& s, std::ios_base& (*f)(std::ios_base&) = std::dec) { std::istringstream iss(s); return !(iss >> f >> t).fail(); } the error I get is 1>Compiling with Intel(R) C++ 11.0.074 [IA-32]... (Intel C++ Environment) 1>C:\projects\svn\bdk\Source\deps\boost_1_42_0\boost/bind/bind.hpp(303): internal error: assertion failed: copy_default_arg_expr: rout NULL, no error (shared/edgcpfe/il.c, line 13919) 1> 1> return unwrapper<F>::unwrap(f, 0)(a[base_type::a1_], a[base_type::a2_]); 1> ^ 1> 1>icl: error #10298: problem during post processing of parallel object compilation Google is being unusually unhelpful so I hope that some one here can provide some insights.

    Read the article

  • Memory Profiling with DotTrace Questions

    - by cam
    I ran dotTrace on my application (which is having some issues). IntPtr System.Windows.Forms.UnsafeNativeMethods.CallWindowProc(IntPtr, IntPtr, Int32, IntPtr, IntPtr) Void System.Windows.Forms.UnsafeNativeMethods.WaitMessage() Are the two main functions that came up, taking about 94% of the application time. Since I didn't know what these two functions were, I ran through my code line by line. It runs smooth and efficiently until a point where it just hangs. "newFrm.Show()". The newFrm only contains a textbox. The larger the file I load into the text box (it's a notepad program), the longer it takes. Now normally this makes sense, but it takes about 30 seconds for a 167 kB file. Now I'm not sure what to do. It runs incredibly slow/stops functioning when you load a textfile and try to resize the window containing the text file too. Then I realized that it is only struggling to open text files with a long string of hex inside (ie) "XX-XX-XX-" etc. With other similarly sized files it struggles with resizing somewhat, but opens within a couple seconds. Does this have something to do with the textbox properties? I've set it to multiline and set maximum characters to 0 (so unlimited). How do I solve this issue? Is there some way I can see what is being called in those functions?

    Read the article

  • Reading Unicode files line by line C++

    - by Roger Nelson
    What is the correct way to read Unicode files line by line in C++? I am trying to read a file saved as Unicode (LE) by Windows Notepad. Suppose the file contains simply the characters A and B on separate lines. Reading the file byte by byte, I see the following byte sequence (hex): FE FF 41 00 0D 00 0A 00 42 00 0D 00 0A 00 So: 2-byte BOM, 2-byte 'A', 2-byte CR, 2-byte LF, 2-byte 'B', 2-byte CR, 2-byte LF. I tried reading the text file using the following code: std::wifstream file("test.txt"); file.seekg(2); // skip BOM std::wstring A_line; std::wstring B_line; getline(file,A_line); // I get "A" getline(file,B_line); // I get "\0B" I get the same results using operator>> instead of getline: file >> A_line; file >> B_line; It appears that the single-byte CR character is being consumed as only one byte, or that CR NUL LF is being consumed but not the high NUL byte. I would expect wifstream in text mode to read the 2-byte CR and 2-byte LF. What am I doing wrong? It does not seem right that one should have to read a text file byte by byte in binary mode just to parse the newlines.

    Read the article

  • Object Oriented Perl interface to read from and write to a socket

    - by user654967
    I need a Perl client-server implementation as a wrapper for a server written in C#. A Perl script passes the server address, the port number and an input string to a module; this module has to create the socket and send the input string to the server. The data sent has to use ISO-8859-1 encoding. On receiving the information, the client has to first receive 3 bytes, then the next 8 bytes, which hold the length of the data to be received next; based on that length the client has to read the following data. Each piece of data that is read has to be stored in a variable and sent to another module for further processing. Currently this is what my Perl client looks like, which I'm sure isn't right. Could someone tell me how to do this and set me in the right direction? sub WriteInfo { my ($addr, $port, $Input) = @_; $socket = IO::Socket::INET->new( Proto => "tcp", PeerAddr => $addr, PeerPort => $port, ); unless ($socket) { die "cannot connect to remote" } while (1) { $socket->send($Input); } } sub ReadData { while (1) { my $ExecutionResult = $socket->recv( $recv_data, 3); my $DataLength = $socket->recv( $recv_data, 8); $DataLength =~ s/^0+// ; my $decval = hex($DataLength); my $Data = $socket->recv( $recv_data, $decval); return($Data); } } Thanks a lot, Archer

    Read the article

  • Weird character at start of json content type

    - by Nek
    Hi, I'm trying to return JSON content read from a MySQL server. This is supposed to be easy, but there is a 'weird' character that keeps appearing at the start of the content. I have two pages for returning content: kcb433.sytes.net/as/test.php?json=true&limit=6&input=d (this test.php is from a script written by Timothy Groves, which converts an array to JSON output) and http://kcb433.sytes.net/k.php?k=4 (this one is supposed to do the same). I tried to validate them at jsonformatter.curiousconcept.com, but only page 1 gets validated; for page 2 it says that it does not contain JSON data. If accessed directly, both pages show no problems. So what is the difference, and why doesn't page 2 validate? Then I found the page jsonformat.com and tried the same thing. Page 1 was OK and page 2 wasn't, but surprisingly the data could be read. At a glance, {"a":"b"} may look good, but there is a character in front. According to an online hex editor, this is the value of the string above (instead of 9 values, there are 10): -- 7B 22 61 22 3A 22 62 22 7D The code to echo the JSON in page 2 is: header("Content-Type: application/json"); echo "{\"a\":\"b\"}";

    Read the article

  • Good way to identify similar images?

    - by Nick
    I've developed a simple and fast algorithm in PHP to compare images for similarity. It's fast to hash (~40 per second for 800x600 images), and an unoptimised search algorithm can go through 3,000 images in 22 minutes, comparing each one against the others (3/sec). The basic overview: you take an image, rescale it to 8x8 and convert those pixels to HSV. The hue, saturation and value are then truncated to 4 bits each, and it all becomes one big hex string. Comparing images basically walks along the two strings and adds up the differences it finds. If the total is below 64 then it's the same image; different images are usually around 600-800, and below 20 is extremely similar. Are there any improvements on this model I can use? I haven't looked at how relevant the different components (hue, saturation and value) are to the comparison. Hue is probably quite important, but the others? To speed up searches I could probably split the 4 bits from each part in half and put the most significant bits first, so if they fail the check then the LSBs don't need to be checked at all. I don't know an efficient way to store bits like that while still allowing them to be searched and compared easily. I've been using a dataset of 3,000 photos (mostly unique) and there haven't been any false positives. It's completely immune to resizes and fairly resistant to brightness and contrast changes.
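
    For reference, a language-neutral sketch of the hash and distance described above (Python with Pillow assumed; "adds up the differences" is read here as the sum of per-digit differences, which fits the quoted thresholds better than a simple mismatch count would):

        from PIL import Image

        def hsv_hash(path):
            # Rescale to 8x8, convert to HSV, keep the top 4 bits of each channel -> 192 hex digits.
            pixels = Image.open(path).convert('RGB').resize((8, 8)).convert('HSV').getdata()
            return ''.join('%x%x%x' % (h >> 4, s >> 4, v >> 4) for h, s, v in pixels)

        def distance(a, b):
            # Walk the two strings and add up the per-digit differences.
            return sum(abs(int(x, 16) - int(y, 16)) for x, y in zip(a, b))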

    Read the article

  • PHP: json_decode dumping NULL, BOM not found

    - by SerEnder
    I've been trying to find out why this 'json_encode'd string isn't parsing out correctly, and came across previously answered questions that had the UTF BOM sequence that was throwing the error, but didn't help me here. Here's the code that isn't currently working: //Decode the notes attached to the sig $aNotes = json_decode($rule->getNotes(),true); $bom = pack("CCC",0xef,0xbb,0xbf); if(0 == strncmp($rule->getNotes(),$bom,3)) { print('BOM detected - json encoding in UTF-8<br/>'); } else { print('BOM NOT detected - json encoding correctly<br/>'); } print('rule->getNotes:<br/>' . $rule->getNotes() .'<br/>'); var_dump($aNotes); Which generates this result: BOM NOT detected - json encoding correctly rule->getNotes: [{"lDate":"Unknown","sAuthor":"Unknown","sNote":"This is a general purpose Russian spam rule that matches anything starting with 2, 3 or 4 hex digits followed by a domain name ending with .ru -RSK 2010-05-10"},{"lDate":"1295031463082","sAuthor":"Drew Thorstenson","sNote":"this is Ryan's ru rule"}] NULL I've run it through JSON Lint, which said it was valid, and An Online JSON Parser which parsed it correctly too. Any insight would be greatly appreciated.
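
    The quoted payload itself is valid JSON, so one way to chase the NULL is to dump exactly what $rule->getNotes() returns to a file and scan it byte by byte for anything invisible; a Python sketch (the dump file name is hypothetical):

        import json

        raw = open('notes_dump.bin', 'rb').read()        # hypothetical byte-for-byte dump of $rule->getNotes()
        suspicious = [(i, hex(b)) for i, b in enumerate(raw)
                      if b > 0x7e or (b < 0x20 and b not in (0x09, 0x0a, 0x0d))]
        print(suspicious[:10])                           # offsets of any control or non-ASCII bytes
        print(json.loads(raw.decode('utf-8')))           # raises if the bytes really are malformed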

    Read the article

  • Reverse reading WORD from a binary file?

    - by Angel
    Hi, I have a structure: struct JFIF_HEADER { WORD marker[2]; // = 0xFFD8FFE0 WORD length; // = 0x0010 BYTE signature[5]; // = "JFIF\0" BYTE versionhi; // = 1 BYTE versionlo; // = 1 BYTE xyunits; // = 0 WORD xdensity; // = 1 WORD ydensity; // = 1 BYTE thumbnwidth; // = 0 BYTE thumbnheight; // = 0 }; This is how I read it from the file: HANDLE file = CreateFile(filename, GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0); DWORD tmp = 0; DWORD size = GetFileSize(file, &tmp); BYTE *DATA = new BYTE[size]; ReadFile(file, DATA, size, &tmp, 0); JFIF_HEADER header; memcpy(&header, DATA, sizeof(JFIF_HEADER)); This is how the beginning of my file looks in hex editor: 0xFF 0xD8 0xFF 0xE0 0x00 0x10 0x4A 0x46 0x49 0x46 0x00 0x01 0x01 0x00 0x00 0x01 When I print header.marker, it shows exactly what it should (0xFFD8FFE0). But when I print header.length, it shows 0x1000 instead of 0x0010. The same thing is with xdensity and ydensity. Why do I get wrong data when reading a WORD? Thank you.
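
    The pattern described (0x0010 in the file, 0x1000 after reading) is a byte-order effect: the multi-byte fields of a JFIF/JPEG header are stored big-endian, while a WORD read on x86 is little-endian, so the two bytes come out swapped. A quick language-neutral check in Python:

        import struct

        raw = bytes([0x00, 0x10])                  # the two 'length' bytes from the dump above
        print(hex(struct.unpack('<H', raw)[0]))    # 0x1000 -- little-endian, what the memcpy'd struct sees
        print(hex(struct.unpack('>H', raw)[0]))    # 0x10   -- big-endian, what the file format means

    So byte-swapping those fields after reading (ntohs-style), rather than changing the struct layout, is the usual way to handle it.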

    Read the article

  • DNS protocol message example

    - by virtual-lab
    Hello there, I am trying to figure out how to send DNS messages from an application socket adapter to a DNSBL. I spent the last two days understanding the basics, including experimenting with Wireshark to capture an example of the messages exchanged. Now I would like to query DNS without using the dig or host commands (I'm using Ubuntu); how can I do this at a low level, without the help of those tools wrapping the request in the proper DNS message format? How should the message be sent: as hex or as a string? Thanks in advance for any help. Regards, Alessandro Ilardo. Comment added: I am investigating JDev and Oracle SOA. The platform provides a Socket Adapter which simply applies a transformation (XSLT) and sends the message straight to the socket. How the payload parameters (e.g. the host I'm looking up) are wrapped inside the message is left to the developer. So basically I have an idea of how the whole DNS message is structured, but rather than putting everything into JDev straight away, I'd like to run some tests on my own just to make sure I've got a valid message format. I am not using any specific language (I don't even understand why they moved my question from Server Fault), and I don't want to use any tool that would hide part of the message, such as the header. I know those tools work well, by the way. I guess this has something to do with packet injection. Someone suggested I use telnet, but I've only used it for SMTP or HTTP, and I haven't got a clue how it would work for a DNS request. Does it make more sense now?
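
    On the hex-or-string question: a DNS query is raw binary, a 12-byte header followed by the encoded question, normally sent over UDP port 53. A minimal language-neutral sketch in Python; the DNSBL-style lookup name and the resolver address are placeholders:

        import socket
        import struct

        def build_query(name, qtype=1, qclass=1):
            # Header: ID, flags (0x0100 = recursion desired), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
            header = struct.pack('>HHHHHH', 0x1234, 0x0100, 1, 0, 0, 0)
            # Question: length-prefixed labels, a terminating zero byte, then QTYPE and QCLASS
            qname = b''.join(bytes([len(label)]) + label.encode('ascii') for label in name.split('.'))
            return header + qname + b'\x00' + struct.pack('>HH', qtype, qclass)

        query = build_query('2.0.0.127.dnsbl.example.org')    # placeholder reversed-IP DNSBL name
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(query, ('192.0.2.53', 53))                 # placeholder resolver address
        reply, _ = sock.recvfrom(512)
        print(reply.hex())

    The reply comes back in the same binary format, so the payload to hand the socket adapter is raw bytes, not a hex string or readable text.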

    Read the article

< Previous Page | 24 25 26 27 28 29 30 31 32  | Next Page >