Search Results

Search found 3768 results on 151 pages for 'lite byte'.

Page 51/151 | < Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >

  • How is data stored at the bit level according to "Endianness"?

    - by bakra
    I read about Endianness and understood squat... so I wrote this main() { int k = 0xA5B9BF9F; BYTE *b = (BYTE*)&k; //value at *b is 9f b++; //value at *b is BF b++; //value at *b is B9 b++; //value at *b is A5 } k was equal to "A5 B9 BF 9F" and the (byte) pointer "walk" output was "9F BF B9 A5" so I get it: bytes are stored backwards...ok. So now I thought how is it stored at BIT level... I mean, is "9f"(1001 1111) stored as "f9"(1111 1001)? so I wrote this int _tmain(int argc, _TCHAR* argv[]) { int k = 0xA5B9BF9F; void *ptr = &k; bool temp= TRUE; cout<<"ready or not here I come \n"; for(int i=0;i<32;i++) { temp = *( (bool*)ptr + i ); if( temp ) cout<<"1 "; if( !temp) cout<<"0 "; if(i==7||i==15||i==23) cout<<" - "; } } I get some random output; even for nos. like "32" I don't get anything sensible. Why?

    Read the article
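
    As a side note: endianness reorders the bytes of a multi-byte value, never the bits inside a byte, and MSVC's bool occupies a whole byte, so casting the int's address to bool* reads bytes (and then walks well past the end of k), which is why the second program prints noise. A minimal sketch of the same experiment, written here in C# rather than the asker's C++ and assuming a little-endian machine:

      using System;

      class EndianDemo
      {
          static void Main()
          {
              int k = unchecked((int)0xA5B9BF9F);

              // Byte order: on a little-endian machine this prints 9F-BF-B9-A5.
              byte[] bytes = BitConverter.GetBytes(k);
              Console.WriteLine(BitConverter.ToString(bytes));

              // The bits inside each byte are not reversed: 0x9F is still 1001 1111.
              foreach (byte b in bytes)
                  Console.WriteLine(Convert.ToString(b, 2).PadLeft(8, '0'));
          }
      }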

  • What am I doing wrong with HttpResponse content and headers when downloading a file?

    - by Slauma
    I want to download a PDF file from a SQL Server database, where it is stored in a binary column. There is a LinkButton on an aspx page. The event handler of this button looks like this: protected void LinkButtonDownload(object sender, EventArgs e) { ... byte[] aByteArray; // Read binary data from database into this ByteArray // aByteArray has the size: 55406 bytes Response.ClearHeaders(); Response.ClearContent(); Response.BufferOutput = true; Response.AddHeader("Content-Disposition", "attachment; filename=" + "12345.pdf"); Response.ContentType = "application/pdf"; using (BinaryWriter aWriter = new BinaryWriter(Response.OutputStream)) { aWriter.Write(aByteArray, 0, aByteArray.Length); } } A "File Open/Save dialog" is offered in my browser. When I store this file "12345.pdf" to disk, the file has a size of 71523 bytes. The additional 16 kB at the end of the PDF file are the HTML code of my page (as I can see when I view the file in an editor). I am confused because I believed that ClearContent and ClearHeaders would ensure that the page content is not sent together with the file content. What am I doing wrong here? Thanks for the help!

    Read the article
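
    A likely explanation (offered as a hedged sketch, not a verified fix): ClearContent and ClearHeaders only empty what has already been buffered; they do not stop ASP.NET from rendering the rest of the page afterwards, and that rendered HTML is appended to the response, which is where the extra ~16 kB comes from. Ending the response after writing the PDF is the usual cure; LoadPdfFromDatabase below is a hypothetical stand-in for the database read:

      protected void LinkButtonDownload(object sender, EventArgs e)
      {
          byte[] aByteArray = LoadPdfFromDatabase();   // hypothetical helper for the DB read

          Response.Clear();
          Response.ClearHeaders();
          Response.AddHeader("Content-Disposition", "attachment; filename=12345.pdf");
          Response.ContentType = "application/pdf";
          Response.BinaryWrite(aByteArray);

          // Stops further page rendering so no page HTML is appended to the download.
          // (Response.End() throws ThreadAbortException by design; an alternative is
          // HttpContext.Current.ApplicationInstance.CompleteRequest().)
          Response.End();
      }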

  • Retrieving dll version info via Win32 - VerQueryValue(...) crashes under Win7 x64

    - by user256890
    The respected open source .NET wrapper implementation (SharpBITS) of Windows BITS services fails identifying the underlying BITS version under Win7 x64. Here is the source code that fails. NativeMethods are native Win32 calls wrapped by .NET methods and decorated via DllImport attribute. private static BitsVersion GetBitsVersion() { try { string fileName = Path.Combine( System.Environment.SystemDirectory, "qmgr.dll"); int handle = 0; int size = NativeMethods.GetFileVersionInfoSize(fileName, out handle); if (size == 0) return BitsVersion.Bits0_0; byte[] buffer = new byte[size]; if (!NativeMethods.GetFileVersionInfo(fileName, handle, size, buffer)) { return BitsVersion.Bits0_0; } IntPtr subBlock = IntPtr.Zero; uint len = 0; if (!NativeMethods.VerQueryValue(buffer, @"\VarFileInfo\Translation", out subBlock, out len)) { return BitsVersion.Bits0_0; } int block1 = Marshal.ReadInt16(subBlock); int block2 = Marshal.ReadInt16((IntPtr)((int)subBlock + 2 )); string spv = string.Format( @"\StringFileInfo\{0:X4}{1:X4}\ProductVersion", block1, block2); string versionInfo; if (!NativeMethods.VerQueryValue(buffer, spv, out versionInfo, out len)) { return BitsVersion.Bits0_0; } ... The implementation follows the MSDN instructions by the letter. Still during the second VerQueryValue(...) call, the application crashes and kills the debug session without hesitation. Just a little more debug info right before the crash: spv = "\StringFileInfo\040904B0\ProductVersion" buffer = byte[1900] - full with binary data block1 = 1033 block2 = 1200 I looked at the targeted "C:\Windows\System32\qmgr.dll" file (The implementation of BITS) via Windows. It says that the Product Version is 7.5.7600.16385. Instead of crashing, this value should return in the verionInfo string. Any advice?

    Read the article
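
    A plausible culprit (a sketch of an assumption, not a confirmed diagnosis): on x64 the expression (IntPtr)((int)subBlock + 2) truncates the 64-bit pointer returned by VerQueryValue to 32 bits before reading from it, which can take the whole process down. Marshal.ReadInt16 has an offset overload that avoids the cast entirely:

      // Pointer-size-safe reading of the \VarFileInfo\Translation block:
      int block1 = Marshal.ReadInt16(subBlock);
      int block2 = Marshal.ReadInt16(subBlock, 2);   // offset overload, no 32-bit cast
      // or, equivalently: Marshal.ReadInt16(new IntPtr(subBlock.ToInt64() + 2));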

  • Why should the "prime-based" hashcode implementation be used instead of the "naive" one?

    - by Wilhelm
    I have seen that a prime number implementation of the GetHashCode function is being recommended, for example here. However, using the following code (in VB, sorry), it seems as if that implementation gives the same hash density as a "naive" xor implementation. If the density is the same, I would suppose there is the same probability of collision in both implementations. Am I missing something about why the prime approach is preferred? I am supposing that if the hash code is a byte I do not lose generality for the integer case. Sub Main() Dim XorHashes(255) As Integer Dim PrimeHashes(255) As Integer For i = 0 To 255 For j = 0 To 255 For k = 0 To 255 XorHashes(GetXorHash(i, j, k)) += 1 PrimeHashes(GetPrimeHash(i, j, k)) += 1 Next Next Next For i = 0 To 255 Console.WriteLine("{0}: {1}, {2}", i, XorHashes(i), PrimeHashes(i)) Next Console.ReadKey() End Sub Public Function GetXorHash(ByVal valueOne As Integer, ByVal valueTwo As Integer, ByVal valueThree As Integer) As Byte Return CByte((valueOne Xor valueTwo Xor valueThree) Mod 256) End Function Public Function GetPrimeHash(ByVal valueOne As Integer, ByVal valueTwo As Integer, ByVal valueThree As Integer) As Byte Dim TempHash = 17 TempHash = 31 * TempHash + valueOne TempHash = 31 * TempHash + valueTwo TempHash = 31 * TempHash + valueThree Return CByte(TempHash Mod 256) End Function

    Read the article
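
    A small C# sketch of the point a bucket-density test misses: the counts can be identical while the collision structure differs. XOR is order-insensitive and self-cancelling, so permutations and certain distinct tuples always collide, while the multiply-by-a-prime scheme keeps positions distinct (the example values below are assumed for illustration):

      using System;

      class HashDemo
      {
          static byte GetXorHash(int a, int b, int c)
          {
              return (byte)((a ^ b ^ c) % 256);
          }

          static byte GetPrimeHash(int a, int b, int c)
          {
              unchecked
              {
                  int h = 17;
                  h = 31 * h + a;
                  h = 31 * h + b;
                  h = 31 * h + c;
                  return (byte)(h % 256);
              }
          }

          static void Main()
          {
              // Identical bucket density is not the whole story: XOR is order-
              // insensitive, so every permutation of the same values collides...
              Console.WriteLine(GetXorHash(1, 2, 3) == GetXorHash(3, 2, 1));     // True
              Console.WriteLine(GetPrimeHash(1, 2, 3) == GetPrimeHash(3, 2, 1)); // False
              // ...and it self-cancels, so (x, x, y) collides with (y, 0, 0).
              Console.WriteLine(GetXorHash(5, 5, 9) == GetXorHash(9, 0, 0));     // True
          }
      }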

  • Why do System.IO.Log SequenceNumbers have variable length?

    - by Doug McClean
    I'm trying to use the System.IO.Log features to build a recoverable transaction system. I understand it to be implemented on top of the Common Log File System. The usual ARIES approach to write-ahead logging involves persisting log record sequence numbers in places other than the log (for example, in the header of the database page modified by the logged action). Interestingly, the documentation for CLFS says that such sequence numbers are always 64-bit integers. Confusingly, however, the .Net wrapper around those SequenceNumbers can be constructed from a byte[] but not from a UInt64. It's value can also be read as a byte[], but not as a UInt64. Inspecting the implementation of SequenceNumber.GetBytes() reveals that it can in fact return arrays of either 8 or 16 bytes. This raises a few questions: Why do the .Net sequence numbers differ in size from the CLFS sequence numbers? Why are the .Net sequence numbers variable in length? Why would you need 128 bits to represent such a sequence number? It seems like you would truncate the log well before using up a 64-bit address space (16 exbibytes, or around 10^19 bytes, more if you address longer words)? If log sequence numbers are going to be represented as 128 bit integers, why not provide a way to serialize/deserialize them as pairs of UInt64s instead of rather-pointlessly incurring heap allocations for short-lived new byte[]s every time you need to write/read one? Alternatively, why bother making SequenceNumber a value type at all? It seems an odd tradeoff to double the storage overhead of log sequence numbers just so you can have an untruncated log longer than a million terabytes, so I feel like I'm missing something here, or maybe several things. I'd much appreciate it if someone in the know could set me straight.

    Read the article

  • Send already generated MHTML using CDOSYS through C#?

    - by mutex
    I have a ready generated MHTML as a byte array (from Aspose.Words) and would like to send it as an email. I'm trying to do this through CDOSYS, though am open to other suggestions. For now though I have the following: CDO.Message oMsg = new CDO.Message(); CDO.IConfiguration iConfg = oMsg.Configuration; Fields oFields = iConfg.Fields; // Set configuration. Field oField = oFields["http://schemas.microsoft.com/cdo/configuration/sendusing"]; oField.Value = CDO.CdoSendUsing.cdoSendUsingPort; oField = oFields["http://schemas.microsoft.com/cdo/configuration/smtpserver"]; oField.Value = SmtpClient.Host; oField = oFields["http://schemas.microsoft.com/cdo/configuration/smtpserverport"]; oField.Value = SmtpClient.Port; oFields.Update(); //oMsg.CreateMHTMLBody("http://www.microsoft.com", CDO.CdoMHTMLFlags.cdoSuppressNone, "", ""); // NEED MAGIC HERE :) oMsg.Subject = warning.Subject; // string oMsg.From = "[email protected]"; oMsg.To = warning.EmailAddress; oMsg.Send(); In this snippet, the warning variable has a Body property which is a byte[]. Where it says "NEED MAGIC HERE" in the code above I want to use this byte[] to set the body of the CDO Message. I have tried the following, which unsurprisingly doesn't work: oMsg.HTMLBody = System.Text.Encoding.ASCII.GetString(warning.Body); Anybody have any ideas how I can achieve what I want with CDOSYS or something else?

    Read the article
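
    One possible direction (an untested sketch; the exact interop signatures here are an assumption, so verify them against the CDO and ADODB documentation): instead of assigning HTMLBody, load the ready-made MHTML bytes into the message through its IDataSource via an ADODB.Stream:

      // Assumes COM references to the CDO and ADODB interop assemblies.
      ADODB.Stream stm = new ADODB.Stream();
      stm.Type = ADODB.StreamTypeEnum.adTypeBinary;
      stm.Open();
      stm.Write(warning.Body);            // the byte[] holding the generated MHTML
      stm.Position = 0;

      CDO.IDataSource ds = oMsg.DataSource;
      ds.OpenObject(stm, "_Stream");      // "_Stream" names the ADO Stream interface

      oMsg.Subject = warning.Subject;
      oMsg.From = "[email protected]";
      oMsg.To = warning.EmailAddress;
      oMsg.Send();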

  • How to Use Calculated Color Values with ColorMatrix?

    - by Otaku
    I am changing the color values of each pixel in an image based on a calculation. The problem is that this takes over 5 seconds on my machine with a 1000x1333 image and I'm looking for a way to optimize it to be much faster. I think ColorMatrix may be an option, but I'm having a difficult time figuring out how I would get a set of pixel RGB values, use them to calculate, and then set the new pixel value. I can see how this can be done if I was just modifying (multiplying, subtracting, etc.) the original value with ColorMatrix, but not how I can use the pixel's returned value to calculate a new value. For example: Sub DarkenPicture() Dim clrTestFolderPath = "C:\Users\Me\Desktop\ColorTest\" Dim originalPicture = "original.jpg" Dim Luminance As Single Dim bitmapOriginal As Bitmap = Image.FromFile(clrTestFolderPath + originalPicture) Dim Clr As Color Dim newR As Byte Dim newG As Byte Dim newB As Byte For x = 0 To bitmapOriginal.Width - 1 For y = 0 To bitmapOriginal.Height - 1 Clr = bitmapOriginal.GetPixel(x, y) Luminance = (0.21 * Clr.R + 0.72 * Clr.G + 0.07 * Clr.B) / 255 newR = Clr.R * Luminance newG = Clr.G * Luminance newB = Clr.B * Luminance bitmapOriginal.SetPixel(x, y, Color.FromArgb(newR, newG, newB)) Next Next bitmapOriginal.Save(clrTestFolderPath + "colorized.jpg", ImageFormat.Jpeg) End Sub The Luminance value is the calculated one. I know I can set ColorMatrix's M00, M11, M22 to 0, 0, 0 respectively and then put a new value in M40, M41, M42, but that new value is calculated from a multiplication and addition of that pixel's components (0.21 * Clr.R + 0.72 * Clr.G + 0.07 * Clr.B), and the result of that - Luminance - is multiplied by the color component. Is this even possible with ColorMatrix?

    Read the article
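
    A C# sketch of the fast path (the question's code is VB, so treat this as an illustration rather than a drop-in): a single DrawImage pass with a ColorMatrix replaces the GetPixel/SetPixel loop. Note that the matrix below outputs newR = newG = newB = luminance; the loop's "original channel times luminance" product is not a linear transform, so that exact formula cannot be expressed with one ColorMatrix.

      using System.Drawing;
      using System.Drawing.Imaging;

      class GrayscaleSketch
      {
          // Each row of the matrix is a source channel (R, G, B, A, w); each column a
          // destination channel, so every output channel gets the same weighted sum.
          static Bitmap ToLuminance(Bitmap source)
          {
              var matrix = new ColorMatrix(new float[][]
              {
                  new float[] {0.21f, 0.21f, 0.21f, 0, 0},
                  new float[] {0.72f, 0.72f, 0.72f, 0, 0},
                  new float[] {0.07f, 0.07f, 0.07f, 0, 0},
                  new float[] {0,     0,     0,     1, 0},
                  new float[] {0,     0,     0,     0, 1}
              });

              var result = new Bitmap(source.Width, source.Height);
              using (var g = Graphics.FromImage(result))
              using (var attrs = new ImageAttributes())
              {
                  attrs.SetColorMatrix(matrix);
                  g.DrawImage(source,
                              new Rectangle(0, 0, source.Width, source.Height),
                              0, 0, source.Width, source.Height,
                              GraphicsUnit.Pixel, attrs);
              }
              return result;
          }
      }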

  • What is the correct JNA mapping for UniChar on Mac OS X?

    - by Trejkaz
    I have a C struct like this: struct HFSUniStr255 { UInt16 length; UniChar unicode[255]; }; I have mapped this in the expected way: public class HFSUniStr255 extends Structure { public UInt16 length; // UInt16 is just an IntegerType with length 2 for convenience. public /*UniChar*/ char[] unicode = new char[255]; //public /*UniChar*/ byte[] unicode = new byte[255*2]; //public /*UniChar*/ UInt16[] unicode = new UInt16[255]; public HFSUniStr255() { } public HFSUniStr255(Pointer pointer) { super(pointer); } } If I use this version, I get every second character of the string into my char[] ("aits D" for "Macintosh HD".) I am assuming that this is something to do with being on a 64-bit platform and JNA mapping the value to a 32-bit wchar_t but then chopping off the high 16 bits on each wchar_t on copying them back. If I use the byte[] version, I get data which decodes correctly using the UTF-16LE charset. If I use the UInt16[] version, I get the right code point for each character but it is then inconvenient to convert them back into a string. Is there some way I can define my type as char[], and yet have it convert correctly?

    Read the article

  • C# UDP cannot receive any data

    - by StoneHeart
    Here is my code: Socket sck = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); sck.Bind(new IPEndPoint(IPAddress.Any, 0)); // Broadcast to find server string msg = "Imlookingforaserver:" + udp_listen_port; byte[] sendBytes4 = Encoding.ASCII.GetBytes(msg); IPEndPoint groupEP = new IPEndPoint(IPAddress.Parse("255.255.255.255"), server_port); sck.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Broadcast, 1); sck.SendTo(sendBytes4, groupEP); //Wait response from server Socket sck2 = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); sck2.Bind(new IPEndPoint(IPAddress.Any, udp_listen_port)); byte[] buffer = new byte[128]; EndPoint remoteEndPoint = new IPEndPoint(IPAddress.Any, udp_listen_port); sck2.ReceiveFrom(buffer, ref remoteEndPoint); //<<< I never pass this line I use the above code to try to find a server. First I broadcast a message and then I wait for a response from the server. In my test the server is written in C++ and runs on Windows Vista; the client is written in C# and runs on the same machine as the server. The problem is: the server can receive the message which the client broadcasts, but the client cannot receive anything from the server. I tried writing the client in C++ and it works like a charm, so I think my problem is in the C# client.

    Read the article
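
    A hedged guess at the cause, with a sketch: many discovery servers reply to the source endpoint of the datagram they received, which here is the ephemeral port of sck, while the client listens on a second socket bound to udp_listen_port, so the reply never arrives (the Windows firewall can produce the same symptom). Sending and receiving on one socket avoids that mismatch; serverPort and udpListenPort below are placeholder values:

      using System;
      using System.Net;
      using System.Net.Sockets;
      using System.Text;

      class DiscoveryClient
      {
          static void Main()
          {
              int serverPort = 12345;        // assumption: the server's discovery port
              int udpListenPort = 12346;     // the port the client expects replies on

              // Bind the *same* socket you broadcast from to the port you listen on.
              // If the server replies to the datagram's source endpoint, a second,
              // separately bound socket will never see the answer.
              using (var udp = new UdpClient(udpListenPort))
              {
                  udp.EnableBroadcast = true;
                  byte[] msg = Encoding.ASCII.GetBytes("Imlookingforaserver:" + udpListenPort);
                  udp.Send(msg, msg.Length, new IPEndPoint(IPAddress.Broadcast, serverPort));

                  var remote = new IPEndPoint(IPAddress.Any, 0);
                  byte[] reply = udp.Receive(ref remote);   // blocks until the server answers
                  Console.WriteLine("Got {0} bytes from {1}", reply.Length, remote);
              }
          }
      }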

  • Why Does This Maintainability Index Increase?

    - by Timothy
    I would be appreciative if someone could explain to me the difference between the following two pieces of code in terms of Visual Studio's Code Metrics rules. Why does the Maintainability Index increase slightly if I don't encapsulate everything within using ( )? Sample 1 (MI score of 71) public static String Sha1(String plainText) { using (SHA1Managed sha1 = new SHA1Managed()) { Byte[] text = Encoding.Unicode.GetBytes(plainText); Byte[] hashBytes = sha1.ComputeHash(text); return Convert.ToBase64String(hashBytes); } } Sample 2 (MI score of 73) public static String Sha1(String plainText) { Byte[] text, hashBytes; using (SHA1Managed sha1 = new SHA1Managed()) { text = Encoding.Unicode.GetBytes(plainText); hashBytes = sha1.ComputeHash(text); } return Convert.ToBase64String(hashBytes); } I understand metrics are meaningless outside of a broader context and understanding, and programmers should exercise discretion. While I could boost the score up to 76 with return Convert.ToBase64String(sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText))), I shouldn't. I would clearly be just playing with numbers and it isn't truly any more readable or maintainable at that point. I am curious though as to what the logic might be behind the increase in this case. It's obviously not line-count.

    Read the article

  • Culture Sensitive GetHashCode

    - by user114928
    Hi, I'm writing a C# application that will process some text and provide basic query functions. In order to ensure the best possible support for other languages, I am allowing the users of the application to specify the System.Globalization.CultureInfo (via the "en-GB" style code) and also the full range of collation options using the System.Globalization.CompareOptions flags enum. For regular string comparison I'm then using a combination of: a) the String.Compare overload that accepts the culture and options b) for some bulk processes I'm caching the byte data (KeyData) from CompareInfo.GetSortKey (the overload that accepts the options) and using a byte-by-byte comparison of the KeyData. This seemed fine (although please comment if you think these two methods shouldn't be mixed), but then I had reason to use the HashSet<T> class, which only has an overload for IEqualityComparer<T>. MS documentation seems to suggest that I should use StringComparer (which implements both IEqualityComparer<String> and IComparer<String>), but this only seems to support the "IgnoreCase" option from CompareOptions and not "IgnoreKanaType", "IgnoreSymbols", "IgnoreWidth" etc. I'm assuming that a StringComparer that ignores these other options could produce different hashcodes for two strings that might be considered the same using my other comparison options. I'd therefore get incorrect results from my application. My only thought at the moment is to create my own IEqualityComparer<String> that generates a hashcode from the SortKey.KeyData and compares equality by using the String.Compare overload. Any suggestions?

    Read the article
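
    A sketch along the lines the question already suggests (an illustration, not a verified production comparer): equality via CompareInfo.Compare with the chosen CompareOptions, and a hash code derived from the sort key bytes so that strings the comparison treats as equal also hash alike:

      using System;
      using System.Collections.Generic;
      using System.Globalization;

      sealed class CultureSensitiveComparer : IEqualityComparer<string>
      {
          private readonly CompareInfo _compareInfo;
          private readonly CompareOptions _options;

          public CultureSensitiveComparer(CultureInfo culture, CompareOptions options)
          {
              _compareInfo = culture.CompareInfo;
              _options = options;
          }

          public bool Equals(string x, string y)
          {
              if (ReferenceEquals(x, y)) return true;
              if (x == null || y == null) return false;
              return _compareInfo.Compare(x, y, _options) == 0;
          }

          public int GetHashCode(string s)
          {
              if (s == null) return 0;
              // Hash the collation key, not the raw characters, so strings that the
              // chosen options consider equal produce the same hash code.
              byte[] keyData = _compareInfo.GetSortKey(s, _options).KeyData;
              int hash = 17;
              foreach (byte b in keyData)
                  hash = unchecked(hash * 31 + b);
              return hash;
          }
      }

      // Usage: var set = new HashSet<string>(new CultureSensitiveComparer(
      //     new CultureInfo("en-GB"), CompareOptions.IgnoreCase | CompareOptions.IgnoreWidth));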

  • How do I access the CodeDomProvider from a class inheriting from Microsoft.VisualStudio.TextTemplating.VSHost.BaseCodeGeneratorWithSite?

    - by Charlie
    Does anyone know how to get a CodeDomProvider in the new Microsoft.VisualStudio.TextTemplating.VSHost.BaseCodeGeneratorWithSite from the Visual Studio 2010 SDK? I used to get access to it just by in mere inheritance of the class Microsoft.CustomTool.BaseCodeGeneratorWithSite, but now with this new class it is not there. I see a GlobalServiceProvider and a SiteServiceProvider but I can't find any example on how to use them. Microsoft.VisualStudio.TextTemplating.VSHost.BaseCodeGeneratorWithSite: http://msdn.microsoft.com/en-us/library/bb932625.aspx I was to do this: public class Generator : Microsoft.VisualStudio.TextTemplating.VSHost.BaseCodeGeneratorWithSite { public override string GetDefaultExtension() { // GetDefaultExtension IS ALSO NOT ACCESSIBLE... return this.InputFilePath.Substring(this.InputFilePath.LastIndexOf(".")) + ".designer" + base.GetDefaultExtension(); } // This method is being called every time the attached xml is saved. protected override byte[] GenerateCode(string inputFileName, string inputFileContent) { try { // Try to generate the wrapper file. return GenerateSourceCode(inputFileName); } catch (Exception ex) { // In case of a faliure - print the exception // as a comment in the source code. return GenerateExceptionCode(ex); } } public byte[] GenerateSourceCode(string inputFileName) { Dictionary<string, CodeCompileUnit> oCodeUnits; // THIS IS WHERE CodeProvider IS NOT ACCESSIBLE CodeDomProvider oCodeDomProvider = this.CodeProvider; string[] aCode = new MyCustomAPI.GenerateCode(inputFileName, ref oCodeDomProvider); return Encoding.ASCII.GetBytes(String.Join(@" ", aCode)); } private byte[] GenerateExceptionCode(Exception ex) { CodeCompileUnit oCode = new CodeCompileUnit(); CodeNamespace oNamespace = new CodeNamespace("System"); oNamespace.Comments.Add(new CodeCommentStatement(MyCustomAPI.Print(ex))); oCode.Namespaces.Add(oNamespace); string sCode = null; using (StringWriter oSW = new StringWriter()) { using (IndentedTextWriter oITW = new IndentedTextWriter(oSW)) { this.CodeProvider.GenerateCodeFromCompileUnit(oCode, oITW, null); sCode = oSW.ToString(); } } return Encoding.ASCII.GetBytes(sCode ); } } Thanks for your help!

    Read the article
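
    One direction to try (an unverified sketch; the service lookup is an assumption based on how the older single-file-generator samples obtained the provider): ask the site's service provider for the managed CodeDom provider service (SVSMDCodeDomProvider / IVSMDCodeDomProvider from Microsoft.VisualStudio.Designer.Interfaces) and drop a helper like this into the Generator class:

      // using System.CodeDom.Compiler;
      // using Microsoft.VisualStudio.Designer.Interfaces;

      private CodeDomProvider GetCodeProvider()
      {
          // SiteServiceProvider is the property exposed by the new base class;
          // whether it resolves this particular service is the assumption to verify.
          IVSMDCodeDomProvider vsmd = SiteServiceProvider.GetService(
              typeof(SVSMDCodeDomProvider).GUID) as IVSMDCodeDomProvider;
          return vsmd != null ? vsmd.CodeDomProvider as CodeDomProvider : null;
      }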

  • parse content away from structure in a binary file

    - by Jeff Godfrey
    Using C#, I need to read a packed binary file created using FORTRAN. The file is stored in an "Unformatted Sequential" format as described here (about half-way down the page in the "Unformatted Sequential Files" section): http://www.tacc.utexas.edu/services/userguides/intel8/fc/f_ug1/pggfmsp.htm As you can see from the URL, the file is organized into "chunks" of 130 bytes or less and includes 2 length bytes (inserted by the FORTRAN compiler) surrounding each chunk. So, I need to find an efficient way to parse the actual file payload away from the compiler-inserted formatting. Once I've extracted the actual payload from the file, I'll then need to parse it up into its varying data types. That'll be the next exercise. My first thoughts are to slurp up the entire file into a byte array using File.ReadAllBytes. Then, just iterate through the bytes, skipping the formatting and transferring the actual data to a second byte array. In the end, that second byte array should contain the actual file contents minus all the formatting, which I'd then need to go back through to get what I need. As I'm fairly new to C#, I thought there might be a better, more accepted way of tackling this. Also, in case it's helpful, these files could be fairly large (say 30MB), though most will be much smaller...

    Read the article
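
    A sketch built only on the description above (one length byte before and one after each chunk of 130 bytes or less; real Fortran unformatted files vary by compiler, some using 4-byte record markers, so verify the marker width against an actual file): a BinaryReader can stream the payload out in one pass without the intermediate copy:

      using System;
      using System.Collections.Generic;
      using System.IO;

      class UnformattedSequentialReader
      {
          // Reads all chunk payloads, skipping the compiler-inserted length markers.
          static byte[] ReadPayload(string path)
          {
              var payload = new List<byte>();
              using (var reader = new BinaryReader(File.OpenRead(path)))
              {
                  while (reader.BaseStream.Position < reader.BaseStream.Length)
                  {
                      int length = reader.ReadByte();              // leading length marker
                      payload.AddRange(reader.ReadBytes(length));  // actual data
                      reader.ReadByte();                           // trailing length marker
                  }
              }
              return payload.ToArray();
          }
      }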

  • OpenGL Calls Lock/Freeze

    - by Necrolis
    I am using some dell workstations(running WinXP Pro SP 2 & DeepFreeze) for development, but something was recenlty loaded onto these machines that prevents any opengl call(the call locks) from completing(and I know the code works as I have tested it on 'clean' machines, I also tested with simple opengl apps generated by dev-cpp, which will also lock on the dell machines). I have tried to debug my own apps to see where exactly the gl calls freeze, but there is some global system hook on ZwQueryInformationProcess that messes up calls to ZwQueryInformationThread(used by ExitThread), preventing me from debugging at all(it causes the debugger, OllyDBG, to go into an access violation reporting loop or the program to crash if the exception is passed along). the hook: ntdll.ZwQueryInformationProcess 7C90D7E0 B8 9A000000 MOV EAX,9A 7C90D7E5 BA 0003FE7F MOV EDX,7FFE0300 7C90D7EA FF12 CALL DWORD PTR DS:[EDX] 7C90D7EC - E9 0F28448D JMP 09D50000 7C90D7F1 9B WAIT 7C90D7F2 0000 ADD BYTE PTR DS:[EAX],AL 7C90D7F4 00BA 0003FE7F ADD BYTE PTR DS:[EDX+7FFE0300],BH 7C90D7FA FF12 CALL DWORD PTR DS:[EDX] 7C90D7FC C2 1400 RETN 14 7C90D7FF 90 NOP ntdll.ZwQueryInformationToken 7C90D800 B8 9C000000 MOV EAX,9C the messed up function + call: ntdll.ZwQueryInformationThread 7C90D7F0 8D9B 000000BA LEA EBX,DWORD PTR DS:[EBX+BA000000] 7C90D7F6 0003 ADD BYTE PTR DS:[EBX],AL 7C90D7F8 FE ??? ; Unknown command 7C90D7F9 7F FF JG SHORT ntdll.7C90D7FA 7C90D7FB 12C2 ADC AL,DL 7C90D7FD 14 00 ADC AL,0 7C90D7FF 90 NOP ntdll.ZwQueryInformationToken 7C90D800 B8 9C000000 MOV EAX,9C So firstly, anyone know what if anything would lead to OpenGL calls cause an infinite lock,and if there are any ways around it? and what would be creating such a hook in kernal memory ? Update: After some more fiddling, I have discovered a few more kernal hooks, a lot of them are used to nullify data returned by system information calls(such as the remote debugging port), I also managed to find out the what ever is doing this is using madchook.dll(by madshi) to do this, this dll is also injected into every running process(these seem to be some anti debugging code). Also, on the OpenGL side, it seems Direct X is fine/unaffected(I ran one of the DX 9 demo's without problems), so could one of these kernal hooks somehow affect OpenGL?

    Read the article

  • Computing "average" of two colors

    - by Francisco P.
    This is only marginally programming related - has much more to do w/ colors and their representation. I am working on a very low level app. I have an array of bytes in memory. Those are characters. They were rendered with anti-aliasing: they have values from 0 to 255, 0 being fully transparent and 255 totally opaque (alpha, if you wish). I am having trouble conceiving an algorithm for the rendering of this font. I'm doing the following for each pixel: // intensity is the weight I talked about: 0 to 255 intensity = glyphs[text[i]][x + GLYPH_WIDTH*y]; if (intensity == 255) continue; // Don't draw it, fully transparent else if (intensity == 0) setPixel(x + xi, y + yi, color, base); // Fully opaque, can draw original color else { // Here's the tricky part // Get the pixel in the destination for averaging purposes pixel = getPixel(x + xi, y + yi, base); // transfer is an int for calculations transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.red + (float) pixel.red)/2); // This is my attempt at averaging newPixel.red = (Byte) transfer; transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.green + (float) pixel.green)/2); newPixel.green = (Byte) transfer; // transfer = (int) ((float) ((float) 255.0 - (float) intensity)/255.0 * (((float) color.blue) + (float) pixel.blue)/2); transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.blue + (float) pixel.blue)/2); newPixel.blue = (Byte) transfer; // Set the newpixel in the desired mem. position setPixel(x+xi, y+yi, newPixel, base); } The results, as you can see, are less than desirable. That is a very zoomed in image, at 1:1 scale it looks like the text has a green "aura". Any idea for how to properly compute this would be greatly appreciated. Thanks for your time!

    Read the article
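
    A sketch of conventional alpha blending (in C# for illustration; the question's convention for which end of 0-255 means "glyph" is contradictory, so that mapping is an assumption to check): weight the glyph colour and the existing background by the coverage value instead of averaging them, since averaging with the destination tints every anti-aliased pixel toward the text colour and produces the halo:

      static class GlyphBlend
      {
          // out = (coverage * glyphColor + (255 - coverage) * background) / 255
          // coverage: 0 = no glyph, 255 = pixel fully covered by the glyph.
          // If the glyph array is stored inverted (255 = background), pass 255 - value.
          public static byte Blend(byte glyph, byte background, byte coverage)
          {
              return (byte)((coverage * glyph + (255 - coverage) * background) / 255);
          }
      }

      // per channel:
      //   newPixel.red   = GlyphBlend.Blend(color.red,   pixel.red,   coverage);
      //   newPixel.green = GlyphBlend.Blend(color.green, pixel.green, coverage);
      //   newPixel.blue  = GlyphBlend.Blend(color.blue,  pixel.blue,  coverage);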

  • Reading in bytes produced by PHP script in Java to create a bitmap

    - by Kareem
    I'm having trouble getting the compressed jpeg image (stored as a blob in my database). here is the snippet of code I use to output the image that I have in my database: if($row = mysql_fetch_array($sql)) { $size = $row['image_size']; $image = $row['image']; if($image == null){ echo "no image!"; } else { header('Content-Type: content/data'); header("Content-length: $size"); echo $image; } } here is the code that I use to read in from the server: URL sizeUrl = new URL(MYURL); URLConnection sizeConn = sizeUrl.openConnection(); // Get The Response BufferedReader sizeRd = new BufferedReader(new InputStreamReader(sizeConn.getInputStream())); String line = ""; while(line.equals("")){ line = sizeRd.readLine(); } int image_size = Integer.parseInt(line); if(image_size == 0){ return null; } URL imageUrl = new URL(MYIMAGEURL); URLConnection imageConn = imageUrl.openConnection(); // Get The Response InputStream imageRd = imageConn.getInputStream(); byte[] bytedata = new byte[image_size]; int read = imageRd.read(bytedata, 0, image_size); Log.e("IMAGEDOWNLOADER", "read "+ read + " amount of bytes"); Log.e("IMAGEDOWNLOADER", "byte data has length " + bytedata.length); Bitmap theImage = BitmapFactory.decodeByteArray(bytedata, 0, image_size); if(theImage == null){ Log.e("IMAGEDOWNLOADER", "the bitmap is null"); } return theImage; My logging shows that everything has the right length, yet theImage is always null. I'm thinking it has to do with my content type. Or maybe the way I'm uploading?

    Read the article

  • How to get encoding from MAPI message with PR_BODY_A tag (windows mobile)?

    - by SadSido
    Hi, everyone! I am developing a program, that handles incoming e-mail and sms through windows-mobile MAPI. The code basically looks like that: ulBodyProp = PR_BODY_A; hr = piMessage->OpenProperty(ulBodyProp, NULL, STGM_READ, 0, (LPUNKNOWN*)&piStream); if (hr == S_OK) { // ... get body size in bytes ... STATSTG statstg; piStream->Stat(&statstg, 0); ULONG cbBody = statstg.cbSize.LowPart; // ... allocate memory for the buffer ... BYTE* pszBodyInBytes = NULL; boost::scoped_array<BYTE> szBodyInBytesPtr(pszBodyInBytes = new BYTE[cbBody+2]); // ... read body into the pszBodyInBytes ... } That works and I have a message body. The problem is that this body is multibyte encoded and I need to return a Unicode string. I guess, I have to use ::MultiByteToWideChar() function, but how can I guess, what codepage should I apply? Using CP_UTF8 is naive, because it can simply be not in UTF8. Using CP_ACP works, well, sometimes, but sometimes does not. So, my question is: how can I retrieve the information about message codepage. Does MAPI provide any functions for it? Or is there a way to decode multibyte string, other than MultiByteToWideChar()? Thanks!

    Read the article

  • Botan linking error on Windows MSVC

    - by Jake Petroules
    I am trying to compile a library linking to the version of Botan from the Qt Creator sources with MSVC 2008 but am receiving the following error. MinGW compiles and links it fine. What is the issue? databasecrypto.obj:-1: error: LNK2019: unresolved external symbol "public: static unsigned int const Botan::Pipe::DEFAULT_MESSAGE" (?DEFAULT_MESSAGE@Pipe@Botan@@2IB) referenced in function "private: static class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __cdecl DatabaseCrypto::b64_encode(class Botan::SecureVector<unsigned char> const &)" (?b64_encode@DatabaseCrypto@@CA?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@ABV?$SecureVector@E@Botan@@@Z) /*! Encodes the Botan byte array \a in as a base 64 string. \param in The Botan byte array to encode. */ std::string DatabaseCrypto::b64_encode(const SecureVector<Botan::byte> &in) { Pipe pipe(new Base64_Encoder); pipe.process_msg(in); return pipe.read_all_as_string(); // <-- default parameter here is Botan::Pipe::DEFAULT_MESSAGE }

    Read the article

  • Is there really such a thing as a char or short in modern programming?

    - by Dean P
    Howdy all, I've been learning to program for a Mac over the past few months (I have experience in other languages). Obviously that has meant learning the Objective-C language and thus the plainer C it is predicated on. So I have stumbled on this quote, which refers to the C/C++ language in general, not just the Mac platform. With C and C++ prefer use of int over char and short. The main reason behind this is that C and C++ perform arithmetic operations and parameter passing at integer level. If you have an integer value that can fit in a byte, you should still consider using an int to hold the number. If you use a char, the compiler will first convert the values into integer, perform the operations and then convert back the result to char. So my question is: is this the case in the Mac Desktop and iPhone OS environments? I understand when talking about these environments we're actually talking about 3-4 different architectures (PPC, i386, ARM and the A4 ARM variant) so there may not be a single answer. Nevertheless, does the general principle hold that in modern 32-bit / 64-bit systems using 1-2 byte variables that don't align with the machine's natural 4-byte words doesn't provide much of the efficiency we may expect? For instance, a plain old C array of 100,000 chars is smaller than the same 100,000 ints by a factor of four, but if during an enumeration, reading out each index involves a cast/boxing/unboxing of sorts, will we see overall lower 'performance' despite the saved memory overhead?

    Read the article

  • Encoding/Decoding hex packet

    - by Roberto Pulvirenti
    I want to send this hex packet: 00 38 60 dc 00 00 04 33 30 3c 00 00 00 20 63 62 39 62 33 61 36 37 34 64 31 36 66 32 31 39 30 64 30 34 30 63 30 39 32 66 34 66 38 38 32 62 00 06 35 2e 31 33 2e 31 00 00 02 3c so I build the string: string packet = "003860dc0000" + textbox1.text+ "00000020" + textbox2.text+ "0006" + textbox3.text; then "convert" it to ASCII: conn_str = HexString2Ascii(packet); then I send the packet... but I get this: 00 38 60 **c3 9c** 00 00 04 33 30 3c 00 00 00 20 63 62 39 62 33 61 36 37 34 64 31 36 66 32 31 39 30 64 30 34 30 63 30 39 32 66 34 66 38 38 32 62 00 06 35 2e 31 33 2e 31 00 00 02 3c **0a** Why?? Thank you! P.S. The function is: private string HexString2Ascii(string hexString) { byte[] tmp; int j = 0; int lenght; lenght=hexString.Length-2; tmp = new byte[(hexString.Length)/2]; for (int i = 0; i <= lenght; i += 2) { tmp[j] =(byte)Convert.ToChar(Int32.Parse(hexString.Substring(i, 2), System.Globalization.NumberStyles.HexNumber)); j++; } return Encoding.GetEncoding(1252).GetString(tmp); }

    Read the article
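
    The mangling is characteristic of pushing raw bytes through a text encoding: 0xDC becomes "Ü" in codepage 1252, and writing that string back out as UTF-8 yields the two bytes C3 9C (the trailing 0A looks like a newline appended by whatever logs or sends the string). A sketch of the usual fix: keep the packet as a byte[] end to end and send the bytes, not a string (the socket mentioned in the comment is hypothetical):

      using System.Globalization;

      static class HexPacket
      {
          // Convert "003860dc..." straight to bytes; no code page is involved, so
          // nothing gets re-encoded on the way out.
          public static byte[] ToBytes(string hex)
          {
              byte[] result = new byte[hex.Length / 2];
              for (int i = 0; i < result.Length; i++)
                  result[i] = byte.Parse(hex.Substring(i * 2, 2), NumberStyles.HexNumber);
              return result;
          }
      }

      // byte[] packet = HexPacket.ToBytes("003860dc0000" + ...);
      // socket.Send(packet);   // pass the byte[] to Socket.Send / NetworkStream.Write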

  • What is the purpose of the s==NULL case for mbrtowc?

    - by R..
    mbrtowc is specified to handle a NULL pointer for the s (multibyte character pointer) argument as follows: If s is a null pointer, the mbrtowc() function shall be equivalent to the call: mbrtowc(NULL, "", 1, ps) In this case, the values of the arguments pwc and n are ignored. As far as I can tell, this usage is largely useless. If ps is not storing any partially-converted character, the call will simply return 0 with no side effects. If ps is storing a partially-converted character, then since '\0' is not valid as the next byte in a multibyte sequence ('\0' can only be a string terminator), the call will return (size_t)-1 with errno==EILSEQ. and leave ps in an undefined state. The intended usage seems to have been to reset the state variable, particularly when NULL is passed for ps and the internal state has been used, analogous to mbtowc's behavior with stateful encodings, but this is not specified anywhere as far as I can tell, and it conflicts with the semantics for mbrtowc's storage of partially-converted characters (if mbrtowc were to reset state when encountering a 0 byte after a potentially-valid initial subsequence, it would be unable to detect this dangerous invalid sequence). If mbrtowc were specified to reset the state variable only when s is NULL, but not when it points to a 0 byte, a desirable state-reset behavior would be possible, but such behavior would violate the standard as written. Is this a defect in the standard? As far as I can tell, there is absolutely no way to reset the internal state (used when ps is NULL) once an illegal sequence has been encountered, and thus no correct program can use mbrtowc with ps==NULL.

    Read the article

  • Memorystream and Large Object Heap

    - by Flo
    I have to transfer large files between computers over unreliable connections using WCF. Because I want to be able to resume the file and I don't want to be limited in file size by WCF, I am chunking the files into 1MB pieces. These "chunks" are transported as streams, which works quite nicely so far. My steps are: 1. open filestream 2. read chunk from file into byte[] and create memorystream 3. transfer chunk 4. back to 2. until the whole file is sent My problem is in step 2. I assume that when I create a memory stream from a byte array, it will end up on the LOH and ultimately cause an outofmemory exception. I could not actually reproduce this error, so maybe I am wrong in my assumption. Now, I don't want to send the byte[] in the message, as WCF will tell me the array size is too big. I can change the max allowed array size and/or the size of my chunk, but I hope there is another solution. My actual question(s): Will my current solution create objects on the LOH and will that cause me problems? Is there a better way to solve this? Btw: On the receiving side I simply read smaller chunks from the arriving stream and write them directly into the file, so no large byte arrays are involved.

    Read the article
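
    For the first question: arrays of roughly 85,000 bytes or more are allocated on the large object heap, so a fresh 1 MB byte[] per chunk does land there (the usual consequence is fragmentation rather than an immediate OutOfMemoryException). A sketch of one way around it, offered as an assumption-labelled illustration rather than the solution: keep each chunk below the threshold and let the MemoryStream wrap the array without copying it:

      using System.IO;

      static class ChunkReader
      {
          // Keep each chunk below the ~85,000-byte large-object threshold (64 KB here),
          // so neither the byte[] nor the MemoryStream that wraps it lands on the LOH.
          private const int ChunkSize = 64 * 1024;

          public static MemoryStream ReadNextChunk(FileStream file)
          {
              byte[] buffer = new byte[ChunkSize];
              int read = file.Read(buffer, 0, ChunkSize);
              if (read == 0)
                  return null;                                 // end of file reached
              // This constructor wraps the existing array; no second copy is allocated.
              return new MemoryStream(buffer, 0, read, false);
          }
      }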

  • How do I post a .wav file from CS5 Flash, AS3 to a Java servlet?

    - by Muostar
    Hi, I am trying to send a byteArray from my .fla to my running Tomcat server integrated in Eclipse. From Flash I am using the following code: var loader:URLLoader = new URLLoader(); var header:URLRequestHeader = new URLRequestHeader("audio/wav", "application/octet-stream"); var request:URLRequest = new URLRequest("http://localhost:8080/pdp/Server?wav=" + tableID); request.requestHeaders.push(header); request.method = URLRequestMethod.POST; request.data = file;//wav; loader.load(request); And my Java servlet looks as follows: try{ int readBytes = -1; int lengthOfBuffer = request.getContentLength(); InputStream input = request.getInputStream(); byte[] buffer = new byte[lengthOfBuffer]; ByteArrayOutputStream output = new ByteArrayOutputStream(lengthOfBuffer); while((readBytes = input.read(buffer, 0, lengthOfBuffer)) != -1) { output.write(buffer, 0, readBytes); } byte[] finalOutput = output.toByteArray(); input.close(); FileOutputStream fos = new FileOutputStream(getServletContext().getRealPath(""+"/temp/"+wav+".wav")); fos.write(finalOutput); fos.close(); When I run the Flash .swf file and send the byteArray to the server, I receive the following in the server's console window: (loads of loads of Chinese symbols) May 20, 2010 7:04:57 PM org.apache.tomcat.util.http.Parameters processParameters WARNING: Parameters: Character decoding failed. Parameter '? (loads of loads of Chinese symbols) and then looping this for a long time. It seems like I receive the bytes but am not encoding/decoding them correctly. What can I do?

    Read the article

  • What is the proper way to code a read-while loop in Scala?

    - by ARKBAN
    What is the "proper" way of writing the standard read-while loop in Scala? By proper I mean written in a Scala-like way as opposed to a Java-like way. Here is the code I have in Java: MessageDigest md = MessageDigest.getInstance( "MD5" ); InputStream input = new FileInputStream( "file" ); byte[] buffer = new byte[1024]; int readLen; while( ( readLen = input.read( buffer ) ) != -1 ) md.update( buffer, 0, readLen ); return md.digest(); Here is the code I have in Scala: val md = MessageDigest.getInstance( hashInfo.algorithm ) val input = new FileInputStream( "file" ) val buffer = new Array[ Byte ]( 1024 ) var readLen = 0 while( readLen != -1 ) { readLen = input.read( buffer ) if( readLen != -1 ) md.update( buffer, 0, readLen ) } md.digest The Scala code is correct and works, but feels very un-Scala-ish. For one, it is a literal translation of the Java code, taking advantage of none of the advantages of Scala. Further, it is actually longer than the Java code! I really feel like I'm missing something, but I can't figure out what. I'm fairly new to Scala, and so I'm asking the question to avoid falling into the pitfall of writing Java-style code in Scala. I'm more interested in the Scala way to solve this kind of problem than in any specific helper method that might be provided by the Scala API to hash a file. (I apologize in advance for my ad hoc Scala adjectives throughout this question.)

    Read the article

  • C# Web Service gets stuck waiting for lock, does not return

    - by blue
    We have a C#(2.0) application which talks to our server(in java) via web services. Lately we have started seeing following behavior in (ONLY)one of our lab machines(XP): Once in a while(every few days), one of the webservice request will just get stuck, will not return or timeout. Following is the stacktrace where it seem to be stuck. Have no clue what is going on here. Any pointer would be of great help. ESP EIP 05eceeec 7c90eb94 [GCFrame: 05eceeec] 05ecefbc 7c90eb94 [HelperMethodFrame_1OBJ: 05ecefbc] System.Threading.Monitor.Enter(System.Object) 05ecf014 7a5b0034 System.Net.ConnectionGroup.Disassociate(System.Net.Connection) 05ecf040 7a5aeaa7 System.Net.Connection.PrepareCloseConnectionSocket(System.Net.ConnectionReturnResult ByRef) 05ecf0a4 7a5ac0e1 System.Net.Connection.ReadStartNextRequest(System.Net.WebRequest, System.Net.ConnectionReturnResult ByRef) 05ecf0e8 7a5b1119 System.Net.ConnectStream.CallDone(System.Net.ConnectionReturnResult) 05ecf0fc 7a5b3b5a System.Net.ConnectStream.ReadChunkedSync(Byte[], Int32, Int32) 05ecf114 7a5b2b90 System.Net.ConnectStream.ReadWithoutValidation(Byte[], Int32, Int32, Boolean) 05ecf160 7a5b29cc System.Net.ConnectStream.Read(Byte[], Int32, Int32) 05ecf1a0 79473cab System.IO.StreamReader.ReadBuffer(Char[], Int32, Int32, Boolean ByRef) 05ecf1c4 79473bd6 System.IO.StreamReader.Read(Char[], Int32, Int32) 05ecf1e8 69c29119 System.Xml.XmlTextReaderImpl.ReadData() 05ecf1f8 69c2ad70 System.Xml.XmlTextReaderImpl.ParseDocumentContent() 05ecf20c 69c292d7 System.Xml.XmlTextReaderImpl.Read() 05ecf21c 69c2929d System.Xml.XmlTextReader.Read() 05ecf220 6991b3e7 System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(System.Web.Services.Protocols.SoapClientMessage, System.Net.WebResponse, System.IO.Stream, Boolean) 05ecf268 69919ed1 System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(System.String, System.Object[])

    Read the article
