Search Results

Search found 3768 results on 151 pages for 'lite byte'.

Page 88/151

  • Get image data for a Direct3D rendering stream

    - by Mr Bell
    I would like to get at the raw image data - as in a pointer to a byte array, or something like that - of the image output from a Direct3D app, without actually rendering it to the monitor. I need to do this so that I can expose the Direct3D output as a DirectShow source filter. Visual Studio 2008, C++.

  • Code golf - hex to (raw) binary conversion

    - by Alnitak
    In response to this question asking about hex to (raw) binary conversion, a comment suggested that it could be solved in "5-10 lines of C, or any other language." I'm sure that for (some) scripting languages that could be achieved, and I would like to see how. Can we prove that comment true for C, too? NB: this doesn't mean hex to ASCII binary - specifically, the output should be a raw octet stream corresponding to the input ASCII hex. Also, the input parser should skip/ignore white space.

    edit (by Brian Campbell): May I propose the following rules, for consistency? Feel free to edit or delete these if you don't think they are helpful, but I think that since there has been some discussion of how certain cases should work, some clarification would be helpful.

    - The program must read from stdin and write to stdout (we could also allow reading from and writing to files passed in on the command line, but I can't imagine that would be shorter in any language than stdin and stdout).
    - The program must use only packages included with your base, standard language distribution. In the case of C/C++, this means their respective standard libraries, and not POSIX.
    - The program must compile or run without any special options passed to the compiler or interpreter (so 'gcc myprog.c', 'python myprog.py', or 'ruby myprog.rb' are OK, while 'ruby -rscanf myprog.rb' is not allowed; requiring/importing modules counts against your character count).
    - The program should read integer bytes represented by pairs of adjacent hexadecimal digits (upper, lower, or mixed case), optionally separated by whitespace, and write the corresponding bytes to output. Each pair of hexadecimal digits is written with the most significant nibble first.
    - The behavior of the program on invalid input (characters besides [0-9a-fA-F \t\r\n], spaces separating the two characters of an individual byte, an odd number of hex digits in the input) is undefined; any behavior (other than actively damaging the user's computer or something) on bad input is acceptable (throwing an error, stopping output, ignoring bad characters, and treating a single character as the value of one byte are all OK).
    - The program may write no additional bytes to output.
    - Code is scored by fewest total bytes in the source file. (Or, if we wanted to be more true to the original challenge, the score would be based on the lowest number of lines of code; I would impose an 80-character limit per line in that case, since otherwise you'd get a bunch of ties for 1 line.)
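
    A non-golfed reference implementation may make the expected behavior concrete. This is a sketch in C# (not a golf entry; C# is simply the language used by most items on this page): it reads hex digits from stdin, skips whitespace, and writes one raw byte per digit pair to stdout. Whitespace inside a pair is simply skipped here, which the rules leave as undefined behavior, and invalid characters throw a FormatException, which the rules also allow.

        using System;

        class HexToRaw
        {
            static void Main()
            {
                var stdin = Console.OpenStandardInput();
                var stdout = Console.OpenStandardOutput();
                int hi = -1, c;
                while ((c = stdin.ReadByte()) != -1)
                {
                    char ch = (char)c;
                    if (char.IsWhiteSpace(ch)) continue;             // ignore separators
                    int nibble = Convert.ToInt32(ch.ToString(), 16); // one hex digit, any case
                    if (hi < 0) hi = nibble;                         // first digit of the pair
                    else { stdout.WriteByte((byte)((hi << 4) | nibble)); hi = -1; }
                }
                stdout.Flush();
            }
        }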

  • union marshalling issue in C#

    - by senthil
    I have a union inside a structure, which looks like this:

        struct tDeviceProperty {
            DWORD Tag;
            DWORD Size;
            union _DP value;
        };

        typedef union _DP {
            short int i;
            LONG l;
            ULONG ul;
            float flt;
            double dbl;
            BOOL b;
            double at;
            FILETIME ft;
            LPSTR lpszA;
            LPWSTR lpszW;
            LARGE_INTEGER li;
            struct tBinary bin;
            BYTE reserved[40];
        } __UDP;

        struct tBinary {
            ULONG size;
            BYTE *bin;
        };

    The bin member of the tBinary structure has to be converted to tImage (structure given below):

        struct tImage {
            DWORD x;
            DWORD y;
            DWORD z;
            DWORD Resolution;
            DWORD type;
            DWORD ID;
            diccid_t SourceID;
            const void *buffer;
            const char *Info;
            const char *UserImageID;
        };

    To use the same from C# I have written marshalling code, but it does not give proper values when converting the pointer back to a structure. The C# code is as follows:

        tBinary tBin = new tBinary();
        IntPtr tBinbuffer = Marshal.AllocCoTaskMem(Marshal.SizeOf(tBin));
        Marshal.StructureToPtr(tBin.bin, tBinbuffer, false);

        tDeviceProperty tDevice = new tDeviceProperty();
        tDevice.bin = tBinbuffer;
        IntPtr tDevicebuffer = Marshal.AllocCoTaskMem(Marshal.SizeOf(tDevice));
        Marshal.StructureToPtr(tDevice.bin, tDevicebuffer, false);

        Battary tbatt = new Battary();
        tbatt.value = tDevicebuffer;
        IntPtr tbattbuffer = Marshal.AllocCoTaskMem(Marshal.SizeOf(tbatt));
        Marshal.StructureToPtr(tbatt.value, tbattbuffer, false);

        result = GetDeviceProperty(ref tbattbuffer);

        Battary v = (Battary)Marshal.PtrToStructure(tbattbuffer, typeof(Battary));
        tDeviceProperty v2 = (tDeviceProperty)Marshal.PtrToStructure(tDevicebuffer, typeof(tDeviceProperty));
        tBinary v3 = (tBinary)Marshal.PtrToStructure(tBinbuffer, typeof(tBinary));
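
    For what it's worth, the usual way to express a C union in C# is an explicit-layout struct in which every union member gets FieldOffset(0), rather than marshalling members one by one. A minimal sketch under that approach (type and field names mirror the question, but only a subset of the union members is shown; the 40-byte size comes from the BYTE reserved[40] member):

        using System;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        struct TBinary
        {
            public uint size;    // ULONG
            public IntPtr bin;   // BYTE* - copy out later with Marshal.Copy
        }

        [StructLayout(LayoutKind.Explicit, Size = 40)]  // BYTE reserved[40] fixes the union size
        struct DP
        {
            [FieldOffset(0)] public short i;
            [FieldOffset(0)] public int l;        // LONG
            [FieldOffset(0)] public uint ul;      // ULONG
            [FieldOffset(0)] public float flt;
            [FieldOffset(0)] public double dbl;
            [FieldOffset(0)] public long li;      // LARGE_INTEGER
            [FieldOffset(0)] public IntPtr lpszA; // strings: Marshal.PtrToStringAnsi
            [FieldOffset(0)] public TBinary bin;  // the overlapped struct member
        }

        [StructLayout(LayoutKind.Sequential)]
        struct TDeviceProperty
        {
            public uint Tag;
            public uint Size;
            public DP value;
        }

    Once a TDeviceProperty has been read back with Marshal.PtrToStructure, the binary payload can be copied out with, for example, Marshal.Copy(prop.value.bin.bin, managed, 0, (int)prop.value.bin.size).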

  • Theory: Can a JIT compiler be used to parse the whole program first, then execute it later?

    - by unknownthreat
    Normally, a JIT compiler works by reading the byte code, translating it into machine code, and executing it. This is what I understand, but in theory, is it possible to make the JIT compiler parse and compile the whole program first, and then execute it later as machine code? I do not know exactly how a JIT compiler works internally, so I don't know whether this is feasible. But theoretically, is it possible? Or have I got this wrong?
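
    On .NET this idea exists in two forms: ahead-of-time compilation (the ngen.exe tool compiles an assembly to machine code at install time), and forcing the JIT to compile everything up front before running it. A hedged sketch of the latter, using RuntimeHelpers.PrepareMethod to pre-compile every concrete method in the current assembly (in practice you would also want to skip P/Invoke stubs and similar special methods):

        using System;
        using System.Reflection;
        using System.Runtime.CompilerServices;

        static class PreJit
        {
            // Ask the CLR to JIT-compile every concrete method in this assembly
            // now, so no bytecode translation remains to be done at run time.
            public static void CompileAll()
            {
                const BindingFlags All = BindingFlags.Public | BindingFlags.NonPublic |
                                         BindingFlags.Instance | BindingFlags.Static |
                                         BindingFlags.DeclaredOnly;
                foreach (Type type in Assembly.GetExecutingAssembly().GetTypes())
                {
                    if (type.ContainsGenericParameters) continue;  // open generics can't be prepared
                    foreach (MethodInfo m in type.GetMethods(All))
                        if (!m.IsAbstract && !m.ContainsGenericParameters)
                            RuntimeHelpers.PrepareMethod(m.MethodHandle);
                }
            }
        }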

  • PHP: Combine Two 16-bit Integers into a 32-bit integer

    - by Goro
    Hello, I am trying to combine two integers in my application. By combine I mean stick one byte stream at the end of the other, not concatenate the strings. The two integers are passed from hardware that can't pass a 32-bit value directly, but instead passes two consecutive 16-bit values separately. Thanks,
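
    The usual trick is to shift the high word left by 16 bits and OR in the low word; in PHP that is ($high << 16) | ($low & 0xFFFF). The same idea in C# (assuming the hardware sends the most significant word first; swap the arguments if not):

        static class WordCombiner
        {
            // 'high' is assumed to hold the most significant 16 bits.
            public static uint Combine(ushort high, ushort low)
            {
                return ((uint)high << 16) | low;
            }
        }

        // Example: WordCombiner.Combine(0x1234, 0x5678) == 0x12345678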

  • How to encrypt and save a binary stream after serialization and read it back?

    - by Anindya Chatterjee
    I am having some problems using CryptoStream when I want to encrypt a binary stream after binary serialization and save it to a file. I am getting the following exception: System.ArgumentException: Stream was not readable. Can anybody please show me how to encrypt a binary stream, save it to a file, and deserialize it back correctly? The code is as follows:

        class Program
        {
            public static void Main(string[] args)
            {
                var b = new B { Name = "BB" };
                WriteFile<B>(@"C:\test.bin", b, true);
                var bb = ReadFile<B>(@"C:\test.bin", true);
                Console.WriteLine(b.Name == bb.Name);
                Console.ReadLine();
            }

            public static T ReadFile<T>(string file, bool decrypt)
            {
                T bObj = default(T);
                var _binaryFormatter = new BinaryFormatter();
                Stream buffer = null;
                using (var stream = new FileStream(file, FileMode.OpenOrCreate))
                {
                    if (decrypt)
                    {
                        const string strEncrypt = "*#4$%^.++q~!cfr0(_!#$@$!&#&#*&@(7cy9rn8r265&$@&*E^184t44tq2cr9o3r6329";
                        byte[] dv = { 0x12, 0x34, 0x56, 0x78, 0x90, 0xAB, 0xCD, 0xEF };
                        CryptoStream cs;
                        DESCryptoServiceProvider des = null;
                        var byKey = Encoding.UTF8.GetBytes(strEncrypt.Substring(0, 8));
                        using (des = new DESCryptoServiceProvider())
                        {
                            cs = new CryptoStream(stream, des.CreateEncryptor(byKey, dv), CryptoStreamMode.Read);
                        }
                        buffer = cs;
                    }
                    else
                        buffer = stream;
                    try
                    {
                        bObj = (T)_binaryFormatter.Deserialize(buffer);
                    }
                    catch (SerializationException ex)
                    {
                        Console.WriteLine(ex.Message);
                    }
                }
                return bObj;
            }

            public static void WriteFile<T>(string file, T bObj, bool encrypt)
            {
                var _binaryFormatter = new BinaryFormatter();
                Stream buffer;
                using (var stream = new FileStream(file, FileMode.Create))
                {
                    try
                    {
                        if (encrypt)
                        {
                            const string strEncrypt = "*#4$%^.++q~!cfr0(_!#$@$!&#&#*&@(7cy9rn8r265&$@&*E^184t44tq2cr9o3r6329";
                            byte[] dv = { 0x12, 0x34, 0x56, 0x78, 0x90, 0xAB, 0xCD, 0xEF };
                            CryptoStream cs;
                            DESCryptoServiceProvider des = null;
                            var byKey = Encoding.UTF8.GetBytes(strEncrypt.Substring(0, 8));
                            using (des = new DESCryptoServiceProvider())
                            {
                                cs = new CryptoStream(stream, des.CreateEncryptor(byKey, dv), CryptoStreamMode.Write);
                                buffer = cs;
                            }
                        }
                        else
                            buffer = stream;
                        _binaryFormatter.Serialize(buffer, bObj);
                        buffer.Flush();
                    }
                    catch (SerializationException ex)
                    {
                        Console.WriteLine(ex.Message);
                    }
                }
            }
        }

        [Serializable]
        public class B
        {
            public string Name { get; set; }
        }

    It throws the serialization exception as follows:

        The input stream is not a valid binary format. The starting contents (in bytes) are: 3F-17-2E-20-80-56-A3-2A-46-63-22-C4-49-56-22-B4-DA ...
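
    Two details in this code are likely culprits. First, ReadFile builds its CryptoStream with CreateEncryptor; reading back requires CreateDecryptor, otherwise the deserializer sees garbage (hence "not a valid binary format"). Second, the CryptoStream in WriteFile is never closed, so FlushFinalBlock is never called and the final DES block never reaches the file; Flush() on the wrapper is not enough. A minimal corrected sketch of the round trip (same key/IV scheme as the question; DES and BinaryFormatter are kept only to mirror the original code):

        using System;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;
        using System.Security.Cryptography;

        static class SecureSerializer
        {
            static readonly byte[] Iv = { 0x12, 0x34, 0x56, 0x78, 0x90, 0xAB, 0xCD, 0xEF };

            public static void Write<T>(string file, T obj, byte[] key)
            {
                using (var des = new DESCryptoServiceProvider())
                using (var stream = new FileStream(file, FileMode.Create))
                using (var cs = new CryptoStream(stream, des.CreateEncryptor(key, Iv),
                                                 CryptoStreamMode.Write))
                {
                    new BinaryFormatter().Serialize(cs, obj);
                }   // disposing the CryptoStream flushes the final block
            }

            public static T Read<T>(string file, byte[] key)
            {
                using (var des = new DESCryptoServiceProvider())
                using (var stream = new FileStream(file, FileMode.Open))
                using (var cs = new CryptoStream(stream, des.CreateDecryptor(key, Iv),
                                                 CryptoStreamMode.Read))
                {
                    return (T)new BinaryFormatter().Deserialize(cs);  // decryptor, not encryptor
                }
            }
        }

    The key would be derived as in the question, e.g. Encoding.UTF8.GetBytes(strEncrypt.Substring(0, 8)).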

  • Ruby On Rails and UTF-8

    - by Semyon Perepelitsa
    I have a Rails application with a SayController, a hello action, and the view template say/hello.html.erb. When I add a Cyrillic character like "?", I get an error: ArgumentError in SayController#hello - invalid byte sequence in UTF-8. Headers: {"Cache-Control"=>"no-cache", "X-Runtime"=>"11", "Content-Type"=>"text/html; charset=utf-8"}. I use Windows 7 x64, Ruby 1.9.1p378, Rails 2.3.5, and the WEBrick server.

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking of ext4 performance on Compact Flash media. I have created an ext4 filesystem with a block size of 65536, but I cannot mount it on ubuntu-10.10-netbook-i386 (the same system mounts ext4 filesystems with a 4096-byte block size without problems). According to what I have read about ext4, it should allow such a large block size. I would like to hear your comments.

        root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3
        Warning: blocksize 65536 not usable on most systems.
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext4: 65536-byte blocks too big for system (max 4096)
        Proceed anyway? (y,n) y
        Warning: 65536-byte blocks too big for system (max 4096), forced to continue
        Filesystem label=
        OS type: Linux
        Block size=65536 (log=6)
        Fragment size=65536 (log=6)
        Stride=0 blocks, Stripe width=0 blocks
        19968 inodes, 19830 blocks
        991 blocks (5.00%) reserved for the super user
        First data block=0
        1 block group
        65528 blocks per group, 65528 fragments per group
        19968 inodes per group
        Writing inode tables: done
        Creating journal (1024 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 37 mounts or
        180 days, whichever comes first.  Use tune2fs -c or -i to override.

        root@ubuntu:~# tune2fs -l /dev/sda3
        tune2fs 1.41.12 (17-May-2010)
        Filesystem volume name:   <none>
        Last mounted on:          <not available>
        Filesystem UUID:          4cf3f507-e7b4-463c-be11-5b408097099b
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              19968
        Block count:              19830
        Reserved block count:     991
        Free blocks:              18720
        Free inodes:              19957
        First block:              0
        Block size:               65536
        Fragment size:            65536
        Blocks per group:         65528
        Fragments per group:      65528
        Inodes per group:         19968
        Inode blocks per group:   78
        Flex block group size:    16
        Filesystem created:       Sat Feb  5 14:39:55 2011
        Last mount time:          n/a
        Last write time:          Sat Feb  5 14:40:02 2011
        Mount count:              0
        Maximum mount count:      37
        Last checked:             Sat Feb  5 14:39:55 2011
        Check interval:           15552000 (6 months)
        Next check after:         Thu Aug  4 14:39:55 2011
        Lifetime writes:          70 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      afb5b570-9d47-4786-bad2-4aacb3b73516
        Journal backup:           inode blocks

        root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/
        mount: wrong fs type, bad option, bad superblock on /dev/sda3,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail  or so

  • Convert between python array and .NET Array

    - by dungema
    I have a Python method that returns a Python byte array (array.array('c')). Now I want to copy this array using System.Runtime.InteropServices.Marshal.Copy, but that method expects a .NET array:

        import array
        from System.Runtime.InteropServices import Marshal

        bytes = array.array('c')
        bytes.append('a')
        bytes.append('b')
        bytes.append('c')
        Marshal.Copy(bytes, dest, 0, 3)

    Is there a way to make this work without copying the data? If not, how do I convert the data in the Python array to a .NET array?

  • Memory corruption in System.Move due to changed 8087CW mode (png + stretchblt)

    - by André Mussche
    I have a strange memory corruption problem. After many hours of debugging and experimenting, I think I found something. For example, I do a simple string assignment:

        sTest := 'SET LOCK_TIMEOUT ';

    However, the result sometimes becomes:

        sTest = 'SET LOCK'#0'TIMEOUT '

    So the _ gets replaced by a 0 byte. I have seen this happen once (reproducing it is tricky; it depends on timing) in the System.Move function, when it uses the FPU stack (fild, fistp) for fast memory copies (for moves of 9 to 32 bytes):

        ...
        @@SmallMove: {9..32 Byte Move}
          fild  qword ptr [eax+ecx] {Load Last 8}
          fild  qword ptr [eax]     {Load First 8}
          cmp   ecx, 8
          jle   @@Small16
          fild  qword ptr [eax+8]   {Load Second 8}
          cmp   ecx, 16
          jle   @@Small24
          fild  qword ptr [eax+16]  {Load Third 8}
          fistp qword ptr [edx+16]  {Save Third 8}
        ...

    Using the FPU view and two memory debug views (Delphi - View - Debug - CPU - Memory) I saw it go wrong... once... I could not reproduce it, however. This morning I read something about the 8087CW mode, and yes, if it is changed to $27F I get memory corruption! Normally it is $133F. The difference between $133F and $027F is that $027F sets up the FPU for less precise calculations (limiting to Double instead of Extended) and different infinity handling (which was used for older FPUs, but is not used any more). Okay, now I found why, but not when! I changed my AsmProfiler to do a simple check (so all functions are checked on enter and leave):

        if Get8087CW = $27F then //normally $1372?
          if MainThreadID = GetCurrentThreadId then //only check main thread
            DebugBreak;

    I "profiled" some units and dlls and bingo (see stack):

        Windows.StretchBlt(3372289943,0,0,514,345,4211154027,0,0,514,345,13369376)
        pngimage.TPNGObject.DrawPartialTrans(4211154027,(0, 0, 514, 345, (0, 0), (514, 345)))
        pngimage.TPNGObject.Draw($7FF62450,(0, 0, 514, 345, (0, 0), (514, 345)))
        Graphics.TCanvas.StretchDraw((0, 0, 514, 345, (0, 0), (514, 345)),$7FECF3D0)
        ExtCtrls.TImage.Paint
        Controls.TGraphicControl.WMPaint((15, 4211154027, 0, 0))

    So it is happening in StretchBlt... What to do now? Is it a fault of Windows, or a bug in PNG (included in D2007)? Or is the System.Move function not failsafe?

  • Base32 Decoding

    - by trampster
    I have a base32 string which I need to convert to a byte array, and I'm having trouble finding a conversion method in the .NET Framework. I can find methods for base64 but not for base32; Convert.FromBase64String - something like this for base32 would be perfect. Is there such a method in the framework, or do I have to roll my own?
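
    There is no built-in Convert.FromBase32String, so rolling your own is the usual answer. A sketch of a decoder for the RFC 4648 alphabet (A-Z and 2-7, with optional '=' padding): it shifts five bits into an accumulator per character and emits a byte whenever eight or more bits are buffered.

        using System;
        using System.Collections.Generic;

        static class Base32
        {
            const string Alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";  // RFC 4648

            public static byte[] Decode(string input)
            {
                var output = new List<byte>(input.Length * 5 / 8);
                int buffer = 0, bits = 0;
                foreach (char c in input.TrimEnd('='))
                {
                    int value = Alphabet.IndexOf(char.ToUpperInvariant(c));
                    if (value < 0)
                        throw new FormatException("Invalid base32 character: " + c);
                    buffer = (buffer << 5) | value;   // append five bits
                    bits += 5;
                    if (bits >= 8)                    // a whole byte is available
                    {
                        bits -= 8;
                        output.Add((byte)(buffer >> bits));
                    }
                }
                return output.ToArray();
            }
        }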

  • Using pointers, references, handles to generic datatypes, as generic and flexible as possible

    - by Patrick
    In my application I have lots of different data types, e.g. Car, Bicycle, Person, ... (they're actually other data types, but this is just for the example). Since I also have quite some 'generic' code in my application, and the application was originally written in C, pointers to Car, Bicycle, Person, ... are often passed as void pointers to these generic modules, together with an identification of the type, like this:

        Car myCar;
        ShowNiceDialog((void *)&myCar, DATATYPE_CAR);

    The ShowNiceDialog method then uses meta-information (functions that map DATATYPE_CAR to interfaces that get the actual data out of a Car) to obtain information about the car, based on the given data type. That way, the generic logic only has to be written once, and not over again for every new data type. Of course, in C++ you could make this much easier by using a common root class, like this:

        class RootClass
        {
        public:
            virtual string getName() const = 0;
        };

        class Car : public RootClass { ... };

        void ShowNiceDialog(RootClass *root);

    The problem is that in some cases we don't want to store the data type in a class, but in a totally different format to save memory. In some cases we have hundreds of millions of instances that we need to manage in the application, and we don't want to make a full class for every instance. Suppose we have a data type with two characteristics:

    - A quantity (double, 8 bytes)
    - A boolean (1 byte)

    Although we only need 9 bytes to store this information, putting it in a class means that we need at least 16 bytes (because of padding), and with the v-pointer we possibly even need 24 bytes. For hundreds of millions of instances, every byte counts (I have a 64-bit variant of the application, and in some cases it needs 6 GB of memory). The void-pointer approach has the advantage that we can encode almost anything in a void pointer and decide how to use it when we want information from it (use it as a real pointer, as an index, ...), but at the cost of type safety. Templated solutions don't help, since the generic logic forms quite a big part of the application and we don't want to templatize all of it. Additionally, the data model can be extended at run time, which also means templates won't help. Are there better (and type-safer) ways to handle this than a void pointer? Any references to frameworks, whitepapers, or research material regarding this?

  • How many bytes is \n\r?

    - by donde
    I have a .NET app that is trying to FTP a file, and we're ending up with 1 extra byte per line. My line separator is Environment.NewLine, which I believe translates into \n\r. How many bytes is that?
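
    Environment.NewLine on Windows is actually "\r\n" (carriage return, then line feed), and in any single-byte encoding such as ASCII it is two bytes: 0x0D 0x0A. A quick check:

        using System;
        using System.Text;

        class NewLineBytes
        {
            static void Main()
            {
                string nl = Environment.NewLine;
                Console.WriteLine(nl.Length);                        // 2 on Windows
                Console.WriteLine(Encoding.ASCII.GetByteCount(nl));  // 2 bytes
                Console.WriteLine(BitConverter.ToString(
                    Encoding.ASCII.GetBytes(nl)));                   // 0D-0A
            }
        }

    One extra byte per line is consistent with the receiving side expecting a bare "\n" (one byte) while the file is written with "\r\n" (two bytes).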

  • GetResponse not working after authentication

    - by Hazler
    For starters, here's my code:

        // Create a request using a URL that can receive a post.
        WebRequest request = WebRequest.Create("http://mydomain.com/cms/csharptest.php");
        request.Credentials = new NetworkCredential("myUser", "myPass");
        // Set the Method property of the request to POST.
        request.Method = "POST";
        // Create POST data and convert it to a byte array.
        string postData = "name=PersonName&age=25";
        byte[] byteArray = Encoding.UTF8.GetBytes(postData);
        // Set the ContentType property of the WebRequest.
        request.ContentType = "application/x-www-form-urlencoded";
        // Set the ContentLength property of the WebRequest.
        request.ContentLength = byteArray.Length;
        // Get the request stream.
        Stream dataStream = request.GetRequestStream();
        // Write the data to the request stream.
        dataStream.Write(byteArray, 0, byteArray.Length);
        // Close the Stream object.
        dataStream.Close();
        // Get the response.
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        // Display the status.
        Console.WriteLine((response).StatusDescription);
        // Get the stream containing content returned by the server.
        dataStream = response.GetResponseStream();
        // Open the stream using a StreamReader for easy access.
        StreamReader reader = new StreamReader(dataStream);
        // Read the content.
        string responseFromServer = reader.ReadToEnd();
        // Display the content.
        Console.WriteLine(responseFromServer);
        // Clean up the streams.
        reader.Close();
        dataStream.Close();
        response.Close();

    The directory cms/ requires authentication, but if I try running this same code somewhere where authentication isn't needed, it works fine. The error (System.Net.WebException: The remote server returned an error: (403) Forbidden) occurs at:

        HttpWebResponse response = (HttpWebResponse)request.GetResponse();

    I have managed to read data after authenticating, but not if I also send POST data. What's wrong with this?
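
    One plausible cause, assuming the cms/ directory uses HTTP Basic authentication: WebRequest only transmits NetworkCredential in response to a 401 challenge, and a server (or rule) that replies 403 outright never triggers that handshake. A sketch that sends the Basic Authorization header preemptively on the first request:

        using System;
        using System.Net;
        using System.Text;

        class PreemptiveBasicAuth
        {
            static void Main()
            {
                WebRequest request = WebRequest.Create("http://mydomain.com/cms/csharptest.php");
                // Attach the credentials up front instead of waiting for a
                // 401 challenge that this server may never send.
                string token = Convert.ToBase64String(
                    Encoding.UTF8.GetBytes("myUser:myPass"));
                request.Headers["Authorization"] = "Basic " + token;
                request.Method = "POST";
                // ... write the form data and call GetResponse() as before
            }
        }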

  • How to wrap two unmanaged C++ functions into two managed C# functions?

    - by Gbps
    I've got two unmanaged C++ functions, Compress and Decompress. The arguments and return types are as follows:

        unsigned char* Compress(unsigned char*, int);
        unsigned char* Decompress(unsigned char*, int);

    where all the unsigned chars are arrays of unsigned chars. Could someone please help me lay out a way to convert these into managed C# code, using byte[] arrays instead of unsigned char*? Thank you very much!
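
    Assuming the two functions are exported from a native DLL (the file name "compress.dll" below is a placeholder), a P/Invoke wrapper is the standard route: a byte[] parameter marshals as a pointer to its first element, and the returned pointer can be copied into a managed array with Marshal.Copy. Note that the signatures in the question don't say how the output length is communicated or who frees the native buffer; both must come from the real API's contract, so the resultLength parameter here is an assumption.

        using System;
        using System.Runtime.InteropServices;

        static class NativeCodec
        {
            // DLL name and calling convention are placeholders - match the real export.
            [DllImport("compress.dll", CallingConvention = CallingConvention.Cdecl)]
            static extern IntPtr Compress(byte[] data, int length);

            [DllImport("compress.dll", CallingConvention = CallingConvention.Cdecl)]
            static extern IntPtr Decompress(byte[] data, int length);

            public static byte[] CompressBytes(byte[] input, int resultLength)
            {
                IntPtr p = Compress(input, input.Length);   // byte[] -> unsigned char*
                byte[] result = new byte[resultLength];
                Marshal.Copy(p, result, 0, resultLength);   // unsigned char* -> byte[]
                // Freeing the native buffer (if required) is part of the API contract.
                return result;
            }
        }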

  • Serialize problem with cookie

    - by cagin
    Hi there, I want to use cookies in my web project, so I must serialize my classes. Although my code can serialize an int or string value, it can't serialize my classes. This is my serialize-and-set-cookie code:

        public static bool f_SetCookie(string _sCookieName, object _oCookieValue, DateTime _dtimeExpirationDate)
        {
            bool retval = true;
            try
            {
                if (HttpContext.Current.Request[_sCookieName] != null)
                {
                    HttpContext.Current.Request.Cookies.Remove(_sCookieName);
                }
                BinaryFormatter bf = new BinaryFormatter();
                MemoryStream ms = new MemoryStream();
                bf.Serialize(ms, _oCookieValue);
                byte[] bArr = ms.ToArray();
                MemoryStream objStream = new MemoryStream();
                DeflateStream objZS = new DeflateStream(objStream, CompressionMode.Compress);
                objZS.Write(bArr, 0, bArr.Length);
                objZS.Flush();
                objZS.Close();
                byte[] bytes = objStream.ToArray();
                string sCookieVal = Convert.ToBase64String(bytes);
                HttpCookie cook = new HttpCookie(_sCookieName);
                cook.Value = sCookieVal;
                cook.Expires = _dtimeExpirationDate;
                HttpContext.Current.Response.Cookies.Add(cook);
            }
            catch
            {
                retval = false;
            }
            return retval;
        }

    And here is one of my classes:

        [Serializable]
        public class Tahlil
        {
            #region Props & Fields
            public string M_KlinikKodu { get; set; }
            public DateTime M_AlinmaTarihi { get; set; }
            private List<Test> m_Tesler;
            public List<Test> M_Tesler
            {
                get { return m_Tesler; }
                set { m_Tesler = value; }
            }
            #endregion

            public Tahlil() {}
            Tahlil(DataRow _rwTahlil) {}
        }

    I'm calling my set-cookie method like this:

        Tahlil t = new Tahlil();
        t.M_AlinmaTarihi = DateTime.Now;
        t.M_KlinikKodu = "2";
        t.M_Tesler = new List<Test>();
        f_SetCookie("Tahlil", t, DateTime.Now.AddDays(1));

    I can't see the cookie in the cookie folder or in Temporary Internet Files, but if I call the method like this:

        f_SetCookie("TRY", 5, DateTime.Now.AddDays(1));

    I can see the cookie. What is the problem? I don't understand. Thank you for your help.
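
    A first debugging step, given that the catch block swallows every exception: log what Serialize actually throws. BinaryFormatter requires every type in the object graph to be [Serializable], so if Test (the element type of M_Tesler) is missing the attribute, serializing a Tahlil fails while plain ints and strings still work. A sketch of the isolated check:

        using System;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        static class CookieDebug
        {
            // Serialize the value and let the real error surface instead of
            // being swallowed; a typical failure here reads "Type 'Test' ...
            // is not marked as serializable."
            public static byte[] TrySerialize(object value)
            {
                using (var ms = new MemoryStream())
                {
                    new BinaryFormatter().Serialize(ms, value);
                    return ms.ToArray();
                }
            }
        }

    If serialization does succeed, also check the encoded size: browsers cap a cookie at roughly 4 KB, and a Base64-encoded object graph can exceed that quickly.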

  • dynamic image control

    - by BigBoss
    I am getting a dynamic image in the form of a byte array, and I want to show it in a web page, preferably in an Image control. I am aware of the method of creating an HTTP handler and serving the image stream from it, but I can't do that here, as the logic for that is performed somewhere else. I could not find any suitable way to do this. Thank you in advance.
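
    One handler-free option, assuming the image is reasonably small and the target browsers accept data URIs, is to embed the bytes directly in the page as a base64 data URI. A code-behind sketch (imgDynamic, LoadImageBytes, and the image/png MIME type are all placeholders for your actual control and data source):

        using System;

        public partial class ShowImage : System.Web.UI.Page
        {
            protected System.Web.UI.WebControls.Image imgDynamic;  // normally declared in the designer file

            protected void Page_Load(object sender, EventArgs e)
            {
                byte[] imageBytes = LoadImageBytes();  // hypothetical source of the byte array
                imgDynamic.ImageUrl = "data:image/png;base64,"
                                    + Convert.ToBase64String(imageBytes);
            }

            private byte[] LoadImageBytes()
            {
                return new byte[0];  // placeholder - the real bytes come from elsewhere
            }
        }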

  • TCP sequence number question

    - by Meta
    This is more of a theoretical question than an actual problem I have. If I understand correctly, the sequence number in the TCP header of a packet is the index of the packet's first byte in the whole stream, correct? If that is the case, since the sequence number is an unsigned 32-bit integer, what happens after more than 0xFFFFFFFF = 4294967295 bytes have been transferred? Will the sequence number wrap around, or will the sender send a SYN packet to restart at 0?
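
    For reference: the sequence number wraps around modulo 2^32 and the connection simply continues; no new SYN is sent. Endpoints compare sequence numbers using modular "serial number" arithmetic, where a signed 32-bit difference says which number is later even across the wrap (valid as long as the two numbers are within 2^31 bytes of each other). A sketch of that comparison:

        static class TcpSeq
        {
            // True if sequence number a comes before b, modulo 2^32.
            // The cast to int makes the subtraction wrap-aware.
            public static bool Before(uint a, uint b)
            {
                return (int)(a - b) < 0;
            }
        }

        // Example: 0xFFFFFFF0 is "before" 0x00000010 even though it is
        // numerically larger, because the stream wrapped in between:
        //     TcpSeq.Before(0xFFFFFFF0u, 0x00000010u)  ->  true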
