Search Results

Search found 3953 results on 159 pages for 'byte slave'.

Page 57 of 159

  • How do I detect whether the sample supplied by VideoSink.OnSample() is right-side up?

    - by Ken Smith
    We're currently using the Silverlight VideoSink to capture video from users' local webcams, kinda like so: protected override void OnSample(long sampleTime, long frameDuration, byte[] sampleData) { if (FrameShouldBeSubmitted()) { byte[] resampledData = ResizeFrame(sampleData); mediaController.SetVideoFrame(resampledData); } } Now, on most of the machines that we've tested, the video sample provided in the byte[] sampleData parameter is upside-down, i.e., if you try to take the RGBA data and turn it into, say, a WriteableBitmap, the bitmap will be upside-down. That's odd, but fairly easy to correct, of course -- you just have to reverse the array as you encode it. The problem is that at least on some machines (e.g., the single Macintosh in our test environment), the video sample provided is no longer upside-down, but right-side up, and hence, flipping the image actually results in an image that's received upside-down on the far side. I reported this to MS as a bug, but their (terse) response was that it was "As Designed". Further attempts at clarification have so far been ignored. Now, I'll grant that it's kinda entertaining to imagine the discussions behind this design decision: "OK, just to make it interesting, let's play the video rightside up on a Mac, but let's turn it upside down for Windows!" "Great idea!" "Yeah, that'll keep those developers guessing!" But beyond that, I can't find this, umm, "feature" documented anywhere, nor can I find any documentation on how one is supposed to be able to tell that a given video sample is upside down or rightside up. Any thoughts on how to tell this?
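
    A quick sketch of the flip itself may help clarify what "reversing" means here: the sample is a bottom-up 32-bit RGBA buffer, so it has to be mirrored row by row rather than byte by byte. This is only an illustration under that assumption; the pixelWidth/pixelHeight parameters are assumed to come from the format-change callback, and it does not answer the detection question.

        // Sketch: vertically flip a 32bpp frame delivered bottom-up.
        static byte[] FlipVertically(byte[] sampleData, int pixelWidth, int pixelHeight)
        {
            int stride = pixelWidth * 4;                    // 4 bytes per RGBA pixel
            byte[] flipped = new byte[sampleData.Length];
            for (int row = 0; row < pixelHeight; row++)
            {
                // Copy each source row into the mirrored destination row.
                Buffer.BlockCopy(sampleData, row * stride,
                                 flipped, (pixelHeight - 1 - row) * stride, stride);
            }
            return flipped;
        }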

    Read the article

  • Convert 4 bytes to int

    - by Oscar Reyes
    I'm reading a binary file like this: InputStream in = new FileInputStream( file ); byte[] buffer = new byte[1024]; while( in.read(buffer) > -1 ) { int a = // ??? } What I want to do is read up to 4 bytes and create an int value from those, but I don't know how to do it. I kind of feel like I have to grab 4 bytes at a time and perform some "byte" operations ( like << and & 0xFF ) to create the new int. What's the idiom for this? EDIT: Oops, this turns out to be a bit more complex ( to explain ). What I'm trying to do is read a file ( may be ASCII, binary, it doesn't matter ) and extract the integers it may have. For instance, suppose the binary content ( in base 2 ) is: 00000000 00000000 00000000 00000001 00000000 00000000 00000000 00000010 The integer representation should be 1 , 2 right? :-/ 1 for the first 32 bits, and 2 for the remaining 32 bits. 11111111 11111111 11111111 11111111 would be -1, and 01111111 11111111 11111111 11111111 would be Integer.MAX_VALUE ( 2147483647 ).

    Read the article

  • VB.NET: Saving a Metafile / EMF as a bitmap (.tiff)

    - by pehaada
    Currently I have a third party control that generates a Metafile. I can save the .wmf file to disk without issue. The problem is how to render the Metafile as a TIFF file. Currently I have the following code to get my metafile and save it. Dim mf As Metafile = page.GetImage(TXTextControl.Page.PageContent.All) Dim enhMetafileHandle As IntPtr = mf.GetHenhmetafile() Dim h As IntPtr Dim bufferSize As UInteger = GetEnhMetaFileBits(enhMetafileHandle, 0, h) Dim buffer(CInt(bufferSize)) As Byte GetEnhMetaFileBits(enhMetafileHandle, bufferSize, buffer) Dim msMetafileStream As New MemoryStream msMetafileStream.Write(buffer, 0, CInt(bufferSize)) Dim baMetafileData() As Byte baMetafileData = msMetafileStream.ToArray Dim g As Graphics = Graphics.FromImage(mf) mf.Dispose() File.WriteAllBytes("c:\a.wmf", baMetafileData) End Sub <System.Runtime.InteropServices.DllImportAttribute("gdi32.dll", EntryPoint:="GetEnhMetaFileBits")> _ Public Shared Function GetEnhMetaFileBits( ByVal hEMF As System.IntPtr, ByVal nSize As UInteger, ByVal lpData As IntPtr) As UInteger End Function <System.Runtime.InteropServices.DllImportAttribute("gdi32.dll", EntryPoint:="GetEnhMetaFileBits")> _ Public Shared Function GetEnhMetaFileBits(<System.Runtime.InteropServices.InAttribute()> ByVal hEMF As System.IntPtr, ByVal nSize As UInteger, ByVal lpData() As Byte) As UInteger End Function I've tried all sorts of Image and Graphics calls and just can't save the metafile as a .tiff. Any suggestions would be great. I even tried to create a new bitmap and draw the metafile onto it. I always end up with a GDI exception being thrown.
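
    For the "draw it onto a new bitmap" route, a sketch of the usual shape (shown in C# for brevity; note that in the code above mf.Dispose() is called before the save, so the drawing has to happen while the Metafile is still alive):

        // Sketch: render the metafile into a bitmap and save that as TIFF.
        using (Bitmap bmp = new Bitmap(mf.Width, mf.Height))
        {
            using (Graphics g = Graphics.FromImage(bmp))
            {
                g.Clear(Color.White);                         // metafiles often have a transparent background
                g.DrawImage(mf, 0, 0, mf.Width, mf.Height);
            }
            bmp.Save(@"c:\a.tiff", ImageFormat.Tiff);
        }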

    Read the article

  • Is it possible to store a control (Panel) as an object, serialize it, and save it to a file?

    - by ikky
    The topic says it all. Using the Compact Framework in C#, I'm tiling (order/sequence is important) some images that I download from a URL into a Panel (each image is a PictureBox). This can be a huge process, and may take some time. Therefore I only want the user to download the images and tile them once. So the next time the user uses the Tile Application, the Panel that was created the first time is already stored in a file and is loaded from that file. So what I want is a method to store a Panel as a file. Is this possible, or do you think I should do it another way? I've tried something like this: BinaryWriter panelStorage = new BinaryWriter(new FileStream("imagePanel.panel", FileMode.OpenOrCreate, FileAccess.Write, FileShare.None)); Byte[] bImageObject = new Byte[20000]; bImageObject = (byte[])(object)this.imagePanel; panelStorage.Write(bImageObject); panelStorage.Close(); But the casting was not very legal :P (it throws an InvalidCastException). Can anyone help me with this problem? Thank you in advance!
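
    One workaround to consider (a sketch only, with hypothetical helper names): persist the downloaded image bytes and their tile order rather than the Panel control itself, and rebuild the Panel from that file on the next run. Controls are not serializable, but the data they display is.

        // Sketch: save/load the encoded images in tile order (needs System.IO and System.Collections.Generic).
        static void SaveTiles(string path, List<byte[]> imagesInOrder)
        {
            using (BinaryWriter writer = new BinaryWriter(File.Open(path, FileMode.Create)))
            {
                writer.Write(imagesInOrder.Count);
                foreach (byte[] image in imagesInOrder)
                {
                    writer.Write(image.Length);   // length prefix, then the raw bytes
                    writer.Write(image);
                }
            }
        }

        static List<byte[]> LoadTiles(string path)
        {
            List<byte[]> result = new List<byte[]>();
            using (BinaryReader reader = new BinaryReader(File.Open(path, FileMode.Open)))
            {
                int count = reader.ReadInt32();
                for (int i = 0; i < count; i++)
                    result.Add(reader.ReadBytes(reader.ReadInt32()));
            }
            return result;
        }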

    Read the article

  • UDP cannot receive any data

    - by StoneHeart
    Here is my code: Socket sck = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); sck.Bind(new IPEndPoint(IPAddress.Any, 0)); // Broadcast to find server string msg = "Imlookingforaserver:" + udp_listen_port; byte[] sendBytes4 = Encoding.ASCII.GetBytes(msg); IPEndPoint groupEP = new IPEndPoint(IPAddress.Parse("255.255.255.255"), server_port); sck.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Broadcast, 1); sck.SendTo(sendBytes4, groupEP); // Wait for a response from the server Socket sck2 = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); sck2.Bind(new IPEndPoint(IPAddress.Any, udp_listen_port)); byte[] buffer = new byte[128]; EndPoint remoteEndPoint = new IPEndPoint(IPAddress.Any, udp_listen_port); sck2.ReceiveFrom(buffer, ref remoteEndPoint); // <<< I never get past this line I use the above code to try to find a server. First I broadcast a message and then I wait for a response from the server. In my test, the server is written in C++ and runs on Windows Vista; the client is written in C# and runs on the same machine as the server. The problem is: the server can receive the message that the client broadcasts, but the client cannot receive anything from the server. I tried writing the client in C++ and it works like a charm, so I think my problem is in the C# client.
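
    One thing worth checking (an assumption about the C++ server, since its code isn't shown): many discovery servers reply to the source endpoint of the datagram they received, which here is the ephemeral port that sck was bound to, not udp_listen_port. A sketch that broadcasts and waits on a single socket avoids that mismatch:

        // Sketch: send the broadcast and receive the reply on the same socket,
        // so a reply addressed to the request's source port actually arrives here.
        Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        sock.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Broadcast, 1);
        sock.Bind(new IPEndPoint(IPAddress.Any, udp_listen_port));     // one fixed local port

        byte[] request = Encoding.ASCII.GetBytes("Imlookingforaserver:" + udp_listen_port);
        sock.SendTo(request, new IPEndPoint(IPAddress.Broadcast, server_port));

        byte[] reply = new byte[128];
        EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
        sock.ReceiveFrom(reply, ref remote);                           // blocks until the server answers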

    Read the article

  • Best way to pass image to server?

    - by Chris
    I have an SL3 application that needs to be able to pass an image to the server, and then the server will generate a PDF file with the image in it, and display it to the user. What I already have in place are the following: (1) Code to convert image to byte array (2) Code to generate PDF file with image The main problem that I am running into is the following: In order to bypass the pop-up blocker, which is a requirement for my application, I am using the following code: var button = new NavigationButton(); button.NavigateUri = new Uri("http://localhost:3616/PrintReport.aspx?ReportIndex=11&ActionType=Get&ReportIdentifier=" + reportIdentifier.ToString() + ""); button.TargetName = "_blank"; button.PerformClick(); Initially, I would pass the image to a WCF web service (as a byte array), and then "navigate" to the ASP.NET page that would display the report. However, if I do this, then I can not use my custom HyperlinkButton class, and, certain browsers, including Safari, will block a new window from opening up. Therefore, it appears that the only option is to use the HyperlinkButton class. What I need to be able to do is to somehow pass the image, as a byte array or some other data type, to the server, such that it can temporarily store the image, even if it is in a server variable, and then immediately retrieve it when I navigate to the PrintReport.aspx page. If I upload the image to an ASP.NET form and then use the HyperlinkButton class to navigate to the PrintReport page, it doesn't work, as the app navigates to the PrintReport page before the system has finished uploading the image. I can't pass it to a web service, as that would require that I navigate to the PrintReport.aspx page in the callback code of the web method that I would be passing the image to, and the HyperlinkButton will not allow that, based on security rules. Any help or ideas would be appreciated. Thanks. Chris
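
    For the server side of this, one possibility (a sketch with assumed names; it does not solve the upload/navigate ordering by itself) is a small handler that accepts the posted bytes and parks them in the ASP.NET cache under the same reportIdentifier that PrintReport.aspx later receives:

        // Sketch: hypothetical UploadImage.ashx handler; PrintReport.aspx would read
        // Cache["img_" + id] when it builds the PDF.
        using System;
        using System.Web;

        public class UploadImage : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                string id = context.Request.QueryString["ReportIdentifier"];
                byte[] image = context.Request.BinaryRead(context.Request.TotalBytes);
                context.Cache.Insert("img_" + id, image, null,
                                     DateTime.UtcNow.AddMinutes(5),
                                     System.Web.Caching.Cache.NoSlidingExpiration);
            }
            public bool IsReusable { get { return false; } }
        }

    The client would still have to wait for the upload's completed callback before performing the HyperlinkButton navigation.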

    Read the article

  • Java File IO Compendium

    - by Warren Taylor
    I've worked in and around Java for nigh on a decade, but have managed to ever avoid doing serious work with files. Mostly I've written database driven applications, but occasionally, even those require some file io. Since I do it so rarely, I end up googling around for quite some time to figure out the exact incantation that Java requires to read a file into a byte[], char[], String or whatever I need at the time. For a 'once and for all' list, I'd like to see all of the usual ways to get data from a file into Java, or vice versa. There will be a fair bit of overlap, but the point is to define all of the subtle different variants that are out there. For example: Read/Write a text file from/to a single String. Read a text file line by line. Read/Write a binary file from/to a single byte[]. Read a binary file into a byte[] of size x, one chunk at a time. The goal is to show concise ways to do each of these. Samples do not need to handle missing files or other errors, as that is generally domain specific. Feel free to suggest more IO tasks that are somewhat common and I have neglected to mention.

    Read the article

  • Decoding TCP packets using Python

    - by mikip
    Hello, I am trying to decode data received over a TCP connection. The packets are small, no more than 100 bytes. However, when there are a lot of them I receive some of the packets joined together. Is there a way to prevent this? I am using Python. I have tried to separate the packets; my source is below. The packets start with an STX byte and end with an ETX byte; the byte following the STX is the packet length (packet lengths less than 5 are invalid), and the checksum occupies the last bytes before the ETX. def decode(data): while True: start = data.find(STX) if start == -1: #no stx in message pkt = '' data = '' break #stx found , next byte is the length pktlen = ord(data[1]) #check message ends in ETX (pktlen - 1) or checksum invalid if pktlen < 5 or data[pktlen-1] != ETX or checksum_valid(data[start:pktlen]) == False: print "Invalid Pkt" data = data[start+1:] continue else: pkt = data[start:pktlen] data = data[pktlen:] break return data , pkt I use it like this: #process reports try: data = sock.recv(256) except: continue else: while data: data, pkt = decode(data) if pkt: process(pkt) Also, if there are multiple packets in the data stream, is it best to return the packets as a collection or just return the first packet? I am not that familiar with Python, only C; is this method OK? Any advice would be most appreciated. Thanks in advance.

    Read the article

  • rs232 communication, general timing question

    - by Sunny Dee
    Hi, I have a piece of hardware which sends out a byte of data representing a voltage signal at a frequency of 100Hz over the serial port. I want to write a program that will read in the data so I can plot it. I know I need to open the serial port and open an inputstream. But this next part is confusing me and I'm having trouble understanding the process conceptually: I create a while loop that reads in the data from the inputstream 1 byte at a time. How do I get the while loop timing so that there is always a byte available to be read whenever it reaches the readbyte line? I'm guessing that I can't just put a sleep function inside the while loop to try and match it to the hardware sample rate. Is it just a matter of continuing reading the inputstream in the while loop, and if it's too fast then it won't do anything (since there's no new data), and if it's too slow then it will accumulate in the inputstream buffer? Like I said, i'm only trying to understand this conceptually so any guidance would be much appreciated! I'm guessing the idea is independent of which programming language I'm using, but if not, assume it is for use in Java. Thanks!

    Read the article

  • Binary search in a sorted (memory-mapped ?) file in Java

    - by sds
    I am struggling to port a Perl program to Java, and learning Java as I go. A central component of the original program is a Perl module that does string prefix lookups in a +500 GB sorted text file using binary search (essentially, "seek" to a byte offset in the middle of the file, backtrack to nearest newline, compare line prefix with the search string, "seek" to half/double that byte offset, repeat until found...) I have experimented with several database solutions but found that nothing beats this in sheer lookup speed with data sets of this size. Do you know of any existing Java library that implements such functionality? Failing that, could you point me to some idiomatic example code that does random access reads in text files? Alternatively, I am not familiar with the new (?) Java I/O libraries but would it be an option to memory-map the 500 GB text file (I'm on a 64-bit machine with memory to spare) and do binary search on the memory-mapped byte array? I would be very interested to hear any experiences you have to share about this and similar problems.

    Read the article

  • Why Does This Maintainability Index Increase?

    - by Timothy
    I would be appreciative if someone could explain to me the difference between the following two pieces of code in terms of Visual Studio's Code Metrics rules. Why does the Maintainability Index increase slightly if I don't encapsulate everything within using ( )? Sample 1 (MI score of 71) public static String Sha1(String plainText) { using (SHA1Managed sha1 = new SHA1Managed()) { Byte[] text = Encoding.Unicode.GetBytes(plainText); Byte[] hashBytes = sha1.ComputeHash(text); return Convert.ToBase64String(hashBytes); } } Sample 2 (MI score of 73) public static String Sha1(String plainText) { Byte[] text, hashBytes; using (SHA1Managed sha1 = new SHA1Managed()) { text = Encoding.Unicode.GetBytes(plainText); hashBytes = sha1.ComputeHash(text); } return Convert.ToBase64String(hashBytes); } I understand metrics are meaningless outside of a broader context and understanding, and programmers should exercise discretion. While I could boost the score up to 76 with return Convert.ToBase64String(sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText))), I shouldn't. I would clearly be just playing with numbers and it isn't truly any more readable or maintainable at that point. I am curious though as to what the logic might be behind the increase in this case. It's obviously not line-count.

    Read the article

  • How Can I Generate Equivalent Output Using the CryptoAPI and the .NET Encryption (TripleDESCryptoServiceProvider)?

    - by S. Butts
    I have some C#/.NET code that encrypts and decrypts data using TripleDES encryption. It sticks to the sample code provided at MSDN pretty closely. The encryption piece looks like the following: TripleDESCryptoServiceProvider _desProvider = new TripleDESCryptoServiceProvider(); //bytes for key and initialization vector //keyBytes is 24 bytes of stuff, vectorBytes is 8 bytes of stuff byte[] keyBytes; byte[] vectorBytes; FileStream fStream = File.Open(locationOfFile, FileMode.Create, FileAccess.Write); CryptoStream cStream = new CryptoStream(fStream, _desProvider.CreateEncryptor(keyBytes, vectorBytes), CryptoStreamMode.Write); BinaryWriter bWriter = new BinaryWriter(cStream); //write out encrypted data //raw data is a few bytes of binary information byte[] rawData; bWriter.Write(rawData); With encrypting and decrypting in C#, this all works like a charm. The problem is I need to write a small Win32 utility that will duplicate the encryption above. I have tried several methods using the CryptoAPI, and I simply do not get output that the .NET piece can decrypt, no matter what I do. Can someone please tell me what the equivalent C++ code is that will produce the same output? I am not certain just what methods of the CryptoAPI the .NET functions use to encrypt the data. What options are used, and what method of generating the key is used? Before someone suggests that I just write it in C# anyway, or create some common library bridge for them, those options are unfortunately off the table. It really has to work in Win32 with .NET and without using a DLL. I have some leeway in changing the C# code. I apologize in advance if this is bone-headed, as I am new to encryption.
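
    One detail that often causes a mismatch in this situation: the .NET provider applies defaults that the CryptoAPI side has to reproduce exactly. A sketch of the C# code with those defaults spelled out (this is the same encryption as above, just with nothing left implicit; whether your C++ code matches it is the thing to verify):

        // Sketch: TripleDESCryptoServiceProvider defaults made explicit.
        using (TripleDESCryptoServiceProvider des = new TripleDESCryptoServiceProvider())
        {
            des.Mode = CipherMode.CBC;         // .NET default chaining mode
            des.Padding = PaddingMode.PKCS7;   // .NET default padding
            des.Key = keyBytes;                // the 24 bytes are used directly, not derived from a password
            des.IV = vectorBytes;              // 8-byte IV

            using (ICryptoTransform encryptor = des.CreateEncryptor())
            {
                byte[] cipher = encryptor.TransformFinalBlock(rawData, 0, rawData.Length);
                File.WriteAllBytes(locationOfFile, cipher);
            }
        }

    On the CryptoAPI side the usual pitfalls are deriving the key from a password (CryptDeriveKey) instead of importing the raw 24 key bytes, forgetting to set the IV, or using a different padding scheme.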

    Read the article

  • C# - parse content away from structure in a binary file

    - by Jeff Godfrey
    Using C#, I need to read a packed binary file created using FORTRAN. The file is stored in an "Unformatted Sequential" format as described here (about half-way down the page in the "Unformatted Sequential Files" section): http://www.tacc.utexas.edu/services/userguides/intel8/fc/f_ug1/pggfmsp.htm As you can see from the URL, the file is organized into "chunks" of 130 bytes or less and includes 2 length bytes (inserted by the FORTRAN compiler) surrounding each chunk. So, I need to find an efficient way to parse the actual file payload away from the compiler-inserted formatting. Once I've extracted the actual payload from the file, I'll then need to parse it up into its varying data types. That'll be the next exercise. My first thoughts are to slurp up the entire file into a byte array using File.ReadAllBytes. Then, just iterate through the bytes, skipping the formatting and transferring the actual data to a second byte array. In the end, that second byte array should contain the actual file contents minus all the formatting, which I'd then need to go back through to get what I need. As I'm fairly new to C#, I thought there might be a better, more accepted way of tackling this. Also, in case it's helpful, these files could be fairly large (say 30MB), though most will be much smaller...
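
    Assuming the record layout described above (one leading length byte, up to ~128 data bytes, one trailing length byte), a streaming sketch avoids holding two copies of the file in memory; the continuation handling for logical records longer than one physical chunk is compiler-specific and left out here:

        // Sketch: strip the per-record length bytes and collect the raw payload (needs System.IO).
        static byte[] ExtractPayload(string path)
        {
            using (FileStream stream = File.OpenRead(path))
            using (BinaryReader reader = new BinaryReader(stream))
            using (MemoryStream payload = new MemoryStream())
            {
                while (stream.Position < stream.Length)
                {
                    int length = reader.ReadByte();            // leading length byte
                    byte[] chunk = reader.ReadBytes(length);   // record payload
                    reader.ReadByte();                         // trailing length byte, discarded
                    payload.Write(chunk, 0, chunk.Length);
                }
                return payload.ToArray();
            }
        }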

    Read the article

  • GCC - How to realign stack?

    - by psihodelia
    I am trying to build an application which uses pthreads and the __m128 SSE type. According to the GCC manual, the default stack alignment is 16 bytes. In order to use __m128, the requirement is 16-byte alignment. My target CPU supports SSE. I use a GCC compiler which doesn't support runtime stack realignment (e.g. -mstackrealign). I cannot use any other GCC compiler version. My test application looks like: #include <xmmintrin.h> #include <pthread.h> void *f(void *x){ __m128 y; ... } int main(void){ pthread_t p; pthread_create(&p, NULL, f, NULL); } The application generates an exception and exits. After some simple debugging (printf("%p", &y)), I found that the variable y is not 16-byte aligned. My question is: how can I realign the stack properly (to 16 bytes) without using any GCC flags or attributes (they don't help)? Should I use GCC inline assembler within this thread function f()?

    Read the article

  • How is data stored at the bit level according to "endianness"?

    - by bakra
    I read about endianness and understood squat... so I wrote this: main() { int k = 0xA5B9BF9F; BYTE *b = (BYTE*)&k; //value at *b is 9f b++; //value at *b is BF b++; //value at *b is B9 b++; //value at *b is A5 } k was equal to "A5 B9 BF 9F" and the (byte) pointer "walk" output was "9F BF B9 A5", so I get it, bytes are stored backwards... OK. So now I wondered how it is stored at the BIT level... I mean, is "9f" (1001 1111) stored as "f9" (1111 1001)? So I wrote this: int _tmain(int argc, _TCHAR* argv[]) { int k = 0xA5B9BF9F; void *ptr = &k; bool temp= TRUE; cout<<"ready or not here I come \n"; for(int i=0;i<32;i++) { temp = *( (bool*)ptr + i ); if( temp ) cout<<"1 "; if( !temp) cout<<"0 "; if(i==7||i==15||i==23) cout<<" - "; } } I get some random output, even for numbers like "32"; I don't get anything sensible. Why?

    Read the article

  • What am I doing wrong with HttpResponse content and headers when downloading a file?

    - by Slauma
    I want to download a PDF file from a SQL Server database which is stored in a binary column. There is a LinkButton on an aspx page. The event handler of this button looks like this: protected void LinkButtonDownload(object sender, EventArgs e) { ... byte[] aByteArray; // Read binary data from database into this ByteArray // aByteArray has a size of 55406 bytes Response.ClearHeaders(); Response.ClearContent(); Response.BufferOutput = true; Response.AddHeader("Content-Disposition", "attachment; filename=" + "12345.pdf"); Response.ContentType = "application/pdf"; using (BinaryWriter aWriter = new BinaryWriter(Response.OutputStream)) { aWriter.Write(aByteArray, 0, aByteArray.Length); } } A "File Open/Save dialog" is offered in my browser. When I store this file "12345.pdf" to disk, the file has a size of 71523 bytes. The additional 16 kB at the end of the PDF file are the HTML code of my page (as I can see when I view the file in an editor). I am confused because I believed that ClearContent and ClearHeaders would ensure that the page content is not sent together with the file content. What am I doing wrong here? Thanks for the help!
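
    In case it helps, here is a sketch of the commonly suggested shape for this handler: ending the response after the PDF bytes so the rest of the page lifecycle never renders its HTML into the same stream. (Response.End throws a ThreadAbortException internally; HttpContext.ApplicationInstance.CompleteRequest() is the gentler alternative.)

        // Sketch: write only the PDF bytes, then stop the page from rendering.
        Response.Clear();
        Response.ClearHeaders();
        Response.AddHeader("Content-Disposition", "attachment; filename=12345.pdf");
        Response.AddHeader("Content-Length", aByteArray.Length.ToString());
        Response.ContentType = "application/pdf";
        Response.BinaryWrite(aByteArray);
        Response.End();   // nothing after this point is sent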

    Read the article

  • Retrieving dll version info via Win32 - VerQueryValue(...) crashes under Win7 x64

    - by user256890
    The respected open source .NET wrapper implementation (SharpBITS) of the Windows BITS services fails to identify the underlying BITS version under Win7 x64. Here is the source code that fails. NativeMethods are native Win32 calls wrapped by .NET methods and decorated via the DllImport attribute. private static BitsVersion GetBitsVersion() { try { string fileName = Path.Combine( System.Environment.SystemDirectory, "qmgr.dll"); int handle = 0; int size = NativeMethods.GetFileVersionInfoSize(fileName, out handle); if (size == 0) return BitsVersion.Bits0_0; byte[] buffer = new byte[size]; if (!NativeMethods.GetFileVersionInfo(fileName, handle, size, buffer)) { return BitsVersion.Bits0_0; } IntPtr subBlock = IntPtr.Zero; uint len = 0; if (!NativeMethods.VerQueryValue(buffer, @"\VarFileInfo\Translation", out subBlock, out len)) { return BitsVersion.Bits0_0; } int block1 = Marshal.ReadInt16(subBlock); int block2 = Marshal.ReadInt16((IntPtr)((int)subBlock + 2 )); string spv = string.Format( @"\StringFileInfo\{0:X4}{1:X4}\ProductVersion", block1, block2); string versionInfo; if (!NativeMethods.VerQueryValue(buffer, spv, out versionInfo, out len)) { return BitsVersion.Bits0_0; } ... The implementation follows the MSDN instructions to the letter. Still, during the second VerQueryValue(...) call, the application crashes and kills the debug session without hesitation. Just a little more debug info right before the crash: spv = "\StringFileInfo\040904B0\ProductVersion" buffer = byte[1900] - full of binary data block1 = 1033 block2 = 1200 I looked at the targeted "C:\Windows\System32\qmgr.dll" file (the implementation of BITS) via Windows. It says that the Product Version is 7.5.7600.16385. Instead of crashing, this value should be returned in the versionInfo string. Any advice?
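
    One thing that stands out in the snippet (an observation, not a confirmed diagnosis): (int)subBlock truncates the pointer to 32 bits, which on x64 can produce exactly this kind of hard crash inside the second VerQueryValue call. A sketch of the same arithmetic done in a 64-bit-safe way:

        // Sketch: keep the pointer arithmetic in 64 bits.
        int block1 = Marshal.ReadInt16(subBlock);
        int block2 = Marshal.ReadInt16(new IntPtr(subBlock.ToInt64() + 2));
        string spv = string.Format(@"\StringFileInfo\{0:X4}{1:X4}\ProductVersion", block1, block2);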

    Read the article

  • Why should the "prime-based" hashcode implementation be used instead of the "naive" one?

    - by Wilhelm
    I have seen that a prime-number implementation of the GetHashCode function is being recommended, for example here. However, using the following code (in VB, sorry), it seems as if that implementation gives the same hash density as a "naive" xor implementation. If the density is the same, I would suppose there is the same probability of collision in both implementations. Am I missing anything on why the prime approach is preferred? I am supposing that if the hash code is a byte I do not lose generality for the integer case. Sub Main() Dim XorHashes(255) As Integer Dim PrimeHashes(255) As Integer For i = 0 To 255 For j = 0 To 255 For k = 0 To 255 XorHashes(GetXorHash(i, j, k)) += 1 PrimeHashes(GetPrimeHash(i, j, k)) += 1 Next Next Next For i = 0 To 255 Console.WriteLine("{0}: {1}, {2}", i, XorHashes(i), PrimeHashes(i)) Next Console.ReadKey() End Sub Public Function GetXorHash(ByVal valueOne As Integer, ByVal valueTwo As Integer, ByVal valueThree As Integer) As Byte Return CByte((valueOne Xor valueTwo Xor valueThree) Mod 256) End Function Public Function GetPrimeHash(ByVal valueOne As Integer, ByVal valueTwo As Integer, ByVal valueThree As Integer) As Byte Dim TempHash = 17 TempHash = 31 * TempHash + valueOne TempHash = 31 * TempHash + valueTwo TempHash = 31 * TempHash + valueThree Return CByte(TempHash Mod 256) End Function
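
    The sweep above feeds in every possible triple, and over a uniform, exhaustive input both schemes necessarily spread evenly. The difference shows up on structured inputs: xor is order-insensitive and self-cancelling, while the multiply-add form is not. A small sketch of that distinction (in C# rather than VB only for brevity; the arithmetic is the same):

        // Sketch: permutations and repeated values collide under xor, not under multiply-add.
        static int XorHash(int a, int b, int c)   { return (a ^ b ^ c) & 0xFF; }
        static int PrimeHash(int a, int b, int c)
        {
            int h = 17;
            h = 31 * h + a;
            h = 31 * h + b;
            h = 31 * h + c;
            return h & 0xFF;
        }

        // XorHash(1, 2, 3) == XorHash(3, 2, 1) == XorHash(7, 7, 0) == 0
        // PrimeHash(1, 2, 3) == 81, PrimeHash(3, 2, 1) == 209 -- order now matters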

    Read the article

  • Why do System.IO.Log SequenceNumbers have variable length?

    - by Doug McClean
    I'm trying to use the System.IO.Log features to build a recoverable transaction system. I understand it to be implemented on top of the Common Log File System. The usual ARIES approach to write-ahead logging involves persisting log record sequence numbers in places other than the log (for example, in the header of the database page modified by the logged action). Interestingly, the documentation for CLFS says that such sequence numbers are always 64-bit integers. Confusingly, however, the .Net wrapper around those SequenceNumbers can be constructed from a byte[] but not from a UInt64. Its value can also be read as a byte[], but not as a UInt64. Inspecting the implementation of SequenceNumber.GetBytes() reveals that it can in fact return arrays of either 8 or 16 bytes. This raises a few questions: Why do the .Net sequence numbers differ in size from the CLFS sequence numbers? Why are the .Net sequence numbers variable in length? Why would you need 128 bits to represent such a sequence number? It seems like you would truncate the log well before using up a 64-bit address space (16 exbibytes, or around 10^19 bytes, more if you address longer words). If log sequence numbers are going to be represented as 128-bit integers, why not provide a way to serialize/deserialize them as pairs of UInt64s instead of rather pointlessly incurring heap allocations for short-lived new byte[]s every time you need to write/read one? Alternatively, why bother making SequenceNumber a value type at all? It seems an odd tradeoff to double the storage overhead of log sequence numbers just so you can have an untruncated log longer than a million terabytes, so I feel like I'm missing something here, or maybe several things. I'd much appreciate it if someone in the know could set me straight.

    Read the article

  • Send already generated MHTML using CDOSYS through C#?

    - by mutex
    I have a ready generated MHTML as a byte array (from Aspose.Words) and would like to send it as an email. I'm trying to do this through CDOSYS, though am open to other suggestions. For now though I have the following: CDO.Message oMsg = new CDO.Message(); CDO.IConfiguration iConfg = oMsg.Configuration; Fields oFields = iConfg.Fields; // Set configuration. Field oField = oFields["http://schemas.microsoft.com/cdo/configuration/sendusing"]; oField.Value = CDO.CdoSendUsing.cdoSendUsingPort; oField = oFields["http://schemas.microsoft.com/cdo/configuration/smtpserver"]; oField.Value = SmtpClient.Host; oField = oFields["http://schemas.microsoft.com/cdo/configuration/smtpserverport"]; oField.Value = SmtpClient.Port; oFields.Update(); //oMsg.CreateMHTMLBody("http://www.microsoft.com", CDO.CdoMHTMLFlags.cdoSuppressNone, "", ""); // NEED MAGIC HERE :) oMsg.Subject = warning.Subject; // string oMsg.From = "[email protected]"; oMsg.To = warning.EmailAddress; oMsg.Send(); In this snippet, the warning variable has a Body property which is a byte[]. Where it says "NEED MAGIC HERE" in the code above I want to use this byte[] to set the body of the CDO Message. I have tried the following, which unsurprisingly doesn't work: oMsg.HTMLBody = System.Text.Encoding.ASCII.GetString(warning.Body); Anybody have any ideas how I can achieve what I want with CDOSYS or something else?

    Read the article

  • How to Use Calculated Color Values with ColorMatrix?

    - by Otaku
    I am changing the color values of each pixel in an image based on a calculation. The problem is that this takes over 5 seconds on my machine with a 1000x1333 image and I'm looking for a way to optimize it to be much faster. I think ColorMatrix may be an option, but I'm having a difficult time figuring out how I would get a set of pixel RGB values, use them to calculate, and then set the new pixel value. I can see how this could be done if I were just modifying (multiplying, subtracting, etc.) the original value with ColorMatrix, but not how I can use the pixel's returned value to calculate a new value. For example: Sub DarkenPicture() Dim clrTestFolderPath = "C:\Users\Me\Desktop\ColorTest\" Dim originalPicture = "original.jpg" Dim Luminance As Single Dim bitmapOriginal As Bitmap = Image.FromFile(clrTestFolderPath + originalPicture) Dim Clr As Color Dim newR As Byte Dim newG As Byte Dim newB As Byte For x = 0 To bitmapOriginal.Width - 1 For y = 0 To bitmapOriginal.Height - 1 Clr = bitmapOriginal.GetPixel(x, y) Luminance = ((0.21 * Clr.R) + (0.72 * Clr.G) + (0.07 * Clr.B)) / 255 newR = Clr.R * Luminance newG = Clr.G * Luminance newB = Clr.B * Luminance bitmapOriginal.SetPixel(x, y, Color.FromArgb(newR, newG, newB)) Next Next bitmapOriginal.Save(clrTestFolderPath + "colorized.jpg", ImageFormat.Jpeg) End Sub The Luminance value is the calculated one. I know I can set ColorMatrix's M00, M11, M22 to 0, 0, 0 respectively and then put a new value in M40, M41, M42, but that new value is calculated from a multiplication and addition of that pixel's components ((0.21 * Clr.R) + (0.72 * Clr.G) + (0.07 * Clr.B)), and the result of that (Luminance) is multiplied by the color component. Is this even possible with ColorMatrix?
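
    Two things may be worth separating here. The per-pixel step above multiplies each channel by a luminance computed from that same pixel, which is not a linear transform, so a single ColorMatrix cannot express it; a plain grayscale or darkening matrix could, but not this formula. The usual speedup for arbitrary per-pixel math is to replace GetPixel/SetPixel with LockBits and work on the raw buffer. A sketch of that (in C# for brevity; needs System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices; 32bpp data is laid out B, G, R, A in memory):

        // Sketch: LockBits-based version of the same per-pixel calculation.
        Rectangle rect = new Rectangle(0, 0, bitmapOriginal.Width, bitmapOriginal.Height);
        BitmapData data = bitmapOriginal.LockBits(rect, ImageLockMode.ReadWrite,
                                                  PixelFormat.Format32bppArgb);
        int byteCount = Math.Abs(data.Stride) * data.Height;
        byte[] px = new byte[byteCount];
        Marshal.Copy(data.Scan0, px, 0, byteCount);

        for (int i = 0; i < byteCount; i += 4)
        {
            byte b = px[i], g = px[i + 1], r = px[i + 2];
            float lum = (0.21f * r + 0.72f * g + 0.07f * b) / 255f;
            px[i]     = (byte)(b * lum);
            px[i + 1] = (byte)(g * lum);
            px[i + 2] = (byte)(r * lum);
        }

        Marshal.Copy(px, 0, data.Scan0, byteCount);
        bitmapOriginal.UnlockBits(data);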

    Read the article

  • What is the correct JNA mapping for UniChar on Mac OS X?

    - by Trejkaz
    I have a C struct like this: struct HFSUniStr255 { UInt16 length; UniChar unicode[255]; }; I have mapped this in the expected way: public class HFSUniStr255 extends Structure { public UInt16 length; // UInt16 is just an IntegerType with length 2 for convenience. public /*UniChar*/ char[] unicode = new char[255]; //public /*UniChar*/ byte[] unicode = new byte[255*2]; //public /*UniChar*/ UInt16[] unicode = new UInt16[255]; public HFSUniStr255() { } public HFSUniStr255(Pointer pointer) { super(pointer); } } If I use this version, I get every second character of the string into my char[] ("aits D" for "Macintosh HD".) I am assuming that this is something to do with being on a 64-bit platform and JNA mapping the value to a 32-bit wchar_t but then chopping off the high 16 bits on each wchar_t on copying them back. If I use the byte[] version, I get data which decodes correctly using the UTF-16LE charset. If I use the UInt16[] version, I get the right code point for each character but it is then inconvenient to convert them back into a string. Is there some way I can define my type as char[], and yet have it convert correctly?

    Read the article

  • C# UDP cannot receive any data

    - by StoneHeart
    Here is my code: Socket sck = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); sck.Bind(new IPEndPoint(IPAddress.Any, 0)); // Broadcast to find server string msg = "Imlookingforaserver:" + udp_listen_port; byte[] sendBytes4 = Encoding.ASCII.GetBytes(msg); IPEndPoint groupEP = new IPEndPoint(IPAddress.Parse("255.255.255.255"), server_port); sck.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Broadcast, 1); sck.SendTo(sendBytes4, groupEP); // Wait for a response from the server Socket sck2 = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); sck2.Bind(new IPEndPoint(IPAddress.Any, udp_listen_port)); byte[] buffer = new byte[128]; EndPoint remoteEndPoint = new IPEndPoint(IPAddress.Any, udp_listen_port); sck2.ReceiveFrom(buffer, ref remoteEndPoint); // <<< I never get past this line I use the above code to try to find a server. First I broadcast a message and then I wait for a response from the server. In my test, the server is written in C++ and runs on Windows Vista; the client is written in C# and runs on the same machine as the server. The problem is: the server can receive the message that the client broadcasts, but the client cannot receive anything from the server. I tried writing a client in C++ and it works like a charm, so I think my problem is in the C# client.

    Read the article

  • Why Does This Maintainability Index Increase?

    - by Timothy
    I would be appreciative if someone could explain to me the difference between the following two pieces of code in terms of Visual Studio's Code Metrics rules. Why does the Maintainability Index increase slightly if I don't encapsulate everything within using ( )? Sample 1 (MI score of 71) public static String Sha1(String plainText) { using (SHA1Managed sha1 = new SHA1Managed()) { Byte[] text = Encoding.Unicode.GetBytes(plainText); Byte[] hashBytes = sha1.ComputeHash(text); return Convert.ToBase64String(hashBytes); } } Sample 2 (MI score of 73) public static String Sha1(String plainText) { Byte[] text, hashBytes; using (SHA1Managed sha1 = new SHA1Managed()) { text = Encoding.Unicode.GetBytes(plainText); hashBytes = sha1.ComputeHash(text); } return Convert.ToBase64String(hashBytes); } I understand metrics are meaningless outside of a broader context and understanding, and programmers should exercise discretion. While I could boost the score up to 76 with return Convert.ToBase64String(sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText))), I shouldn't. I would clearly be just playing with numbers and it isn't truly any more readable or maintainable at that point. I am curious though as to what the logic might be behind the increase in this case. It's obviously not line-count.

    Read the article

  • Culture Sensitive GetHashCode

    - by user114928
    Hi, I'm writing a C# application that will process some text and provide basic query functions. In order to ensure the best possible support for other languages, I am allowing the users of the application to specify the System.Globalization.CultureInfo (via the "en-GB" style code) and also the full range of collation options using the System.Globalization.CompareOptions flags enum. For regular string comparison I'm then using a combination of: a) the String.Compare overload that accepts the culture and options; b) for some bulk processes, caching the byte data (KeyData) from CompareInfo.GetSortKey (the overload that accepts the options) and using a byte-by-byte comparison of the KeyData. This seemed fine (although please comment if you think these two methods shouldn't be mixed), but then I had reason to use the HashSet<T> class, which only has a constructor overload for IEqualityComparer<T>. The MS documentation seems to suggest that I should use StringComparer (which implements both IEqualityComparer<String> and IComparer<String>), but this only seems to support the "IgnoreCase" option from CompareOptions and not "IgnoreKanaType", "IgnoreSymbols", "IgnoreWidth", etc. I'm assuming that a StringComparer that ignores these other options could produce different hashcodes for two strings that might be considered the same using my other comparison options. I'd therefore get incorrect results from my application. My only thought at the moment is to create my own IEqualityComparer<String> that generates a hashcode from the SortKey.KeyData and compares equality by using the String.Compare overload. Any suggestions?
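
    That last idea seems workable; here is a sketch of what such a comparer might look like, hashing the SortKey.KeyData and delegating equality to CompareInfo.Compare with the same options so the two cannot disagree:

        // Sketch: an IEqualityComparer<string> driven by CultureInfo + CompareOptions
        // (needs System.Collections.Generic and System.Globalization).
        sealed class CollationComparer : IEqualityComparer<string>
        {
            private readonly CompareInfo compareInfo;
            private readonly CompareOptions options;

            public CollationComparer(CultureInfo culture, CompareOptions options)
            {
                this.compareInfo = culture.CompareInfo;
                this.options = options;
            }

            public bool Equals(string x, string y)
            {
                if (ReferenceEquals(x, y)) return true;
                if (x == null || y == null) return false;
                return compareInfo.Compare(x, y, options) == 0;
            }

            public int GetHashCode(string obj)
            {
                // Strings that compare equal under these options produce the same
                // sort-key bytes, so they always hash identically.
                byte[] key = compareInfo.GetSortKey(obj, options).KeyData;
                unchecked
                {
                    int hash = 17;
                    foreach (byte b in key)
                        hash = hash * 31 + b;
                    return hash;
                }
            }
        }

    It could then be handed to the set as new HashSet<string>(new CollationComparer(culture, options)).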

    Read the article
