Search Results

Search found 3449 results on 138 pages for 'tranquil byte'.

Page 119/138 | < Previous Page | 115 116 117 118 119 120 121 122 123 124 125 126  | Next Page >

  • Enforce strong type checking in C (type strictness for typedefs)

    - by quinmars
    Is there a way to enforce explicit casts for typedefs of the same underlying type? I have to deal with UTF-8, and sometimes I get confused between the indices for the character count and the byte count. So it would be nice to have some typedefs: typedef unsigned int char_idx_t; typedef unsigned int byte_idx_t; with the addition that you need an explicit cast between them: char_idx_t a = 0; byte_idx_t b; b = a; // compile warning b = (byte_idx_t) a; // ok I know that such a feature doesn't exist in C, but maybe you know a trick or a compiler extension (preferably gcc) that does that. EDIT: I still don't really like Hungarian notation in general. I couldn't use it for this problem because of project coding conventions, but I used it now in another similar case, where the types are also the same and the meanings are very similar. And I have to admit: it helps. I would never go and declare every integer with a leading "i", but as in Joel's example of overlapping types, it can be life saving.
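
    One trick often suggested for this (a sketch, not from the question -- the wrapper types and helper below are made up) is to wrap each index in a single-member struct, because C refuses to assign between distinct struct types:

        /* hypothetical wrapper types: the struct makes the compiler reject b = a */
        typedef struct { unsigned int v; } char_idx_t;
        typedef struct { unsigned int v; } byte_idx_t;

        static inline byte_idx_t to_byte_idx(char_idx_t c) { return (byte_idx_t){ c.v }; }

        int main(void)
        {
            char_idx_t a = { 0 };
            byte_idx_t b;
            /* b = a;  -- error: incompatible types */
            b = to_byte_idx(a);   /* the conversion now has to be spelled out */
            return (int)b.v;
        }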

    Read the article

  • Trouble when changing pixel data with alpha on PNG on iPhone -- okay on simulator

    - by Ted
    I'm trying to change the color of the pixels (lighten or darken) without changing the value of the alpha channel, using CGDataProviderCopyData. I leave every 4th data byte untouched. It works fine on the iPhone simulator; however, on the real thing the alpha goes white as I increase the values of the other bytes. I've tried changing just the first byte, or the second, or the third. Does anybody have any idea what is going on? The basic code is borrowed from Jorge. I like this simple approach -- I'm new to this. But I want to make it work with PNG images with some transparency. Here is most of the code by Jorge: CFDataRef CopyImagePixels(CGImageRef inImage){ return CGDataProviderCopyData(CGImageGetDataProvider(inImage)); } CGImageRef img=originalImage.CGImage; CFDataRef dataref=CopyImagePixels(img); UInt8 *data=(UInt8 *)CFDataGetBytePtr(dataref); int length=CFDataGetLength(dataref); for(int index=0;index<length;index+=4){ for(int i=0;i<3;i++){ if(data[index+i]+value>255){ data[index+i]=255; }else{ data[index+i]+=value; } } } size_t width=CGImageGetWidth(img); size_t height=CGImageGetHeight(img); size_t bitsPerComponent=CGImageGetBitsPerComponent(img); size_t bitsPerPixel=CGImageGetBitsPerPixel(img); size_t bytesPerRow=CGImageGetBytesPerRow(img); CGColorSpaceRef colorspace=CGImageGetColorSpace(img); CGBitmapInfo bitmapInfo=CGImageGetBitmapInfo(img); CGImageAlphaInfo alphaInfo = kCGBitmapAlphaInfoMask(img); NSLog(@"bitmapinfo: %d",bitmapInfo); CFDataRef newData=CFDataCreate(NULL,data,length); CGDataProviderRef provider=CGDataProviderCreateWithCFData(newData); CGImageRef newImg=CGImageCreate(width,height,bitsPerComponent,bitsPerPixel,bytesPerRow,colorspace,bitmapInfo,provider,NULL,true,kCGRenderingIntentDefault); [iv setImage:[UIImage imageWithCGImage:newImg]]; CGImageRelease(newImg); CGDataProviderRelease(provider);

    Read the article

  • Interface for reading variable length files with header and footer.

    - by John S
    I could use some hints or tips for a decent interface for reading files with special characteristics. The files in question have a header (~120 bytes), a body (1 byte to 3 GB) and a footer (4 bytes). The header contains information about the body and the footer is only a simple CRC32 value of the body. I use Java, so my idea was to extend the "InputStream" class and add a constructor such as "public MyInStream(InputStream in)" where I immediately read the header and then direct the overridden read()s at the body. Problem is, I can't give the user of the class the CRC32 value until the whole body has been read. Because the file can be 3 GB large, putting it all in memory is a bad idea. Reading it all into a temporary file is going to be a performance hit if there are many small files. I don't know how large the file is because the InputStream doesn't have to be a file, it could be a socket. Looking at it again, maybe extending InputStream is a bad idea. Thank you for reading the confused thoughts of a tired programmer. :)
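
    One possible shape for this, sketched below with hypothetical names (and assuming the header tells you the body length, so the four CRC bytes at the end can be kept out of the checksum -- not shown): a FilterInputStream that reads the header in its constructor, updates a CRC32 as the caller reads the body, and exposes the value once the caller has reached end of stream.

        // sketch: wrap the underlying stream and checksum the bytes as they pass through
        import java.io.FilterInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.zip.CRC32;

        public class MyInStream extends FilterInputStream {
            private final CRC32 crc = new CRC32();
            private final byte[] header = new byte[120];

            public MyInStream(InputStream in) throws IOException {
                super(in);
                int off = 0;                       // read the ~120-byte header up front
                while (off < header.length) {
                    int n = in.read(header, off, header.length - off);
                    if (n < 0) throw new IOException("truncated header");
                    off += n;
                }
            }

            @Override
            public int read() throws IOException {
                int b = super.read();
                if (b >= 0) crc.update(b);
                return b;
            }

            @Override
            public int read(byte[] buf, int off, int len) throws IOException {
                int n = super.read(buf, off, len);
                if (n > 0) crc.update(buf, off, n);
                return n;
            }

            public long bodyCrc() {                // meaningful once read() has returned -1
                return crc.getValue();
            }
        }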

    Read the article

  • How do the sizes of standard libraries compare for different languages

    - by Roman A. Taycher
    Someone was recently raving about how great jQuery was, how it made JavaScript a pleasure, and how the whole source code was so small (and one file). I looked it up on www.ohloh.net/ and it said it was about 30,000 lines of JavaScript; when I tried curl piped to wc it said about 5000 lines (strange discrepancy, that -- maybe test suites, etc.?). I thought, well, it isn't that strange, since JavaScript from what I've heard has a lot of fun dynamic tricks, so you can probably get away with a small library. But then I thought about other high-level languages, the ones with large standard libraries, and wondered how big the standard libraries are for python/ruby/haskell/pharo(smalltalk)/*ml/etc. (the libraries, not the VM stuff, to the degree it's possible to separate them). Anybody know? Any details (comment/blank/code lines, test code lines, lines in the language vs lines in FFI/byte-code) are appreciated! edit: P.S. since this started with me asking about jQuery, as a bonus could you please list the size of mega-frameworks? A mega-framework provides so much that people using an x mega-framework in language y might sometimes refer to programming in xy, or even x, rather than in y (e.g. Qt, jQuery, etc.).

    Read the article

  • ANSI or OEM Codepage when using MME and DirectMusic?

    - by Carl Seleborg
    Hello, I noticed that when reading MIDI port names from MME, the names are multi-byte strings encoded using the ANSI Codepage, which my app uses by default. When receiving those names from the DirectMusic driver, the names are wide-character strings encoded with the OEM Codepage. See this article by Raymond Chen for a quick refresher on Codepages. On my German system, this means that when using the current codepage, which turns out to be the ANSI one, I get "Audiogerät" from MME, and "Audiogeröt" from DirectMusic, the latter being wrong. This gets fixed when I treat that last name as OEM-encoded instead. So how do I know with which codepage to decode those names? Why does the name coming from DirectMusic get encoded differently? Does it come from the USB driver? The COM framework? DirectMusic? How can I know for sure which codepage to use when reading the names of my MIDI ports? For info: I use the MultiByteToWideChar() and WideCharToMultiByte() functions to perform the conversions, with CP_ACP and CP_OEMCP as argument for the codepage to use. I use midiInGetDeviceCaps() to get MIDI port information from the MME subsystem... ... and convert MIDIINCAPS.szPname using the CP_ACP (ANSI) codepage. I use IID_IDirectMusic8::EnumPort() to get port information from DirectMusic... ... and convert DMUS_PORTCAPS.wszDescription using the CP_OEMCP codepage.
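
    For illustration, a minimal sketch of the conversion the question describes (buffer names are hypothetical): the same MultiByteToWideChar call, with the codepage chosen explicitly per source -- CP_ACP for the MME name, CP_OEMCP if the name turns out to be OEM-encoded.

        /* sketch: decode a narrow port name with an explicitly chosen codepage */
        #include <windows.h>

        int NarrowToWide(UINT codePage, const char *name, WCHAR *out, int cchOut)
        {
            /* -1 means "the input is NUL-terminated"; the return value is the number
               of wide characters written, or 0 on failure */
            return MultiByteToWideChar(codePage, 0, name, -1, out, cchOut);
        }

        /* usage: NarrowToWide(CP_ACP,   caps.szPname, wide, 32);  -- MME port name
                  NarrowToWide(CP_OEMCP, oemName,      wide, 32);  -- OEM-encoded name */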

    Read the article

  • Python line file iteration and strange characters

    - by muckabout
    I have a huge gzipped text file which I need to read, line by line. I go with the following: for i, line in enumerate(codecs.getreader('utf-8')(gzip.open('file.gz'))): print i, line At some point late in the file, the Python output diverges from the file. This is because lines are getting broken due to weird special characters that Python thinks are newlines. When I open the file in 'vim', they are correct, but the suspect characters are formatted weirdly. Is there something I can do to fix this? I've tried other codecs, including utf-16 and latin-1. I've also tried with no codec. I looked at the file using 'od'. Sure enough, there are \n characters where they shouldn't be. But the "wrong" ones are preceded by a weird character. I think there's some encoding here with some characters being 2 bytes, with the trailing byte looking like a \n if not viewed properly. If I replace: gzip.open('file.gz') with: os.popen('zcat file.gz') it works fine (and is actually quite a bit faster). But I'd like to know where I'm going wrong.
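
    One workaround consistent with the zcat observation (a sketch, not a confirmed diagnosis): iterate over the gzip stream in binary mode so that only b'\n' ends a line, and decode each line afterwards. The codecs reader's readline() also breaks on characters such as U+0085 and U+2028, which matches the described symptom.

        # Python 2 style sketch, matching the question's code
        import gzip

        fh = gzip.open('file.gz', 'rb')
        for i, raw in enumerate(fh):          # splits on b'\n' only
            line = raw.decode('utf-8')        # decode after splitting
            print i, line
        fh.close()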

    Read the article

  • Having an issue with Nullable MySQL columns in SubSonic 3.0 templates

    - by omegawkd
    Looking at this line in the Settings.ttinclude: string CheckNullable(Column col){ string result=""; if(col.IsNullable && col.SysType !="byte[]" && col.SysType !="string") result="?"; return result; } It describes how the template determines whether the column is nullable and returns either "" or "?" to the generated code. Now, I'm not too familiar with the ? nullable type modifier, but from what I can see a cast is required. For instance, if I have a nullable integer MySQL column and I generate the code using the default template files, it returns a line similar to this: int? _User_ID; When trying to compile the project I get the error: Cannot implicitly convert type 'int?' to 'int'. An explicit conversion exists (are you missing a cast?) I checked the Settings files for the other database types and they all seem to have the same routine. So my question is, is this behaviour expected or is this a bug? I need to solve it one way or the other before I can proceed. Thanks for your help.
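
    The nullable member is arguably the intended mapping for a nullable column; it is the consuming code that has to say what a NULL should become. A small C# sketch of what the compiler error is asking for (the member name is just the generated one from the question):

        int? _User_ID = null;                     // what the template generates

        int a = _User_ID ?? 0;                    // supply a default when the column is NULL
        int b = _User_ID.GetValueOrDefault();     // same idea
        int c = (int)_User_ID;                    // explicit cast -- throws if it is null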

    Read the article

  • Why is execution of a portion of code loaded from an external file not halted by the OS?

    - by menjaraz
    I've harnessed a project released on the internet a long time ago. Here are the details, all irrelevant things being stripped off for the sake of concision and clarity. A binary file whose content is described below HEX DUMP: 55 89 E5 83 EC 08 C7 45 FC 00 00 00 00 8B 45 FC 3B 45 10 72 02 EB 19 8B 45 FC 8B 55 0C 01 C2 8B 45 FC 03 45 08 8A 00 88 02 8D 45 FC FF 00 EB DD C6 45 FA 00 83 7D 10 01 76 6C 80 7D FA 00 74 02 EB 64 C6 45 FA 01 C7 45 FC 00 00 00 00 8B 45 10 48 39 45 FC 72 02 EB E2 8B 45 FC 8B 4D 0C 01 C1 8B 45 FC 03 45 0C 8D 50 01 8A 01 3A 02 73 30 8B 45 FC 03 45 0C 8A 00 88 45 FB 8B 45 FC 8B 55 0C 01 C2 8B 45 FC 03 45 0C 40 8A 00 88 02 8B 45 FC 03 45 0C 8D 50 01 8A 45 FB 88 02 C6 45 FA 00 8D 45 FC FF 00 EB A7 C9 C2 0C 00 90 90 90 90 90 90 is loaded into memory and executed using the following method snippet var MySrcArray, MyDestArray: array [1 .. 15] of Byte; // ... MyBuffer: Pointer; TheProc: procedure; SortIt: procedure(ASrc, ADest: Pointer; ASize: LongWord); stdcall; begin // Initialization of MySrcArray with random Bytes and display here ... // Instructions of loading of the binary file into MyBuffer using merely GetMem here ... @SortIt := MyBuffer; try SortIt(@MySrcArray, @MyDestArray, 15); // Display of MyDestArray (The outcome of the processing !) except // Invalid code error handling end; // Cleaning code here ... end; works like a charm on my box. My question: how come it works without using VirtualAlloc and/or VirtualProtect?
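
    Whether this is stopped depends on the DEP policy of the process; the by-the-book version the question alludes to puts the bytes in memory that is explicitly marked executable. A C sketch of that variant (the function and type names are made up):

        /* sketch of the DEP-friendly variant: put the code in memory that is
           explicitly marked executable instead of an ordinary GetMem/malloc block */
        #include <windows.h>
        #include <string.h>

        typedef void (__stdcall *SortItFn)(void *src, void *dest, unsigned long size);

        SortItFn LoadSortIt(const unsigned char *code, size_t len)
        {
            void *buf = VirtualAlloc(NULL, len, MEM_COMMIT | MEM_RESERVE,
                                     PAGE_EXECUTE_READWRITE);
            if (buf == NULL)
                return NULL;
            memcpy(buf, code, len);
            return (SortItFn)buf;
        }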

    Read the article

  • Deserialize System.OutOfMemoryException

    - by clanier9
    I've got a serializable class called Cereal with several public fields, shown here <Serializable> Public Class Cereal Public id As Integer Public cardType As Type Public attacker As String Public defender As String Public placedOn As String Public attack As Boolean Public placed As Boolean Public played As Boolean Public text As String Public Sub New() End Sub End Class My client computer is sending a new Cereal to the host by serializing it, shown here 'sends data to host stream (c1) Private Sub cSendText(ByVal Data As String) Dim bf As New BinaryFormatter Dim c As New Cereal c.text = Data bf.Serialize(mobjClient.GetStream, c) End Sub The host listens to the stream for activity and when something gets put on it, it is supposed to deserialize it to a new Cereal, shown here 'accepts data sent from the client, raised when data on host stream (c2) Private Sub DoReceive(ByVal ar As IAsyncResult) Dim intCount As Integer Try 'find how many bytes of data there are SyncLock mobjClient.GetStream intCount = mobjClient.GetStream.EndRead(ar) End SyncLock 'if none, we are disconnected If intCount < 1 Then RaiseEvent Disconnected(Me) Exit Sub End If Dim bf As New BinaryFormatter Dim c As New Cereal c = CType(bf.Deserialize(mobjClient.GetStream), Cereal) If c.text.Length > 0 Then RaiseEvent LineReceived(Me, c.text) Else RaiseEvent CardReceived(Me, c) End If 'starts listening for action on stream again SyncLock mobjClient.GetStream mobjClient.GetStream.BeginRead(arData, 0, 1024, AddressOf DoReceive, Nothing) End SyncLock Catch e As Exception RaiseEvent Disconnected(Me) End Try End Sub When the following line executes, I get a System.OutOfMemoryException and I cannot figure out why this isn't working. c = CType(bf.Deserialize(mobjClient.GetStream), Cereal) The stream is a TCPClient stream. I'm new to serialization/deserialization and using Visual Studio 11
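
    One thing that can produce exactly this exception is handing BinaryFormatter a stream whose framing has been disturbed (here the same NetworkStream is also being consumed by BeginRead into arData), so the formatter reads a garbage length and tries to allocate a huge buffer. A hypothetical length-prefix framing sketch, with made-up helper names, that avoids deserializing straight off the socket:

        ' sketch: serialize to a MemoryStream, send a 4-byte length, then the payload;
        ' the receiver reads exactly that many bytes before deserializing.
        ' assumes Imports System.IO, System.Net.Sockets, System.Runtime.Serialization.Formatters.Binary
        Private Sub SendCereal(ByVal stream As NetworkStream, ByVal c As Cereal)
            Dim bf As New BinaryFormatter()
            Dim ms As New MemoryStream()
            bf.Serialize(ms, c)
            Dim payload As Byte() = ms.ToArray()
            stream.Write(BitConverter.GetBytes(payload.Length), 0, 4)
            stream.Write(payload, 0, payload.Length)
        End Sub

        Private Function ReceiveCereal(ByVal stream As NetworkStream) As Cereal
            Dim lenBuf(3) As Byte
            ReadExactly(stream, lenBuf, 4)
            Dim length As Integer = BitConverter.ToInt32(lenBuf, 0)
            Dim payload(length - 1) As Byte
            ReadExactly(stream, payload, length)
            Dim bf As New BinaryFormatter()
            Return CType(bf.Deserialize(New MemoryStream(payload)), Cereal)
        End Function

        Private Sub ReadExactly(ByVal stream As NetworkStream, ByVal buf As Byte(), ByVal count As Integer)
            Dim offset As Integer = 0
            While offset < count
                Dim n As Integer = stream.Read(buf, offset, count - offset)
                If n <= 0 Then Throw New IOException("connection closed")
                offset += n
            End While
        End Sub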

    Read the article

  • Is it possible to receive an SMS message in an appWidget?

    - by cappuccino
    Is it possible to receive an SMS message in an appWidget? I looked at the Android sample source (API Demos). In API Demos, the ExampleAppWidgetProvider class extends AppWidgetProvider, not Activity. So I guess it is impossible to register an SMS receiver like this: rcvIncoming = new BroadcastReceiver() { @Override public void onReceive(Context context, Intent intent) { Log.i("telephony", "SMS received"); Bundle data = intent.getExtras(); if (data != null) { // SMS uses a data format known as a PDU Object pdus[] = (Object[]) data.get("pdus"); String message = "New message:\n"; String sender = null; for (Object pdu : pdus) { SmsMessage part = SmsMessage.createFromPdu((byte[])pdu); message += part.getDisplayMessageBody(); if (sender == null) { sender = part.getDisplayOriginatingAddress(); } } Log.i(sender, message); } } }; registerReceiver(rcvIncoming, new IntentFilter("android.provider.Telephony.SMS_RECEIVED")); My goal is to receive SMS messages in my custom appWidget. Any help would be appreciated!!
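
    Right -- registerReceiver() needs a context to call it from, which a widget provider doesn't naturally have. The usual route (a sketch, class name hypothetical) is to declare the receiver statically in AndroidManifest.xml and let its onReceive() update the widget; the RECEIVE_SMS permission also has to be declared:

        <!-- static registration in AndroidManifest.xml; the permission element goes
             outside <application>, the receiver inside it -->
        <uses-permission android:name="android.permission.RECEIVE_SMS" />

        <receiver android:name=".SmsListener">
            <intent-filter>
                <action android:name="android.provider.Telephony.SMS_RECEIVED" />
            </intent-filter>
        </receiver>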

    Read the article

  • Python and Unicode: How everything should be Unicode

    - by A A
    Forgive me if this is a long question: I have been programming in Python for around six months. Self-taught, starting with the Python tutorial and then SO and then just using Google for stuff. Here is the sad part: No one told me all strings should be Unicode. No, I am not lying or making this up, but where does the tutorial mention it? And most examples I see also just make use of byte strings, instead of Unicode strings. I was just browsing and came across this question on SO, which says that every string in Python should be a Unicode string. This pretty much made me cry! I read that every string in Python 3.0 is Unicode by default, so my questions are for 2.x: Should I do a: print u'Some text' or just print 'Text' ? Everything should be Unicode, does this mean, like say I have a tuple: t = ('First', 'Second'), it should be t = (u'First', u'Second')? I read that I can do a from __future__ import unicode_literals and then every string will be a Unicode string, but should I do this inside a container also? When reading/writing to a file, I should use the codecs module. Right? Or should I just use the standard way of reading/writing and encode or decode where required? If I get the string from say raw_input(), should I convert that to Unicode also? What is the common approach to handling all of the above issues in 2.x? The from __future__ import unicode_literals statement? Sorry for being such a noob, but this changes what I have been doing for a long time and so clearly I am confused.
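
    The usual 2.x pattern behind those questions, as a small sketch (it assumes a UTF-8 terminal and UTF-8 files -- substitute the real encodings): decode bytes to unicode at the boundaries and keep everything unicode in between.

        # -*- coding: utf-8 -*-
        import codecs

        name = raw_input("name: ").decode('utf-8')   # bytes from the terminal -> unicode
        t = (u'First', u'Second')                    # keep literals unicode

        out = codecs.open('out.txt', 'w', encoding='utf-8')
        out.write(u"hello %s %s %s\n" % ((name,) + t))   # unicode in, UTF-8 bytes on disk
        out.close()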

    Read the article

  • OracleGlobalization.SetThreadInfo() ORA-12705 Error

    - by michele
    Hi guys! I'm stuck on a problem I cannot work around! I have an Oracle 11 client, with the registry key set to AMERICAN_AMERICA.WE8ISO8859P1. I cannot edit this key, but my application must get data from Oracle in the Italian culture format. So I want to change the culture info from my application only. I'm trying to use the OracleGlobalization class in the ODP.NET library before my Application.Run(), to set the culture for my thread: OracleGlobalization og = OracleGlobalization.GetThreadInfo(); //OracleGlobalization.SetThreadInfo(OracleGlobalization.GetThreadInfo()); og.Calendar = "GREGORIAN"; og.Comparison = "BINARY"; og.Currency = "€"; og.DateFormat = "DD-MON-RR"; og.DateLanguage = "ITALIAN"; og.DualCurrency = "€"; og.ISOCurrency = "ITALY"; og.Language = "ITALIAN"; og.LengthSemantics = "BYTE"; og.NCharConversionException = false; og.NumericCharacters = ",."; og.Sort = "WEST_EUROPEAN"; og.Territory = "ITALY"; OracleGlobalization.SetThreadInfo(og); I always get the same error: ORA-12705: Cannot access NLS data files or invalid environment specified. I really don't know how to solve this problem! Any hint? I'm working on a Win7 PC with Visual Studio 2008. Thank you in advance!

    Read the article

  • Stored procedure woes ... inserting binary ...

    - by Wardy
    Ok so I have this stored proc in my SQL 2008 database (works in 2005 too / used to) ... CREATE PROCEDURE [dbo].[SetBinaryContent] @Ref nvarchar(50), @Content varbinary(MAX), @ObjectID uniqueidentifier AS BEGIN DELETE ObjectContent WHERE ObjectId = @ObjectID AND Ref = @Ref IF DATALENGTH(@Content) > 5 BEGIN INSERT INTO ObjectContent (Ref,BinaryContent,ObjectId) VALUES (@Ref,@Content,@ObjectId) END UPDATE Objects SET [Status] = 1 WHERE ID = @ObjectID END Relatively simple: I take a byte array in C# and chuck it in @Content, I then give it a GUID and string for the other params and off we go. ... Great, it used to work ... but it doesn't anymore ... so erm ... What's wrong with this stored proc? I've stepped through my C# code thinking I screwed up somehow in that, but it definitely adds the params and gives them the correct values, so what would cause the server to just stop executing this stored proc correctly? When called, this proc executes but nothing changes in the db ... no new records are added to the ObjectContent table. Weird huh ...

    Read the article

  • How do you save and retrieve a Key/IV pair securely?

    - by Shawn Steward
    I'm using VB.Net's RijndaelManaged (RM) to encrypt files, using the RM.GenerateKey and RM.GenerateIV methods to generate the Key and IV and encrypting the file using the CryptoStream class. I'm planning on saving this Key and IV to a file and want to make sure I'm doing it the right way. I am combining the IV+Key, encrypting that with my RSA Public key and writing it out to a file. Then, to decrypt, I use the RSA Private key on this file to get the IV+Key, split them up, set RM.Key and RM.IV to these values and run the decryptor. Is this the best method to accomplish this, or is there a preferred method for saving the IV & Key? Also, what's the best way to construct and deconstruct the byte array? I used the .Concat method to join them together and that seems to work well, but I can't seem to find anything as easy for taking it apart. I played with the .Take method, which takes the first x bytes, and it works for the first part, but I can't find anything that gets the rest of it.
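
    For the construct/deconstruct part, one straightforward sketch (assuming the RijndaelManaged defaults of a 16-byte IV and a 32-byte key; rm is the RijndaelManaged instance):

        ' sketch: pack IV + Key into one array before RSA-encrypting, and split it
        ' again afterwards; assumes a 16-byte IV and a 32-byte (256-bit) key
        Dim packed(rm.IV.Length + rm.Key.Length - 1) As Byte
        Buffer.BlockCopy(rm.IV, 0, packed, 0, rm.IV.Length)
        Buffer.BlockCopy(rm.Key, 0, packed, rm.IV.Length, rm.Key.Length)

        ' ...later, after the RSA private key has recovered 'packed'...
        Dim iv(15) As Byte
        Dim key(31) As Byte
        Buffer.BlockCopy(packed, 0, iv, 0, 16)
        Buffer.BlockCopy(packed, 16, key, 0, 32)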

    Read the article

  • Windows cmd encoding change causes Python crash.

    - by Alex
    First I change the Windows CMD encoding to utf-8 and run the Python interpreter: chcp 65001 python Then I try to print a unicode string inside it, and when I do this Python crashes in a peculiar way (I just get a cmd prompt in the same window). >>> import sys >>> print u'ëèæîð'.encode(sys.stdin.encoding) Any ideas why it happens and how to make it work? UPD: sys.stdin.encoding returns 'cp65001' UPD2: It just came to me that the issue might be connected with the fact that utf-8 uses a multi-byte character set (kcwu made a good point on that). I tried running the whole example with 'windows-1250' and got 'ëeaî?'. Windows-1250 uses a single-byte character set, so it worked for those characters it understands. However, I still have no idea how to make 'utf-8' work here. UPD3: Oh, I found out it is a known Python bug. I guess what happens is that Python copies the cmd encoding 'cp65001' to sys.stdin.encoding and tries to apply it to all the input. Since it fails to understand 'cp65001' it crashes on any input that contains non-ASCII characters.

    Read the article

  • Add two 32-bit integers in Assembler for use in VB6

    - by Emtucifor
    I would like to come up with the byte code in assembler (assembly?) for Windows machines to add two 32-bit longs and throw away the carry bit. I realize the "Windows machines" part is a little vague, but I'm assuming that the bytes for ADD are pretty much the same in all modern Intel instruction sets. I'm just trying to abuse VB a little and make some things faster. So... if the string "8A4C240833C0F6C1E075068B442404D3E0C20800" is the assembly code for SHL that can be "injected" into a VB6 program for a fast SHL operation expecting two Long parameters (we're ignoring here that 32-bit longs in VB6 are signed, just pretend they are unsigned), what is the hex string of bytes representing assembler instructions that will do the same thing to return the sum? The hex code above for SHL is, according to the author: mov eax, [esp+4] mov cl, [esp+8] shl eax, cl ret 8 I spit those bytes into a file and tried unassembling them in a windows command prompt using the old debug utility, but I figured out it's not working with the newer instruction set because it didn't like EAX when I tried assembling something but it was happy with AX. I know from comments in the source code that SHL EAX, CL is D3E0, but I don't have any reference to know what the bytes are for instruction ADD EAX, CL or I'd try it. I tried flat assembler and am not getting anything I can figure out how to use. I used it to assemble the original SHL code and got a very different result, not the same bytes. Help?
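
    Mirroring the SHL snippet's stdcall pattern, one plausible sequence -- worth verifying with a disassembler before injecting it -- is mov eax,[esp+4] / add eax,[esp+8] / ret 8:

        8B 44 24 04     mov  eax, [esp+4]    ; first Long argument
        03 44 24 08     add  eax, [esp+8]    ; add the second; the carry is simply discarded
        C2 08 00        ret  8               ; stdcall: pop the two 4-byte arguments

        ; as a hex string for the VB6 side: "8B44240403442408C20800"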

    Read the article

  • Execute a binary from memory in C# .NET when the binary is protected by 3rd-party software

    - by NoobTom
    I've got the following scenario: I have a C# application.exe. I pack application.exe inside TheMida, an anti-piracy/anti-reverse-engineering tool. I encrypt application.exe with AES-256 (I wrote my own AES encryption/decryption and it is working). Now, when I want to execute my application I do the following: 1) decrypt application.exe in memory, 2) execute application.exe with the following code: BinaryReader br = new BinaryReader(decOutput); byte[] bin = br.ReadBytes(Convert.ToInt32(decOutput.Length)); decOutput.Close(); br.Close(); // load the bytes into Assembly Assembly a = Assembly.Load(bin); // search for the Entry Point MethodInfo method = a.EntryPoint; if (method != null) { // create an instance of the Startup form Main method object o = a.CreateInstance(method.Name); // invoke the application starting point method.Invoke(o, null); The application does not execute correctly. Now, the problem, I think, is that this method only works for executing a .NET executable. Since I packed my application.exe inside TheMida, this does not work. Is there a workaround to this situation? Any suggestions? Thank you in advance.
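
    Two things worth separating here. Assembly.Load() only accepts a managed assembly, so once TheMida has wrapped the exe in a native loader this approach is unlikely to work on the packed file; and even for a plain managed exe, the entry point is a static method, so it is invoked with a null instance rather than via CreateInstance. A sketch of the latter (assuming bin really holds a managed assembly):

        // sketch: invoke the entry point as a static method (no instance is created)
        Assembly a = Assembly.Load(bin);
        MethodInfo main = a.EntryPoint;
        if (main != null)
        {
            object[] args = main.GetParameters().Length == 0
                ? null
                : new object[] { new string[0] };   // Main(string[] args) overload
            main.Invoke(null, args);
        }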

    Read the article

  • Is there a limit for the number of files in a directory on an SD card?

    - by jamesh
    I have a project written for Android devices. It generates a large number of files each day. These are all text files and images. The app uses a database to reference these files. The app is supposed to clean up these files after a little use (perhaps after a few days), but this process may or may not be working. This is not the subject of this question. Due to a historic accident, the organization of the files is somewhat naive: everything is in the same directory; a .hidden directory which contains a zero-byte .nomedia file to prevent the MediaScanner indexing it. Today, I am seeing an error reported: java.io.IOException: Cannot create: /sdcard/.hidden/file-4200.html at java.io.File.createNewFile(File.java:1263) Regarding the sdcard, I see it has plenty of storage left, but counting $ cd /Volumes/NO_NAME/.hidden $ ls | wc -w 9058 Deleting a number of files seems to have allowed the file creation for today to proceed. Regrettably, I did not try touching a new file to try and reproduce the error on the command line; I also deleted several hundred files rather than a handful. However, my question is: are there hard limits on file size or the number of files in a directory? Am I even on the right track here? Nota Bene: The SD card is as-is - i.e. I haven't formatted it, so I would guess it would be a FAT-* format. The FAT-32 format has a hard file-size limit of 2GB (well above the file sizes I am dealing with) and a limit on the number of files in the root directory. I am definitely not writing files in the root directory.

    Read the article

  • PayPal IPN Response Problem

    - by Gorkem Tolan
    I am having a problem with the PayPal IPN response. After payment is made by the customer, PayPal IPN returns this URL: www.mywebsite.com?orderid=32&tx=2AC67201DL3533325&st=Pending&amt=2.50&cc=USD&cm=&item_number=32 There are a couple of issues. 1- Postback field names are undefined or missing, thus I get the INVALID message. I am not sure whether my website is failing to read the POST variables. When I looked at the IPN history, it shows that each IPN has been sent with the complete URL. 2- The payment status keeps coming back as Pending. Does this cause the first issue? Thank you for your responses in advance. Here is the code: Dim strSandbox As String, strLive As String Dim req As HttpWebRequest strSandbox = "http://www.sandbox.paypal.com/cgi-bin/webscr/" strLive = "https://www.paypal.com/cgi-bin/webscr" req = CType(WebRequest.Create(strSandbox), HttpWebRequest) 'Set values for the request back req.Method = "POST" req.ContentType = "application/x-www-form-urlencoded" Dim param() As Byte param = Request.BinaryRead(HttpContext.Current.Request.ContentLength) Dim strRequest As String strRequest = Encoding.ASCII.GetString(param) strRequest = strRequest & "&cmd=_notify-validate" req.ContentLength = strRequest.Length 'Response.Write(strRequest) 'Send the request to PayPal and get the response Dim streamOut As StreamWriter streamOut = New StreamWriter(req.GetRequestStream(), System.Text.Encoding.ASCII) streamOut.Write(strRequest) streamOut.Close() Dim streamIn As StreamReader streamIn = New StreamReader(req.GetResponse().GetResponseStream()) Dim strResponse As String strResponse = streamIn.ReadToEnd() Response.Write(strResponse) streamIn.Close() If (strResponse = "VERIFIED") Then Response.Redirect("thankyou.aspx") ElseIf (strResponse = "INVALID") Then End If

    Read the article

  • how to implement a sparse_vector class

    - by Neil G
    I am implementing a templated sparse_vector class. It's like a vector, but it only stores elements that are different from their default constructed value. So, sparse_vector would store the index-value pairs for all indices whose value is not T(). I am basing my implementation on existing sparse vectors in numeric libraries-- though mine will handle non-numeric types T as well. I looked at boost::numeric::ublas::coordinate_vector and eigen::SparseVector. Both store: size_t* indices_; // a dynamic array T* values_; // a dynamic array int size_; int capacity_; Why don't they simply use vector<pair<size_t, T>> data_; My main question is what are the pros and cons of both systems, and which is ultimately better? The vector of pairs manages size_ and capacity_ for you, and simplifies the accompanying iterator classes; it also has one memory block instead of two, so it incurs half the reallocations, and might have better locality of reference. The other solution might search more quickly since the cache lines fill up with only index data during a search. There might also be some alignment advantages if T is an 8-byte type? It seems to me that vector of pairs is the better solution, yet both containers chose the other solution. Why?
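
    To make the comparison concrete, a minimal sketch of the vector<pair> layout with a binary search over the index field (not a claim about what the libraries should have done):

        // sketch: sorted vector<pair<index, value>> with lower_bound lookup
        #include <algorithm>
        #include <cstddef>
        #include <utility>
        #include <vector>

        template <typename T>
        class sparse_vector {
            std::vector<std::pair<std::size_t, T>> data_;   // kept sorted by index
        public:
            T get(std::size_t i) const {
                auto it = std::lower_bound(data_.begin(), data_.end(), i,
                    [](const std::pair<std::size_t, T>& p, std::size_t idx) {
                        return p.first < idx;
                    });
                return (it != data_.end() && it->first == i) ? it->second : T();
            }
            void set(std::size_t i, const T& v) {
                auto it = std::lower_bound(data_.begin(), data_.end(), i,
                    [](const std::pair<std::size_t, T>& p, std::size_t idx) {
                        return p.first < idx;
                    });
                if (it != data_.end() && it->first == i) it->second = v;
                else data_.insert(it, {i, v});
            }
        };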

    Read the article

  • Does anyone have documentation on SHGetSysColor?

    - by Paulo Santos
    I'm trying to find any reference for this function, but I haven't found anything. All I have is an obscure KB from Microsoft referencing that a programmer made a boo-boo when coding a part of Windows Mobile 6: he should have called SHGetSysColor but instead called GetSysColor, which gives a completely different color for the same spec. From what I could gather, GetSysColor reads a color value from the registry at HKEY_LOCAL_MACHINE\Software\Microsoft\Color\SHColor or HKEY_LOCAL_MACHINE\Software\Microsoft\Color\DefSHColor and returns the color according to the index. In that registry key I have the following value for a standard Win Mobile 6.5 "DefSHColor"=hex:\ ff,00,00,00,00,00,00,00,dd,dd,dd,00,ff,ff,cc,00,ff,ff,ff,00,15,af,bc,00,15,\ af,bc,00,c9,e7,e9,00,14,9c,a7,00,ff,ff,ff,00,14,9c,a7,00,14,9c,a7,00,14,9c,\ a7,00,15,af,bc,00,14,9c,a7,00,ff,ff,ff,00,c9,e7,e9,00,37,c7,d3,00,37,c7,d3,\ 00,ff,ff,ff,00,00,b7,c9,00,14,9c,a7,00,ff,ff,ff,00,15,af,bc,00,84,84,c3,00,\ 15,af,bc,00,14,9c,a7,00,ff,ff,ff,00,ff,ff,ff,00,00,00,00,00,ff,ff,ff,00,00,\ 00,00,00,ff,ff,ff,00,2e,44,4f,00,00,14,3c,00,00,f0,ff,00,ff,ff,ff,00,c9,e7,\ e9,00,14,9c,a7,00,ff,ff,ff,00,14,9c,a7,00 And I realized that each group of four bytes represents a different color (RR,GG,BB,AA -- the AA I'm assuming here, as every color there has the AA byte as 00, which would mean that it's a solid color). What I can't get a fix on is what each index means, as I have 41 different colors in there. Googling for SHGetSysColor gives me only 7 matches, two of them are the KB from Microsoft (one in English, the other in French), one is from a Russian site (which I don't read), yet another two are from freepascal.org, and one from Koders.com that describes the commctl.def file. I went to commctl.h trying to see if I could find a reference to this function, and found absolutely nothing. No search on MSDN, whether from Google, Bing, or the default MSDN search, gave me any result. So, does anyone know what indexes we are talking about here?

    Read the article

  • Silverlight Socket Constantly Returns With Empty Buffer

    - by Benny
    I am using Silverlight to interact with a proxy application that I have developed but, without the proxy sending a message to the Silverlight application, it executes the receive completed handler with an empty buffer ('\0's). Is there something I'm doing wrong? It is causing a major memory leak. this._rawBuffer = new Byte[this.BUFFER_SIZE]; SocketAsyncEventArgs receiveArgs = new SocketAsyncEventArgs(); receiveArgs.SetBuffer(_rawBuffer, 0, _rawBuffer.Length); receiveArgs.Completed += new EventHandler<SocketAsyncEventArgs>(ReceiveComplete); this._client.ReceiveAsync(receiveArgs); if (args.SocketError == SocketError.Success && args.LastOperation == SocketAsyncOperation.Receive) { // Read the current bytes from the stream buffer int bytesRecieved = this._client.ReceiveBufferSize; // If there are bytes to process else the connection is lost if (bytesRecieved > 0) { try { //Find out what we just received string messagePart = UTF8Encoding.UTF8.GetString(_rawBuffer, 0, _rawBuffer.GetLength(0)); //Take out any trailing empty characters from the message messagePart = messagePart.Replace('\0'.ToString(), ""); //Concatenate our current message with any leftovers from previous receipts string fullMessage = _theRest + messagePart; int seperator; //While the index of the seperator (LINE_END defined & initiated as private member) while ((seperator = fullMessage.IndexOf((char)Messages.MessageSeperator.Terminator)) > 0) { //Pull out the first message available (up to the seperator index string message = fullMessage.Substring(0, seperator); //Queue up our new message _messageQueue.Enqueue(message); //Take out our line end character fullMessage = fullMessage.Remove(0, seperator + 1); } //Save whatever was NOT a full message to the private variable used to store the rest _theRest = fullMessage; //Empty the queue of messages if there are any while (this._messageQueue.Count > 0) { ... } } catch (Exception e) { throw e; } // Wait for a new message if (this._isClosing != true) Receive(); } } Thanks in advance.
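
    One detail that stands out (a guess, not a confirmed diagnosis): the handler reads this._client.ReceiveBufferSize, which is the buffer's capacity and is always positive, instead of the number of bytes the completed operation actually delivered. SocketAsyncEventArgs reports that count in BytesTransferred, and a zero there is how a closed connection shows up -- which would explain the all-'\0' buffers. A sketch:

        // sketch: use the byte count of the completed receive, not the buffer capacity
        void ReceiveComplete(object sender, SocketAsyncEventArgs args)
        {
            int bytesReceived = args.BytesTransferred;   // 0 here means the peer closed
            if (args.SocketError != SocketError.Success || bytesReceived == 0)
            {
                // treat as disconnected instead of decoding an all-'\0' buffer
                return;
            }
            string messagePart = Encoding.UTF8.GetString(_rawBuffer, 0, bytesReceived);
            // ... queue up complete messages and call ReceiveAsync again ...
        }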

    Read the article

  • Audio output from Silverlight

    - by leecarter
    I'm looking to develop a Silverlight application which will take a stream of data (not an audio stream as such) from a web server. The data stream would then be manipulated to give audio of a certain format (G.711 a-Law, for example) which would then be converted into PCM so that additional effects can be applied (such as boosting the volume). I'm OK up to this point. I've got my data and converted the G.711 into PCM, but my problem is being able to output this PCM audio to the sound card. I'm basing a solution on some C# code intended for a .NET application, but in Silverlight there is a problem with trying to take a copy of a delegate (function pointer), which will be the topic of a separate question once I've produced a simple code sample. So, the question is... How can I output the PCM audio that I have held in a data structure (currently an array) in my Silverlight application to the user? (Please don't say write the byte values to a text box.) If it were an MP3 or WMA file I would play it using a MediaElement, but I don't want to have to make it into a file, as this would put a crimp on applying dynamic effects to the audio. I've seen a few posts from people saying low-level audio support is poor/non-existent in Silverlight, so I'm open to any suggestions/ideas people may have.

    Read the article

  • Flag bit computation and detection

    - by Majid
    Hi all, In some code I'm working on I have to take care of ten independent parameters which can take one of two values (0 or 1). This creates 2^10 distinct conditions. Some of the conditions never occur and can be left out, but those which do occur are still A LOT, and making a switch to handle all cases is insane. I want to use 10 if statements instead of a huge switch. For this I know I should use flag bits, or rather flag bytes, as the language is JavaScript and it's easier to work with a 10-byte string to represent a 10-bit binary value. Now, my problem is, I don't know how to implement this. I have seen this used in APIs where multiple-selectable options are exposed with numbers 1, 2, 4, 8, ..., 2^(n-1), which are the decimal equivalents of 1, 10, 100, 1000, etc. in binary. So if we make a call like bar = foo(7), bar will be an object with whatever options the three rightmost flags enable. I can convert the decimal number into binary and in each if statement check to see if the corresponding digit is set or not. But I wonder, is there a way to determine whether the n-th binary digit of a decimal number is zero or one, without actually doing the conversion?
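
    For the last question: yes -- the bitwise operators work on the number directly, so no conversion to a binary string is needed. A small JavaScript sketch (flag names made up):

        // sketch: test the n-th bit directly with bitwise operators
        var FLAG_A = 1 << 0, FLAG_B = 1 << 1, FLAG_C = 1 << 2;   // ... up to 1 << 9

        function hasFlag(flags, n) {
            return ((flags >> n) & 1) === 1;      // true if bit n is set
        }

        var options = 7;                          // FLAG_A | FLAG_B | FLAG_C
        if (options & FLAG_B) { /* handle the second parameter */ }
        hasFlag(options, 9);                      // false -- the tenth bit is not set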

    Read the article

  • How to tell when a Socket has been disconnected

    - by BowserKingKoopa
    On the client side I need to know when/if my socket connection has been broken. However the Socket.Connected property always returns true, even after the server side has been disconnected and I've tried sending data through it. Can anyone help me figure out what's going on here. I need to know when a socket has been disconnected. Socket serverSocket = null; TcpListener listener = new TcpListener(1530); listener.Start(); listener.BeginAcceptSocket(new AsyncCallback(delegate(IAsyncResult result) { Debug.WriteLine("ACCEPTING SOCKET CONNECTION"); TcpListener currentListener = (TcpListener)result.AsyncState; serverSocket = currentListener.EndAcceptSocket(result); }), listener); Socket clientSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); Debug.WriteLine("client socket connected: " + clientSocket.Connected);//should be FALSE, and it is clientSocket.Connect("localhost", 1530); Debug.WriteLine("client socket connected: " + clientSocket.Connected);//should be TRUE, and it is Thread.Sleep(1000); serverSocket.Close();//closing the server socket here Thread.Sleep(1000); clientSocket.Send(new byte[0]);//sending data should cause the socket to update its Connected property. Debug.WriteLine("client socket connected: " + clientSocket.Connected);//should be FALSE, but its always TRUE
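
    Connected only reflects the state as of the last send or receive, and Send(new byte[0]) doesn't actually put anything on the wire, so it never notices the closed peer. One common check (a sketch, not foolproof -- it can't detect an abrupt network loss until a real send fails) combines Poll with Available:

        // sketch of a common liveness check: a socket that is readable but has no
        // data pending has been closed by the other side
        static bool IsConnected(Socket s)
        {
            bool readable = s.Poll(1000, SelectMode.SelectRead);   // wait up to 1 ms
            return !(readable && s.Available == 0);
        }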

    Read the article
