Search Results

Search found 9471 results on 379 pages for 'technology tid bits'.

  • Setting up Mercurial/TortoiseHg to work with UltraCompare

    - by Tim Pietzcker
    Hi, I'm trying to get my favorite Windows diff/merge tool, UltraCompare (V7.00) to work with Mercurial/TortoiseHg. I have set up UltraCompare in my Mercurial.ini like this (only relevant bits shown): [merge-tools] UltraCompare.executable = C:\Programme\IDM Computer Solutions\UltraCompare\uc.com UltraCompare.args = $base $local $other UltraCompare.priority = 1 UltraCompare.gui = True UltraCompare.binary = True UltraCompare.checkconflicts = True UltraCompare.checkchanged = True However, the three-way-merge fails. The path names get messed up if the path to the repository that is being merged to contains a space. I have done some more testing, and I've found out (using Process Explorer) that uc.com is called with a broken command line if there is a space in the repository's path: Compare "C:\Programme\IDM Computer Solutions\UltraCompare\uc.exe" " "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~base.akr6au" "E:\Eigene Dateien\test\test-merge\test.txt" "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~other.b92442" and "C:\Programme\IDM Computer Solutions\UltraCompare\uc.com" "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~base.e7vryp" "E:\test\test-merge\test.txt" "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~other.u_qxme" There is an extraneous " after the path of the executable in the first example - not in the second (which works fine). To me, it seems as if UltraCompare is doing everything right, and that Mercurial/TortoiseHg are passing a defective command line to it. Would you say so, too? Is there a workaround? I've just updated to Mercurial 1.5/TortoiseHg 1.0, and the problem persists. Support for other merge tools (Beyond Compare and others) has been added, sadly not UltraCompare...

  • Why do System.IO.Log SequenceNumbers have variable length?

    - by Doug McClean
    I'm trying to use the System.IO.Log features to build a recoverable transaction system. I understand it to be implemented on top of the Common Log File System. The usual ARIES approach to write-ahead logging involves persisting log record sequence numbers in places other than the log (for example, in the header of the database page modified by the logged action). Interestingly, the documentation for CLFS says that such sequence numbers are always 64-bit integers. Confusingly, however, the .NET wrapper around those SequenceNumbers can be constructed from a byte[] but not from a UInt64. Its value can also be read as a byte[], but not as a UInt64. Inspecting the implementation of SequenceNumber.GetBytes() reveals that it can in fact return arrays of either 8 or 16 bytes. This raises a few questions: Why do the .NET sequence numbers differ in size from the CLFS sequence numbers? Why are the .NET sequence numbers variable in length? Why would you need 128 bits to represent such a sequence number? It seems like you would truncate the log well before using up a 64-bit address space (16 exbibytes, or around 10^19 bytes, more if you address longer words). If log sequence numbers are going to be represented as 128-bit integers, why not provide a way to serialize/deserialize them as pairs of UInt64s instead of rather pointlessly incurring heap allocations for short-lived new byte[]s every time you need to write/read one? Alternatively, why bother making SequenceNumber a value type at all? It seems an odd tradeoff to double the storage overhead of log sequence numbers just so you can have an untruncated log longer than a million terabytes, so I feel like I'm missing something here, or maybe several things. I'd much appreciate it if someone in the know could set me straight.
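
    A possible workaround for the byte[]-only surface, sketched under the assumption that the byte order SequenceNumber expects matches BitConverter's little-endian layout (the helper below is made up for illustration):

        using System;
        using System.IO.Log;

        // Sketch only: assumes the byte order SequenceNumber expects matches
        // BitConverter's (little-endian) layout, which the documentation does not guarantee.
        static class SequenceNumberHelper
        {
            // 8-byte (CLFS-style) sequence number from a single UInt64.
            public static SequenceNumber FromUInt64(ulong value)
            {
                return new SequenceNumber(BitConverter.GetBytes(value));
            }

            // 16-byte sequence number from two UInt64 halves.
            public static SequenceNumber FromUInt64Pair(ulong low, ulong high)
            {
                byte[] bytes = new byte[16];
                BitConverter.GetBytes(low).CopyTo(bytes, 0);
                BitConverter.GetBytes(high).CopyTo(bytes, 8);
                return new SequenceNumber(bytes);
            }

            // Handles both the 8- and 16-byte forms GetBytes() can return; high is zero for 8-byte values.
            public static void ToUInt64Pair(SequenceNumber sn, out ulong low, out ulong high)
            {
                byte[] bytes = sn.GetBytes();
                low = BitConverter.ToUInt64(bytes, 0);
                high = bytes.Length >= 16 ? BitConverter.ToUInt64(bytes, 8) : 0UL;
            }
        }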

  • How to install DBD::mysql on OS X server?

    - by Zoran Simic
    Trying to install DBD::mysql on OS X Server 10.6 (mac mini server). But I'm missing the mysql headers apparently. Since mysql is already part of OS X Server 10.6, I would like to NOT install anything else (no fink or darwin ports installs), just whatever's needed to get DBD::mysql installed and working. Do you know how I could do that? Do I have to install the headers somewhere? And if so, where? (again: I don't want to install another version of mysql on the box, want to use the version it came with). Is there a way to install DBD::mysql without compiling any C files? This is the error I get (the actual error is much longer, but these are the most meaningful bits, this is the first error reported). Checking if your kit is complete... Looks good Unrecognized argument in LIBS ignored: '-pipe' Note (probably harmless): No library found for -lmysqlclient Multiple copies of Driver.xst found in: /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/ /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level/auto/DBI/ at Makefile.PL line 907 Using DBI 1.611 (for perl 5.010000 on darwin-thread-multi-2level) installed in /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/ Writing Makefile for DBD::mysql cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm gcc-4.2 -c -I/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI -I/usr/include -fno-omit-frame-pointer -pipe -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -Os -DVERSION=\"4.014\" -DXS_VERSION=\"4.014\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" dbdimp.c In file included from dbdimp.c:20: dbdimp.h:22:49: error: mysql.h: No such file or directory dbdimp.h:23:45: error: mysqld_error.h: No such file or directory dbdimp.h:25:49: error: errmsg.h: No such file or directory

  • How to read/write high-resolution (24-bit, 8 channel) .wav files in Java?

    - by dB'
    I'm trying to write a Java application that manipulates high resolution .wav files. I'm having trouble importing the audio data, i.e. converting the .wav file into an array of doubles. When I use a standard approach an exception is thrown. AudioFileFormat as = AudioSystem.getAudioFileFormat(new File("orig.wav")); --> javax.sound.sampled.UnsupportedAudioFileException: file is not a supported file type Here's the file format info according to soxi: dB$ soxi orig.wav soxi WARN wav: wave header missing FmtExt chunk Input File : 'orig.wav' Channels : 8 Sample Rate : 96000 Precision : 24-bit Duration : 00:00:03.16 = 303526 samples ~ 237.13 CDDA sectors File Size : 9.71M Bit Rate : 24.6M Sample Encoding: 32-bit Floating Point PCM Can anyone suggest the simplest method for getting this audio into Java? I've tried using a few techniques. As stated above, I've experimented with the Java AudioSystem (on both Mac and Windows). I've also tried using Andrew Greensted's WavFile class, but this also fails (WavFileException: Compression Code 3 not supported). One workaround is to convert the audio to 16 bits using sox (with the -b 16 flag), but this is suboptimal since it increases the noise floor. Incidentally, I've noticed that the file CAN be read by libsndfile. Is my best bet to write a jni wrapper around libsndfile, or can you suggest something quicker? Note that I don't need to play the audio, I just need to analyze it, manipulate it, and then write it out to a new .wav file. * UPDATE * I solved this problem by modifying Andrew Greensted's WavFile class. His original version only read files encoded as integer values ("format code 1"); my files were encoded as floats ("format code 3"), and that's what was causing the problem. I'll post the modified version of Greensted's code when I get a chance. In the meantime, if anyone wants it, send me a message.

  • Debian packaging of a Python package

    - by chrisdew
    I need to write (or find) a script to create a Debian package (using python-support) from a Python package. The Python package will be pure Python (no C extensions). The Python package (for testing purposes) will just be a directory with an empty __init__.py file and a single Python module, package_test.py. The packaging script must use python-support to provide the correct bytecode for possible multiple installations of Python on a target platform (i.e. v2.5 and v2.6 on Ubuntu Jaunty). Most of the advice I find while googling consists of nasty hacks that don't even use python-support or python-central. I have so far spent hours researching this, and the best I can come up with is to hack around the script from an existing open source project - but I don't know which bits are required for what I'm doing. Has anyone here made a Debian package out of a Python package in a reasonably non-hacky way? I'm starting to think that it will take me more than a week to go from no knowledge of Debian packaging and python-support to getting a working script. How long has it taken others? Any advice? Chris.

  • How can I create objects based on dump file memory in a WinDbg extension?

    - by pj4533
    I work on a large application, and frequently use WinDbg to diagnose issues based on a DMP file from a customer. I have written a few small extensions for WinDbg that have proved very useful for pulling bits of information out of DMP files. In my extension code I find myself dereferencing c++ class objects in the same way, over and over, by hand. For example: Address = GetExpression("somemodule!somesymbol"); ReadMemory(Address, &addressOfPtr, sizeof(addressOfPtr), &cb); // get the actual address ReadMemory(addressOfObj, &addressOfObj, sizeof(addressOfObj), &cb); ULONG offset; ULONG addressOfField; GetFieldOffset("somemodule!somesymbolclass", "somefield", &offset); ReadMemory(addressOfObj+offset, &addressOfField, sizeof(addressOfField), &cb); That works well, but as I have written more extensions, with greater functionality (and accessing more complicated objects in our applications DMP files), I have longed for a better solution. I have access to the source of our own application of course, so I figure there should be a way to copy an object out of a DMP file and use that memory to create an actual object in the debugger extension that I can call functions on (by linking in dlls from our application). This would save me the trouble of pulling things out of the DMP by hand. Is this even possible? I tried obvious things like creating a new object in the extension, then overwriting it with a big ReadMemory directly from the DMP file. This seemed to put the data in the right fields, but freaked out when I tried to call a function. I figure I am missing something...maybe c++ pulls some vtable funky-ness that I don't know about? My code looks similar to this: SomeClass* thisClass = SomeClass::New(); ReadMemory(addressOfObj, &(*thisClass), sizeof(*thisClass), &cb);

  • Did the Unity Team fix that "generics handling" bug back in 2008?

    - by rasx
    At my level of experience with Unity it might be faster to ask whether the "generics handling" bug acknowledged by ctavares back in 2008 was fixed in a public release. Here was the problem (which might be my problem today): Hi, I get an exception when using .... container.RegisterType(typeof(IDictionary<,>), typeof(Dictionary<,>)); The exception is... "Resolution of the dependency failed, type = \"IDictionary`2\", name = \"\". Exception message is: The current build operation (build key Build Key[System.Collections.Generic.Dictionary`2[System.String,System.String], null]) failed: The current build operation (build key Build Key[System.Collections.Generic.Dictionary`2[System.String,System.String], null]) failed: The type Dictionary`2 has multiple constructors of length 2. Unable to disambiguate. When I attempt... IDictionary<string, string> myExampleDictionary = container.Resolve<IDictionary<string, string>>(); Here was the moderated response: There are no books that'll help, Unity is a little too new for publishers to have caught up yet. Unfortunately, you've run into a bug in our generics handling. This is currently fixed in our internal version, but it'll be a little while before we can get the bits out. In the meantime, as a workaround you could do something like this instead: public class WorkaroundDictionary<TKey, TValue> : Dictionary<TKey, TValue> { public WorkaroundDictionary() { } } container.RegisterType(typeof(IDictionary<,>), typeof(WorkaroundDictionary<,>)); The WorkaroundDictionary only has the default constructor so it'll inject no problem. Since the rest of your app is written in terms of IDictionary, when we get the fixed version done you can just replace the registration with the real Dictionary class, throw out the workaround, and everything will still just work. Sorry about the bug, it'll be fixed soon!
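
    For what it's worth, the workaround from that response does compile once the generic parameters are written out; a sketch, with the string/string type arguments taken from the exception message above:

        using System.Collections.Generic;
        using Microsoft.Practices.Unity;

        // Dictionary<TKey, TValue> has several two-parameter constructors, which is what
        // confuses the old constructor-selection logic; this subclass exposes only the
        // default constructor, so it resolves cleanly.
        public class WorkaroundDictionary<TKey, TValue> : Dictionary<TKey, TValue>
        {
            public WorkaroundDictionary() { }
        }

        public static class RegistrationExample
        {
            public static void Register()
            {
                IUnityContainer container = new UnityContainer();
                container.RegisterType(typeof(IDictionary<,>), typeof(WorkaroundDictionary<,>));

                IDictionary<string, string> map =
                    container.Resolve<IDictionary<string, string>>();
                map["key"] = "value";
            }
        }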

  • What is the correct JNA mapping for UniChar on Mac OS X?

    - by Trejkaz
    I have a C struct like this: struct HFSUniStr255 { UInt16 length; UniChar unicode[255]; }; I have mapped this in the expected way: public class HFSUniStr255 extends Structure { public UInt16 length; // UInt16 is just an IntegerType with length 2 for convenience. public /*UniChar*/ char[] unicode = new char[255]; //public /*UniChar*/ byte[] unicode = new byte[255*2]; //public /*UniChar*/ UInt16[] unicode = new UInt16[255]; public HFSUniStr255() { } public HFSUniStr255(Pointer pointer) { super(pointer); } } If I use this version, I get every second character of the string into my char[] ("aits D" for "Macintosh HD".) I am assuming that this is something to do with being on a 64-bit platform and JNA mapping the value to a 32-bit wchar_t but then chopping off the high 16 bits on each wchar_t on copying them back. If I use the byte[] version, I get data which decodes correctly using the UTF-16LE charset. If I use the UInt16[] version, I get the right code point for each character but it is then inconvenient to convert them back into a string. Is there some way I can define my type as char[], and yet have it convert correctly?

  • Anonymous union definition/declaration in a macro GNU vs VS2008

    - by Alan_m
    I am attempting to alter an IAR specific header file for a lpc2138 so it can compile with Visual Studio 2008 (to enable compatible unit testing). My problem involves converting register definitions to be hardware independent (not at a memory address) The "IAR-safe macro" is: #define __IO_REG32_BIT(NAME, ADDRESS, ATTRIBUTE, BIT_STRUCT) \ volatile __no_init ATTRIBUTE union \ { \ unsigned long NAME; \ BIT_STRUCT NAME ## _bit; \ } @ ADDRESS //declaration //(where __gpio0_bits is a structure that names //each of the 32 bits as P0_0, P0_1, etc) __IO_REG32_BIT(IO0PIN,0xE0028000,__READ_WRITE,__gpio0_bits); //usage IO0PIN = 0x0xAA55AA55; IO0PIN_bit.P0_5 = 0; This is my comparable "hardware independent" code: #define __IO_REG32_BIT(NAME, BIT_STRUCT)\ volatile union \ { \ unsigned long NAME; \ BIT_STRUCT NAME##_bit; \ } NAME; //declaration __IO_REG32_BIT(IO0PIN,__gpio0_bits); //usage IO0PIN.IO0PIN = 0xAA55AA55; IO0PIN.IO0PIN_bit.P0_5 = 1; This compiles and works but quite obviously my "hardware independent" usage does not match the "IAR-safe" usage. How do I alter my macro so I can use IO0PIN the same way I do in IAR? I feel this is a simple anonymous union matter but multiple attempts and variants have proven unsuccessful. Maybe the IAR GNU compiler supports anonymous unions and vs2008 does not. Thank you.

  • Create x509 certificate with openssl/makecert tool

    - by Zé Carlos
    I'm creating an x509 certificate using makecert with the following parameters: makecert -r -pe -n "CN=Client" -ss MyApp I want to use this certificate to encrypt and decrypt data with the RSA algorithm. I looked at the generated certificate in the Windows certificate store and everything seems OK (it has a private key, the public key is an RSA key with 1024 bits, and so on). Now I use this C# code to encrypt data: X509Store store = new X509Store("MyApp", StoreLocation.CurrentUser); store.Open(OpenFlags.ReadOnly); X509Certificate2Collection certs = store.Certificates.Find(X509FindType.FindBySubjectName, "Client", false); X509Certificate2 _x509 = certs[0]; using (RSACryptoServiceProvider rsa = (RSACryptoServiceProvider)_x509.PrivateKey) { byte[] dataToEncrypt = Encoding.UTF8.GetBytes("hello"); _encryptedData = rsa.Encrypt(dataToEncrypt, true); } When executing the Encrypt method, I receive a CryptographicException with the message "Bad key". I think the code is fine. Probably I'm not creating the certificate properly. Any comments? Thanks ---------------- EDIT -------------- If anyone knows how to create the certificate using OpenSSL, that is also a valid answer for me.
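
    One guess, offered as an assumption rather than a confirmed diagnosis: makecert may be creating a signature-only key, which RSA encryption typically rejects with a "Bad key" error. A sketch that requests an exchange key and encrypts with the public key, keeping the private key for decryption:

        // Certificate created with an exchange-capable key (note the added -sky exchange):
        //   makecert -r -pe -n "CN=Client" -sky exchange -ss MyApp

        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;
        using System.Text;

        public static class CertificateRoundTrip
        {
            public static void Demo()
            {
                X509Store store = new X509Store("MyApp", StoreLocation.CurrentUser);
                store.Open(OpenFlags.ReadOnly);
                X509Certificate2 cert = store.Certificates
                    .Find(X509FindType.FindBySubjectName, "Client", false)[0];

                // Encrypt with the public key...
                RSACryptoServiceProvider rsaPublic = (RSACryptoServiceProvider)cert.PublicKey.Key;
                byte[] cipher = rsaPublic.Encrypt(Encoding.UTF8.GetBytes("hello"), true);

                // ...and decrypt with the private key.
                RSACryptoServiceProvider rsaPrivate = (RSACryptoServiceProvider)cert.PrivateKey;
                byte[] plain = rsaPrivate.Decrypt(cipher, true); // round-trips back to "hello"

                store.Close();
            }
        }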

  • How do I set up jQuery to exclude classes in a function?

    - by user1497158
    I essentially only understand how to read bits of javascript and make modifications. I am using grid-slider, a script I purchased, but the code writer is mia at the moment, so hopefully someone here can help me. Basically, it's a slider and there are options to have links open like normal or to have links open in a panel on the same page. I want some links to open in a panel and others to open in the parent window. It seems to me that all that would be required to do this would be to activate the panel display function (which I've done) and then set up an exclude function to exclude certain uls or lis with a specific class from the function. I've read about the .not selector, but I don't see how to make it applicable to this code: enter code hereelse { enter code here if (this._displayOverlay) { if ($item.find(">.content").size() > 0) { $item.data("type", "static"); } else { var contentType = this.getContentType($link); var url = $link.attr("href"); $item.data({type:contentType, url:(typeof url != "undefined") ? url : ""}); } $item.css("cursor", "pointer").bind("click", {elem:this, i:i}, this.openOverlay); } $link.data("text", $item.find(">div:first").html()); $img = $link.find(">img"); } Can anyone help based on looking at this? Here's a link to the demo of the code: http://codecanyon.net/item/jquery-grid-style-slider/full_screen_preview/1204040 Thank you.

  • ASP.NET MVC: What is the correct way to return HTML from a controller to refresh a select list?

    - by Mark Redman
    Hi, I am new to ASP.NET MVC, particularly Ajax operations. I have a form with a jQuery dialog for adding items to a drop-down list. This posts to the controller action. If nothing (i.e. a void method) is returned from the controller action, the page returns having updated the database, but obviously there is no change to the form. What would be the best practice for updating the drop-down list with the added id/value and selecting the item? I think my options are: 1) Construct and return the HTML manually that makes up the new <select> tag [this would be easy enough and work, but seems like I am missing something] 2) Use some kind of "helper" to construct the new HTML [This seems to make sense] 3) Only return the id/value and add this to the list and select the item [This seems like overkill considering the item needs to be placed in the correct order etc] 4) Use some kind of Partial View [Does this mean creating additional forms within ascx controls? Not sure how this would affect submitting the main form it's on. Also, unless this is reusable by passing in parameters (not sure how that's done), maybe 2 is the option?] UPDATE: Having looked around a bit, it seems that generating HTML within the controller is not a good idea. I have seen other posts that render partial views to strings, which I guess is what I need and separates concerns (since the HTML bits are in the ascx). Any comments on whether that is good practice?
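
    A sketch of option 4 from the controller side; the partial view name and the in-memory item list below are placeholders, not part of the original code:

        using System.Collections.Generic;
        using System.Web.Mvc;

        public class ItemsController : Controller
        {
            // Stand-in storage; a real action would insert the row into the database instead.
            private static readonly List<KeyValuePair<int, string>> Items =
                new List<KeyValuePair<int, string>>();

            // Called by the jQuery dialog; returns only the markup for the <select>,
            // rendered by an assumed "ItemsDropDown" partial view.
            [HttpPost]
            public ActionResult AddItem(string name)
            {
                int id = Items.Count + 1;
                Items.Add(new KeyValuePair<int, string>(id, name));

                SelectList model = new SelectList(Items, "Key", "Value", id);
                return PartialView("ItemsDropDown", model);
            }
        }

    The assumed "ItemsDropDown" partial would render Html.DropDownList against that SelectList, and the dialog's success callback simply replaces the existing <select> with the returned fragment.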

  • ASP.NET Memory Usage in IIS is FAR greater than in DevEnv. Is this normal?

    - by Tom
    Greetings! I have an ASP.NET app that scrapes data from a handful of external pages, parses the relevant bits and displays them in a table. Total data retrieved is 3-4MB and the resulting page is about 1MB. I am using synchronous WebRequest GetResponse for the retrieval, but the same problem existed using an asynchronous BeginGetResponse/EndGetResponse process. There is no database access, no session storage, no caching, but an in-memory list of about 100 objects (total 1MB of data), plus a good amount of AJAX (AjaxControlToolkit). This issue appears on the very first run of the app, even if I have restarted IIS. The issue: When I run the app on my dev computer, the maximum commit charge is about 1.5GB. The biggest user, measured by Task Manager's VM Size, is WebDev.WebServer.exe (600MB). The app runs perfectly. When I run it on my rent-a-server (IIS 7.5, 1GB RAM), the maximum commit charge is over 3.8GB. The biggest user is w3wp.exe at 2.7GB. IIS grinds to a halt and spits out a timed-out error page. Given my limited server budget and the hope of having multiple simultaneous users, I'm kind of in a panic. Is this normal? If I bump the server RAM up to 4GB, will that be enough? Will multiple users require even more memory? Could the culprit be AJAX or the list of objects? Thanks for any insight you can provide.
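
    Not an answer to the sizing question, but one habit that keeps w3wp's working set down regardless of the cause is parsing each response straight off the stream and disposing everything promptly, so the 3-4MB of raw page data never lingers; a generic sketch (class and method names are placeholders):

        using System;
        using System.IO;
        using System.Net;

        public static class PageScraper
        {
            // Parses the relevant bits straight off the response stream; the response,
            // stream and reader are all disposed as soon as parsing finishes.
            public static T Fetch<T>(string url, Func<TextReader, T> parse)
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                using (Stream stream = response.GetResponseStream())
                using (StreamReader reader = new StreamReader(stream))
                {
                    return parse(reader);
                }
            }
        }

    Each scrape then becomes something like PageScraper.Fetch(url, reader => ParseRelevantBits(reader)), where ParseRelevantBits stands in for whatever extraction the page needs.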

  • Should I use concrete Inheritance or not?

    - by Mez
    I have a project using Propel where I have three objects (potentially more in the future) Occasion Event extends Occasion Gig extends Occasion Occasion is an item that has the shared things, that will always be needed (Venue, start, end etc) With this - I want to be able to add in extra functionality, say for example, adding "Band" objects to the Gig object, or "Flyers" to an "Event" object. For this, I plan to create objects for these. However, without concrete inheritance, I have to have the foreign key point to the Occasion object - giving the (propel generated) functions for all of these extra bits to anything inherited from Occasion. I could, in theory do this without a foreign constraint, and add in functions to use the Peer or Query classes to get things related to the "Gig" or similar. Whereas with concrete inheritance, I would only have these functions in the things where they are. I think the decision here is whether I should Duck Type the objects (after all they are occasions) or whether I should just use the "Occasion" object as a "template" (only being used to search for things, like, all occasions at a venue) Thoughts? Comments?

  • How can I know what this does?

    - by Dabor Troppe
    I got this piece of Assembly code extracted from some piece of software, but unfortunately I don't know anything of assembler and the bits I touched of Assembler was back in the Commodore Amiga with the 68000. Can anybody guide me on how I could understand this code without me needing to learn assembler from scratch, or just tell me what it does? Is there any kind of "Simulator" out there that I can run this on to see what it does? -[ObjSample Param1:andParam2:]: 00000c79 pushl %ebp 00000c7a movl %esp,%ebp 00000c7c subl $0x48,%esp 00000c7f movl %ebx,0xf4(%ebp) 00000c82 movl %esi,0xf8(%ebp) 00000c85 movl %edi,0xfc(%ebp) 00000c88 calll 0x00000c8d 00000c8d popl %ebx 00000c8e cmpb $-[ObjSample delegate],_bDoOnce.26952-0xc8d(%ebx) 00000c95 jel 0x00000d47 00000c9b movb $-[ObjSample delegate],_bDoOnce.26952-0xc8d(%ebx) 00000ca2 movl 0x7dc0-0xc8d(%ebx),%eax 00000ca8 movl %eax,0x04(%esp) 00000cac movl 0x7df4-0xc8d(%ebx),%eax 00000cb2 movl %eax,(%esp) 00000cb5 calll _objc_msgSend 00000cba movl 0x7dbc-0xc8d(%ebx),%edx 00000cc0 movl %edx,0x04(%esp) 00000cc4 movl %eax,(%esp) 00000cc7 calll _objc_msgSend 00000ccc movl %eax,0xe4(%ebp) 00000ccf movl 0x7db8-0xc8d(%ebx),%eax 00000cd5 movl %eax,0x04(%esp) 00000cd9 movl 0xe4(%ebp),%eax 00000cdc movl %eax,(%esp) 00000cdf calll _objc_msgSend 00000ce4 leal (%eax,%eax),%edi 00000ce7 movl %edi,(%esp) 00000cea calll _malloc 00000cef movl %eax,%esi 00000cf1 movl %edi,0x08(%esp) 00000cf5 movl $-[ObjSample delegate],0x04(%esp) 00000cfd movl %eax,(%esp) 00000d00 calll _memset 00000d05 movl $0x00000004,0x10(%esp) 00000d0d movl %edi,0x0c(%esp) 00000d11 movl %esi,0x08(%esp) 00000d15 movl 0x7db4-0xc8d(%ebx),%eax 00000d1b movl %eax,0x04(%esp) 00000d1f movl 0xe4(%ebp),%eax 00000d22 movl %eax,(%esp) 00000d25 calll _objc_msgSend 00000d2a xorl %edx,%edx 00000d2c movl %edi,%eax 00000d2e shrl $0x03,%eax 00000d31 jmp 0x00000d34 00000d33 incl %edx 00000d34 cmpl %edx,%eax 00000d36 ja 0x00000d33 00000d38 movl %esi,(%esp) 00000d3b calll _free 00000d40 movb $0x01,_isAuthenticated-0xc8d(%ebx) 00000d47 movzbl _isAuthenticated-0xc8d(%ebx),%eax 00000d4e movl 0xf4(%ebp),%ebx 00000d51 movl 0xf8(%ebp),%esi 00000d54 movl 0xfc(%ebp),%edi 00000d57 leave 00000d58 ret

  • Crashing when masking an image

    - by shameer
    I am trying to mask an image with a 'mask image'. It works fine the first time, but when I try once more within the application with the same mask image, the application crashes; when trying with another mask image it works fine. Why does this happen? Please help. The console shows ": CGImageMaskCreate: invalid mask bits/component: 4294967295." - (UIImage*) maskImage:(UIImage *)image withMask:(UIImage *)maskImage { CGImageRef maskRef = maskImage.CGImage; CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef), CGImageGetHeight(maskRef), CGImageGetBitsPerComponent(maskRef), CGImageGetBitsPerPixel(maskRef), CGImageGetBytesPerRow(maskRef), CGImageGetDataProvider(maskRef), NULL, false); CGImageRef masked = CGImageCreateWithMask([image CGImage], mask); UIImage *img2=[UIImage imageWithCGImage:masked]; CGImageRelease(maskRef); CGImageRelease(mask); CGImageRelease(masked); return img2; } fun1() { view.image=[self maskImage:image1 withMask:[UIImage imageNamed:@"image2.png"]]; } fun2() { view.image=[self maskImage:image1 withMask:[UIImage imageNamed:@"image2.png"]]; view.image=[self maskImage:image1 withMask:[UIImage imageNamed:@"image3.png"]]; } fun3() { view.image=[self maskImage:image1 withMask:[UIImage imageNamed:@"image2.png"]]; view.image=[self maskImage:image1 withMask:[UIImage imageNamed:@"image2.png"]]; } When calling fun1 and fun2 the application works fine. The application crashes when calling fun3.

  • Is there a way to trace through only project source in Delphi?

    - by Justin
    I'm using Delphi 2010 and I'm wondering if there's a way to trace through code which is in the project without tracing through calls to included VCLs. For example - you put in a breakpoint and then use Shift+F7 to trace through line-by-line. Now you run into a call to some lengthy procedure in a VCL - in my case it's often a Measurement Studio or other component that draws the doodads for a bunch of I/O, OPC, or other bits. At any rate, what happens is that the debugger hops out of the active source file, opens the component source, and traces through that line by line. Often this is hundreds or thousands of lines of code I don't care about - I just want to have it execute and return to the next source line in MY project. Obviously you can do this by setting breakpoints around every instance of an external call, but often there are too many to make this practical - I'd be spending an hour setting a hundred breakpoints every time I wanted to step through a section of code. Is there a setting or a tool somewhere that can do this? Allow one to trace through code within the project while silently executing code which is external to the project? Thanks in advance, Stack Overflow!

  • How to generate a monochrome bit mask for a 32-bit bitmap

    - by Mordachai
    Under Win32, it is a common technique to generate a monochrome bitmask from a bitmap for transparency use by doing the following: SetBkColor(hdcSource, clrTransparency); VERIFY(BitBlt(hdcMask, 0, 0, bm.bmWidth, bm.bmHeight, hdcSource, 0, 0, SRCCOPY)); This assumes that hdcSource is a memory DC holding the source image, and hdcMask is a memory DC holding a monochrome bitmap of the same size (so both are 32x32, but the source is 4 bit color, while the target is 1bit monochrome). However, this seems to fail for me when the source is 32 bit color + alpha. Instead of getting a monochrome bitmap in hdcMask, I get a mask that is all black. No bits get set to white (1). Whereas this works for the 4bit color source. My search-foo is failing, as I cannot seem to find any references to this particular problem. I have isolated that this is indeed the issue in my code: i.e. if I use a source bitmap that is 16 color (4bit), it works; if I use a 32 bit image, it produces the all-black mask. Is there an alternate method I should be using in the case of 32 bit color images? Is there an issue with the alpha channel that overrides the normal behavior of the above technique? Thanks for any help you may have to offer! ADDENDUM: I am still unable to find a technique that creates a valid monochrome bitmap for my GDI+ produced source bitmap. I have somewhat alleviated my particular issue by simply not generating a monochrome bitmask at all, and instead I'm using TransparentBlt(), which seems to get it right (but I don't know what they're doing internally that's any different that allows them to correctly mask the image). It might be useful to have a really good, working function: HBITMAP CreateTransparencyMask(HDC hdc, HBITMAP hSource, COLORREF crTransparency); Where it always creates a valid transparency mask, regardless of the color depth of hSource. Ideas?

  • Managing logs/warnings in Python extensions

    - by Dimitri Tcaciuc
    TL;DR version: What do you use for configurable (and preferably captured) logging inside your C++ bits in a Python project? Details follow. Say you have a a few compiled .so modules that may need to do some error checking and warn user of (partially) incorrect data. Currently I'm having a pretty simplistic setup where I'm using logging framework from Python code and log4cxx library from C/C++. log4cxx log level is defined in a file (log4cxx.properties) and is currently fixed and I'm thinking how to make it more flexible. Couple of choices that I see: One way to control it would be to have a module-wide configuration call. # foo/__init__.py import sys from _foo import import bar, baz, configure_log configure_log(sys.stdout, WARNING) # tests/test_foo.py def test_foo(): # Maybe a custom context to change the logfile for # the module and restore it at the end. with CaptureLog(foo) as log: assert foo.bar() == 5 assert log.read() == "124.24 - foo - INFO - Bar returning 5" Have every compiled function that does logging accept optional log parameters. # foo.c int bar(PyObject* x, PyObject* logfile, PyObject* loglevel) { LoggerPtr logger = default_logger("foo"); if (logfile != Py_None) logger = file_logger(logfile, loglevel); ... } # tests/test_foo.py def test_foo(): with TemporaryFile() as logfile: assert foo.bar(logfile=logfile, loglevel=DEBUG) == 5 assert logfile.read() == "124.24 - foo - INFO - Bar returning 5" Some other way? Second one seems to be somewhat cleaner, but it requires function signature alteration (or using kwargs and parsing them). First one is.. probably somewhat awkward but sets up entire module in one go and removes logic from each individual function. What are your thoughts on this? I'm all ears to alternative solutions as well. Thanks,

  • ASP.NET SqlDataReader throwing error: Invalid attempt to call Read when reader is closed.

    - by Bugget
    This one has me stumped. Here are the relative bits of code: public AgencyDetails(Guid AgencyId) { try { evgStoredProcedure Procedure = new evgStoredProcedure(); Hashtable commandParameters = new Hashtable(); commandParameters.Add("@AgencyId", AgencyId); SqlDataReader AppReader = Procedure.ExecuteReaderProcedure("evg_getAgencyDetails", commandParameters); commandParameters.Clear(); //The following line is where the error is thrown. Errormessage: Invalid attempt to call Read when reader is closed. while (AppReader.Read()) { AgencyName = AppReader.GetOrdinal("AgencyName").ToString(); AgencyAddress = AppReader.GetOrdinal("AgencyAddress").ToString(); AgencyCity = AppReader.GetOrdinal("AgencyCity").ToString(); AgencyState = AppReader.GetOrdinal("AgencyState").ToString(); AgencyZip = AppReader.GetOrdinal("AgencyZip").ToString(); AgencyPhone = AppReader.GetOrdinal("AgencyPhone").ToString(); AgencyFax = AppReader.GetOrdinal("AgencyFax").ToString(); } AppReader.Close(); AppReader.Dispose(); } catch (Exception ex) { throw new Exception("AgencyDetails Constructor: " + ex.Message.ToString()); } } And the implementation of ExecuteReaderProcedure: public SqlDataReader ExecuteReaderProcedure(string ProcedureName, Hashtable Parameters) { SqlDataReader returnReader; using (SqlConnection conn = new SqlConnection(connectionString)) { try { SqlCommand cmd = new SqlCommand(ProcedureName, conn); SqlParameter param = new SqlParameter(); cmd.CommandType = System.Data.CommandType.StoredProcedure; foreach (DictionaryEntry keyValue in Parameters) { cmd.Parameters.AddWithValue(keyValue.Key.ToString(), keyValue.Value); } conn.Open(); returnReader = cmd.ExecuteReader(CommandBehavior.CloseConnection); } catch (SqlException e) { throw new Exception(e.Message.ToString()); } } return returnReader; } The connection string is working as other stored procedures in the same class run fine. The only problem seems to be when returning SqlDataReaders from this method! They throw the error message in the title. Any ideas are greatly appreciated! Thanks in advance!
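
    The likely culprit is the using block in ExecuteReaderProcedure: disposing the SqlConnection closes it, and the reader with it, before the caller ever gets to Read. One sketch of a fix is to let a data adapter own the connection lifetime and return a filled DataTable instead of a live reader:

        // Requires System.Data, System.Data.SqlClient and System.Collections.
        public DataTable ExecuteReaderProcedure(string procedureName, Hashtable parameters)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand(procedureName, conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                foreach (DictionaryEntry keyValue in parameters)
                {
                    cmd.Parameters.AddWithValue(keyValue.Key.ToString(), keyValue.Value);
                }

                DataTable table = new DataTable();
                using (SqlDataAdapter adapter = new SqlDataAdapter(cmd))
                {
                    adapter.Fill(table);   // opens and closes the connection itself
                }
                return table;              // safe to consume after the connection is disposed
            }
        }

    The constructor would then loop over table.Rows and read columns by name, e.g. row["AgencyName"].ToString(). Note also that GetOrdinal returns a column index rather than a value, so the original loop was assigning numbers, not the agency fields.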

  • Large flags enumerations in C#

    - by LorenVS
    Hey everyone, got a quick question that I can't seem to find anything about... I'm working on a project that requires flag enumerations with a large number of flags (up to 40-ish), and I don't really feel like typing in the exact mask for each enumeration value: public enum MyEnumeration : ulong { Flag1 = 1, Flag2 = 2, Flag3 = 4, Flag4 = 8, Flag5 = 16, // ... Flag16 = 65536, Flag17 = 65536 * 2, Flag18 = 65536 * 4, Flag19 = 65536 * 8, // ... Flag32 = 65536 * 65536, Flag33 = 65536 * 65536 * 2 // right about here I start to get really pissed off } Moreover, I'm also hoping that there is an easy(ier) way for me to control the actual arrangement of bits on different endian machines, since these values will eventually be serialized over a network: public enum MyEnumeration : uint { Flag1 = 1, // BIG: 0x00000001, LITTLE:0x01000000 Flag2 = 2, // BIG: 0x00000002, LITTLE:0x02000000 Flag3 = 4, // BIG: 0x00000004, LITTLE:0x03000000 // ... Flag9 = 256, // BIG: 0x00000010, LITTLE:0x10000000 Flag10 = 512, // BIG: 0x00000011, LITTLE:0x11000000 Flag11 = 1024 // BIG: 0x00000012, LITTLE:0x12000000 } So, I'm kind of wondering if there is some cool way I can set my enumerations up like: public enum MyEnumeration : uint { Flag1 = flag(1), // BOTH: 0x80000000 Flag2 = flag(2), // BOTH: 0x40000000 Flag3 = flag(3), // BOTH: 0x20000000 // ... Flag9 = flag(9), // BOTH: 0x00800000 } What I've Tried: // this won't work because Math.Pow returns double // and because C# requires constants for enum values public enum MyEnumeration : uint { Flag1 = Math.Pow(2, 0), Flag2 = Math.Pow(2, 1) } // this won't work because C# requires constants for enum values public enum MyEnumeration : uint { Flag1 = Masks.MyCustomerBitmaskGeneratingFunction(0) } // this is my best solution so far, but is definitely // quite clunkie public struct EnumWrapper<TEnum> where TEnum { private BitVector32 vector; public bool this[TEnum index] { // returns whether the index-th bit is set in vector } // all sorts of overriding using TEnum as args } Just wondering if anyone has any cool ideas, thanks!
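
    Shift expressions are valid constant expressions in enum initializers, so the masks can be written without the multiplication chains; a sketch:

        [Flags]
        public enum MyEnumeration : ulong
        {
            None   = 0,
            Flag1  = 1UL << 0,
            Flag2  = 1UL << 1,
            Flag3  = 1UL << 2,
            // ...
            Flag17 = 1UL << 16,
            Flag33 = 1UL << 32,
            Flag40 = 1UL << 39
        }

    The on-the-wire byte order is a separate concern from the declaration: the enum value is just a number, and endianness only matters at serialization time, for example via IPAddress.HostToNetworkOrder or by writing the bytes out explicitly.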

  • When software problems reported are not really software problems

    - by AndyUK
    Hi Apologies if this has already been covered or you think it really belongs on wiki. I am a software developer at a company that manufactures microarray printing machines for the biosciences industry. I am primarily involved in interfacing with various bits of hardware (pneumatics, hydraulics, stepper motors, sensors etc) via GUI development in C++ to aspirate and print samples onto microarray slides. On joining the company I noticed that whenever there was a hardware-related problem this would cause the whole setup to freeze, with nobody being any the wiser as to what the specific problem was - hardware / software / misuse etc. Since then I have improved things somewhat by introducing software timeouts and exception handling to better identify and deal with any hardware-related problems that arise eg PLC commands not successfully completed, inappropriate FPGA response commands, and various other deadlock type conditions etc. In addition, the software will now log a summary of the specific problem, inform the user and exit the thread gracefully. This software is not embedded, just interfacing using serial ports. In spite of what has been achieved, non-software guys still do not fully appreciate that in these cases, the 'software' problem they are reporting to me is not really a software problem, rather the software is reporting a problem, but not causing it. Don't get me wrong, there is nothing I enjoy more than to come down on software bugs like a ton of bricks, and looking at ways of improving robustness in any way. I know the system well enough now that I almost have a sixth sense for these things. No matter how many times I try to explain this point to people, it does not really penetrate. They still report what are essentially hardware problems (which eventually get fixed) as software ones. I would like to hear from any others that have endured similar finger-pointing experiences and what methods they used to deal with them.

  • ApplicationPoolIdentity permissions on Temporary Asp.Net files

    - by Anton
    Hi all, at work I am struggling a bit with the following situation: We have a web application that runs on a WIndows Server 2008 64 bits machine. The app's ApplicationPool is running under the ApplicationPoolIdentity and configured for .net 2 and Classic pipeline mode. This works fine up to the moment that XmlSerialization requires creation of Serializer assemblies where MEF is being used to create a collection of knowntypes. To remedy this I was hoping that granting the ApplicationPoolIdentity rights to the ASP.Net Temporary Files directory would be enough, but alas... What I did was the run the following command from a cmd prompt: icacls "c:\windows\microsoft.net\framework64\v2.0.50727\Temporary ASP.NET Files" /grant "IIS AppPool\MyAppPool":(M) Obviously this did not work, otherwise you would not be reading this :) Strange thing is that whenever I grant the Users or even more specific, the Authenticated Users Group those permissions, it works. What's weird as well (in my eyes) is that before I started granting access the ApplicationPoolIdentity was already a member of IIS_IUSRS which does have Modify rights for the temporary asp files directory. And now I'm left wondering why this situation requires Modify rights for the Authenticated Users group. I thought it could be because the apppool account was missing additional rights (googling for this returned some results, so I tried those), but granting the ApplicationPoolIdentity modification rights to the Windows\Temp directory and/or the application directory itself did not fix it. For now we have a workaround, but I hate that I don't know what is exactly going on here, so I was hoping any of you guys could shed some light on this. Thanx in advance!

  • Are there pitfalls to using a static class/event as an application message bus?

    - by Doug Clutter
    I have a static generic class that helps me move events around with very little overhead: public static class MessageBus<T> where T : EventArgs { public static event EventHandler<T> MessageReceived; public static void SendMessage(object sender, T message) { if (MessageReceived != null) MessageReceived(sender, message); } } To create a system-wide message bus, I simply need to define an EventArgs class to pass around any arbitrary bits of information: class MyEventArgs : EventArgs { public string Message { get; set; } } Anywhere I'm interested in this event, I just wire up a handler: MessageBus<MyEventArgs>.MessageReceived += (s,e) => DoSomething(); Likewise, triggering the event is just as easy: MessageBus<MyEventArgs>.SendMessage(this, new MyEventArgs() {Message="hi mom"}); Using MessageBus and a custom EventArgs class lets me have an application wide message sink for a specific type of message. This comes in handy when you have several forms that, for example, display customer information and maybe a couple forms that update that information. None of the forms know about each other and none of them need to be wired to a static "super class". I have a couple questions: fxCop complains about using static methods with generics, but this is exactly what I'm after here. I want there to be exactly one MessageBus for each type of message handled. Using a static with a generic saves me from writing all the code that would maintain the list of MessageBus objects. Are the listening objects being kept "alive" via the MessageReceived event? For instance, perhaps I have this code in a Form.Load event: MessageBus<CustomerChangedEventArgs>.MessageReceived += (s,e) => DoReload(); When the Form is Closed, is the Form being retained in memory because MessageReceived has a reference to its DoReload method? Should I be removing the reference when the form closes: MessageBus<CustomerChangedEventArgs>.MessageReceived -= (s,e) => DoReload();
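
    On the second question: yes, the static event keeps each subscribed form reachable through the delegate's target, and the -= shown above removes nothing because a new lambda is a different delegate instance. The usual fix is to unsubscribe the same instance that was added; a sketch (the form class here is illustrative):

        using System;
        using System.Windows.Forms;

        public partial class CustomerForm : Form
        {
            private EventHandler<CustomerChangedEventArgs> _handler;

            private void CustomerForm_Load(object sender, EventArgs e)
            {
                // Keep the exact delegate that was subscribed so it can be removed later.
                _handler = (s, args) => DoReload();
                MessageBus<CustomerChangedEventArgs>.MessageReceived += _handler;
            }

            private void CustomerForm_FormClosed(object sender, FormClosedEventArgs e)
            {
                MessageBus<CustomerChangedEventArgs>.MessageReceived -= _handler;
                _handler = null;
            }

            private void DoReload()
            {
                // refresh the customer display here
            }
        }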

  • Is my objective possible using WCF (and is it the right way to do things?)

    - by David
    I'm writing some software that modifies a Windows Server's configuration (things like MS-DNS, IIS, parts of the filesystem). My design has a server process that builds an in-memory object graph of the server configuration state and a client which requests this object graph. The server would then serialize the graph, send it to the client (presumably using WCF), the server then makes changes to this graph and sends it back to the server. The server receives the graph and proceeds to make modifications to the server. However I've learned that object-graph serialisation in WCF isn't as simple as I first thought. My objects have a hierarchy and many have parametrised-constructors and immutable properties/fields. There are also numerous collections, arrays, and dictionaries. My understanding of WCF serialisation is that it requires use of either the XmlSerializer or DataContractSerializer, but DCS places restrictions on the design of my object-graph (immutable data seems right-out, it also requires parameter-less constructors). I understand XmlSerializer lets me use my own classes provided they implement ISerializable and have the de-serializer constructor. That is fine by me. I spoke to a friend of mine about this, and he advocates going for a Data Transport Object-only route, where I'd have to maintain a separate DataContract object-graph for the transport of data and re-implement my server objects on the client. Another friend of mine said that because my service only has two operations ("GetServerConfiguration" and "PutServerConfiguration") it might be worthwhile just skipping WCF entirely and implementing my own server that uses Sockets. So my questions are: Has anyone faced a similar problem before and if so, are there better approaches? Is it wise to send an entire object graph to the client for processing? Should I instead break it down so that the client requests a part of the object graph as it needs it and sends only bits that have changed (thus reducing concurrency-related risks?)? If sending the object-graph down is the right way, is WCF the right tool? And if WCF is right, what's the best way to get WCF to serialise my object graph?
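
    On the serializer constraints specifically: DataContractSerializer never calls constructors during deserialization and [DataMember] can sit on private fields, so parameterised constructors and read-only public surfaces remain workable, and [DataContract(IsReference = true)] preserves shared references and cycles in the graph. A sketch of what the transport types might look like (type and member names are illustrative, not from the project):

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract(IsReference = true)]
        public class ServerConfiguration
        {
            [DataMember]
            private List<SiteConfiguration> _sites = new List<SiteConfiguration>();

            public IList<SiteConfiguration> Sites
            {
                get { return _sites; }
            }
        }

        [DataContract(IsReference = true)]
        public class SiteConfiguration
        {
            [DataMember]
            private string _name;

            // Normal construction keeps the public surface read-only; the serializer
            // bypasses constructors entirely when it rehydrates the graph.
            public SiteConfiguration(string name)
            {
                _name = name;
            }

            public string Name
            {
                get { return _name; }
            }
        }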
