Search Results

Search found 6152 results on 247 pages for 'known'.


  • Destroy process-less console windows left by Visual Studio debug sessions

    - by jon hanson
    A known bug with security update KB978037 can occur with Visual Studio 2003 (and 2008) where sometimes, if you restart a debugging session on a console app, the console window doesn't get closed even though the owner process no longer exists. The problem is discussed further here: http://stackoverflow.com/questions/2402875/visual-studio-debug-console-sometimes-stays-open-and-is-impossible-to-close These zombie windows then cannot be closed via the Taskbar or the Task Manager, and typically require a power off/on to get rid of them. Over the period of even a single day you can accumulate quite a few of them, which clog up your Taskbar and are generally annoying. I thought I would knock up a simple C++ Win32 utility to attempt to call DestroyWindow() on these windows by passing the window handle as a command-line argument and converting it to an HWND. I'm converting the handle from a string by parsing it as a DWORD and then casting the DWORD to an HWND. This appears to be working, as calling GetWindowInfo() on the handle succeeds. However, calling DestroyWindow() on the handle fails with error 5 (access denied), presumably because the calling process (i.e. my app) doesn't own the window in question. Any ideas as to how I might get rid of the zombie windows, either via the above approach or any other alternative short of rebooting? I'm in a corporate environment, so installing/uninstalling updates/service packs etc. isn't an option.
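
    A minimal sketch of the utility described above, assuming plain Win32 C++ (the program name, argument parsing and error handling are illustrative, not the asker's actual code):

        #include <windows.h>
        #include <cstdio>
        #include <cstdlib>

        // Usage: killwnd <handle>, where <handle> is the window handle as a number.
        int main(int argc, char* argv[])
        {
            if (argc < 2) { std::printf("usage: killwnd <hwnd>\n"); return 1; }

            // Parse the handle passed on the command line (decimal or 0x-prefixed hex).
            DWORD raw = std::strtoul(argv[1], NULL, 0);
            HWND hwnd = (HWND)(UINT_PTR)raw;

            // GetWindowInfo() succeeds even for a window whose owner process is gone...
            WINDOWINFO wi = { sizeof(wi) };
            if (!GetWindowInfo(hwnd, &wi)) {
                std::printf("GetWindowInfo failed: %lu\n", GetLastError());
                return 1;
            }

            // ...but DestroyWindow() only works on windows created by the calling
            // thread, which is why it fails here with error 5 (access denied).
            if (!DestroyWindow(hwnd)) {
                std::printf("DestroyWindow failed: %lu\n", GetLastError());
                return 1;
            }
            return 0;
        }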

    Read the article

  • Multiple on-screen view controllers in iPhone apps

    - by Felixyz
    I'm creating a lot of custom views and controllers in a lot of my apps and so far I've mostly set them up programmatically, with adjustments and instantiations being controlled from plists. However, now I'm transitioning to using Interface Builder as much as possible (wish I had done that before; it was always on my backlog). Apple recommends against having many view controllers simultaneously active in iPhone apps, with a couple of well-known exceptions. I've never fully understood why it should be bad to have different parts of the interface belong to different controllers if they have no interdependent functionality at all. Does having multiple controllers risk messing up the responder chain, or is there some other reason that it's not recommended, apart from the fact that it's usually not needed? What I want to be able to do is design reusable views and controls in IB, but since a nib is not associated with a view but with a view controller, it seems I'd have to have different parts of the screen be connected to different controllers. I know it's possible to have objects other than view controllers instantiated from nibs. Should I look into how to create my own alternative, more light-weight controllers (that could be sub-controllers of a UIViewController) which could be instantiated from nibs?

    Read the article

  • Inheritance issue with ivar on the iPhone

    - by Buffalo
    I am using the BLIP/MYNetwork library to establish a basic TCP socket connection between the iPhone and my computer. So far, the code builds and runs correctly in the simulator, but deploying to the device yields the following error:

        error: property 'delegate' attempting to use ivar '_delegate' declared in super class of 'TCPConnection'

        @interface TCPConnection : TCPEndpoint
        {
            @private
            TCPListener *_server;
            IPAddress *_address;
            BOOL _isIncoming, _checkedPeerCert;
            TCPConnectionStatus _status;
            TCPReader *_reader;
            TCPWriter *_writer;
            NSError *_error;
            NSTimeInterval _openTimeout;
        }

        /** The delegate object that will be called when the connection opens, closes or receives messages. */
        @property (assign) id<TCPConnectionDelegate> delegate;

        /** The delegate messages sent by TCPConnection. All methods are optional. */
        @protocol TCPConnectionDelegate <NSObject>
        @optional
        /** Called after the connection successfully opens. */
        - (void) connectionDidOpen: (TCPConnection*)connection;
        /** Called after the connection fails to open due to an error. */
        - (void) connection: (TCPConnection*)connection failedToOpen: (NSError*)error;
        /** Called when the identity of the peer is known, if using an SSL connection and the SSL settings
            say to check the peer's certificate. This happens, if at all, after the -connectionDidOpen: call. */
        - (BOOL) connection: (TCPConnection*)connection authorizeSSLPeer: (SecCertificateRef)peerCert;
        /** Called after the connection closes. You can check the connection's error property to see if it was normal or abnormal. */
        - (void) connectionDidClose: (TCPConnection*)connection;
        @end

        @interface TCPEndpoint : NSObject
        {
            NSMutableDictionary *_sslProperties;
            id _delegate;
        }
        - (void) tellDelegate: (SEL)selector withObject: (id)param;
        @end

    Does anyone know how I would fix this? Would I simply declare _delegate as a public property of the base class TCPEndpoint? Thanks for the help, y'all!

    Read the article

  • Thread-safe data structure design

    - by Inso Reiges
    Hello, I have to design a data structure that is to be used in a multi-threaded environment. The basic API is simple: insert element, remove element, retrieve element, check that element exists. The structure's implementation uses implicit locking to guarantee the atomicity of a single API call. After I implemented this it became apparent that what I really need is atomicity across several API calls. For example, if a caller needs to check the existence of an element before trying to insert it, he can't do that atomically even if each single API call is atomic:

        if (!data_structure.exists(element)) {
            data_structure.insert(element);
        }

    The example is somewhat awkward, but the basic point is that we can't trust the result of the "exists" call anymore after we return from the atomic context (the generated assembly clearly shows a minor chance of a context switch between the two calls). What I currently have in mind to solve this is exposing the lock through the data structure's public API. This way clients will have to explicitly lock things, but at least they won't have to create their own locks. Is there a better commonly-known solution to these kinds of problems? And as long as we're at it, can you advise some good literature on thread-safe design? EDIT: I have a better example. Suppose that element retrieval returns either a reference or a pointer to the stored element and not its copy. How can a caller be protected so as to safely use this pointer/reference after the call returns? If you think that not returning copies is a problem, then think about deep copies, i.e. objects that should also copy other objects they point to internally. Thank you.
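
    A minimal sketch of the "expose the lock through the public API" idea described above, assuming C++11's std::mutex for brevity (the class and method names are illustrative):

        #include <mutex>
        #include <set>

        // Each call is atomic on its own, and callers can also take the lock
        // explicitly to make a check-then-insert sequence atomic as a whole.
        template <typename T>
        class ConcurrentSet {
        public:
            std::mutex& lock() { return mutex_; }          // exposed to callers

            bool exists(const T& v) const {                // atomic single call
                std::lock_guard<std::mutex> g(mutex_);
                return items_.count(v) != 0;
            }
            void insert(const T& v) {
                std::lock_guard<std::mutex> g(mutex_);
                items_.insert(v);
            }

            // Unlocked variants for use while the caller already holds lock().
            bool exists_unlocked(const T& v) const { return items_.count(v) != 0; }
            void insert_unlocked(const T& v)       { items_.insert(v); }

        private:
            mutable std::mutex mutex_;
            std::set<T> items_;
        };

        // Caller side: the whole exists/insert pair is now one critical section.
        void add_if_missing(ConcurrentSet<int>& s, int v) {
            std::lock_guard<std::mutex> g(s.lock());
            if (!s.exists_unlocked(v))
                s.insert_unlocked(v);
        }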

    Read the article

  • how to hide ssh expect user/password

    - by raindrop18
    In my Perl CGI script I have the user/password in clear text and want to hide it, or have the user enter the credentials interactively. Is that possible? Here is my code. Please, any help! I am very new to Perl.

        #!/usr/local/bin/expect
        #######################################################################################################
        # Input: It will handle two arguments -> a device and a show command.
        #######################################################################################################
        # ######### Start of Script ######################
        # #### Set up Timeouts - Debugging Variables
        log_user 0
        set timeout 10
        set userid "USER"
        set password "PASS"
        # ############## Get two arguments - (1) Device (2) Command to be executed
        set device [lindex $argv 0]
        set command [lindex $argv 1]
        spawn /usr/local/bin/ssh -l $userid $device
        match_max [expr 32 * 1024]
        expect {
            -re "RSA key fingerprint" {send "yes\r"}
            timeout {puts "Host is known"}
        }
        expect {
            -re "username: " {send "$userid\r"}
            -re "(P|p)assword: " {send "$password\r"}
            -re "Warning:" {send "$password\r"}
            -re "Connection refused" {puts "Host error -> $expect_out(buffer)";exit}
            -re "Connection closed" {puts "Host error -> $expect_out(buffer)";exit}
            -re "no address.*" {puts "Host error -> $expect_out(buffer)";exit}
            timeout {puts "Timeout error. Is device down or unreachable?? ssh_expect";exit}
        }
        expect {
            -re "\[#>]$" {send "term len 0\r"}
            timeout {puts "Error reading prompt -> $expect_out(buffer)";exit}
        }
        expect {
            -re "\[#>]$" {send "$command\r"}
            timeout {puts "Error reading prompt -> $expect_out(buffer)";exit}
        }
        expect -re "\[#>]$"
        set output $expect_out(buffer)
        send "exit\r"
        puts "$output\r\n"

    Read the article

  • Why do compiled Haskell libraries see invalid static FFI storage?

    - by John Millikin
    I am using GHC 6.12.1, in Ubuntu 10.04. When I try to use the FFI syntax for static storage, only modules running in interpreted mode (i.e. GHCi) work properly. Compiled modules have invalid pointers, and do not work. I'd like to know whether anybody can reproduce the problem, whether this is an error in my code or in GHC, and (if the latter) whether it's a known issue. Given the following three modules:

        -- A.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module A where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_a :: Ptr CString
        --
        -- B.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module B where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_b :: Ptr CString
        --
        -- Main.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module Main where

        import Foreign
        import Foreign.C

        import A
        import B

        foreign import ccall "&sys_siglist" siglist_main :: Ptr CString

        main = do
            putStrLn $ "siglist_a = " ++ show siglist_a
            putStrLn $ "siglist_b = " ++ show siglist_b
            putStrLn $ "siglist_main = " ++ show siglist_main
            peekSiglist "a " siglist_a
            peekSiglist "b " siglist_b
            peekSiglist "main" siglist_main

        peekSiglist name siglist = do
            ptr <- peekElemOff siglist 2
            str <- maybePeek peekCString ptr
            putStrLn $ "siglist_" ++ name ++ "[2] = " ++ show str

    I would expect something like this output, where all pointer values are identical and valid:

        $ runhaskell Main.hs
        siglist_a = 0x00007f53a948fe00
        siglist_b = 0x00007f53a948fe00
        siglist_main = 0x00007f53a948fe00
        siglist_a [2] = Just "Interrupt"
        siglist_b [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    However, if I compile A.hs (with ghc -c A.hs), then the output changes to:

        $ runhaskell Main.hs
        siglist_a = 0x0000000040378918
        siglist_b = 0x00007fe7c029ce00
        siglist_main = 0x00007fe7c029ce00
        siglist_a [2] = Nothing
        siglist_b [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    Read the article

  • iPhone: Leak with UIWebView loading Office documents. Any ideas how to avoid it?

    - by Thomas Tempelmann
    While there are already quite a few posts about leaks around UIWebView, mine is a bit more special, I believe, and thus deserves its own post here. I see a reproducible large leak every time I load an Office document such as a Word or Excel file. For instance, every time I display a 180KB .doc file, I get a 100KB leak. And that happens with both the simulator and an actual device, running OS 3.1.3. The leak is not visible with the Leaks instrument but only by looking at the malloc instances via the ObjectAlloc instrument. Here's a picture from the Instruments trace: I've also made a demo project, UIWebView-Leak.zip, so you can verify this yourself. To see the leak, use the ObjectAlloc instrument, switch to the view where you see individual allocation objects, and sort by size so that you see the large ones in a group, just like in my picture above. Then view an Office document a few times and find the Malloc objects that keep staying "Live" even after the actual UIWebView has been freed. Is this a known bug? Or is there any way I can avoid these leaks? I.e., have you successfully shown Office documents on an iPhone without getting such leaks? Note: I've reported this as a bug to Apple now, too (ID 7950594). I am still waiting for someone (including Apple) to confirm this as a true leak or show why it isn't (i.e. that I am doing something wrong or making wrong assumptions).

    Read the article

  • Using conditionals in LINQ programmatically

    - by Mike B
    I was just reading a recent question on using conditionals in LINQ and it reminded me of an issue I have not been able to resolve. When building LINQ to SQL queries programmatically, how can this be done when the number of conditionals is not known until runtime? For instance, in the code below the first clause creates an IQueryable that, if executed, would select all the tasks (called issues) in the database; the second clause will refine that to just issues assigned to one department if one has been selected in a combobox (which has its selected item bound to the departmentToShow property). How could I do this using the selectedItems collection instead?

        IQueryable<Issue> issuesQuery;

        // Will select all tasks
        issuesQuery = from i in db.Issues
                      orderby i.IssDueDate, i.IssUrgency
                      select i;

        // Filters out all other Departments if one is selected
        if (departmentToShow != "All")
        {
            issuesQuery = from i in issuesQuery
                          where i.IssDepartment == departmentToShow
                          select i;
        }

    By the way, the above code is simplified; in the actual code there are about a dozen clauses that refine the query based on the user's search and filter settings.

    Read the article

  • Windows cmd encoding change causes Python crash.

    - by Alex
    First I change the Windows cmd encoding to UTF-8 and run the Python interpreter:

        chcp 65001
        python

    Then I try to print a unicode string inside it, and when I do this Python crashes in a peculiar way (I just get a cmd prompt in the same window).

        >>> import sys
        >>> print u'ëèæîð'.encode(sys.stdin.encoding)

    Any ideas why it happens and how to make it work? UPD: sys.stdin.encoding returns 'cp65001'. UPD2: It just came to me that the issue might be connected with the fact that utf-8 uses a multi-byte character set (kcwu made a good point on that). I tried running the whole example with 'windows-1250' and got 'ëeaî?'. Windows-1250 uses a single-byte character set so it worked for those characters it understands. However, I still have no idea how to make 'utf-8' work here. UPD3: Oh, I found out it is a known Python bug. I guess what happens is that Python copies the cmd encoding as 'cp65001' to sys.stdin.encoding and tries to apply it to all the input. Since it fails to understand 'cp65001' it crashes on any input that contains non-ascii characters.

    Read the article

  • Website running JavaScript setInterval starts to fail after ~1day

    - by Martin Clemens Bloch
    I wish I could be more specific here, but unfortunately this might be hard. I basically hope this is some "well"-known timeout or setup issue. We have a website (a JS/HTML ASP.NET project) running an overview on a screen at a factory. This screen has no keyboard, so it should keep refreshing the page forever - years perhaps (though 1 week might be okay). (It is used by factory workers to see incoming transports etc.) This all works perfectly; the site continuously updates itself and gets the new correct data. Then, sometimes, in the morning this "overview" screen has no data and the workers have to manually refresh the site using the simple refresh button or F5 - which fixes everything. I have tried a few things trying to reproduce the error myself, including:

      • Cutting the internet connection and MANY other ways of making it time out (breakpoints, stopping services etc.).
      • Setting the refresh time of setInterval to 100ms and letting the site run 3-5 minutes (the normal timer is 1 minute). setInterval SHOULD run forever according to the internet searching I have done.
      • Checking that "JavaScript frequency" has not been turned down in power saving settings.

    No matter what, the site resumes correct function WITHOUT a refresh as soon as I plug the internet cable (or whatever) back in - I can't reproduce the error. The website is dependent on a backend WCF service and project integration, but since the workers are fixing this with a simple refresh I am assuming this has not crashed. EDIT: The browser I tried to reproduce the error in was IE/Win7. I will ask about the factory tomorrow, but I am guessing IE/Win? also. Is setInterval in fact really infinite, or is there something else wrong here? All help much appreciated. 0.5 bitcoin reward for solving answer ;)

    Read the article

  • Theory: "Lexical Encoding"

    - by _ande_turner_
    I am using the term "Lexical Encoding" for my lack of a better one. A Word is arguably the fundamental unit of communication as opposed to a Letter. Unicode tries to assign a numeric value to each Letter of all known Alphabets. What is a Letter to one language is a Glyph to another. Unicode 5.1 currently assigns more than 100,000 values to these Glyphs. Out of the approximately 180,000 Words being used in Modern English, it is said that with a vocabulary of about 2,000 Words you should be able to converse in general terms. A "Lexical Encoding" would encode each Word, not each Letter, and encapsulate them within a Sentence.

        // A simplified example of a "Lexical Encoding"
        String sentence = "How are you today?";
        int[] sentence = { 93, 22, 14, 330, QUERY };

    In this example each Token in the String was encoded as an Integer. The Encoding Scheme here simply assigned an int value based on a generalised statistical ranking of word usage, and assigned a constant to the question mark. Ultimately, a Word has both a Spelling & Meaning though. Any "Lexical Encoding" would preserve the meaning and intent of the Sentence as a whole, and not be language specific. An English sentence would be encoded into "...language-neutral atomic elements of meaning ..." which could then be reconstituted into any language with a structured Syntactic Form and Grammatical Structure. What are other examples of "Lexical Encoding" techniques? If you are interested in where the word-usage statistics come from: http://www.wordcount.org
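
    A toy sketch of such a word-level encoder, assuming C++ (codes are assigned on first sight rather than by the frequency ranking described above; all names are illustrative):

        #include <iostream>
        #include <map>
        #include <sstream>
        #include <string>
        #include <vector>

        // Every distinct word gets an integer code, so a sentence becomes a vector of ints.
        class LexicalEncoder {
        public:
            LexicalEncoder() : nextCode_(0) {}

            std::vector<int> encode(const std::string& sentence) {
                std::vector<int> codes;
                std::istringstream words(sentence);
                std::string word;
                while (words >> word) {
                    std::map<std::string, int>::iterator it = dictionary_.find(word);
                    if (it == dictionary_.end())
                        it = dictionary_.insert(std::make_pair(word, nextCode_++)).first;
                    codes.push_back(it->second);
                }
                return codes;
            }

        private:
            std::map<std::string, int> dictionary_;
            int nextCode_;
        };

        int main() {
            LexicalEncoder enc;
            std::vector<int> codes = enc.encode("How are you today ?");
            for (size_t i = 0; i < codes.size(); ++i)
                std::cout << codes[i] << ' ';   // prints: 0 1 2 3 4
            std::cout << '\n';
            return 0;
        }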

    Read the article

  • Overloading operator>> to a char buffer in C++ - can I tell the stream length?

    - by exscape
    I'm on a custom C++ crash course. I've known the basics for many years, but I'm currently trying to refresh my memory and learn more. To that end, as my second task (after writing a stack class based on linked lists), I'm writing my own string class. It's gone pretty smoothly until now; I want to overload operator>> so that I can do stuff like cin >> my_string;. The problem is that I don't know how to read the istream properly (or perhaps the problem is that I don't know streams...). I tried a while (!stream.eof()) loop that .read()s 128 bytes at a time, but as one might expect, it stops only on EOF. I want it to read to a newline, like you get with cin >> to a std::string. My string class has an alloc(size_t new_size) function that (re)allocates memory, and an append(const char *) function that does that part, but I obviously need to know the amount of memory to allocate before I can write to the buffer. Any advice on how to implement this? I tried getting the istream length with seekg() and tellg(), to no avail (it returns -1), and as I said, looping until EOF (which doesn't stop reading at a newline) reading one chunk at a time.
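
    A rough sketch of one way such an operator>> could look, assuming a hypothetical MyString with the append() function mentioned above: it reads a whitespace-delimited token a chunk at a time, the way cin >> std::string does, so the total length never has to be known up front.

        #include <cctype>
        #include <istream>
        #include <string>

        // Hypothetical minimal interface matching the string class described above.
        class MyString {
        public:
            void append(const char* s);   // assumed to grow the buffer via alloc()
            // ...
        };

        // Reads one whitespace-delimited token, 128 bytes at a time, appending each
        // full chunk to the string, so the final length never needs to be known first.
        std::istream& operator>>(std::istream& in, MyString& out)
        {
            char buf[129];
            int n = 0;
            in >> std::ws;                                    // skip leading whitespace
            int c;
            while ((c = in.peek()) != std::char_traits<char>::eof() && !std::isspace(c)) {
                buf[n++] = static_cast<char>(in.get());
                if (n == 128) {                               // buffer full: flush a chunk
                    buf[n] = '\0';
                    out.append(buf);
                    n = 0;
                }
            }
            if (n > 0) {                                      // flush whatever is left
                buf[n] = '\0';
                out.append(buf);
            }
            return in;
        }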

    Read the article

  • Destructors not called when native (C++) exception propagates to CLR component

    - by Phil Nash
    We have a large body of native C++ code, compiled into DLLs. Then we have a couple of DLLs containing C++/CLI proxy code to wrap the C++ interfaces. On top of that we have C# code calling into the C++/CLI wrappers. Standard stuff, so far. But we have a lot of cases where native C++ exceptions are allowed to propagate to the .NET world, and we rely on .NET's ability to wrap these as System.Exception objects; for the most part this works fine. However, we have been finding that destructors of objects in scope at the point of the throw are not being invoked when the exception propagates! After some research we found that this is a fairly well-known issue. However, the solutions/workarounds seem less consistent. We did find that if the native code is compiled with /EHa instead of /EHsc the issue disappears (at least in our test case it did). However, we would much prefer to use /EHsc, as we translate SEH exceptions to C++ exceptions ourselves and we would rather allow the compiler more scope for optimisation. Are there any other workarounds for this issue - other than wrapping every call across the native-managed boundary in a (native) try-catch-throw (in addition to the C++/CLI layer)?
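
    For reference, a sketch of the boundary-wrapping workaround the question mentions, assuming a C++/CLI wrapper layer (all names are illustrative): the native exception is caught while the stack is still unwinding under native EH rules, so local C++ destructors run, and it is then rethrown as a managed exception.

        // C++/CLI wrapper layer (compiled with /clr). Names are illustrative only.
        #include <exception>

        using namespace System;

        // Native function exported from the existing C++ DLL (assumed signature).
        double ComputeRisk(int portfolioId);

        public ref class RiskEngine
        {
        public:
            // Catch the native exception before it crosses into managed frames,
            // then surface it to .NET as a managed exception.
            static double Compute(int portfolioId)
            {
                try
                {
                    return ComputeRisk(portfolioId);
                }
                catch (const std::exception& e)
                {
                    throw gcnew InvalidOperationException(gcnew String(e.what()));
                }
                catch (...)
                {
                    throw gcnew InvalidOperationException("Unknown native C++ exception");
                }
            }
        };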

    Read the article

  • ColdFusion 8 Slow Performance

    - by JoeBob
    We have started a new CF8 app and it is running dog slow. A test where we go around ColdFusion (queries within a database utility) shows normal speed (80ms). CF8 returns the same query in something like 60 to 80 seconds! I have been looking online and seeing lots of posts about CF8 and performance problems, but I don't get any overall sense of a solution; just lots of people trying things and saying that they didn't have the problem with CF7. We are also seeing instability on the server, and some errors relating to garbage collection and the memory heap. We have a number of other applications running on CF8 and they perform adequately...our programmer is not an expert or a guru, he just plugs away. We have isolated this down to a single query that takes forever to return, so it is not a complicated test. Are there any known CF8 problems or obvious tweaks that we should consider trying? If we have to start over and learn a new environment, I will never make the deadline. JoeBob

    Read the article

  • Blackberry application works in simulator but not device

    - by Kai
    I read some of the similar posts on this site that deal with what seems to be the same issue and the responses didn't really seem to clarify things for me. My application works fine in the simulator. I believe I'm on a Bold 9000 with OS 4.6. The app is signed. My app makes an HTTP call via 3G to fetch an XML result. The content type is application/xhtml+xml. On the device, it gives no error. It makes no visual sign of error. I tell the try/catch to print the results to the screen and I get nothing. HttpConnection was taken right out of the demos and works fine in the sim. Since it gives no error, I begin to reflect back on things I recall reading back when the project began. deviceside=true? Something like that? My request is simply HttpConnection connection = (HttpConnection)Connector.open(url); where url is just a standard URL, no GET vars. Based on the amount of time I see the connection arrows in the corner of the screen, I assume the app is launching the initial communication to my server, then either getting a bad result, or it gets results and the persistent store is not functioning as expected. I have no idea where to begin with this. Posting code would be ridiculous since it would be basically my whole app. I guess my question is whether anyone knows of any major differences between device and simulator that could cause something like the HTTP connection or persistent store to fail? A build setting? An OS restriction? Any standard procedure I may have just not known about that everyone should do before beginning device testing? Thanks

    Read the article

  • JavaME - LWUIT images eat up all the memory

    - by Marko
    Hi, I'm writing a MIDlet using LWUIT and images seem to eat up incredible amounts of memory. All the images I use are PNGs and are packed inside the JAR file. I load them using the standard Image.createImage(URL) method. The application has a number of forms and each has a couple of labels and buttons, however I am fairly certain that only the active form is kept in memory (I know it isn't very trustworthy, but Runtime.freeMemory() seems to confirm this). The application has worked well in 240x320 resolution, but moving it to 480x640 and using appropriately larger images for the UI started causing out-of-memory errors to show up. What the application does, among other things, is download remote images. The application seems to work fine until it gets to this point. After downloading a couple of PNGs and returning to the main menu, the out-of-memory error is encountered. Naturally, I looked into the amount of memory the main menu uses and it was pretty shocking. It's just two labels with images and four buttons. Each button has three images used for style.setIcon, setPressedIcon and setRolloverIcon. Images range in size from 15 to 25KB, but after removing two of the three images used for every button (so 8 images in total), Runtime.freeMemory() showed a stunning 1MB decrease in memory usage. The way I see it, I either have a whole lot of memory leaks (which I don't think I do, but memory leaks aren't exactly known to be easily tracked down), I am doing something terribly wrong with image handling, or there's really no problem involved and I just need to scale down. If anyone has any insight to offer, I would greatly appreciate it.

    Read the article

  • How to display only one letter in Flex Text Layout Framework ContainerController?

    - by rattkin
    I'm trying to implement a dropped-initials feature in my Flex application. Since the Text Layout Framework does not support floating, the only known solution is to create additional containers that are linked together, displaying the same TextFlow. The width and positioning of these containers has to be set in such a way that it pretends to be a float. I'm using the same solution for dropped initials. Basically, I'm creating three containers: one for the initial letter (the first letter in the text flow), another for the text floating around it, and a third one to display the text below these two. All these containers share one TextFlow. I have big issues with forcing the controller to display only one letter from the text flow, and sizing it accordingly so that it won't take any unnecessary additional space and won't get any more text into it. Using ContainerController.getContentBounds() returns me the size of the whole sprite of the letter (with ascent/descent empty parts), not the height/width of the actual rendered letter. I'm using textFlow.flowComposer.getLineAt(0).getTextLine().getAtomBounds(0), but I think it's still not right. Also, even if I set the container to these dimensions, it sometimes displays additional text in it, especially for bigger fonts. See screen: Also, if I set the width to just 1px less than contentBounds, things go crazy; containers are moved around, positioned with big margins, etc. How should I solve this? Is it a bug in TLF / Player? Can I fix it somehow? Can I detect the size of the letter, or make the ContainerController autosize to fit just one letter only?

    Read the article

  • Decoupling the view, presentation and ASP.NET Web Forms

    - by John Leidegren
    I have an ASP.NET Web Forms page which the presenter needs to populate with controls. This interaction is somewhat sensitive to the page life cycle and I was wondering if there's a trick to it that I don't know about. I wanna be practical about the whole thing but not compromise testability. Currently I have this:

        public interface ISomeContract
        {
            void InstantiateIn(System.Web.UI.Control container);
        }

    This contract has a dependency on System.Web.UI.Control and I need that to be able to do things with the ASP.NET Web Forms programming model. But neither the view nor the presenter may have knowledge about ASP.NET server controls. How do I get around this? How can I work with the ASP.NET Web Forms programming model in my concrete views without taking a System.Web.UI.Control dependency in my contract assemblies? To clarify things a bit, this type of interface is all about UI composition (using MEF). It's known throughout the framework but it's really only called from within the concrete view. The concrete view is still the only thing that knows about ASP.NET Web Forms. However, those public methods that say InstantiateIn(System.Web.UI.Control) exist in my contract assemblies and that implies a dependency on ASP.NET Web Forms. I've been thinking about some double-dispatch mechanism or even the visitor pattern to try and work around this.

    Read the article

  • Delphi RTTI unable to find interface

    - by conciliator
    I'm trying to fetch an interface using D2010 RTTI.

        program rtti_sb_1;

        {$APPTYPE CONSOLE}
        {$M+}

        uses
          SysUtils,
          Rtti,
          mynamespace in 'mynamespace.pas';

        var
          ctx: TRttiContext;
          RType: TRttiType;
          MyClass: TMyIntfClass;

        begin
          ctx := TRttiContext.Create;
          MyClass := TMyIntfClass.Create;

          // This prints a list of all known types, including some interfaces.
          // Unfortunately, IMyPrettyLittleInterface doesn't seem to be one of them.
          for RType in ctx.GetTypes do
            WriteLn(RType.Name);

          // Finding the class implementing the interface is easy.
          RType := ctx.FindType('mynamespace.TMyIntfClass');

          // Finding the interface itself is not.
          RType := ctx.FindType('mynamespace.IMyPrettyLittleInterface');

          MyClass.Free;
          ReadLn;
        end.

    Both IMyPrettyLittleInterface and TMyIntfClass = class(TInterfacedObject, IMyPrettyLittleInterface) are declared in mynamespace.pas. Does anyone know why this doesn't work? Is there a way to solve my problem? Thanks in advance!

    Read the article

  • Python C API from C++ app - know when to lock

    - by Alex
    Hi Everyone, I am trying to write a C++ class that calls Python methods of a class that does some I/O operations (file, stdout) at once. The problem I have run into is that my class is called from different threads: sometimes the main thread, sometimes different others. Obviously I tried to apply the approach for Python calls in multi-threaded native applications. Basically everything starts from PyEval_AcquireLock and PyEval_ReleaseLock, or just global locks. According to the documentation, when a thread is already locked a deadlock ensues. When my class is called from the main thread, or another one that blocks Python execution, I have a deadlock. Python Cfunc1() - a C++ func that creates threads internally which lead to calls into "my class" - gets stuck on PyEval_AcquireLock; obviously Python is already locked, i.e. waiting for the C++ Cfunc1 call to complete... It completes fine if I omit those locks. It also completes fine when the Python interpreter is ready for the next user command, i.e. when the thread is calling funcs in the background - not inside a native call. I am looking for a workaround. I need to distinguish whether or not the global lock is allowed, i.e. Python is not locked and is ready to receive the next command... I tried PyGIL_Ensure; unfortunately I see a hang. Any known API or solution for this? (Python 2.4)
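
    For reference, the usual pattern for calling into the interpreter from arbitrary native threads is the PyGILState API; a sketch (not the asker's code) is below. Whether it resolves the nested-call deadlock described above depends on the thread that initialized the interpreter having released the GIL, e.g. via PyEval_SaveThread().

        #include <Python.h>

        // Call a Python callable from an arbitrary native thread. PyGILState_Ensure()
        // is safe whether or not this thread already holds the GIL, unlike a bare
        // PyEval_AcquireLock(), which deadlocks if the GIL is already held.
        static void call_python(PyObject* callable)
        {
            PyGILState_STATE state = PyGILState_Ensure();

            PyObject* result = PyObject_CallObject(callable, NULL);
            if (result == NULL)
                PyErr_Print();              // report and clear any Python exception
            else
                Py_DECREF(result);

            PyGILState_Release(state);
        }

        // Interpreter setup on the embedding side (once, on the initializing thread):
        //     Py_Initialize();
        //     PyEval_InitThreads();                     // create and acquire the GIL
        //     PyThreadState* ts = PyEval_SaveThread();  // release it so worker threads can run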

    Read the article

  • Working with Decimal fields in SSIS

    - by CoffeeAddict
    I'm using SQL Server 2008 w/SP2. I've got an incoming decimal(9,2) field coming through my OLE DB transformation to my recordset destination transformation. It's like it's reading it as something other than a decimal? I don't know...I'm not an SSIS guru. So continuing on...the problem I have starts here with me trying to stuff the value into a variable for this decimal field. In a foreach loop, I have a variable to represent this decimal field so I can work with it. The first problem, which I believe is pretty well known, is that SSIS variables do not have a decimal type. And from my own testing and what I've read out there, people are using the Object type for the variable to make SSIS "happy" with decimal values? It makes mine happy. But then in my foreach loop I have a for loop. And inside that I'm using an Execute SQL Task transformation. In it, I need to create a parameter mapping to my variable so I can work with that decimal field in my T-SQL call in there. So I see a type decimal for the parameter, use it, and set it to point to my variable. When I run SSIS and it hits my SQL call, I get this in my output window: The type is not supported.DBTYPE_DECIMAL. So I am hitting a wall here. All I wanna do is work with a decimal!!!

    Read the article

  • How to choose light version of database system

    - by adopilot
    I am starting a POS (Point of Sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As we have a lot of POS devices in a chain for one store, I would love to have a local backend database system on each POS device. The scenario is the following: when the main server goes down, the POS application should continue working "off-line" with the local database until the connection to the main server comes up again. Now I am in a dilemma about which local database is going to be the most suitable for me. Here are some notes to help point me in the right direction:

      • It should be light: "My POS devices are usually old and suffering performance-wise."
      • It should be free: "I have a lot of devices and I do not want additional cost besides the main SQL Server."
      • One day I would love to try to port all of that to Mono and Linux OS.

    Here is what I've researched so far:

      • Simple XML: "Light, but I am afraid of performance; my main table of items averages 10K records."
      • SQL Express: "I am afraid that my POS devices are too poor in hardware for SQL Express, and it is also hard to install and configure on each device."
      • The less known Advantage Database Server, which has a free distribution of its offline ADT system.
      • DBF with an extended library: "Respect for good old DBFs, but that era is behind me, with Clipper and DBFs."
      • MS Access
      • SQLite: "Mostly what I like for now, but I am afraid of how it is going to pair with MS SQL - do they have the same data types?"

    I know that in this SO question there is a lot of subjective data, but at least can someone recommend some other lite database systems, or things that I should pay most attention to before I choose a database.

    Read the article

  • Trying to create XPath from this HTML snippet

    - by doneright
    I have played for a while writing XPath but am unable to come up with exactly what I want. I'm trying to write XPath for the links (click1 and click2 in the code snippet below) based on known text (myidentity in the code snippet below). Can someone take a look and suggest a possible solution? HTML code snippet:

        <div class="abc">
            <a onclick="mycontroller.goto('xx','yy'); return false;" href="#">
                <img src="images/controls/inheritance.gif"/>
            </a>
            myidentity
            <span>
                <a onclick="mycontroller.goto('xx','yy'); return false;" href="#">click1</a>
                <a onclick="mycontroller.goto('xx','yy'); return false;" href="#">click2</a>
            </span>
        </div>

    Read the article

  • What are all the optimization tricks that you know for ASP.NET code?

    - by Aristos
    After quite some time of ASP.NET programming, I discovered the very big speed difference between string and StringBuilder. I know that it is very common and well known, but I just mention it as a start. The second thing that I have found to speed up the code is to use const, and not static, to declare my configuration constant values (especially the strings). With const, the compiler does not create a new object but just places the value at the point where you ask for it, whereas with a static declaration a new memory object is created and kept in memory. My third trick is that when I search for strings, I use hash values, and not the strings themselves. For example, if I need a List<string> SomeValues and place inside it strings that I need to search, I prefer to use List<int> SomeHashValue, and I use the hash value to locate the strings. My fourth thought, which I was wondering about, is whether it is better to place big strings on one line, or to separate them onto different lines with the + symbol so they are easier to read. I made some tests and saw that the compiler does a good job if you split the string into many lines using the + symbol. What other tricks/tips do you know and use in your programming to make it run faster, and maybe use less memory? Well, I know that sometimes, to make something run faster, you need more memory, more cache. My priority is on speed. Because Speed Counts.

    Read the article

  • Is a call to the following method considered late binding?

    - by AspOnMyNet
    1) Assume:

      • B1 defines methods virtualM() and nonvirtualM(), where the former method is virtual while the latter is non-virtual
      • B2 derives from B1
      • B2 overrides virtualM()
      • B2 is defined inside assembly A
      • Application app doesn't have a reference to assembly A

    In the following code, application app dynamically loads assembly A, creates an instance of type B2 and calls methods virtualM() and nonvirtualM():

        Assembly a = Assembly.Load("A");
        Type t = a.GetType("B2");
        B1 a = (B1)Activator.CreateInstance("t");
        a.virtualM();
        a.nonvirtualM();

    a) Is the call to a.virtualM() considered early binding or late binding?
    b) I assume a call to a.nonvirtualM() is resolved at compilation time?

    2) Does the term late binding refer only to looking up the target method at run time, or does it also refer to creating an instance of a given type at runtime? thanx

    EDIT:

    1)

        A a = new A();
        a.M();

    As far as I know, it is not known at compile time where on the heap (thus at which memory address) instance a will be created during runtime. Now, with early binding the function calls are replaced with memory addresses during the compilation process. But how can the compiler replace a function call with a memory address if it doesn't know where on the heap object a will be created during runtime (here I'm assuming the address of method a.M will also be at the same memory location as a)?

    2) "The method slot is determined at compile time" - I assume that by method slot you're referring to the entry point in the v-table?

    Read the article
