Search Results

Search found 11913 results on 477 pages for 'fail fast fail early'.

Page 25/477

  • MVVM Light is too fast :)

    - by Hikari
    Hello, I have a simple WM7 page with a textbox. Further, I assigned an EventToCommand (RelayCommand) to this textbox, reacting to the TextChanged event. For testing purposes I added an extra TextBox_TextChanged method in the page's code-behind. Both the command and TextBox_TextChanged show a message box with the textbox content. The initial value of the textbox is ABC. Then I press D and: 1) TextBox_TextChanged prints ABCD. 2) The command prints ABC; the D is missing. Why is the command so fast?

    Read the article

  • strange results with /fp:fast

    - by martinus
    We have some code that looks like this: inline int calc_something(double x) { if (x > 0.0) { // do something return 1; } else { // do something else return 0; } } Unfortunately, when using the flag /fp:fast, we get calc_something(0)==1, so we are clearly taking the wrong code path. This only happens when we use the method at multiple points in our code with different parameters, so I think there is some fishy optimization going on here in the compiler (Microsoft Visual Studio 2008, SP1). Also, the above problem goes away when we change the interface to inline int calc_something(const double& x) { but I have no idea why this fixes the strange behaviour. Can anyone explain this behaviour? If I cannot understand what's going on we will have to remove the /fp:fast switch, but that would make our application quite a bit slower.

    Read the article

  • Why would MessageBox fail silently?

    - by Tim Gradwell
    Does anyone know how MessageBox(...) could fail silently? MessageBox(g_hMainhWnd, buffer, "Oops!", MB_OK | MB_ICONERROR); ShellExecute(0, "open", "http://intranet/crash_handler.php", NULL, "", SW_SHOWNORMAL); For a little context, this code is called inside our own exception handler, which was registered with SetUnhandledExceptionFilter() Most of the time, I see the message box, and then it launches a web browser. However, I have an exe, which as far as I'm aware uses this exact code, and it successfully launches the web browser, but I do not see the message box first. Thanks Tim

    Read the article

  • Storing millions of URLs in a database for fast pattern matching

    - by Paras Chopra
    I am developing a web analytics kind of system which needs to log the referring URL, landing page URL and search keywords for every visitor on the website. What I want to do with this collected data is to allow the end user to query it, such as "Show me all visitors who came from Bing.com searching for a phrase that contains 'red shoes'" or "Show me all visitors who landed on a URL that contained 'campaign=twitter_ad'", etc. Because this system will be used on many big websites, the amount of data that needs to be logged will grow really, really fast. So, my questions: a) what would be the best strategy for logging so that scaling the system doesn't become a pain; b) how do I use that architecture for rapid querying of arbitrary requests? Is there a special method of storing URLs so that querying them gets faster? In addition to the MySQL database that I use, I am exploring (and open to) other alternatives better suited for this task.

    Read the article

  • PHP fails to open a SQL Server 2000 database

    - by Mike108
    I can use SQL Server Management Studio to open a SQL Server 2000 database, but I cannot open the same database from a PHP page using the same user and password. What is the problem? if(!$dbSource->open("192.168.4.241:1433","sa","sa","NorthWind")) { echo "Fail to open the sql server 2000 database"; } ----------------------- function open($db_server, $db_user, $db_password, $db_name) { $this->conn = mssql_connect($db_server, $db_user, $db_password); if(!$this->conn) { return false; } @mssql_select_db($db_name, $this->conn); return true; }

    Read the article

  • Examples of fast .NET WPF/WinForms apps?

    - by mythz
    I am currently investigating whether to build a Windows application using unmanaged C/C++ or .NET, and would like to know what kind of performance and responsiveness is achievable with a managed C#/.NET GUI app. Not surprisingly, it looks like the fastest, most responsive applications (e.g. Chrome, Spotify, etc.) are written in unmanaged C/C++. I've had a hard time finding examples of really good .NET applications, so I would like some help. What's the best example of a fast and responsive .NET Windows application?

    Read the article

  • Fast way to pass a simple java object from one thread to another

    - by Adal
    I have a callback which receives an object. I make a copy of this object, and I must pass it on to another thread for further processing. It's very important for the callback to return as fast as possible. Ideally, the callback will write the copy to some sort of lock-free container. I only have the callback called from a single thread and one processing thread. I only need to pass a bunch of doubles to the other thread, and I know the maximum number of doubles (around 40). Any ideas? I'm not very familiar with Java, so I don't know the usual ways to pass stuff between threads.

    Read the article
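
    The question above boils down to a single-producer/single-consumer handoff of small double arrays. Below is a minimal Java sketch (not from the original post) using a bounded java.util.concurrent.ArrayBlockingQueue; class and method names are illustrative assumptions.

    ```java
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Hypothetical sketch: hand fixed-size double[] snapshots from the callback
    // thread to a single consumer thread.
    public class SampleHandoff {
        // Bounded queue so a slow consumer cannot exhaust memory.
        private final BlockingQueue<double[]> queue = new ArrayBlockingQueue<>(1024);

        // Called from the callback thread: copy, enqueue, return quickly.
        public void onCallback(double[] samples) {
            double[] copy = samples.clone();      // defensive copy, max ~40 doubles
            if (!queue.offer(copy)) {
                // Queue full: either drop the sample or block; dropping keeps the callback fast.
            }
        }

        // Consumer thread loop.
        public void consume() throws InterruptedException {
            while (true) {
                double[] samples = queue.take();  // blocks until data is available
                // ... further processing ...
            }
        }
    }
    ```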

  • fast on-demand c++ compilation [closed]

    - by Amit Prakash
    I'm looking at the possibility of building a system where, when a query hits the server, we turn the query into C++ code, compile it as a shared object and then run the code. The compilation time itself needs to be small for this to be worthwhile. My code can generate the corresponding C++ code, but if I have to write it out on disk, invoke gcc to get a .so file and then run it, it does not seem to be worth it. Are there ways in which I can get a small snippet of code to compile and be ready as a shared object fast (a significant start-up time before the queries arrive is acceptable)? If such a tool has a permissive license that's a further plus. Edit: I have a very restrictive query language that the users can use, so the security threat is not relevant. My own code translates the query into C++ code. The answer mentioning clang is perfect.

    Read the article

  • I want absolute atomicity on a single couchdb instance (insert, fail if already existing)

    - by MatternPatching
    I've come to really love the couchdb style of organizing and updating data, but there are a few situations where I really need to be able to create an entry and determine if an equivalent entry is already in existence before returning to the user. The only situation that this is absolutely necessary for my application is user registration. I'm fine with having all user registration writes go to a particular, designated couchdb instance known as the "registration-instance". I want to hash the user_id into some _id to use. Then execute a put with this _id, but fail if the _id is already inserted. I need to return to the user that the user name is already reserved, and I cannot detect the conflict later and resolve it at that point, because the user would be under the impression that they had reserved the user name. I don't see why couchdb couldn't provide some way to do this, under the assumption that you designate that inserts for a particular "type" of document always are routed to a particular instance.

    Read the article
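
    For reference, a single CouchDB node already rejects a PUT to an existing _id with 409 Conflict, which is exactly the "insert, fail if already existing" behaviour the question asks for. A minimal sketch, assuming Python and the third-party requests library; names and the id scheme are illustrative.

    ```python
    import hashlib
    import requests   # assumed HTTP client; any client works

    def reserve_username(couch_url, db, user_id, doc):
        # The question's scheme: derive the _id from the user name.
        doc_id = hashlib.sha1(user_id.lower().encode("utf-8")).hexdigest()
        # On a single CouchDB instance, PUT to an explicit _id is atomic:
        # 201 Created if the id was free, 409 Conflict if it already exists.
        resp = requests.put("%s/%s/%s" % (couch_url, db, doc_id), json=doc)
        if resp.status_code == 201:
            return True        # user name reserved
        if resp.status_code == 409:
            return False       # already taken, report it to the user
        resp.raise_for_status()
    ```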

  • Fast image coordinate lookup in Numpy

    - by victor
    I've got a big numpy array full of coordinates (about 400): [[102, 234], [304, 104], .... ] And a numpy 2d array my_map of size 800x800. What's the fastest way to look up the coordinates given in that array? I tried things like paletting as described in this post: http://opencvpython.blogspot.com/2012/06/fast-array-manipulation-in-numpy.html but couldn't get it to work. I was also thinking about turning each coordinate into a linear index of the map and then piping it straight into my_map like so: my_map[linearized_coords] but I couldn't get vectorize to properly translate the coordinates into a linear fashion. Any ideas?

    Read the article
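
    A minimal sketch of the vectorized lookup (not from the original post); it assumes the pairs are stored as (row, column) and the columns would need swapping if they are (x, y).

    ```python
    import numpy as np

    coords = np.array([[102, 234], [304, 104]])   # assumed (row, col); swap columns if (x, y)
    my_map = np.zeros((800, 800))

    # Fancy indexing looks all coordinates up in one vectorized call.
    values = my_map[coords[:, 0], coords[:, 1]]

    # The linear-index variant the question mentions:
    linear = np.ravel_multi_index((coords[:, 0], coords[:, 1]), my_map.shape)
    values_via_flat = my_map.ravel()[linear]
    ```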

  • How to get REALLY fast python over a simple loop

    - by totallymike
    I'm working on a spoj problem, INTEST. The goal is to specify the number of test cases (n) and a divisor (k), then feed your program n numbers. The program will accept each number on a newline of stdin and after receiving the nth number, will tell you how many were divisible by k. The only challenge in this problem is getting your code to be FAST because k can be anything up to 10^7 and the number of test cases can be as high as 10^9. I'm trying to write it in python and having trouble speeding it up. Any ideas? import sys first_in = raw_input() thing = first_in.split() n = int(thing[0]) k = int(thing[1]) total = 0 i = 0 for line in sys.stdin: t = int(line) if t % k == 0: total += 1 print total

    Read the article
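
    The usual speed-up for this class of problem is to read all of standard input in one call instead of iterating line by line. A rough sketch in Python 3 (the original snippet is Python 2):

    ```python
    import sys

    def main():
        data = sys.stdin.buffer.read().split()   # one read, then split on whitespace
        n, k = int(data[0]), int(data[1])
        # Count the inputs divisible by k; the slice keeps exactly the n test numbers.
        print(sum(1 for tok in data[2:2 + n] if int(tok) % k == 0))

    if __name__ == "__main__":
        main()
    ```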

  • Is there a way to force JUnit to fail on ANY unchecked exception, even if swallowed

    - by Uri
    I am using JUnit to write some higher level tests for legacy code that does not have unit tests. Much of this code "swallows" a variety of unchecked exceptions like NullPointerExceptions (e.g., by just printing the stack trace and returning null). Therefore the unit test can pass even though there is a cascade of disasters at various points in the lower level code. Is there any way to have a test fail on the first unchecked exception, even if they are swallowed? The only alternative I can think of is to write a custom JUnit wrapper that redirects System.err and then analyzes the output for exceptions.

    Read the article
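
    A rough sketch of the wrapper idea the question itself suggests: capture System.err for the duration of each test and fail if anything resembling a stack trace was printed. This is a heuristic, not a JUnit feature; it assumes JUnit 4 and the class name is illustrative.

    ```java
    import static org.junit.Assert.fail;

    import java.io.ByteArrayOutputStream;
    import java.io.PrintStream;
    import org.junit.After;
    import org.junit.Before;

    // Tests extend this base class; swallowed exceptions that print to stderr fail the test.
    public abstract class FailOnPrintedExceptionTest {
        private final ByteArrayOutputStream captured = new ByteArrayOutputStream();
        private PrintStream originalErr;

        @Before
        public void captureStderr() {
            originalErr = System.err;
            System.setErr(new PrintStream(captured, true));
        }

        @After
        public void checkStderr() {
            System.setErr(originalErr);
            String err = captured.toString();
            // Crude stack-trace detection: exception names or "\tat " frame lines.
            if (err.contains("Exception") || err.contains("\tat ")) {
                fail("Swallowed exception printed to stderr:\n" + err);
            }
        }
    }
    ```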

  • How to make my WPF application as FAST as Outlook

    - by Raul Otaño
    Common WPF applications take some time to load moderately complex views; once a view is loaded, it works fine. For example, in a master-detail view, if the detail view is very complex and uses different DataTemplates, it takes a few seconds (2-3 seconds) to load the view. When I open Outlook, for instance, it renders complex views and is relatively much faster. Is there a way to increase the performance of my WPF application? Maybe a way to avoid loading the template's data every time the "master" item changes, and load it only once for the application's lifetime? I will appreciate any suggestion.

    Read the article

  • How to render a UITableView fast

    - by pubudu
    My program has two view controllers. The first one has a button, and the second one has a table view with a custom cell; this cell has 5 text views. When I tap the button on the first view, it shows the second view controller. Rendering the table view with 5 or 6 rows is very slow. It works well in the simulator, but it is very slow on an actual iPad. When I tap the button I have to wait 2-3 seconds with the button in its pressed state, and once it shows the second view controller the rendering is also very slow; I can see it render the rows. I am already using [tableView dequeueReusableCellWithIdentifier:CellIdentifier]. When I remove the table from my second view, it navigates from the first view controller to the second very fast. How can I solve this issue?

    Read the article

  • Multiple data centers and HTTP traffic: DNS Round Robin is the ONLY way to assure instant fail-over?

    - by vmiazzo
    Hi, multiple A records pointing to the same domain seem to be used almost exclusively to implement DNS Round Robin as a cheap load balancing technique. The usual warning against DNS RR is that it is not good for high availability: when one IP goes down, clients will continue to use it for minutes. A load balancer is often suggested as a better choice. Both claims are not completely true: when the traffic is HTTP, most browsers are able to automatically try the next A record if the previous one is down, without a new DNS look-up (read here chapter 3.1 and here). When multiple data centers are involved, DNS RR is the only option to distribute traffic across them. So, is it true that, with multiple data centers and HTTP traffic, the use of DNS RR is the ONLY way to assure instant fail-over when one data center goes down? Thanks, Valentino. Edit: Of course each data center has a local load balancer with a hot spare. It's OK to sacrifice session affinity for instant fail-over. AFAIK the only way for a DNS server to suggest one data center instead of another is to reply with just the IP (or IPs) associated with that data center. If the data center becomes unreachable then all those IPs are also unreachable. This means that, even if smart browsers are able to instantly try another A record, all the attempts will fail until the local cache entry expires and a new DNS lookup is done, fetching the new working IPs (I assume the DNS automatically suggests a new data center when one fails). So, "smart DNS" cannot assure instant fail-over. Conversely, DNS round-robin permits it. When one data center fails, most smart browsers instantly try the other cached A records, jumping to another (working) data center. So, DNS round-robin doesn't assure session affinity or the lowest RTT, but it seems to be the only way to assure instant fail-over when the clients are "smart" browsers. Edit 2: Some people suggest TCP Anycast as a definitive solution. This paper (chapter 6) explains that Anycast fail-over is related to BGP convergence, and for this reason Anycast can take from 20 seconds to 15 minutes to complete; 20 seconds is possible on networks whose topology was optimized for this. Probably only CDN operators can grant such fast fail-overs. Edit 3: I did some DNS look-ups and traceroutes (maybe some expert can double check) and: the only CDN using TCP Anycast seems to be CacheFly; other operators like CDNetworks and BitGravity use CacheFly. It seems that their edges cannot be used as reverse proxies, therefore they cannot be used to grant instant fail-over. Akamai and LimeLight seem to use geo-aware DNS. But! They return multiple A records, and from traceroutes it seems that the returned IPs are in the same data center. So, I'm puzzled about how they can offer a 100% SLA when one data center goes down.

    Read the article

  • Algorithm for non-contiguous netmask match

    - by Gianluca
    Hi, I have to write a really, really fast algorithm to match an IP address against a list of groups, where each group is defined using a notation like 192.168.0.0/252.255.0.255. As you can see, the bitmask can contain zeros even in the middle, so the traditional "longest prefix match" algorithms won't work. If an IP matches two groups, it will be assigned to the group containing the most 1's in its netmask. I'm not working with many entries (let's say < 1000) and I don't want to use a data structure requiring a large memory footprint (let's say 1-2 MB), but it really has to be fast (of course I can't afford a linear search). Do you have any suggestions? Thanks guys. UPDATE: I found something quite interesting at http://www.cse.usf.edu/~ligatti/papers/grouper-conf.pdf, but it's still too memory-hungry for my utopian use case.

    Read the article
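
    Whatever index structure ends up being chosen, each candidate group is still checked with a mask-and-compare. A small C++ sketch of that core test, with groups pre-sorted so the first hit is the one with the most 1-bits; this is illustrative only and still a linear scan, which the question hopes to avoid.

    ```cpp
    #include <algorithm>
    #include <bit>        // std::popcount (C++20)
    #include <cstdint>
    #include <vector>

    struct Group {
        uint32_t network;   // network address already ANDed with the mask
        uint32_t mask;      // may contain zero bits in the middle
        int id;
    };

    // Sort once so the group with the most 1-bits in its mask wins ties.
    void prepare(std::vector<Group>& groups) {
        std::sort(groups.begin(), groups.end(), [](const Group& a, const Group& b) {
            return std::popcount(a.mask) > std::popcount(b.mask);
        });
    }

    // Core test: a non-contiguous mask still reduces to (ip & mask) == network.
    int match(uint32_t ip, const std::vector<Group>& groups) {
        for (const Group& g : groups)
            if ((ip & g.mask) == g.network) return g.id;
        return -1;   // no group matched
    }
    ```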

  • Detecting Xml namespace fast

    - by Anna Tjsoken
    Hello there, this may be a very trivial problem I'm trying to solve, but I'm sure there's a better way of doing it, so please go easy on me. I have a bunch of XSD files that are internal to our application, and about 20-30 Xml files that implement datasets based on those XSDs. Some Xml files are small (<100Kb); others are about 3-4Mb, with a few being over 10Mb. I need to find a way of working out what namespace these Xml files use, in order to provide (something like) intellisense based on the XSD. The implementation of this is not an issue - another developer has written the code for it. But I'm not sure what the best (and fastest!) way of detecting the namespace is without using XmlDocument (which does a full parse). I'm using C# 3.5 and the documents come through as a Stream (some are remote files). All the files are *.xml (I could detect by extension if that helped), but unfortunately the Xml namespace is the only reliable indicator. Right now I've tried XmlDocument, but I've found it inefficient and slow while the larger documents wait to be parsed (even the 100Kb docs). public string GetNamespaceForDocument(Stream document); Something like the above is my method signature - overloads include a string for the content. Would a compiled RegEx pattern be good? How does Visual Studio manage this so efficiently? Another colleague has told me to find a fast Xml parser in C/C++, parse the content, and have a stub that gives back the namespace since it's slower in .NET. Is this a good idea?

    Read the article
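
    One approach worth noting (not from the question): XmlReader is a pull parser, so it can stop as soon as the document element is reached, regardless of file size. A minimal C# sketch matching the signature in the question; the class name is illustrative.

    ```csharp
    using System.IO;
    using System.Xml;

    public static class XmlNamespaceProbe
    {
        // Reads only as far as the document element, then stops.
        public static string GetNamespaceForDocument(Stream document)
        {
            using (XmlReader reader = XmlReader.Create(document))
            {
                reader.MoveToContent();         // positions the reader on the root element
                return reader.NamespaceURI;     // "" if the root has no default namespace
            }
        }
    }
    ```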

  • Ways std::stringstream can set fail/bad bit?

    - by Evan Teran
    A common piece of code I use for simple string splitting looks like this: inline std::vector<std::string> split(const std::string &s, char delim) { std::vector<std::string> elems; std::stringstream ss(s); std::string item; while(std::getline(ss, item, delim)) { elems.push_back(item); } return elems; } Someone mentioned that this will silently "swallow" errors occurring in std::getline. And of course I agree that's the case. But it occurred to me, what could possibly go wrong here in practice that I would need to worry about. basically it all boils down to this: inline std::vector<std::string> split(const std::string &s, char delim) { std::vector<std::string> elems; std::stringstream ss(s); std::string item; while(std::getline(ss, item, delim)) { elems.push_back(item); } if(ss.fail()) { // *** How did we get here!? *** } return elems; } A stringstream is backed by a string, so we don't have to worry about any of the issues associated with reading from a file. There is no type conversion going on here since getline simply reads until it sees a newline or EOF. So we can't get any of the errors that something like boost::lexical_cast has to worry about. I simply can't think of something besides failing to allocate enough memory that could go wrong, but that'll just throw a std::bad_alloc well before the std::getline even takes place. What am I missing?

    Read the article

  • Simple, fast SQL queries for flat files.

    - by plinehan
    Does anyone know of any tools to provide simple, fast queries of flat files using a SQL-like declarative query language? I'd rather not pay the overhead of loading the file into a DB since the input data is typically thrown out almost immediately after the query is run. Consider the data file, "animals.txt": dog 15 cat 20 dog 10 cat 30 dog 5 cat 40 Suppose I want to extract the highest value for each unique animal. I would like to write something like: cat animals.txt | foo "select $1, max(convert($2 using decimal)) group by $1" I can get nearly the same result using sort: cat animals.txt | sort -t " " -k1,1 -k2,2nr And I can always drop into awk from there, but this all feels a bit awkward (couldn't resist) when a SQL-like language would seem to solve the problem so cleanly. I've considered writing a wrapper for SQLite that would automatically create a table based on the input data, and I've looked into using Hive in single-processor mode, but I can't help but feel this problem has been solved before. Am I missing something? Is this functionality already implemented by another standard tool? Halp!

    Read the article
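
    A quick sketch of the SQLite-wrapper idea the question mentions, assuming Python's built-in sqlite3 module and the two-column animals.txt layout shown above (file and column names are illustrative):

    ```python
    import sqlite3

    # Load the flat file into an in-memory table, then query it with plain SQL.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE animals (name TEXT, value REAL)")
    with open("animals.txt") as f:
        rows = (line.split() for line in f if line.strip())
        conn.executemany("INSERT INTO animals VALUES (?, ?)", rows)

    # Highest value per unique animal, as in the question's example.
    for name, top in conn.execute("SELECT name, MAX(value) FROM animals GROUP BY name"):
        print(name, top)
    ```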

  • Python: OSX Library for fast full screen jpg/png display

    - by Parand
    Frustrated by lack of a simple ACDSee equivalent for OS X, I'm looking to hack one up for myself. I'm looking for a gui library that accommodates: Full screen image display High quality image fit-to-screen (for display) Low memory usage Fast display Reasonable learning curve (the simpler the better) Looks like there are several choices, so which is the best? Here are some I've run across: PyOpenGL PyGame PyQT wxpython I don't have any particular experience with any of these, nor any strong desire to become an expert - I'm looking for the simplest solution. What do you recommend? [Update] For those not familiar with ACDSee, here's what it does that I care about: Simple list/thubmnail display of images in a directory Sort by name/size/type Ability to view images full screen Single-key delete while viewing full screen Move to next/previous image while viewing full screen Ability to select a group of images for: move to / copy to directory delete resize ACDSee has a bunch of niceties as well, such as remembering directories you've moved images to in the past, remembering your resize settings, displaying the total size of the images you've selected, etc. I've tried most of the options I could find (including Xee) and none of them quite get there. Please keep in mind that this is a programming/library question, not a criticism of any of the existing tools.

    Read the article

  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello, I'm completely stumped on this one. For some reason when I sort this query by DESC it's super fast, but if sorted by ASC it's extremely slow. This takes about 150 milliseconds: SELECT posts.id FROM posts USE INDEX (published) WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 ) ORDER BY posts.published DESC LIMIT 0, 50; This takes about 32 seconds: SELECT posts.id FROM posts USE INDEX (published) WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 ) ORDER BY posts.published ASC LIMIT 0, 50; The EXPLAIN is the same for both queries. id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE posts index NULL published 5 NULL 50 Using where I've tracked it down to "USE INDEX (published)". If I take that out it's the same performance both ways. But the EXPLAIN shows the query is less efficient overall. id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE posts range feed_id feed_id 4 \N 759 Using where; Using filesort And here's the table. CREATE TABLE `posts` ( `id` int(20) NOT NULL AUTO_INCREMENT, `feed_id` int(11) NOT NULL, `post_url` varchar(255) NOT NULL, `title` varchar(255) NOT NULL, `content` blob, `author` varchar(255) DEFAULT NULL, `published` int(12) DEFAULT NULL, `updated` datetime NOT NULL, `created` datetime NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `post_url` (`post_url`,`feed_id`), KEY `feed_id` (`feed_id`), KEY `published` (`published`) ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1; Is there a fix for this? Thanks!

    Read the article
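
    One experiment commonly suggested for this pattern (not from the question, and not guaranteed to remove the filesort): a composite index covering both the filter column and the sort column, instead of forcing the single-column published index.

    ```sql
    -- Hypothetical composite index so the WHERE and the ORDER BY can use the same index.
    ALTER TABLE posts ADD INDEX feed_id_published (feed_id, published);

    -- Then run the slow query without the USE INDEX hint and compare EXPLAIN output.
    SELECT posts.id
    FROM posts
    WHERE posts.feed_id IN (4953, 622, 1, 1852, 4952, 76, 623, 624, 10)
    ORDER BY posts.published ASC
    LIMIT 0, 50;
    ```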

  • Why do my CouchDB databases grow so fast?

    - by konrad
    I was wondering why my CouchDB database was growing so fast, so I wrote a little test script. This script changes an attribute of a CouchDB document 1200 times and takes the size of the database after each change. After performing these 1200 writing steps the database does a compaction step and the db size is measured again. In the end the script plots the database size against the revision numbers. The benchmark is run twice: the first time the default number of document revisions (=1000) is used (_revs_limit); the second time the number of document revisions is set to 1. Each run produces a plot (not reproduced here). For me this is quite unexpected behavior. In the first run I would have expected linear growth, as every change produces a new revision; when the 1000 revisions are reached the size should stay constant, as the older revisions are discarded, and after the compaction the size should fall significantly. In the second run the first revision should result in a certain database size that is then kept during the following writing steps, as every new revision leads to the deletion of the previous one. I could understand if there were a little bit of overhead needed to manage the changes, but this growth behavior seems weird to me. Can anybody explain this phenomenon or correct the assumptions that led to my wrong expectations?

    Read the article

  • .net real time stream processing - needed huge and fast RAM buffer

    - by mack369
    The application I'm developing communicates with a digital audio device which is capable of sending 24 different voice streams at the same time. The device is connected via USB using an FTDI device (serial port emulator) and the D2XX drivers (the basic COM driver is too slow to handle the 4.5Mbit transfer). Basically the application consists of 3 threads: Main thread - GUI, control, etc. Bus reader - in this thread data is continuously read from the device and saved to a file buffer (there is no logic in this thread). Data interpreter - this thread reads the data from the file buffer, converts it to samples, does simple sample processing and saves the samples to separate wav files. The reason I used a file buffer is that I wanted to be sure I wouldn't lose any samples. The application doesn't record all the time, so I chose this solution because it was safe. The application works fine, except that the buffered wave file generator is pretty slow. For 24 parallel records of 1 minute, it takes about 4 minutes to complete the recording. I'm pretty sure that eliminating the use of the hard drive in this process would increase the speed a lot. The second problem is that the file buffer is really heavy for long records and I can't clean it up until the end of data processing (that would slow down the process even more). For a RAM buffer I need at least 1GB to make it work properly. What is the best way to allocate such a big amount of memory in .NET? I'm going to use this memory from 2 threads, so a fast synchronization mechanism is needed. I'm thinking about a circular buffer: one big array where the Bus Reader saves the data and the Data Interpreter reads it. What do you think about it? [edit] For buffering I'm currently using the BinaryReader and BinaryWriter classes backed by a file.

    Read the article
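
    A minimal C# sketch of the circular-buffer idea from the question: one large byte array allocated up front, with a lock and Monitor waits coordinating the single writer (bus reader) and single reader (data interpreter). Illustrative only; in practice block copies (e.g. Buffer.BlockCopy) would replace the per-byte loops, and writes are assumed to be smaller than the total capacity.

    ```csharp
    using System;
    using System.Threading;

    public sealed class RingBuffer
    {
        private readonly byte[] buffer;
        private int head, tail, count;
        private readonly object gate = new object();

        public RingBuffer(int capacityBytes)   // e.g. 1 GB: new RingBuffer(1 << 30)
        {
            buffer = new byte[capacityBytes];
        }

        // Called by the bus reader thread.
        public void Write(byte[] data, int offset, int length)
        {
            lock (gate)
            {
                while (count + length > buffer.Length)
                    Monitor.Wait(gate);                  // wait for the reader to drain
                for (int i = 0; i < length; i++)
                {
                    buffer[tail] = data[offset + i];
                    tail = (tail + 1) % buffer.Length;
                }
                count += length;
                Monitor.PulseAll(gate);
            }
        }

        // Called by the data interpreter thread; returns the number of bytes read.
        public int Read(byte[] dest, int offset, int length)
        {
            lock (gate)
            {
                while (count == 0)
                    Monitor.Wait(gate);                  // wait for the writer
                int n = Math.Min(length, count);
                for (int i = 0; i < n; i++)
                {
                    dest[offset + i] = buffer[head];
                    head = (head + 1) % buffer.Length;
                }
                count -= n;
                Monitor.PulseAll(gate);
                return n;
            }
        }
    }
    ```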

  • Perl's Devel::LeakTrace::Fast Pointing to blank files and evals

    - by kt
    I am using Devel::LeakTrace::Fast to debug a memory leak in a perl script designed as a daemon which runs an infinite loop with sleeps until interrupted. I am having some trouble both reading the output and finding documentation to help me understand the output. The perldoc doesn't contain much information on the output. Most of it makes sense, such as pointing to globals in DBI. Intermingled with the output, however, are several leaked SV(<LOCATION>) from (eval #) line # Where the numbers are numbers and <LOCATION> is a location in memory. The script itself is not using eval at any point - I have not investigated each used module to see if evals are present. Mostly what I want to know is how to find these evals (if possible). I also find the following entries repeated over and over again leaked SV(<LOCATION>) from line # Where line # is always the same #. Not very helpful in tracking down what file that line is in.

    Read the article

  • Looking for a fast, compact, streamable, multi-language, strongly typed serialization format

    - by sanity
    I'm currently using JSON (compressed via gzip) in my Java project, in which I need to store a large number of objects (hundreds of millions) on disk. I have one JSON object per line, and disallow linebreaks within the JSON object. This way I can stream the data off disk line-by-line without having to read the entire file at once. It turns out that parsing the JSON code (using http://www.json.org/java/) is a bigger overhead than either pulling the raw data off disk, or decompressing it (which I do on the fly). Ideally what I'd like is a strongly-typed serialization format, where I can specify "this object field is a list of strings" (for example), and because the system knows what to expect, it can deserialize it quickly. I can also specify the format just by giving someone else its "type". It would also need to be cross-platform. I use Java, but work with people using PHP, Python, and other languages. So, to recap, it should be: Strongly typed Streamable (ie. read a file bit by bit without having to load it all into RAM at once) Cross platform (including Java and PHP) Fast Free (as in speech) Any pointers?

    Read the article
