Search Results

Search found 5786 results on 232 pages for 'fast'.

Page 13 of 232

  • MVVM Light is too fast :)

    - by Hikari
    Hello, I have a simple WP7 page with a textbox. I assigned an EventToCommand (RelayCommand) to this textbox, reacting to the TextChanged event. For testing purposes I also added a TextBox_TextChanged method in the page's code-behind. Both the command and TextBox_TextChanged show a message box with the textbox content. The initial value of the textbox is ABC. Then I press D and: 1) TextBox_TextChanged prints ABCD; 2) the command prints ABC. The D is missing. Why is the command so fast?

    Read the article

  • strange results with /fp:fast

    - by martinus
    We have some code that looks like this: inline int calc_something(double x) { if (x > 0.0) { // do something return 1; } else { // do something else return 0; } } Unfortunately, when using the flag /fp:fast, we get calc_something(0)==1, so we are clearly taking the wrong code path. This only happens when we use the method at multiple points in our code with different parameters, so I think there is some fishy optimization going on in the compiler (Microsoft Visual Studio 2008, SP1). Also, the problem goes away when we change the signature to inline int calc_something(const double& x), but I have no idea why this fixes the strange behaviour. Can anyone explain this behaviour? If I cannot understand what's going on, we will have to remove the /fp:fast switch, but that would make our application quite a bit slower.

    Read the article

  • Storing millions of URLs in a database for fast pattern matching

    - by Paras Chopra
    I am developing a web-analytics kind of system which needs to log the referring URL, landing page URL and search keywords for every visitor on the website. What I want to do with this collected data is to allow the end user to run queries such as "Show me all visitors who came from Bing.com searching for a phrase that contains 'red shoes'" or "Show me all visitors who landed on a URL that contained 'campaign=twitter_ad'", etc. Because this system will be used on many big websites, the amount of data that needs to be logged will grow really, really fast. So, my questions: a) what would be the best strategy for logging so that scaling the system doesn't become a pain; and b) how do I use that architecture for rapid querying of arbitrary requests? Is there a special method of storing URLs so that querying them gets faster? In addition to the MySQL database that I currently use, I am exploring (and open to) alternatives better suited for this task.

    Read the article

  • Examples of fast .NET WPF/WinForms apps?

    - by mythz
    I am currently investigating whether to build a windows application using unmanaged C/C++ or .NET, and would like to know the kind of performance and responsiveness that is achievable with a managed C#/.NET GUI app. Not surprisingly, it looks like the fastest, most responsive applications (e.g. Chrome, Spotify, etc.) are written in unmanaged C/C++. I've had a hard time finding examples of really good .NET applications, so I would like some help. What's the best example of a fast and responsive .NET windows application?

    Read the article

  • fast on-demand c++ compilation [closed]

    - by Amit Prakash
    I'm looking at the possibility of building a system where, when a query hits the server, we turn the query into C++ code, compile it as a shared object and then run the code. The compilation time itself needs to be small for this to be worthwhile. My code can generate the corresponding C++ code, but if I have to write it out to disk, invoke gcc to get a .so file and then run it, it does not seem worth it. Are there ways to get a small snippet of code compiled and ready as a shared object fast (a significant start-up cost before the queries arrive is acceptable)? If such a tool has a permissive license, that's a further plus. Edit: I have a very restrictive query language that the users can use, so the security threat is not relevant. My own code translates the query into C++ code. The answer mentioning Clang is perfect.

    Read the article

  • Fast way to pass a simple java object from one thread to another

    - by Adal
    I have a callback which receives an object. I make a copy of this object, and I must pass it on to another thread for further processing. It's very important for the callback to return as fast as possible. Ideally, the callback will write the copy to some sort of lock-free container. I only have the callback called from a single thread and one processing thread. I only need to pass a bunch of doubles to the other thread, and I know the maximum number of doubles (around 40). Any ideas? I'm not very familiar with Java, so I don't know the usual ways to pass stuff between threads.

    Read the article

  • Fast image coordinate lookup in Numpy

    - by victor
    I've got a big numpy array full of coordinates (about 400 of them): [[102, 234], [304, 104], ....] and a 2d numpy array my_map of size 800x800. What's the fastest way to look up the values of my_map at the coordinates in that array? I tried things like paletting as described in this post: http://opencvpython.blogspot.com/2012/06/fast-array-manipulation-in-numpy.html but couldn't get it to work. I was also thinking about turning each coordinate into a linear index of the map and then indexing straight into my_map, like my_map[linearized_coords], but I couldn't get vectorize to properly translate the coordinates into linear indices. Any ideas?

    Read the article

  • How to get REALLY fast python over a simple loop

    - by totallymike
    I'm working on a SPOJ problem, INTEST. The goal is to specify the number of test cases (n) and a divisor (k), then feed your program n numbers. The program accepts each number on a new line of stdin and, after receiving the nth number, tells you how many were divisible by k. The only challenge in this problem is getting your code to be FAST, because k can be anything up to 10^7 and the test cases can be as high as 10^9. I'm trying to write it in Python and having trouble speeding it up. Any ideas? import sys first_in = raw_input() thing = first_in.split() n = int(thing[0]) k = int(thing[1]) total = 0 i = 0 for line in sys.stdin: t = int(line) if t % k == 0: total += 1 print total

    Read the article

  • How to make my WPF application as FAST as Outlook

    - by Raul Otaño
    Common WPF applications take some time to load moderately complex views; once a view is loaded it works fine. For example, in a Master - Detail view, if the Detail view is very complex and uses different DataTemplates, it takes a few seconds (2-3) to load the view. When I open Outlook, for instance, it renders complex views and is relatively much faster. Is there a way to increase the performance of my WPF application? Maybe a way to avoid loading the templates' data every time the "master" item changes, and load it only once for the application's lifetime? I will appreciate any suggestion.

    Read the article

  • How to render a UITableView fast

    - by pubudu
    My program has two view controllers. The first one has a button, and the second one has a table view with a custom cell; the cell contains 5 text views. When I tap the button in the first view controller, it shows the second view controller. Rendering the table view with 5 or 6 rows is very slow. It works well in the simulator, but it is very slow on an actual iPad. When I tap the button I have to wait 2-3 seconds with the button in its pressed state, and after the second view controller appears it also renders very slowly; I can see it rendering the rows. I also use [tableView dequeueReusableCellWithIdentifier:CellIdentifier]. When I remove the table from my second view, navigation from the first view controller to the second is very fast. How can I solve this issue?

    Read the article

  • Algorithm for non-contiguous netmask match

    - by Gianluca
    Hi, I have to write a really, really fast algorithm to match an IP address to a list of groups, where each group is defined using a notation like 192.168.0.0/252.255.0.255. As you can see, the bitmask can contain zeros even in the middle, so the traditional "longest prefix match" algorithms won't work. If an IP matches two groups, it will be assigned to the group whose netmask contains the most 1s. I'm not working with many entries (say < 1000) and I don't want a data structure requiring a large memory footprint (say 1-2 MB), but it really has to be fast (of course I can't afford a linear search). Do you have any suggestions? Thanks guys. UPDATE: I found something quite interesting at http://www.cse.usf.edu/~ligatti/papers/grouper-conf.pdf, but it's still too memory-hungry for my utopian use case
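
    Whatever index structure ends up beating a linear scan, the matching rule it has to implement is simple: a group matches when (ip & mask) == (groupAddress & mask), and ties go to the mask with the most 1 bits. Below is a minimal C# reference sketch of that rule (the Group type, field names and the pre-sorting idea are illustrative, not from the question); sorting the groups once by descending bit count lets the scan stop at the first hit.

    public struct Group
    {
        public uint Address;        // 192.168.0.0 packed into 32 bits
        public uint Mask;           // 252.255.0.255 packed into 32 bits
        public uint MaskedAddress;  // precomputed Address & Mask
    }

    public static class GroupMatcher
    {
        // groups must be pre-sorted by PopCount(Mask), descending
        public static int FindGroup(uint ip, Group[] groups)
        {
            for (int i = 0; i < groups.Length; i++)
            {
                if ((ip & groups[i].Mask) == groups[i].MaskedAddress)
                {
                    return i;       // first hit is the best hit after sorting
                }
            }
            return -1;              // no group matched
        }

        public static int PopCount(uint v)
        {
            int count = 0;
            while (v != 0) { v &= v - 1; count++; }  // clear the lowest set bit each pass
            return count;
        }
    }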

    Read the article

  • Detecting Xml namespace fast

    - by Anna Tjsoken
    Hello there, This may be a very trivial problem I'm trying to solve, but I'm sure there's a better way of doing it, so please go easy on me. I have a bunch of XSD files that are internal to our application, and we have about 20-30 Xml files that implement datasets based off those XSDs. Some Xml files are small (<100Kb), others are about 3-4Mb, with a few being over 10Mb. I need to find a way of working out what namespace these Xml files use in order to provide (something like) intellisense based off the XSD. The implementation of this is not an issue - another developer has written the code for it. But I'm not sure what the best (and fastest!) way of detecting the namespace is without using XmlDocument (which does a full parse). I'm using C# 3.5 and the documents come through as a Stream (some are remote files). All the files are *.xml (I could detect it if it were extension based), but unfortunately the Xml namespace is the only way. Right now I've tried XmlDocument, but I've found it inefficient and slow, since the documents have to be fully parsed before the namespace is available (even the 100Kb docs take a while). public string GetNamespaceForDocument(Stream document); Something like the above is my method signature - overloads include a string for "content". Would a compiled RegEx pattern be good? How does Visual Studio manage this so efficiently? Another colleague has told me to find a fast Xml parser in C/C++, parse the content there and have a stub that gives back the namespace, since it's slower in .NET - is this a good idea?
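
    One common alternative to XmlDocument here is a forward-only XmlReader that stops as soon as it reaches the root element, so only the first few bytes of the stream are ever parsed. A minimal sketch, assuming the method shape from the question (everything else is illustrative):

    using System.IO;
    using System.Xml;

    public static class XmlNamespaceProbe
    {
        // Reads only as far as the document (root) element and returns its
        // namespace URI, without parsing the rest of the stream the way
        // XmlDocument would.
        public static string GetNamespaceForDocument(Stream document)
        {
            var settings = new XmlReaderSettings
            {
                IgnoreComments = true,
                IgnoreProcessingInstructions = true,
                IgnoreWhitespace = true
            };

            using (var reader = XmlReader.Create(document, settings))
            {
                while (reader.Read())
                {
                    if (reader.NodeType == XmlNodeType.Element)
                    {
                        return reader.NamespaceURI;  // namespace of the root element
                    }
                }
            }
            return null;  // no root element found
        }
    }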

    Read the article

  • Simple, fast SQL queries for flat files.

    - by plinehan
    Does anyone know of any tools to provide simple, fast queries of flat files using a SQL-like declarative query language? I'd rather not pay the overhead of loading the file into a DB since the input data is typically thrown out almost immediately after the query is run. Consider the data file, "animals.txt": dog 15 cat 20 dog 10 cat 30 dog 5 cat 40 Suppose I want to extract the highest value for each unique animal. I would like to write something like: cat animals.txt | foo "select $1, max(convert($2 using decimal)) group by $1" I can get nearly the same result using sort: cat animals.txt | sort -t " " -k1,1 -k2,2nr And I can always drop into awk from there, but this all feels a bit awkward (couldn't resist) when a SQL-like language would seem to solve the problem so cleanly. I've considered writing a wrapper for SQLite that would automatically create a table based on the input data, and I've looked into using Hive in single-processor mode, but I can't help but feel this problem has been solved before. Am I missing something? Is this functionality already implemented by another standard tool? Halp!

    Read the article

  • Python: OSX Library for fast full screen jpg/png display

    - by Parand
    Frustrated by the lack of a simple ACDSee equivalent for OS X, I'm looking to hack one up for myself. I'm looking for a GUI library that accommodates: Full screen image display High quality image fit-to-screen (for display) Low memory usage Fast display Reasonable learning curve (the simpler the better) Looks like there are several choices, so which is the best? Here are some I've run across: PyOpenGL PyGame PyQT wxpython I don't have any particular experience with any of these, nor any strong desire to become an expert - I'm looking for the simplest solution. What do you recommend? [Update] For those not familiar with ACDSee, here's what it does that I care about: Simple list/thumbnail display of images in a directory Sort by name/size/type Ability to view images full screen Single-key delete while viewing full screen Move to next/previous image while viewing full screen Ability to select a group of images for: move to / copy to directory, delete, resize ACDSee has a bunch of niceties as well, such as remembering directories you've moved images to in the past, remembering your resize settings, displaying the total size of the images you've selected, etc. I've tried most of the options I could find (including Xee) and none of them quite get there. Please keep in mind that this is a programming/library question, not a criticism of any of the existing tools.

    Read the article

  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello, I'm completely stumped on this one. For some reason when I sort this query by DESC it's super fast, but if sorted by ASC it's extremely slow. This takes about 150 milliseconds: SELECT posts.id FROM posts USE INDEX (published) WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 ) ORDER BY posts.published DESC LIMIT 0, 50; This takes about 32 seconds: SELECT posts.id FROM posts USE INDEX (published) WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 ) ORDER BY posts.published ASC LIMIT 0, 50; The EXPLAIN is the same for both queries. id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE posts index NULL published 5 NULL 50 Using where I've tracked it down to "USE INDEX (published)". If I take that out it's the same performance both ways. But the EXPLAIN shows the query is less efficient overall. id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE posts range feed_id feed_id 4 \N 759 Using where; Using filesort And here's the table. CREATE TABLE `posts` ( `id` int(20) NOT NULL AUTO_INCREMENT, `feed_id` int(11) NOT NULL, `post_url` varchar(255) NOT NULL, `title` varchar(255) NOT NULL, `content` blob, `author` varchar(255) DEFAULT NULL, `published` int(12) DEFAULT NULL, `updated` datetime NOT NULL, `created` datetime NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `post_url` (`post_url`,`feed_id`), KEY `feed_id` (`feed_id`), KEY `published` (`published`) ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1; Is there a fix for this? Thanks!

    Read the article

  • Why do my CouchDB databases grow so fast?

    - by konrad
    I was wondering why my CouchDB database was growing too fast, so I wrote a little test script. This script changes an attribute of a CouchDB document 1200 times and takes the size of the database after each change. After performing these 1200 write steps the database does a compaction step and the db size is measured again. In the end the script plots the database size against the revision numbers. The benchmark is run twice: the first time the default number of document revisions (=1000) is used (_revs_limit); the second time the number of document revisions is set to 1. The first run produces the first plot and the second run produces the second plot. For me this is quite unexpected behavior. In the first run I would have expected linear growth, as every change produces a new revision. When the 1000 revisions are reached, the size should stay constant as the older revisions are discarded. After the compaction the size should fall significantly. In the second run the first revision should result in a certain database size that is then kept during the following write steps, as every new revision leads to the deletion of the previous one. I could understand if there were a little overhead needed to manage the changes, but this growth behavior seems weird to me. Can anybody explain this phenomenon or correct the assumptions that led to my wrong expectations?

    Read the article

  • Looking for a fast, compact, streamable, multi-language, strongly typed serialization format

    - by sanity
    I'm currently using JSON (compressed via gzip) in my Java project, in which I need to store a large number of objects (hundreds of millions) on disk. I have one JSON object per line, and disallow linebreaks within the JSON object. This way I can stream the data off disk line-by-line without having to read the entire file at once. It turns out that parsing the JSON code (using http://www.json.org/java/) is a bigger overhead than either pulling the raw data off disk, or decompressing it (which I do on the fly). Ideally what I'd like is a strongly-typed serialization format, where I can specify "this object field is a list of strings" (for example), and because the system knows what to expect, it can deserialize it quickly. I can also specify the format just by giving someone else its "type". It would also need to be cross-platform. I use Java, but work with people using PHP, Python, and other languages. So, to recap, it should be: Strongly typed Streamable (ie. read a file bit by bit without having to load it all into RAM at once) Cross platform (including Java and PHP) Fast Free (as in speech) Any pointers?

    Read the article

  • .net real time stream processing - needed huge and fast RAM buffer

    - by mack369
    The application I'm developing communicates with a digital audio device which is capable of sending 24 different voice streams at the same time. The device is connected via USB, using an FTDI device (serial port emulator) and the D2XX drivers (the basic COM driver is too slow to handle a 4.5Mbit transfer). Basically the application consists of 3 threads: Main thread - GUI, control, etc. Bus reader - in this thread data is continuously read from the device and saved to a file buffer (there is no logic in this thread) Data interpreter - this thread reads the data from the file buffer, converts it to samples, does simple sample processing and saves the samples to separate wav files. The reason I used a file buffer is that I wanted to be sure I wouldn't lose any samples. The application doesn't record all the time, so I chose this solution because it was safe. The application works fine, except that the buffered wave file generator is pretty slow: for 24 parallel records of 1 minute, it takes about 4 minutes to complete the recording. I'm pretty sure that eliminating the hard drive from this process would speed it up a lot. The second problem is that the file buffer gets really heavy for long recordings and I can't clean it up until the end of data processing (that would slow down the process even more). For a RAM buffer I need at least 1GB to make it work properly. What is the best way to allocate such a big amount of memory in .NET? I'm going to use this memory in 2 threads, so a fast synchronization mechanism is needed. I'm thinking about a cyclic buffer: one big array, where the Bus Reader saves the data and the Data Interpreter reads it. What do you think about it? [edit] For buffering I'm currently using the BinaryReader and BinaryWriter classes backed by a file.
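
    One way to keep the two threads decoupled without a 1GB file is a pool of fixed-size chunks plus a bounded queue: the bus reader rents a chunk, fills it from the device and posts it; the data interpreter drains the queue and returns chunks to the pool. A minimal sketch assuming .NET 4's BlockingCollection is available (chunk sizes, names and the ChunkPipe type are illustrative, not from the post):

    using System;
    using System.Collections.Concurrent;

    public sealed class ChunkPipe
    {
        private const int ChunkSize = 64 * 1024;   // 64 KB per chunk (assumed)
        private const int MaxChunks = 16 * 1024;   // ~1 GB ceiling (assumed)

        private readonly ConcurrentBag<byte[]> _pool = new ConcurrentBag<byte[]>();
        private readonly BlockingCollection<ArraySegment<byte>> _queue =
            new BlockingCollection<ArraySegment<byte>>(MaxChunks);

        // Bus reader thread: get a buffer, fill it, hand it over.
        public byte[] Rent()
        {
            byte[] chunk;
            return _pool.TryTake(out chunk) ? chunk : new byte[ChunkSize];
        }

        public void Post(byte[] chunk, int bytesRead)
        {
            _queue.Add(new ArraySegment<byte>(chunk, 0, bytesRead));  // blocks if the queue is full
        }

        // Data interpreter thread: process each chunk, then recycle it.
        public void Drain(Action<ArraySegment<byte>> process)
        {
            foreach (var segment in _queue.GetConsumingEnumerable())
            {
                process(segment);
                _pool.Add(segment.Array);
            }
        }

        public void Complete()
        {
            _queue.CompleteAdding();  // lets Drain finish once the queue empties
        }
    }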

    Read the article

  • Perl's Devel::LeakTrace::Fast Pointing to blank files and evals

    - by kt
    I am using Devel::LeakTrace::Fast to debug a memory leak in a perl script designed as a daemon which runs an infinite loop with sleeps until interrupted. I am having some trouble both reading the output and finding documentation to help me understand the output. The perldoc doesn't contain much information on the output. Most of it makes sense, such as pointing to globals in DBI. Intermingled with the output, however, are several leaked SV(<LOCATION>) from (eval #) line # Where the numbers are numbers and <LOCATION> is a location in memory. The script itself is not using eval at any point - I have not investigated each used module to see if evals are present. Mostly what I want to know is how to find these evals (if possible). I also find the following entries repeated over and over again leaked SV(<LOCATION>) from line # Where line # is always the same #. Not very helpful in tracking down what file that line is in.

    Read the article

  • Fast way to manually mod a number

    - by Nikolai Mushegian
    I need to be able to calculate (a^b) % c for very large values of a and b (which individually push the limits of their type and cause overflow errors when you try to calculate a^b). For small enough numbers, using the identity (a^b) % c = ((a % c)^b) % c works, but if c is too large this doesn't really help. I wrote a loop to do the mod operation manually, one factor of a at a time: private static long no_Overflow_Mod(ulong num_base, ulong num_exponent, ulong mod) { long answer = 1; for (int x = 0; x < num_exponent; x++) { answer = (answer * num_base) % mod; } return answer; } but this takes a very long time. Is there any simple and fast way to do this operation without actually having to take a to the power of b AND without using time-consuming loops? If all else fails, I can make a bool array to represent a huge data type and figure out how to do this with bitwise operators, but there has to be a better way.
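
    The standard fix for the loop is square-and-multiply (binary exponentiation), which needs O(log b) multiplications instead of b of them. A minimal sketch; note the usual caveat that (result * value) must itself fit in a ulong, so this is only safe while mod fits in 32 bits, beyond that you need a 128-bit multiply or System.Numerics.BigInteger:

    private static ulong ModPow(ulong value, ulong exponent, ulong mod)
    {
        ulong result = 1 % mod;      // handles mod == 1
        value %= mod;

        while (exponent > 0)
        {
            if ((exponent & 1) == 1)             // current bit of the exponent is set
            {
                result = (result * value) % mod;
            }
            value = (value * value) % mod;       // square the base
            exponent >>= 1;                      // move to the next bit
        }
        return result;
    }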

    Read the article

  • C# / Silverlight / WPF / Fast rendering lots of circles

    - by Walt W
    I want to render a lot of circles or small graphics within either Silverlight or WPF (around 1000-10000) as fast and as frequently as possible. If I have to go to DX or OGL, that's fine, but I'm wondering first about doing this within either of those two frameworks (read: it's OK if an answer is WPF-only or Silverlight-only). Also, if there is a way to access DX through WPF and render on a surface that way, I would be interested in that as well. So, what's the fastest way to draw a load of circles? They can be as plain as necessary, but they do need to have a radius. Currently I'm using DrawingVisual and a DrawingContext.DrawEllipse() command for each circle, then rendering the visual to a RenderTargetBitmap, but it becomes very slow as the number of circles rises. By the way, these circles move every frame, so caching isn't really an option unless you're going to suggest caching the individual circles . . . but their sizes are dynamic, so I'm not sure that's a great approach.
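
    For the WPF-only case, one commonly suggested baseline before dropping to DX is a single element that redraws everything in OnRender using frozen (immutable, shareable) brushes and pens, invalidated once per frame. A minimal sketch, assuming WPF only; the Circle struct and member names are illustrative:

    using System.Windows;
    using System.Windows.Media;

    public sealed class CircleLayer : FrameworkElement
    {
        public struct Circle { public Point Center; public double Radius; }

        private readonly Brush _fill;
        private readonly Pen _stroke;
        public Circle[] Circles = new Circle[0];

        public CircleLayer()
        {
            _fill = new SolidColorBrush(Colors.SteelBlue);
            _fill.Freeze();                          // frozen freezables skip change tracking
            _stroke = new Pen(Brushes.Black, 1.0);
            _stroke.Freeze();

            CompositionTarget.Rendering += (s, e) => InvalidateVisual();  // redraw every frame
        }

        protected override void OnRender(DrawingContext dc)
        {
            foreach (var c in Circles)
            {
                dc.DrawEllipse(_fill, _stroke, c.Center, c.Radius, c.Radius);
            }
        }
    }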

    Read the article

  • Way to store a large dictionary with low memory footprint + fast lookups (on Android)

    - by BobbyJim
    I'm developing an Android word game app that needs a large (~250,000 word) dictionary available. I need: reasonably fast lookups, e.g. constant time preferable; I need to do maybe 200 lookups a second on occasion to solve a word puzzle, and maybe 20 lookups within 0.2 seconds more often to check words the user just spelled. EDIT: Lookups are typically asking "Is this word in the dictionary?". I'd like to support up to two wildcards in the word as well, but this is easy enough by just generating all possible letters the wildcards could have been and checking the generated words (i.e. 26 * 26 lookups for a word with two wildcards). As it's a mobile app, using as little memory as possible and requiring only a small initial download for the dictionary data is the top priority. My first naive attempts used Java's HashMap class, which caused an out-of-memory exception. I've looked into using the SQLite databases available on Android, but this seems like overkill. What's a good way to do what I need?

    Read the article

  • Fast find object by string property

    - by Andrew Kalashnikov
    Hello, colleagues. I've got a task to quickly find an object by its string property. Object: class DicDomain { public virtual string Id { get; set; } public virtual string Name { get; set; } } For storing my objects I currently use a List<T> (the dictionary variable below), where T is DicDomain. I've got 5-10 such lists, which contain about 500-20000 items each. The task is to find objects by their Name. I use the following code now: List<T> entities = dictionary.FindAll(s => s.Name.Equals(word, StringComparison.OrdinalIgnoreCase)); I've got some questions: Is my search speed optimal? I think not. Is List a good data structure for this task? What about a hashtable or a sorted collection? For the Find method, maybe I should use string interning? I don't have much experience with these tasks. Can you give me good advice for increasing performance? Thanks
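
    If Name is effectively the lookup key, a hash-based index turns each lookup into O(1) instead of a full scan of the list. A minimal sketch using the DicDomain class from the question; it assumes names can repeat, so each key maps to a list of matches:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public sealed class DicDomainIndex
    {
        private readonly Dictionary<string, List<DicDomain>> _byName;

        public DicDomainIndex(IEnumerable<DicDomain> items)
        {
            // Build the index once, case-insensitively, then reuse it for every lookup.
            _byName = items
                .GroupBy(d => d.Name, StringComparer.OrdinalIgnoreCase)
                .ToDictionary(g => g.Key, g => g.ToList(),
                              StringComparer.OrdinalIgnoreCase);
        }

        public List<DicDomain> Find(string word)
        {
            List<DicDomain> matches;
            return _byName.TryGetValue(word, out matches)
                ? matches
                : new List<DicDomain>();
        }
    }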

    Read the article

  • I need a fast runtime expression parser

    - by Chris Lively
    I need to locate a fast, lightweight expression parser. Ideally I want to pass it a list of name/value pairs (e.g. variables) and a string containing the expression to evaluate. All I need back from it is a true/false value. The types of expressions should be along the lines of: varA == "xyz" and varB == 123 Basically, just a simple logic engine whose expression is provided at runtime. UPDATE At minimum it needs to support ==, !=, >, >=, <, <= Regarding speed, I expect roughly 5 expressions to be executed per request. We'll see somewhere in the vicinity of 100 requests a second. Our current pages tend to execute in under 50ms. Usually there will only be 2 or 3 variables involved in any expression; however, I'll need to load approximately 30 into the parser prior to execution. UPDATE 2012/11/5 An update about performance: we implemented NCalc nearly 2 years ago. Since then we've expanded its use such that we average 40+ expressions covering 300+ variables on postbacks. There are now thousands of postbacks occurring per second with absolutely zero performance degradation. We've also extended it to include a handful of additional functions, again with no performance loss. In short, NCalc met all of our needs and exceeded our expectations.
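
    For reference, evaluating one of these rules with NCalc (the library the 2012 update settled on) looks roughly like the sketch below. It assumes NCalc's Expression/Parameters/Evaluate API as commonly documented; the variable names and rule text are illustrative.

    using System.Collections.Generic;
    using NCalc;

    public static class RuleEvaluator
    {
        public static bool Evaluate(string rule, IDictionary<string, object> variables)
        {
            var expression = new Expression(rule);

            // Bind every name/value pair before evaluating.
            foreach (var pair in variables)
            {
                expression.Parameters[pair.Key] = pair.Value;
            }

            return (bool)expression.Evaluate();
        }
    }

    // Usage:
    //   var vars = new Dictionary<string, object> { { "varA", "xyz" }, { "varB", 123 } };
    //   bool ok = RuleEvaluator.Evaluate("varA == 'xyz' and varB == 123", vars);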

    Read the article

  • Very fast document similarity

    - by peyton
    Hello, I am trying to determine document similarity between a single document and each of a large number of documents (n ~= 1 million) as quickly as possible. More specifically, the documents I'm comparing are e-mails; they are grouped (i.e., there are folders or tags) and I'd like to determine which group is most appropriate for a new e-mail. Fast performance is critical. My a priori assumption is that the cosine similarity between term vectors is appropriate for this application; please comment on whether this is a good measure to use or not! I have already taken into account the following possibilities for speeding up performance: Pre-normalize all the term vectors Calculate a term vector for each group (n ~= 10,000) rather than each e-mail (n ~= 1,000,000); this would probably be acceptable for my application, but if you can think of a reason not to do it, let me know! I have a few questions: If a new e-mail has a new term never before seen in any of the previous e-mails, does that mean I need to re-compute all of my term vectors? This seems expensive. Is there some clever way to only consider vectors which are likely to be close to the query document? Is there some way to be more frugal about the amount of memory I'm using for all these vectors? Thanks!
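
    For the core measure itself, cosine similarity over sparse term vectors only has to touch the terms of the smaller vector, and if the group vectors are pre-normalized (as suggested above) the score reduces to a dot product. A minimal sketch with illustrative types (term-to-weight dictionaries):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class Cosine
    {
        public static double Similarity(IDictionary<string, double> a,
                                        IDictionary<string, double> b)
        {
            // Iterate the smaller vector and probe the larger one.
            var smaller = a.Count <= b.Count ? a : b;
            var larger  = ReferenceEquals(smaller, a) ? b : a;

            double dot = 0.0;
            foreach (var term in smaller)
            {
                double weight;
                if (larger.TryGetValue(term.Key, out weight))
                {
                    dot += term.Value * weight;
                }
            }

            double normA = Math.Sqrt(a.Values.Sum(v => v * v));
            double normB = Math.Sqrt(b.Values.Sum(v => v * v));
            return (normA == 0.0 || normB == 0.0) ? 0.0 : dot / (normA * normB);
        }
    }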

    Read the article
