Search Results

Search found 2214 results on 89 pages for 'significant figures'.

Page 71/89 | < Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >

  • high cpu in IIS

    - by Miki Watts
    Hi all. I'm developing a POS application that has a local database on each POS computer and communicates with the server using WCF hosted in IIS. The application has been deployed at several customers for over a year now. About a week ago we started getting reports from one of our customers that the server IIS is hosted on is very slow. When I checked the issue, I saw the application pool with my process rocket to almost 100% CPU on an 8-CPU server. I checked the SQL Activity Monitor and network volume, and they showed no significant overload beyond what we usually see. When checking the threads in Process Explorer, I saw lots of threads repeatedly calling CreateApplicationContext. I tried installing .NET 2.0 SP1, according to some posts I found on the net, but it didn't solve the problem and only replaced the function calls with CLRCreateManagedInstance. I'm about to capture a dump of the IIS processes using adplus and windbg and try to figure out what's wrong. Has anyone encountered something like this, or has an idea which direction I should check? P.S. The same version of the application is deployed at another customer, and there it works just fine. I also tried rolling back versions (even very old versions) and it still behaves exactly the same. Edit: well, problem solved. It turns out I had an SQL query in there that didn't limit the result set, and when the customer went over a certain number of rows, it started bogging down the server. It took me two days to find because of all the surrounding noise in the logs, but I waited for the night and took a dump then, which immediately showed me the query.

    Read the article

  • Utility of List<T>.Sort() versus List<T>.OrderBy() for a member of a custom container class

    - by ccomet
    I've found myself running back through some old 3.5-framework legacy code, and found some points where a whole bunch of lists and dictionaries must be updated in a synchronized fashion. I've determined that I can make this process infinitely easier to both use and understand by converging these into custom container classes of new custom classes. There are some points, however, where I ran into concerns about organizing the contents of these new container classes by a specific inner property, for example sorting by the ID number property of one class. As the container classes are primarily based around a generic List object, my first instinct was to have the inner classes implement IComparable and write a CompareTo method that compares the properties. This way, I can just call items.Sort() when I want to invoke the sorting. However, I've been thinking about using items = items.OrderBy(Func) instead. This way it is more flexible if I need to sort by any other property. Readability is better as well, since the property used for sorting is listed in-line with the sort call rather than having to look up the IComparable code. The overall implementation feels cleaner as a result. I don't care for premature or micro optimization, but I like consistency. I find it best to stick with one kind of implementation for as many cases as is appropriate, and use different implementations where necessary. Is it worth it to convert my code to use LINQ's OrderBy instead of List.Sort? Is it better practice to stick with the IComparable implementation for these custom containers? Are there any significant mechanical advantages offered by either path that I should be weighing the decision on? Or is their end functionality equivalent to the point that it just becomes a coder's preference?

    Read the article

  • How to keep confirmation messages after POST while doing a post-submit redirect?

    - by MicE
    Hello, I'm looking for advice on how to share certain bits of data (i.e. post-submit confirmation messages) between individual requests in a web application. Let me explain. Current approach: the user submits an add/edit form for a resource; if there were no errors, the user is shown a confirmation with links to submit a new resource (for the "add" form), view the submitted/edited resource, or view all resources (one step above in the hierarchy); the user then has to click on one of the three links to proceed (i.e. to the page "above"). Programmatically, the form and its confirmation page are one set of classes. The page above that is another. They can technically share code, but at the moment they are both independent during processing of individual requests. We would like to amend the above as follows: the user submits an add/edit form for a resource; if there were no errors, the user is redirected to the page with all resources (one step above in the hierarchy), with one or more confirmation messages displayed at the top of the page (i.e. a success message, to whom the request was assigned, etc.). This will save users one click (they have to go through a lot of these add/edit forms), and the post-submit redirect will address common problems with browser refresh / back buttons. What approach would you recommend for sharing the data needed for the confirmation messages between the two requests, please? I'm not sure if it helps; it's a PHP application backed by a RESTful API, but I think this is a language-agnostic question. A few simple solutions that come to mind are to share the data via cookies or in the session; this however breaks statelessness and would pose a significant problem for users who work in several tabs (the data could clash). Passing the data as GET parameters is not suitable as we are talking about several messages which are dynamic (e.g. changing actors, dates). Thanks, M.

    Read the article

  • real time stock quotes, StreamReader performance optimization

    - by sean717
    I am working on a program that extracts real-time quotes for 900+ stocks from a website. I use HttpWebRequest to send an HTTP request to the site, store the response in a stream, and open the stream using the following code:

        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        Stream stream = response.GetResponseStream();
        StreamReader reader = new StreamReader(stream);

    The size of the received HTML is large (5000+ lines), so it takes a long time to parse it and extract the price. For 900 files, it takes about 6 minutes for parsing and extracting. My boss isn't happy with that; he told me he wants the whole process done in TWO minutes. I've identified that the part of the program that takes most of the time is parsing and extracting. I've tried to optimize the code to make it faster; the following is what I have now after some optimization:

        // skip lines at the top
        for (int i = 0; i < 1500; ++i)
            reader.ReadLine();
        // read the line that contains the price
        string theLine = reader.ReadLine();
        // ... extract the price from the line

    Now it takes about 4 minutes to process all the files; there is still a significant gap to what my boss is expecting. So I am wondering, is there another way that I can further speed up the parsing and extracting and have everything done within 2 minutes?

    Read the article

  • Detection of negative integers using bit operations

    - by Nawaz
    One approach to check whether a given integer is negative could be this (using bit operations):

        int num_bits = sizeof(int) * 8;  //assuming 8 bits per byte!
        int sign_bit = given_int & (1 << (num_bits-1));  //sign_bit is either non-zero or 0
        if ( sign_bit ) {
            cout << "given integer is negative" << endl;
        } else {
            cout << "given integer is positive" << endl;
        }

    The problem with this solution is that the number of bits per byte might not be 8; it could be 9, 10, 11, even 16 or 40 bits per byte. A byte doesn't necessarily mean 8 bits! Anyway, this problem can easily be fixed by writing:

        //CHAR_BIT is defined in limits.h
        int num_bits = sizeof(int) * CHAR_BIT;  //no assumption

    It seems fine now. But is it really? Is this Standard-conformant? What if the negative integer is not represented as 2's complement? What if its representation is a binary numeration system that doesn't require negative integers to have a 1 in the most significant bit? Can we write code that is both portable and Standard-conformant? Related topics: Size of Primitive data types; Why is a boolean 1 byte and not 1 bit of size?
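
    For illustration, a minimal compilable sketch of the two checks under discussion; the relational test needs no representation assumptions at all, while the bit test's remaining caveats are noted in the comments:

        #include <climits>
        #include <iostream>

        // Plain comparison: portable regardless of how negative values are represented.
        bool is_negative(int x) {
            return x < 0;
        }

        // Bit-level variant: CHAR_BIT removes the 8-bits-per-byte assumption. The
        // conversion to unsigned is defined in terms of value (modulo 2^N), so the
        // top bit of u ends up set exactly when x is negative, whatever the internal
        // representation of int; it does assume unsigned int has no padding bits.
        bool sign_bit_set(int x) {
            unsigned int u = static_cast<unsigned int>(x);
            return (u >> (sizeof(unsigned int) * CHAR_BIT - 1)) & 1u;
        }

        int main() {
            std::cout << std::boolalpha
                      << is_negative(-42) << ' ' << sign_bit_set(-42) << '\n'   // true true
                      << is_negative(42)  << ' ' << sign_bit_set(42)  << '\n';  // false false
        }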

    Read the article

  • What database systems should a startup company consider?

    - by Am
    Right now I'm developing the prototype of a web application that aggregates a large number of text entries from a large number of users. This data must be frequently displayed back and often updated. At the moment I store the content inside a MySQL database and use the NHibernate ORM layer to interact with the DB. I've got tables defined for users, roles, submissions, tags, notifications and so on. I like this solution because it works well and my code looks nice and sane, but I'm also worried about how MySQL will perform once the size of our database reaches a significant number of records. I feel that it may struggle to perform join operations fast enough. This has made me think about non-relational database systems such as MongoDB, CouchDB, Cassandra or Hadoop. Unfortunately I have no experience with any of them. I've read some good reviews of MongoDB and it looks interesting. I'm happy to spend the time and learn if one turns out to be the way to go. I'd much appreciate anyone offering points or issues to consider when going with a non-relational DBMS.

    Read the article

  • return only the last select results from stored procedure

    - by Madalina Dragomir
    The requirement says: a stored procedure meant to search data based on 5 identifiers. If there is an exact match, return ONLY the exact match; if not, but there is an exact match on the non-null parameters, return ONLY those results; otherwise return any match on any 4 non-null parameters... and so on. My (simplified) code looks like:

        create procedure xxxSearch @a nvarchar(80), @b nvarchar(80)...
        as
        begin
            select whatever from MyTable t
            where ((@a is null and t.a is null) or (@a = t.a))
              and ((@b is null and t.b is null) or (@b = t.b))...
            if @@ROWCOUNT = 0
            begin
                select whatever from MyTable t
                where ((@a is null) or (@a = t.a))
                  and ((@b is null) or (@b = t.b))...
                if @@ROWCOUNT = 0
                begin
                    ...
                end
            end
        end

    As a result there can be several result sets selected, the first ones empty, and I only need the last one. I know that it is easy to get only the last result set on the application side, but all our stored procedure calls go through a framework that expects the significant results in the first table, and I'm not eager to change it and test all the existing SPs. Is there a way to return only the last select's results from a stored procedure? Is there a better way to do this task?

    Read the article

  • Tool to detect use/abuse of String.Concat (where StringBuilder should be used)

    - by Mark Rushakoff
    It's common knowledge that you shouldn't use a StringBuilder in place of a small number of concatenations: string s = "Hello"; if (greetingWorld) { s += " World"; } s += "!"; However, in loops of a significant size, StringBuilder is the obvious choice: string s = ""; foreach (var i in Enumerable.Range(1,5000)) { s += i.ToString(); } Console.WriteLine(s); Is there a tool that I can run on either raw C# source or a compiled assembly to identify where in the source code String.Concat is being called? (If you're not familiar, s += "foo" is mapped to String.Concat in the IL output.) Obviously, I can't realistically search through an entire project and evaluate every += to identify whether the lvalue is a string. Ideally, it would only point out calls inside a for/foreach loop, but I would even put up with all the false positives of noting every String.Concat. Also, I'm aware that there are some refactoring tools that will automatically refactor my code to use StringBuilder, but I am only interested in identifying the Concat usage at this point. I routinely run Gendarme and FxCop on my code, and neither of those tools identifies what I've described.

    Read the article

  • How to draw complex shape from code behind for custom control in resource dictionary

    - by HopelessCoder
    Hi, I am new to WPF and am having a problem which may or may not be trivial. I have defined a custom control as follows in the resource dictionary:

        <ResourceDictionary x:Class="SyringeSlider.Themes.Generic"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:local="clr-namespace:SyringeSlider">
            <Style TargetType="{x:Type local:CustomControl1}">
                <Setter Property="Template">
                    <Setter.Value>
                        <ControlTemplate TargetType="{x:Type local:CustomControl1}">
                            <Border Background="{TemplateBinding Background}"
                                    BorderBrush="{TemplateBinding BorderBrush}"
                                    BorderThickness="{TemplateBinding BorderThickness}">
                                <Canvas Height="{TemplateBinding Height}" Width="{TemplateBinding Width}" Name="syringeCanvas">
                                </Canvas>
                            </Border>
                        </ControlTemplate>
                    </Setter.Value>
                </Setter>
            </Style>
        </ResourceDictionary>

    Unfortunately I cannot go beyond this, because I would like to draw a geometry onto the canvas consisting of a set of multiple line geometries whose dimensions are calculated as a function of the space available in the canvas. I believe that I need a code-behind method to do this, but have not been able to determine how to link the XAML definition to a code-behind method. Note that I have set up a class x:Class="SyringeSlider.Themes.Generic" specifically for this purpose, but can't figure out which Canvas property to link the drawing method to. My drawing method looks like this:

        private void CalculateSyringe()
        {
            int adjHeight = (int) Height - 1;
            int adjWidth = (int) Width - 1;
            // Calculate some very useful values based on the chart above.
            int borderOffset = (int)Math.Floor(m_borderWidth / 2.0f);
            int flangeLength = (int)(adjHeight * .05f);
            int barrelLeftCol = (int)(adjWidth * .10f);
            int barrelLength = (int)(adjHeight * .80);
            int barrelRightCol = adjWidth - barrelLeftCol;
            int coneLength = (int)(adjHeight * .10);
            int tipLeftCol = (int)(adjWidth * .45);
            int tipRightCol = adjWidth - tipLeftCol;
            int tipBotCol = adjWidth - borderOffset;

            Path mySyringePath = new Path();
            PathGeometry mySyringeGeometry = new PathGeometry();
            PathFigure mySyringeFigure = new PathFigure();
            mySyringeFigure.StartPoint = new Point(0, 0);

            Point pointA = new Point(0, flangeLength);
            mySyringeFigure.Segments.Add(new LineSegment(pointA, true));

            Point pointB = new Point();
            pointB.Y = pointA.Y + barrelLength;
            pointB.X = 0;
            mySyringeFigure.Segments.Add(new LineSegment(pointB, true));

            // You get the idea... Add more points in this way.

            mySyringeGeometry.Figures.Add(mySyringeFigure);
            mySyringePath.Data = mySyringeGeometry;
        }

    So my questions are: 1) Does what I am trying to do make any sense? 2) Can I use a Canvas for this purpose? If not, what are my other options? Thanks!

    Read the article

  • What is the benefit of using ONLY OpenID authentication on a site?

    - by Peter
    From my experience with OpenID, I see a number of significant downsides. (1) It adds a single point of failure to the site, and it is not a failure the site can fix even if detected: if the OpenID provider is down for three days, what recourse does the site have to let its users log in and access the information they own? (2) It takes the user to another site's content every time they log on to your site. Even if the OpenID provider has no error, the user is redirected to the provider's site to log in, and that login page has content and links, so there is a chance a user will actually be drawn away from the site to go down the Internet rabbit hole. Why would I want to send my users to another company's website? [Note: my provider no longer does this and seems to have fixed this problem (for now).] (3) It adds a non-trivial amount of time to signup: to sign up with the site, a new user is forced to read a new standard, choose a provider, and sign up. Standards are something the technical people should agree to in order to make the user experience frictionless; they are not something that should be thrust on the users. (4) It is a phisher's dream: OpenID is incredibly insecure and stealing the person's ID as they log in is trivially easy. [Taken from David Arno's answer below.] For all of these downsides, the one upside is to allow users to have fewer logins on the Internet. If a site has opt-in OpenID then users who want that feature can use it. What I would like to understand is: what benefit does a site get for making OpenID mandatory?

    Read the article

  • What can I use to journal writes to the file system

    - by Dmitry
    Hello all, I need to track all writes to files in order to keep a synchronized version of the files in a different place (a server or just another directory; it doesn't really matter which). Assume: all files are located in the same directory; it is fine to create some system files (e.g. SomeFileName.Ext~temp-data); no one has concurrent access to the synced directory, and nobody spoils our meta-files or changes the real files before we apply the postponed writes (like commits); we do not care about recovering "local" changes in case of a crash, since the system can simply be rolled back to the "server" state by copying from it; and it is important that this be transparent to use (so the programmer just calls ordinary fopen(), read(), write()). It must be guaranteed that the copy of the files the "server" has is consistent, i.e. the whole set of files existed together at some moment in time. They may be quite outdated, but they must be a fair snapshot of all files at some point. As I understand it, I should overload the writing logic to collect data in order to send the changes to the "server", for example by writing to a temporary File~tmp, and I also have to overload reads so that the program can read the actual data of the file. It would be great if you could suggest some existing library (Java or C++, it is unimportant) or solution (customizing a VCS?), or give hints on how I should write it myself. Edit: after some reading I have more precise requirements: I need a COW (copy-on-write) wrapper for fopen(), fwrite(), ... or an interceptor (hook) for WriteFile() and other FS API system calls. A log-structured file system in userspace would be an alternative too.
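
    Not an existing library, but a very rough sketch of the local half of the idea (the "write to File~tmp, commit by rename" part); the change log that would be shipped to the "server" is left out, and the class name, temp-file suffix and open-mode handling are just placeholders:

        #include <cstdio>
        #include <filesystem>
        #include <string>

        // Writes go to a private copy (path~tmp); commit() atomically swaps it in.
        // If the object is destroyed without commit(), the pending changes are dropped.
        class CowFile {
        public:
            explicit CowFile(std::string path)
                : real_(std::move(path)), tmp_(real_ + "~tmp") {
                std::error_code ec;
                if (std::filesystem::exists(real_, ec)) {
                    std::filesystem::copy_file(real_, tmp_,
                        std::filesystem::copy_options::overwrite_existing, ec);
                    fp_ = std::fopen(tmp_.c_str(), "r+b");   // edit a copy of the current contents
                } else {
                    fp_ = std::fopen(tmp_.c_str(), "wb");
                }
            }

            size_t write(const void* data, size_t size) {
                return fp_ ? std::fwrite(data, 1, size, fp_) : 0;
            }

            bool commit() {
                if (!fp_) return false;
                std::fclose(fp_);
                fp_ = nullptr;
                std::error_code ec;
                std::filesystem::rename(tmp_, real_, ec);    // atomic replace on POSIX filesystems
                return !ec;
            }

            ~CowFile() {
                if (fp_) {                                   // never committed: roll back
                    std::fclose(fp_);
                    std::error_code ec;
                    std::filesystem::remove(tmp_, ec);
                }
            }

        private:
            std::string real_, tmp_;
            std::FILE* fp_ = nullptr;
        };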

    Read the article

  • Javascript CS-PRNG - 64-bit random

    - by Jack
    Hi, I need to generate a cryptographically secure 64-bit unsigned random integer in Javascript. The first problem is that Javascript only allows 64-bit signed integers, so 9223372036854775808 is the biggest supported integer without going into floating point use, I think? To fix this I can use a big number library, no problem. My method:

        var randNum = SHA256( randBigInt(128, 0) ) % 2^64;

    where SHA256() is a secure hash function and randBigInt(), defined below, is a non-crypto PRNG; I'm giving it a 128-bit seed, so brute force shouldn't be a problem.

        randBigInt(n,s)  //return an n-bit random BigInt (n>=1). If s=1, then the most significant of those n bits is set to 1.

    Is this a secure method to generate a cryptographically secure 64-bit random int? And, importantly, does taking the result mod 2^64 guarantee 100% that I have a 64-bit number? An abstract example: say this number is prime (it isn't, I know); I will use it in the Galois field GF(2^p), where p must be 64 bits so that every possible 1-63 bit number is a field element. In this case, my random int must be larger than any 63-bit number. And I'm not sure I'm correct in taking the mod 2^64 of a 256-bit hash output. Thanks (hope that makes sense).
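
    For comparison, here is the underlying idea sketched in C++ rather than Javascript: pull eight bytes straight from a random source and pack them, which by construction gives a value in [0, 2^64) with no modulo step and no bias. std::random_device stands in for a real CSPRNG here and is not guaranteed to be cryptographically secure on every platform, so treat this as an illustration of the idea only:

        #include <cstdint>
        #include <random>

        uint64_t random_u64() {
            std::random_device rd;          // placeholder entropy source
            uint64_t value = 0;
            for (int i = 0; i < 8; ++i) {
                // take the low byte of each draw and shift it into the result
                value = (value << 8) | static_cast<uint64_t>(rd() & 0xFFu);
            }
            return value;
        }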

    Read the article

  • How to make HTML layout whitespace-agnostic?

    - by ssg
    If you have consecutive inline-blocks, whitespace becomes significant: it adds some amount of space between the elements. What's the "correct" way of avoiding the whitespace effect on HTML layout if you want those blocks to look stuck to each other? Example: <span>a</span> <span>b</span> This renders differently than <span>a</span><span>b</span> because of the space in between. I want the whitespace effect to go away without compromising the HTML source code layout; I want my HTML templates to stay clean and well-indented. I think these options are ugly: 1) Tweaking text-indent, margin, padding etc. (because it would depend on font-size, default whitespace width etc.) 2) Putting everything on a single line, next to each other. 3) Zero font-size; that would require overriding font-size in the blocks, which would otherwise be inherited. 4) Possible document-wide solutions; I want the solution to stay local to a certain block of HTML. Any ideas, any obvious points I'm missing?

    Read the article

  • Good way to identify similar images?

    - by Nick
    I've developed a simple and fast algorithm in PHP to compare images for similarity. It's fast to hash (~40 per second for 800x600 images), and an unoptimised search algorithm can go through 3,000 images in 22 mins comparing each one against the others (3/sec). The basic overview is: you take an image, rescale it to 8x8 and then convert those pixels to HSV. The hue, saturation and value are then truncated to 4 bits and it becomes one big hex string. Comparing images basically walks along the two strings and adds up the differences it finds. If the total number is below 64 then it's the same image; different images are usually around 600-800, and below 20 is extremely similar. Are there any improvements I can make to this model? I haven't looked at how relevant the different components (hue, saturation and value) are to the comparison. Hue is probably quite important, but the others? To speed up searches I could probably split the 4 bits from each part in half, and put the most significant bits first so that if they fail the check then the LSBs don't need to be checked at all. I don't know an efficient way to store bits like that yet that still allows them to be searched and compared easily. I've been using a dataset of 3,000 photos (mostly unique) and there haven't been any false positives. It's completely immune to resizes and fairly resistant to brightness and contrast changes.
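
    For what it's worth, the comparison step described above boils down to something like the following sketch (in C++ rather than PHP; the hashes here are tiny made-up stand-ins, while real ones would be 8 x 8 x 3 = 192 hex digits, one truncated H, S and V digit per pixel of the thumbnail):

        #include <cstdlib>
        #include <iostream>
        #include <string>

        // Sum of per-digit differences between two equal-length hex hashes.
        // Per the thresholds quoted above: < 64 is "same image", ~600-800 is "different".
        int hashDistance(const std::string& a, const std::string& b) {
            int total = 0;
            for (std::size_t i = 0; i < a.size() && i < b.size(); ++i) {
                int da = std::stoi(a.substr(i, 1), nullptr, 16);
                int db = std::stoi(b.substr(i, 1), nullptr, 16);
                total += std::abs(da - db);
            }
            return total;
        }

        int main() {
            std::cout << hashDistance("0a3f", "0b2f") << '\n';   // prints 2
        }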

    Read the article

  • What is the performance impact of CSS's universal selector?

    - by Bungle
    I'm trying to find some simple client-side performance tweaks in a page that receives millions of monthly pageviews. One concern I have is the use of the CSS universal selector (*). As an example, consider a very simple HTML document like the following:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
            <title>Example</title>
            <style type="text/css">
                * { margin: 0; padding: 0; }
            </style>
        </head>
        <body>
            <h1>This is a heading</h1>
            <p>This is a paragraph of text.</p>
        </body>
        </html>

    The universal selector will apply the above declaration to the body, h1 and p elements, since those are the only ones in the document. In general, would I see better performance from a rule such as:

        body, h1, p { margin: 0; padding: 0; }

    Or would this have exactly the same net effect? Essentially, what I'm asking is whether these rules are effectively equivalent in this case, or whether the universal selector has to perform more unnecessary work that I may not be aware of. I realize that the performance impact in this example may be very small, but I'm hoping to learn something that may lead to more significant performance improvements in real-world situations. Thanks for any help!

    Read the article

  • Javascript/jQuery: programmatically follow a link

    - by Dan
    In Javascript code, I would like to programmatically cause the browser to follow a link that's on my page. Simple case: <a id="foo" href="mailto:[email protected]">something</a> function goToBar() { $('#foo').trigger('follow'); } This is hypothetical as it doesn't actually work. And no, triggering click doesn't do it. I am aware of window.location and window.open but these differ from native link-following in some ways that matter to me: a) in the presence of a <base /> element, and b) in the case of mailto URLs. The latter in particular is significant. In Firefox at least, calling window.location.href = "mailto:[email protected]" causes the window's unload handlers to fire, whereas simply clicking a mailto link does not, as far as I can tell. I'm looking for a way to trigger the browser's default handling of links, from Javascript code. Does such a mechanism exist? Toolkit-specific answers also welcome (especially for Gecko).

    Read the article

  • Memcache error: Failed reading line from stream (0) Array

    - by daviddripps
    I get some variation of the following error when our server gets put under any significant load. I've Googled for hours about it and tried everything (including upgrading to the latest versions and clean installs). I've read all the posts about it here on SA, but can't figure it out. A lot of people are having the same problem, but no one seems to have a definitive answer. Any help would be greatly appreciated. Thanks in advance.

        Fatal error: Uncaught exception 'Zend_Session_Exception' with message 'Zend_Session::start() -
        /var/www/trunk/library/Zend/Cache/Backend/Memcached.php(Line:180): Error #8 Memcache::get()
        [memcache.get]: Server localhost (tcp 11211) failed with: Failed reading line from stream (0) Array

    We have a copy of our production environment for testing and everything works great until we start load-testing. I think the biggest object stored is about 170KB, but it will probably be about 500KB when all is said and done (well below the 1MB limit). Just FYI: Memcache gets hit about 10-20 times per page load. Here are the memcached settings:

        PORT="11211"
        USER="memcached"
        MAXCONN="1024"
        CACHESIZE="64"
        OPTIONS=""

    I'm running Memcache 1.4.5 with version 2.2.6 of the PHP-memcache module. PHP is version 5.2.6. memcache details from php -i:

        memcache support = enabled
        Active persistent connections = 0
        Version = 2.2.6
        Revision = $Revision: 303962 $
        Directive = Local Value = Master Value
        memcache.allow_failover = 1 = 1
        memcache.chunk_size = 8192 = 8192
        memcache.default_port = 11211 = 11211
        memcache.default_timeout_ms = 1000 = 1000
        memcache.hash_function = crc32 = crc32
        memcache.hash_strategy = standard = standard
        memcache.max_failover_attempts = 20 = 20

    Thanks everyone

    Read the article

  • Match subpatterns in any order

    - by Yaroslav
    I have a long regexp with two complicated subpatterns inside. How can I match those subpatterns in any order? A simplified example:

        /(apple)?\s?(banana)?\s?(orange)?\s?(kiwi)?/

    and I want to match both of "apple banana orange kiwi" and "apple orange banana kiwi". It is a very simplified example; in my case banana and orange are long, complicated subpatterns and I don't want to do something like

        /(apple)?\s?((banana)?\s?(orange)?|(orange)?\s?(banana)?)\s?(kiwi)?/

    Is it possible to group subpatterns like chars in a character class? UPD: real data as requested:

        14:24 26,37 Mb
        108.53 01:19:02 06.07
        24.39 19:39
        46:00

    My strings are much longer, but this is the significant part. Here you can see the lines I need to match. The first has two values: length (14 min 24 sec) and size (26.37 Mb). The second one has three values, but in a different order: size 108.53 Mb, length 01 h 19 m 02 s and date June 07. The third one has two: size and length. The fourth has only length. There are a couple more variations, and I need to parse all the values. I have a regexp that's pretty close, except I can't figure out how to match the patterns in a different order without writing it twice:

        (?<size>\d{1,3}\[.,]\d{1,2}\s+(?:Mb)?)?\s?
        (?<length>(?:(?:01:)?\d{1,2}:\d{2}))?\s*
        (?<date>\d{2}\.\d{2}))?

    NOTE: that is only part of a big regexp that works fine already.
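
    One pragmatic workaround, if a single order-sensitive pattern keeps fighting back, is to run each subpattern as its own search over the line, so the order in which the values appear stops mattering. A rough illustration (in C++ for concreteness, with simplified stand-ins for the real size/length/date subpatterns):

        #include <iostream>
        #include <optional>
        #include <regex>
        #include <string>

        std::optional<std::string> find(const std::string& line, const std::regex& re) {
            std::smatch m;
            if (std::regex_search(line, m, re)) return m.str();
            return std::nullopt;   // this component simply isn't present on the line
        }

        int main() {
            std::regex size(R"(\d{1,3}[.,]\d{1,2}\s*Mb)");
            std::regex length(R"((?:\d{1,2}:)?\d{1,2}:\d{2})");
            std::regex date(R"(\b\d{2}\.\d{2}\b)");

            for (const std::string& line : {std::string("14:24 26,37 Mb"),
                                            std::string("108.53 Mb 01:19:02 06.07")}) {
                std::cout << find(line, size).value_or("-") << " | "
                          << find(line, length).value_or("-") << " | "
                          << find(line, date).value_or("-") << '\n';
            }
        }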

    Read the article

  • MinGW-gcc PCH not speeding up wxWidget build times. Is my setup correct?

    - by Victor T.
    Hi all, I've been building wxMSW 2.8.11 with the latest stable release of mingw-gcc 4.5.1 and I'm trying to see if the build can be sped up using precompiled headers. My initial attempts at this don't seem to work. I basically followed the instructions given here. I created a wxprec.h precompiled header with the following:

        g++ -O2 -mthreads -DHAVE_W32API_H -D__WXMSW__ -DNDEBUG -D_UNICODE -I..\..\lib\gcc_dll\mswu -I..\..\include -W -Wall -DWXBUILDING -I..\..\src\tiff -I..\..\src\jpeg -I..\..\src\png -I..\..\src\zlib -I..\..\src\regex -I..\..\src\expat\lib -DwxUSE_BASE=1 -DWXMAKINGDLL -Wno-ctor-dtor-privacy ../../include/wx/wxprec.h

    That does successfully create a wxprec.h.gch that's about 1.6 MB in size. Now I proceed to build wxMSW using the following make command from a cmd.exe shell:

        mingw32-make -f makefile.gcc

    While the build does succeed, I noticed no speedup whatsoever compared to when PCH wasn't used. To make sure gcc was actually using the PCH, I added -H in config.gcc and did another rebuild. Indeed, the outputted include list does show a '!' next to wxprec.h, so gcc is supposedly using it. What's the reason for the PCH not working? Did I set up the precompiled headers correctly, or am I missing a step? Just for reference, here are the compile times I get when building wxMSW 2.8.11 with the other compilers (Visual Studio 2010 and C++Builder 2007). The time savings are pretty significant.

                   | release, pch | release, nopch | debug, nopch
        -----------+--------------+----------------+--------------
        gcc451     | 8min 33sec   | 8min 17sec     | 8min 49sec
        msc_1600   | 2min 23sec   | 13min 11sec    | --
        bcc593     | 0min 59sec   | 2min 29sec     | --

    Thanks

    Read the article

  • node.js UDP data lost at high package rates

    - by koleto
    I am observing significant data loss on a UDP connection with node.js 0.6.18 and 0.8.0. It appears at high packet rates, about 1200 packets per second, with frames around the 1500-byte limit. Each data packet has an incrementing number, so it is easy to track the number of lost packets.

        var server = dgram.createSocket("udp4");
        server.on("message", function (message, rinfo) {
            //~processData(message);
            //~ writeData(message, null, 5000);
        }).bind(10001);

    In the receiving callback I tested two cases. First I saved 5000 packets to a file; the result was no dropped packets. Then I included a data-processing routine and got about a 50% drop rate. What I expected was that the data-processing routine would be completely asynchronous and would not introduce dead time into the system, since it is a simple parser that processes the binary data in the packet and emits events to a further processing routine. It seems that the parsing routine introduces dead time in which the event handler is unable to handle each packet. At low packet rates (< 1200 packets/sec) there is no data loss observed! Is this a bug or am I doing something wrong?

    Read the article

  • Multi-reader IPC solution?

    - by gct
    I'm working on a framework in C++ (just for fun for now) that lets the user write plugins that use a standard API to stream data between each other. There are going to be three basic transport mechanisms for the data: files, sockets, and some kind of IPC piping system. The system is set up so that for the non-file transports, each stream can have multiple readers, i.e. once a server socket is set up, multiple computers can connect and stream the data. I'm a little stuck on the multi-reader IPC system, though. All my plugins run in threads, so they live in the same address space, so some kind of shared-memory system would work fine. I was thinking I'd write my own circular buffer with a write pointer and read pointers chasing it around the buffer, but I have my doubts that I can achieve the same performance as something like Linux pipes. I'm curious what people would suggest for a multi-reader solution to something like this. Is the overhead for pipes or domain sockets low enough that I could just open a connection to each reader and issue separate writes to each reader? This is intended to handle significant volumes of data (tens of mega-samples/sec), so performance is a must.
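
    Since the plugins share an address space, the circular buffer described above (one write index, a read index per reader) can be prototyped quite simply before worrying about matching pipe performance. A mutex-based sketch, with Sample as a placeholder element type; a lock-free variant would only be worth the effort if profiling shows the lock is the bottleneck:

        #include <condition_variable>
        #include <cstdint>
        #include <mutex>
        #include <vector>

        struct Sample { float i, q; };   // placeholder payload

        // Single writer, fixed set of readers; each reader has its own cursor and
        // sees every sample. The writer blocks when the slowest reader falls a full
        // buffer behind.
        class BroadcastRing {
        public:
            BroadcastRing(std::size_t capacity, std::size_t readers)
                : buf_(capacity), readPos_(readers, 0) {}

            void write(const Sample& s) {
                std::unique_lock<std::mutex> lock(m_);
                notFull_.wait(lock, [&] { return writePos_ - slowest() < buf_.size(); });
                buf_[writePos_ % buf_.size()] = s;
                ++writePos_;
                notEmpty_.notify_all();
            }

            Sample read(std::size_t reader) {
                std::unique_lock<std::mutex> lock(m_);
                notEmpty_.wait(lock, [&] { return readPos_[reader] < writePos_; });
                Sample s = buf_[readPos_[reader] % buf_.size()];
                ++readPos_[reader];
                notFull_.notify_one();
                return s;
            }

        private:
            std::uint64_t slowest() const {
                std::uint64_t m = writePos_;
                for (std::uint64_t p : readPos_) if (p < m) m = p;
                return m;
            }

            std::vector<Sample> buf_;
            std::vector<std::uint64_t> readPos_;
            std::uint64_t writePos_ = 0;
            std::mutex m_;
            std::condition_variable notEmpty_, notFull_;
        };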

    Read the article

  • What if a large number of objects are passed to my SwingWorker.process() method?

    - by Trejkaz
    I just found an interesting situation. Suppose you have some SwingWorker (I've made this one vaguely reminiscent of my own):

        public class AddressTreeBuildingWorker extends SwingWorker<Void, NodePair> {
            private DefaultTreeModel model;

            public AddressTreeBuildingWorker(DefaultTreeModel model) {
                this.model = model;
            }

            @Override
            protected Void doInBackground() {
                // Omitted; performs variable processing to build a tree of address nodes.
            }

            @Override
            protected void process(List<NodePair> chunks) {
                for (NodePair pair : chunks) {
                    // Actually the real thing inserts in order.
                    model.insertNodeInto(pair.parent, pair.child, pair.parent.getChildCount());
                }
            }

            private static class NodePair {
                private final DefaultMutableTreeNode parent;
                private final DefaultMutableTreeNode child;

                private NodePair(DefaultMutableTreeNode parent, DefaultMutableTreeNode child) {
                    this.parent = parent;
                    this.child = child;
                }
            }
        }

    If the work done in the background is significant then things work well: process() is called with relatively small lists of objects and everything is happy. The problem is, if the work done in the background is suddenly insignificant for whatever reason, process() receives a huge list of objects (I have seen 1,000,000, for instance), and by the time you process each object you have spent 20 seconds on the Event Dispatch Thread, exactly what SwingWorker was designed to avoid. In case it isn't clear, both of these occur with the same SwingWorker class for me; it depends on the input data and the type of processing the caller wanted. Is there a proper way to handle this? Obviously I can intentionally delay or yield the background processing thread so that a smaller number might arrive each time, but this doesn't feel like the right solution to me.

    Read the article

  • iPhone: How to Determine Average Light/Dark of an Area of an UIImage

    - by TechZen
    I need to place labels with a transparent background over a variable-content UIImage. Readability will vary significantly depending on the relationship between the color of the label's text and the color/luminosity of the area of the image displayed under the label. Since the image will be constantly changing, the color of the label's text needs to change in sync. I have found several techniques for determining the color, perceived luminosity, etc. of a single pixel. However, I need to rather quickly (while a view loads) determine the rough perceived color/luminosity of an area of the UIImage under the frame of the UILabel. I presume I will also need to measure the alpha, because the same color/luminosity looks different at different alpha values. Is there a way to calculate such a value for an area? Will I be reduced to simply summing pixels? If it comes to that, is there an algorithm to accomplish this? I've thought of two possible approaches: (1) perform some "folding" operations, i.e. combining pixels from one half of the area into the other half, then repeat until I get a single value (would this be practical, and how would you logically combine pixels to average their perceived color/luminosity?); (2) sample a statistically significant number of pixels in the area and then combine them (somehow) to get a rough measure. I think this problem comes up a lot these days with people being so fond of customizing backgrounds. Seems like something that would be worth my time to bang out a category or class to handle this and then share it around.
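
    A sketch of the sampling idea (approach 2 above), assuming the label's frame has already been converted to pixel coordinates and the image is available as an 8-bit RGBA buffer (e.g. one you rendered the UIImage into); written in C++ for brevity, and the 0.299/0.587/0.114 weights are the usual Rec. 601 luma factors:

        #include <cstddef>
        #include <cstdint>

        // Stride through the rectangle under the label and average a perceived-luminance
        // value, weighting each pixel by its alpha so transparent pixels count for little.
        // Returns a value from 0 (dark) to 255 (bright).
        double averageLuminance(const std::uint8_t* rgba, std::size_t imageWidth,
                                std::size_t x, std::size_t y,
                                std::size_t w, std::size_t h,
                                std::size_t step = 4) {
            double sum = 0.0, samples = 0.0;
            for (std::size_t row = y; row < y + h; row += step) {
                for (std::size_t col = x; col < x + w; col += step) {
                    const std::uint8_t* p = rgba + 4 * (row * imageWidth + col);
                    double luma = 0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2];
                    sum += luma * (p[3] / 255.0);
                    samples += 1.0;
                }
            }
            return samples > 0.0 ? sum / samples : 0.0;
        }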

    Read the article

  • Uniquing with Existing Core Data Entities

    - by warrenm
    I'm using Core Data to store a lot (thousands) of items. A pair of properties on each item is used to determine uniqueness, so when a new item comes in, I compare it against the existing items before inserting it. Since the incoming data is in the form of an RSS feed, there are often many duplicates, and the cost of the uniquing step is O(N^2), which has become significant. Right now, I create a set of the existing items before iterating over the list of (possibly) new items. My theory is that on the first iteration all the items will be faulted in and, assuming we aren't pressed for memory, most of those items will remain resident over the course of the iteration. I see my options thusly: (1) use string comparison for uniquing, iterating over all "new" items and comparing to all existing items (the current approach); (2) use a predicate to filter the set of existing items against the properties of the "new" items; (3) use a predicate with Core Data to determine the uniqueness of each "new" item (without retrieving the set of existing items). Is option 3 likely to be faster than my current approach? Do you know of a better way?
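
    As an aside, the reason a prefetched in-memory set of the existing key pairs can beat the O(N^2) scan is that it turns uniquing into roughly O(N + M): hash the two identifying properties of every existing item once, then each incoming item is a single membership test. Illustrated in C++ rather than Objective-C, with Item and the key scheme as placeholders:

        #include <string>
        #include <unordered_set>
        #include <vector>

        struct Item { std::string guid, link; };   // stand-ins for the two uniquing properties

        static std::string key(const Item& it) {
            return it.guid + '\x1f' + it.link;     // combine the pair into one hashable key
        }

        std::vector<Item> onlyNew(const std::vector<Item>& existing,
                                  const std::vector<Item>& incoming) {
            std::unordered_set<std::string> seen;
            for (const Item& it : existing) seen.insert(key(it));

            std::vector<Item> fresh;
            for (const Item& it : incoming) {
                if (seen.insert(key(it)).second)   // also drops duplicates within the feed itself
                    fresh.push_back(it);
            }
            return fresh;
        }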

    Read the article

  • Visitor Pattern can be replaced with Callback functions?

    - by getit
    Is there any significant benefit to using either technique? In case there are variations, the Visitor pattern I mean is this: http://en.wikipedia.org/wiki/Visitor_pattern And below is an example of using a delegate to achieve the same effect (at least I think it is the same). Say there is a collection of nested elements: Schools contain Departments which contain Students. Instead of using the Visitor pattern to perform something on each collection item, why not use a simple callback (an Action delegate in C#)? Say something like this:

        class Department
        {
            List<Student> Students;
        }

        class School
        {
            List<Department> Departments;

            void VisitStudents(Action<Student> actionDelegate)
            {
                foreach(var dep in this.Departments)
                {
                    foreach(var stu in dep.Students)
                    {
                        actionDelegate(stu);
                    }
                }
            }
        }

        School A = new School();
        ... //populate collections
        A.VisitStudents((student) =>
        {
            ...Do Something with student...
        });

    EDIT: Example with a delegate accepting multiple params. Say I wanted to pass both the student and the department; I could modify the Action definition to Action<Student, Department> like so:

        class School
        {
            List<Department> Departments;

            void VisitStudents(Action<Student, Department> actionDelegate, Action<Department> d2)
            {
                foreach(var dep in this.Departments)
                {
                    d2(dep); // This performs a different process.
                             // Using the Visitor pattern would avoid having to keep adding new delegates.
                             // This looks like the main benefit so far.
                    foreach(var stu in dep.Students)
                    {
                        actionDelegate(stu, dep);
                    }
                }
            }
        }

    Read the article

< Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >