Search Results

Search found 7821 results on 313 pages for 'high dpi'.


  • Quickest algorithm for finding sets with high intersection

    - by conradlee
    I have a large number of user IDs (integers), potentially millions. These users all belong to various groups (sets of integers), and there are on the order of 10 million groups. To simplify my example and get to the essence of it, let's assume that all groups contain 20 user IDs (i.e., every integer set has a cardinality of 20). I want to find all pairs of integer sets that have an intersection of 15 or greater. Should I compare every pair of sets? (If I keep a data structure that maps user IDs to set membership, this would not be necessary.) What is the quickest way to do this? That is, what should my underlying data structure be for representing the integer sets? Sorted sets, unsorted sets - can hashing somehow help? And what algorithm should I use to compute the set intersections? I prefer answers that relate to C/C++ (especially the STL), but more general algorithmic insights are also welcome. Update: Also, note that I will be running this in parallel in a shared-memory environment, so ideas that cleanly extend to a parallel solution are preferred.
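
    One common way to avoid comparing every pair of sets directly is to build an inverted index from user ID to the groups containing it, and then count co-occurrences per group pair; only pairs whose count reaches the threshold are candidates. Below is a minimal Python sketch of that idea (the question asks for C/C++, so this only illustrates the algorithm; the groups input and the threshold of 15 are placeholders taken from the question):

      from collections import defaultdict
      from itertools import combinations

      def high_intersection_pairs(groups, threshold=15):
          # groups: dict mapping group id -> set of user ids (placeholder input)
          # Inverted index: user id -> list of group ids that contain it
          member_of = defaultdict(list)
          for gid, users in groups.items():
              for uid in users:
                  member_of[uid].append(gid)

          # For each user, every pair of its groups shares that one member,
          # so summing over users yields the intersection size per group pair.
          pair_counts = defaultdict(int)
          for gids in member_of.values():
              for a, b in combinations(sorted(gids), 2):
                  pair_counts[(a, b)] += 1

          return [(a, b, n) for (a, b), n in pair_counts.items() if n >= threshold]

    Only group pairs that actually share at least one member ever appear in pair_counts, which for mostly disjoint groups is far smaller than the full set of pairs; the same counting scheme parallelises by sharding member_of on user ID and merging the partial counts.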

    Read the article

  • Usage of Maven (and open source in general) in high-governance and risk-averse large organizations

    - by bart
    Does anyone have any good stories of these kinds of organizations being open to using open source (such as tools like Maven etc.)? Many staff I've encountered have little or no exposure to open source/systems, and open source is treated with great suspicion. Some reasons given for this are lack of support and robustness, which is ironic given the number of end-of-life, unsupported vendor products that are in production. Bonus points for any success stories where you've seen open source go into orgs like this and deliver a real benefit!

    Read the article

  • Storing high precision latitude/longitude numbers in iOS Core Data

    - by Bryan
    I'm trying to store latitudes/longitudes in Core Data. These end up having anywhere from 6 to 20 digits of precision, and for whatever reason, when I had them as floats in Core Data, it was rounding them and not giving me the exact values back. I tried the "decimal" type with no luck either. Are NSStrings my only other option? EDIT NSManagedObject: @interface Event : NSManagedObject { } @property (nonatomic, retain) NSDecimalNumber * dec; @property (nonatomic, retain) NSDate * timeStamp; @property (nonatomic, retain) NSNumber * flo; @property (nonatomic, retain) NSNumber * doub; Here's the code for a sample number that I store into Core Data: NSNumber *n = [NSDecimalNumber decimalNumberWithString:@"-97.12345678901234567890123456789"]; Code to access it again: NSNumber *n = [managedObject valueForKey:@"dec"]; NSNumber *f = [managedObject valueForKey:@"flo"]; NSNumber *d = [managedObject valueForKey:@"doub"]; Printed values: Printing description of n: -97.1234567890124 Printing description of f: <CFNumber 0x603f250 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type} Printing description of d: <CFNumber 0x6040310 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type}
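
    For context, an IEEE 754 double carries only about 15-17 significant decimal digits, so a 20-plus digit coordinate cannot round-trip through a float/double attribute no matter which Core Data numeric type is picked; storing the text (or a scaled integer) preserves it exactly. A small illustrative check of the precision limit (Python used here purely as a calculator, nothing Core Data specific):

      from decimal import Decimal

      s = "-97.12345678901234567890123456789"

      as_double = float(s)       # what any double-backed attribute effectively stores
      as_decimal = Decimal(s)    # exact, arbitrary-precision representation
      as_text = s                # round-trips exactly if stored as a string

      print(repr(as_double))     # -97.12345678901235  (precision lost after ~16 digits)
      print(as_decimal)          # -97.12345678901234567890123456789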

    Read the article

  • Important question about LINQ to SQL performance in high-load web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to work slower, but it surprisingly turned out to be pretty fast, primarily because I always forgot to close my connections when using data readers. Now I don't have to worry about it. But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data in the beginning, works with it and updates it. Primarily the updates are ++ and -- (increase and decrease values). I used to do it like this: UPDATE table SET value=value+1 WHERE ID=@Id It worked with no problems, obviously. But with LINQ to SQL the data is taken in the beginning, moved to the class, changed and then saved: Stats.RegisteredUsers++; db.SubmitChanges(); Let's say there were 100 000 users. LINQ will say "let it be 100 001" instead of "let it be increased by 1". But if the value has already been increased by another request (that happens on my site all the time), then LINQ will say "oops, this value is already 100 001 - I'll throw an exception". You can change this behavior so that it won't throw an exception, but it still will not set the value to 100 002. Like I said, it happened to me all the time; the stats value was increased about twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?
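
    One way out, hedged since it bypasses the ORM's change tracking: keep the relative UPDATE for counters instead of the read-modify-write cycle, so the database applies the increment atomically and concurrent requests cannot clobber each other; in LINQ to SQL that is typically done by issuing the SQL through DataContext.ExecuteCommand. A Python DB-API sketch of the same idea, with a hypothetical conn and the stats/registeredusers names assumed from the question:

      def increment_registered_users(conn, delta=1):
          # Relative update: the database applies "+ delta" atomically, so
          # concurrent requests never overwrite each other's increments.
          cur = conn.cursor()
          cur.execute(
              "UPDATE stats SET registeredusers = registeredusers + %s",
              (delta,),
          )
          cur.close()
          conn.commit()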

    Read the article

  • Recommendations for a high-volume log event viewer in a Java environment

    - by Thorbjørn Ravn Andersen
    I am in a situation where I would like to accept a LOT of log events controlled by me - notably from the logging agent I am preparing for slf4j - and then analyze them interactively. I am not as such interested in a facility that presents formatted log files, but one that can accept log events as objects and allow me to sort and display on e.g. threads and timelines etc. Chainsaw could maybe be an option but is currently not compatible with logback, which I use for technical reasons. Is there any project with stand-alone viewers, or embedded in an IDE, which would be suitable for this kind of log handling? I am aware that I am approaching what might be suitable for a profiler, so if there is a profiler project suitable for this kind of data acquisition and display where I can feed the event pipe, I would like to hear about it. Thanks for all feedback. Update 2009-03-19: I have found that there is not a log viewer which allows me to see what I would like (a visual display of events with coordinates determined by day and time, etc.), so I have decided to create a very terse XML format derived from the log4j XMLLayout, adapted to be as readable as possible while still consisting of valid XML snippets, and then use the Microsoft LogParser to extract the information I need for postprocessing in other tools.

    Read the article

  • MS SQL Server high Resource Waits and Head Blocker

    - by MartinHN
    Hi, I have an MS SQL Server 2008 Standard installation running a database for a webshop. The current size of the database is 2.5 GB, running on Windows 2008 Standard, dual Intel Xeon X5355 @ 2.00 GHz, 4 GB RAM. When I open the Activity Monitor, I see that I have a Wait Time (ms/sec) of 5000 in the "Other" category. In the Processes list, for all connections from the webshop, the Head Blocker value is 1. I see every day that when I try to access the website, it can take 20-30 secs before it even starts to "work". I know that it is not network latency (I have a 301 redirect from the same server that is executed instantly). Once the first request has been served, it seems as if the server is not asleep anymore and every subsequent request is served instantly, with the speed of light. The problem was worse two weeks ago, until I changed every query to include WITH (NOLOCK). But I still experience the problem, and the wait times in the Activity Monitor are about the same. The largest table (Images) has 32764 rows (448576 KB). Some tables exceed 300000 rows, though they're much smaller in size than the Images table. I have only the default clustered index on each primary key column. Any ideas?

    Read the article

  • Large Y-axis tickInterval in Highcharts does not work

    - by ckovacs
    I have a chart at this JSFiddle to demonstrate a problem where our charts are not respecting the y-axis tick interval for large values: http://jsfiddle.net/z2cDu/1/ var plots = {"usBytePlots":[[1362009600000,143663192997],[1362096000000,110184848742],[1362182400000,97694974247],[1362268800000,90764690805],[1362355200000,112436517747],[1362441600000,113563368701],[1362528000000,139579327454],[1362614400000,118406594506],[1362700800000,125366899935],[1362787200000,134189435596],[1362873600000,132873135854],[1362960000000,121002328604],[1363046400000,123138222001],[1363132800000,115667785553],[1363219200000,103746172138],[1363305600000,108602633473],[1363392000000,89133998142],[1363478400000,92170701458],[1363564800000,86696922873],[1363651200000,80980159054],[1363737600000,97604615694],[1363824000000,108011666339],[1363910400000,124419138381],[1363996800000,121704988344],[1364083200000,124337959109],[1364169600000,137495512348],[1364256000000,136017103319],[1364342400000,60867510427]],"dsBytePlots":[[1362009600000,1734982247336],[1362096000000,1471928923201],[1362182400000,1453869593201],[1362268800000,1411787942581],[1362355200000,1460252447519],[1362441600000,1595590020177],[1362528000000,1658007074783],[1362614400000,1411941908699],[1362700800000,1447659369450],[1362787200000,1643008799861],[1362873600000,1792357973023],[1362960000000,1575173242169],[1363046400000,1565139003978],[1363132800000,1549211975554],[1363219200000,1438411448469],[1363305600000,1380445413578],[1363392000000,1298319283929],[1363478400000,1194578344720],[1363564800000,1211409679299],[1363651200000,1142416351471],[1363737600000,1223822672626],[1363824000000,1267692136487],[1363910400000,1384335759541],[1363996800000,1577205919828],[1364083200000,1675715948928],[1364169600000,1517593781592],[1364256000000,1562183018457],[1364342400000,681007264598]],"aggregatedTotalBytes":43476367948896,"aggregatedUsBytes":3150320403841,"aggregatedDsBytes":40326047545055,"maxTotalBytes":328186292129,"maxTotalBitsPerSecond":30387619.641574074} ; $('#container').highcharts({ yAxis: { tickInterval: 53687091200 // 500 gigabytes. Maximum y-axis value is approx 1.8TB }, series : [ { color: 'rgba(80, 180, 77, 0.7)', type: 'areaspline', name : 'Downstream', data : plots.dsBytePlots, total: plots.aggregatedDsBytes }, { color: 'rgba(33, 143, 197, 0.7)', type: 'areaspline', name : 'Upstream', data : plots.usBytePlots, total: plots.aggregatedUsBytes }] }); In this example we are charting bandwidth utilization in bytes. The chart has a maximum value of about 1.8TB. We set the y-axis tick interval to exactly 500GB but the rendered y-axis ticks don't make any sense for the given interval.

    Read the article

  • memory usage in C# (.NET) app is very high, until I call System.GC.Collect()

    - by Chris Gray
    I've written an app that spins up a few threads, each of which reads several MB of data into memory. Each thread then connects to the Internet and uploads the data. This occurs thousands of times, and each upload takes some time. I'm seeing a situation (verified with windbg/sos and !dumpheap) where the Byte[] arrays are not getting collected automatically, causing 100-150 MB of memory to be reported in Task Manager. If I call System.GC.Collect() I see a huge drop in memory, a drop of over 100 MB. I don't like calling System.GC.Collect(), and my PC has tons of free memory. However, if anyone looks at Task Manager they're going to be concerned, thinking my app is leaking horribly. Tips?

    Read the article

  • High runtime for Dictionary.Add for a large number of items

    - by aaginor
    Hi folks, I have a C# application that stores data from a text file in a Dictionary object. The amount of data to be stored can be rather large, so it takes a lot of time to insert the entries. With many items in the Dictionary it gets even worse, because of the resizing of the internal array that stores the data for the Dictionary. So I initialized the Dictionary with the number of items that will be added, but this has no impact on speed. Here is my function: private Dictionary<IdPair, Edge> AddEdgesToExistingNodes(HashSet<NodeConnection> connections) { Dictionary<IdPair, Edge> resultSet = new Dictionary<IdPair, Edge>(connections.Count); foreach (NodeConnection con in connections) { ... resultSet.Add(nodeIdPair, newEdge); } return resultSet; } In my tests, I insert ~300k items. I checked the running time with ANTS Performance Profiler and found that the average time for resultSet.Add(...) doesn't change when I initialize the Dictionary with the needed size. It is the same as when I initialize the Dictionary with new Dictionary(); (about 0.256 ms on average for each Add). This is definitely caused by the amount of data in the Dictionary (ALTHOUGH I initialized it with the desired size). For the first 20k items, the average time for Add is 0.03 ms for each item. Any idea how to make the Add operation faster? Thanks in advance, Frank
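
    A possible suspect, offered only as a hypothesis since the snippet does not show it: the hash quality of IdPair. If GetHashCode produces many collisions, every Add degrades toward a linear scan of the colliding entries, and pre-sizing the dictionary cannot help. The effect reproduces in any hash table; a small Python illustration with a deliberately bad hash:

      import time

      class GoodKey:
          def __init__(self, a, b):
              self.a, self.b = a, b
          def __eq__(self, other):
              return (self.a, self.b) == (other.a, other.b)
          def __hash__(self):
              return hash((self.a, self.b))   # spreads keys across buckets

      class BadKey(GoodKey):
          def __hash__(self):
              return 42                       # every key collides

      def time_inserts(key_type, n=2000):
          d = {}
          start = time.perf_counter()
          for i in range(n):
              d[key_type(i, i + 1)] = i       # analogous to Dictionary.Add
          return time.perf_counter() - start

      # The bad hash is dramatically slower, and it gets worse as the table grows.
      print("good hash:", time_inserts(GoodKey))
      print("bad hash: ", time_inserts(BadKey))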

    Read the article

  • CSS font-size causing the last line to be too high

    - by tster
    OK, I have a list (<ul>), and inside each <li> element I have an <a...>. Here are all the CSS rules applicable to the <a> tag: .search_area li a { font-size:11px; } sResCntr li { list-style-type:none; } body { font-family:Arial; } Everything looked great until I put that font-size:11px in there. The problem is that the hyperlinks wrap to multiple lines within the list (which is fine), but when I decrease the font-size, the last line of the hyperlink always has a larger gap between it and the line above it than the other lines do. All the other lines look good, but the last line looks like it is 1.5-spaced or something. I have adjusted the line-height property, but the last line is always taller than the rest. If you need a demo to look at to see what I mean, I can arrange it when I get home.

    Read the article

  • LINQ to SQL performance in high-load web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to work slower, but it surprisingly turned out to be pretty fast, primarily because I always forgot to close my connections when using data readers. Now I don't have to worry about it. But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data in the beginning, works with it and updates it. Primarily the updates are ++ and -- (increase and decrease values). I used to do it like this: UPDATE table SET value=value+1 WHERE ID=@Id It worked with no problems, obviously. But with LINQ to SQL the data is taken in the beginning, moved to the class, changed and then saved: Stats.RegisteredUsers++; db.SubmitChanges(); Let's say there were 100 000 users. LINQ will say "let it be 100 001" instead of "let it be increased by 1". But if the value has already been increased by another request (that happens on my site all the time), then LINQ will say "oops, this value is already 100 001 - I'll throw an exception". You can change this behavior so that it won't throw an exception, but it still will not set the value to 100 002. Like I said, it happened to me all the time; the stats value was increased about twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?

    Read the article

  • High performance querying - Suggestions please

    - by Alex Takitani
    Supposing that I have millions of user profiles, with hundreds of fields (name, gender, preferred pet and so on...). You want to make searches on profiles, e.g. all profiles that have an age between x and y, love butterflies and hate chocolate... Which database would you choose? Suppose that you have a Facebook-like load. Speed is a must. Open source preferred. I've read a lot about Cassandra, HBase, Mongo, MySQL... I just can't decide...

    Read the article

  • High performance querying - Suggestions please

    - by Alex Takitani
    Supposing that I have millions of user profiles, with hundreds of fields (name, gender, preferred pet and so on...). Which database would you choose? Suppose that you have a Facebook-like load. Speed is a must. Open source preferred. I've read a lot about Cassandra, HBase, Mongo, MySQL... I just can't decide...

    Read the article

  • OpenGL: Textured Primitives + High Framerate

    - by James D
    Short version: what's the best practice going forward for efficiently rendering large numbers of independent texture-mapped, lighted 2D/3D primitives (circles, rects, etc.) in OpenGL? For example: a typical particle system using billboarded quads/triangles, point sprites, or whatever other technique, with blending. I ask because after reading this thread on the messiness of OpenGL versioning/deprecation I'm starting to have my doubts. My specific question is not the ABCs of displaying primitives in OpenGL, but rather how to do so efficiently in post-deprecation (or pre-deprecation) OpenGL, in a way that's going to be compatible with a wide range of commodity hardware and that's not going to break or itself get deprecated five years down the line. Thanks!

    Read the article

  • CEIL is one too high for exact integer divisions

    - by Synetech
    This morning I lost a bunch of files, but because the volume they were on was both internally and externally defragmented, all of the information necessary for a 100% recovery is available; I just need to fill in the FAT where required. I wrote a program to do this and tested it on a copy of the FAT that I dumped to a file, and it works perfectly except that for a few of the files (17 out of 526), the FAT chain is a single cluster too long, and thus cross-linked with the next file. Fortunately I know exactly what the problem is. I used ceil in my EOF calculation because even a single byte over will require a whole extra cluster: //Cluster is the starting cluster of the file //Size is the size (in bytes) of the file //BPC is the number of bytes per cluster //NumClust is the number of clusters in the file //EOF is the last cluster of the file's FAT chain DWORD NumClust = ceil( (float)(Size / BPC) ); DWORD EOF = Cluster + NumClust; This algorithm works fine for everything except files whose size happens to be an exact multiple of the cluster size, in which case they end up one cluster too long. I thought about it for a while but am at a loss as to how to do this. It seems like it should be simple but somehow it is surprisingly tricky. What formula would work for files of any size?
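
    For reference, the usual all-integer idiom sidesteps both the float conversion and the exact-multiple case: round up with (Size + BPC - 1) / BPC, which adds a cluster for a partial tail but not for an exact multiple, and remember that the last cluster of the chain is the starting cluster plus the count minus one. A small Python sketch with names mirroring the question (zero-byte files would need special-casing):

      def fat_chain_end(cluster, size, bpc):
          # Integer ceiling division: rounds up for a partial last cluster
          # without adding an extra one when size is an exact multiple.
          num_clust = (size + bpc - 1) // bpc
          # The chain starts at cluster, so its last cluster is num_clust - 1 later.
          return cluster + num_clust - 1

      assert fat_chain_end(cluster=100, size=1, bpc=4096) == 100      # 1 cluster
      assert fat_chain_end(cluster=100, size=4096, bpc=4096) == 100   # exact multiple
      assert fat_chain_end(cluster=100, size=4097, bpc=4096) == 101   # spills over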

    Read the article

  • High CPU in IIS

    - by Miki Watts
    Hi all. I'm developing a POS application that has a local database on each POS computer and communicates with the server using WCF hosted in IIS. The application has been deployed at several customers for over a year now. About a week ago, we started getting reports from one of our customers that the server the IIS instance is hosted on is very slow. When I checked the issue, I saw the application pool with my process rocket to almost 100% CPU on an 8-CPU server. I checked the SQL Activity Monitor and network volume, and they showed no significant overload beyond what we usually see. When checking the threads in Process Explorer, I saw lots of threads repeatedly calling CreateApplicationContext. I tried installing .NET 2.0 SP1, according to some posts I found on the net, but it didn't solve the problem; it just replaced the function calls with CLRCreateManagedInstance. I'm about to capture a dump of the IIS processes using adplus and windbg and try to figure out what's wrong. Has anyone encountered something like this, or has an idea which direction I should check? P.S. The same version of the application is deployed at another customer, and there it works just fine. I also tried rolling back versions (even very old versions) and it still behaves exactly the same. Edit: well, problem solved. It turns out I had a SQL query in there that didn't limit the result set, and when the customer went over a certain number of rows, it started bogging down the server. It took me two days to find because of all the surrounding noise in the logs, but I waited for the night and took a dump then, which immediately showed me the query.

    Read the article

  • Partitioning requests in code among several servers

    - by Jacques René Mesrine
    I have several forum servers (what they are is irrelevant) which store posts from users, and I want to be able to partition requests among these servers. I'm currently leaning towards partitioning them by geographic location. To improve the locality of data, users will be separated into regions, e.g. North America, South America and so on. Is there any design pattern for implementing the function that maps the partitioning property to the server, so that this piece of code has high availability and does not become a single point of failure? f( Region ) -> Server IP
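
    One hedged sketch of the lookup-table pattern: keep the region-to-servers mapping in replicated configuration (or a small replicated store), and have the calling code resolve f(Region) against an ordered list of candidates, falling over to the next entry when a health check fails, so no single mapping host becomes the point of failure. Illustrative Python with made-up addresses and a stub health check:

      REGION_SERVERS = {
          # Ordered candidate lists per region (example addresses only).
          "north-america": ["10.0.1.10", "10.0.1.11"],
          "south-america": ["10.0.2.10", "10.0.2.11"],
          "europe":        ["10.0.3.10", "10.0.3.11"],
      }

      def is_healthy(ip):
          # Stub: in practice a cached health probe or load-balancer status,
          # not a per-request network call.
          return True

      def server_for(region):
          # f(Region) -> Server IP, with failover to the next candidate.
          for ip in REGION_SERVERS.get(region, []):
              if is_healthy(ip):
                  return ip
          raise RuntimeError("no healthy server for region %r" % region)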

    Read the article

  • Implementing a scalable and high-performing web app

    - by Christopher McCann
    I have asked a few questions on here before about various things relating to this, but this is more of a consolidation question, as I would like to check I have got the gist of everything. I am in the middle of developing a social media web app, and although I have a lot of experience coding in Java and in PHP, I am trying things a bit differently this time. I have modularised each component of the application. So, for example, one component of the application allows users to privately message each other, and I have split this off into its own private messaging service. I have also created a user data service, the purpose of which is to return data about the user, for example their name, address, age etc. from the database. There is also another service, the friends service, which will work off the neo4j database to create a social graph. My reason for doing all this is to allow me to update separate modules when I need to - so while they mostly all run off MySQL right now, I could move one to Cassandra later if I thought it appropriate. The actual code of the web app is really just used for the final construction. The modules behind it don't really follow any strict REST or SOAP protocol. Basically, each method on our API is turned into a PHP procedural script. This may then make calls to other back-end code, which tends to be OO. The web app makes cURL requests to these pages and POSTs data to them or GETs data from them. These pages then return JSON where data is required. I'm still a little mixed up about how I actually identify which user is logged in at that moment. Do I just use sessions for that? For example, if we called the get-messages.php script, which equates to the getMessages() method for that user - returning all the private messages for that user - how would the back-end code know which user it is, since POSTing the user's ID to the script would not be secure? Anyone could do that and get all the messages. So I thought I would use sessions for it. Am I correct on this? Can anyone spot any other problems with what I am doing here? Thanks
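
    On the session question: the usual pattern is that the user ID never travels in the request at all; it is stored server-side against the session ID at login, and get-messages.php (or its equivalent) resolves the current user from the session before calling the messaging service. A schematic Python sketch of that flow, with hypothetical names since the real code is PHP and the session store would normally be memcached/Redis/a database rather than a dict:

      SESSIONS = {}   # session id (from the cookie) -> user id

      def login(session_id, user_id):
          SESSIONS[session_id] = user_id

      def get_messages(session_id, messaging_service):
          # The client only presents its opaque session cookie; the user id is
          # resolved server-side, so it cannot be forged by POSTing someone else's id.
          user_id = SESSIONS.get(session_id)
          if user_id is None:
              raise PermissionError("not logged in")
          return messaging_service.messages_for(user_id)   # hypothetical service call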

    Read the article

  • How to display many SVGs in Java with high performance

    - by Oak
    What I want: my goal is to be able to display a large number of SVG images on a single drawing area in Java, each with its own translation/rotation/scale values. I'm looking for the simplest solution allowing this, optionally even using OpenGL to speed things up. What I've tried: my initial naive approach was to use SVGSalamander to draw directly on a JPanel, but the performance was pathetic. I poked around and learned that I should do something like manually convert each SVG into a BufferedImage created with createCompatibleImage, then do the transformations I want, then draw it using double buffering. I ran into some trouble here, and before I continued I tried looking for frameworks to simplify things. What I've looked at: I've been a bit overwhelmed by the available options, which is why I'm turning to SO for help. I've looked at: Cairo (with Glitz maybe?); Libart - not sure if this actually supports SVGs; FengGUI; Slick - looks promising but a bit of an overkill. But I couldn't decide which is best for me to start working with, and I hope someone here has experience with any of these doing similar things.

    Read the article

  • Tuning the JVM (GC) for a highly responsive server application

    - by elgcom
    I am running an application server on 64-bit Linux with 8 CPU cores and 6 GB of memory. The server must be highly responsive. After some inspection I found that the application running on the server creates a rather huge number of short-lived objects, and has only about 200-400 MB of long-lived objects (as long as there is no memory leak). After reading http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html I use these JVM options: -Xms2g -Xmx2g -XX:MaxPermSize=256m -XX:NewRatio=1 -XX:+UseConcMarkSweepGC Result: the minor GC takes 0.01-0.02 sec, the major GC takes 1-3 sec, and the minor GC happens constantly. How can I further improve or tune the JVM? A larger heap size? But will GC then take more time? Larger NewSize and MaxNewSize (for the young generation)? Another collector? Parallel GC? Is it a good idea to let major GC take place more often? And how?

    Read the article

  • MySQL Rank Not Matching High Score in Table

    - by boddie
    While making a game, the MySQL call to get the top 10 is as follows: SELECT username, score FROM userinfo ORDER BY score DESC LIMIT 10 This seems to work decently enough, but when paired with a call to get an individual player's rank, the numbers may differ if the player has a score tied with other players. The call to get the player's rank is as follows: SELECT (SELECT COUNT(*) FROM userinfo ui WHERE (ui.score, ui.username) >= (uo.score, uo.username)) AS rank FROM userinfo uo WHERE username='boddie'; Example results from the first call: +------------+-------+ | username | score | +------------+-------+ | admin | 4878 | | test3 | 3456 | | char | 785 | | test2 | 456 | | test1 | 253 | | test4 | 78 | | test7 | 0 | | boddie | 0 | | Lz | 0 | | char1 | 0 | +------------+-------+ Example results from the second call: +------+ | rank | +------+ | 10 | +------+ As can be seen, the first call ranks the player at number 8 on the list, but the second call puts him at number 10. What changes can I make, or what can I do, to make these calls give matching results? Thank you in advance for any help!
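
    The mismatch comes from tie-breaking: the rank subquery's row comparison (ui.score, ui.username) >= (uo.score, uo.username) effectively orders players by score and then by username, while the top-10 query orders by score alone, so tied players can appear in a different order. Making both queries use the same tie-breaker should reconcile them; a hedged sketch, with Python DB-API code used only as a carrier for the SQL and a hypothetical conn:

      TOP10_SQL = """
          SELECT username, score
          FROM userinfo
          ORDER BY score DESC, username DESC   -- same tie-breaker as the rank query
          LIMIT 10
      """

      def top10(conn):
          cur = conn.cursor()
          cur.execute(TOP10_SQL)
          rows = cur.fetchall()
          cur.close()
          return rows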

    Read the article

  • Stuttering animation in iPhone OpenGL ES although fps is high

    - by guymic
    I am building a 2D OpenGL ES application for iPad. It displays a background texture and numerous textures on top of it which are always in motion. Every frame their locations are recalculated based on the time delta and speed, and the entire thing is rendered at 60 fps successfully, but still, as the movement speed of the sprites rises, things look stuttery. Any ideas? Are there inherent problems with what I'm doing? Are there known design patterns for smooth animation?
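
    A common cause of visible stutter even at a steady 60 fps is stepping positions by the raw per-frame delta, which amplifies timer jitter; one standard remedy is a fixed simulation timestep with render-time interpolation. A language-neutral sketch of that loop (Python pseudocode; state, update, render and running are placeholders for the app's own physics and drawing code):

      import time

      DT = 1.0 / 120.0   # fixed simulation step; rendering stays at ~60 fps

      def game_loop(state, update, render, running):
          # update(state, dt) -> new state; render(prev, curr, alpha) draws a
          # blend of the two most recent states; running() says when to stop.
          prev = curr = state
          last = time.perf_counter()
          accumulator = 0.0
          while running():
              now = time.perf_counter()
              accumulator += now - last
              last = now
              while accumulator >= DT:          # consume real time in fixed steps
                  prev, curr = curr, update(curr, DT)
                  accumulator -= DT
              alpha = accumulator / DT          # fraction of the way from prev to curr
              render(prev, curr, alpha)         # draw positions interpolated by alpha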

    Read the article

  • Appropriate high-level language to deal with binary data

    - by fortran
    Hi, I need to write a small tool that parses textual input and generates some binary-encoded data. I would prefer to stay away from C and the like, in favour of a higher-level, (optionally) safer, more expressive and faster-to-develop language. My language of choice for this kind of task is usually Python, but in this case dealing with raw binary data can be problematic if one isn't very careful about numbers being promoted to bignums, sign extension and such. Ideally I would like to have records with named bitfields that can be serialised portably and in a consistent manner. (I know that there's a strong point in doing it in a language I already master, although it isn't optimal, but I think this could be a good opportunity to learn something new.) Thanks.
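
    For what it's worth, Python's standard struct module already pins down field sizes and endianness, and named bitfields can be layered on top by assembling them into an integer before packing; a small hedged sketch (the record layout, field names and widths below are invented purely for illustration):

      import struct

      # Example layout: 4-bit version, 1-bit flag and 11-bit length packed into
      # one big-endian 16-bit word, followed by a signed 32-bit value.
      def pack_record(version, flag, length, value):
          if not (0 <= version < 16 and flag in (0, 1) and 0 <= length < 2048):
              raise ValueError("field out of range")
          header = (version << 12) | (flag << 11) | length   # assemble the bitfields
          return struct.pack(">Hi", header, value)           # fixed sizes, explicit endianness

      def unpack_record(blob):
          header, value = struct.unpack(">Hi", blob)
          return (header >> 12) & 0xF, (header >> 11) & 0x1, header & 0x7FF, value

      blob = pack_record(version=2, flag=1, length=300, value=-42)
      print(unpack_record(blob))   # (2, 1, 300, -42)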

    Read the article

  • Highly concurrent request server in Ruby

    - by WedTM
    I'm trying to write a simple server that will grab an MP3 file from Rackspace Cloud Files and stream it to a client over HTTP. The server must be able to stream to multiple clients simultaneously; however, I'm finding it difficult to come up with a viable solution. Anyone have some ideas?

    Read the article
