Search Results

Search found 31421 results on 1257 pages for 'software performance'.

Page 883/1257

  • Implementing Tagging using Core Data on the iPhone

    - by Jonathan Penn
    I have an application that uses Core Data and I'm trying to figure out the best way to implement tagging and filtering by tag. For my purposes, if I were doing this in raw SQLite I would only need three tables: tags, item_tags and, of course, my items table. Filtering would then be as simple as joining the three tables so that only items related to the given tags are returned. Quite straightforward. But is there a way to do this in Core Data while utilizing NSFetchedResultsController? It doesn't seem that NSPredicate gives you the ability to filter through joins. NSPredicates aren't full SQL anyway, so I'm probably barking up the wrong tree there. I'm trying to avoid reimplementing my app on raw SQLite without Core Data, since I'm enjoying the performance Core Data gives me in other areas. Yes, I did consider (and built a test implementation) diving into the raw SQLite that Core Data generates, but that's not future-proof and I want to avoid it, too. Has anyone else tried to tackle tagging/filtering with Core Data in a UITableView with NSFetchedResultsController?
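    For what it's worth, NSPredicate can traverse to-many relationships with the ANY aggregate operator, so the fetch behind the results controller might look roughly like the sketch below (it assumes Item and Tag entities joined by a many-to-many tags relationship; the entity and attribute names are illustrative, not the poster's actual model):

        // Sketch only: Item has a to-many "tags" relationship to a Tag entity with a "name" attribute.
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        request.entity = [NSEntityDescription entityForName:@"Item"
                                     inManagedObjectContext:context];
        request.predicate = [NSPredicate predicateWithFormat:@"ANY tags.name IN %@",
                             [NSArray arrayWithObjects:@"urgent", @"home", nil]];
        request.sortDescriptors = [NSArray arrayWithObject:
            [[NSSortDescriptor alloc] initWithKey:@"title" ascending:YES]];

        NSFetchedResultsController *frc =
            [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                                managedObjectContext:context
                                                  sectionNameKeyPath:nil
                                                           cacheName:nil];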

    Read the article

  • How do I view how many concurrent long polling requests there are on my server?

    - by Pascal
    My host is Joyent. My host says I have a 15-process limit, and prstat -J shows those processes, but that doesn't tell me how many long polling requests are currently being served. I could record it myself, but that would add a lot of performance overhead. I need to know when the server is at its long polling limits. I know this limit is reached well before the memory or CPU is used up. From experimentation, I've already verified that the number of long polls open is NOT equivalent to the number of processes running, probably because each process has multiple threads, each serving a request. Thanks.

    Read the article

  • Alternative to NOT EXISTS

    - by Dave Colwell
    Hi all, I have two tables linked by an ID column; let's call them Table A and Table B. My goal is to find all the records in Table A that have no record in Table B. For instance:

        Table A:
        ID----Value
        1-----value1
        2-----value2
        3-----value3
        4-----value4

        Table B:
        ID----Value
        1-----x
        2-----y
        4-----z
        4-----l

    As you can see, the record with ID = 3 does not exist in Table B, so I want a query that will give me record 3 from Table A. The way I am currently doing this is by saying AND NOT EXISTS (SELECT ID FROM TableB), but since the tables are huge, the performance of this is terrible. Also, when I tried using a LEFT JOIN where TableB.ID is null, it didn't work. Can anyone suggest an alternative?
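    Two standard anti-join forms, sketched against the tables above, are shown below. (Note that the NOT EXISTS in the question has no correlation back to Table A, which is probably why it misbehaves; how fast either form runs will depend on indexes on the ID columns.)

        -- correlated NOT EXISTS
        SELECT a.ID, a.Value
        FROM   TableA a
        WHERE  NOT EXISTS (SELECT 1 FROM TableB b WHERE b.ID = a.ID);

        -- LEFT JOIN anti-join
        SELECT a.ID, a.Value
        FROM   TableA a
        LEFT JOIN TableB b ON b.ID = a.ID
        WHERE  b.ID IS NULL;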

    Read the article

  • Giving Users an Option Between UDP & TCP?

    - by cam
    After studying the TCP/UDP difference all week, I just can't decide which to use. I have to send a large amount of constant sensor data, while at the same time sending important data that can't be lost. This seemed like a perfect case for using both, but then I read a paper (http://www.isoc.org/INET97/proceedings/F3/F3_1.HTM) that says using both causes packet/performance loss in the other. Is there any issue with letting the user choose which protocol to use (if I program both server-side) instead of choosing myself? Are there any disadvantages to this? The only other solution I came up with is to use UDP and, if there seems to be too great a loss of packets, switch to TCP (client-side).

    Read the article

  • Hit Testing with CALayer using the alpha properties of the CALayer contents.

    - by Charliehorse
    I'm writing a game for Mac using Cocoa. I'm currently implementing hit testing and have found that CALayer offers hit testing but does not seem to take the alpha channel of the layer's contents into account. As I sometimes have many CALayers stacked on top of each other, I really need to find a way to determine what the user actually meant to click on. I'm thinking that if I could somehow get an array containing pointers to all of the CALayers that contain the click point, I could filter through them somehow. However, the only way I have so far to create the array is:

        NSMutableArray *anArrayOfLayers = [NSMutableArray array];
        for (CALayer *aLayer in mapLayer.sublayers) {
            if ([aLayer containsPoint:mouseCoord])
                [anArrayOfLayers addObject:aLayer];
        }

    Then I sort the array by the CALayers' z-values and go through, checking whether the pixel at the location is alpha or not. However, between the sort and the alpha check this seems to be an incredible performance hog. (How would you even check the alpha?) Is there any way to do this?
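    One rough way to do the alpha check is to render the candidate layer into a one-pixel bitmap at the point of interest, as sketched below (untested; it assumes the point has already been converted into that layer's coordinate space):

        // Sketch: returns YES if the layer is non-transparent at point p (layer coordinates).
        - (BOOL)layer:(CALayer *)layer hasAlphaAtPoint:(CGPoint)p {
            unsigned char pixel[4] = {0, 0, 0, 0};
            CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
            CGContextRef ctx = CGBitmapContextCreate(pixel, 1, 1, 8, 4, space,
                                                     kCGImageAlphaPremultipliedLast);
            CGColorSpaceRelease(space);
            // Shift the context so the point of interest lands on the single pixel.
            CGContextTranslateCTM(ctx, -p.x, -p.y);
            [layer renderInContext:ctx];
            CGContextRelease(ctx);
            return pixel[3] > 0;   // alpha component of the rendered pixel
        }

    The candidate layers could then be sorted front to back (for example by zPosition) and tested in order until the first one passes.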

    Read the article

  • Multivalue Mysql Inserts using HibernateTemplate

    - by Langali
    I am using Spring's HibernateTemplate and need to insert hundreds of records into a MySQL database every second. I'm not sure what the most performant way of doing it is, but I am trying to see how multi-value MySQL inserts do under Hibernate.

        final String query = "insert into user(age, name, birth_date) values"
            + "(24, 'Joe', '2010-05-19 14:33:14'), (25, 'Joe1', '2010-05-19 14:33:14')";

        getHibernateTemplate().execute(new HibernateCallback() {
            public Object doInHibernate(Session session) throws HibernateException, SQLException {
                return session.createSQLQuery(query).executeUpdate();
            }
        });

    But I get this error: 'could not execute native bulk manipulation query.' Please check your query... Any idea if I can use a multi-value MySQL insert with Hibernate, or is my query incorrect? Are there any other ways I can improve the performance? I did try the saveOrUpdateAll() method, and that wasn't good enough!
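    If the native multi-row INSERT keeps being rejected, the batching idiom that Hibernate's own documentation suggests for bulk inserts is worth trying: save entities normally but flush and clear the session every N rows (a sketch, assuming hibernate.jdbc.batch_size is configured to roughly the same N and that User is a mapped entity; this is not the poster's code):

        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        for (int i = 0; i < users.size(); i++) {
            session.save(users.get(i));
            if (i % 50 == 0) {      // match hibernate.jdbc.batch_size
                session.flush();    // push the current batch of INSERTs to MySQL
                session.clear();    // keep the first-level cache from growing
            }
        }
        tx.commit();
        session.close();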

    Read the article

  • Threads to make video out of images

    - by masood
    Update: I suspect ImageIO is not thread safe: it is shared by all threads, and the read() call might use resources that are also shared. That would give the performance of a single thread no matter how many threads are used. If that's correct, what is the solution (in practical code)? A single request/response at a time does not utilize the full network/internet bandwidth, resulting in low performance (benchmarks show half the bandwidth utilized, or even lower). The goal is to make a video out of an IP cam that serves a new image on each request: http://149.5.43.10:8001/snapshot.jpg. There is a delay of 3 to 8 seconds no matter what I do. I have changed the number of threads and the thread time intervals, and debugged the code with System.out.println statements to check that the threads run; all seems normal. Any help? Please show some practical code; you may modify mine. The first snippet (JavaScript) works, with a much smoother frame rate and maximum bandwidth usage, but the later code (Java) doesn't: it has the same 3 to 8 second gap.

        <!DOCTYPE html>
        <html>
        <head>
        <script type="text/javascript">
        (function(){
            var img = "/*url*/";
            var interval = 50;
            var pointer = 0;
            function showImg(image, idx) {
                if (idx <= pointer) return;
                document.body.replaceChild(image, document.getElementsByTagName("img")[0]);
                pointer = idx;
                preload();
            }
            function preload() {
                var cache = null, idx = 0;
                for (var i = 0; i < 5; i++) {
                    idx = Date.now() + interval * (i + 1);
                    cache = new Image();
                    cache.onload = (function(ele, idx) { return function() { showImg(ele, idx); }; })(cache, idx);
                    cache.src = img + "?" + idx;
                }
            }
            window.onload = function() {
                document.getElementsByTagName("img")[0].onload = preload;
                document.getElementsByTagName("img")[0].src = "/*initial url*/";
            };
        })();
        </script>
        </head>
        <body>
        <img />
        </body>
        </html>

    And the Java (with the problem):

        package camba;

        import java.applet.Applet;
        import java.awt.Button;
        import java.awt.Graphics;
        import java.awt.Image;
        import java.awt.Label;
        import java.awt.TextField;
        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import java.net.URL;
        import java.util.concurrent.TimeUnit;
        import javax.imageio.ImageIO;

        public class Camba extends Applet implements ActionListener {
            Image img;
            TextField textField;
            Label label;
            Button start, stop;
            boolean terminate = false;
            long viewTime;

            public void init() {
                label = new Label("please enter camera URL ");
                add(label);
                textField = new TextField(30);
                add(textField);
                start = new Button("Start");
                add(start);
                start.addActionListener(this);
                stop = new Button("Stop");
                add(stop);
                stop.addActionListener(this);
            }

            public void actionPerformed(ActionEvent e) {
                Button source = (Button) e.getSource();
                if (source.getLabel().equals("Start")) {
                    for (int i = 0; i < 7; i++) {
                        myThread(50 * i);
                    }
                    System.out.println("start...");
                }
                if (source.getLabel().equals("Stop")) {
                    terminate = true;
                    System.out.println("stop...");
                }
            }

            public void paint(Graphics g) {
                update(g);
            }

            public void update(Graphics g) {
                try {
                    viewTime = System.currentTimeMillis();
                    g.drawImage(img, 100, 100, this);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            public void myThread(final int sleepTime) {
                new Thread(new Runnable() {
                    public void run() {
                        while (!terminate) {
                            try {
                                TimeUnit.MILLISECONDS.sleep(sleepTime);
                            } catch (InterruptedException ex) {
                                ex.printStackTrace();
                            }
                            long requestTime = 0;
                            Image tempImage = null;
                            try {
                                requestTime = System.currentTimeMillis();
                                URL pic = new URL(getDocumentBase(), textField.getText());
                                tempImage = ImageIO.read(pic);
                            } catch (Exception e) {
                                e.printStackTrace();
                            }
                            if (requestTime >= /* last view time */ viewTime) {
                                img = tempImage;
                                Camba.this.repaint();
                            }
                        }
                    }
                }).start();
                System.out.println("thread started...");
            }
        }

    Read the article

  • Php getting too many connections error from MySQL

    - by uzioriluzan
    Hello everyone, I am using MySQL and PHP with 2 application servers and 1 database server. With the increase in the number of users (around 1000 by now), I'm getting the following error: SQLSTATE[08004] [1040] Too many connections. The parameter "max_connections" is set to "1000" in my.cnf and "mysql.max_persistent" is set to -1 in php.ini. There are at most 1500 Apache processes running at a time, since the "MaxClients" Apache parameter is equal to 750 and we have 2 application servers. Should I raise "max_connections" to 1500 as indicated here? Or should I set "mysql.max_persistent" to 750 (we use PDO with persistent connections for performance reasons, since the database server is not the same as the application servers)? Or should I try something else? Thanks in advance!

    Read the article

  • Rails: getting logic to run at end of request, regardless of filter chain aborts?

    - by JSW
    Is there a reliable mechanism discussed in the Rails documentation for calling a function at the end of the request, regardless of filter chain aborts? It's not after filters, because after filters don't get called if any prior filter redirected or rendered. For context, I'm trying to put some structured profiling/reporting information into the app log at the end of every request. This information is collected throughout the request lifetime via instance variables wrapped in custom controller accessors, and dumped at the end as a JSON blob for use by a post-processing script. My end goal is to generate reports about my application's logical query distribution (things that depend on controller logic, not just request URIs and parameters), performance profile (time spent in specific DB queries or blocked on web services), failure rates (including invalid incoming requests that get rejected by before_filter validation rules), and a slew of other things that cannot really be parsed from the basic information in the application and Apache logs. At a higher level, is there a different "Rails way" that solves my app profiling goal?

    Read the article

  • Different i18n in spring according to url

    - by Fanooos
    I have a Spring web application that is required to work as follows: the application will be accessed from two different URLs, www.domain1.com and www.domain2.com, and the two URLs must look like two different applications, with different CSS and i18n. The CSS part is done, but I am stuck on the i18n part: how do I make Spring load a different i18n properties file according to the domain name? The solution I thought of is to implement a filter that checks the request URL and, according to the URL, clears the message source bean and loads the required i18n file, but that does not look good for performance (by the way, I am using a ReloadableResourceBundleMessageSource as the message source). Another solution is to implement two different message sources. The problem with this solution is that in my own code I can manage which bean I use, but how can I tell the fmt:message tag which message source to use? Thanks in advance and best regards.

    Read the article

  • Retrieving data from database. Retrieve only when needed or get everything?

    - by RHaguiuda
    I have a simple application to store Contacts. This application uses a simple relational database to store Contact information, like name, address and other data fields. While designing it, a question came to my mind: when designing programs that use databases, should I retrieve all database records and store them in objects in my program, so I get very fast performance, or should I always fetch data only when required? Of course, retrieving all the data is only an option when there isn't too much of it, but do you use this approach when you are sure the database will stay small (fewer than 300 records, for example)? I once designed a similar application that fetched data only when needed, but it was slow (using an Access database). Thanks for any help.

    Read the article

  • How to implement Voting for Grails Domain Classes?

    - by userWebMobile
    I have a Book class and need to implement yes/no voting functionality. My domain classes look like this:

        class Book {
            String title
            static hasMany = [votes: Vote]
        }

        class User {
            String name
            static hasMany = [votes: Vote]
        }

        class Vote {
            boolean yesVote
            static belongsTo = [user: User, book: Book]
        }

    What is the best way to implement voting for the Book class? I need the following information: What is the average yesVote for a book over all votes (either yes or no)? How do I check whether a specific user has already voted? And what is the best way to implement the computation of the average yesVote so that performance does not drop?
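    Sketches of the two queries against the domain classes above (untested, and assuming the usual GORM dynamic finders and HQL support are available):

        // share of yes votes for a given book, over all of its votes
        def yesShare = Vote.executeQuery(
            "select avg(case when v.yesVote = true then 1.0 else 0.0 end) " +
            "from Vote v where v.book = :book", [book: book])[0]

        // has this user already voted on this book?
        boolean hasVoted = Vote.countByUserAndBook(user, book) > 0

    If the average is read far more often than votes are cast, keeping a cached voteCount/yesCount pair on Book and updating it whenever a Vote is saved is a common way to keep that first query cheap.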

    Read the article

  • Interview Question: .Any() vs if (.Length > 0) for testing if a collection has elements

    - by Chris
    In a recent interview I was asked what the difference between .Any() and .Length > 0 is, and why I would use either when testing whether a collection has elements. This threw me a little, as it seems a little obvious, but I feel I may be missing something. I suggested that you use .Length when you simply need to know that a collection has elements and .Any() when you wish to filter the results. Presumably .Any() takes a performance hit too, as it has to do a loop / query internally.

    Read the article

  • Monitoring .NET ASP.NET Applications

    - by James Hollingworth
    I have a number of applications running on top of ASP.NET that I want to monitor. The main things I care about are: Exceptions: we currently have some custom code which will email us when an exception occurs; if the application is failing hard it will crash our Outlook... I know (and use) ELMAH, which partly solves the problem, but it is still just a big table of exceptions with a pretty(ish) UI. I want something that makes sense of all of these exceptions (e.g. groups exceptions, alerts when new ones occur, tells me what the common ones are that I should fix, etc.). Logging: we currently log to files which are then accessible via a shared folder, which devs grep and tail. Does anyone know of better ways of presenting this information? In an ideal world I want to associate it with exceptions. Performance: request times, memory usage, CPU, and whatever other stats I can get. I'm guessing this is probably going to be solved by a number of tools; has anyone got any suggestions?

    Read the article

  • PostgreSQL: How to index all foreign keys?

    - by biggusjimmus
    I am working with a large PostgreSQL database and am trying to tune it to get more performance. Our queries and updates seem to be doing a lot of lookups using foreign keys. What I would like is a relatively simple way to add indexes to all of our foreign keys without having to go through every table (~140 of them) and do it manually. In researching this, I've found that there is no way to have Postgres do this for you automatically (the way MySQL does), but I would be happy to hear otherwise.
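    One way to avoid the manual pass is to generate the DDL from the catalogs, along the lines of the sketch below against information_schema (it only handles single-column foreign keys; review the generated statements before running them):

        SELECT 'CREATE INDEX ' || tc.table_name || '_' || kcu.column_name || '_idx'
               || ' ON ' || tc.table_name || ' (' || kcu.column_name || ');'
        FROM information_schema.table_constraints tc
        JOIN information_schema.key_column_usage kcu
          ON kcu.constraint_name = tc.constraint_name
         AND kcu.constraint_schema = tc.constraint_schema
        WHERE tc.constraint_type = 'FOREIGN KEY';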

    Read the article

  • NSArray vs. SQLite for Complex Queries on iPhone

    - by GingerBreadMane
    Developing for iPhone, I have a collection of points that I need to run complex queries on. For example: "How many points have a y-coordinate of 10?" and "Return all points with an x-coordinate between 3 and 5 and a y-coordinate of 7." Currently, I am just cycling through each element of an NSArray and checking whether it matches my query. It's a pain to write the queries, though. SQLite would be much nicer. I'm not sure which would be more efficient, though, since a SQLite database resides on disk and not in memory (to my understanding). Would SQLite be as efficient or more efficient here? Or is there a better way to do it than these methods, one that I haven't thought of? I would need to perform multiple queries against multiple sets of points thousands of times, so good performance is important.
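    Before moving to SQLite, it may be worth noting that the in-memory queries don't have to be hand-rolled loops; NSArray can be filtered with NSPredicate, as in this sketch (assuming each point object exposes numeric x and y properties):

        // Sketch: points is an NSArray of objects with x/y properties accessible via KVC.
        NSPredicate *range = [NSPredicate predicateWithFormat:@"x >= 3 AND x <= 5 AND y == 7"];
        NSArray *matches = [points filteredArrayUsingPredicate:range];

        NSUInteger count = [[points filteredArrayUsingPredicate:
                                [NSPredicate predicateWithFormat:@"y == 10"]] count];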

    Read the article

  • code-style: Is inline initialization of JS objects ok?

    - by michael
    I often find myself using inline initialization (see the example below), especially in a switch statement when I don't know which case the loop will hit. I find it easier to read than if statements. But is this good practice, or will it incur side effects or a performance hit?

        for (var i in array) {
            var o = o ? o : {}; // init object if it doesn't exist
            o[array[i]] = 1;    // add key-values
        }

    Is there a good website to go to for coding style tips?
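    For comparison, the more widespread spelling of the same idiom hoists the default out of the loop (shown only as an illustration of the alternative, not as a verdict on style):

        var o = o || {};        // init object if it doesn't exist
        for (var i in array) {
            o[array[i]] = 1;    // add key-values
        }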

    Read the article

  • Multimedia content in REST responce(XML/JSON)

    - by Koushik
    In my thesis I need to test different architectures. A request goes to a REST web service developed using Apache CXF and Spring MVC, with MySQL as the back end serving references (a field in the database) to image, audio and video files stored in the file system. In the response message, what is the best method of sending the content to the client (another application using the service I developed)? URI: http://www.filmservices.com/film/{id}. A client here is not the end user. One option is to send encoded hyperlinks (pointing to where the content is stored in the file system) to the client, so that the client renders the response and displays it in the browser. The other is to use Base64 to encode the content (image, audio, video) and send it in the message to the client. The main concern is performance.
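    For the hyperlink option, the response body might look something like the sketch below (the field names and media paths are purely illustrative, not part of the actual service):

        {
          "film": {
            "id": 42,
            "title": "Example film",
            "poster": "http://www.filmservices.com/media/posters/42.jpg",
            "trailer": "http://www.filmservices.com/media/trailers/42.mp4"
          }
        }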

    Read the article

  • Mysql query taking too much time

    - by aditya
    I have a problem related to a MySQL database. I am a Linux web server admin and I am facing a problem with a MySQL query. The database is very small. I tried to track it in the logs and found that a query takes a minimum of 5 seconds to respond. The first page of the site comes from the database; the client is using a CMS. When the server gets a certain number of hits, the database server starts to respond very slowly and the wait time increases well beyond 5 seconds. I checked the slow query log:

        Query_time: 11.480138  Lock_time: 0.003837  Rows_sent: 921  Rows_examined: 3333
        SET timestamp=1346656767;
        SELECT `Tender`.`id`, `Tender`.`department_id`, `Tender`.`title_english`,
               `Tender`.`content_english`, `Tender`.`title_hindi`, `Tender`.`content_hindi`,
               `Tender`.`file_name`, `Tender`.`start_publish`, `Tender`.`end_publish`,
               `Tender`.`publish`, `Tender`.`status`, `Tender`.`createdBy`, `Tender`.`created`,
               `Tender`.`modifyBy`, `Tender`.`modified`
        FROM `mcms_tenders` AS `Tender`
        WHERE `Tender`.`department_id` IN ( 31, 33, 32, 30 );

    Every entry in the log is the same; only the query time differs. Is there any way to tweak the performance?
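    A reasonable first check, sketched below, assumes mcms_tenders has no index on department_id; EXPLAIN will confirm whether that is actually the case:

        EXPLAIN SELECT `Tender`.`id`
        FROM `mcms_tenders` AS `Tender`
        WHERE `Tender`.`department_id` IN (31, 33, 32, 30);

        ALTER TABLE `mcms_tenders` ADD INDEX `idx_department_id` (`department_id`);

    With only ~3,300 rows examined, the large text columns returned for 921 rows may matter as much as the missing index, so selecting fewer columns on the front page is also worth testing.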

    Read the article

  • How to correct this glitch

    - by Rebol Tutorial
    I removed background: url(none); from my stylesheet because of load performance (http://stackoverflow.com/questions/2577422/why-firebug-pretends-that-my-stylesheet-is-calling-my-xmlrpc). The problem is that it now causes a glitch in a CSS list. Any idea how to fix this? Thanks. Update (picture below): I tried background: none as suggested, but it didn't solve the problem:

        ul.sidebar_list li ul li ul {
            margin: 0px;
            padding: 0px !important;
            float: left;
            width: 100%;
            list-style-type: none;
            background: none;
        }

    Read the article

  • How to implement square root and exponentiation on arbitrary length numbers?

    - by tomp
    I'm working on a new data type for arbitrary-length numbers (only non-negative integers) and I got stuck implementing the square root and exponentiation functions (only for natural exponents). Please help. I store the arbitrary-length number as a string, so all operations are done char by char. Please don't include advice to use a different (existing) library or a way to store the number other than a string. It's meant to be a programming exercise, not a real-world application, so optimization and performance are not so necessary. If you include code in your answer, I would prefer it to be in either pseudo-code or C++. The important thing is the algorithm, not the implementation itself. Thanks for the help.
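    For what it's worth, both operations reduce to arithmetic the type presumably already has. Below is a sketch in C++-flavoured pseudo-code, where BigNum stands in for the string-backed type and is assumed to already provide comparison, addition, subtraction, multiplication and a halve() (integer divide-by-two) operation; the exponent is taken as a plain unsigned long for simplicity:

        BigNum pow(BigNum base, unsigned long exp) {        // exponentiation by squaring
            BigNum result("1");
            while (exp > 0) {
                if (exp & 1) result = result * base;        // odd bit: multiply once
                base = base * base;
                exp >>= 1;
            }
            return result;
        }

        BigNum isqrt(const BigNum &n) {                     // floor(sqrt(n)) by binary search
            BigNum lo("0"), hi = n;
            while (lo < hi) {
                BigNum mid = (lo + hi + BigNum("1")).halve();   // midpoint, rounded up
                if (mid * mid <= n) lo = mid;               // invariant: lo*lo <= n
                else hi = mid - BigNum("1");
            }
            return lo;
        }

    The binary-search square root costs O(log n) big multiplications; a digit-by-digit (long-division style) square root avoids the largest multiplications but takes more code to write.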

    Read the article

  • assignment vs std::swap and merging and keeping duplicates in seperate object

    - by rubenvb
    Say I have two std::set<std::string>s. The first one, old_options, needs to be merged with additional options contained in new_options. I can't just use std::merge (well, I do, but not only that) because I also check for duplicates and warn the user about them accordingly. To this effect, I have:

        void merge_options( set<string> &old_options, const set<string> &new_options )
        {
            // find duplicates and create merged_options, a stringset containing the merged options
            // handle duplicates the way I want to
            // ...
            old_options = merged_options;
        }

    Is it better to use std::swap( merged_options, old_options ); or the assignment I have? Is there a better way to filter duplicates and return the merged set than consecutive calls to std::set_intersection and std::set_union to detect dupes and merge the sets? I know it's slower than one traversal doing both at once, but these sets are small (performance is not critical) and I trust the Standard more than I trust myself.
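    For the duplicate question specifically, a single pass is enough because std::set::insert already reports whether the element was new. A sketch (pre-C++11 style, assuming the usual <set>, <string> and <iostream> includes):

        // Merge new_options into old_options in place, warning about duplicates.
        void merge_options(std::set<std::string> &old_options,
                           const std::set<std::string> &new_options)
        {
            std::set<std::string>::const_iterator it = new_options.begin();
            for (; it != new_options.end(); ++it) {
                if (!old_options.insert(*it).second)   // .second is false if already present
                    std::cerr << "warning: duplicate option '" << *it << "'\n";
            }
        }

    Since this mutates old_options directly, the swap-versus-assignment question disappears; when a separate merged_options is kept, std::swap merely avoids one copy, which hardly matters for small sets.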

    Read the article

  • how do copyright permission systems for content hosting sites work?

    - by zebraman
    I am wondering about subscription sites that host content, like recorded performances from concerts. I'm sure there is a tangle of copyright permissions that must be granted for these video/audio files to be hosted. For example, if a band plays a cover of another band's song, permission must be obtained not only from the band that performed but also from the band that owns the song, and perhaps even from the venue that hosted the performance, in order to record the video and post the content. I am curious how websites that host content like this work. How might an automated copyright system keep track of who owns certain performances and obtain permission from those owners to record and post their content?

    Read the article

  • Dealing with Windows line-endings in Python

    - by Adam Nelson
    I've got a 700MB XML file coming from a Windows provider. As one might expect, the line endings are '\r\n' (or ^M in vi). What is the most efficient way to deal with this, aside from getting the supplier to send over '\n'? :-) The options I've looked at: use os.linesep; use rstrip() (which requires opening the file... which seems crazy); universal newline support, which is not standard on my Mac Snow Leopard, so it isn't an option. I'm open to anything that requires Python 2.6+, but it needs to work on Snow Leopard and Ubuntu 9.10 with minimal external requirements. I don't mind a small performance penalty, but I am looking for the standard best way to deal with this.
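    A minimal sketch of streaming the conversion line by line in binary mode, so the 700MB never has to fit in memory (the file names here are illustrative):

        # Rewrite CRLF line endings as LF without loading the whole file at once.
        src = open('feed.xml', 'rb')
        dst = open('feed_unix.xml', 'wb')
        for line in src:                      # binary mode: lines still end with '\r\n'
            if line.endswith('\r\n'):
                line = line[:-2] + '\n'
            dst.write(line)
        dst.close()
        src.close()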

    Read the article

  • entity framework and dirty reads

    - by bryanjonker
    I have Entity Framework (.NET 4.0) going against SQL Server 2008. The database is (theoretically) getting updated during business hours -- delete, then insert, all through a transaction. Practically, it's not going to happen that often. But, I need to make sure I can always read data in the database. The application I'm writing will never do any types of writes to the data -- read-only. If I do a dirty read, I can always access the data; the worst that happens is I get old data (which is acceptable). However, can I tell Entity Framework to always use dirty reads? Are there performance or data integrity issues I need to worry about if I set up EF this way? Or should I take a step back and see about rewriting the process that's doing the delete/insert process?

    Read the article
