Search Results

Search found 31421 results on 1257 pages for 'software performance'.


  • Eager loading vs. many queries with PHP, SQLite

    - by Mike
    I have an application with an n+1 query problem, but when I implemented eager loading I found absolutely no performance gain. I do use an identity map, so objects are only created once. Here's a benchmark over ~3000 objects.

    Lazy loading (n+1 queries):

        first query + first object creation: 0.00636100769043 sec, memory usage: 190008 bytes
        iterate through all objects (queries + object creation): 1.98003697395 sec, memory usage: 7717116 bytes

    And here's one where I use eager loading:

        query: 0.0881109237671 sec, memory usage: 6948004 bytes
        object creation: 1.91053009033 sec, memory usage: 12650368 bytes
        iterate through all objects: 1.96605396271 sec, memory usage: 12686836 bytes

    So my questions are: Is SQLite just magically lightning fast when it comes to small queries? (I'm used to working with MySQL.) Does this just seem wrong to anyone? Shouldn't eager loading have given much better performance?
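
    A minimal sketch (Python/sqlite3 with made-up table names - not the asker's PHP code) of the two access patterns being compared: one query per parent row (n+1) versus a single joined query (eager loading). Because SQLite runs in-process, each small query is little more than a function call, which is one reason the gap between the two can be far smaller than it would be against a networked server such as MySQL.

        import sqlite3, time

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE posts (id INTEGER PRIMARY KEY);
            CREATE TABLE comments (id INTEGER PRIMARY KEY, post_id INTEGER);
        """)
        conn.executemany("INSERT INTO posts (id) VALUES (?)", [(i,) for i in range(3000)])
        conn.executemany("INSERT INTO comments (post_id) VALUES (?)", [(i,) for i in range(3000)])

        # n+1: one query for the parents, then one query per parent
        t0 = time.time()
        for (post_id,) in conn.execute("SELECT id FROM posts").fetchall():
            conn.execute("SELECT id FROM comments WHERE post_id = ?", (post_id,)).fetchall()
        print("n+1 queries:", time.time() - t0)

        # eager: one joined query that brings everything back at once
        t0 = time.time()
        conn.execute(
            "SELECT p.id, c.id FROM posts p LEFT JOIN comments c ON c.post_id = p.id"
        ).fetchall()
        print("one joined query:", time.time() - t0)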

    Read the article

  • Extracting data from multiple servers SQL 2005 SSIS

    - by Raj
    I have created an SSIS package to connect to multiple SQL Servers and create a database, a table and a stored procedure on each. The package also creates a job and schedules it to run every 5 minutes. The requirement is to collect performance metrics. I am using an ADO object variable to get the server names, and all of the above tasks are in a Foreach Loop; everything works fine. Now the problem: I need to create a data flow task that will connect to each of these servers in turn, copy the performance metrics data over to a central server, and purge the source table. I am unable to get this task to work; it fails with an "Unable to obtain Connection" error. Any help will be greatly appreciated. SQL Server version: 2005. Thanks, Raj
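
    This is not a fix for the SSIS connection-manager error itself - just a sketch, outside SSIS, of the same collect-and-purge pattern the data flow task needs to perform. Server, database, and table names are placeholders; it assumes pyodbc and Windows authentication.

        import pyodbc

        SOURCES = ["SQLBOX01", "SQLBOX02"]   # in the package these come from the ADO object variable
        CENTRAL = "CENTRALSQL"

        def connect(server):
            return pyodbc.connect("DRIVER={SQL Server};SERVER=%s;"
                                  "DATABASE=PerfMetrics;Trusted_Connection=yes" % server)

        central = connect(CENTRAL)
        central_cur = central.cursor()

        for server in SOURCES:
            src = connect(server)
            src_cur = src.cursor()
            rows = src_cur.execute(
                "SELECT collected_at, counter_name, value FROM dbo.Metrics").fetchall()
            central_cur.executemany(
                "INSERT INTO dbo.Metrics (server_name, collected_at, counter_name, value) "
                "VALUES (?, ?, ?, ?)",
                [(server,) + tuple(row) for row in rows])
            central.commit()
            src_cur.execute("DELETE FROM dbo.Metrics")   # purge the source after the copy
            src.commit()
            src.close()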

    Read the article

  • To store images from UIGetScreenImage() in NSMutableArray

    - by sujyanarayan
    Hi, I'm getting images from UIGetScreenImage() and storing them directly in a mutable array, like this:

        image = [UIImage imageWithScreenContents];
        [array addObject:image];
        [image release];

    This code runs in a timer, so I can't use UIImagePNGRepresentation() to store the data as NSData, as it reduces performance. I want to use this array directly after some time, i.e. after capturing 1000 images in 100 seconds. When I use the code below:

        UIImage *im = [[UIImage alloc] init];
        im = [array objectAtIndex:i];
        UIImageWriteToSavedPhotosAlbum(im, nil, nil, nil);

    the application crashes. And I don't want to use UIImagePNGRepresentation() or UIImageJPEGRepresentation() in the timer, as it reduces performance. My problem is how to use this array so that it is converted back into an image. If anybody has an idea about this, please share it with me. Thanks in advance.

    Read the article

  • Adjacency List Tree Using Recursive WITH (Postgres 8.4) instead of Nested Set

    - by Koobz
    I'm looking for a Django tree library and doing my best to avoid Nested Sets (they're a nightmare to maintain). The cons of the adjacency list model have always been an inability to fetch descendants without resorting to multiple queries. The WITH clause in Postgres seems like a solid solution to this problem. Has anyone seen any performance reports regarding WITH vs. Nested Set? I assume the Nested set will still be faster but as long as they're in the same complexity class, I could swallow a 2x performance discrepancy. Django-Treebeard interests me. Does anyone know if they've implemented the WITH clause when running under Postgres? Has anyone here made the switch away from Nested Sets in light of the WITH clause?
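
    For reference, a minimal sketch (table, column, and connection details are assumptions) of fetching a node and all of its descendants from a plain adjacency-list table in a single query, which is what PostgreSQL 8.4's recursive WITH makes possible; it is shown through psycopg2 rather than the ORM.

        import psycopg2

        conn = psycopg2.connect("dbname=app user=app")
        cur = conn.cursor()

        # one round trip returns the whole subtree rooted at a given node
        cur.execute("""
            WITH RECURSIVE subtree AS (
                SELECT id, parent_id, name, 0 AS depth
                FROM category
                WHERE id = %s
              UNION ALL
                SELECT c.id, c.parent_id, c.name, s.depth + 1
                FROM category c
                JOIN subtree s ON c.parent_id = s.id
            )
            SELECT id, name, depth FROM subtree ORDER BY depth
        """, (1,))

        for node_id, name, depth in cur.fetchall():
            print("  " * depth + name)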

    Read the article

  • What Simple Changes Made the Biggest Improvements to Your Delphi Programs

    - by lkessler
    I have a Delphi 2009 program that handles a lot of data and needs to be as fast as possible without using too much memory. What small, simple changes have you made to your Delphi code that had the biggest impact on the performance of your program, by noticeably reducing execution time or memory use? Thanks everyone for all your answers - many great tips. For completeness, I'll post a few important articles on Delphi optimization that I found:

        Before you start optimizing Delphi code, at About.com
        Speed and Size: Top 10 Tricks, also at About.com
        Code Optimization Fundamentals and Delphi Optimization Guidelines at High Performance Delphi, relating to Delphi 7 but still very pertinent

    Read the article

  • Optimizing a large iteration of PHP objects (EAV-based)

    - by Aron Rotteveel
    I am currently working on a project that utilizes the EAV model. This turns out to work quite well, but like many others I am now stumbling upon some performance issues. The data set in this particular case consists of approximately 2500 entities, each with approx. 150 attributes. Each entity and each attribute is represented by a PHP object. Since most parts of the application only iterate through a filtered set of entities, we have not had very large issues yet. Now, however, I am working on an algorithm that requires iteration over the entire dataset, which has a major impact on performance. This information is perhaps not very much to work with, but since this is an architectural problem, I am hoping for an architectural pattern to help me on the way as well. Each entity, including its attributes, takes up approx. 500KB of memory.
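
    Not the project's PHP, but a Python sketch (with assumed EAV table names) of one architectural option for a full-dataset pass: fetch every attribute row in a single query ordered by entity, group the rows into a plain dict per entity, and process entities one at a time instead of materialising roughly 2500 x 150 attribute objects up front.

        import sqlite3
        from itertools import groupby
        from operator import itemgetter

        conn = sqlite3.connect("eav.db")   # stand-in for the real database connection
        rows = conn.execute("""
            SELECT entity_id, attribute_code, value
            FROM entity_value
            ORDER BY entity_id
        """)

        for entity_id, attr_rows in groupby(rows, key=itemgetter(0)):
            # a lightweight dict instead of a full object graph per entity
            entity = {code: value for _, code, value in attr_rows}
            # ... run the algorithm on this entity, then let it be garbage collected ...
            total_attrs = len(entity)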

    Read the article

  • silverlight for .NET / CLR based numerical computing on osx

    - by Jonathan Shore
    I'm interested in using F# for numerical work, but my platforms are not Windows based. Mono still has a significant performance penalty for programs that generate a large number of short-lived objects (as is typical for functional languages). Silverlight is available on OS X. I had seen some references indicating that assemblies compiled in the usual way could not be referenced, but I'm not clear on the details. I'm not interested in UIs, but I'm wondering whether I could use the VM bundled with Silverlight effectively for execution. I would want to be able to reference a large library of numerical models I already have in Java (cross-compiled via IKVM to .NET assemblies) and a new codebase written in F#. My hope is that the Silverlight VM on OS X has good performance and can reference external assemblies and native libraries. Is this doable?

    Read the article

  • DDD: Client-side script to enforce invariants

    - by Mosh
    Hello, one thing that confuses me about DDD is that the domain is supposed to handle all business logic and enforce invariants. I have noticed that some people (me included) handle certain invariants in the presentation layer (i.e. WebForms, Views, etc.) with JavaScript. This is mainly done to improve performance, so the server is not hit by requests that may be invalid. Even though this approach may be beneficial performance-wise, it violates DDD principles: what if the business rules change? We no longer have a rich domain where all the business rules are captured, and whenever a rule changes we have to change both the domain and the presentation layer. Has anyone come across this situation before? I'd like to know your thoughts on this. Cheers, Mosh
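
    One common compromise, sketched below in Python for brevity (the names are invented and this is not from the question), is to keep the invariant in the domain only and have the presentation layer ask the domain for a machine-readable description of the rule to drive its client-side validation, so a rule change in one place updates both checks.

        import json

        class OrderQuantity:
            MIN, MAX = 1, 100  # the invariant lives in the domain, nowhere else

            @classmethod
            def validate(cls, value):
                # server-side enforcement - always runs, JavaScript or not
                if not (cls.MIN <= value <= cls.MAX):
                    raise ValueError("quantity must be between %d and %d" % (cls.MIN, cls.MAX))
                return value

            @classmethod
            def client_rules(cls):
                # metadata the view renders into its JavaScript validator
                return {"min": cls.MIN, "max": cls.MAX}

        # the presentation layer pulls the rule from the domain instead of duplicating it
        print(json.dumps({"quantity": OrderQuantity.client_rules()}))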

    Read the article

  • mysql index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third party web sites. This is its structure:

        id, site_id, unixtime, unixtime_last, ip_address, uid

    There are four indexes: id, site_id/unixtime, site_id/ip_address, and site_id/uid.

    There are many different types of ways that we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as determining if this is a new visitor or a returning visitor. Obviously storing site_id inside 3 indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given specific site_id. Any ideas on making this more efficient?

    I don't really understand B-trees besides some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? Because I considered having the site_id being the second column of the index for both ip_address and uid, but I think that would make the index less efficient since the IP and UID are going to vary more than the site ID will, because we only have about 8000 unique sites per database server, but millions of unique visitors across all ~8000 sites on a daily basis.

    I've also considered removing site_id from the IP and UID indexes completely, since the chances of the same visitor going to multiple sites that share the same database server are quite small, but in cases where this does happen, I fear it could be quite slow to determine if this is a new visitor to this site_id or not. The query would be something like:

        select id from sessions where uid = 'value' and site_id = 123 limit 1

    ... so if this visitor had visited this site before, it would only need to find one row with this site_id before it stopped. This wouldn't be super fast necessarily, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through all of the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't be finding one for this site ID.

    Any insight on making this as efficient as possible would be appreciated :)

    Update - this is a MyISAM table with MySQL 5.0. My concerns are both with performance as well as storage space. This table is both read and write heavy. If I had to choose between performance and storage, my biggest concern is performance - but both are important. We use memcached heavily in all areas of our service, but that's not an excuse to not care about the database design. I want the database to be as efficient as possible.
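
    A rough sketch (connection details and exact column list are assumptions, using mysql-connector-python) of why site_id actually works well as the leftmost column here: every query filters on site_id with equality, so each composite index lets MySQL seek straight to one site's slice of the tree and then scan the second column, regardless of how selective site_id is on its own. EXPLAIN shows which index is chosen and roughly how many rows it expects to examine.

        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="app",
                                       password="secret", database="tracking")
        cur = conn.cursor()

        # equality on the leftmost column (site_id) plus a range on the second (unixtime)
        cur.execute(
            "SELECT id FROM sessions"
            " WHERE site_id = %s AND unixtime BETWEEN %s AND %s",
            (123, 1270000000, 1270086400))
        visitors = cur.fetchall()

        # the "is this a returning visitor?" lookup from the question, plus its plan
        cur.execute(
            "EXPLAIN SELECT id FROM sessions"
            " WHERE site_id = %s AND uid = %s LIMIT 1",
            (123, "cookie-value"))
        for row in cur.fetchall():
            print(row)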

    Read the article

  • Adding an application to OpenWithList with Inno Setup

    - by Ben McCann
    I'm trying to write an installer for an app I created. I found a suggestion elsewhere that I was trying to follow and it mostly worked. My app is now in the "Open With" list. However, the app won't run at all. Could it be that it's because the app is not being started in its directory, so it can't find the dlls?

        Root: HKCR; Subkey: ".xls\OpenWithList\docs.exe"; Flags: uninsdeletekey noerror
        Root: HKCR; Subkey: ".ods\OpenWithList\docs.exe"; Flags: uninsdeletekey noerror
        Root: HKCR; Subkey: "applications\docs.exe\shell\open\command"; ValueType: string; ValueData: """{app}\docs.exe"" ""%1?"""; Flags: uninsdeletekey noerror
        Root: HKCU; Subkey: "Software\Classes\.xls\OpenWithList\docs.exe"; Flags: uninsdeletekey
        Root: HKCU; Subkey: "Software\Classes\.ods\OpenWithList\docs.exe"; Flags: uninsdeletekey
        Root: HKCU; Subkey: "Software\Classes\applications\docs.exe\shell\open\command"; ValueType: string; ValueData: """{app}\docs.exe"" ""%1"""; Flags: uninsdeletekey

    Read the article

  • Difference between Apache Tapestry and Apache Wicket

    - by Stephan Schmidt
    Apache Wicket ( http://wicket.apache.org/ ) and Apache Tapestry ( http://tapestry.apache.org/ ) are both component-oriented web frameworks - as opposed to action-based frameworks like Stripes - from the Apache Foundation. Both allow you to build your application from components in Java, and they look very similar to me. What are the differences between these two frameworks? Does anyone have experience with both? Specifically:

        How is their performance, how much can state handling be customized, and can they be used stateless?
        What is the difference in their component models?
        What would you choose for which applications?
        How do they integrate with Guice, Spring, and JSR 299?

    Edit: I have read the documentation for both and I have used both. The questions cannot be answered sufficiently from reading the documentation, only from the experience of using these frameworks for some time, e.g. how to use Wicket in a stateless mode for high-performance sites. Thanks.

    Read the article

  • one page filter results in new page in javascript

    - by Jake
    I have links set up on one page, and the relationship between the links is a parent-child relationship (for example, parent: All; children: Software, Hardware). These links lead the user to a new page that shows the results from a populated table. Currently these links all go to essentially the same destination, just with a different filter in the URL. The problem is that there is also a JavaScript filter on the destination page that lets the user choose between All, Software, or Hardware. Basically, if the URL still says they are on the Software page but they have just filtered the page to Hardware, that doesn't look good IMO. So what I was trying to do was make the links on the initial page all go to the exact same destination and somehow still know on the new page which link was clicked, then run the JavaScript filter based on which link that was. Is there a way to find that out from JavaScript? I guess, a way to pass that value to the new page and retrieve it in JavaScript without showing it in the URL, so I can filter the table for the user based on that value?

    Read the article

  • NoneType object has no attribute '__getitem__'

    - by adohertyd
    I am trying to use an API wrapper downloaded from the net to get results from the new Azure Bing API. I'm trying to implement it as per the instructions but getting the runtime error:

        Traceback (most recent call last):
          File "bingwrapper.py", line 4, in <module>
            bingsearch.request("affirmative action")
          File "/usr/local/lib/python2.7/dist-packages/bingsearch-0.1-py2.7.egg/bingsearch.py", line 8, in request
            return r.json['d']['results']
        TypeError: 'NoneType' object has no attribute '__getitem__'

    This is the wrapper code:

        import requests

        URL = 'https://api.datamarket.azure.com/Data.ashx/Bing/SearchWeb/Web?Query=%(query)s&$top=50&$format=json'
        API_KEY = 'SECRET_API_KEY'

        def request(query, **params):
            r = requests.get(URL % {'query': query}, auth=('', API_KEY))
            return r.json['d']['results']

    The instructions are:

        >>> import bingsearch
        >>> bingsearch.API_KEY='Your-Api-Key-Here'
        >>> r = bingsearch.request("Python Software Foundation")
        >>> r.status_code
        200
        >>> r[0]['Description']
        u'Python Software Foundation Home Page. The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to ...'
        >>> r[0]['Url']
        u'http://www.python.org/psf/

    This is my code that uses the wrapper (as per the instructions):

        import bingsearch

        bingsearch.API_KEY='abcdefghijklmnopqrstuv'
        r = bingsearch.request("affirmative+action")

    Read the article

  • Schemas and tables versus user-ids in a single table using PostgreSQL

    - by gvkv
    I'm developing a web app and I've come to a fork in the road with respect to database structure, and I don't know which direction to take. I have a database with user information that I can structure one of two ways. The first is to create a schema and a set of tables for each user (duplicating the structure for each user), and the second is to create a single set of tables and query information based on user id. Suppose 100,000 users. Here are my questions: Considering security, performance, scalability and administration, where does each choice lie? Would the answers change for 1,000,000 or 10,000? Is there a set of best practices that leads to one choice or the other? It seems to me that multiple schemas are more secure, since it's trivial to restrict user privileges, but what about performance and scalability? Administration seems like a wash, since dumping (and restoring) lots of schemas isn't any more difficult than dumping a few.
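
    For concreteness, here is a minimal sketch of the second option (a single set of tables keyed by user id) using psycopg2; the table, column, and connection details are invented. An index on user_id is what keeps per-user queries fast as the user count grows, and it avoids the operational overhead of creating and migrating one schema per user.

        import psycopg2

        conn = psycopg2.connect("dbname=app user=app")
        cur = conn.cursor()

        cur.execute("""
            CREATE TABLE documents (
                id      serial PRIMARY KEY,
                user_id integer NOT NULL,
                title   text,
                body    text
            )
        """)
        cur.execute("CREATE INDEX documents_user_id_idx ON documents (user_id)")
        conn.commit()

        # every query is scoped by user_id, so one user's data never leaks into another's results
        cur.execute("SELECT id, title FROM documents WHERE user_id = %s", (42,))
        print(cur.fetchall())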

    Read the article

  • Any good interview questions to ask a Sybase DBA?

    - by scot
    Hi, I am a Java developer and I will be interviewing Sybase DBAs along with my boss. I know some basic stuff about Sybase. I am looking for good interview questions I can ask a Sybase DBA; candidates will have a minimum of 4 years of experience. I am looking for them to have really good knowledge in performance and tuning related areas, such as how to measure database performance and suggest ways to improve database design or Sybase configuration. Help much appreciated. BR

    Read the article

  • Working for free

    - by truncate
    Finances are making me take an extended period off of my college education. In my current state, I don't feel fully qualified to be employed by an iPhone software company. While I work on getting things back together, I'd like to try and work for a software company for free in my local area (I'm going to college out of state and have to move back as well). The economy has forced employers to be very picky about who they hire, if they hire at all. Since I'd like to continue refining my abilities, I was wondering what the consensus is on working for free. It can't be considered an internship, as I would no longer be in school... I guess an apprenticeship is more appropriate. Like I said, I don't think I'm qualified to be paid for my services, and I don't want to be. I just don't know how to ask, or if it's even appropriate to ask them to show me how to develop software in the real world. My thinking is that they would be willing to get some work done for free, and if I prove myself, they could hire me. If not, there was no major loss: they get some free development and lose a bit of time helping show me the ropes, and I get either a job or valuable experience that I need. The other alternative is that I try to work things out by myself on the iPhone platform, but that sounds terrifying. I appreciate any input the community has to offer.

    Read the article

  • Is there anything for Python that is like readability.js?

    - by Emre Sevinç
    Hi, I'm looking for a package / module / function etc. that is approximately the Python equivalent of Arc90's readability.js:

        http://lab.arc90.com/experiments/readability
        http://lab.arc90.com/experiments/readability/js/readability.js

    so that I can give it some input.html and get back a cleaned-up version of that HTML page's "main text". I want this so that I can use it on the server side (unlike the JS version, which runs only on the browser side). Any ideas? PS: I have tried Rhino + env.js and that combination works, but the performance is unacceptable - it takes minutes to clean up most of the HTML content :( (I still couldn't find out why there is such a big performance difference).
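
    Not readability.js itself, but a rough server-side approximation of its core idea, using BeautifulSoup (an assumption; the question only asks for something like it): strip scripts and styles, then return the block element that holds the most text.

        from bs4 import BeautifulSoup

        def main_text(html):
            soup = BeautifulSoup(html, "html.parser")
            for tag in soup(["script", "style", "noscript"]):
                tag.decompose()                      # drop non-content markup
            candidates = soup.find_all(["div", "article", "td"]) or [soup]
            # crude readability heuristic: the densest block is usually the article body
            best = max(candidates, key=lambda el: len(el.get_text(strip=True)))
            return best.get_text(" ", strip=True)

        with open("input.html", encoding="utf-8") as f:
            print(main_text(f.read())[:500])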

    Read the article

  • Using SqlCacheDependency to get real time updates? - ASP.NET

    - by user102533
    I would like to display real-time updates on a web page (based on a status field in a database table that is altered by an external process). Based on my research, there are several ways of doing this:

        Long polling (Comet) - this seems complex to implement.
        Regular polling - I can have an AJAX method trigger a database hit every 5 seconds to get the current status, but I fear this will have performance issues.

    Then I read about using SqlCacheDependency - basically, the cache gets invalidated based on a field in the table. I am assuming I can use the event triggered when the cache is invalidated to show the new update to the user? What's an easy solution that will not have performance issues? Anyone?

    Read the article

  • Visual Artifacts in Visual Studio 2010

    - by Simon Chadwick
    I'm using VS 2010 on Windows Server 2003, running on a Dell Inspiron 9400 laptop. VS 2010 runs fine, except for persistent and random screen re-drawing issues. Samples of these are here. These artifacts occur as the mouse moves over items that highlight on a mouse-over event, while scrolling, and when switching tabs. VS 2008 has none of these issues, so I assume it is related to VS 2010's use of WPF. Could it be that my video card or driver is not up to the task of rendering WPF? Some other WPF applications (not Silverlight) also have some of these screen repainting problems. I have tried a variety of settings in System Properties--Advanced--Performance Options--Visual Effects, and in the related "Advanced" tab, Processor Scheduling is adjusted for best performance of programs. Many thanks for any suggestions!

    Read the article

  • pyopengl: Could it replace c++ ?

    - by Tom
    Hi everyone. I'm starting a computer graphics course, and I have to choose a language. The choices are between C++ and Python. I have no problem with C++; Python is a work in progress. So I was thinking of going down the Python road, using PyOpenGL for the graphics part. I have heard, though, that performance is an issue. Are Python and PyOpenGL mature enough to challenge C++ on performance? I realize it's a long shot, but I'd like to hear your thoughts and experiences on using PyOpenGL. Thanks in advance.

    Read the article

  • Push data to flex client

    - by KensoDev
    Howdy, I want to push data to Flex clients. I am talking about anywhere between 5,000 and 15,000 concurrent users, each needing to get data every time a currency changes, so that means lots of changes for lots of users. I have been looking into WebOrb.net, but the performance seems very poor (100 concurrent users) for a product so pricey (we purchased a license). So I have to look into alternatives. I know there's FluorineFx, but it seems no one is really using it in products, and it lacks examples and documentation. My question is: what products can answer my needs (.NET backend), and what performance can I expect out of these products? Thanks

    Read the article

  • Popularity Algorithm - SQL / Django

    - by RadiantHex
    Hi folks, I've been looking into the popularity algorithms used on sites such as Reddit, Digg and even Stack Overflow. The Reddit algorithm:

        t = (time of entry post) - (Dec 8, 2005)
        x = upvotes - downvotes
        y = {1 if x > 0, 0 if x = 0, -1 if x < 0}
        z = {1 if x < 0, otherwise x}
        log(z) + (y * t) / 45000

    I have always done simple ordering within SQL, and I'm wondering how I should deal with ordering like this. Should it be used to define a table, or could I build the ordering formula into the SQL itself (without hindering performance)? I am also wondering whether it is possible to use multiple ordering algorithms on different occasions without running into performance problems. I'm using Django and PostgreSQL. Help would be much appreciated! ^^
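
    A small sketch of the formula as quoted above, in Python. Note that the quoted definition of z breaks for x <= 0 (log of zero or a negative number), so this sketch substitutes the commonly used guard max(abs(x), 1) - that substitution is an assumption, not part of the question. A typical way to use it with Django and PostgreSQL is to compute the score when votes change, store it in an indexed column, and simply ORDER BY that column.

        from datetime import datetime
        from math import log10

        EPOCH = datetime(2005, 12, 8)

        def hot_score(upvotes, downvotes, posted_at):
            t = (posted_at - EPOCH).total_seconds()
            x = upvotes - downvotes
            y = 1 if x > 0 else (-1 if x < 0 else 0)
            z = max(abs(x), 1)          # guard so log10() is always defined (assumption)
            return log10(z) + (y * t) / 45000

        print(hot_score(120, 40, datetime(2010, 5, 1, 12, 0)))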

    Read the article

  • Why is Tokyo Tyrant so slow

    - by Tantra
    I have the following situation: a Tyrant server launched on a FreeBSD host, like this:

        ttserver -uas -log /data/tyrant/1.log -sid 1 -thnum 8 -tout 5 /data/tyrant/data/1.tct

    And I try to talk to this server from Windows, using Python and pyrant 0.3.5, like this:

        import pyrant
        import time

        t = pyrant.Tyrant(host="192.168.0.220", port=1978)
        tbegin = time.time()
        for i in xrange(4000000):
            if i and ((i % 10000) == 0):
                print time.time() - tbegin
                tbegin = time.time()
            t[i] = {"text": "ruslan text", "value": i}

    and I get what I think is very slow performance: about 5-6 per 10,000 records. But if I run this code on the same machine as the server (ttserver), performance is good - about 0.5 sec per 10,000 records. What must I do to work around this problem?

    Read the article

  • Design suggestions for creating document management structure using hidden shares.

    - by focus.nz
    I need to add some document management functionality to my software. Documents will be grouped by company name and project name. The folders need to be accessed by the application using the ID numbers of clients/projects, but also easily browsed by the end user using Windows Explorer. Clients and projects will be stored in a database. I am thinking of having the software create the folders using the friendly name, and then using a hidden share with the ID number for the software to access the files. The folder structure would be something like this:

        Company 1 (Company-1234$)
            Project 101 (Project-101$)
            Project 102 (Project-102$)
            Project 103 (Project-103$)
        Company 2 (Company-5678$)
            Project 201 (Project-201$)
            Project 202 (Project-202$)
            Project 203 (Project-203$)

    So in the example above there would be a company called "Company 1" with an ID of "1234". When browsing the folders using Windows Explorer, the user would see \\ServerName\Documents\Company1, and you could also access the same folder from \\ServerName\Documents\Company-1234$. By using the hidden share, if the company name changes or it is renamed for some reason, the link in the application doesn't break, because the application uses the hidden share based on the ID, which never changes. Will having hundreds (maybe thousands) of hidden shares on a server cause a huge performance hit? Does anyone have any suggestions or alternatives for providing this feature?
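
    For what it's worth, a rough sketch (paths and names invented, needs admin rights) of how the application side could create the friendly folder and its matching hidden share from Python on Windows - a share name ending in "$" is hidden from browsing but still reachable by its UNC path. It shells out to the stock "net share" command, so treat the exact syntax as an assumption to verify on the target server.

        import os
        import subprocess

        def create_project_share(base, company_name, project_name, project_id):
            folder = os.path.join(base, company_name, project_name)
            os.makedirs(folder, exist_ok=True)
            share_name = "Project-%d$" % project_id          # trailing $ makes the share hidden
            subprocess.run('net share %s="%s"' % (share_name, folder),
                           shell=True, check=True)
            return r"\\%s\%s" % (os.environ.get("COMPUTERNAME", "SERVER"), share_name)

        print(create_project_share(r"D:\Documents", "Company 1", "Project 101", 101))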

    Read the article
