Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.


  • Why are SQL functions faster than UDFs?

    - by Zerotoinfinite
    Though it's quite a subjective question, I feel it necessary to share it on this forum. I have personally experienced that when I create a UDF (even a simple one) and use it in my SQL, it drastically decreases performance. But when I use SQL's built-in functions, they run much faster; conversion, logical & string functions are clear examples of this. So my question is: why are SQL built-in functions faster than UDFs? It would be a bonus if someone could also show me how to judge/estimate a function's cost, either mathematically or logically.
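
    A minimal T-SQL sketch of the effect described above (the table, column, and function names are hypothetical). A scalar UDF is invoked once per row and its body is opaque to the optimizer, while a built-in function is folded directly into the execution plan:

      -- Scalar UDF: called row by row, invisible to the optimizer.
      CREATE FUNCTION dbo.GetOrderYear (@d DATETIME) RETURNS INT
      AS
      BEGIN
          RETURN YEAR(@d);
      END;
      GO
      -- Typically slow: one UDF invocation per row of Orders.
      SELECT dbo.GetOrderYear(OrderDate) FROM Orders;
      -- Typically fast: the built-in is inlined into the plan.
      SELECT YEAR(OrderDate) FROM Orders;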

    Read the article

  • select subquery cause mysql server to be unresponsive

    - by Xiao
    DELETE FROM item_measurement WHERE measurement_id IN ( SELECT (-id) AS measurement_id FROM invoice_item WHERE invoice_id = 'A3722' ) I've really tried hard to find what's wrong with this code. I ran it from a PHP page, which doesn't respond, and I also tried the same line in phpMyAdmin: an infinite spinning circle, and I had to restart the server (MAMP on Mac OS X 10.9). No error was shown in the browser. If I run the DELETE and the SELECT separately, they both finish very quickly, so I don't think it's a performance issue; the separate runs each took < 0.1 sec. Thanks for any help in advance. Edit: I also found that a simple SELECT statement will likewise hang MySQL: select * from item_measurement where measurement_id in (select -id from invoice_item where invoice_id='A3722')
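
    Older MySQL versions (before 5.6) often execute IN (SELECT ...) as a dependent subquery that is re-evaluated per candidate row, which can make exactly this pattern appear to hang on large tables. A common workaround, sketched here against the same schema, is to rewrite it as a multi-table DELETE with a JOIN:

      -- Hedged rewrite: join instead of IN (SELECT ...), so the
      -- inner query is evaluated once rather than per row.
      DELETE im
      FROM item_measurement AS im
      JOIN invoice_item AS ii ON im.measurement_id = -ii.id
      WHERE ii.invoice_id = 'A3722';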

    Read the article

  • Is a selector like *+* safe to use?

    - by mcmullins
    I recently came across this CSS selector while trying to find a way to easily space out major blog elements such as paragraphs and images. An example of its use would be something like this: .post *+* {margin-top: 15px;} /* or... */ .post > *+* {margin-top: 15px;} /* if you don't want the margin to apply to nested elements */ At first glance it seemed pretty useful, so my question is: what downsides are there to using these selectors? Specifically: What's the browser support like? Are there cases where you wouldn't want an even margin between elements in an article, and if so, is it easier to declare this first and then override it, or simply to declare each element individually? Does this have performance issues, since you're selecting everything twice?

    Read the article

  • Magento: server requirements for a quite big shop to run smoothly

    - by david parloir
    Hi, I'm working on quite a big Magento build: it will have 50 different shops (1 Magento install, 1 admin to rule them all) to start with, a number expected to rise in the future, and a catalog of more than 1k products shared by all shops. I'm concerned about the server requirements I need for this to run smoothly. So far this is what I've found to get the most out of it: caching (Magento's cache with APC, plus the MySQL query cache); image sprites in the theme; FastCGI instead of mod_php; database clustering (I don't think it will be necessary for 1k products, what do you think?); using Zend Server. Are there other things I can do to improve Magento's performance? I'd like to know everything I need from the beginning so I can find the right server. Thanks in advance.

    Read the article

  • SQL Server, fetching data from multiple joined tables. Why is it slow?

    - by user562192
    I have a performance problem retrieving data from SQL Server. My SQL query looks something like this: SELECT table_1.id, table_1.value, table_2.id, table_2.value, ..., table_20.id, table_20.value FROM table_1 INNER JOIN table_2 ON table_1.id = table_2.table_1_id INNER JOIN table_3 ON table_2.id = table_3.table_2_id ... WHERE table_1.row_number BETWEEN 1 AND 20 So I am fetching 20 results, and the query takes about 5 seconds to execute. When I select only table_1.id, it returns results instantly. Because of that, I guess the problem is not in the JOINs but in retrieving data from multiple tables. Any suggestions on how I could speed up this query?
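
    One hedged rewrite worth trying against this (mock) schema: reduce table_1 to the 20 rows in a derived table before joining, so the range predicate is applied first and the joins only touch those rows (shown for two tables):

      SELECT t1.id, t1.value, t2.id, t2.value
      FROM (
          SELECT id, value
          FROM table_1
          WHERE row_number BETWEEN 1 AND 20
      ) AS t1
      INNER JOIN table_2 AS t2 ON t2.table_1_id = t1.id;

    If that changes nothing, the actual execution plan (and covering indexes on each join key) is the next place to look.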

    Read the article

  • Best way to have unique key over 500M varchar(255) records in mysql/innodb?

    - by taw
    I have a url column with a unique key over it, but its performance on updates is absolutely atrocious. I suspect that's because the index doesn't all fit in memory. So I was thinking: how about adding a column containing md5(url) as 16 bytes of binary data and unique-keying that instead? What would be the best datatype for it? I'd love to be able to just see the 32-character hex hash, while MySQL converts it to/from 16 binary bytes and indexes those, as programs using the database might have trouble with arbitrary binary data that I'd rather avoid if possible (also, I'm a bit afraid that MySQL might get strange ideas about character sets and, for example, over-allocate storage 3:1 because it thinks it might need utf8; how do I avoid that for sure?).
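
    One way that addresses both worries, sketched with a hypothetical table name: BINARY(16) is fixed-width and has no character set, so there is no utf8 over-allocation, and UNHEX()/HEX() convert between the 16 raw bytes and the familiar 32-character digest:

      -- Add the hash column, populate it, then key it.
      ALTER TABLE urls ADD COLUMN url_md5 BINARY(16);
      UPDATE urls SET url_md5 = UNHEX(MD5(url));
      ALTER TABLE urls ADD UNIQUE KEY uk_url_md5 (url_md5);

      -- Lookups: 16-byte comparisons, hex in and hex out.
      SELECT HEX(url_md5)
      FROM urls
      WHERE url_md5 = UNHEX(MD5('http://example.com/'));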

    Read the article

  • What's the most efficient way to manage large datasets with Javascript/jQuery in IE?

    - by RenderIn
    I have a search that returns JSON, which I then transform into an HTML table in Javascript, repeatedly calling the jQuery.append() method, once per row. I have a modern machine, and the Firefox response time is acceptable, but in IE 8 it is unbearably slow. I decided to move the data-to-HTML transformation into the server-side PHP, changing the return type from JSON to HTML; now, rather than calling jQuery.append() repeatedly, I call the jQuery.html() method once with the entire table. I noticed Firefox got faster, but IE got slower. These results are anecdotal and I have not done any benchmarking, but the IE performance is very disappointing. Is there something I can do to speed up the manipulation of large amounts of data in IE, or is it simply a bad idea to process this much data at once with AJAX/Javascript?

    Read the article

  • Go, AppEngine: How to structure templates for application

    - by laslowh
    How are people handling the use of templates in their Go-based AppEngine applications? Specifically, I'm looking for a project structure that affords the following: a hierarchical (directory) structure of templates and partial templates; letting me use HTML tools/editors on my templates (embedding template text in xxx.go files makes this difficult); automatic reload of template text on the dev server. Potential stumbling blocks are: template.ParseGlob() will not traverse recursively, and for performance reasons it has been recommended not to upload your templates as raw text files (because those text files reside on different servers than the executing code). Please note that I am not looking for a tutorial/examples of the use of the template package; this is more of an app-structure question. That being said, if you have code that solves the above problems, I would love to see it. Thanks in advance.

    Read the article

  • Take advantage of multiple cores executing SQL statements

    - by willvv
    I have a small application that reads XML files and inserts the information into a SQL DB. There are ~300,000 files to import, each one with ~1,000 records. I started the application on 20% of the files and it has been running for 18 hours now; I hope I can improve this time for the rest of the files. I'm not using a multi-threaded approach, but since the computer I'm running the process on has 4 cores, I was thinking of doing so to get some improvement in performance (although I guess the main problem is the I/O, not just the processing). I was thinking of using the BeginExecuteNonQuery() method on the SqlCommand object I create for each insertion, but I don't know if I should limit the maximum number of simultaneous threads (nor do I know how to do that). What's your advice for getting the best CPU utilization? Thanks
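
    Before adding threads, it may be worth cutting the number of round-trips: batching rows and wrapping batches in transactions often helps more than parallelism when the bottleneck is I/O. A hedged SQL Server sketch (the table and columns are made up; multi-row VALUES requires SQL Server 2008+):

      BEGIN TRANSACTION;
      INSERT INTO records (file_id, field_a, field_b)
      VALUES (1, 'a1', 'b1'),
             (1, 'a2', 'b2'),
             (1, 'a3', 'b3');  -- ...up to ~1000 rows per statement
      COMMIT;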

    Read the article

  • One-to-one table relation - is it harmful to keep the relation in both tables?

    - by EBAGHAKI
    I have 2 tables whose rows have a one-to-one relation. To understand the situation, suppose one table holds user information and another holds a very specific kind of information, and each user can link to only one of these specific records (suppose the second table holds characters), and a character can only be assigned to the user who grabs it. Is it against the rules of clean database design to hold the relation key in both tables? User table: user_id, name, age, character_id. Character table: character_id, shape, user_id. I have to do it for performance; what do you think about it?
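
    For reference, a common alternative that keeps the one-to-one guarantee with a single key (a sketch; the column types are assumptions): put the foreign key on one side only and declare it UNIQUE, so no character can be claimed twice and the two keys can never disagree:

      CREATE TABLE users (
          user_id INT PRIMARY KEY,
          name    VARCHAR(100),
          age     INT
      );

      CREATE TABLE characters (
          character_id INT PRIMARY KEY,
          shape        VARCHAR(50),
          user_id      INT UNIQUE,  -- UNIQUE enforces at most one character per user
          FOREIGN KEY (user_id) REFERENCES users(user_id)
      );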

    Read the article

  • In Python, is it better to use list comprehensions or for-each loops?

    - by froadie
    Which of the following is better to use, and why? Method 1: for k, v in os.environ.items(): print "%s=%s" % (k, v) Method 2: print "\n".join(["%s=%s" % (k, v) for k, v in os.environ.items()]) I tend to lean towards the first as more understandable, but that might just be because I'm new to Python and list comprehensions are still somewhat foreign to me. Is the second way considered more Pythonic? I'm assuming there's no performance difference, but I may be wrong. What would be the advantages and disadvantages of these 2 techniques? (Code taken from Dive Into Python)

    Read the article

  • Can JPA do batch update | put | write | insert as pm.makePersistentAll() does in GAE/J

    - by Kenyth
    I searched through multiple discussions here; can someone just give me a quick and direct answer? And if a batch update isn't possible with JPA, what if I skip the transaction and just use the following flow: em = emf.createEntityManager() // do some query // make some data modification em.persist(..) // do some query // make some data modification em.persist(..) // do some query // make some data modification em.persist(..) ... em.close() How does this compare to a batch update in terms of performance, and to a single transaction commit, measured in RPC calls to the datastore server, CPU cycles per request, and so on? Does every call to em.persist(..) before em.close() trigger an RPC call to the datastore server? Thanks very much for any response!

    Read the article

  • Can I Use EF Across Multiple DBs in One SQLServer Instance?

    - by thomashubschman
    Hello, I have been searching the blogs and articles but have not found much support for this scenario. I have been poking around EF and realized that I could create views that contain data from multiple databases and then build the EF object model off of those views. Although it works, I am not sure about the usual issues of performance, scalability, and maintainability. The way I achieve the connection between databases is by creating associations in the EF model. Does anyone have any information about this type of implementation, either another solution or commentary on this proposed one? Thanks, Tom
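
    For concreteness, a minimal sketch of the cross-database view being described (the database, schema, and table names are hypothetical; both databases sit in the same SQL Server instance). EF then maps the view like any read-only table:

      CREATE VIEW dbo.vw_CustomerOrders AS
      SELECT c.CustomerId, c.Name, o.OrderId, o.Total
      FROM   SalesDb.dbo.Customers AS c
      JOIN   OrdersDb.dbo.Orders   AS o
             ON o.CustomerId = c.CustomerId;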

    Read the article

  • What's the right way to do polymorphism with protocol buffers?

    - by user364003
    I'm trying to serialize, for the long term, a bunch of objects related by a strong class hierarchy in Java, and I'd like to use protocol buffers to do it, due to their simplicity, performance, and ease of upgrade. However, they don't provide much support for polymorphism. Right now I'm handling it with a "one message to rule them all" solution: the message has a required string uri field that allows me to instantiate the correct type via reflection, then a bunch of optional fields for all the other possible classes I could serialize, only one of which is used (based on the value of the uri field). Is there a better way to handle polymorphism, or is this as good as it's going to get?

    Read the article

  • Declaration and initialization of local variables - what is most C++ like?

    - by tuergeist
    Hi, I did not find any suitably answered questions yet, so I'd like to know what is "better" C++ style in terms of performance and/or memory. Both snippets are inside a method. The question is: when should prio be declared, and what are the implications? Code 1: while (!myfile.eof()) { getline(myfile, line); long prio = strtol(line.c_str(), NULL, 10); // prio is declared here // put prio in map... // some other things } Code 2: long prio; // prio is declared here while (!myfile.eof()) { getline(myfile, line); prio = strtol(line.c_str(), NULL, 10); // put prio in map... // some other things }

    Read the article

  • Getting / building the latest NHibernate build

    - by Alex Yakunin
    I'd like to get the latest NHibernate build, or build it on my own. The build available at SourceForge is dated Nov 2009, although I can see there was a lot of activity later, especially related to LINQ development. So what is the best option? I can: Get the latest source code and try to build it; are there any instructions for this? Or get one of the latest builds shared by someone else; are there people maintaining such builds? Please note that I'm not interested in 8-month-old builds; I need the latest code for tests (LINQ, performance). I know there is a similar question, but it looks like the top answers there are outdated.

    Read the article

  • Calculate time taken by each cpp file to compile in VS2005?

    - by Rajiv Podar
    Hi guys, I am writing a tool that can be used to build a matrix of the project's current performance. For that I need the time taken to compile each file. I tried the following option but didn't succeed :( Tools > Options > Projects & Solutions > VC++ Project Settings > Build Timing = Yes. With this option I am able to get the total time taken to build the whole solution, but my problem is getting the time for each file. I am using VS2005, so if anyone has any idea, please reply.

    Read the article

  • How to get the keyword match number for many categories?

    - by Mike108
    How do I get the number of keyword matches for many categories? The scenario is that when I type a product keyword, I want to get the number of matching items in each category. For example, when I type the keyword "iphone", the page will show the match count per category: Mobile(5) battery(2) app(6) typeA(2) typeB(9) typeC(15) typeD(9) typeE(7) typeF(8) ... typeZ(5) How do I implement this with good performance? I use C# ASP.NET.
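
    If the data lives in SQL, one common approach is a single aggregate query rather than one COUNT per category (a sketch; the schema is invented for illustration):

      SELECT c.name AS category, COUNT(*) AS matches
      FROM   products   p
      JOIN   categories c ON c.category_id = p.category_id
      WHERE  p.name LIKE '%iphone%'
      GROUP BY c.name;

    The page then renders each "category(count)" pair from one result set, in one round-trip.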

    Read the article

  • How to create an English-language dictionary application with Python (Django)?

    - by sintaloo
    Hi all, I would like to create an online dictionary application using Python (or Django); it will be similar to http://dictionary.reference.com/. My questions are: (1) Are there any existing open-source Python packages, modules, or applications that implement this functionality, which I could use or study? (2) If the answer to the first question is no, which approach should I follow to create such a web application? Can I simply use Python's built-in dictionary object for the job, with the English word as the key and the explanation as the value? Is that OK in terms of performance, or do I have to create my own tree structure to speed up the search? Or is there an existing package which handles this job properly? Thank you very much.

    Read the article

  • Efficient job progress update in web application

    - by Endru6
    Hi, I am creating a web application (Django in my case, but I think the question is more general) that administers a cluster of workers doing queued jobs, and there is a need to track each job's progress. When I do this with database UPDATEs (PostgreSQL in this case), it severely hurts database performance, because under MVCC each UPDATE writes a new row version, and in my case only vacuuming the DB removes the obsolete versions. With 30 jobs running and reporting progress every minute, the DB may require vacuuming (which means huge slowdowns on the front end for all the employees working with the system) every 10 days. Because the progress information isn't critical, i.e. it doesn't have to be persistent, how would you push progress updates from jobs without the overhead a database implies? There are 30 worker servers, each running 1 or 2 jobs simultaneously, 1 front-end server which serves the web application to users, and 1 database server.

    Read the article

  • INSERT INTO ... SELECT ... vs dumping/loading a file in MySQL

    - by Daniel Huckstep
    What are the implications of using an INSERT INTO foo ... SELECT ... FROM bar JOIN baz ... style insert statement, versus using the same SELECT statement to dump (bar, baz) to a file and then loading that file into foo? In my messing around I haven't seen a huge difference. I would assume the former uses more memory, but the machine this runs on has 8 GB of RAM, and I never see it go past half used. Are there any big (or long-term) performance implications that I'm not seeing? Advantages/disadvantages of either?
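
    The two variants side by side, as a sketch against the question's placeholder tables (the column names are invented). In MySQL the file route is SELECT ... INTO OUTFILE followed by LOAD DATA INFILE, which skips per-statement overhead and can win on very large sets:

      -- Direct:
      INSERT INTO foo (a, b)
      SELECT bar.a, baz.b
      FROM bar JOIN baz ON baz.bar_id = bar.id;

      -- Via a file:
      SELECT bar.a, baz.b
      FROM bar JOIN baz ON baz.bar_id = bar.id
      INTO OUTFILE '/tmp/foo_dump.tsv';

      LOAD DATA INFILE '/tmp/foo_dump.tsv' INTO TABLE foo (a, b);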

    Read the article

  • Optimal search queries

    - by Macros
    Following on from my last question (http://stackoverflow.com/questions/2788082/sql-server-query-performance), and having discovered that my method of allowing optional parameters in a search query is suboptimal, does anyone have guidelines on how to approach this? For example, say I have an application table, a customer table, and a contact details table, and I want to create an SP that allows searching on some, none, or all of surname, home phone, mobile, and app ID. I might use something like the following: select * from application a inner join customer c on a.customerid = c.id left join contact hp on (c.id = hp.customerid and hp.contacttype = 'homephone') left join contact mob on (c.id = mob.customerid and mob.contacttype = 'mobile') where (a.ID = @ID or @ID is null) and (c.Surname = @Surname or @Surname is null) and (HP.phonenumber = @Homephone or @Homephone is null) and (MOB.phonenumber = @Mobile or @Mobile is null) The schema used above isn't real, and I wouldn't use select * in a real-world scenario; it is the construction of the where clause I am interested in. Is there a better approach, either dynamic SQL or an alternative which achieves the same result without the many nested conditionals? Some SPs may have 10-15 criteria used in this way.
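
    One widely used pattern for these "catch-all" searches in SQL Server, sketched against the same mock schema: keep the optional predicates but append OPTION (RECOMPILE), so each execution gets a plan compiled for the parameter values actually supplied instead of one generic plan:

      select a.ID, c.Surname
      from application a
      inner join customer c on a.customerid = c.id
      where (a.ID = @ID or @ID is null)
        and (c.Surname = @Surname or @Surname is null)
      option (recompile);

    The trade-off is a compile on every call, which is usually acceptable for search screens but worth measuring.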

    Read the article

  • InvalidCastException: System.Web.UI.PartialCachingControl -> MyCustomControl when OutputCaching

    - by marcinn
    The problem: I am unable to use output caching with my controls, which derive from MyCustomControl. The controls are loaded dynamically, using definitions from a database, with the Page.LoadControl method. When I add <%@ OutputCache VaryByParam="*" Duration="3600" %> to the ascx, an "InvalidCastException: System.Web.UI.PartialCachingControl -> MyCustomControl" exception is thrown. I am unable to modify the assembly which contains the dynamic control-loading logic. Is there any way to fix this in the derived controls? The second question is about IIS7 and native output caching: does it resolve this problem? (I tried to set up several performance counters and I saw that the cache wasn't hit...)

    Read the article

  • When is BIG, big enough for a database?

    - by David ???
    I'm developing a Java application that has performance at its core. I have a list of some 40,000 "final" objects, i.e. initialization input data of 40,000 vectors. This data is unchanged throughout the program's run. I always perform lookups against a single ID property to retrieve the proper vectors. Currently I am using a HashMap over a sub-sample of 1,000 vectors, but I'm not sure it will scale to production. When is big actually big enough to warrant a database? One more thing: an SQLite DB is a viable option, as no concurrency is involved, so I guess the "threshold" for DB use is perhaps lower.

    Read the article

  • SQL alert for a stored procedure?

    - by superdupersomething
    I have a SQL Server 2005 setup and am rather new :) I've been cracking at this for a few hours and just need some help. I have been able to successfully set up alerts for the standard "SQL Server performance events", which is fun, so I already have email alerts working. However, I need an alert to run a stored procedure I have created and, depending on its output, email me or not. So far I have been trying to use WMI events, but I keep getting the error "The @wmi_query could not be executed in the @wmi_namespace provided. Verify that an event class selected in the query exists in the namespace and that the query has the correct syntax." The query definitely works, so I have no idea... Is there a different way to do this?
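
    A common alternative to WMI alerts here: schedule a SQL Server Agent job that runs the check and mails only on failure via Database Mail. A hedged sketch (the procedure name, mail profile, and address are hypothetical; sp_send_dbmail ships with SQL Server 2005 and later):

      DECLARE @result INT;
      EXEC @result = dbo.usp_MyHealthCheck;  -- assumed SP: returns 0 when all is well

      IF @result <> 0
          EXEC msdb.dbo.sp_send_dbmail
               @profile_name = 'AlertsProfile',
               @recipients   = 'dba@example.com',
               @subject      = 'Health check failed',
               @body         = 'dbo.usp_MyHealthCheck returned a non-zero result.';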

    Read the article
