Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.


  • Is there any advantage for using a library other than Hibernate for JPA?

    - by Jeduan Cornejo
    Hi, I've been using JPA for some time now and have been on projects where we've used both Hibernate Annotations and TopLink Essentials. AFAIK the project leader chose TopLink because NetBeans had it integrated and it seemed the easy thing to do. However, when looking for help, most of the literature seems to assume that you are using Hibernate as the JPA provider. So the question is: have you found any advantage, performance or otherwise, in not using Hibernate, the de-facto standard JPA provider?

    Read the article

  • Will First() perform the OrderBy()?

    - by Martin
    Is there any difference in (asymptotic) performance between Orders.OrderBy(order => order.Date).First() and Orders.Where(order => order.Date == Orders.Max(x => x.Date)); i.e. will First() perform the OrderBy()? I'm guessing no. MSDN says enumerating the collection via foreach or GetEnumerator does, but the phrasing does not exclude other extensions.

    Read the article

  • database design: one table with a large number of columns (50+) or many sub-tables with a small number of columns?

    - by Guillaume
    In our project we already have a lot of tables (100+). Some of them contain many columns (50-100), and we face the need to add more columns from time to time. What do you think is best, from a maintenance and performance point of view: to split these huge tables into smaller entities, or to keep the tables the way they are? We are using an ORM tool, so we don't need to write custom queries.

    Read the article

  • How to choose between UUIDs, autoincrement/sequence keys and sequence tables for database primary keys?

    - by Tim
    I'm looking at the pros and cons of these three primary methods of coming up with primary keys for database rows. So, assuming I am using a database that supports more than one of these methods, is there a simple heuristic to determine the best option for me? What bearing do considerations such as distributed/multiple masters, performance requirements, ORM use, security, and testing have on the choice? Are there any unexpected drawbacks one might run into?

    Read the article

  • solr schema for article->paragraph structure

    - by Ke
    Hi guys, I want to index some articles and show the paragraph number in the search results. So I guess the Solr schema should look like this: article_id, paragraph_number, paragraph_content. Therefore, I need to parse each article first, extract its paragraphs, and index them one by one. I'm worried about the performance, since one article can contain 100 paragraphs. Any suggestions?
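
    For illustration, a minimal sketch of the parsing step, assuming paragraphs are separated by blank lines and each paragraph becomes its own Solr document. The field names mirror the proposed schema; the pysolr client, core URL, and file name are assumptions:

        import pysolr  # assumed client; any Solr client that posts JSON docs would do

        def paragraph_docs(article_id, text):
            """Split an article on blank lines, yielding one Solr doc per paragraph."""
            paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
            for n, para in enumerate(paragraphs, start=1):
                yield {
                    "id": f"{article_id}-{n}",       # unique key per paragraph
                    "article_id": article_id,
                    "paragraph_number": n,
                    "paragraph_content": para,
                }

        solr = pysolr.Solr("http://localhost:8983/solr/articles")  # assumed core
        solr.add(list(paragraph_docs("article-42", open("article-42.txt").read())))

    Batching all of an article's paragraphs into one add() call keeps indexing to a single round trip per article, which usually matters more for throughput than the raw paragraph count.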

    Read the article

  • Recommended book for SQL Server query optimisation

    - by Patrick Honorez
    Even though I have passed a certification exam on SQL Server design and implementation, I have no clue how to trace/debug/optimise performance in SQL Server. Now the database I built is really business-critical and getting big, so it is time for me to dig into optimisation, especially regarding when/where to add indexes. Can you recommend a good book on this subject? (Smaller is better :) Just in case: I am using SQL Server 2008. Thanks

    Read the article

  • Automatic people counting + twittering.

    - by c2h2
    I want to develop a system that accurately counts people going through a normal 1-2m wide door, tweets whenever someone goes in or out, and reports how many people remain inside. Now, the Twitter part is easy, but the people counting is difficult. There are some existing counting solutions, but they do not quite fit my needs. My idea/algorithm: should I mount an infra-red camera on top of the door, monitor it constantly, divide the camera image into a grid, and work out who has entered and left? Can you give me some suggestions and a starting point?
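
    For illustration, a minimal sketch of the counting logic only, assuming some upstream vision step on the overhead camera (background subtraction, say) already yields one tracked blob centroid per person per frame; the LINE position, track ids, and tweet hook are all assumptions:

        # Count entries/exits by watching blob centroids cross a virtual line
        # drawn across the doorway (row y = LINE in the overhead image).
        LINE = 240  # assumed pixel row of the counting line

        class DoorCounter:
            def __init__(self):
                self.inside = 0
                self.last_y = {}  # track_id -> previous centroid row

            def update(self, track_id, y):
                """Feed one centroid per tracked person per frame."""
                prev = self.last_y.get(track_id)
                if prev is not None:
                    if prev < LINE <= y:        # crossed downward: entering
                        self.inside += 1
                        self.tweet(f"one in, {self.inside} inside")
                    elif prev >= LINE > y:      # crossed upward: leaving
                        self.inside = max(0, self.inside - 1)
                        self.tweet(f"one out, {self.inside} inside")
                self.last_y[track_id] = y

            def tweet(self, msg):
                print(msg)  # placeholder: post via the Twitter API here

    The part the sketch leaves out, producing stable per-person track ids from the camera frames, is exactly the hard part the question asks about.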

    Read the article

  • J2ME - Arrays vs Vector?

    - by Galaxy
    If we have two implementations of string split for J2ME, one returning a Vector and the other returning an array, which one is the better choice in terms of performance on handheld devices?

    Read the article

  • Python Logging across multiple classes and files; how to configure so as to be easily disabled?

    - by mellort
    Currently, I have something like this in all of my classes:

        # Import logging to log information
        import logging

        # Set up the logger
        LOG_FILENAME = 'log.txt'
        logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)

    This works well, and I get the output I want, but I would really like to have all of this in one place and be able to just do something like "import myLogger" and then start logging, and then hopefully be able to go into that one file and turn off logging when I need an extra performance boost. Thanks in advance
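
    For illustration, a minimal myLogger module along those lines (the module name comes from the question; the DISABLED flag and get_logger helper are assumptions):

        # myLogger.py -- the single place where logging is configured
        import logging

        DISABLED = False           # flip to True for a performance run
        LOG_FILENAME = 'log.txt'

        logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
        if DISABLED:
            # Suppress every call at or below CRITICAL, i.e. all of them.
            logging.disable(logging.CRITICAL)

        def get_logger(name):
            """Return a named logger; use get_logger(__name__) in each module."""
            return logging.getLogger(name)

    Each class's module then needs only "from myLogger import get_logger" and "log = get_logger(__name__)", and logging for the whole application is switched off in one place.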

    Read the article

  • Pure python implementation of greenlet API

    - by Tristan
    The greenlet package is used by gevent and eventlet for asynchronous IO. It is written as a C extension and therefore doesn't work with Jython or IronPython. If performance is of no concern, what is the easiest approach to implementing the greenlet API in pure Python? A simple example:

        def test1():
            print 12
            gr2.switch()
            print 34

        def test2():
            print 56
            gr1.switch()
            print 78

        gr1 = greenlet(test1)
        gr2 = greenlet(test2)
        gr1.switch()

    This should print 12, 56, 34 (and not 78).
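
    For illustration, one pure-Python approach: a minimal sketch that fakes cooperative switching with real threads, where every greenlet gets a thread but only one thread is ever unblocked at a time. It handles the example above but not the full greenlet API (no throw(), no switch() arguments):

        import threading

        class greenlet:
            _current = None   # the greenlet that currently holds control

            def __init__(self, run=None):
                self.run = run
                self.parent = greenlet._current    # a finished greenlet returns here
                self._resume = threading.Event()
                self._started = False

            def switch(self):
                caller = greenlet._current
                greenlet._current = self
                caller._resume.clear()             # clear before releasing the target
                if not self._started:
                    self._started = True
                    t = threading.Thread(target=self._bootstrap)
                    t.daemon = True                # a never-resumed greenlet must not block exit
                    t.start()
                else:
                    self._resume.set()             # wake the target's parked thread
                caller._resume.wait()              # park the caller until switched back to

            def _bootstrap(self):
                self.run()
                greenlet._current = self.parent    # on return, control goes to the parent
                self.parent._resume.set()

        greenlet._current = greenlet()             # implicit greenlet for the main thread

    With this in place the example prints 12, 56, 34: gr2 is never switched back to, so its parked daemon thread (and the unreached print 78) is simply abandoned when the program exits.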

    Read the article

  • Consistent hashing with memcache

    - by Industrial
    Hi everyone, I am setting up a new web app that will, on the client side, feature a multi-memcached-server environment for reliability and performance. Would it be wise for us to use something like Flexihash to distribute the data across the memcache servers? Reference: http://github.com/pda/flexihash Thanks!
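
    For illustration, a minimal sketch of the consistent-hash ring that libraries like Flexihash implement (the class, replica count, and server addresses are made up): each server is hashed to many points on a circle, and a key is owned by the first server point at or after the key's own hash, so adding or removing a server only remaps a small slice of keys:

        import bisect
        import hashlib

        class HashRing:
            def __init__(self, servers, replicas=64):
                self.replicas = replicas        # virtual points per server
                self._points = []               # sorted hash positions on the ring
                self._owner = {}                # hash position -> server
                for server in servers:
                    self.add(server)

            def _hash(self, key):
                return int(hashlib.md5(key.encode()).hexdigest(), 16)

            def add(self, server):
                for i in range(self.replicas):
                    point = self._hash(f"{server}#{i}")
                    self._owner[point] = server
                    bisect.insort(self._points, point)

            def get(self, key):
                """Return the server responsible for this cache key."""
                idx = bisect.bisect(self._points, self._hash(key)) % len(self._points)
                return self._owner[self._points[idx]]

        ring = HashRing(["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"])
        print(ring.get("session:abc123"))  # the same key always maps to the same server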

    Read the article

  • nonatomic property in model class when using NSOperationQueue (iPhone)?

    - by Andrew B.
    I have a custom model class with an NSMutableData ivar that will be accessed by custom NSOperation subclasses (using an NSOperationQueue). I think I can guarantee thread-safe access to the ivar from multiple NSOperations by using dependencies, and I can guarantee that I don't access the ivar from other code (say my main app thread) by waiting until the Q has finished all operations. Should I use a nonatomic property specification, or leave it atomic? Is there a significant impact on performance?

    Read the article

  • speed of map() vs. list comprehension vs. numpy vectorized function in python

    - by mcstrother
    I have a function foo(i) that takes an integer and a significant amount of time to execute. Will there be a significant performance difference between any of the following ways of initializing a:

        a = [foo(i) for i in xrange(100)]

        a = map(foo, range(100))

        vfoo = numpy.vectorize(foo)
        a = vfoo(range(100))

    (I don't care whether the output is a list or a numpy array.) Is there some other, better way of doing this? Thanks.
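
    For illustration, a minimal way to measure the three directly rather than guess (Python 3 here, so range replaces xrange; the dummy foo is an assumption, and since numpy.vectorize is essentially a Python-level loop, all three are expected to be dominated by the cost of foo itself):

        import timeit
        import numpy

        def foo(i):
            return i * i   # stand-in for the genuinely expensive function

        vfoo = numpy.vectorize(foo)

        candidates = {
            "list comprehension": lambda: [foo(i) for i in range(100)],
            "map": lambda: list(map(foo, range(100))),
            "numpy.vectorize": lambda: vfoo(range(100)),
        }

        for name, fn in candidates.items():
            # Average over many repetitions to smooth out timing noise.
            print(name, timeit.timeit(fn, number=10000))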

    Read the article

  • What are good RPC frameworks between a Java server and C++ clients?

    - by Zwei Steinen
    Hi, I am looking for an RPC stack that can be used between a Java server and C++ clients. My requirements are: ease of integration (for both C++ and Java); performance, especially the number of concurrent connections and response time; payloads that are mostly binary (8-100 KB). I found some options, such as: http://code.google.com/p/protobuf-socket-rpc/ and http://code.google.com/p/netty-protobuf-rpc/ Are there any other good alternatives?

    Read the article

  • How to switch from VARCHAR to TEXT in SQL 2000?

    - by MatthewMartin
    What do I need to consider before I switch a bunch of fields from VARCHAR(bignumber) to TEXT? Aside from performance, the fact that TEXT will some day be deprecated, and the fact that it looks like I need to drop and recreate the table to alter the column's data type? This is for SQL 2000; I can't use VARCHAR(max), and VARCHAR(8000) isn't large enough.

    Read the article

  • Migrated Windows application from VS2008 to VS2010

    - by Reddy M
    Hi, I have converted my Windows application projects from Visual Studio 2008 to Visual Studio 2010. Everything converted fine, but the application runs very slowly in 2010, while the same application performs well in 2008. Can anyone help me figure out what could be wrong? I am using the Visual Studio 2010 RC (Release Candidate) version.

    Read the article

  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications consisting mostly of radio buttons and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users. For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this:

        CREATE TABLE [dbo].[results](
            [id] [bigint] IDENTITY(1,1) NOT NULL,
            [userid] [int] NULL,
            [variable] [varchar](8) NULL,
            [value] [tinyint] NULL,
            [submitted] [smalldatetime] NULL)

    where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page, something like this:

        SELECT t.id, t.variable, t.value
        FROM results t WITH (NOLOCK)
        WHERE t.userid = '2111846'
          AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
          AND t.id IN (SELECT MAX(id) AS id
                       FROM results WITH (NOLOCK)
                       WHERE userid = '2111846'
                         AND (variable='internat' OR variable='veteran' OR variable='athlete')
                       GROUP BY variable)

    which, in this case, returns the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846. We have followed the advice of the database tuning tools in indexing the tables, and against our data this is the best-performing version of the select query we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly elegant solution in place for sharding the data across multiple tables, which has been working quite well, but I am open to any advice about how to construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides. So the question is: how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance. NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and there is some advantage) in using the NOLOCK directive here.

    Read the article

  • Reverse engineering a custom data file

    - by kerchingo
    At my place of work we have a legacy document management system that, for various reasons, is now unsupported by the developers. I have been asked to look into extracting the documents contained in this system, eventually to be imported into a new third-party system. From tracing and process monitoring, I have determined that the document images (mainly TIFF files) are stored in a number of 1.5GB files. These files seem to be read from a specific offset and then written to a temp file that is served via a web app to the client and then deleted. I guess I am looking for suggestions as to how I can inspect these large files that contain the TIFF images, and eventually extract and write them to individual files.
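
    For illustration, a minimal first pass at inspecting such a container, assuming the TIFFs sit at arbitrary offsets: scan for the two TIFF magic numbers (II*\0 little-endian, MM\0* big-endian) and record every candidate offset. The file name is hypothetical, and each candidate still needs validation, since four-byte patterns can occur by chance in other data:

        # Scan a large container file for embedded TIFF signatures, in chunks.
        TIFF_MAGICS = (b"II*\x00", b"MM\x00*")   # little- and big-endian TIFF headers

        def tiff_offsets(path, chunk_size=1 << 20):
            offsets = []
            overlap = 4                      # so a magic can't straddle two chunks
            with open(path, "rb") as f:
                pos, tail = 0, b""
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    buf = tail + chunk
                    for magic in TIFF_MAGICS:
                        i = buf.find(magic)
                        while i != -1:
                            offsets.append(pos - len(tail) + i)
                            i = buf.find(magic, i + 1)
                    tail = buf[-overlap:]
                    pos += len(chunk)
            return sorted(set(offsets))

        offsets = tiff_offsets("container-001.dat")   # hypothetical container file
        print(len(offsets), "candidate TIFF headers, first few at:", offsets[:10])

    Carving each image as the span from one signature to the next is a crude but workable start when nothing else is interleaved between the images.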

    Read the article

  • On a Hudson master node, what are the .tmp files created in the workspace-files folder?

    - by Patrick Johnmeyer
    Question: In the path HUDSON_HOME/jobs/<jobname>/builds/<timestamp>/workspace-files, there are a series of .tmp files. What are these files, and what feature of Hudson do they support?

    Background: Using Hudson version 1.341, we have a continuous build task that runs on a slave instance. After the build is otherwise complete, including archiving the artifacts, task scanner, etc., the job appears to hang for a long period of time. In monitoring the master node, I noted that many .tmp files were being created and modified under builds/<timestamp>/workspace-files, and that some of them were very large. This appears to be causing the delay, as the job completed at the same time that files in this path stopped changing.

    Some key configuration points of the job: it is tied to a specific slave node; it builds in a 'custom workspace'; it triggers a downstream job that builds in the same custom workspace on the same slave node.

    Read the article
