Search Results

Search found 13151 results on 527 pages for 'performance counters'.

Page 400/527

  • Sinatra Gem install error

    - by lakshmanan
    I have been trying to install Sinatra on a MacBook running Leopard, and I am not able to do it. I get the following error:

        MacBook:rubygems-1.3.7 lakshmanan$ gem install sinatra
        WARNING: RubyGems 1.2+ index not found for: http://rubygems.org/
        RubyGems will revert to legacy indexes degrading performance.
        Bulk updating Gem source index for: http://rubygems.org/
        ERROR: While executing gem ... (NoMethodError)
            undefined method `gems' for #<Array:0x101901008>

    Please help. I have also reinstalled RubyGems, but I still get the same error.


  • Running Awk command on a cluster

    - by alex
    How do you execute a Unix shell command (an awk script, a pipe, etc.) on a cluster in parallel (step 1) and collect the results back to a central node (step 2)?

    - Hadoop seems like huge overkill with its 600k LOC, and its performance is terrible (it takes minutes just to initialize a job).
    - I don't need shared memory, or something like MPI/OpenMP, as I don't need to synchronize or share anything.
    - I don't need a distributed VM or anything as complex.
    - Google's Sawzall seems to work only with Google's proprietary MapReduce API.
    - Some distributed shell packages I found failed to compile, but there must be a simple way to run a data-centric batch job on a cluster, something as close as possible to the native OS, maybe using Unix RPC calls.
    - I liked rsync's simplicity, but it seems to update remote nodes sequentially, and as far as I know you can't use it to execute scripts.
    - Switching to Plan 9 or some other network-oriented OS looks like another overkill.

    I'm looking for a simple, distributed way to run awk scripts or similar, as close to the data as possible, with minimal initialization overhead, in a nothing-shared, nothing-synchronized fashion. Thanks, Alex
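
    A minimal sketch of the kind of thing being asked for, assuming passwordless ssh to the machines listed in a hosts.txt file and a per-node data shard at /data/chunk (both names hypothetical):

        #!/bin/sh
        # Step 1: push the awk program to each node and run it against the
        # node's local shard, all nodes in parallel.
        for host in $(cat hosts.txt); do
            scp job.awk "$host:/tmp/job.awk"
            ssh "$host" "awk -f /tmp/job.awk /data/chunk" > "out.$host" &
        done
        wait                   # all remote jobs run concurrently
        cat out.* > result     # step 2: collect on the central node

    Nothing is shared or synchronized beyond the final wait, which matches the nothing-shared constraint above.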


  • rails using jruby 1.5 - slow!!

    - by gucki
    Hi! I'm currently using Passenger with REE 1.8.7 in production for a Rails 2.3.5 project using PostgreSQL as the database:

        ab -n 10000 -c 100: 285.69 [#/sec] (mean)

    I read that JRuby should be the fastest solution, so I installed jruby-1.5.0.RC2 together with the JDBC PostgreSQL adapter and GlassFish. As the performance was really poor, I also tried running my application using "jruby --server -J-Druby.jit.threshold=0 script/server -e production". Either way, I only get:

        ab -n 10000 -c 100: 43.88 [#/sec] (mean)

    config.threadsafe! is activated in my Rails config, and Java seems to use all cores; CPU usage is around 350% (top).

        ruby -v: jruby 1.5.0.RC2 (ruby 1.8.7 patchlevel 249) (2010-04-28 7c245f3) (Java HotSpot(TM) 64-Bit Server VM 1.6.0_16) [amd64-java]

    I wonder what I'm doing wrong, and how to get better performance with JRuby than with REE? Thanks, Corin


  • Symfony app - how to add calculated fields to Propel objects?

    - by Thomas Kohl
    What is the best way of working with calculated fields on Propel objects? Say I have an object "Customer" with a corresponding table "customers", where each column corresponds to an attribute of the object. What I would like to do is add a calculated attribute, "number of completed orders", to my object when using it in view A, but not in views B and C. The calculated attribute is a COUNT() of the "Order" objects linked to my "Customer" object via its ID. What I can do now is first select all Customer objects and then iteratively count the Orders for each of them, but doing it in a single query should perform much better. However, I cannot properly "hydrate" my Propel object that way, since it does not contain the definition of the calculated field(s). How would you approach this?
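
    In plain SQL the single-query version is a straightforward grouped join; the open question in Propel is only how to hydrate the extra column. A MySQL-flavoured sketch, with hypothetical table, column, and status names:

        SELECT c.*, COUNT(o.id) AS completed_orders  -- calculated, non-schema column
        FROM customers c
        LEFT JOIN orders o
               ON o.customer_id = c.id
              AND o.status = 'completed'             -- hypothetical status column
        GROUP BY c.id;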


  • How to populate List<string> with Datarow values from single columns...

    - by James
    Hi, I'm still learning (baby steps). I'm messing about with a function and hoping to find a tidier way to deal with my DataTables. For the tables used more commonly throughout the life of the program, I dump them to DataTables and query those instead. What I'm hoping to do is query a DataTable for, say, column x = "this", and convert the values of column "y" directly to a List<string> to return to the caller:

        private List<string> LookupColumnY(string hex)
        {
            List<string> stringlist = new List<string>();
            DataRow[] rows = tblDataTable.Select("Columnx = '" + hex + "'", "Columny ASC");
            foreach (DataRow row in rows)
            {
                stringlist.Add(row["Columny"].ToString());
            }
            return stringlist;
        }

    Does anyone know a slightly simpler method? I guess this is easy enough, but I'm wondering, if I do enough of these, whether iterating via a foreach loop won't become a performance hit. TIA!
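
    For what it's worth, a LINQ rendering of the same function is tighter but does essentially the same work, so it is a readability win rather than a performance one. A sketch, assuming the same tblDataTable field and a reference to System.Data.DataSetExtensions for Field<T>():

        // using System.Collections.Generic; using System.Data; using System.Linq;
        private List<string> LookupColumnY(string hex)
        {
            // Select() still does the row scan; LINQ only replaces the
            // accumulation loop. Note Field<string>() yields null for DBNull
            // where ToString() would have yielded "".
            return tblDataTable
                .Select("Columnx = '" + hex + "'", "Columny ASC")
                .Select(row => row.Field<string>("Columny"))
                .ToList();
        }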


  • Interspire to Magento migration

    - by patrikas
    Hello, I recently started with Magento and decided to migrate an Interspire shopping cart I built some time ago over to it. At first look Magento seems a very huge beast: lots of options, and maybe a lack of simplicity resulting in some performance loss. I've got the user guide, but I am not getting much benefit from it, since it just describes very ordinary tasks that I could easily discover myself by poking around the frontend/backend.

    My first tasks are category and product export. Interspire seems to export ONLY products, in three available formats:

    - Default
    - MYOB
    - Peachtree accounting

    I did some searching on Magento's product importing and found a blog post which says that I should create a few sample products with all the necessary attributes myself and then start the import. But what should I do with categories? Is it possible to import them, or to instruct Magento to automatically create a category when an unknown one is encountered while importing the product file? Thanks


  • Separate Database for Integration Testing

    - by john doe
    I am performing integration testing, where I fire up the ASPX pages using WatiN, fill in the fields, and insert into the database. There are a couple of problems that I am facing:

    1) Should I use a completely separate database for integration testing? I already have db_test and db_dev: db_test is for unit testing and is cleared after each test; db_dev is for developers.

    2) The WatiN tests are contained in a separate assembly (not the unit test assembly, which should be better, since the WatiN tests take so much time to run). When they run, they fire up the WebApps project and use its web.config, which points to the dev database. Is there any way I can tell the WatiN tests to use a separate web.config that contains a different database name?
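
    One possibility for question 2, sketched under the assumption that the connection string lives in <connectionStrings>: point the section at an external file via configSource and swap that file per environment (file and key names hypothetical):

        <!-- web.config: the section body lives in an external file -->
        <connectionStrings configSource="ConnectionStrings.config" />

        <!-- ConnectionStrings.test.config (copied to ConnectionStrings.config
             before a WatiN run; a dev variant points at db_dev instead) -->
        <connectionStrings>
          <add name="AppDb"
               connectionString="Server=.;Database=db_test;Integrated Security=SSPI;"
               providerName="System.Data.SqlClient" />
        </connectionStrings>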


  • Polling duplex does not scale... what's the alternative?

    - by user80855
    Our tests showed that the polling duplex binding simply does not scale, and cannot be used on a service within a web farm or even a web garden. We have looked at TCP/IP sockets for a client push method, but firewall issues do not allow us to use sockets. I was wondering what the alternative "free" solution to this problem is, one that allows us to scale and still push data to the client... I have also tried the solution in this article: http://tomasz.janczuk.org/2009/09/scale-out-of-silverlight-http-polling.html but in the end there was too much polling on the database, and performance was affected. Our Silverlight application needs a pub/sub design, but it needs to be reliable and scalable... any ideas?


  • How to simulate different CPU frequency and limit RAM

    - by user351103
    Hi, I have to build a simulator in C#. This simulator should be able to run a second thread with a configurable CPU speed and a limited RAM size, e.g. 144 MHz and 50 MB. Of course I know that a simulator can never be as accurate as the real hardware, but I am trying to get at least roughly similar performance. At the moment I'm thinking about creating a thread which I will stop/sleep from time to time: depending on the desired CPU speed, the simulator would adjust the sleep time of this thread and thereby simulate different CPU frequencies. To measure the achieved speed I thought about using PerformanceCounters. But with this approach I have the problem that I don't know how to limit the RAM size the thread can use. Do you have any ideas how to realize such a simulator? Thanks in advance!!
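
    The duty-cycle throttling described above could be sketched roughly like this (an illustration only: Ratio and the workload are hypothetical, Thread.Sleep granularity makes it coarse, and it does nothing about limiting RAM):

        using System;
        using System.Diagnostics;
        using System.Threading;

        class ThrottledWorker
        {
            // Fraction of host CPU speed to emulate, e.g. 144 MHz on a
            // 1440 MHz host ~= 0.1 (hypothetical numbers).
            const double Ratio = 0.1;

            public static void Run(Action step)
            {
                var sw = new Stopwatch();
                while (true)
                {
                    sw.Restart();
                    step();                       // one slice of simulated work
                    sw.Stop();
                    // Sleep so that busy time / wall time == Ratio.
                    int sleepMs = (int)(sw.ElapsedMilliseconds * (1 - Ratio) / Ratio);
                    if (sleepMs > 0) Thread.Sleep(sleepMs);
                }
            }
        }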


  • Situations to prefer Apache Lucene over Solr?

    - by Karussell
    There are several advantages to using Solr (out-of-the-box faceted search, grouping, replication, HTTP administration vs. Luke, ...). Even when embedding search functionality in my Java application, I could use SolrJ to avoid the HTTP trade-off of talking to Solr over the wire. So, when would you recommend using "pure Lucene"? Does it have better performance or require less RAM? Is it more unit-testable? PS: I am aware of this question.


  • Subclassing UIScrollView for drawing w/o views

    - by David Dunham
    I'm contemplating subclassing UIScrollView (the way UITextView does) to draw a fairly large amount of text (formatted in ways that NSTextView can't handle). So far the view won't actually scroll: I'm setting contentSize, and when I drag I see the scroll indicator, but nothing changes (and I don't get a drawRect: message). An alternate approach is to use a child view, and I've done this. The view can be over 5000 pixels high, however, and I'm a bit concerned about performance on an actual device. (The other approach, being like UITableView, would be a huge pain -- I'm "porting" Mac Cocoa code, and a collection of views would be a huge architecture change.) I've done some searching, but haven't found anyone who is using UIScrollView to do the drawing itself. Has anyone done this, and do you know of any pitfalls?


  • Google Analytics API - Tying Behavior to Specific Dates

    - by DavidS
    I am using the API to understand the performance of AdWords ad campaigns. I need to know how metrics are attributed back to the date dimension. For instance, for a given date, if I have 20 clicks, 18 visits, and 3 goal completions, does it mean that:

    1) All of these actions happened on the day in question but are otherwise independent (meaning that the 3 goals could have come from people who clicked any time in the past 30 days, not only those who clicked on that day), or

    2) The on-site actions are a subset of the click activity on that day (i.e. on that day, 20 people clicked, 18 registered a real visit, and 3 completed a goal)?

    If it is scenario 2, does that mean there is a need to refresh old rows every day? Thanks!


  • C# Process flow - Datastream, XML and datagrid

    - by Farstucker
    I'm looking for some advice/suggestions on how I should set up the workflow of a small application I'm building. When the application is launched, the datagrid will be populated from an XML file. Once running, the application will receive a data stream, with which I hope to update both the file and the datagrid. So I'm curious what you would suggest for the workflow: split the data from the stream and simultaneously populate the file and the grid, or populate the XML file first and set up a timer to have the grid read the file? I'm really looking for optimal performance.


  • Faking a dynamic schema in Core Data?

    - by Gouldsc
    From reading the Apple docs on Core Data, I've learned that you should not use Core Data when you need a dynamic schema. If I wanted to give the user the ability to create their own properties, would it work if I created some "dummy" attributes in the Core Data model, like "custom decimal 1", "custom decimal 2", "custom text 1", "custom text 2", etc., that the user could name and use for their own purposes? Obviously this won't work for relationships, but for simple properties it seems like a reasonable workaround. Will creating a bunch of dummy attributes on my entities that go unused by most users noticeably decrease performance for them? Have any of you tried something like this? Thanks!


  • FreeBSD or NetBSD based commercial TCP/IP stack vendor?

    - by Vineet
    Hi - I'm looking for recommendations for a commercial TCP/IP stack implementation based on FreeBSD or NetBSD. The requirements are similar to a typical desktop PC running a browser, email, and streaming voice/video, which is to say rich network functionality for an end-host type of device, with a mature implementation and reasonable performance. BSD-derived network stacks have been deployed in a wide variety of situations for years and hence are mature. The stack is supposed to run on a proprietary RTOS. Most vendors I found don't advertise whether their stack is based on BSD. Any recommendations? -- Vineet


  • How to avoid the same calculations on column values over and over again in a select?

    - by Peter
    I sometimes write SELECTs of the form:

        SELECT a.col1 + b.col2 * c.col4 AS calc_col1,
               a.col1 + b.col2 * c.col4 + xxx AS calc_col1_PLUS_MORE
        FROM ....
        INNER JOIN ... ON a.col1 + b.col2 * c.col4 < d.some_threshold
        WHERE a.col1 + b.col2 * c.col4 > 0

    When the calculations get rather involved and are used 3-5 times within the same SELECT, I would really like to factor them out into a function or similar, in order to 1) hopefully improve performance / make use of caching, and 2) avoid forgetting to update one of the 4 copies of the calculation when I later realize it needs to change. I usually have these SELECTs within SPs. Any ideas?
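
    One common way to write the expression exactly once is to compute it in a derived table (or CTE) and refer to the alias everywhere else; a sketch along the lines of the query above (equivalent for INNER JOINs, where a join condition can move to the WHERE clause):

        SELECT t.calc_col1,
               t.calc_col1 + xxx AS calc_col1_PLUS_MORE
        FROM (
            SELECT a.col1 + b.col2 * c.col4 AS calc_col1,   -- written once
                   d.some_threshold AS some_threshold
            FROM ....                                       -- same joins, minus the calc
        ) AS t
        WHERE t.calc_col1 > 0
          AND t.calc_col1 < t.some_threshold                -- former join condition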


  • mySQL and general database normalization question

    - by Sinan
    I have a question about normalization. Suppose I have an application dealing with songs. First I thought about doing it like this:

        Songs table:      id | song_title | album_id | publisher_id | artist_id
        Albums table:     id | album_title | etc...
        Publishers table: id | publisher_name | etc...
        Artists table:    id | artist_name | etc...

    Then, thinking about normalization, I wondered whether I should get rid of album_id, publisher_id, and artist_id in the songs table and put them in intermediate tables like this:

        Table song_album:     song_id, album_id
        Table song_publisher: song_id, publisher_id
        Table song_artist:    song_id, artist_id

    Now I can't decide which is the better way. I'm not an expert on database design, so if someone would point out the right direction, it would be awesome. Are there any performance differences between the two approaches? Thanks
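
    For what it's worth, the first design is the conventional one as long as each song has at most one album, publisher, and artist (many-to-one); the intermediate tables only buy anything when a relationship is many-to-many (e.g. one song appearing on several albums). A minimal MySQL sketch of the first form, with assumed column types and assuming the other three tables exist:

        CREATE TABLE songs (
            id           INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            song_title   VARCHAR(255) NOT NULL,
            album_id     INT UNSIGNED NOT NULL,
            publisher_id INT UNSIGNED NOT NULL,
            artist_id    INT UNSIGNED NOT NULL,
            FOREIGN KEY (album_id)     REFERENCES albums (id),
            FOREIGN KEY (publisher_id) REFERENCES publishers (id),
            FOREIGN KEY (artist_id)    REFERENCES artists (id)
        ) ENGINE=InnoDB;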


  • OpenCL: does it play well with OpenMP, can I connect other languages to it, etc.

    - by Cem Karan
    The 1.0 spec for OpenCL just came out a few days ago (the spec is here) and I've just started to read through it. I want to know if it plays well with other high-performance multiprocessing APIs like OpenMP (spec), and I want to know what I should learn. So, here are my basic questions:

    1) If I am already using OpenMP, will that break OpenCL, or vice versa?
    2) Is OpenCL more powerful than OpenMP? Or are they intended to be complementary?
    3) Is there a standard way of connecting an OpenCL program to a standard C99 program (or any other language)? What is it?
    4) Does anyone know if anyone is writing an OpenCL book? I'm reading the spec, but I've found books to be more helpful.
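
    On question 3: the spec itself defines the connection point, a C host API declared in cl.h, so a C99 program drives OpenCL through ordinary function calls and supplies kernels as source strings compiled at runtime. A compressed sketch (error checking omitted; the kernel is a made-up example):

        #include <CL/cl.h>

        int main(void)
        {
            cl_platform_id platform;  cl_device_id device;  cl_int err;
            clGetPlatformIDs(1, &platform, NULL);
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);
            cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
            cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

            /* Kernels are C99-like source compiled at runtime. */
            const char *src =
                "__kernel void twice(__global float *v) {"
                "    int i = get_global_id(0); v[i] = 2.0f * v[i]; }";
            cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
            clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
            cl_kernel k = clCreateKernel(prog, "twice", &err);

            float data[4] = { 1, 2, 3, 4 };
            cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                        sizeof data, data, &err);
            clSetKernelArg(k, 0, sizeof buf, &buf);
            size_t global = 4;
            clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
            clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof data, data, 0, NULL, NULL);
            return 0;   /* data now holds { 2, 4, 6, 8 } */
        }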


  • select only new row in oracle

    - by Hlex
    Hi, I have a table with a VARCHAR2 primary key and about 1,000,000 transactions per day. My app wakes up every 5 minutes to generate a text file by querying only the new records: it remembers the last point it reached and processes only what was added since.

    1) Do you have an idea how to make this query perform well? I am able to add a new column if needed.
    2) What do you think this process should be done in? PL/SQL? Java?
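
    A hedged sketch for question 1: add a monotonically increasing, sequence-filled column next to the VARCHAR2 key, index it, and range-scan from the remembered high-water mark on each run (all names hypothetical):

        ALTER TABLE txns ADD (seq_no NUMBER);         -- populated from a sequence on insert
        CREATE INDEX txns_seq_no_ix ON txns (seq_no);

        -- every 5 minutes, :last_seq = high-water mark saved by the previous run
        SELECT *
        FROM   txns
        WHERE  seq_no > :last_seq
        ORDER  BY seq_no;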


  • How can I combine sequential expression trees into a fast method?

    - by chillitom
    Suppose I have the following expressions:

        Expression<Action<T, StringBuilder>> expr1 = (t, sb) => sb.Append(t.Name);
        Expression<Action<T, StringBuilder>> expr2 = (t, sb) => sb.Append(", ");
        Expression<Action<T, StringBuilder>> expr3 = (t, sb) => sb.Append(t.Description);

    I'd like to be able to compile these into a method/delegate equivalent to the following:

        void Method(T t, StringBuilder sb)
        {
            sb.Append(t.Name);
            sb.Append(", ");
            sb.Append(t.Description);
        }

    What is the best way to approach this? I'd like it to perform well, ideally with performance equivalent to the above method.
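
    On .NET 4 one plausible route is Expression.Block plus an ExpressionVisitor that rebinds each lambda's parameters to a single shared pair, followed by a one-time Compile(); after that the delegate runs as ordinary IL. A sketch, not production code:

        using System;
        using System.Linq;
        using System.Linq.Expressions;
        using System.Text;

        static class ExprCombiner
        {
            public static Action<T, StringBuilder> Combine<T>(
                params Expression<Action<T, StringBuilder>>[] exprs)
            {
                var t  = Expression.Parameter(typeof(T), "t");
                var sb = Expression.Parameter(typeof(StringBuilder), "sb");

                // Rewrite each body to use the shared parameters, then
                // splice all bodies into one block and compile it.
                var bodies = exprs.Select(e =>
                    new Rebinder(e.Parameters[0], t, e.Parameters[1], sb).Visit(e.Body));
                var block = Expression.Block(bodies);
                return Expression.Lambda<Action<T, StringBuilder>>(block, t, sb).Compile();
            }

            sealed class Rebinder : ExpressionVisitor
            {
                readonly ParameterExpression oldT, newT, oldSb, newSb;

                public Rebinder(ParameterExpression oldT, ParameterExpression newT,
                                ParameterExpression oldSb, ParameterExpression newSb)
                {
                    this.oldT = oldT; this.newT = newT;
                    this.oldSb = oldSb; this.newSb = newSb;
                }

                protected override Expression VisitParameter(ParameterExpression node)
                {
                    if (node == oldT)  return newT;
                    if (node == oldSb) return newSb;
                    return base.VisitParameter(node);
                }
            }
        }

    Usage would then be something like ExprCombiner.Combine(expr1, expr2, expr3), yielding a single Action<T, StringBuilder>.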


  • jQuery/Javascript framework efficiency

    - by Russell
    My latest project uses a JavaScript framework (jQuery), along with some plugins (validation, jquery-ui, datepicker, facebox, ...) to help make a modern web application. I am now finding pages loading slower than I am used to. After some JS profiling (thanks VS2010!), it seems a lot of the time is spent processing inside the framework. Now I understand that the more complex the UI tools, the more processing needs to be done, but the project is not yet at a large stage and I would call these average functions; at this rate I can see it is not going to scale well. I noticed that things like the 'each' command in jQuery take quite a lot of processing time. Have others experienced extra latency when using JS frameworks? How do I minimise their effect on page performance? Are there best practices for implementing with JS frameworks? Thanks
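
    Two first-line jQuery habits that usually help, offered as general practice rather than a guaranteed fix for this project: cache selections instead of re-querying, and swap each() for a plain loop in hot paths, since each() pays a function call and a fresh jQuery wrapper per element:

        // Before: one wrapper object allocated per row inside the callback.
        $('.row').each(function () {
            $(this).addClass('seen');
        });

        // After: query once, loop over the raw DOM nodes.
        var $rows = $('.row');
        for (var i = 0, n = $rows.length; i < n; i++) {
            $rows[i].className += ' seen';
        }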


  • Cost of using repeated parameters

    - by Palimondo
    I am considering refactoring a few method signatures that currently take a parameter of type List or Set of a concrete class -- List[Foo] -- to use repeated parameters instead: Foo*. This would allow me to use the same method name and overload it based on the parameter type, which was not possible using List or Set, because List[Foo] and List[Bar] have the same type after erasure: List[Object]. In my case the refactored methods work fine with the scala.Seq[Foo] that results from the repeated parameter. I would have to change all the invocations and add a sequence argument type annotation to every collection argument: baz.doStuffWith(foos:_*). Given that switching from a collection parameter to a repeated parameter is semantically equivalent, does this change have some performance impact that I should be aware of? Is the answer the same for Scala 2.7._ and 2.8?


  • Subsonic SQLite Multiple Files

    - by Marcus Vinicius de LIma
    Hi, I have an application that must be accessed by many users. To optimize performance I intend to store each user's profile information in an independent database file. Every time a user logs in to the application, I need to set up a new provider linked to his own database. All the databases have the same structure, so when querying, the common generated DAL classes must switch to the database file belonging to that user. Is there a way to configure SubSonic to do that switch at runtime? Thanks.


  • How to modularize a b2b webservice transformation application

    - by hstoerr
    How would you modularize a large application that has some incoming (SOAP) webservices, some outgoing webservices, transformations between them and internal formats, internal logging services, access to external archiving webservices, and work that is delayed and handled asynchronously, and so forth?

    One way is to split the functionality into a collection of WARs, deploy all of them on one application server, and have them communicate via internal webservices. This has some overhead, especially if the messages are large, and you might run into performance problems due to thread count restrictions and so forth. Another way would be to put everything into one giant WAR so the parts can communicate directly, which is not exactly modularization. What would you do?


  • mod_php / mod_suphp / FastCGI | Which do you recommend and why.

    - by Saif Bechan
    I am at the point where I have to choose what type of setup my application should run on. I know there are several setups that Apache runs smoothly with, but they all have their downsides.

    System: Apache 2 / PHP 5.2

    I hope you can give me some tips from first-hand experience. To give you an example of what should be covered:

    - Performance
    - Ease of setup
    - Security

    I know this does not really involve programming, but I have seen posts concerning this, and I know that you guys/girls here are certainly qualified to comment on this subject.

