Search Results

Search found 13151 results on 527 pages for 'performance counters'.


  • Sinatra Gem install error

    - by lakshmanan
I have been trying to install Sinatra on a MacBook running Leopard, and I am not able to do it. I get the following error:

        MacBook:rubygems-1.3.7 lakshmanan$ gem install sinatra
        WARNING: RubyGems 1.2+ index not found for: http://rubygems.org/
        RubyGems will revert to legacy indexes degrading performance.
        Bulk updating Gem source index for: http://rubygems.org/
        ERROR: While executing gem ... (NoMethodError)
            undefined method `gems' for #<Array:0x101901008>

    Please help. I reinstalled RubyGems too, and I still get the same error.
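    A fix that often works for this particular NoMethodError, assuming the cached source index is stale or corrupt rather than the install itself being broken:

        # drop the cached source index, upgrade RubyGems so it can read the
        # modern (1.2+) index format, then retry the install
        gem sources --clear-all
        sudo gem update --system
        gem install sinatra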


  • Running Awk command on a cluster

    - by alex
How do you execute a Unix shell command (an awk script, a pipe, etc.) on a cluster in parallel (step 1) and collect the results back to a central node (step 2)? Hadoop seems to be huge overkill with its 600k LOC, and its performance is terrible (it takes minutes just to initialize a job). I don't need shared memory or something like MPI/OpenMP, as I don't need to synchronize or share anything, and I don't need a distributed VM or anything that complex. Google's Sawzall seems to work only with Google's proprietary MapReduce API. Some distributed shell packages I found failed to compile, but there must be a simple way to run a data-centric batch job on a cluster, something as close as possible to the native OS, maybe using Unix RPC calls. I liked rsync's simplicity, but it seems to update remote nodes sequentially, and you can't use it to execute scripts as far as I know. Switching to Plan 9 or some other network-oriented OS looks like more overkill. I'm looking for a simple, distributed way to run awk scripts or similar - as close as possible to the data, with minimal initialization overhead, in a nothing-shared, nothing-synchronized fashion. Thanks Alex
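    A minimal sketch of that shape in plain shell, assuming passwordless ssh, a hosts.txt listing the nodes, and that job.awk and each node's data shard already sit on the nodes at known paths:

        # step 1: fan the job out in parallel (-n stops ssh from eating stdin)
        while read host; do
            ssh -n "$host" 'awk -f /jobs/job.awk /data/part.txt' > "out.$host" &
        done < hosts.txt
        wait
        # step 2: gather the per-node outputs on the central node
        cat out.* > result.txt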


  • Strip tags (with tags inside attributes and nested tags) using javascript

    - by Kokizzu
What is the fastest (in performance) way to strip a string of its tags? Most solutions I've tried that use a regexp don't produce correct values when there are tags inside attributes (yes, I know that's invalid markup). Example test case:

        var str = "<div data-content='yo! press this: <br/> <button type=\"button\"><i class=\"glyphicon glyphicon-disk\"></i> Save</button>' data-title='<div>this one for tooltips <div>seriously</div></div>'> this is the real content<div> with another nested</div></div>"

    That should result in: this is the real content with another nested
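    One regexp-free approach, sketched on the assumption that the markup is trusted (assigning untrusted HTML to innerHTML can run scripts): let the browser's parser sort out the nesting, since attribute values never become text nodes.

        function stripTags(html) {
          var tmp = document.createElement('div');
          tmp.innerHTML = html;                    // parser keeps attribute markup inside attributes
          return tmp.textContent || tmp.innerText; // only real text nodes survive
        }

        stripTags(str); // " this is the real content with another nested"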


  • DbDataReader with DbTransactions

    - by Gustavo Paulillo
Is it wrong, or a performance problem, to use DbDataReader combined with DbTransactions? An example of the code:

        public DbDataReader ExecuteReader()
        {
            try
            {
                if (this._baseConnection.State == ConnectionState.Closed)
                    this._baseConnection.Open();

                // inside a transaction the connection must stay open, so
                // CommandBehavior.CloseConnection is only used without one
                if (this._baseCommand.Transaction != null)
                    return this._baseCommand.ExecuteReader();

                return this._baseCommand.ExecuteReader(CommandBehavior.CloseConnection);
            }
            catch (Exception excp)
            {
                if (this._baseCommand.Transaction != null)
                    this._baseCommand.Transaction.Rollback();

                this._baseCommand.CommandText = string.Empty;
                this._baseConnection.Close();
                throw new Exception(excp.Message);
            }
        }

    Some methods call this operation, sometimes opening a DbTransaction. It uses DbConnection and DbCommand. The real problem is that in the production environment (around 5,000 accesses/day) the ADO operations start throwing exceptions.


  • How to populate List<string> with Datarow values from single columns...

    - by James
Hi, I'm still learning (baby steps). I'm messing about with a function and hoping to find a tidier way to deal with my datatables. For the more commonly used tables throughout the life of the program, I'll dump them to datatables and query those instead. What I'm hoping to do is query a datatable for, say, Columnx = "this", and convert the values of column "y" directly to a List<string> to return to the caller:

        private List<string> LookupColumnY(string hex)
        {
            List<string> stringlist = new List<string>();
            DataRow[] rows = tblDataTable.Select("Columnx = '" + hex + "'", "Columny ASC");
            foreach (DataRow row in rows)
            {
                stringlist.Add(row["Columny"].ToString());
            }
            return stringlist;
        }

    Does anyone know a slightly simpler method? I guess this is easy enough, but I'm wondering, if I do enough of these, whether iterating via a foreach loop will become a performance hit. TIA!
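    For what it's worth, a shorter equivalent sketch, assuming .NET 3.5+ with a using System.Linq; directive (the loop itself is cheap - the DataTable.Select filter is where the time goes):

        private List<string> LookupColumnY(string hex)
        {
            // same Select() call; LINQ just replaces the manual accumulation
            return tblDataTable.Select("Columnx = '" + hex + "'", "Columny ASC")
                               .Select(row => row["Columny"].ToString())
                               .ToList();
        }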


  • How to avoid the same calculations on column values over and over again in a select?

    - by Peter
I sometimes write SELECTs of the form:

        SELECT a.col1 + b.col2 * c.col4 AS calc_col1,
               a.col1 + b.col2 * c.col4 + xxx AS calc_col1_PLUS_MORE
        FROM ....
        INNER JOIN ... ON a.col1 + b.col2 * c.col4 < d.some_threshold
        WHERE a.col1 + b.col2 * c.col4 > 0

    When the calculations get rather involved and are used 3-5 times within the same SELECT, I would really like to refactor them out into a function or similar, in order to 1) hopefully improve performance / make use of a cache, and 2) avoid forgetting to update one of the four copies when I later realize I need to change the calculation. I usually have these SELECTs within SPs. Any ideas?
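    If this is SQL Server (the SPs suggest T-SQL), one way to write the expression exactly once is CROSS APPLY; a sketch with made-up join keys and a literal standing in for xxx:

        SELECT v.calc       AS calc_col1,
               v.calc + 100 AS calc_col1_PLUS_MORE
        FROM a
        INNER JOIN b ON b.a_id = a.id          -- hypothetical keys
        INNER JOIN c ON c.a_id = a.id
        CROSS APPLY (SELECT a.col1 + b.col2 * c.col4 AS calc) AS v
        INNER JOIN d ON v.calc < d.some_threshold
        WHERE v.calc > 0;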


  • Interspire to Magento migration

    - by patrikas
Hello, I recently started with Magento and decided to migrate to it an Interspire shopping cart I built some time ago. At first look Magento seems a very huge beast - lots of options, and maybe a lack of simplicity resulting in some performance loss. I've got the user guide, from which I'm not getting much benefit, since it only describes very ordinary tasks that I could easily discover myself by poking around the frontend/backend. So my first tasks are category and product export. Interspire seems to export ONLY products, in three available formats: Default, MYOB, and Peachtree accounting. I did some searching on Magento's product importing and found a blog post which says that I should create a few sample products with all the necessary attributes myself and then start the import. But what should I do with categories? Is it possible to import them, or to instruct Magento to automatically create categories during the product-file import whenever an unknown category is encountered? Thanks


  • High density Silverlight charting control

    - by ahosie
I've been looking into Silverlight charting controls to display a large number of samples (~10,000 data points in five separate series - ~50k points all up). I have found the existing options produced by Dundas, Visifire, Microsoft etc. to be extremely poor performers when displaying more than a few hundred data points. I believe the performance issues with existing chart controls are caused by their heavy use of vector graphics, so one solution would be a client-side chart control that uses the WriteableBitmap class to generate a raster chart. Before I fall too far down the wheel-reinvention rabbit hole - has anyone found a third-party or OSS control that will manage large numbers of data points on a sparkline?
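    For scale, a bare-bones sketch of the raster idea, assuming Silverlight 3+ (where WriteableBitmap.Pixels is an int[] of premultiplied ARGB values) and hypothetical samples / chartImage names:

        var bmp = new WriteableBitmap(800, 200);
        int argb = unchecked((int)0xFF1E90FF);      // opaque blue
        foreach (Point p in samples)                // samples pre-scaled to pixel space
        {
            int x = (int)p.X, y = (int)p.Y;
            if (x >= 0 && x < 800 && y >= 0 && y < 200)
                bmp.Pixels[y * 800 + x] = argb;     // one pixel per sample
        }
        bmp.Invalidate();                           // push the pixel buffer to screen
        chartImage.Source = bmp;                    // chartImage is an Image element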


  • rails using jruby 1.5 - slow!!

    - by gucki
Hi! I'm currently using Passenger with REE 1.8.7 in production for a Rails 2.3.5 project using PostgreSQL as the database:

        ab -n 10000 -c 100: 285.69 [#/sec] (mean)

    I read that JRuby should be the fastest solution, so I installed jruby-1.5.0.RC2 together with the JDBC postgres adapter and GlassFish. As the performance was really poor, I also tried running my application with "jruby --server -J-Druby.jit.threshold=0 script/server -e production". Either way, I only get:

        ab -n 10000 -c 100: 43.88 [#/sec] (mean)

    config.threadsafe! is activated in my Rails config. Java seems to use all cores; CPU usage is around 350% (top).

        ruby -v: jruby 1.5.0.RC2 (ruby 1.8.7 patchlevel 249) (2010-04-28 7c245f3) (Java HotSpot(TM) 64-Bit Server VM 1.6.0_16) [amd64-java]

    I wonder what I'm doing wrong, and how to get better performance with JRuby than with REE? Thanks, Corin


  • Separate Database for Integration Testing

    - by john doe
I am performing integration testing where I fire up the ASPX pages using WatiN, fill the fields, and insert into the database. There are a couple of problems I am facing. 1) Should I use a completely separate database for integration testing? I already have db_test and db_dev: db_test is for unit testing and is cleared after each test, and db_dev is for developers. 2) The WatiN tests are contained in a separate assembly (though not separate from the unit-test assembly, which would be better, since WatiN tests take so much time to run). The WatiN tests fire up the WebApps project and use its web.config, which points to the dev database. Is there any way I can tell WatiN to use a separate web.config that contains a different database name?


  • How to simulate different CPU frequency and limit RAM

    - by user351103
Hi, I have to build a simulator in C#. This simulator should be able to run a second thread with a configurable CPU speed and a limited RAM size, e.g. 144 MHz and 50 MB. Of course I know that a simulator can never be as accurate as the real hardware, but I am trying to get close. At the moment I'm thinking about creating a thread which I will stop/sleep from time to time. Depending on the desired CPU speed, the simulator should adjust the sleep time of this thread and thereby simulate different CPU frequencies. To measure the achieved speed I thought about using PerformanceCounters. But with this approach I have the problem that I don't know how to limit the RAM size the thread can use. Do you have any ideas how to realize such a simulator? Thanks in advance!!
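    A rough sketch of the stop/sleep idea as a duty cycle, assuming the host's effective speed is known or measured beforehand (hostMhz, targetMhz, and doUnitOfWork are placeholders); note it only approximates CPU frequency and does nothing about RAM:

        using System;
        using System.Diagnostics;
        using System.Threading;

        class CpuThrottle
        {
            // busy for a fraction of each 10 ms slice, asleep for the rest,
            // so the worker sees roughly targetMhz/hostMhz of one core
            public static void ThrottledLoop(double targetMhz, double hostMhz, Action doUnitOfWork)
            {
                const int sliceMs = 10;
                int runMs = (int)(sliceMs * Math.Min(1.0, targetMhz / hostMhz));
                var sw = new Stopwatch();
                while (true)
                {
                    sw.Reset();
                    sw.Start();
                    while (sw.ElapsedMilliseconds < runMs)
                        doUnitOfWork();              // one small unit of simulated work
                    Thread.Sleep(sliceMs - runMs);   // idle remainder of the slice
                }
            }
        }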


  • Subclassing UIScrollView for drawing w/o views

    - by David Dunham
I'm contemplating subclassing UIScrollView (the way UITextView does) to draw a fairly large amount of text (formatted in ways that UITextView can't handle). So far the view won't actually scroll. I'm setting contentSize, and when I drag, I see the scroll indicator, but nothing changes (and I don't get a drawRect: message). An alternative approach is to use a child view, and I've done this. The view can be over 5000 pixels high, however, and I'm a bit concerned about performance on an actual device. (The other approach, doing it like UITableView, would be a huge pain - I'm "porting" Mac Cocoa code, and a collection of views would be a huge architecture change.) I've done some searching, but haven't found anyone who is using UIScrollView to do the drawing. Has anyone done this and knows of any pitfalls?


  • Symfony app - how to add calculated fields to Propel objects?

    - by Thomas Kohl
What is the best way of working with calculated fields on Propel objects? Say I have an object "Customer" with a corresponding table "customers", where each column corresponds to an attribute of my object. What I would like to do is add a calculated attribute, "number of completed orders", to my object when using it in View A, but not in Views B and C. The calculated attribute is a COUNT() of the "Order" objects linked to my "Customer" object via ID. What I can do now is select all Customer objects first, then iteratively count the Orders for each of them, but I'd think doing it in a single query would improve performance. However, I cannot properly "hydrate" my Propel object, since it does not contain the definition of the calculated field(s). How would you approach it?
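    For reference, the single-query shape being described, with assumed table/column/status names (on the Propel side the extra column would still need custom hydration, e.g. reading it straight off the result set):

        -- one query instead of N+1: each customer row carries its own count
        SELECT c.*, COUNT(o.id) AS completed_orders
        FROM customers c
        LEFT JOIN orders o
               ON o.customer_id = c.id
              AND o.status = 'completed'
        GROUP BY c.id;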


  • FreeBSD or NetBSD based commercial TCP/IP stack vendor?

    - by Vineet
Hi - I'm looking for recommendations for a commercial TCP/IP stack implementation based on FreeBSD or NetBSD. The requirements are similar to a typical desktop PC running a browser, email, and streaming voice/video - which is to say, rich network functionality for an end-host type of device, with a mature implementation and reasonable performance. BSD-derived network stacks have been deployed in a wide variety of situations for years and hence have mature implementations. The stack is supposed to run on a proprietary RTOS. Most vendors I found don't advertise whether their stack is based on BSD. Any recommendations? -- Vineet


  • Google Analytics API - Tying Behavior to Specific Dates

    - by DavidS
I am using the API to understand the performance of AdWords ad campaigns, and I need to know how metrics are attributed back to the date dimension. For instance, for a given date, if I have 20 clicks, 18 visits, and 3 goal completions, does it mean that: 1) all of these actions happened on the day in question and are otherwise independent (meaning the 3 goals could have come from people who clicked any time in the past 30 days, not necessarily on that day), or 2) the on-site actions are a subset of the click activity on that day (i.e. on that day, 20 people clicked, 18 registered a real visit, and 3 completed a goal)? If it is scenario 2, does that mean there is a need to refresh old rows every day? Thanks!


  • jQuery/Javascript framework efficiency

    - by Russell
My latest project uses a JavaScript framework (jQuery), along with some plugins (validation, jquery-ui, datepicker, facebox, ...) to help make a modern web application. I am now finding pages loading more slowly than I am used to. After some JS profiling (thanks VS2010!), it seems a lot of the time is spent processing inside the framework. Now I understand that the more complex the UI tools, the more processing needs to be done, but the project is not yet at a large stage and my usage is fairly average; at this rate I can see it is not going to scale well. I noticed that things like the 'each' command in jQuery take quite a lot of processing time. Have others experienced extra latency using JS frameworks? How do I minimise their effect on page performance? Are there best practices for implementing with JS frameworks? Thanks
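    Two habits that commonly help in hot paths, sketched against a hypothetical #list selection: query the DOM once and reuse the result, and swap .each() for a plain for loop over the raw nodes.

        var $items = $('#list li');                 // one DOM query, cached
        for (var i = 0, n = $items.length; i < n; i++) {
            var el = $items[i];                     // raw DOM node, no per-item wrapper
            el.className += ' processed';           // only re-wrap with $(el) when needed
        }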


  • Variable as numeric sent to stored procedure (SQL Server 2005)

    - by TimCarrett
I see that with SQL Server 2005 you can declare a parameter as plain numeric, e.g.

        create procedure dbo.TestSP @Param1 numeric as

    But what does this equate to - numeric(10,0), numeric(9,2), etc.? We have some developers here who are using this instead of the correct definition for the field that the parameter is going to be compared against, e.g. plain numeric instead of numeric(10,0) for @Param1. Also, are there any underlying performance issues with using numeric instead of the data type defined against the field in the table? Many thanks.
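    For what it's worth, in SQL Server a bare numeric defaults to numeric(18,0) - precision 18, scale 0 - so fractional arguments are silently rounded away; a quick illustration with a hypothetical procedure:

        -- exec dbo.TestSP2 9.25, 9.25 returns 9.25 and 9
        create procedure dbo.TestSP2
            @Explicit numeric(10,2),
            @Bare     numeric          -- behaves as numeric(18,0)
        as
        begin
            select @Explicit as keeps_scale, @Bare as drops_scale;
        end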


  • Faking a dynamic schema in Core Data?

    - by Gouldsc
From reading the Apple docs on Core Data, I've learned that you should not use Core Data when you need a dynamic schema. If I wanted to give the user the ability to create their own properties, would it work if I created some "dummy" attributes in the Core Data model, like "custom decimal 1", "custom decimal 2", "custom text 1", "custom text 2", etc., that the user could name and use for their own purposes? Obviously this won't work for relationships, but for simple properties it seems like a reasonable workaround. Will creating a bunch of dummy attributes on my entities that go unused by most users noticeably decrease performance for them? Have any of you tried something like this? Thanks!


  • mySQL and general database normalization question

    - by Sinan
I have a question about normalization. Suppose I have an application dealing with songs. First I thought about doing it like this:

        Songs table:      id | song_title | album_id | publisher_id | artist_id
        Albums table:     id | album_title | etc...
        Publishers table: id | publisher_name | etc...
        Artists table:    id | artist_name | etc...

    Then, thinking about normalization, I wondered whether I should get rid of album_id, publisher_id, and artist_id in the songs table and put them in intermediate tables like this:

        song_album:     song_id, album_id
        song_publisher: song_id, publisher_id
        song_artist:    song_id, artist_id

    Now I can't decide which is the better way. I'm not an expert on database design, so if someone would point out the right direction, it would be awesome. Are there any performance issues between the two approaches? Thanks


  • NHibernate transaction management in ASP.NET MVC - how should it be done?

    - by adrin
I am writing a simple ASP.NET MVC app using the session-per-request and transaction-per-request patterns (via a custom HttpModule). It seems to work properly, but the performance is terrible (a simple page loads in ~7 seconds). A transaction is created for every HTTP request, including graphical resources (all the images on the site), and that seems to be what delays the loading times: without transactions, loading times per image are ~1-10 ms; with transactions they are over 1 second. What is the proper way to manage transactions in the ASP.NET MVC + NH stack? When I put all transactions into my repository methods, for some obscure reason I got 'implicit transactions' warnings in NHProf (the SQL statements were executed outside a transaction, even though in code the session.Save()/Update()/etc. methods were invoked within the transaction's 'using' scope and before the transaction.Commit() call). BTW, are implicit transactions really bad?
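    Given that the module fires for image requests too, one sketch of a guard in the HttpModule (sessionFactory and the context-items key are assumptions); better still is to keep static files out of the managed pipeline entirely:

        private static readonly string[] StaticExtensions = { ".png", ".jpg", ".gif", ".css", ".js" };

        private void OnBeginRequest(object sender, EventArgs e)
        {
            var app = (HttpApplication)sender;
            string ext = System.IO.Path.GetExtension(app.Request.Path).ToLowerInvariant();
            if (Array.IndexOf(StaticExtensions, ext) >= 0)
                return;                                        // no session/transaction for static files

            var session = sessionFactory.OpenSession();        // assumed ISessionFactory field
            session.BeginTransaction();
            app.Context.Items["nh.session"] = session;         // committed/disposed in EndRequest
        }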


  • C# Process flow - Datastream, XML and datagrid

    - by Farstucker
I'm looking for some advice/suggestions on how I should set up the workflow of a small application I'm building. When the application is launched, the datagrid will be populated from an XML file. Once running, the application will receive a data stream, with which I hope to update both the file and the datagrid. So I'm curious what you would suggest for the workflow: split the data from the data stream and populate the file and grid simultaneously, or populate the XML file first and set up a timer to have the grid read the file? I'm really looking for optimal performance.


  • Cost of using repeated parameters

    - by Palimondo
I'm considering refactoring a few method signatures that currently take a parameter of type List or Set of a concrete class - List[Foo] - to use repeated parameters instead: Foo*. This would allow me to use the same method name and overload it based on the parameter type, which was not possible using List or Set, because List[Foo] and List[Bar] have the same type after erasure: List[Object]. In my case the refactored methods work fine with the scala.Seq[Foo] that results from the repeated parameter. I would have to change all the invocations and add a sequence-argument type annotation to all collection arguments: baz.doStuffWith(foos:_*). Given that switching from a collection parameter to a repeated parameter is semantically equivalent, does this change have a performance impact I should be aware of? Is the answer the same for Scala 2.7.x and 2.8?
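    A sketch of what the call sites pay, with Foo and the method body made up: ascribing an existing collection with :_* forwards it as-is, while a bare argument list is first collected into an array-backed Seq (a WrappedArray in 2.8).

        case class Foo(name: String)

        def doStuffWith(foos: Foo*): Unit =    // foos is a scala.Seq[Foo] inside
          foos.foreach(f => println(f.name))

        val existing = List(Foo("a"), Foo("b"))
        doStuffWith(Foo("x"), Foo("y"))        // args wrapped in an array-backed Seq
        doStuffWith(existing: _*)              // the List is passed through, not copied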


  • select only new row in oracle

    - by Hlex
Hi, I have a table with a varchar2 primary key and about 1,000,000 transactions per day. My app wakes up every 5 minutes to generate a text file by querying only the new records: it remembers the last point reached and processes only records added since then. 1) Do you have any ideas for making this query perform well? I am able to add a new column if needed. 2) What should this process be implemented in - PL/SQL? Java?
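    One common pattern, sketched with made-up names: since a varchar2 key gives no insertion order, add a monotonically increasing, indexed column and remember the highest value exported so far.

        -- populated from a sequence at insert time; indexed for the range scan
        ALTER TABLE txn ADD (export_seq NUMBER);
        CREATE INDEX txn_export_seq_ix ON txn (export_seq);

        -- every 5 minutes: fetch only rows beyond the last exported watermark
        SELECT t.pk_col, t.payload
        FROM   txn t
        WHERE  t.export_seq > :last_exported_seq
        ORDER  BY t.export_seq;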


  • NSThread vs. NSOperationQueue vs. ??? on the iPhone

    - by kubi
Currently I'm using NSThread to cache images on another thread:

        [NSThread detachNewThreadSelector:@selector(cacheImage:) toTarget:self withObject:image];

    Alternatively:

        [self performSelectorInBackground:@selector(cacheImage:) withObject:image];

    Alternatively, I can use an NSOperationQueue:

        NSInvocationOperation *invOperation = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(cacheImage:) object:image];
        NSOperationQueue *opQueue = [[NSOperationQueue alloc] init];
        [opQueue addOperation:invOperation];

    Is there any reason to switch away from NSThread? GCD is a 4th option when it's released for the iPhone, but unless there's a significant performance gain, I'd rather stick with methods that work on most platforms.


  • Generate HTML To PDF Control for the .NET application

    - by Karan
Has anyone used an open-source or paid .NET control that converts HTML to a PDF file? At the moment I am using the Winnovative converter control, but it has a performance limitation when generating bulk pages (more than 1000) in the PDF; the limitation shows up when we use bigger images in the HTML content. For the last 4 months I've been working with the Winnovative control and have found plenty of major bugs in it. For a small application and light usage Winnovative is good, but not at a level where the application will be used by thousands of clients. Please suggest alternatives.

