Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.

Page 194/537

  • About to migrate :string but I'm thinking :text might be better. Performance/Purpose?

    - by Sam
    class CreateScrapes < ActiveRecord::Migration
      def self.up
        create_table :scrapes do |t|
          t.text :saved_characters
          t.text :sanitized_characters
          t.string :href
          t.timestamps
        end
      end

      def self.down
        drop_table :scrapes
      end
    end

    I'm about to rake db:migrate and I'm thinking about the attribute type: should I be using text or string? Since saved_characters and sanitized_characters will be arrays with thousands of Unicode values (it's basically comma-delimited data), I'm not sure if :text is really the right way to go here. What would you do?
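
    A minimal model sketch of one way this is often handled (my assumption, not part of the question; the Scrape model name just mirrors the migration above): keep the columns as :text, since a :string column is typically a VARCHAR(255) and would truncate thousands of characters, and let ActiveRecord serialize the arrays instead of hand-rolling comma-delimited strings.

        # Hypothetical sketch only: serialize stores the array as YAML inside the text column.
        class Scrape < ActiveRecord::Base
          serialize :saved_characters,     Array
          serialize :sanitized_characters, Array
        end

        # Usage sketch:
        # scrape = Scrape.new(:href => "http://example.com")
        # scrape.saved_characters = ["a", "b", "c"]
        # scrape.save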

    Read the article

  • slow php command line performance - is this normal or do I have an install problem?

    - by Frank Schwieterman
    I have a simple PHP app that prints 'hello world'. When I run it from the command line it takes 6 seconds. Is this normal? It seems to take 1 second before "hello world" prints, then 5 seconds after. I assume this is overhead of the interpreter. I am running PHP version 5.2.12 on Windows Server 2008 R2. Could this be an install issue, or is it typical? I did a manual install of PHP and then added whatever components were needed to run Drupal. The only PHP add-on I remember adding was MDB2; CGI support is there too. For comparison, I have a Lua project I run from the command line, hundreds of lines of code, that runs in under a second. I also have some unit tests I run from the command line, and already with just a few of them they are very slow. I run them from NetBeans and the tests are still very slow.
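
    One quick way to narrow this down (a hedged sketch, not something from the question) is to compare a normal run against one started with -n, which skips php.ini and every extension it loads; if the -n run is fast, the time is going into extension loading rather than the interpreter itself.

        rem Hypothetical check from a Windows command prompt; both flags are standard PHP CLI options.
        php -r "echo 'hello world';"
        rem -n = ignore php.ini entirely (no extensions loaded)
        php -n -r "echo 'hello world';"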

    Read the article

  • Does Microsoft hate Firefox? ASP.NET GridView performance bug in Firefox?

    - by Maxim Gershkovich
    Could someone please explain the significant difference in speed between a Firefox UpdatePanel async postback and one performed in IE?

        Average Firefox postback time for 500 objects: 1.183 seconds
        Average IE postback time for 500 objects: 0.295 seconds

    Using Firebug I can see that the majority of this time in Firefox is spent on the server side (a total of 1.04 seconds). Given this, the only thing I can assume is causing the problem is the way that ASP.NET renders its controls for the two browsers. Has anyone run into this problem before?

    VB.Net code:

        Protected Sub Button1_Click(ByVal sender As Object, ByVal e As EventArgs) Handles Button1.Click
            GridView1.DataBind()
        End Sub

        Public Function GetStockList() As StockList
            Dim res As New StockList
            For l = 0 To 500
                Dim x As New Stock With {.Description = "test", .ID = Guid.NewGuid}
                res.Add(x)
            Next
            Return res
        End Function

        Public Class Stock
            Private m_ID As Guid
            Private m_Description As String

            Public Sub New()
            End Sub

            Public Property ID() As Guid
                Get
                    Return Me.m_ID
                End Get
                Set(ByVal value As Guid)
                    Me.m_ID = value
                End Set
            End Property

            Public Property Description() As String
                Get
                    Return Me.m_Description
                End Get
                Set(ByVal value As String)
                    Me.m_Description = value
                End Set
            End Property
        End Class

        Public Class StockList
            Inherits List(Of Stock)
        End Class

    Markup:

        <form id="form1" runat="server">
            <asp:ScriptManager ID="ScriptManager1" runat="server">
            </asp:ScriptManager>
            <script type="text/javascript" language="Javascript">
                function timestamp_class(this_current_time, this_start_time, this_end_time, this_time_difference) {
                    this.this_current_time = this_current_time;
                    this.this_start_time = this_start_time;
                    this.this_end_time = this_end_time;
                    this.this_time_difference = this_time_difference;
                    this.GetCurrentTime = GetCurrentTime;
                    this.StartTiming = StartTiming;
                    this.EndTiming = EndTiming;
                }
                // Get current time from date timestamp
                function GetCurrentTime() {
                    var my_current_timestamp;
                    my_current_timestamp = new Date(); // stamp current date & time
                    return my_current_timestamp.getTime();
                }
                // Stamp current time as start time and reset display textbox
                function StartTiming() {
                    this.this_start_time = GetCurrentTime(); // stamp current time
                }
                // Stamp current time as stop time, compute elapsed time difference and display in textbox
                function EndTiming() {
                    this.this_end_time = GetCurrentTime(); // stamp current time
                    this.this_time_difference = (this.this_end_time - this.this_start_time) / 1000; // compute elapsed time
                    return this.this_time_difference;
                }
            </script>
            <script type="text/javascript" language="javascript">
                var time_object = new timestamp_class(0, 0, 0, 0); // create new time object and initialize it
                Sys.WebForms.PageRequestManager.getInstance().add_beginRequest(BeginRequestHandler);
                Sys.WebForms.PageRequestManager.getInstance().add_endRequest(EndRequestHandler);
                function BeginRequestHandler(sender, args) {
                    var elem = args.get_postBackElement();
                    ActivateAlertDiv('visible', 'divAsyncRequestTimer', elem.value + '');
                    time_object.StartTiming();
                }
                function EndRequestHandler(sender, args) {
                    ActivateAlertDiv('visible', 'divAsyncRequestTimer', '(' + time_object.EndTiming() + ' Seconds)');
                }
                function ActivateAlertDiv(visstring, elem, msg) {
                    var adiv = $get(elem);
                    adiv.style.visibility = visstring;
                    adiv.innerHTML = msg;
                }
            </script>
            <asp:UpdatePanel ID="UpdatePanel1" runat="server">
                <Triggers>
                    <asp:AsyncPostBackTrigger ControlID="Button1" EventName="click" />
                </Triggers>
                <ContentTemplate>
                    <asp:UpdateProgress ID="UpdateProgress1" runat="server" AssociatedUpdatePanelID="UpdatePanel1">
                    </asp:UpdateProgress>
                    <asp:Button ID="Button1" runat="server" Text="Button" />
                    <div id="divAsyncRequestTimer" style="font-size:small;"></div>
                    <asp:GridView ID="GridView1" runat="server" DataSourceID="ObjectDataSource1" AutoGenerateColumns="False">
                        <Columns>
                            <asp:BoundField DataField="ID" HeaderText="ID" SortExpression="ID" />
                            <asp:BoundField DataField="Description" HeaderText="Description" SortExpression="Description" />
                        </Columns>
                    </asp:GridView>
                    <asp:ObjectDataSource ID="ObjectDataSource1" runat="server" SelectMethod="GetStockList" TypeName="WebApplication1._Default">
                    </asp:ObjectDataSource>
                </ContentTemplate>
            </asp:UpdatePanel>
        </form>

    Read the article

  • Is there any performance issue using Row_Number to implement table paging in Sql Server 2008?

    - by majkinetor
    I want to implement table paging using this method:

        SET @PageNum = 2;
        SET @PageSize = 10;

        WITH OrdersRN AS
        (
            SELECT ROW_NUMBER() OVER(ORDER BY OrderDate, OrderID) AS RowNum, *
            FROM dbo.Orders
        )
        SELECT *
        FROM OrdersRN
        WHERE RowNum BETWEEN (@PageNum - 1) * @PageSize + 1 AND @PageNum * @PageSize
        ORDER BY OrderDate, OrderID;

    Is there anything I should be aware of? The table has millions of records. Thanks.

    EDIT: After using the suggested MAXROWS method for some time (which works really, really fast) I had to switch back to the ROW_NUMBER method because of its greater flexibility. I am also very happy with its speed so far (I am working with a view having more than 1M records and 10 columns). To run any kind of query I use the following modification:

        CREATE PROCEDURE [dbo].[PageSelect]
        (
            @Sql nvarchar(512),
            @OrderBy nvarchar(128) = 'Id',
            @PageNum int = 1,
            @PageSize int = 0
        )
        AS
        BEGIN
            SET NOCOUNT ON

            DECLARE @tsql nvarchar(1024)
            DECLARE @i int, @j int

            IF (@PageSize <= 0) OR (@PageSize > 10000)
                SET @PageSize = 10000  -- never return more than 10K records

            SET @i = (@PageNum - 1) * @PageSize + 1
            SET @j = @PageNum * @PageSize

            SET @tsql = 'WITH MyTableOrViewRN AS
            (
                SELECT ROW_NUMBER() OVER(ORDER BY ' + @OrderBy + ') AS RowNum, *
                FROM MyTableOrView
                WHERE ' + @Sql + '
            )
            SELECT * FROM MyTableOrViewRN
            WHERE RowNum BETWEEN ' + CAST(@i AS varchar) + ' AND ' + CAST(@j AS varchar)

            EXEC(@tsql)
        END

    If you use this procedure, make sure you have prevented SQL injection.
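
    As a hedged follow-up sketch (my addition, not from the question): the row-range boundaries can be passed as real parameters through sp_executesql, so that at least @i and @j never get concatenated into the string; @Sql and @OrderBy are still concatenated and must be whitelisted or validated by the caller.

        -- Hypothetical variant of the dynamic part of the procedure above.
        SET @tsql = N'WITH MyTableOrViewRN AS
        (
            SELECT ROW_NUMBER() OVER(ORDER BY ' + @OrderBy + N') AS RowNum, *
            FROM MyTableOrView
            WHERE ' + @Sql + N'
        )
        SELECT * FROM MyTableOrViewRN WHERE RowNum BETWEEN @from AND @to'

        EXEC sp_executesql @tsql, N'@from int, @to int', @from = @i, @to = @j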

    Read the article

  • How does the verbosity of identifiers affect the performance of a programmer?

    - by DR
    I always wondered: are there any hard facts which would indicate that either shorter or longer identifiers are better? Example: clrscr() as opposed to ClearScreen(). Short identifiers should be faster to read because there are fewer characters, but longer identifiers often resemble natural language more closely and therefore should also be faster to read. Are there other aspects which suggest either a short or a verbose style? EDIT: Just to clarify: I didn't ask "What would you do in this case?". I asked for reasons to prefer one over the other, i.e. this is not a poll question. Please, if you can, add some reasoning on why one would prefer one style over the other.

    Read the article

  • Does lambda in List.ForEach lead to memory leaks and performance problems?

    - by Monomachus
    I have a problem which I could solve using something like this:

        sortedElements.ForEach((XElement el) => PrintXElementName(el, i++));

    That is, ForEach takes a lambda, which lets me use outer variables like int i inside it. I like that way of doing it, but I read somewhere that anonymous methods and lambda delegates lead to a lot of memory leaks, because each time the lambda is executed something is instantiated but never released, or something like that. Could you please tell me whether this is true in this situation, and if it is, why?
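
    A small self-contained sketch (my own example, not the asker's code) of what actually happens: the captured counter is lifted into a compiler-generated closure object that the lambda references; once nothing references the delegate any more, that object is garbage-collected like any other, so there is no inherent leak, just a small per-closure allocation.

        // Self-contained demo, not the asker's code.
        using System;
        using System.Collections.Generic;

        class ClosureDemo
        {
            static void Main()
            {
                var items = new List<string> { "a", "b", "c" };
                int i = 0;  // lifted into a compiler-generated closure class
                items.ForEach(s => Console.WriteLine("{0}: {1}", i++, s));
                Console.WriteLine("processed " + i + " items");  // the closure's i is visible here too
            }
        }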

    Read the article

  • Visual Studio 2010 Web Performance Tests / Load Tests / Coded UI Tests. Does anyone really use these?

    - by punkouter
    I can find some articles on how to use them, but I can't seem to find anywhere people's impressions of using them in real projects. I have been trying to figure out how to use them and I've had a lot of problems. Can someone out there who uses these tools on the job give me their impression? Are there better alternative tools available? Is using these really just a waste of time? With Coded UI Tests I see how they are good for basic JavaScript checking, but the example is so basic that I don't think it is worth it. With web tests I like how they work, but when I activate code coverage / ASP.NET profiling it doesn't work half the time.

    Read the article

  • Is Amazon SQS the right choice here? Rails performance issue.

    - by ole_berlin
    I'm close to releasing a rails app with the common networking features (messaging, wall, etc.). I want to use some kind of background processing (most likely Bj) for off-loading tasks from the request/response cycle. This would happen when users invite friends via email to join and for email notifications. I'm not sure if I should just drop these invites and notifications in my Database, using a model and then just process it with a worker process every x minutes or if I should go for Amazon SQS, storing the messages and invites there and let my worker retrieve it from Amazon SQS for processing (sending the invites / notifications). The Amazon approach would get load off my Database but I guess it is slower to retrieve messages from there. What do you think?

    Read the article

  • Entity Framework + MySQL - Why is the performance so terrible?

    - by Cyril Gupta
    When I decided to use an OR/M (Entity Framework for MySQL this time) for my new project I was hoping it would save me time, but I seem to have failed at it (for the second time now). Take this simple SQL query:

        SELECT * FROM POST ORDER BY addedOn DESC LIMIT 0, 50

    It executes and gives me results in less than a second, as it should (the table has about 60,000 rows). Here's the equivalent LINQ to Entities query that I wrote for it:

        var q = (from p in db.post
                 orderby p.addedOn descending
                 select p).Take(50);
        var q1 = q.ToList(); // This is where the query is fetched, and where it times out

    But this query never even executes; it ALWAYS times out (without the orderby it takes 5 seconds to run). My timeout is set to 12 seconds, so you can imagine it is taking much more than that. Why is this happening? Is there a way I can see the actual SQL query that Entity Framework is sending to the db? Should I give up on EF + MySQL and move to plain SQL before I lose all eternity trying to make it work? I've recalibrated my indexes and tried eager loading (which actually makes it fail even without the orderby clause). Please help, I am about to give up on OR/M for MySQL as a lost cause.
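
    On the "can I see the actual SQL" part, a hedged sketch (assuming the classic ObjectContext-based Entity Framework from .NET 3.5 SP1 / 4.0, and the q variable from the snippet above): a LINQ to Entities query is backed by an ObjectQuery, which can report the SQL it will send without executing it.

        // Hypothetical addition to the code above (requires: using System.Data.Objects;)
        var objectQuery = q as ObjectQuery;
        if (objectQuery != null)
        {
            Console.WriteLine(objectQuery.ToTraceString()); // prints the SQL EF will send
        }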

    Read the article

  • WCF interoperability with WSDL proxy and performance consideration advice.

    - by user194917
    I'm essentially writing a broker service. The requirement is that I write an API that acts as an intermediary broker between our in-house developed services and a 3rd-party API. The intention is that my API abstracts the actual communication with the 3rd-party API away from our internal systems. The architect on the project chose WCF as the communication framework. The problem is that 70 percent of our subscriber applications are written in .NET 2.0 and as such have no access to the class libraries required to implement a WCF proxy. The end result is that our proxy classes are loosely based on the code auto-generated by the WSDL tool, as opposed to the SvcUtil tool. My question is: although I have no issues implementing the required proxy classes using basicHttpBinding as the actual binding and using the WSDL tool, are there any special considerations that I need to take into account in this scenario, i.e. proxy optimizations and the like? Thanks in advance.

    Read the article

  • APC decreasing PHP performance? (PHP 5.3, Apache 2.2, Windows Vista 64-bit)

    - by M.M.
    Hi, I have Apache 2.2.15 (VC9) and PHP 5.3.2 (VC9, thread safe) running as an Apache module on a Vista 64-bit machine. Everything runs fine. The project I'm benchmarking (with Apache's ab utility) is basically a standard Zend Framework project with no db connection involved. The average (median) Apache response time is about 0.15 seconds. After I installed APC (3.1.4-dev, VC9, thread safe) with standard settings, the request response time suddenly rose to 1.3 seconds (!), which is unacceptable... The APC settings always looked good (through the apc.php script: enough shm memory, cache never full, fragmentation 0%). The only change that made a difference was disabling the stat lookup (apc.stat = 0); then the response time dropped to 0.09 seconds, which was finally better than without APC. IIRC, it's expected and obvious that the stat lookup creates some overhead, but shouldn't it still be far more performant than running without the APC extension at all? Or, put differently, why is apc.stat creating so much overhead? Apparently something is not working as it should, and I don't really know where to start looking. Thank you for your time/answers/directions in advance. Cheers, m.
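
    For reference, a hypothetical configuration sketch (my assumption of typical settings, not taken from the question) for a Windows dev box where the PHP files rarely change between requests:

        ; Hypothetical php.ini / apc.ini sketch; older APC builds expect apc.shm_size as a plain number of megabytes.
        extension = php_apc.dll
        apc.enabled = 1
        apc.stat = 0        ; skip the per-request stat() calls; restart Apache after editing code
        apc.shm_size = 64M  ; keep the cache large enough that it never fills or fragments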

    Read the article

  • Which parallel sorting algorithm has the best average case performance?

    - by Craig P. Motlin
    Sorting takes O(n log n) in the serial case. If we have O(n) processors we would hope for a linear speedup. O(log n) parallel algorithms exist but they have a very high constant. They also aren't applicable on commodity hardware which doesn't have anywhere near O(n) processors. With p processors, reasonable algorithms should take O(n/p log n/p) time. In the serial case, quick sort has the best runtime complexity on average. A parallel quick sort algorithm is easy to implement (see here and here). However it doesn't perform well since the very first step is to partition the whole collection on a single core. I have found information on many parallel sort algorithms but so far I have not seen anything pointing to a clear winner. I'm looking to sort lists of 1 million to 100 million elements in a JVM language running on 8 to 32 cores.
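
    For a concrete baseline under these constraints (JVM, tens of millions of elements, 8 to 32 cores), a hedged sketch that post-dates this question: JDK 8's Arrays.parallelSort is a parallel merge sort over the common fork/join pool and is a reasonable yardstick to beat before hand-rolling anything.

        // Hypothetical benchmark sketch (JDK 8+); data size and seed are arbitrary.
        import java.util.Arrays;
        import java.util.Random;

        public class ParallelSortBaseline {
            public static void main(String[] args) {
                int[] data = new Random(42).ints(10_000_000).toArray(); // 10M random ints
                long start = System.nanoTime();
                Arrays.parallelSort(data);                              // parallel merge sort on the fork/join pool
                System.out.printf("sorted in %.2f s%n", (System.nanoTime() - start) / 1e9);
            }
        }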

    Read the article

  • Is it okay to violate the principle that collection properties should be readonly for performance?

    - by uriDium
    I used FxCop to analyze some code I had written. I had exposed a collection via a setter. I understand why this is not good: the backing store changing when I don't expect it is a very bad idea. Here is my problem, though. I retrieve a list of business objects from a Data Access Object. I then need to add that collection to another business class, and I was doing it with the setter method. The reason I did this was that it is going to be faster to make an assignment than to insert hundreds of thousands of objects one at a time into the collection again via another addElement method. Is it okay to have a setter for a collection in some scenarios? I thought of instead having a constructor which takes a collection. I thought maybe I could pass the object in to the DAO and let the DAO populate it directly? Are there any other better ideas?
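
    One possible shape for the constructor idea (a sketch with hypothetical class names, not the original code): the business object takes ownership of the DAO's already-built list in one assignment, and readers only ever see a read-only view, which keeps the collection property read-only without copying hundreds of thousands of elements.

        // Sketch with hypothetical class names (Customer, CustomerBook).
        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public class Customer { public string Name { get; set; } }

        public class CustomerBook
        {
            private readonly List<Customer> _customers;

            // The DAO hands its fully built list over once; no element-by-element re-adding.
            public CustomerBook(List<Customer> customers)
            {
                _customers = customers ?? new List<Customer>();
            }

            // Callers can enumerate but cannot replace or mutate the backing store.
            public ReadOnlyCollection<Customer> Customers
            {
                get { return _customers.AsReadOnly(); }
            }
        }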

    Read the article

  • MS-DOS command like ipconfig to get system performance specs?

    - by JustADude
    I am aware of MSINFO32, but I'm wondering if there is a MS DOS command similar to ipconfig in order to get system specifications? I would like for the system specifications to be displayed in the MS DOS prompt. I would like to see at least: CPU RAM BUS speed Thanks for any insights. Edit: I am unable to install any other software, so just have to use existing DOS programming commands to extract this information. Thank you again. 2nd Edit: Whoops. Using Windows XP and Windows Vista.
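
    A hedged sketch of commands that ship with XP Professional and Vista (nothing to install; output field names vary slightly between Windows versions), covering CPU, RAM and bus speed:

        rem Hypothetical command-prompt sketch, not from the question.
        rem Summary lines from systeminfo (processor and total physical memory)
        systeminfo | findstr /C:"Processor" /C:"Total Physical Memory"
        rem CPU name, core clock and external (bus) clock in MHz
        wmic cpu get Name, MaxClockSpeed, ExtClock
        rem Installed memory modules and their speed
        wmic memorychip get Capacity, Speed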

    Read the article

  • [boost::filesystem] performance: is it better to read all files once, or use b::fs functions over an

    - by rubenvb
    I'm conflicted between a "read once, keep memory + pointers to files" approach and a "read when necessary" approach. The latter is of course much easier (no additional classes needed to store the whole directory structure), but IMO it is slower? A little clarification: I'm writing a simple build system that reads a project file, checks if all files are present, and runs some compile steps. The file tree is static, so the first option doesn't need to be very dynamic and only needs to be built once every time the program is run. Thanks
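
    For scale, a minimal sketch of the "check everything once up front" variant (hypothetical file names; assumes Boost.Filesystem is linked): one exists() call per project file at startup, after which the compile steps just open the files they need.

        // Sketch with hypothetical file names, not the asker's code.
        #include <boost/filesystem.hpp>
        #include <iostream>
        #include <string>
        #include <vector>

        namespace fs = boost::filesystem;

        int main()
        {
            std::vector<std::string> sources = { "main.cpp", "util.cpp", "project.cfg" };
            bool all_present = true;

            for (const std::string& name : sources)
            {
                if (!fs::exists(fs::path(name)))   // one stat per file, done once at startup
                {
                    std::cerr << "missing: " << name << '\n';
                    all_present = false;
                }
            }
            return all_present ? 0 : 1;
        }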

    Read the article

  • performance: jquery.live() vs. creating specific listeners on demand?

    - by Haroldo
    I have a page with news item titles; clicking each one Ajax-loads a full news item including a photo thumbnail. I want to attach a lightbox to the thumbnails to show a bigger photo. I have two options (I think):

    1. .live():

        $('img.thumb').live('click', function() {
            $(this).lightbox();
        });

    2. Add a specific id-based listener in the callback of the news item click:

        $('div.news_item').click(function() {
            var id = $(this).attr('id');
            show_news_item(id, function() {           // callback once the item has loaded
                $('#' + id + ' .thumb').lightbox();
            });
        });

    Read the article

  • Normalize database or not? Read only MyISAM table, performance is the main priority (MySQL)

    - by hello
    I'm importing data into a future database that will have one static MyISAM table (it will only be read from). I chose MyISAM because, as far as I understand, it's faster for my requirements (I'm not very experienced with MySQL / SQL at all). That table will have various columns such as ID, Name, Gender, Phone, Status... and Country, City, Street columns. Now the question is, should I create tables (e.g. Country: Country_ID, Country_Name) for the last 3 columns and refer to them in the main table by ID (normalize... [?]), or just store them as VARCHAR in the main table (having duplicates, obviously)? My primary concern is speed: since the table won't be written to, data integrity is not a priority. The only actions will be selecting a specific row or searching for rows that match certain criteria. Would searching by the Country, City and/or Street columns (and possibly other columns in the same search) be faster if I simply use VARCHAR?

    Read the article

  • Performance of vector::size() : is it as fast as reading a variable?

    - by zoli2k
    I have to do an extensive calculation on a big vector of integers. The vector size is not changed during the calculation. The size of the vector is frequently accessed by the code. What is faster in general: using the vector::size() function, or using a helper constant vectorSize storing the size of the vector? I know that compilers are usually able to inline the size() function when the proper compiler flags are set; however, making a function inline is something that a compiler may do but cannot be forced.
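
    A tiny sketch of the two variants being compared (my own example, not from the question). With optimizations enabled, size() is typically an inline subtraction of two pointers, so both loops usually compile to the same code; hoisting it into a constant mostly just guarantees what the optimizer tends to do anyway.

        // Self-contained sketch, not the asker's code.
        #include <cstddef>
        #include <vector>

        long long sum_calling_size(const std::vector<int>& v)
        {
            long long sum = 0;
            for (std::size_t i = 0; i < v.size(); ++i)   // size() queried every iteration
                sum += v[i];
            return sum;
        }

        long long sum_cached_size(const std::vector<int>& v)
        {
            long long sum = 0;
            const std::size_t vectorSize = v.size();     // hoisted once, as the question suggests
            for (std::size_t i = 0; i < vectorSize; ++i)
                sum += v[i];
            return sum;
        }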

    Read the article

  • Forcing the use of an index can improve performance?

    - by aF.
    Imagine that we have a query like this:

        SELECT a.col1, b.col2
        FROM t1 a
        INNER JOIN t2 b ON a.col1 = b.col2
        WHERE a.col1 = 'abc'

    Neither col1 nor col2 has an index. If I add another restriction to the WHERE clause, one that is always true but uses a column with an index:

        SELECT a.col1, b.col2
        FROM t1 a
        INNER JOIN t2 b ON a.col1 = b.col2
        WHERE a.col1 = 'abc'
          AND a.id >= 0  -- always true, and the column has an index

    might the query perform faster, since it can now use the index on the id column?

    Read the article
