Search Results

Search found 48586 results on 1944 pages for 'page performance'.


  • C# DynamicMethod prelink

    - by soywiz
    I'm the author of a PSP emulator written in C#. I'm generating lots of DynamicMethods using ILGenerator: I convert assembly code into an AST, then generate IL and build a DynamicMethod from it. I do this in another thread, so I can generate new methods while the program executes others and everything runs smoothly. My problem is that native code generation is lazy: the machine code is generated when the function is first called, not when the IL is emitted. So it happens on the executing thread, and native code generation is pretty slow, on top of the asm-to-AST-to-IL step. I have tried Marshal.Prelink, which is supposed to generate the machine code before the function executes. It works on Mono, but it doesn't work on MS .NET: Marshal.Prelink(methodInfo); Is there a way of prelinking a DynamicMethod on MS .NET? I thought of adding a boolean parameter to the function that, if set, exits the function immediately so no code is actually executed; I could "prelink" that way, but that's a nasty solution I'd like to avoid. Any ideas?
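    A minimal sketch of the warm-up idea mentioned in the question: emit an early-exit guard at the top of the generated method, then invoke the delegate once with the flag set from the compiler thread. The first invocation is what forces the JIT, so the executing thread never pays the compilation cost. The delegate shape and all names here are hypothetical:

        using System;
        using System.Reflection.Emit;

        delegate int CpuBlock(bool skipExecution);

        static class Warmup
        {
            // Builds a DynamicMethod whose body starts with
            // "if (skipExecution) return 0;", then calls it once with
            // skipExecution = true so the CLR compiles it to native code
            // on the compiler thread rather than on first real use.
            public static CpuBlock CompileAndWarmUp(Action<ILGenerator> emitBody)
            {
                var dm = new DynamicMethod("block", typeof(int), new[] { typeof(bool) });
                var il = dm.GetILGenerator();

                var run = il.DefineLabel();
                il.Emit(OpCodes.Ldarg_0);          // if (skipExecution)
                il.Emit(OpCodes.Brfalse_S, run);
                il.Emit(OpCodes.Ldc_I4_0);         //     return 0;
                il.Emit(OpCodes.Ret);
                il.MarkLabel(run);

                emitBody(il);                      // the real translated code; must end with Ret

                var block = (CpuBlock)dm.CreateDelegate(typeof(CpuBlock));
                block(true);                       // warm-up call: JIT now, execute nothing
                return block;
            }
        }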


  • How to interpret mono profiler results?

    - by Ovidiu Pacurar
    I created a console application in C#, and running it on Windows/.NET is 5x faster than on Linux/Mono or Windows/Mono. The app encodes some binary files into a text format (JSON). I profiled the app on Linux/Mono using:

        mono --profile=default:stat myconsoleapp.exe

    Here is the first part of the result:

        prof counts: total/unmanaged: 32274/25062
        23542 72.95 % mono
          459  1.42 % System.Decimal:Divide (System.Decimal,System.Decimal)
          457  1.42 % System.Decimal:Round (System.Decimal,int,System.MidpointRounding)
          411  1.27 % /lib/libz.so.1
          262  0.81 % /lib/tls/i686/cmov/libc.so.6(memmove
          253  0.78 % System.Decimal:IsZero ()
          247  0.77 % System.NumberFormatter:Init (string,double,int)
          213  0.66 % System.NumberFormatter:AppendDigits (int,int)

    72.95 % mono? Are Mono internals using three quarters of the total execution time?


  • css anchor div to foot of page

    - by foxed
    I may bounce my head off the wall shortly; I can't believe that something as stupid as this has utterly defeated me, therefore I turn to you, Stack Overflow, for guidance and enlightenment.

    Problem: sit a div at the foot of the page, 100% width, outside of any sort of wrapper.
    Proposed solution: http://ryanfait.com/sticky-footer/
    Implementation with content: http://www.weleasewodewick.com/redesign/index_content.html
    Implementation with no content: http://www.weleasewodewick.com/redesign/index.html

    With content: good, works nicely. With no content: bad, the footer sits exactly its own height below the viewport. I would really appreciate your input on this; it's completely vexed me for the past hour. I wholly expect some form of ridicule :) Thanks! Foxed


  • What's the fastest way to check if a word from one string is in another string?

    - by Mike Trpcic
    I have a string of words; let's call them bad:

        bad = "foo bar baz"

    I can keep this string as a whitespace-separated string, or as a list:

        bad = bad.split(" ")

    If I have another string, like so:

        str = "This is my first foo string"

    What's the fastest way to check if any word from the bad string is within my comparison string, and what's the fastest way to remove said word if it's found?

        # Find if a word is there
        bad.split(" ").each do |word|
          found = str.include?(word)
        end

        # Remove the word
        bad.split(" ").each do |word|
          str.gsub!(/#{word}/, "")
        end
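    The underlying idea is language-agnostic: split both strings once, keep the bad words in a hash set for O(1) membership tests, and rebuild the sentence from the surviving words instead of running one regex pass per bad word. A rough sketch of that shape (in C# here, though the question is Ruby):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class BadWordFilter
        {
            static void Main()
            {
                var bad = new HashSet<string>("foo bar baz".Split(' '));
                var str = "This is my first foo string";

                // Split once; each membership test is O(1) on average.
                var words = str.Split(' ');
                bool found = words.Any(bad.Contains);

                // Remove bad words by keeping only the words that survive,
                // instead of one gsub pass per bad word.
                var cleaned = string.Join(" ", words.Where(w => !bad.Contains(w)));

                Console.WriteLine($"{found}: {cleaned}");  // True: This is my first string
            }
        }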


  • PHP Object References in Frameworks

    - by bigstylee
    Before I dive into the discussion part, a quick question: is there a method to determine if a variable is a reference to another variable/object? For example:

        $foo = 'Hello World';
        $bar = &$foo;
        echo (is_reference($bar) ? 'Is reference' : 'Is original');

    I have been using PHP5 for a few years now (personal use only) and I would say I am moderately versed in the topic of Object Oriented implementation. However, the concept of a Model View Controller framework is fairly new to me. I have looked at a number of tutorials and at some of the open source frameworks (mainly CodeIgniter) to get a better understanding of how everything fits together, and I am starting to appreciate the real benefits of using this type of structure. I am used to implementing object referencing with the following technique:

        class Foo {
            public $var = 'Hello World!';
        }

        class Bar {
            public function __construct() {
                global $Foo;
                echo $Foo->var;
            }
        }

        $Foo = new Foo;
        $Bar = new Bar;

    I was surprised to see that CodeIgniter and Yii pass references to objects that can be accessed via the following method:

        $this->load->view('argument')

    The immediate advantage I can see is a lot less code and more user-friendliness. But I do wonder if it is more efficient, as these frameworks are presumably optimised? Or is it simply to make the code more user friendly? This was an interesting article: Do not use PHP references.


  • Large number of UPDATE queries slowing down page

    - by Bryan Lewis
    I am reading and validating large fixed-width text files (ranging from 10-50K lines) that are submitted via our ASP.net website (coded in VB.Net). I do an initial scan of the file to check for basic issues (line length, etc). Then I import each row into an MS SQL table. Each DB row basically consists of a record_ID (primary, auto-incrementing) and about 50 varchar fields. After the insert is done, I run a validation function on the file that checks each field in each row against a bunch of criteria (trimmed length, isnumeric, range checks, etc). If it finds an error in any field, it inserts a record into the Errors table, which has an error_ID, the record_ID and an error message. In addition, if the field fails in a particular way, I have to do a "reset" on that field. A reset might consist of blanking the entire field, or simply replacing the value with another value (e.g. replacing the string with a new one that has all illegal chars taken out). I have a 5,000 line test file. The upload, initial check, and import take about 5-6 seconds. The detailed error check and insert into the Errors table take about 5-8 seconds (this file has about 1200 errors in it). However, the "resets" part takes about 40-45 seconds for the 750 fields that need to be reset. When I comment out the resets function (returning immediately without actually calling the UPDATE stored proc), the process is very fast. With the resets turned on, the page takes 50 seconds to return. My UPDATE stored proc uses some recommended code from http://sommarskog.se/dynamic_sql.html, whereby it uses CASE instead of dynamic SQL:

        UPDATE dbo.Records
        SET dbo.Records.file_ID = CASE @field_name
                WHEN 'file_ID' THEN @field_value
                ELSE file_ID
            END,
        .
        .  (all 50 varchar field CASE statements here)
        .
        WHERE dbo.Records.record_ID = @record_ID

    Is there any way I can improve performance here? Can I somehow group all of these UPDATE calls into a single transaction? Should I be reworking the UPDATE query somehow? Or is it just the sheer quantity of 750+ UPDATEs, and things are just slow (it's a quad proc server with 8GB RAM)? Any suggestions appreciated.
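    On the "single transaction" question: wrapping all 750 calls in one explicit transaction means one log flush at commit instead of one implicit transaction per UPDATE, which is often the bulk of this kind of cost. A sketch, assuming ADO.NET and a stored proc named dbo.ResetField (names hypothetical; C# here for brevity, though the site is VB.Net):

        using System.Data;
        using System.Data.SqlClient;

        static void ResetFields(string connString,
                                (int recordId, string fieldName, string fieldValue)[] resets)
        {
            using (var conn = new SqlConnection(connString))
            {
                conn.Open();
                // One transaction for all resets: one log flush at commit
                // instead of one implicit transaction per UPDATE.
                using (var tx = conn.BeginTransaction())
                {
                    using (var cmd = new SqlCommand("dbo.ResetField", conn, tx))
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        var id    = cmd.Parameters.Add("@record_ID", SqlDbType.Int);
                        var name  = cmd.Parameters.Add("@field_name", SqlDbType.VarChar, 128);
                        var value = cmd.Parameters.Add("@field_value", SqlDbType.VarChar, 8000);

                        foreach (var r in resets)   // reuse one command, swap parameter values
                        {
                            id.Value    = r.recordId;
                            name.Value  = r.fieldName;
                            value.Value = (object)r.fieldValue ?? System.DBNull.Value;
                            cmd.ExecuteNonQuery();
                        }
                    }
                    tx.Commit();
                }
            }
        }

    A set-based alternative, collecting all resets and applying them in one UPDATE (or via a table-valued parameter where the SQL Server version allows it), would cut the round trips further.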


  • Ideas Related to Subset Sum with 2,3 and more integers

    - by rolandbishop
    I've been struggling with this problem just like everyone else, and I'm quite sure there have been more than enough posts explaining it. However, in terms of understanding it fully, I wanted to share my thoughts and get more efficient solutions from all the great people in here related to the Subset Sum problem. I've searched the Internet and there are actually a lot of sources, but I'm really willing to re-implement an algorithm, or find my own, in order to understand it fully. The key thing I'm struggling with is efficiency, considering the set size will be large (I do not have a limit, just conceptually large). The two phases I'm trying to implement ideas on are: finding two numbers that sum to a given integer T, finding three numbers, and eventually K numbers. Some ideas I've thought about: For the two-integer part, I'm thinking of basically sorting the array, O(n log n), and for each element searching for its complement (i.e. if the array element is 3 and T is 0, searching for -3). Maybe a hash table would be better, providing O(1) lookup of the complement? For three or more integers I've found an amazing blog post: http://www.skorks.com/2011/02/algorithms-a-dropbox-challenge-and-dynamic-programming/. However, even the author himself states that it is not applicable for large numbers. So, for 2, 3 and more integers, what ideas could be applied to the subset problem? I'm struggling with setting up a dynamic programming method that will be efficient for large inputs as well.
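    A sketch of the hash-set variant of the two-number phase (one pass, expected O(n) time and O(n) space):

        using System.Collections.Generic;

        static class TwoSum
        {
            // Returns true if any two distinct elements of xs sum to target.
            // One pass: for each x, check whether its complement target - x
            // has been seen already, then record x itself.
            public static bool Exists(int[] xs, int target)
            {
                var seen = new HashSet<int>();
                foreach (var x in xs)
                {
                    if (seen.Contains(target - x))
                        return true;
                    seen.Add(x);
                }
                return false;
            }
        }

    For three numbers, the usual next step is to sort once and run a two-pointer scan for each fixed element, giving O(n^2). The general K case is where dynamic programming over achievable sums comes in, and it stays pseudo-polynomial at best, which is why the blog post's approach breaks down for large values.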


  • How to shift pixels of a pixmap efficiently in Qt4

    - by stanleyxu2005
    Hello, I have implemented a marquee text widget using Qt4. I paint the text content onto a pixmap first, and then paint a portion of this pixmap onto a paint device by calling

        painter.drawTiledPixmap(offsetX, offsetY, myPixmap)

    My expectation is that Qt will fill the whole marquee text rectangle with the content from myPixmap. Is there an even faster way: shift all existing content to the left by 1px, then fill the newly exposed 1px-wide, N-px-high strip with the content from myPixmap?


  • How to decide on what hardware to deploy web application

    - by Yuval A
    Suppose you have a web application, no specific stack (Java/.NET/LAMP/Django/Rails, all good). How would you decide which hardware to deploy it on? What rules of thumb exist for determining how many machines you need? How would you translate parameters such as concurrent users, simultaneous connections and DB read/write ratio into a decision on how much, and which, hardware you need? Any resources on this issue would be very helpful...


  • Speeding up PostgreSQL query where data is between two dates

    - by Roger
    I have a large table (50m rows) which has some data with an ID and a timestamp. I need to query the table to select all rows with a certain ID where the timestamp is between two dates, but this currently takes over 2 minutes on a high-end machine. I'd really like to speed it up. I have found this tip which recommends using a spatial index, but the example it gives is for IP addresses. However, the speed increase (436s to 3s) is impressive. How can I use this with timestamps?
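    Before anything exotic: this access pattern (equality on the ID, range on the timestamp) is usually served by a composite B-tree index with the equality column first, so all matching rows form one contiguous index range. A sketch with hypothetical table and column names, issued from C# via Npgsql:

        using Npgsql;

        // Hypothetical table "events(id int, ts timestamp, ...)". A composite
        // index (id, ts) lets the planner answer
        // "id = ? AND ts BETWEEN ? AND ?" with one contiguous index-range
        // scan instead of a full table scan.
        static void CreateCompositeIndex(string connString)
        {
            using (var conn = new NpgsqlConnection(connString))
            {
                conn.Open();
                using (var cmd = new NpgsqlCommand(
                    "CREATE INDEX idx_events_id_ts ON events (id, ts)", conn))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }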


  • iPhone and Vertex Buffer Objects

    - by dancer
    I've just started playing around with OpenGL ES on the iPhone over the past couple of weeks, and I'm looking at refactoring some of my code to use Vertex Buffer Objects (VBOs). Before I do, though, I would like to make sure it'll be worth it. The problem is that afaik the only reason you create VBOs is to shift a chunk of data onto the graphics card so that it doesn't need to be retrieved from system RAM when it's used. The iPhone, however, does not have any dedicated video RAM that I'm aware of, so I'm struggling to see why I would benefit at all from using VBOs. I have seen talk around the internet with conflicting opinions, and Apple certainly wants devs to use them, so there's probably still a reason to use them, but I just wanted to see if anyone on SO had an opinion to add.


  • php mysql database users connection handling

    - by aviv
    What is the best way to handle MySQL database user connections in PHP? I have a web server running a PHP application on MySQL. I have created a database user for the application: dbuser1, with limited access (only query, insert and update tables; no ALTER TABLE). Now the question is: should I use the same dbuser1 widely in my scripts, so that if there are 100 concurrent people using my system, and hence 100 scripts running in parallel, they all connect to the database as the same dbuser1? Or should I create a few users and assign each script a different user, or load-balance between the dbusers?


  • What is the most efficient way to store a mapping "key -> event stream"?

    - by jkff
    Suppose there are ~10,000s of keys, where each key corresponds to a stream of events. I'd like to support the following operations:

        push(key, timestamp, event) - pushes event to the event queue for key, marked with the given timestamp. It is guaranteed that event timestamps for a particular key are pushed in sorted or almost sorted order.
        tail(key, timestamp) - get all events for key since the given timestamp. Usually the timestamp requests for a given key are almost monotonically increasing, almost synchronously with pushes for the same key.

    This stuff has to be persistent (although it is not absolutely necessary to persist pushes immediately or to keep tails strictly in sync with pushes), so I'm going to use some kind of database. What is the optimal kind of database structure for this task? Would it be better to use a relational database, a key-value store, or something else?
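    For reference, here are the two operations in their simplest in-memory form; a non-persistent C# sketch of the behaviour that any real backing store would need to reproduce:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class EventStreams<TEvent>
        {
            // Per key: events ordered by timestamp. SortedList keeps appends
            // cheap when timestamps arrive in (almost) sorted order, as the
            // question guarantees.
            private readonly Dictionary<string, SortedList<long, TEvent>> streams
                = new Dictionary<string, SortedList<long, TEvent>>();

            public void Push(string key, long timestamp, TEvent evt)
            {
                if (!streams.TryGetValue(key, out var stream))
                    streams[key] = stream = new SortedList<long, TEvent>();
                stream[timestamp] = evt;   // assumes timestamps are unique per key
            }

            public IEnumerable<KeyValuePair<long, TEvent>> Tail(string key, long since)
            {
                // Linear scan for clarity; a production version would
                // binary-search stream.Keys for the start position.
                return streams.TryGetValue(key, out var stream)
                    ? stream.Where(kv => kv.Key >= since)
                    : Enumerable.Empty<KeyValuePair<long, TEvent>>();
            }
        }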


  • iPhone Image Resources, ICO vs PNG, app bundle filesize

    - by Jasarien
    My application has a collection of around 1940 icons that are used throughout. They're currently in ICO, and new images provided to me come in ICO format too. I have noticed that they contain a 16x16 and a 32x32 representation of each icon in one file. Each file is roughly 4KB in size (as reported by Finder, but ls reports that they vary from ~1000 bytes to 5000 bytes). A very small number of these icons only contain the 32x32 representation, and as a result are only around 700 bytes in size. Currently I am bundling these icons with my application, and they are inflating the size of the app a bit more than I would like. Altogether, the images total just about 25.5MB. Xcode must do some kind of compression, because the resulting app bundle is about 12.4MB. Compressing this further into a ZIP (as it would be when submitted to the App Store) results in a final file of 5.8MB. I'm aware that the maximum limit for over-the-air App Store downloads has been raised to 20MB since the introduction of the iPad (I'm not sure if that extends to iPhone apps as well as iPad apps, though; if not, the limit would be 10MB). My worry is that new icons are going to be added (sometimes up to 10 icons per week), and will continue to inflate the app bundle over time. What is the best way to distribute these icons with my app?

    Things I've tried and not had much success with:

        Converting the icons from ICO to PNG: I tried this in the hope that the pngcrush utility would help with the file size, but it appears that it doesn't make much of a difference between a normal PNG and a crushed PNG (I believe it just optimises the image for display on the iPhone's GPU rather than compressing its size). Also, going from ICO to PNG actually increased the size of the icon files...
        Zipping the images, then uncompressing them on first run: while this did reduce the overall image sizes, I found that the effort needed to unzip them, copy them to the documents folder and ensure that duplication doesn't happen on upgrades was too much hassle to be worth the benefit. Also, on original and 3G iPhones, unzipping and copying around 25MB of images takes too long and creates a bad experience...

    Things I've considered but not yet tried:

        Instead of distributing the icons within the app bundle, host them online and download each icon on demand (it depends on the user's data which icons will actually be displayed and when). The issue with this is that bandwidth costs money and image downloads will be bandwidth intensive. However, my app currently has a small userbase of around 5,500 users (of which I estimate around 1,500 to be active, based on Flurry stats), and I have a huge unused bandwidth allowance with my current hosting package.

    So I'm open to thoughts on how to solve this tricky issue.


  • Good strategy for copying a "sliding window" of data from a table?

    - by chiborg
    I have a MySQL table from a third-party application that has millions of rows and only one index: the timestamp of each entry. Now I want to do some heavy self-joins and queries on the data using fields other than the timestamp. Doing the queries on the original table would bring the database to a crawl, and adding indexes to the table is not an option. Additionally, I only need entries that are newer than one week. My current strategy for doing the queries efficiently is to use a separate table (aux_table) that has the necessary indexes. My questions are: is there another way to do the queries? And if not, how do I update the data in the indexed table efficiently? So far I have found two approaches for updating aux_table:

        1. Truncate aux_table and insert the desired data from the original table. Not very efficient, because all the indexes must be re-created.
        2. Check for the biggest timestamp in aux_table and insert all entries with a greater or equal timestamp from the original table. Occasionally drop older entries. Only copying entries with a strictly greater timestamp leads to dropped entries (because of entries with the same timestamp that were inserted into the original table after the last update).
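    A sketch of approach 2 with the equal-timestamp problem handled by re-copying the boundary: delete the rows at aux_table's maximum timestamp, then re-insert everything at or after it from the original table. Table and column names are hypothetical, and aux_table is assumed to have been seeded once with a full copy; sketched in C# with MySql.Data:

        using MySql.Data.MySqlClient;

        // Incremental refresh of aux_table (hypothetical schema: both tables
        // share a `ts` timestamp column). Rows at the boundary timestamp are
        // deleted and re-copied, so entries that arrived later with the same
        // timestamp are not silently dropped.
        static void RefreshAuxTable(string connString)
        {
            using (var conn = new MySqlConnection(connString))
            {
                conn.Open();

                object boundary = new MySqlCommand(
                    "SELECT MAX(ts) FROM aux_table", conn).ExecuteScalar();

                var delete = new MySqlCommand(
                    "DELETE FROM aux_table WHERE ts = @b", conn);
                delete.Parameters.AddWithValue("@b", boundary);
                delete.ExecuteNonQuery();

                var copy = new MySqlCommand(
                    "INSERT INTO aux_table SELECT * FROM original_table WHERE ts >= @b",
                    conn);
                copy.Parameters.AddWithValue("@b", boundary);
                copy.ExecuteNonQuery();

                // Keep only the one-week window the queries actually need.
                new MySqlCommand(
                    "DELETE FROM aux_table WHERE ts < NOW() - INTERVAL 7 DAY",
                    conn).ExecuteNonQuery();
            }
        }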


  • fastest way to recursively crawl NTFS directories in C++

    - by Peter Parker
    I have written a small crawler to scan and re-sort directory structures. It is based on dirent (which is a small wrapper around FindNextFileA). In my first benchmarks it is surprisingly slow: around 123473ms for 4500 files (ThinkPad T60p, local Samsung 320GB 2.5" HD). 121481 files found in 123473 milliseconds. Is this speed normal? This is my code:

        int testPrintDir(std::string strDir, std::string strPattern="*", bool recurse=true) {
            struct dirent *ent;
            DIR *dir;
            dir = opendir(strDir.c_str());
            int retVal = 0;
            if (dir != NULL) {
                while ((ent = readdir(dir)) != NULL) {
                    if (strcmp(ent->d_name, ".") != 0 && strcmp(ent->d_name, "..") != 0) {
                        std::string strFullName = strDir + "\\" + std::string(ent->d_name);
                        std::string strType = "N/A";
                        bool isDir = (ent->data.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) != 0;
                        strType = (isDir) ? "DIR" : "FILE";
                        if (!isDir) {
                            //printf("%s <%s>\n", strFullName.c_str(), strType.c_str());
                            retVal++;
                        }
                        if (isDir && recurse) {
                            retVal += testPrintDir(strFullName, strPattern, recurse);
                        }
                    }
                }
                closedir(dir);
                return retVal;
            } else {
                /* could not open directory */
                perror("DIR NOT FOUND!");
                return -1;
            }
        }


  • page redirection problem via Bean in JSF 1.1

    - by johnbritto
    I have a request-scoped managed bean called AuthenticationBean. I am developing a small application with a login module, and user activation and deactivation. When I click on the activate or deactivate link, the action is processed in AuthenticationBean. I then want to redirect to some page depending on the activate or deactivate link. I have tried the following in the bean constructor:

        FacesContext.getCurrentInstance().getExternalContext().redirect("/user/activate.jsp");

    But this code is not working. Please help me.


  • Why is autorelease especially dangerous/expensive for iPhone applications?

    - by e.James
    I'm looking for a primary source (or a really good explanation) to back up the claim that the use of autorelease is dangerous or overly expensive when writing software for the iPhone. Several developers make this claim, and I have even heard that Apple does not recommend it, but I have not been able to turn up any concrete sources to back it up. SO references:

        autorelease-iphone
        Why does this create a memory leak (iPhone)?

    Note: I can see, from a conceptual point of view, that autorelease is slightly more expensive than a simple call to release, but I don't think that small penalty is enough to make Apple recommend against it. What's the real story?


  • PHP fastest method of reading server response

    - by Peter John
    Hi there, I'm having some real problems with the lag produced by using fgets to grab the server's response to some batch database calls I'm making. I'm sending through a batch of, say, 10,000 calls, and I've tracked the lag down to fgets causing the hold-up in the speed of my application, as the response for each call needs to be grabbed. I have found this thread http://bugs.php.net/bug.php?id=32806 which explains the problem quite well, but he's reading a file, not a server response, so fread could be a bit tricky: I could get part of the next line, and extra stuff which I don't want. Any help much appreciated!
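    The usual way around the "fread might hand me part of the next line" worry is a leftover buffer: read a large chunk, emit the complete lines, and carry the trailing partial line into the next read. A sketch of that pattern (in C# over a Stream; a PHP version with fread follows the same shape):

        using System.Collections.Generic;
        using System.IO;
        using System.Text;

        static IEnumerable<string> ReadLinesBuffered(Stream stream)
        {
            var buffer = new byte[64 * 1024];   // one big read per syscall
            var leftover = new StringBuilder(); // partial line from the last chunk
            int n;
            while ((n = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                // ASCII assumed here; a multi-byte encoding needs a Decoder.
                leftover.Append(Encoding.ASCII.GetString(buffer, 0, n));
                string chunk = leftover.ToString();
                int start = 0, nl;
                while ((nl = chunk.IndexOf('\n', start)) >= 0)
                {
                    yield return chunk.Substring(start, nl - start);
                    start = nl + 1;
                }
                leftover.Clear().Append(chunk.Substring(start)); // keep the partial line
            }
            if (leftover.Length > 0)
                yield return leftover.ToString(); // final unterminated line, if any
        }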


  • advantages of Zend_Db_Table vs raw (My)SQL?

    - by sunwukung
    Currently working on a new Zend application and developing the Model. Having worked with Zend_Db_Table before, I opted to replace references in the Model to the Table API with a custom SQL script to take care of data access duties. Now I'm looking at developing a new application/domain model, and I wanted to get some feedback from people re: their experiences with Zend_Db API vs raw SQL, and use cases where it would be preferable to use the API. From a project perspective, the DB platform is unlikely to change from MySQL - so it doesn't need to be particularly abstract - and I assume writing a custom SQL API will be more performant than the assorted classes the Zend DB API requires.


  • Speeding up jQuery empty() or replaceWith() Functions When Dealing with Large DOM Elements

    - by Levi Hackwith
    Let me start off by apologizing for not giving a code snippet. The project I'm working on is proprietary, and I'm afraid I can't show exactly what I'm working on. However, I'll do my best to be descriptive. Here's a breakdown of what goes on in my application:

        User clicks a button
        Server retrieves a list of images in the form of a data-table
        Each row in the table contains 8 data-cells that in turn each contain one hyperlink
        Each request by the user can contain up to 50 rows (I can change this number if need be)

    That means the table contains upwards of 800 individual DOM elements. My analysis shows that jQuery("#dataTable").empty() and jQuery("#dataTable").replaceWith(tableCloneObject) take up 97% of my overall processing time, averaging 4-6 seconds to complete. I'm looking for a way to speed up either of the above-mentioned jQuery functions when dealing with massive DOM elements that need to be removed/replaced. I hope my explanation helps.


  • Cocoa - does CGDataProviderCopyData() actually copy the bytes? Or just the pointer?

    - by jtrim
    I'm running that method in quick succession as fast as I can, and the faster the better, so obviously if CGDataProviderCopyData() is actually copying the data byte-for-byte, then I think there must be a faster way to directly access that data...it's just bytes in memory. Anyone know for sure if CGDataProviderCopyData() actually copies the data? Or does it just create a new pointer to the existing data?


  • parser error in aspx page (.NET 2.0) after converting from website to web application

    - by persistence911
    I have a website I recently converted from a website to a Web Application Project. It compiles successfully, but when I run the code I get this error:

        Parser Error Message: Could not load type 'DApplause.Logon'.
        Source Error:
        Line 1: <%@ Page Language="vb" AutoEventWireup="true" CodeBehind="Logon.aspx.vb" inherits="DApplause.Logon"%>
        Line 2: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
        Line 3: <HTML>
        Source File: /Shared/LogOn.aspx    Line: 1
        Version Information: Microsoft .NET Framework Version: 2.0.50727.3053; ASP.NET Version: 2.0.50727.3053

    I have tried to resolve this error for about a day now. I am currently running with the internal web server of VS 2005 SP1. I have Sain.dll in my bin folder because the assembly name used is Sain and the Root Namespace is DApplause. Please, I need urgent help.


  • Sql server query using function and view is slower

    - by Lieven Cardoen
    I have a table with an xml column named Data:

        CREATE TABLE [dbo].[Users](
            [UserId] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [nvarchar](max) NOT NULL,
            [LastName] [nvarchar](max) NOT NULL,
            [Email] [nvarchar](250) NOT NULL,
            [Password] [nvarchar](max) NULL,
            [UserName] [nvarchar](250) NOT NULL,
            [LanguageId] [int] NOT NULL,
            [Data] [xml] NULL,
            [IsDeleted] [bit] NOT NULL,...

    In the Data column there's this XML:

        <data>
            <RRN>...</RRN>
            <DateOfBirth>...</DateOfBirth>
            <Gender>...</Gender>
        </data>

    Now, executing this query:

        SELECT UserId FROM Users
        WHERE data.value('(/data/RRN)[1]', 'nvarchar(max)') = @RRN

    after clearing the cache takes (if I execute it a couple of times in a row) 910, 739, 630, 635, ... ms. Now, a DB specialist told me that adding a function and a view, and changing the query, would make searching for a user with a given RRN much faster. But instead, these are the results when I execute with the changes from the DB specialist: 2584, 2342, 2322, 2383, ... This is the added function:

        CREATE FUNCTION dbo.fn_Users_RRN(@data xml)
        RETURNS varchar(100)
        WITH SCHEMABINDING
        AS
        BEGIN
            RETURN @data.value('(/data/RRN)[1]', 'varchar(max)');
        END;

    The added view:

        CREATE VIEW vwi_Users WITH SCHEMABINDING AS
        SELECT UserId, dbo.fn_Users_RRN(Data) AS RRN
        FROM dbo.Users

    Indexes:

        CREATE UNIQUE CLUSTERED INDEX cx_vwi_Users ON vwi_Users(UserId)
        CREATE NONCLUSTERED INDEX cx_vwi_Users__RRN ON vwi_Users(RRN)

    And then the changed query:

        SELECT UserId FROM Users
        WHERE dbo.fn_Users_RRN(Data) = '59021626919-61861855-S_FA1E11'

    Why is the solution with a function and a view slower?

