Search Results

Search found 30217 results on 1209 pages for 'website performance'.


  • How to pass ctor args in Activator.CreateInstance?

    - by thames
    I need a performance-enhanced Activator.CreateInstance() and came across this article by Miron Abramson that uses a factory to create the instance in IL and then caches it. (I've included the code below from Miron Abramson's site in case it somehow disappears.) I'm new to IL-emit code and to anything beyond Activator.CreateInstance() for instantiating a class, so any help would be much appreciated. My problem is that I need to create an instance of an object whose ctor takes a parameter. I see there is a way to pass in the Type of the parameter, but is there a way to pass in the value of the ctor parameter as well? If possible, I would like to use a method similar to CreateObjectFactory<T>(params object[] constructorParams), as some objects I want to instantiate may have more than one ctor param.

        // Source: http://mironabramson.com/blog/post/2008/08/Fast-version-of-the-ActivatorCreateInstance-method-using-IL.aspx
        public static class FastObjectFactory
        {
            private static readonly Hashtable creatorCache = Hashtable.Synchronized(new Hashtable());
            private static readonly Type coType = typeof(CreateObject);

            public delegate object CreateObject();

            /// <summary>
            /// Create an object that will be used as a 'factory' for the specified type T
            /// </summary>
            public static CreateObject CreateObjectFactory<T>() where T : class
            {
                Type t = typeof(T);
                FastObjectFactory.CreateObject c = creatorCache[t] as FastObjectFactory.CreateObject;
                if (c == null)
                {
                    lock (creatorCache.SyncRoot)
                    {
                        // Double-checked locking: another thread may have built it already.
                        c = creatorCache[t] as FastObjectFactory.CreateObject;
                        if (c != null)
                            return c;

                        DynamicMethod dynMethod = new DynamicMethod("DM$OBJ_FACTORY_" + t.Name, typeof(object), null, t);
                        ILGenerator ilGen = dynMethod.GetILGenerator();
                        ilGen.Emit(OpCodes.Newobj, t.GetConstructor(Type.EmptyTypes));
                        ilGen.Emit(OpCodes.Ret);
                        c = (CreateObject)dynMethod.CreateDelegate(coType);
                        creatorCache.Add(t, c);
                    }
                }
                return c;
            }
        }

    Update to Miron's code from a commenter on his post, 2010-01-11:

        public static class FastObjectFactory2<T> where T : class, new()
        {
            public static Func<T> CreateObject { get; private set; }

            static FastObjectFactory2()
            {
                Type objType = typeof(T);
                var dynMethod = new DynamicMethod("DM$OBJ_FACTORY_" + objType.Name, objType, null, objType);
                ILGenerator ilGen = dynMethod.GetILGenerator();
                ilGen.Emit(OpCodes.Newobj, objType.GetConstructor(Type.EmptyTypes));
                ilGen.Emit(OpCodes.Ret);
                CreateObject = (Func<T>)dynMethod.CreateDelegate(typeof(Func<T>));
            }
        }
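    Since the question stays open-ended, here is one possible direction as a hedged sketch, not a drop-in answer: cache one compiled delegate per constructor signature. It uses expression trees instead of raw IL (a different technique from the article's, with the same cached-delegate effect); the class name ParamFactory and the cache-key scheme are my own assumptions for illustration.

        using System;
        using System.Collections.Concurrent;
        using System.Linq.Expressions;
        using System.Reflection;

        public static class ParamFactory
        {
            // One compiled factory per (type, parameter-type-list) combination.
            private static readonly ConcurrentDictionary<string, Func<object[], object>> cache =
                new ConcurrentDictionary<string, Func<object[], object>>();

            public static T Create<T>(params object[] args) where T : class
            {
                Type t = typeof(T);
                // Note: GetType() on a null argument throws; a real version would
                // let the caller supply the parameter types explicitly.
                Type[] argTypes = Array.ConvertAll(args, a => a.GetType());
                string key = t.FullName + "|" + string.Join(",", Array.ConvertAll(argTypes, x => x.FullName));

                Func<object[], object> factory = cache.GetOrAdd(key, _ =>
                {
                    // Assumes a ctor with exactly these parameter types exists.
                    ConstructorInfo ctor = t.GetConstructor(argTypes);
                    // Builds: (object[] a) => (object)new T((T0)a[0], (T1)a[1], ...)
                    ParameterExpression p = Expression.Parameter(typeof(object[]), "a");
                    Expression[] casts = new Expression[argTypes.Length];
                    for (int i = 0; i < argTypes.Length; i++)
                        casts[i] = Expression.Convert(Expression.ArrayIndex(p, Expression.Constant(i)), argTypes[i]);
                    Expression body = Expression.Convert(Expression.New(ctor, casts), typeof(object));
                    return Expression.Lambda<Func<object[], object>>(body, p).Compile();
                });
                return (T)factory(args);
            }
        }

    Usage would then look like ParamFactory.Create<MyType>(42, "name") (MyType being any class of yours): the first call for a given signature pays the compile cost, and later calls go straight to the cached delegate.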

    Read the article

  • clam anti-virus is slowing down my server performance

    - by Scarface
    Hey guys, I just installed ClamAV (http://sourceforge.net/projects/php-clamav/) for scanning file uploads on my Linux VPS running PHP. The problem is that, for some reason, just enabling the extension in the php.ini file slows down my entire network: regular requests, such as changing pages, that should take less than 1 second take 5. Has anyone ever experienced this before, or does anyone have a good virus-scanning alternative for scanning file uploads?

        extension=clamav.so

        [clamav]
        clamav.dbpath="/usr/share/clamav"
        clamav.keeptmp=20
        clamav.maxreclevel=16
        clamav.maxfiles=10000
        clamav.maxfilesize=26214400
        clamav.maxscansize=104857600
        clamav.keeptmp=0

    Read the article

  • Slow (to none) performance on SQL 2005 after attaching SQL 2000 database

    - by ploft
    Issue: I used detach/attach to move a SQL database from a SQL 2000 SP4 instance to a much beefier SQL 2005 SP2 server. I have run reindex, reorganize, and update statistics a couple of times, but without any success. Queries on SQL 2000 took about 1-2 seconds to complete; now the same queries take 2-3 minutes on SQL 2005 (and even on 2008, where I also tested them). I have looked at the execution plans, and the overall percentages match or are alike on each server.

    Read the article

  • Markup filter wanted for a public website

    - by sibidiba
    Developing a community site where everyone can post text, I'm looking for a markup filter:

    - Whatever is not part of the markup must be escaped (htmlspecialchars()) as-is.
    - It should turn URLs into links automatically.
    - It should support some form of basic markup (bold, image, url, pre, list).
    - It should have a simple parser that turns user-input text into HTML.
    - Content on the site is public to everyone, so XSS must not be allowed to happen.

    What do you suggest? What markup language in the first place: BBCode? Wiki? Markdown? Are there any complete APIs with good examples? PHP is available on the server side. If there were a WYSIWYG-like textarea in addition (like here on SO), that would be a fantastic bonus!

    Read the article

  • Very slow remote page load performance in QtWebKit (Windows)

    - by Gil
    I get very slow load times with QtWebKit on Windows 7 over ADSL. I am using the Qt demo browser on a Core2 Quad (64-bit Windows 7, 4GB RAM, 2GHz processor), through a VPN. Simplest example: the Google search page takes ~18 seconds to load, compared to 2.5 seconds in Chrome (cache cleared). On larger pages, with scripts etc., it is worse. I tried Qt 4.6 and also the Qt 4.7 beta, but don't see any difference. I see the same results with the Arora browser. Are there any settings or patches that can be applied to fix this? Thanks

    Read the article

  • How to improve performance of non-scalar aggregations on denormalized tables

    - by The Lazy DBA
    Suppose we have a denormalized table with about 80 columns that grows at the rate of ~10 million rows (about 5GB) per month. We currently have 3 1/2 years of data (~400M rows, ~200GB). We create a clustered index, to best suit retrieving data from the table, on the following columns that serve as our primary key:

        [FileDate] ASC, [Region] ASC, [KeyValue1] ASC, [KeyValue2] ASC

    ...because when we query the table, we always have the entire primary key. These queries therefore always result in clustered index seeks and are very fast, and fragmentation is kept to a minimum. However, we do have a situation where we want to get the most recent FileDate for every Region, typically for reports, i.e.:

        SELECT [Region]
             , MAX([FileDate]) AS [FileDate]
        FROM HugeTable
        GROUP BY [Region]

    The "best" solution I can come up with is to create a non-clustered index on Region. Although it means an additional index insert on the table during loads, the hit is minimal (we load 4 times per day, so fewer than 100,000 additional index inserts per load). Since the table is also partitioned by FileDate, results for our query come back quickly enough (200ms or so), and that result set is cached until the next load. However, I'm guessing that someone with more data-warehousing experience might have a more optimal solution, as this one, for some reason, doesn't "feel right".

    Read the article

  • NSURLConnection on simulator and iphone performance issues

    - by Nava Carmon
    I'm experiencing a weird problem and I wonder if anybody else has noticed this: I'm using NSURLConnection as it appears in Apple's examples to get XML files from a certain server - pretty straightforward. Most of the time it works, but sometimes it just gets stuck after initiating and never reaches the connection's delegate methods. I'm working over WiFi & 3G, against the same server all the time. When it does reach didFailWithError, I see that it was mostly a timeout error. When I enter the same link in Safari it takes a second to bring the data, and after another try I can access the link. What might be the reason for such weird behavior? How can I improve it? What is the role of the cache policy with NSURLConnection? Thanks, Nava

    Read the article

  • Subscription website architecture questions + SQL Server & .NET

    - by chopps
    Hey guys, I have a few questions about the architecture of a subscription service I am about to embark on, and I am looking for some feedback on how best to set it up. I won't have as large a customer base as Basecamp, maybe a few hundred customers, and I was wondering what would be a solid architecture for setting up the customer sites. I'm running SQL Server and .NET on a dedicated machine. Should I create a new database for each customer, so as to have control and isolation of data, or keep them all in one database? I am also thinking of creating a sub-domain for each customer as well, so modifications can be made to each site as needed. The customer URLs would look like this: https://customer1.foobar.com and https://customer2.foobar.com. I am going to have the ability to 'plug in' reports that will be uploaded to the site, so each customer can customize as needed. Off the top of my head this necessitates having each sub-domain on its own code base for the uploading of these reports. So on the main site the customer would sign up for their new subscription, and I would programmatically create a new directory for the customer from the main code base, then create a sub-domain pointing to the new directory for the customer, and then finally their database. Does this sound about right? Am I on the right track? How do other such sites accomplish the same thing? Thanks for letting me bend your ear for a bit on this.
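    For what it's worth, many multi-tenant setups avoid per-customer directories entirely: wildcard DNS (*.foobar.com) points every sub-domain at one site, and the code resolves the tenant from the host header at runtime. A minimal sketch of that idea, assuming one database per tenant and a connection-string naming convention ("Db_customer1") that is purely illustrative:

        using System;
        using System.Configuration;
        using System.Web;

        public static class TenantResolver
        {
            // Map "customer1.foobar.com" -> "customer1".
            public static string FromHost(string host)
            {
                string[] parts = host.Split('.');
                return parts.Length > 2 ? parts[0] : "default";
            }

            public static string ConnectionString(HttpContext ctx)
            {
                string tenant = FromHost(ctx.Request.Url.Host);
                ConnectionStringSettings cs = ConfigurationManager.ConnectionStrings["Db_" + tenant];
                if (cs == null)
                    throw new InvalidOperationException("Unknown tenant: " + tenant);
                return cs.ConnectionString;
            }
        }

    Per-tenant customizations (such as the uploaded reports) can then live in per-tenant data or content folders, while the application code stays a single instance.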

    Read the article

  • .NET website: creating a directory on a remote file server gives access denied

    - by tmfkmoney
    I have a web application that creates directories. The application works fine when creating a directory on the web server; however, it does not work when it tries to create a directory on our remote file server. The file server and the web server are in the same domain. I have created a domain user, "DOMAIN\aspnet", and that user exists on both servers. I am running my .NET app pool under the domain user. I have also tried using windows impersonation in the web.config to run under the domain user. I have verified that the domain user has full control of the remote directory. In an effort to debug this I have also given "Everyone" full control of the remote directory and added the domain user to the administrators group. I have a simple .NET test page on the web server to test this. Through the test page I am able to read the directory on the file server and get a list of everything in it. I am not able to upload files or to create directories on the file server. Here's code that works:

        var path = @"\\fileserver\images\";
        var di = new DirectoryInfo(path);
        foreach (var d in di.GetDirectories())
        {
            Response.Write(d.Name);
        }

    Here's code that doesn't work:

        path = Path.Combine(path, "NewDirectory");
        Directory.CreateDirectory(path);

    Here's the error I'm getting: Access to the path '\\fileserver\images\NewDirectory' is denied. I'm pretty stuck on this. Any ideas?
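    One way to pin down which identity is actually hitting the share is to impersonate explicitly around the create call. Below is a hedged sketch of the classic LogonUser pattern, offered as a diagnostic aid rather than the fix; the credentials are placeholders, and LOGON32_LOGON_NEW_CREDENTIALS means the supplied credentials are used for remote access only:

        using System;
        using System.IO;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        public static class RemoteDirHelper
        {
            [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            private static extern bool LogonUser(string user, string domain, string password,
                int logonType, int logonProvider, out IntPtr token);

            private const int LOGON32_LOGON_NEW_CREDENTIALS = 9; // creds used for remote access only
            private const int LOGON32_PROVIDER_WINNT50 = 3;

            public static void CreateRemoteDirectory(string path)
            {
                IntPtr token;
                if (!LogonUser("aspnet", "DOMAIN", "placeholder-password",
                        LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
                    throw new InvalidOperationException("LogonUser failed: " + Marshal.GetLastWin32Error());

                // Impersonate only for the remote call; Dispose reverts automatically.
                using (WindowsImpersonationContext ctx = WindowsIdentity.Impersonate(token))
                {
                    Directory.CreateDirectory(path);
                }
            }
        }

    If this succeeds where the app-pool identity fails, the usual suspect is the share-level permissions on \\fileserver\images (distinct from the NTFS permissions), since listing a directory needs less access than creating one.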

    Read the article

  • simple blog engine for website

    - by Alexander
    I know there are a lot of free open-source blog engines out there, such as BlogEngine.NET. However, that's overkill for my purpose. So far I have created my own simple one by storing posts in .xml files, so every time the main page loads, it reads all those xml files and displays them as posts. Now my problem is that when a user clicks on a post title, I want it to show on a new page (.aspx); so if the title is X, then I want a new page called X.aspx when the user clicks on the title on the homepage. I hope this makes sense. My question is: how do I create such a thing?
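    Rather than generating a physical .aspx file per post, a common pattern is a single post page that is fed the title (here via a query string; URL rewriting could later map /X.aspx onto it). A rough sketch under those assumptions - the page name Post.aspx and the App_Data/posts folder are invented for illustration:

        using System;
        using System.IO;
        using System.Web.UI;
        using System.Xml.Linq;

        public partial class Post : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Homepage links point at ~/Post.aspx?title=X instead of X.aspx.
                string title = Request.QueryString["title"];
                if (string.IsNullOrEmpty(title))
                {
                    Response.Redirect("~/Default.aspx");
                    return;
                }

                // Refuse path characters so the value cannot escape the posts folder.
                if (title.IndexOfAny(Path.GetInvalidFileNameChars()) >= 0)
                {
                    Response.StatusCode = 404;
                    return;
                }

                string file = Server.MapPath("~/App_Data/posts/" + title + ".xml");
                if (!File.Exists(file))
                {
                    Response.StatusCode = 404;
                    return;
                }

                // Assumes a post xml layout of <post><title/><body/></post>,
                // and that body was already sanitized when the post was saved.
                XDocument doc = XDocument.Load(file);
                Response.Write("<h1>" + Server.HtmlEncode((string)doc.Root.Element("title")) + "</h1>");
                Response.Write("<div>" + (string)doc.Root.Element("body") + "</div>");
            }
        }

    This keeps one code path for every post, and the per-post "pages" are just data.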

    Read the article

  • boost multi_index_container and erase performance

    - by rjoshi
    I have a boost multi_index_container declared as below, indexed by a hashed_unique id (unsigned long) and a hashed_non_unique transaction id (long). Insertion and retrieval of elements is fast, but when I delete elements it is much slower. I was expecting it to be constant time, since the key is hashed. E.g., to erase the elements from the container:

    - for 10,000 elements, it takes around 2.53927016 seconds
    - for 15,000 elements, it takes around 7.137100068 seconds
    - for 20,000 elements, it takes around 21.391720757 seconds

    Is it something I am missing, or is this expected behavior?

        class Session {
        public:
            Session() {
                // increment unique id
                static unsigned long counter = 0;
                boost::mutex::scoped_lock guard(mx);
                counter++;
                m_nId = counter;
            }
            unsigned long GetId() { return m_nId; }
            long GetTransactionHandle() { return m_nTransactionHandle; }
            ....
        private:
            unsigned long m_nId;
            long m_nTransactionHandle;
            static boost::mutex mx; // static: guards the static counter
            ....
        };

        typedef multi_index_container<
            Session*,
            indexed_by<
                hashed_unique< mem_fun<Session, unsigned long, &Session::GetId> >,
                hashed_non_unique< mem_fun<Session, long, &Session::GetTransactionHandle> >
            > // end indexed_by
        > SessionContainer;

        typedef SessionContainer::nth_index<0>::type SessionById;

        int main() {
            ...
            SessionContainer container;
            SessionById *pSessionIdView = &get<0>(container);
            unsigned counter = atoi(argv[1]);
            vector<Session*> vSes;
            vSes.reserve(counter); // was vSes(counter), which pre-filled the vector with nulls

            // insert
            for (unsigned i = 0; i < counter; i++) {
                Session *pSes = new Session();
                container.insert(pSes);
                vSes.push_back(pSes);
            }

            timespec start, end;
            clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);

            // erase
            for (unsigned i = 0; i < counter; i++) {
                pSessionIdView->erase(vSes[i]->GetId());
                delete vSes[i];
            }

            clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
            double elapsed = (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
            std::cout << "Total time taken for erase: " << elapsed << " s\n";
            return EXIT_SUCCESS;
        }

    Read the article

  • IE performance issues with offsetHeight and offsetWidth

    - by Paul
    I have a site that grabs the response text from an AJAX call and assigns it via 'innerHTML' to the div that will contain it. After the innerHTML assignment, I process the div by traversing the whole hierarchy of nodes and grabbing their offsetWidth/offsetHeight to do some operations with them. Why not the CSS style width/height? Because sometimes those values are not available, since I don't control what comes back in the AJAX response, plus I want the real box dimensions, including borders/scrollbars/padding. On large injections (let's say 7,000 new DOM elements), IE takes far longer than FF/Safari just to read offsetWidth/offsetHeight; in fact, if I weren't doing injection but just rendering the HTML in the browser and processing it, it would be much faster. But that is not an option, since I have to inject it into a div that will contain it. Has anybody dealt with this kind of issue before? Is there an alternative to innerHTML? I have tried using a documentFragment to inject and process and then move the result into the div, and still I don't see much gain. How else can I get the values that offsetWidth/offsetHeight provide? Thanks a bunch for any suggestions. Paul

    Read the article

  • Website revision control system

    - by kylex
    I'm looking for the ability to use a revision control system for websites, but ALSO to have the revisions go live immediately. Example: a developer commits to the repository, and those changes are immediately pulled from the repository and go live. Any suggestions on available software?

    Read the article

  • Unexpected performance curve from CPython merge sort

    - by vkazanov
    I have implemented a naive merge-sorting algorithm in Python. The algorithm and test code are below:

        import time
        import random
        import matplotlib.pyplot as plt
        import math
        from collections import deque

        def sort(unsorted):
            if len(unsorted) <= 1:
                return unsorted
            to_merge = deque(deque([elem]) for elem in unsorted)
            while len(to_merge) > 1:
                left = to_merge.popleft()
                right = to_merge.popleft()
                to_merge.append(merge(left, right))
            return to_merge.pop()

        def merge(left, right):
            result = deque()
            while left or right:
                if left and right:
                    elem = left.popleft() if left[0] > right[0] else right.popleft()
                elif not left and right:
                    elem = right.popleft()
                elif not right and left:
                    elem = left.popleft()
                result.append(elem)
            return result

        LOOP_COUNT = 100
        START_N = 1
        END_N = 1000

        def test(fun, test_data):
            start = time.clock()
            for _ in xrange(LOOP_COUNT):
                fun(test_data)
            return time.clock() - start

        def run_test():
            timings, elem_nums = [], []
            test_data = random.sample(xrange(100000), END_N)
            for i in xrange(START_N, END_N):
                loop_test_data = test_data[:i]
                elapsed = test(sort, loop_test_data)
                timings.append(elapsed)
                elem_nums.append(len(loop_test_data))
                print "%f s --- %d elems" % (elapsed, len(loop_test_data))
            plt.plot(elem_nums, timings)
            plt.show()

        run_test()

    As far as I can see everything is OK, and I should get a nice N*logN curve as a result. But the picture differs a bit: the plotted curve has irregular jumps in it. Things I've tried to investigate the issue:

    - PyPy. The curve is OK.
    - Disabled the GC using the gc module. Wrong guess: debug output showed that it doesn't even run until the end of the test.
    - Memory profiling using meliae: nothing special or suspicious.
    - I had another implementation (a recursive one using the same merge function), and it acts in a similar way.

    The more full test cycles I create, the more "jumps" there are in the curve. So how can this behaviour be explained and, hopefully, fixed?

    UPD: changed lists to collections.deque
    UPD2: added the full test code
    UPD3: I use Python 2.7.1 on Ubuntu 11.04, on a quad-core 2GHz notebook. I tried to turn off most other processes: the number of spikes went down, but at least one of them was still there.

    Read the article

  • Website url whitelists

    - by buggedcom
    I'm building a user-content parser and am adding an automatic link parser. I'm adding a dialogue that confirms that the user wants to go to the particular site being linked to. This is for two reasons: anti-phishing and spam combating. However, I want to be able to disable both the dialogue and the nofollow additions for commonly used websites, so I'm building a whitelist. Are there any common whitelists around, or should I start building one from scratch?
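    Whatever the source of the list, the lookup side is small. A sketch of the matching logic (the entries shown are illustrative, not a recommended whitelist): matching on the registrable domain also covers subdomains such as en.wikipedia.org, which an exact-equality check alone would miss.

        using System;
        using System.Collections.Generic;

        public static class LinkWhitelist
        {
            private static readonly HashSet<string> trusted = new HashSet<string>(
                StringComparer.OrdinalIgnoreCase)
                { "wikipedia.org", "stackoverflow.com", "github.com" }; // illustrative entries

            public static bool IsTrusted(string url)
            {
                Uri uri;
                if (!Uri.TryCreate(url, UriKind.Absolute, out uri))
                    return false;

                string host = uri.Host;
                foreach (string domain in trusted)
                {
                    // Match the domain itself or any subdomain of it.
                    if (host.Equals(domain, StringComparison.OrdinalIgnoreCase) ||
                        host.EndsWith("." + domain, StringComparison.OrdinalIgnoreCase))
                        return true;
                }
                return false;
            }
        }

    The confirm dialogue and the rel="nofollow" attribute would then be skipped only when IsTrusted returns true.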

    Read the article

  • Will I/O completion ports (Windows) or asynchronous I/O (AIO) improve the performance of a multithreaded server?

    - by Naga
    Hi, I want to use I/O completion ports for the Windows version of my server application, and asynchronous I/O (AIO) for the Solaris and Linux versions. The application server is multithreaded: it can accept a lot of concurrent TCP connections and can process many requests per connection. Every request is handled by a separate detached thread. Are these criteria good enough reasons to use the latest AIO? And is there any standardization by which one piece of code could be used on all platforms? Thanks, Naga

    Read the article

  • Optimizing Smart Client Performance

    - by Burt
    I have a smart client (WPF) that makes calls to the server via services (WCF). The screen I am working on holds a list of objects that it loads when the constructor is called. I am able to add, edit, and delete records in the list. Typically, after every add or delete I reload the entire model from the service again; there are a number of reasons for this, including the fact that the data may have changed on the server between calls. This approach has proved to be a big performance hit, because on every add and edit I am loading everything and sending the whole list up and down the wire. What other options are open to me? Should I only be sending the required information to the server, and how would I go about not reloading all the data every time an add or delete is performed?
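    One common alternative, sketched below with invented names (this is not the poster's actual contract): have each write operation return the authoritative server-side row, so the client patches its local collection instead of reloading the whole list, and carry a row-version token so stale data is detected instead of assumed.

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class RecordDto
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Name { get; set; }
            [DataMember] public byte[] RowVersion { get; set; } // for optimistic concurrency
        }

        [ServiceContract]
        public interface IRecordService
        {
            [OperationContract]
            RecordDto Add(RecordDto item);      // returns the stored row (server-set Id, RowVersion)

            [OperationContract]
            RecordDto Update(RecordDto item);   // faults if the supplied RowVersion is stale

            [OperationContract]
            void Delete(int id);
        }

    On the client, the returned DTO replaces (or is inserted into) the corresponding item in the bound ObservableCollection; a stale-RowVersion fault then becomes the signal that a genuine refresh is needed, rather than refreshing unconditionally after every call.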

    Read the article

  • Block specific IP block from my website in PHP

    - by iTayb
    I'd like, for example, to block every IP from the base 89.95 (89.95.*.*). I don't have .htaccess files on my server, so I'll have to do it with PHP.

        if ($_SERVER['REMOTE_ADDR'] == "89.95.25.37")
            die();

    ...would block a specific IP. How can I block entire IP blocks? Thank you very much.

    Read the article

  • SQL Server insert performance

    - by Jose
    I have an insert query that gets generated like this:

        INSERT INTO InvoiceDetail (LegacyId, InvoiceId, DetailTypeId, Fee, FeeTax,
            InvestigatorId, SalespersonId, CreateDate, CreatedById, IsChargeBack,
            Expense, RepoAgentId, PayeeName, ExpensePaymentId, AdjustDetailId)
        VALUES (1, 1, 2, 1500.0000, 0.0000, 163, 1002, '11/30/2001 12:00:00 AM',
            1116, 0, 550.0000, 850, NULL, @ExpensePay1, NULL);
        DECLARE @InvDetail1 INT;
        SET @InvDetail1 = (SELECT @@IDENTITY);

    This query is generated for 110K rows, and it takes 30 minutes for all of these queries to execute. I checked the query plan, and the largest-cost nodes are a Clustered Index Insert at 57% query cost (which has a long XML that I don't want to post) and a Table Spool at 38% query cost:

        <RelOp AvgRowSize="35" EstimateCPU="5.01038E-05" EstimateIO="0"
               EstimateRebinds="0" EstimateRewinds="0" EstimateRows="1"
               LogicalOp="Eager Spool" NodeId="80" Parallel="false"
               PhysicalOp="Table Spool" EstimatedTotalSubtreeCost="0.0466109">
          <OutputList>
            <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvoiceId" />
            <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvestigatorId" />
            <ColumnReference Column="Expr1054" />
            <ColumnReference Column="Expr1055" />
          </OutputList>
          <Spool PrimaryNodeId="3" />
        </RelOp>

    So my question is: what can I do to improve the speed of this thing? I already run ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL before the queries and ALTER TABLE TABLENAME CHECK CONSTRAINTS ALL after them, and that hardly shaved anything off the time. Now, I am running these queries from a .NET application that uses a SqlCommand object to send each query. I then tried writing the SQL commands out to a file and executing it using sqlcmd, but I wasn't getting any updates on how it was doing, so I gave up on that. Any ideas, hints, or help?
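    If it's an option, the biggest win is usually to stop paying a network round trip per row. A minimal sketch of that route using SqlBulkCopy, assuming the 110K rows can be staged into a DataTable first (the helper name is invented; the table name comes from the question):

        using System.Data;
        using System.Data.SqlClient;

        public static class InvoiceDetailLoader
        {
            public static void BulkInsert(string connectionString, DataTable rows)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (var bulk = new SqlBulkCopy(conn))
                    {
                        bulk.DestinationTableName = "InvoiceDetail";
                        bulk.BatchSize = 10000;   // commit in chunks rather than one huge batch
                        bulk.BulkCopyTimeout = 0; // no timeout for the full load
                        bulk.WriteToServer(rows);
                    }
                }
            }
        }

    This removes the per-row SELECT @@IDENTITY entirely; if the new identity values are needed afterwards, they can be read back in one set-based query keyed on LegacyId. As a side note, even in the per-row version SCOPE_IDENTITY() is generally safer than @@IDENTITY, which can return a value generated by a trigger.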

    Read the article

  • Problem integrating a WordPress blog into a CakePHP website

    - by Nishant
    Hello everyone, I was working on a CakePHP site that was successfully delivered. But recently the client appeared again and asked me to add a WordPress blog to it, to cover the blogging side of his site. He wants to share authentication between CakePHP and WP: whoever registers on his site and then logs in must, on clicking the Blog tab, be redirected to the WP blog with the session still intact. After some googling I have installed it in the /app/webroot/blog folder, but I am not able to edit the .htaccess file. Please point me in the right direction: first, how to share authentication between CakePHP and WordPress; and second, how to customize the .htaccess file so that the URLs look good. Thanks in advance!

    Read the article
