Search Results

Search found 30217 results on 1209 pages for 'website performance'.


  • Improving Windows Authentication performance on IIS

    - by flalar
    We're struggling with performance issues on an ASP.NET MVC site that uses Windows Authentication. Response time is very slow on the first request to the site, when the user is being authenticated. Further, every time the Authorization header is sent from the browser, the response time increases by several seconds. The same issue occurs for both executed files and static content like CSS and JS. Access to the application is restricted to users within a certain role, and we are now planning to allow access to static files for all authenticated users to see if that helps. The authentication method in use is NTLM. How should we go about pinpointing why authentication degrades performance so drastically?
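
    A hedged starting point for the static-content experiment, assuming IIS 7+ and a static folder named Content (both assumptions; the windowsAuthentication section may need to be unlocked at server level first):

        <!-- sketch only: authPersistNonNTLM avoids a full NTLM handshake on
             every request over a kept-alive connection -->
        <system.webServer>
          <security>
            <authentication>
              <windowsAuthentication enabled="true" authPersistNonNTLM="true" />
            </authentication>
          </security>
        </system.webServer>
        <!-- let any authenticated user fetch static assets, bypassing the role check -->
        <location path="Content">
          <system.web>
            <authorization>
              <deny users="?" />
              <allow users="*" />
            </authorization>
          </system.web>
        </location>

    If the per-request slowdown disappears with authPersistNonNTLM, the cost was the repeated NTLM handshake rather than the role check.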

    Read the article

  • FTP account ownership on vhost directory prevents Apache from serving the website correctly

    - by CodeShining
    I've purchased a virtual server and was given a non-root, sudo-enabled user. I need an FTP account separate from that sudo-enabled account, so I created a no-login account just for this purpose. I've set up VSFTPd correctly, also enabling the "userlist" feature to specify which users are permitted to use FTP. Then I created an empty directory under my sudo-enabled account and gave ownership to the second account. To make it easier to follow: the main account (the one I use to manage my VPS) is called ubuntu and the FTP user is named ftpuser, and I created the directory /home/ubuntu/mywebsite, giving ownership to ftpuser:ftpuser. Then I uploaded a WordPress website, whose default permissions are 755 and 644. The issue is that Apache has no privileges to serve the website. How can I make the website run properly, and which approach is most secure? Should I run that virtual host as another user (if that's possible)? Should I put the FTP user in the www-data group (if that's possible) and run with permissions like 775 and 664? How can I solve this? Any help is appreciated. I'd like to keep the default permissions, so that updates won't break anything because of a permissions reset.
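
    The shared-group idea from the question is the common fix; a minimal sketch, assuming Apache runs as www-data (the Ubuntu default) and the paths from the question:

        # put the FTP user in Apache's group
        sudo usermod -a -G www-data ftpuser
        # hand the tree to ftpuser but give Apache's group access
        sudo chown -R ftpuser:www-data /home/ubuntu/mywebsite
        # setgid on directories so files uploaded later inherit the www-data group
        sudo find /home/ubuntu/mywebsite -type d -exec chmod g+s {} \;

    With the group set this way, the default 755/644 permissions are already readable by Apache, so update-time permission resets keep working; 775/664 is only needed if Apache must also write (uploads, automatic updates).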

    Read the article

  • nginx: URL rewrites and performance

    - by j0nes
    I have a website where I need to change the URL structure. The old URLs look like /olddir/part1_de.htm; the new ones will look like /newdir/sub/category/anotherpage.htm. There are a lot of URL rewrites I need to do; I expect about 500 distinct rewrites in the end. As my website gets quite a lot of traffic, my main concern right now is performance. My questions are: I assume that for each request, the rewrites block will be parsed and the regexes will be evaluated. Am I right? Will there be a performance penalty if I use these rewrites? Can nginx handle this? Are there any best practices to follow when doing a lot of rewrites?
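
    Yes, a server-level rewrite block is checked regex by regex on every request. For a few hundred one-to-one redirects the usual best practice is a map, which nginx stores in a hash table and evaluates only when the variable is read. A sketch (the map goes in the http block; entries are examples from the question):

        map $uri $new_uri {
            default               "";
            /olddir/part1_de.htm  /newdir/sub/category/anotherpage.htm;
            # ... one line per remaining old URL
        }
        server {
            if ($new_uri) {
                return 301 $new_uri;
            }
            # normal location blocks follow
        }

    With exact-string keys the lookup cost is effectively constant, so 500 entries are not a performance concern.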

    Read the article

  • SSD Performance for PHP?

    - by Andrew Fashion
    My programmer just built an application in PHP using the Doctrine ORM (it will be a high-traffic social networking website), and it's very heavy on PHP/Apache and CPU. The queries are wonderfully fast and MySQL is barely using any CPU; it's just Apache. I was curious whether an SSD would help speed up PHP/Apache, because I know the bottleneck is PHP reading multiple files, class files, and loading up a bunch of data. So common sense makes me think that if PHP is reading multiple PHP files, an SSD would only help as far as read/write speed? I was thinking of a high-performance SSD for the PHP application, but for user image uploads I would just continue using a 15k SAS drive. Are there any performance issues with using an SSD in this kind of situation? And would it prove to speed up PHP/Apache and help with the CPU problem?
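
    Before buying hardware, it may be worth ruling out the cheaper fix: an opcode cache keeps compiled PHP in shared memory, so the many class files are read and parsed once rather than on every request. A sketch using APC, era-appropriate for a setup like this (values are examples):

        ; php.ini
        extension = apc.so
        apc.enabled = 1
        apc.shm_size = 128M
        ; apc.stat = 0 skips the per-request stat() of every source file;
        ; it requires a cache clear on each deploy
        apc.stat = 0

    If the CPU is still pegged after that, the cost is executing code rather than reading it, and an SSD will not change much; the SSD-for-code, 15k-SAS-for-uploads split is otherwise reasonable.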

    Read the article

  • Website always having DNS problems

    - by Root
    I moved my website from shared hosting to a VPS. When it was on shared hosting, all I did was update my nameservers, whereas now I have my own VPS and I used one of my domains, sjdpublishing.com, as the primary domain for the VPS. I created nameservers ns1.sjdpublishing.com and ns2.sjdpublishing.com, and my actual website, creativeproperty.com.au, points to ns1.sjdpublishing.com and ns2.sjdpublishing.com. I am having repeated problems with creativeproperty.com.au. A few weeks back I had a problem that was resolved by flushing DNS; later I had a similar problem that flushing did not resolve. I posted a question here, and someone suggested removing the stale IP in my Mac OS X network settings, since nslookup in my terminal resolved creativeproperty.com.au to my router's IP; that fixed the problem. Now many of my clients are complaining that they have the same trouble accessing my website, and I don't know whether the fix is to flush DNS, change network settings, or something else. Can anyone please check whether my domains creativeproperty.com.au and sjdpublishing.com have correct records, and can anyone tell me the best solution for this issue?
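
    Anyone can verify the records from a terminal; a few dig queries show whether the delegation and both nameservers agree (assumes a Unix-like shell):

        # walk the delegation from the root down
        dig NS creativeproperty.com.au +trace
        # ask each of your own nameservers directly for the site's A record
        dig A creativeproperty.com.au @ns1.sjdpublishing.com
        dig A creativeproperty.com.au @ns2.sjdpublishing.com
        # compare against a public resolver
        dig A creativeproperty.com.au @8.8.8.8

    If the answers differ, or one nameserver fails to respond, clients will intermittently get stale or empty results, which matches the symptoms described.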

    Read the article

  • Commercial template (theme) sellers that grant a license to resell the template?

    - by Tony_Henrich
    I once came across a website template seller that granted the buyer rights to resell the template, but I forgot to bookmark it. All the commercial template vendors I know of (Template Monster, BoxedArt, DreamTemplate, etc.) don't allow this. Does anyone know of sellers that do? Some of them allow you to resell the template if it's packaged for different host software; for example, if I buy a website template, I am allowed to sell it as a WordPress, Joomla or Drupal theme. I am aware of public-domain and free template sites, but I am not looking for those; I am talking about templates that are being sold.

    Read the article

  • How to Check the Performance of a VPS?

    - by Ngu Soon Hui
    I have subscribed to a VPS offered by a hosting provider. The guaranteed spec is 1 GB RAM and 1M bandwidth. But I have found that from time to time the websites can be very slow, so slow that it can take more than 30 seconds to load a simple Joomla website. However, the site resumes its usual speed after a few minutes. This creates a problem when I want to report the issue to the hosting provider: they would say to me, "see, no problem". Of course there is no problem, because the problem is only there for a few minutes and then everything is normal again. This occasionally-slow behavior will bug me again a few days later, and the cycle repeats. Any idea how to solve this problem?
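
    One way to win the "it's fine now" argument with the provider is continuous evidence. A sketch of a snapshot script to run from cron every minute (log path is an example):

        #!/bin/bash
        # append a timestamped snapshot of load, memory and the busiest processes
        {
          date
          uptime
          free -m
          ps aux --sort=-%cpu | head -n 6
          echo '---'
        } >> /var/log/vps-perf.log

    If the log shows low local CPU and memory use during the slow windows, the contention is on the host's side (noisy neighbours, oversold I/O), and the log is your proof.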

    Read the article

  • Are there any known issues with Windows-8 installed on a VHD?

    - by Richard
    I installed the Windows 8 preview on a VHD image, and it seemed to work until I actually started using it. I'm seeing terrible performance: installing anything makes everything else stutter or freeze for up to a couple of seconds at a time. I looked at hard disk performance in Task Manager, and it doesn't seem right that the disk shows a 2500 ms response time while reading/writing at those speeds. Is this an issue with my drive, my installation, or VHDs in general?
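
    One known culprit, offered as an assumption: dynamically expanding VHDs grow on write during native boot, which can show up as exactly this kind of stall. If the disk is dynamic, recreating it as a fixed-size disk is the usual advice; a diskpart sketch (path and size are examples):

        rem run inside an elevated diskpart session; fixed disks
        rem pre-allocate all space up front instead of growing on write
        create vdisk file="C:\vhd\win8.vhd" maximum=40960 type=fixed

    You would then re-apply or reinstall the image onto the new disk; checking fragmentation of the host volume is also worthwhile.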

    Read the article

  • Which type of motherboard should I buy, and why?

    - by metal gear solid
    Budget doesn't matter; I just need the best performance with the lowest power consumption. I can purchase any cabinet, power supply and motherboard. Does the power supply have any relation to the form factor? Are the size of the motherboard and the number of slots the only differences between form factors? Is there any performance difference between motherboard form factors? Is a bigger (ATX) motherboard always better?

    Read the article

  • Can't reach server without proxy (website down from my home)

    - by user2128576
    I have a website hosted on Hostinger; however, I am experiencing problems with my WordPress site, and it's really annoying. If I understand the situation right, the server is blocking or denying access to my own website. When I visit the site with Google Chrome, it returns: "Oops! Google Chrome could not find". The same thing happens with Firefox: it can't find the server. But when I check whether my site is online through http://www.downforeveryoneorjustme.com/, it says the site is up and working. Another thing: if I access the website through a proxy, in both Chrome and Firefox, it works. Why is this? I also installed the plugin Better WP Security 5 days ago. Could the plugin have caused it? I don't remember setting any IPs to be blocked. Also, this happens at random times: sometimes I can access it, sometimes it fails to reach the server. I am currently developing the site live. Was I blocked by the server for frequently refreshing the page? (Duh, I'm a developer and I need to refresh to see changes.) Or is this a problem with my ISP's DNS server? How can I resolve this, and what are the possible fixes? Thanks in advance! -Jomar
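
    Two quick checks separate a DNS fault from a server-side block (the domain below is a placeholder for the real one):

        # 1) compare your ISP's resolver with a public one
        nslookup example-site.com
        nslookup example-site.com 8.8.8.8
        # 2) bypass DNS: hit the server's IP directly with the right Host header
        curl -v -H "Host: example-site.com" http://SERVER_IP/

    If the curl by IP works while the name fails, it is DNS; if even the IP times out from home but not through the proxy, your address has probably been locked out, and the security plugin's lockout settings are the first place to look.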

    Read the article

  • SOAP client call has slow performance

    - by Alon_A
    The OS is CentOS 6.2 with PHP 5.3.15. We have a Facebook application that uses PHP SOAP web services. We sometimes experience slow performance when connecting to these services, but we can't understand exactly what is causing the problem. We've tried to analyse the behavior using the profiling tool KCachegrind. In a call graph from an index.php page that took 21 seconds to load, you can clearly see that calling the SOAP client is the bottleneck. I've also noticed that exactly before the page finishes loading, this file is created in our server's /tmp folder: wsdl-apache-d1032d85dfd16c0d91a6b70facc70e43. These are the permissions of /tmp: drwxrwxrwt 6 root root 40960 Aug 30 10:39 tmp. I know it's not the most specific question, but if anyone has had similar performance issues with the SOAP client, we would love some ideas about what can cause this kind of problem, what we can do to investigate more accurately, or how to overcome it. Thanks.
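
    That wsdl-apache-* file appearing right before the page completes suggests the WSDL is being re-fetched during the request. A hedged sketch of the usual mitigations (endpoint URL is a placeholder):

        <?php
        // make sure the WSDL disk cache is actually used between requests
        ini_set('soap.wsdl_cache_enabled', '1');
        ini_set('soap.wsdl_cache_ttl', '86400');

        $client = new SoapClient('https://example.com/service?wsdl', array(
            'cache_wsdl'         => WSDL_CACHE_DISK,
            'connection_timeout' => 5,   // fail fast instead of hanging the page
        ));

    Pointing SoapClient at a local copy of the WSDL file removes the remote fetch from the request path entirely.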

    Read the article

  • Create taskbar shortcut to website in Windows 7

    - by BJ292
    I'd like to create a shortcut to a website on the Windows 7 taskbar that is not pinned under the default web browser. Currently, if I drag the favicon from the left end of the Firefox address bar to the Windows 7 taskbar, it pins a shortcut under the Firefox browser icon. Similarly, if I create a shortcut to a website on the desktop and drag it to the taskbar, it also ends up pinned under the Firefox icon. The problem is that to get to that shortcut I have to right-click the Firefox icon and then select the pinned shortcut. That is workable for me, but I want to do this for a child, so the shortcut needs to be right there on the taskbar as a stand-alone item. There is a workaround that pretty much solves the problem: create a new folder somewhere safe; create the shortcut to the website in that folder; right-click the taskbar and select Toolbars > New toolbar; then browse to the folder you created and select it as the new toolbar. The contents of the folder will now appear on the taskbar as shortcuts. You need to drag it from the right-hand end of the taskbar into the middle, turn off the title and text display, and make the icons large. I'd call this a 75% solution. Does anyone know how to make a web shortcut that looks and operates just like any other shortcut on the taskbar?

    Read the article

  • My server hostname doesn't work? [on hold]

    - by xSpartanCx
    I've got a Raspberry Pi running the Raspbian server edition, a modified Debian that runs well on the Pi. My problem is that the only way I can SSH into it with PuTTY is through the static IP. My router doesn't recognize the hostname; it shows the MAC address as the name. I think this is also why the Pi isn't serving my website online. The only way I've gotten it to work is to use my other Linux server to forward requests using virtual hosts, and that has to use the IP address too. Now that my other server is off, the website doesn't work and I can't SSH into the Pi (or find it anywhere on the network) using the hostname.
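
    If the router won't resolve LAN hostnames, mDNS sidesteps it entirely; a sketch for Raspbian (the .local name comes from the Pi's /etc/hostname):

        # on the pi: broadcast its hostname over multicast DNS
        sudo apt-get install avahi-daemon
        # from a Linux or Mac client on the same LAN:
        ssh pi@raspberrypi.local

    Windows clients need Apple's Bonjour service to resolve .local names. The public website is a separate matter: that needs a public DNS record pointing at your external IP plus port forwarding, regardless of LAN name resolution.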

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedure - Most common performance mistake SQL Server developers make.

    - by sqlworkshops
    SQL Server estimates the memory requirement at compile time. When a stored procedure or another plan-caching mechanism like sp_executesql or a prepared statement is used, the memory requirement is estimated based on the first set of execution parameters. This is a common reason for spills to tempdb and hence poor performance. Common memory-allocating queries are those that perform Sorts and Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article; it covers an introduction and query memory for Sort. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of spilling to tempdb, unless the memory requirement for a query does not change significantly based on predicates.

    This article covers underestimation and overestimation of memory for Hash Match operations; Plan Caching and Query Memory Part I covers underestimation and overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills to tempdb and hence negatively impacts performance, while overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance.

    To read additional articles I wrote, click here. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Let's create a CustomersState table that has 99% of customers in NY and the remaining 1% in WA. The Customers table used in Part I of this article is also used here. To observe Hash Warnings, enable 'Hash Warning' in SQL Profiler under Events > 'Errors and Warnings'.

        --Example provided by www.sqlworkshops.com
        drop table CustomersState
        go
        create table CustomersState (CustomerID int primary key, Address char(200), State char(2))
        go
        insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers
        update CustomersState set State = 'NY' where CustomerID % 100 != 1
        update CustomersState set State = 'WA' where CustomerID % 100 = 1
        go
        update statistics CustomersState with fullscan
        go

    Let's create a stored procedure that joins Customers with the CustomersState table, with a predicate on State.

        --Example provided by www.sqlworkshops.com
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1)
        end
        go

    Let's execute the stored procedure first with parameter value 'WA', which selects 1% of the data.

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 294 ms to complete. It was granted 6704 KB based on an estimate of 8000 rows. The estimated number of rows, 8000, is similar to the actual number of rows, 8000, so the memory estimate is fine. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'NY', which selects 99% of the data.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    The stored procedure took 2922 ms to complete. It was again granted 6704 KB based on an estimate of 8000 rows. The estimated number of rows, 8000, is far from the actual number of rows, 792000, because the estimate is based on the first set of parameter values supplied to the stored procedure, which was 'WA' in our case. This underestimation leads to a spill to tempdb, resulting in poor performance. There was a Hash Warning (Recursion) event in SQL Profiler.

    Let's recompile the stored procedure and then execute it first with parameter value 'NY'. In a production instance it is not advisable to use sp_recompile; one should instead use DBCC FREEPROCCACHE (plan_handle), because of locking issues involved with sp_recompile. Refer to our webcasts, www.sqlworkshops.com/webcasts, for further details.

        exec sp_recompile CustomersByState
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    Now the stored procedure took only 1046 ms instead of 2922 ms. It was granted 146752 KB of memory, and the estimated number of rows, 792000, matches the actual number of rows, 792000. The better performance of this execution comes from the better memory estimate, which avoids the spill to tempdb. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'WA'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms. This execution was granted more memory (146752 KB) than necessary (6704 KB), because the estimate of 792000 rows was based on the first parameter value after the recompile, 'NY', rather than the current value 'WA' (8000 rows). This overestimation degrades the Hash Match operation itself and can also starve other concurrently executing queries of memory, so overestimation is not recommended. The estimated number of rows, 792000, is much more than the actual number of rows, 8000.

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint, or the optimize for hint to allocate memory for a predefined data range. Let's recreate the stored procedure with the recompile hint.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, recompile)
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 297 ms and 1102 ms, in line with the previous optimal execution times. The execution with parameter value 'WA' has a good estimate as before: the estimated 8000 rows match the actual 8000 rows. The execution with parameter value 'NY' also has a good estimate and memory grant, because the stored procedure was recompiled with the current set of parameter values: the estimated 792000 rows match the actual 792000 rows. The compilation time and compilation CPU of 1 ms are not expensive compared to the performance benefit. There was no Hash Warning in SQL Profiler.

    Let's recreate the stored procedure with an optimize for hint of 'NY'.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, optimize for (@State = 'NY'))
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 353 ms with parameter value 'WA', much slower than the optimal execution time of 294 ms observed previously, because of memory overestimation: the optimize for hint value of 'NY' caused 792000 rows to be estimated where the actual number was 8000, and more memory than necessary was granted. The execution with parameter value 'NY' has the optimal execution time as before: thanks to the hint, the estimated 792000 rows match the actual 792000 rows, and an optimal amount of memory was granted. There was no Hash Warning in SQL Profiler.

    Summary: a cached plan can lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this with the recompile hint, but that adds compilation overhead; however, in most cases it is better to pay for compilation than to spill a sort to tempdb, which can be far more expensive. The other possibility is the optimize for hint, but if a query processes more data than the hint anticipates, it will still spill; on the other side, overestimation can create unnecessary memory pressure for other concurrently executing queries, and with Hash Match operations overestimation can itself lead to poor performance. When the values used in the optimize for hint have been archived out of the database, the estimate will be wrong and performance will suffer, so exercise caution before using the optimize for hint; the recompile hint is better in that case.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts. Register for the upcoming 3-day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning hands-on workshop in London, United Kingdom, during March 15-17, 2011: click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Bad-performance function in PHP: with large files, memory blows up! How can I refactor?

    - by André
    Hi, I have a function that strips lines out of files. I'm handling large files (more than 100 MB). The PHP memory limit is 256 MB, but the function that strips out lines blows up with a 100 MB CSV file. What the function must do is this: originally I have a CSV like

        Copyright (c) 2007 MaxMind LLC. All Rights Reserved.
        locId,country,region,city,postalCode,latitude,longitude,metroCode,areaCode
        1,"O1","","","",0.0000,0.0000,,
        2,"AP","","","",35.0000,105.0000,,
        3,"EU","","","",47.0000,8.0000,,
        4,"AD","","","",42.5000,1.5000,,
        5,"AE","","","",24.0000,54.0000,,
        6,"AF","","","",33.0000,65.0000,,
        7,"AG","","","",17.0500,-61.8000,,
        8,"AI","","","",18.2500,-63.1667,,
        9,"AL","","","",41.0000,20.0000,,

    and when I pass the CSV file to this function I get:

        locId,country,region,city,postalCode,latitude,longitude,metroCode,areaCode
        1,"O1","","","",0.0000,0.0000,,
        2,"AP","","","",35.0000,105.0000,,
        3,"EU","","","",47.0000,8.0000,,
        4,"AD","","","",42.5000,1.5000,,
        5,"AE","","","",24.0000,54.0000,,
        6,"AF","","","",33.0000,65.0000,,
        7,"AG","","","",17.0500,-61.8000,,
        8,"AI","","","",18.2500,-63.1667,,
        9,"AL","","","",41.0000,20.0000,,

    It only strips out the first line, nothing more. The problem is the performance of this function with large files: it blows up the memory. The function is:

        public function deleteLine($line_no, $csvFileName)
        {
            // this function strips a specific line from a file
            // if a line is stripped, the function returns TRUE, else FALSE
            //
            // e.g.
            // deleteLine(-1, xyz.csv); // strip last line
            // deleteLine(1, xyz.csv);  // strip first line

            // assign the file name
            $filename = $csvFileName;
            $strip_return = FALSE;

            $data = file($filename);
            $pipe = fopen($filename, 'w');
            $size = count($data);

            if ($line_no == -1)
                $skip = $size - 1;
            else
                $skip = $line_no - 1;

            for ($line = 0; $line < $size; $line++)
                if ($line != $skip)
                    fputs($pipe, $data[$line]);
                else
                    $strip_return = TRUE;

            return $strip_return;
        }

    Is it possible to refactor this function so it doesn't blow past the 256 MB PHP memory limit? Give me some clues. Best regards,
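
    One clue, as a sketch: file() loads the whole file into an array, so a 100 MB CSV plus PHP's per-element overhead easily bursts 256 MB. Streaming through a temporary file holds only one line in memory (the -1 last-line case would need one line of look-behind, omitted here):

        public function deleteLine($line_no, $csvFileName)
        {
            $in  = fopen($csvFileName, 'r');
            $out = fopen($csvFileName . '.tmp', 'w');
            $strip_return = FALSE;
            $current = 0;
            while (($row = fgets($in)) !== FALSE) {
                $current++;
                if ($current == $line_no) {   // skip the doomed line
                    $strip_return = TRUE;
                    continue;
                }
                fputs($out, $row);
            }
            fclose($in);
            fclose($out);
            rename($csvFileName . '.tmp', $csvFileName);
            return $strip_return;
        }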

    Read the article

  • Multi-tenant ASP.NET MVC – Introduction

    - by zowens
    I've read a few different blogs that talk about multi-tenancy and how to resolve some of the issues surrounding it. What I've come to realize is that these implementations overcomplicate the issues and give only a muddy implementation! I've seen some really illogical code out there. I have recently been building a multi-tenancy framework for internal use at eagleenvision.net, and through this process I've realized a few techniques that make building multi-tenant applications actually quite easy. I will be posting a few different entries on the subject and my personal implementation. In this first post, I will discuss what multi-tenancy means and how my implementation will be structured.

    So what's the problem? Here's the deal. Multi-tenancy is basically a technique of code reuse for web application code. A multi-tenant application is an application that runs a single instance for multiple clients; here the "clients" are different URL bindings on IIS using ASP.NET MVC. The problem with running different instances of the, essentially, same application is that you have to spin up different instances of ASP.NET, and as the number of running ASP.NET instances grows, so does the memory footprint of IIS. Stack Exchange shifted its architecture to multi-tenancy in March. As the blog post explains, multi-tenancy saves cost in terms of memory utilization and physical disc storage. If you use the same code base for many applications, multi-tenancy just makes sense: you'll reduce the amount of work it takes to synchronize the site implementations, and you'll thank your lucky stars later for choosing to use one application for multiple sites. Multi-tenancy allows the freedom of extensibility while relying on some pre-built code.

    You'd think this would be simple, but I have actually seen a real lack of reference material on the subject in terms of ASP.NET MVC, which is somewhat surprising given the number of ASP.NET MVC users. However, I will certainly fill the void ;). Implementing a multi-tenant application takes a little thinking. It's not straightforward, because the possibilities of implementation are endless, and I have yet to see a great implementation of a multi-tenant MVC application. The only one that comes close to what I have in mind is Rob Ashton's implementation (all the entries are listed on this page). There's some really nasty code in there... something I'd really like to avoid. He has also written a library (MvcEx) that attempts to aid multi-tenant development. This code is even worse, in my honest opinion; once I start seeing Reflection.Emit, I have to assume the worst :) In all seriousness, if his implementation makes sense to you, use it! It's a fine implementation that should be given a look, or at least look at the code. I will reference MvcEx going forward as a comparison to my implementation, and I will explain why my approach differs from MvcEx and how it is better or worse (hopefully better).

    Core goals of my multi-tenant implementation. The first, and foremost, goal is to use Inversion of Control containers to my advantage. As you will see throughout this series, I pass containers around quite frequently and rely on their use heavily. I will be using StructureMap in my implementation; however, you could probably use your favorite IoC tool instead. <RANT>However, please don't be stupid and abstract your IoC tool. Each IoC is powerful, and by abstracting the capabilities you're doing yourself a real disservice. Who in the world swaps out IoC tools...? No one!</RANT> (It had to be said.) I will outline some of the goodness of StructureMap as we go along; it is an invaluable tool in my tool belt and simple to use in my multi-tenant implementation. The second core goal is to represent a tenant as simply as possible. Just as a dependency container will be a first-class citizen, so will a tenant. This allows us to easily extend and use tenants, and it also allows different ways of "plugging" tenants into your application. In my implementation there will be a single dependency container for a single tenant, which enables isolation of the tenant's dependencies. The third goal is to use composition as a means to delegate "core" functions out to the tenant. More on this later.

    Features. In MvcEx, "Modules" are a code element of the infrastructure. I have simplified this concept and named it "Features". A feature is a simple element of an application: controllers can be specified to have a feature, and actions can have "sub-features". Each tenant can select the features it needs, and the other features will be hidden from the tenant's users. My implementation doesn't require everything to be a feature. A controller can be common to all tenants; for example, (as you will see) I have a "Content" controller that returns the CSS, JavaScript and images for a tenant. This is logic common to all tenants and shouldn't be hidden or considered a "feature"; Content is a core component.

    Up next. My next post will be all about the code. I will reveal some of the foundation of the way I do multi-tenancy, with posts dedicated to Foundation, Controllers, Views, Caching, Content and how to set up the tenants. Each post will go in depth on the issues and implementation details while adhering to the core goals outlined in this post. As always, comment with questions, DM me on twitter or send me an email.
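
    To make the container-per-tenant goal concrete, a minimal sketch of a first-class tenant using StructureMap (the type names are illustrative, not from the series):

        using System;
        using StructureMap;   // provides IContainer, Container, ConfigurationExpression

        public interface ITenant
        {
            string Name { get; }
            IContainer Container { get; }
        }

        public class Tenant : ITenant
        {
            public Tenant(string name, Action<ConfigurationExpression> registrations)
            {
                Name = name;
                // each tenant gets its own container, isolating its dependencies
                Container = new Container(registrations);
            }

            public string Name { get; private set; }
            public IContainer Container { get; private set; }
        }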

    Read the article

  • SQL Server 2008 uses half the CPUs

    - by ACALVETT
    I recently got my hands on a couple of 4-socket servers with Intel E7-4870s (10 cores per CPU), and with hyper-threading enabled that gave me 80 logical CPUs. The server runs Windows 2008 R2 SP1 along with SQL Server 2008 (we cannot currently deploy SQL 2008 R2 for the application being hosted). When SQL Server started, I noticed that only 2 NUMA nodes and 40 logical cores were configured, where there should have been 4 NUMA nodes and 80 logical cores (see below). The problem is caused by the fact that...(read more)

    Read the article

  • Do not use “using” in WCF Client

    - by oazabir
    You know that any IDisposable object must be disposed using using. So, you have been using using to wrap WCF service's ChannelFactory and clients like this:

        using (var client = new SomeClient()) { ... }

    Or, if you are doing it the hard and slow way (without really knowing why), then:

        using (var factory = new ChannelFactory<ISomeService>())
        {
            var channel = factory.CreateChannel();
            ...
        }

    That's what we have all learnt in school, right? We have learnt it wrong! When there's a network-related error, or the connection is broken, or the call has timed out before Dispose is called by the using keyword, then it results in the following exception when the using keyword tries to dispose the channel:

        failed: System.ServiceModel.CommunicationObjectFaultedException :
        The communication object, System.ServiceModel.Channels.ServiceChannel,
        cannot be used for communication because it is in the Faulted state.

        Server stack trace:
        at System.ServiceModel.Channels.CommunicationObject.Close(TimeSpan timeout)

        Exception rethrown at [0]:
        at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
        at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
        at System.ServiceModel.ICommunicationObject.Close(TimeSpan timeout)
        at System.ServiceModel.ClientBase`1.System.ServiceModel.ICommunicationObject.Close(TimeSpan timeout)
        at System.ServiceModel.ClientBase`1.Close()
        at System.ServiceModel.ClientBase`1.System.IDisposable.Dispose()

    There are various reasons why the underlying connection can be in a broken state before the using block completes and .Dispose() is called: the network connection dropping, IIS doing an app pool recycle at that moment, some proxy sitting between you and the service dropping the connection, and so on. The point is, it might seem like a corner case, but it's a likely corner case. If you are building a highly available client, you need to treat this properly before you go live. So, do NOT use using on a WCF Channel/Client/ChannelFactory. Instead you need to use an alternative. Here's what you can do. First, create an extension method:

        public static class WcfExtensions
        {
            public static void Using<T>(this T client, Action<T> work)
                where T : ICommunicationObject
            {
                try
                {
                    work(client);
                    client.Close();
                }
                catch (CommunicationException)
                {
                    client.Abort();
                }
                catch (TimeoutException)
                {
                    client.Abort();
                }
                catch (Exception)
                {
                    client.Abort();
                    throw;
                }
            }
        }

    Then use this instead of the using keyword:

        new SomeClient().Using(channel =>
        {
            channel.Login(username, password);
        });

    Or if you are using ChannelFactory then:

        new ChannelFactory<ISomeService>().Using(channel =>
        {
            channel.Login(username, password);
        });

    Enjoy!

    Read the article

  • SQL SERVER – Introduction to Rollup Clause

    - by pinaldave
    In this article we will go over a basic understanding of the ROLLUP clause in SQL Server. The ROLLUP clause is used to perform aggregate operations on multiple levels of a hierarchy. Let us understand how it works by using an example. Consider a table with the following structure and data:

        CREATE TABLE tblPopulation (
            Country VARCHAR(100),
            [State] VARCHAR(100),
            City VARCHAR(100),
            [Population (in Millions)] INT
        )
        GO
        INSERT INTO tblPopulation VALUES('India', 'Delhi','East Delhi',9 [...]
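
    The excerpt is cut off before the query itself; for orientation, a ROLLUP aggregation over this table would look something like the sketch below, yielding city rows plus state subtotals, country subtotals and a grand total:

        SELECT Country, [State], City,
               SUM([Population (in Millions)]) AS [Population (in Millions)]
        FROM tblPopulation
        GROUP BY ROLLUP (Country, [State], City)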

    Read the article

  • SSAS Multithreaded sync with Windows 2008 R2

    - by ACALVETT
    We have been happily running some of our systems on Windows 2003 and had an upgrade to W2K8 R2 on the list for quite some time. The upgrade has now completed, and we can start taking advantage of some of the new features, which is the reason for this post. For a long time we have used the sample Robocopy script from the SQLCat team to synchronize some of our larger SSAS databases. If you're wondering what I mean by large: around 5 TB with a good few thousand partitions. The script works like a dream...(read more)
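
    The Windows 2008 R2 feature in play is Robocopy's multithreaded /MT switch, which Windows 2003's version lacks. A sketch of a sync invocation (paths and thread count are examples):

        robocopy "\\source\OLAP\Data" "\\target\OLAP\Data" /MIR /MT:64 /R:2 /W:5

    /MIR mirrors the source tree including deletions, and /MT:64 copies 64 files in parallel, which is where the large partition count benefits.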

    Read the article
