Search Results

Search found 20904 results on 837 pages for 'disk performance'.


  • Improving Windows Authentication performance on IIS

    - by flalar
    We're struggling with performance issues on an ASP.NET MVC site that uses Windows Authentication. Response time is very slow on the first request to the site, while the user is being authenticated. Further, every time the Authorization header is sent from the browser, the response time increases by several seconds. The same issue occurs for both executed files and static content like CSS and JS. Access to the application is restricted to users within a certain role, and we are now planning to allow access to static files for all authenticated users to see if that helps. The authentication method in use is NTLM. How should we go about pinpointing why authentication degrades performance so drastically?

    Read the article

  • Server with 3 disks, what's the best HD configuration?

    - by aleroot
    I have an HP server with a quad-core Opteron and three 250 GB SATA disks, and I'm trying to decide on the best disk configuration for performance and reliability. There are mainly two scenarios: RAID 5 across the three disks (with a 100 GB partition on the array for the OS and the remaining space as a data partition), or RAID 1 plus one disk for the OS (the OS installed on the single disk, the RAID 1 array used as a data partition). What's the best configuration? The server runs MySQL and a small document file server; the OS to be installed is Windows Server 2008 ...

    Read the article

  • Windows 8 disk errors

    - by wrongusername
    So yesterday, I forcibly restarted my Windows 8 PC. VMWare Workstation was having some trouble with the guest Linux Mint OS. It wasn't responding for some time, so I tried suspending it September 28th or perhaps even before. It wouldn't suspend -- I forgot what the window looked like, but all options in the power menu were disabled (i.e. "Shutdown," "Power Off," and options like that were all disabled). I eventually killed the VMWare application through Task Manager, though I was too lazy to hunt down the running virtual machine itself, and decided to kill it by just shutting down my PC entirely. The PC wouldn't shut down for quite some time after the monitor went blank, so I did a cold reset by holding the power button. I then powered it on again and Windows briefly gave me some message like "Search for KERNEL_STACK_INPAGE_ERROR." Windows then started diagnosing some problems and gave me the message, "Repairing disk errors. This might take over an hour to complete." That was yesterday night, and I went to sleep without waiting for it to finish. This morning, it said that the repair failed, and that the log was at C:\windows\system32\LogFiles\srt\srtTrail.txt (as I remember it -- I don't have the exact path I wrote down right now). It gave me some other options to troubleshoot, such as resetting Windows (files and settings still intact, but programs not installed through the app store will be erased). That didn't work (no error message given, I was just told it didn't work). I tried rebooting in safe mode, the same diagnosis process begins, except that this time it doesn't bother with the automatic repairs again. So I tried using the command prompt to try to see if my files are at least still there. I was on the X drive, and I couldn't cd to the C drive. I couldn't find my folder under Users (of course?), and couldn't find the srt folder under LogFiles either. I am not sure what to try next. I have backed up everything, but to the cloud, so if absolutely necessary I can start off with a fresh copy of Windows and restore all my data, though it would be a hassle. Any thoughts on what might be wrong or what I can try? My computer was purchased just this June, so the hard drive should still be pretty new.

    Read the article

  • nginx: URL rewrites and performance

    - by j0nes
    I have a website where I need to change the URL structure. The old URLs look like /olddir/part1_de.htm, the new ones will look like /newdir/sub/category/anotherpage.htm. There are a lot of URL rewrites I need to do, I assume about 500 distinct rewrites in the end. As my website gets quite a lot of traffic, my main concern is about performance at the moment. My questions are: I assume that for each request, the rewrites block will be parsed and the regex will be evaluated. Am I right? Will there be a performance penalty if I use these rewrites? Can nginx handle this? Are there any "best practices" to follow when doing a lot of rewrites?

    Read the article

  • SSD Performance for PHP?

    - by Andrew Fashion
    My programmer just built an application with PHP using Doctrine ORM (it will be a high-traffic social networking website), and it's very heavy on PHP/Apache and CPU. The queries are wonderfully fast, and MySQL is barely using any CPU; it's just Apache. I was curious whether an SSD would help speed up PHP/Apache, because I know the bottleneck is PHP reading multiple files, class files, and loading up a bunch of data. So common sense makes me think that if PHP is reading multiple PHP files, an SSD would only help as far as read/write goes? I was thinking of using a high-performance SSD for the PHP application, but for user image uploads I would just continue using 15k SAS drives. Are there any performance issues with using an SSD in this kind of situation? And would it actually help speed up PHP/Apache and ease the CPU problem?
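    For reference, a quick way to check whether those class-file reads are even hitting the disk is PHP's realpath cache, which fronts the stat() calls made when files are included (available since PHP 5.3.2). A minimal diagnostic sketch - if the Doctrine and class files already show up here under load, the disk is largely out of the picture and an SSD is unlikely to change much:

        <?php
        // Dump how much of the realpath cache is in use and which paths it holds.
        printf("realpath cache used: %d bytes (limit setting: %s)\n",
               realpath_cache_size(),
               ini_get('realpath_cache_size'));

        foreach (realpath_cache_get() as $path => $entry) {
            printf("%s (expires in %d s)\n", $path, $entry['expires'] - time());
        }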

    Read the article

  • Linux: find out what process is using all the RAM?

    - by Timur
    Before actually asking, just to be clear: yes, I know about disk cache, and no, it is not my case :) Sorry for this preamble :) I'm using CentOS 5. Every application in the system is swapping heavily, and the system is very slow. When I do free -m, here is what I get:

                         total       used       free     shared    buffers     cached
        Mem:              3952       3929         22          0          1         18
        -/+ buffers/cache:           3909         42
        Swap:            16383         46      16337

    So I actually have only 42 MB to use! As far as I understand, the -/+ buffers/cache line already excludes the disk cache, so I really do have only 42 MB, right? I thought I might be wrong, so I tried switching off the disk caching, and it had no effect - the picture remained the same. So I decided to find out who is using all my RAM, and I used top for that. But, apparently, it reports that no process is using my RAM. The only notable process in my top is MySQL, but it is using 0.1% of RAM and 400 MB of swap. It is the same picture when I try to run other services or applications - everything goes into swap, and top shows that MEM is barely used (0.1% maximum for any process).

        top - 15:09:00 up 2:09, 2 users, load average: 0.02, 0.16, 0.11
        Tasks: 112 total, 1 running, 111 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 4046868k total, 4001368k used, 45500k free, 748k buffers
        Swap: 16777208k total, 68840k used, 16708368k free, 16632k cached

         PID  USER   PR  NI  VIRT   RES   SHR   S  %CPU  %MEM  TIME+     SWAP  COMMAND
        3214  ntp    15   0  23412  5044  3916  S  0.0   0.1   0:00.00   17m   ntpd
        2319  root    5 -10  12648  4460  3184  S  0.0   0.1   0:00.00   8188  iscsid
        2168  root   RT   0  22120  3692  2848  S  0.0   0.1   0:00.00   17m   multipathd
        5113  mysql  18   0  474m   2356  856   S  0.0   0.1   0:00.11   472m  mysqld
        4106  root   34  19  251m   1944  1360  S  0.0   0.0   0:00.11   249m  yum-updatesd
        4109  root   15   0  90152  1904  1772  S  0.0   0.0   0:00.18   86m   sshd
        5175  root   15   0  90156  1896  1772  S  0.0   0.0   0:00.02   86m   sshd

    Restarting doesn't help and, by the way, is very slow, which I wouldn't normally expect on this machine (4 cores, 4 GB RAM, RAID1). So with that, I'm pretty sure it is not the disk cache that is using the RAM, because normally the cache would shrink and let other processes use the RAM rather than push them into swap. So, finally, the question: does anyone have any ideas how to find out what process is actually using the memory so heavily?

    Read the article

  • Is it possible to change the Raid5 chunk size of an existing device?

    - by AlexCombas
    I have an existing RAID5 device which I created with mdadm on Linux. When I created the device I set the chunk size to 64, but now I would like to test the performance of various chunk sizes without having to rebuild my entire system. If it is not possible to do this live, is it possible by booting from a rescue disk? Any advice on the steps to do this, either live or with a rescue disk, would be greatly appreciated.

    Read the article

  • How to Check the Performance of VPS?

    - by Ngu Soon Hui
    I have subscribed to a VPS offered by a hosting provider. The guaranteed specs are 1 GB RAM and 1 M bandwidth. But I have found that from time to time the websites can be very slow - so slow that it can take more than 30 seconds to load a simple Joomla website. However, the sites resume their usual speed after a few minutes. This creates a problem when I want to report the performance issue to the hosting provider: they just say "see, no problem". Of course there is no problem, because the slowness is only there for a few minutes and then everything is normal again. The occasionally-slow problem bugs me again a few days later and the cycle repeats. Any idea how to solve this?
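    One way to make the intermittent slowness visible to the provider is to log response times continuously instead of relying on memory. A minimal PHP probe sketch, run from cron every minute - the URL and log path are placeholders:

        <?php
        // probe.php - fetch one page and append the elapsed time to a CSV log.
        $url = 'http://example.com/';          // placeholder: your Joomla front page
        $log = '/var/log/vps-probe.csv';       // placeholder: any writable path

        $start = microtime(true);
        $body  = @file_get_contents($url);
        $ms    = (int) round((microtime(true) - $start) * 1000);

        file_put_contents(
            $log,
            sprintf("%s,%d,%s\n", date('c'), $ms, $body === false ? 'FAIL' : 'OK'),
            FILE_APPEND
        );

    A crontab entry such as * * * * * php /path/to/probe.php then gives a minute-by-minute record you can hand to the provider whenever a slow period shows up.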

    Read the article

  • Which type of motherboard should I buy, and why?

    - by metal gear solid
    Budget doesn't matter; I just need the best performance with low power consumption. I can purchase any cabinet, power supply, and motherboard. Does the power supply have any relation to the form factor? Are the size of the motherboard and the number of slots the only differences between form factors? Is there any performance difference between motherboards of different form factors? Is a bigger (ATX) motherboard always better?

    Read the article

  • Soap client call has slow performance

    - by Alon_A
    The OS is CentOS 6.2 with PHP 5.3.15. We have a Facebook application that uses PHP SOAP web services. We sometimes experience slow performance when connecting to these services, but we can't understand exactly what is causing the problem. We've tried to analyse the behaviour using the profiling tool KCachegrind. Here is a call graph from the index.php page that took 21 seconds to load; you can clearly see that calling the SOAP client is the bottleneck. I've also noticed that just before the page finishes loading, this file is created in our server's /tmp folder: wsdl-apache-d1032d85dfd16c0d91a6b70facc70e43. These are the permissions of /tmp: drwxrwxrwt 6 root root 40960 Aug 30 10:39 tmp. I know it's not the most specific question, but if anyone has had similar performance issues with the SOAP client, we would love some ideas about what can cause this kind of problem, what we can do to investigate more accurately, or how to overcome it. Thanks.
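    For reference, the wsdl-apache-* file in /tmp is PHP's on-disk WSDL cache, so slow loads that coincide with that file being rewritten may be the WSDL being re-fetched and re-parsed rather than the SOAP calls themselves. A hedged sketch of making the cache explicit - the endpoint is a placeholder, and whether this explains the 21-second loads would still need to be confirmed in the profiler:

        <?php
        // Keep parsed WSDLs cached (disk + process memory) for a day so the WSDL
        // is not re-downloaded and re-parsed on every request.
        ini_set('soap.wsdl_cache_enabled', '1');
        ini_set('soap.wsdl_cache_ttl', '86400');

        $client = new SoapClient('https://example.com/service?wsdl', array( // placeholder URL
            'cache_wsdl'         => WSDL_CACHE_BOTH,   // disk + memory
            'connection_timeout' => 5,                 // fail fast instead of hanging
            'exceptions'         => true,
        ));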

    Read the article

  • TFS Disk Structure - and "Add new folder" vs "Add solution"

    - by NealWalters
    Our organization recently got TFS 2008 set up and ready for our use. I have a practice TeamProject available to play with. To simplify slightly, we previously organized our code on disk like this:

        EC
          Main
            Database
              someScript1.sql
              someScript2.sql
            Documents
              ReleaseNotes_V1.doc
            Source
              Common
                Company.EC.Common.Biztalk.Artifacts [folder]
                Company.EC.Common.BizTalk.Components [folder]
                Company.EC.Common.Biztalk.Deployment [folder]
                Company.EC.BookTransfer.BizTalk.sln
              BookTransfer
                Company.EC.BookTransfer.BizTalk.Artifacts [folder]
                Company.EC.BookTransfer.BizTalk.Components [folder]
                Company.EC.BookTransfer.BizTalk.Components.UnitTest [folder]
                Company.EC.BookTransfer.BizTalk.Deployment [folder]
                Company.EC.BookTransfer.BizTalk.sln

    I'm trying to decide: do I want to check in the entire c:\EC directory, or do I want to open each solution and check it in? What are the pros and cons of each? It seems like by using the "Add Files/Folder" option I could check in everything at once and it would match the disk structure. It also looks like checking in each solution separately creates another working folder in my workspace. I think if I check in via "Add Files/Folder" I will have one workspace, and that would be better. But most of the books and samples I see talk about checking in projects and solutions. P.S. I know I need to add more to my disk structure in accordance with the branch/merge guidelines, but that is not the question I'm asking here. Thanks, Neal Walters

    Read the article

  • Script to copy files on CD and not on hard disk to a new directory

    - by John22
    I need to copy files from a set of CDs that contain a lot of duplicate content - duplicated both between the CDs and with what's already on my hard disk. The file names of identical files are not the same, and they sit in sub-directories with different names. I want to copy only the non-duplicate files from each CD into a new directory on the hard disk. I don't care about the sub-directories - I will sort that out later - I just want the unique files. I can't find software to do this - see my post at SuperUser: http://superuser.com/questions/129944/software-to-copy-non-duplicate-files-from-cd-dvd. Someone at SuperUser suggested I write a script using GNU "find" and the Win32 version of some checksum tools. I glanced at that, but I have not done anything like that before, so I'm hoping something exists that I can modify. I found a good program to delete duplicates, Duplicate Cleaner (it compares checksums), but it won't help me here: I'd have to copy all the CDs to disk first, each is probably about 80% duplicates, and I don't have room to do that - I'd have to cycle through a few at a time, copying everything, then turning around and deleting 80% of it, working the hard drive a lot. Thanks for any help.
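    Since the SuperUser suggestion boils down to "hash everything and copy only unseen hashes", here is a minimal PHP sketch of that idea: it seeds the hash set from what is already on the hard disk, then copies only new content from the CD. The paths are placeholder assumptions, and for mostly-duplicate discs comparing file sizes before hashing would save time; this version keeps it simple:

        <?php
        // Collect checksums of everything already in $seedDir, then walk the CD
        // and copy only files whose checksum has not been seen before.
        $seedDir = 'D:/existing-files';     // placeholder: what's already on the hard disk
        $cdDir   = 'E:/';                   // placeholder: the mounted CD/DVD
        $destDir = 'D:/unique-from-cds';    // placeholder: output directory

        function listFiles($dir) {
            $files = array();
            $it = new RecursiveIteratorIterator(
                new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS));
            foreach ($it as $info) {
                if ($info->isFile()) {
                    $files[] = $info->getPathname();
                }
            }
            return $files;
        }

        // Seed the hash set with what is already on disk.
        $seen = array();
        foreach (listFiles($seedDir) as $path) {
            $seen[md5_file($path)] = true;
        }

        // Copy only files from the CD whose content has not been seen yet.
        if (!is_dir($destDir)) {
            mkdir($destDir, 0777, true);
        }
        foreach (listFiles($cdDir) as $path) {
            $hash = md5_file($path);
            if (!isset($seen[$hash])) {
                $seen[$hash] = true;
                // Prefix with the hash so same-named files from different CDs don't collide.
                copy($path, $destDir . '/' . $hash . '_' . basename($path));
            }
        }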

    Read the article

  • Combining cache methods - memcache/disk based

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static HTML road to solve our performance issues, but since the site will be partially dynamic, that won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we are considering right now: using memcache on all major queries and leaving it alone to do what it does best, or using memcache for the most commonly retrieved data and combining it with a standard hard-drive-stored cache for everything else. The major advantage of using only memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached also appears to have some replication features available, which may come in handy when it's time to add nodes. Which approach should we use? Is it a bad idea to compromise and combine the two methods? Or should we focus on memcache alone and simply upgrade the memory as the load increases with the number of users? Thanks a lot!
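    To make the "combine the two" option concrete, here is a minimal sketch of a two-tier lookup: try memcache first, fall back to a file-based cache, and promote disk hits back into memcache. The class and file layout are illustrative assumptions, not an existing library:

        <?php
        // Two-tier cache: hot data in memcache, overflow/cold data on disk.
        class TwoTierCache
        {
            private $mc;
            private $dir;

            public function __construct(Memcache $mc, $dir)
            {
                $this->mc  = $mc;
                $this->dir = rtrim($dir, '/');
            }

            public function get($key)
            {
                $value = $this->mc->get($key);
                if ($value !== false) {
                    return $value;                        // tier 1 hit
                }
                $file = $this->dir . '/' . md5($key) . '.cache';
                if (is_file($file)) {
                    $value = unserialize(file_get_contents($file));
                    $this->mc->set($key, $value, 0, 300); // promote back into memory
                    return $value;
                }
                return false;                             // miss: caller queries the DB
            }

            public function set($key, $value, $ttl = 300)
            {
                $this->mc->set($key, $value, 0, $ttl);
                file_put_contents($this->dir . '/' . md5($key) . '.cache', serialize($value));
            }
        }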

    Read the article

  • How to load/save C++ class instance (using STL containers) to disk

    - by supert
    I have a C++ class representing a hierarchically organised data tree which is very large (~GB, basically as large as I can get away with in memory). It uses an STL list to store the information at each node, plus iterators to other nodes. Each node has only one parent, but 0-10 children. Abstracted, it looks something like:

        struct node {
        public:
            node_list_iterator parent;              // iterator to the single parent node
            double node_data_array[X];
            map<int, node_list_iterator> children;  // iterators to child nodes
        };

        class strategy {
        private:
            list<node> tree;        // hierarchically linked list of nodes
            struct some_other_data;
        public:
            void build();           // build the tree
            void save();            // save the tree to disk
            void load();            // load the tree from disk
            void use();             // use the tree
        };

    I would like to implement load() and save() to disk, and they should be fairly fast. The obvious problems are: I don't know the size in advance; the data contains iterators, which are volatile; and my ignorance of C++ is prodigious. Could anyone suggest a pure C++ solution, please?

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedure - Most common performance mistake SQL Server developers make.

    - by sqlworkshops
    SQL Server estimates memory requirements at compile time. When a stored procedure or another plan caching mechanism such as sp_executesql or a prepared statement is used, the memory requirement is estimated based on the first set of execution parameters. This is a common reason for spills to tempdb and hence poor performance. Common memory-allocating queries are those that perform Sort and Hash Match operations such as Hash Join, Hash Aggregation or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article; it covers an introduction and query memory for Sort. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of spilling to tempdb, unless the memory requirement for a query does not change significantly based on predicates.

    This article covers underestimation and overestimation of memory for the Hash Match operation; Plan Caching and Query Memory Part I covers underestimation and overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills to tempdb and hence negatively impacts performance. Overestimation of memory affects the memory available to other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance.

    The best way to learn is to practice. To create the tables below and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts.

    Let's create a CustomersState table that has 99% of customers in NY and the remaining 1% in WA. The Customers table used in Part I of this article is also used here. To observe the Hash Warning, enable 'Hash Warning' in SQL Profiler under Events > 'Errors and Warnings'.

        --Example provided by www.sqlworkshops.com
        drop table CustomersState
        go
        create table CustomersState (CustomerID int primary key, Address char(200), State char(2))
        go
        insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers
        update CustomersState set State = 'NY' where CustomerID % 100 != 1
        update CustomersState set State = 'WA' where CustomerID % 100 = 1
        go
        update statistics CustomersState with fullscan
        go

    Let's create a stored procedure that joins Customers with the CustomersState table, with a predicate on State.

        --Example provided by www.sqlworkshops.com
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1)
        end
        go

    Let's execute the stored procedure first with parameter value 'WA', which will select 1% of the data.

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 294 ms to complete. It was granted 6704 KB based on an estimate of 8000 rows. The estimated number of rows (8000) is similar to the actual number of rows (8000), so the memory estimation is fine. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'NY', which will select 99% of the data.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    The stored procedure took 2922 ms to complete. It was again granted 6704 KB based on the estimate of 8000 rows. The estimated number of rows (8000) is far from the actual number of rows (792000), because the estimation is based on the first set of parameter values supplied to the stored procedure, which was 'WA' in our case. This underestimation leads to a spill to tempdb, resulting in poor performance. There was a Hash Warning (Recursion) in SQL Profiler.

    Let's recompile the stored procedure and then execute it first with parameter value 'NY'. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle), because of the locking issues involved with sp_recompile - refer to our webcasts at www.sqlworkshops.com/webcasts for further details.

        exec sp_recompile CustomersByState
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    Now the stored procedure took only 1046 ms instead of 2922 ms. It was granted 146752 KB of memory, and the estimated number of rows (792000) matches the actual number of rows (792000). The better performance of this execution comes from the better memory estimate, which avoids the spill to tempdb. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'WA'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 351 ms to complete, higher than the earlier execution time of 294 ms. It was granted more memory (146752 KB) than necessary (6704 KB), because the estimate was based on parameter value 'NY' (792000 rows) instead of parameter value 'WA' (8000 rows) - again, the estimation is based on the first set of parameter values supplied to the stored procedure, which is 'NY' in this case. This overestimation leads to poor performance of the Hash Match operation, and it might also affect the performance of other concurrently executing queries that need memory, so overestimation is not recommended. The estimated number of rows (792000) is much higher than the actual number of rows (8000).

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined data range. Let's recreate the stored procedure with the recompile hint.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, recompile)
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 297 ms and 1102 ms, in line with the previous optimal execution times. The execution with parameter value 'WA' has a good estimate like before: the estimated number of rows (8000) matches the actual number of rows (8000). The execution with parameter value 'NY' also has a good estimate and memory grant, because the stored procedure was recompiled with the current set of parameter values: the estimated number of rows (792000) matches the actual number of rows (792000). The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit. There was no Hash Warning in SQL Profiler.

    Let's recreate the stored procedure with an optimize for hint of 'NY'.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, optimize for (@State = 'NY'))
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 353 ms with parameter value 'WA', much slower than the optimal execution time of 294 ms we observed previously, because of the overestimation of memory. The execution with parameter value 'NY' has the optimal execution time like before. The execution with parameter value 'WA' overestimates the rows because of the optimize for hint value of 'NY', so more memory than necessary was granted. The execution with parameter value 'NY' has a good estimate because of the same hint: the estimated number of rows (792000) matches the actual number of rows (792000), and the optimal amount of memory was granted. There was no Hash Warning in SQL Profiler.

    Summary: a cached plan can lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this with the recompile hint, at the price of compilation overhead; however, in most cases it is acceptable to pay for compilation rather than spill a sort to tempdb, which can be very expensive compared to the compilation cost. The other possibility is the optimize for hint, but if a query processes more data than the hint implies, it will still spill; on the other side, overestimation can cause unnecessary memory pressure for other concurrently executing queries, and in the case of Hash Match operations this overestimation of memory might itself lead to poor performance. If the values used in the optimize for hint are later archived out of the database, the estimation will be wrong, leading to poor performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in that case.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts), and I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts. Register for the upcoming 3-day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning hands-on workshop in London, United Kingdom, March 15-17, 2011 (Microsoft UK TechNet). These are hands-on workshops with a maximum of 12 participants, not lectures.

    Disclaimer and copyright information: this article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk. R Meyyappan, [email protected], LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • XFS disk becomes unavailable after a while

    - by Guard
    Ubuntu 12.04 (the same happened on 11.10 before upgrading). WD MyBook, 2 TB, no RAID (or RAID0, not completely sure - in any case no mirroring, both 1 TB disks are in use, mounted as a single device). Formatted as XFS, normally used for big movie files. Connected via FireWire 800. At some point the LED started going up and down as if the drive were constantly reading/writing, and the device gives an access error. When unplugged (cable, then holding the power button for a while, then unplugging the power) and re-connected, it becomes available again. xfs_check reports nothing; xfs_repair did something, but it doesn't look like it fixed any error. Then, after a massive read (checking a 1.5 GB torrent file for integrity), it becomes unavailable again. Any ideas what's wrong? Drives? Cables? Motherboard? OS? UPD: not sure how relevant this is, but here is the dmesg output:

        [14380.632816] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
        [14380.633356] SGI XFS Quota Management subsystem
        [14421.812220] firewire_core: phy config: card 0, new root=ffc1, gap_count=5
        [14441.890596] firewire_core: phy config: card 0, new root=ffc1, gap_count=5
        [14441.896858] firewire_core: phy config: card 0, new root=ffc1, gap_count=5
        [14453.895347] firewire_core: created device fw1: GUID 0090a99500a35518, S400, 9 config ROM retries
        [14453.904818] scsi6 : SBP-2 IEEE-1394
        [14453.905014] scsi7 : SBP-2 IEEE-1394
        [14454.139993] firewire_sbp2: fw1.0: logged in to LUN 0000 (0 retries)
        [14454.158769] scsi 6:0:0:0: Direct-Access WD My Book 1015 PQ: 0 ANSI: 4
        [14454.159251] sd 6:0:0:0: Attached scsi generic sg3 type 0
        [14454.162391] firewire_sbp2: fw1.1: logged in to LUN 0001 (0 retries)
        [14454.167453] sd 6:0:0:0: [sdc] 3907017568 512-byte logical blocks: (2.00 TB/1.81 TiB)
        [14454.178822] sd 6:0:0:0: [sdc] Write Protect is off
        [14454.178826] sd 6:0:0:0: [sdc] Mode Sense: 10 00 00 00
        [14454.186830] scsi 7:0:0:1: Enclosure WD My Book Device 1015 PQ: 0 ANSI: 4
        [14454.186995] scsi 7:0:0:1: Attached scsi generic sg4 type 13
        [14454.190078] sd 6:0:0:0: [sdc] Cache data unavailable
        [14454.190087] sd 6:0:0:0: [sdc] Assuming drive cache: write through
        [14454.202176] sd 6:0:0:0: [sdc] Cache data unavailable
        [14454.202185] sd 6:0:0:0: [sdc] Assuming drive cache: write through
        [14454.239940] sdc: [mac] sdc1 sdc2 sdc3 sdc4
        [14454.271262] sd 6:0:0:0: [sdc] Cache data unavailable
        [14454.271270] sd 6:0:0:0: [sdc] Assuming drive cache: write through
        [14454.271354] sd 6:0:0:0: [sdc] Attached SCSI disk
        [14454.272149] ses 7:0:0:1: Attached Enclosure device
        [14606.090024] XFS (sdc3): Mounting Filesystem
        [14612.048343] XFS (sdc3): Starting recovery (logdev: internal)
        [14620.697636] XFS (sdc3): Ending recovery (logdev: internal)
        [14748.120957] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx
        [14748.120963] e1000e 0000:00:19.0: eth0: 10/100 speed: disabling TSO
        [14752.568382] uhci_hcd 0000:00:1a.0: PCI INT A disabled
        [14752.568579] uhci_hcd 0000:00:1a.1: PCI INT B disabled
        [14752.568738] ehci_hcd 0000:00:1a.7: PCI INT C disabled
        [14752.568779] ehci_hcd 0000:00:1a.7: PME# enabled
        [14752.584526] uhci_hcd 0000:00:1d.1: PCI INT B disabled
        [14752.584689] uhci_hcd 0000:00:1d.2: PCI INT C disabled
        [14752.680079] ehci_hcd 0000:00:1a.7: BAR 0: set to [mem 0xe4641000-0xe46413ff] (PCI address [0xe4641000-0xe46413ff])
        [14752.680104] ehci_hcd 0000:00:1a.7: restoring config space at offset 0xf (was 0x300, writing 0x30b)
        [14752.680136] ehci_hcd 0000:00:1a.7: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900002)
        [14752.680170] ehci_hcd 0000:00:1a.7: PME# disabled
        [14752.680182] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 18 (level, low) -> IRQ 18
        [14752.680190] ehci_hcd 0000:00:1a.7: setting latency timer to 64
        [14752.710334] uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
        [14752.710342] uhci_hcd 0000:00:1a.0: setting latency timer to 64
        [14752.749186] uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
        [14752.749194] uhci_hcd 0000:00:1a.1: setting latency timer to 64
        [14752.790231] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 22 (level, low) -> IRQ 22
        [14752.790239] uhci_hcd 0000:00:1d.1: setting latency timer to 64
        [14752.829170] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18
        [14752.829178] uhci_hcd 0000:00:1d.2: setting latency timer to 64

    Read the article

  • Badly performing function in PHP: with large files, memory blows up! How can I refactor?

    - by André
    Hi, I have a function that strips lines out of files. I'm dealing with large files (more than 100 MB). The PHP memory limit is 256 MB, but the function that handles the line stripping blows up with a 100 MB CSV file. What the function must do is this: originally I have a CSV like:

        Copyright (c) 2007 MaxMind LLC. All Rights Reserved.
        locId,country,region,city,postalCode,latitude,longitude,metroCode,areaCode
        1,"O1","","","",0.0000,0.0000,,
        2,"AP","","","",35.0000,105.0000,,
        3,"EU","","","",47.0000,8.0000,,
        4,"AD","","","",42.5000,1.5000,,
        5,"AE","","","",24.0000,54.0000,,
        6,"AF","","","",33.0000,65.0000,,
        7,"AG","","","",17.0500,-61.8000,,
        8,"AI","","","",18.2500,-63.1667,,
        9,"AL","","","",41.0000,20.0000,,

    When I pass the CSV file to this function I get:

        locId,country,region,city,postalCode,latitude,longitude,metroCode,areaCode
        1,"O1","","","",0.0000,0.0000,,
        2,"AP","","","",35.0000,105.0000,,
        3,"EU","","","",47.0000,8.0000,,
        4,"AD","","","",42.5000,1.5000,,
        5,"AE","","","",24.0000,54.0000,,
        6,"AF","","","",33.0000,65.0000,,
        7,"AG","","","",17.0500,-61.8000,,
        8,"AI","","","",18.2500,-63.1667,,
        9,"AL","","","",41.0000,20.0000,,

    It only strips out the first line, nothing more. The problem is the performance of this function with large files: it blows up the memory. The function is:

        public function deleteLine($line_no, $csvFileName)
        {
            // This function strips a specific line from a file.
            // If a line is stripped the function returns TRUE, else FALSE.
            // e.g.
            //   deleteLine(-1, xyz.csv);  // strip last line
            //   deleteLine(1, xyz.csv);   // strip first line

            // Assign the file name
            $filename = $csvFileName;
            $strip_return = FALSE;

            $data = file($filename);
            $pipe = fopen($filename, 'w');
            $size = count($data);

            if ($line_no == -1)
                $skip = $size - 1;
            else
                $skip = $line_no - 1;

            for ($line = 0; $line < $size; $line++)
                if ($line != $skip)
                    fputs($pipe, $data[$line]);
                else
                    $strip_return = TRUE;

            return $strip_return;
        }

    Is it possible to refactor this function so that it doesn't blow past the 256 MB PHP memory limit? Give me some clues. Best regards,
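    For what it's worth, the memory blow-up comes from file(), which loads the entire file into an array before anything is written back out. Below is a hedged sketch of a streaming alternative that reads one line at a time into a temporary file and then swaps it into place - same intent, illustrative only:

        public function deleteLineStreaming($line_no, $csvFileName)
        {
            // Stream the file line by line so memory use stays constant
            // regardless of file size. $line_no = -1 means "strip the last line".
            $in  = fopen($csvFileName, 'r');
            $tmp = $csvFileName . '.tmp';
            $out = fopen($tmp, 'w');

            $stripped = FALSE;
            $current  = 0;
            $previous = NULL;

            while (($row = fgets($in)) !== FALSE) {
                $current++;
                if ($line_no == -1) {
                    // Delay writing by one line so the final line can be dropped.
                    if ($previous !== NULL) {
                        fputs($out, $previous);
                    }
                    $previous = $row;
                } elseif ($current == $line_no) {
                    $stripped = TRUE;          // skip this line
                } else {
                    fputs($out, $row);
                }
            }
            if ($line_no == -1 && $previous !== NULL) {
                $stripped = TRUE;              // the buffered last line is dropped
            }

            fclose($in);
            fclose($out);
            rename($tmp, $csvFileName);        // replace the original file
            return $stripped;
        }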

    Read the article

  • Multi-tenant ASP.NET MVC – Introduction

    - by zowens
    I’ve read a few different blogs that talk about multi-tenancy and how to resolve some of the issues surrounding multi-tenancy. What I’ve come to realize is that these implementations overcomplicate the issues and give only a muddy implementation! I’ve seen some really illogical code out there. I have recently been building a multi-tenancy framework for internal use at eagleenvision.net. Through this process, I’ve realized a few different techniques to make building multi-tenant applications actually quite easy. I will be posting a few different entries over the issue and my personal implementation. In this first post, I will discuss what multi-tenancy means and how my implementation will be structured.   So what’s the problem? Here’s the deal. Multi-tenancy is basically a technique of code-reuse of web application code. A multi-tenant application is an application that runs a single instance for multiple clients. Here the “client” is different URL bindings on IIS using ASP.NET MVC. The problem with different instances of the, essentially, same application is that you have to spin up different instances of ASP.NET. As the number of running instances of ASP.NET grows, so does the memory footprint of IIS. Stack Exchange shifted its architecture to multi-tenancy March. As the blog post explains, multi-tenancy saves cost in terms of memory utilization and physical disc storage. If you use the same code base for many applications, multi-tenancy just makes sense. You’ll reduce the amount of work it takes to synchronize the site implementations and you’ll thank your lucky stars later for choosing to use one application for multiple sites. Multi-tenancy allows the freedom of extensibility while relying on some pre-built code.   You’d think this would be simple. I have actually seen a real lack of reference material on the subject in terms of ASP.NET MVC. This is somewhat surprising given the number of users of ASP.NET MVC. However, I will certainly fill the void ;). Implementing a multi-tenant application takes a little thinking. It’s not straight-forward because the possibilities of implementation are endless. I have yet to see a great implementation of a multi-tenant MVC application. The only one that comes close to what I have in mind is Rob Ashton’s implementation (all the entries are listed on this page). There’s some really nasty code in there… something I’d really like to avoid. He has also written a library (MvcEx) that attempts to aid multi-tenant development. This code is even worse, in my honest opinion. Once I start seeing Reflection.Emit, I have to assume the worst :) In all seriousness, if his implementation makes sense to you, use it! It’s a fine implementation that should be given a look. At least look at the code. I will reference MvcEx going forward as a comparison to my implementation. I will explain why my approach differs from MvcEx and how it is better or worse (hopefully better).   Core Goals of my Multi-Tenant Implementation The first, and foremost, goal is to use Inversion of Control containers to my advantage. As you will see throughout this series, I pass around containers quite frequently and rely on their use heavily. I will be using StructureMap in my implementation. However, you could probably use your favorite IoC tool instead. <RANT> However, please don’t be stupid and abstract your IoC tool. Each IoC is powerful and by abstracting the capabilities, you’re doing yourself a real disservice. Who in the world swaps out IoC tools…? 
No one!</RANT> (It had to be said.) I will outline some of the goodness of StructureMap as we go along. This is really an invaluable tool in my tool belt and simple to use in my multi-tenant implementation. The second core goal is to represent a tenant as easily as possible. Just as a dependency container will be a first-class citizen, so will a tenant. This allows us to easily extend and use tenants. This will also allow different ways of “plugging in” tenants into your application. In my implementation, there will be a single dependency container for a single tenant. This will enable isolation of the dependencies of the tenant. The third goal is to use composition as a means to delegate “core” functions out to the tenant. More on this later.   Features In MvcExt, “Modules” are a code element of the infrastructure. I have simplified this concept and have named this “Features”. A feature is a simple element of an application. Controllers can be specified to have a feature and actions can have “sub features”. Each tenant can select features it needs and the other features will be hidden to the tenant’s users. My implementation doesn’t require something to be a feature. A controller can be common to all tenants. For example, (as you will see) I have a “Content” controller that will return the CSS, Javascript and Images for a tenant. This is common logic to all tenants and shouldn’t be hidden or considered a “feature”; Content is a core component.   Up next My next post will be all about the code. I will reveal some of the foundation to the way I do multi-tenancy. I will have posts dedicated to Foundation, Controllers, Views, Caching, Content and how to setup the tenants. Each post will be in-depth about the issues and implementation details, while adhering to my core goals outlined in this post. As always, comment with questions of DM me on twitter or send me an email.

    Read the article

  • SQL Server 2008 uses half the CPUs

    - by ACALVETT
    I recently got my hands on a couple of 4-socket servers with Intel E7-4870s (10 cores per CPU); with hyper-threading enabled, that gives me 80 logical CPUs. The server runs Windows 2008 R2 SP1 along with SQL 2008 (currently we cannot deploy SQL 2008 R2 for the application being hosted). When SQL Server started, I noticed only 2 NUMA nodes and 40 logical cores were configured, where there should have been 4 NUMA nodes and 80 logical cores (see below). The problem is caused by the fact that...(read more)

    Read the article

  • Do not use “using” in WCF Client

    - by oazabir
    You know that any IDisposable object must be disposed of with using. So you have been using using to wrap WCF ChannelFactory and clients like this:

        using (var client = new SomeClient()) { ... }

    Or, if you are doing it the hard and slow way (without really knowing why), then:

        using (var factory = new ChannelFactory<ISomeService>())
        {
            var channel = factory.CreateChannel();
            ...
        }

    That's what we have all learnt in school, right? We have learnt it wrong! When there's a network-related error, or the connection is broken, or the call times out before Dispose is called by the using keyword, it results in the following exception when the using keyword tries to dispose the channel:

        failed: System.ServiceModel.CommunicationObjectFaultedException :
        The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state.
        Server stack trace:
           at System.ServiceModel.Channels.CommunicationObject.Close(TimeSpan timeout)
        Exception rethrown at [0]:
           at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
           at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
           at System.ServiceModel.ICommunicationObject.Close(TimeSpan timeout)
           at System.ServiceModel.ClientBase`1.System.ServiceModel.ICommunicationObject.Close(TimeSpan timeout)
           at System.ServiceModel.ClientBase`1.Close()
           at System.ServiceModel.ClientBase`1.System.IDisposable.Dispose()

    There are various reasons why the underlying connection can be in a broken state before the using block completes and .Dispose() is called: the network connection dropping, IIS doing an app pool recycle at that moment, a proxy sitting between you and the service dropping the connection, and so on. The point is, it might seem like a corner case, but it's a likely corner case. If you are building a highly available client, you need to handle this properly before you go live. So do NOT use using on a WCF channel, client or ChannelFactory. Instead, use an alternative. Here's what you can do: first, create an extension method.

        public static class WcfExtensions
        {
            public static void Using<T>(this T client, Action<T> work) where T : ICommunicationObject
            {
                try
                {
                    work(client);
                    client.Close();
                }
                catch (CommunicationException e)
                {
                    client.Abort();
                }
                catch (TimeoutException e)
                {
                    client.Abort();
                }
                catch (Exception e)
                {
                    client.Abort();
                    throw;
                }
            }
        }

    Then use this instead of the using keyword:

        new SomeClient().Using(channel => { channel.Login(username, password); });

    Or, if you are using ChannelFactory:

        new ChannelFactory<ISomeService>().Using(channel => { channel.Login(username, password); });

    Enjoy!

    Read the article

  • Event on SQL Server 2008 Disk IO and the new Complex Event Processing (StreamInsight) feature in R2

    - by tonyrogerson
    Allan Mitchell and I are doing a double act. Allan is becoming one of the leading guys in the UK on StreamInsight and will give an introduction to this exciting new technology; on top of that, I'll be talking about SQL Server disk IO - well, "disk" might not be relevant any more, because I'll also be talking about SSDs and Fusion-io - basically I'll be talking about the underpinnings: making sure you understand and get it right, how to monitor it, and so on. If you have any specific problems or questions, just ping me an email at [email protected]. To register for the event see: http://sqlserverfaq.com/events/217/SQL-Server-and-Disk-IO-File-GroupsFiles-SSDs-FusionIO-InRAM-DBs-Fragmentation-Tony-Rogerson-Complex-Event-Processing-Allan-Mitchell.aspx

    18:15 SQL Server and Disk IO - Tony Rogerson, SQL Server MVP (Tony's Blog; Tony on Twitter). In this session Tony will talk about RAID levels, how SQL Server writes to and reads from disk, the effect SSDs have, and other options for throughput enhancement such as Fusion-io. He will look at the effect fragmentation has and how to minimise its impact, look at the file structure of a database, and talk about the benefits multiple files and filegroups bring. We will also touch on database mirroring, the effect that has on throughput, and how to get a feel for the throughput you should expect.

    19:15 Break

    19:45 Complex Event Processing (CEP) - Allan Mitchell, SQL Server MVP (http://sqlis.com/sqlis). StreamInsight is Microsoft's first foray into the world of Complex Event Processing (CEP) and Event Stream Processing (ESP). In this session I want to give an introduction to this technology: I will show how and why it is useful, get us used to some new terminology and, best of all, show just how easy it is to start building your first CEP/ESP application.

    Read the article
