Search Results

Search found 6634 results on 266 pages for 'fast fashion'.

  • Copy-paste speed very slow for a large number of tiny files on Windows but not on Linux

    - by Arno2501
    I've got a folder which contains 15,000 tiny images (around 400 bytes each). If I copy-paste this folder on my laptop (Windows 7, latest-gen i7, superfast SSD) it takes about 30 seconds (yes, for 7 MB!). The average transfer rate is 400 KB/s, which is absurdly slow; my usual transfer rate is more like hundreds of MB/s. I get the same problem on my servers (Windows 2003, 2008/R2) and on every Windows box I can get my hands on. On the other hand, if I do the same on a Linux box (Debian-based, ext3 filesystem) running on the same SAN as all the Windows servers I've tested, it is nearly instantaneous. I'm sure the size and number of the files stress some filesystems more than others, but such a difference!? Why is it so slow on the Windows boxes (more than 30 seconds for 7 MB) and so fast on the Linux ones (a second or so)? This was a true copy, not a hardlink. Is this normal behaviour or something unusual?
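
    Not from the thread, but a common mitigation: most of the 30 seconds goes to per-file overhead (open/close, security descriptors, Explorer's progress bookkeeping) rather than data transfer. A hedged sketch with robocopy, which ships with Windows 7 and can parallelize that overhead (paths are placeholders):

        rem /MT:32 copies with 32 threads, amortizing per-file latency;
        rem /NFL /NDL suppress per-file logging, which itself adds overhead.
        robocopy C:\tiny-images D:\tiny-images /E /MT:32 /NFL /NDL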

    Read the article

  • Load is 0, yet site crawls (sometimes). What gives?

    - by Yegor
    I have a site with ~1.5-2 million page views per day running on 2 servers: one for MySQL, the other for everything else. The MySQL box has a load of 3; the frontend is usually 0.0-0.1. Both are dual quad-cores with 8 GB RAM running SAS drives in RAID 5. The CPU is idle the majority of the time and iowait is non-existent. I'm running nginx and memcached, and the site is built on PHP. Half the time everything runs perfectly, while at other times it lags severely and a page takes 10-15 seconds to load. Page execution time is always very low, but the request seems to hang, waiting for something before it actually delivers the page. What's even more weird is that it only happens to one file on the site (but it's the most commonly accessed one, the one that actually loads the content). Other pages are fast at all times, even while it takes 15 seconds to load the actual content. I have the nginx stats plugin installed, and if I monitor it, the lag spikes happen when the "Writing" column goes above 100, which it frequently does, all the way to 500-1000. It does so at totally random times, not when traffic is heavy; it can do this in the middle of the night and work perfectly at 5pm when traffic is at its highest. Any ideas?
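
    One hedged way to pin the spikes down (assuming the stub_status endpoint behind the stats plugin is reachable at /nginx_status on localhost): log a timestamp whenever "Writing" crosses the threshold, then line the timestamps up with the access log and the slow PHP requests.

        # Poll stub_status every 5 seconds; record spikes above 100 writers.
        while sleep 5; do
            status=$(curl -s http://127.0.0.1/nginx_status | grep -o 'Writing: [0-9]*')
            writing=${status#Writing: }
            [ "${writing:-0}" -gt 100 ] && echo "$(date '+%F %T') $status" >> /tmp/nginx-spikes.log
        done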

    Read the article

  • Word 2010 creates multiple processes... sometimes

    - by Bill Sambrone
    I've run into a strange behavior since I migrated our users from Office 2007 / Vista to Office 2010 / Windows 7 (all 32-bit). They use a web-based document management system called NetDocuments which stores all their .doc/.docx files. Generally, when they click on a doc from the browser window it fires up Word and opens the doc. Word has a NetDocs add-in as well, so it can upload the changed document directly back to the NetDocs server. I get a phone call when Word crashes, and every single time it has crashed I have witnessed multiple winword.exe processes running in Task Manager. I used Process Explorer to see what created the processes, and it is always Internet Explorer. So far I have rolled them back to IE8 and the problem happens less frequently, but it still happens. When I try to duplicate the problem, I can make it happen sometimes if I open multiple documents very quickly. Using lightning-fast Alt-Tab reflexes, I DO see that a second winword.exe process is created when a user clicks on a document, then it closes once the document is open. I think what is happening is that the secondary winword.exe process that does some sort of NetDocs voodoo is getting stuck open. This behavior is new to Word 2010 / Windows 7, and Google searching isn't coming up with much. I have seen a few posts saying this is a known issue in certain circumstances and there is no "fix", but I thought it would be good to ask others about it. Thanks!

    Read the article

  • Network connection keeps dropping - bad hardware?

    - by Bill Sambrone
    Hello all, I've run into a bit of a wall with a client of mine. In an office of 20 people, he is the only one who experiences broken connections to his mapped network drives. I have everyone set up with about 6 mapped drives, all pointing to the same server (no DFS), and everyone else can access them lightning fast. The environment consists of a mix of Windows 7 and XP machines, all 32-bit. The server holding the data everyone maps to is running Server 2008 R2 and is a domain controller. We recently swapped out their old 10/100 switch for a shiny new Dell PowerConnect gigabit switch, and replaced an old dying SonicWALL with a shiny new one. Everything runs on an ESX host except for the DC, where everyone is getting data from. In my client's office, we have done the following:

    - Swapped out his computer (Win7 and XP box)
    - Swapped out the desktop switch in his office
    - Removed the desktop switch in his office
    - Changed out the network cable going to the wall
    - Ran 'net config server /autodisconnect:-1' on the server
    - Disabled remote differential compression on his current Win7 box

    When we swapped out his network cable, everything seemed fine for about 4 days. Normally I would get a phone call a couple of times per day letting me know that Outlook has crashed (there is a 9 GB PST living on the server he is always connected to), or that the software he runs from his L: drive has crashed. I almost thought I had this solved, but after we rebooted the DC the other night he suddenly couldn't stay connected to his mapped network drives for more than 10 minutes. When I ran 'net use' from the command prompt, it listed all the network drives, which were randomly in a state of 'OK', 'Disconnected', or 'Reconnecting'. What else should I try? Maybe there is bad wiring in the wall, the patch panel, or a bad port in the new switch I have in the server room?
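
    A hedged way to put timestamps on the drops (hypothetical paths and server name), so they can be lined up against the switch logs and the event logs on both ends:

        rem Polls the mapped-drive state and the DC every 30 seconds.
        @echo off
        :loop
        echo %date% %time% >> C:\drive-log.txt
        net use >> C:\drive-log.txt
        ping -n 1 dc01 | find "TTL=" >> C:\drive-log.txt
        timeout /t 30 >nul
        goto loop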

    Read the article

  • Orphaned SQL Recordsets/Connections with IIS

    - by Damian
    I have an IIS 6 site running on Windows 2003 Server x86 with MS SQL 2005 Enterprise Edition, running Classic ASP (no choice). The site runs very fast with about 8,000 page views per hour. All of my SQL tables are indexed, and I have used the Profiler to check my queries; the slowest is only about 10-15 ms. I have autoshrink disabled, autogrow is set to 250 MB, and the database is 2 GB with 800 MB of free space. My problem is that every now and then the site will slow to a crawl for no reason. Pages that just connect to the database and increment a hit counter work OK, but more SQL-intensive pages that normally execute in about 60 ms take 25,000 ms to run. This happens for about 30 seconds and then goes away. I was having an issue with orphan recordsets and connections due to the way I was releasing them. I have fixed this up and the issue is much better, but I am still getting them. Is there a way, with perfmon etc., to track when SQL Server or Windows closes these orphan connections? At least if I can monitor the issue I will know whether I am making progress, or whether I am even looking at the right things. Is there anything else I might be missing? Thank you!
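
    A hedged monitoring sketch using the DMVs that ship with SQL Server 2005: snapshot the sessions periodically (Task Scheduler plus sqlcmd) and watch whether the count from the web server keeps climbing between snapshots. Instance name and output path are placeholders.

        rem Group open user sessions by login and host, busiest first.
        sqlcmd -S localhost -E -Q "SELECT login_name, host_name, COUNT(*) AS sessions, MAX(last_request_end_time) AS last_request FROM sys.dm_exec_sessions WHERE is_user_process = 1 GROUP BY login_name, host_name ORDER BY sessions DESC" >> C:\sql-sessions.log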

    Read the article

  • mkfs Operation Takes Very Long on Linux Software RAID 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4 x 2 TB disks. The array was created with a 64 KB stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create an XFS and an ext3 filesystem; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful:

    - Writing inode tables runs fast until it reaches 1053 (~1 second), then it writes about 50, waits for two seconds, then the next 50 are written (according to the console display).
    - When I try to cancel the operation with Ctrl+C, it hangs for half a minute before it is really cancelled.

    The performance of the disks individually is very good; I've run bonnie++ on each one separately, with write/read values of around 95/110 MB/s. Even when I run bonnie++ on every drive in parallel, the values are only reduced by about 10 MB/s, so I'm excluding the hardware and I/O scheduling in general as a problem source. I tried different configuration parameters for stripe_cache_size and readahead size without success, but I don't think they are that relevant for the filesystem creation operation. The server details:

    - Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux
    - mdadm - v2.6.7.1

    Does anyone have a suggestion on how to debug this further?
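
    Not from the thread, but one hedged thing to try: telling mke2fs the md geometry, so the inode-table writes align to full stripes instead of triggering RAID 5 read-modify-write cycles. For 4 disks, a 64 KB chunk and 4 KB blocks: stride = 64/4 = 16, and stripe-width = 16 x 3 data disks = 48.

        # Alignment hints only; the device name is a placeholder.
        mkfs.ext3 -b 4096 -E stride=16,stripe-width=48 /dev/md0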

    Read the article

  • Using NFS for scalable PHP/MySQL web application

    - by Jeroen Moons
    Here's the situation: I have a PHP/MySQL web application that accepts user uploads (PDF files). From these PDF files' pages a preview image is made on the fly and presented to the web app's users. Some PDFs might be on the large side; most will be under 50 MB, but some extreme cases could be as large as a few hundred MB. A little waiting for the preview image of a large PDF is acceptable, but no more than a minute, let's say. Everything is running on one server for now, but soon the app will hit the server's limit on both storage and processing power. My idea to solve the problem: have one or more PDF-processing servers as needed, and one or more file-storage servers. These two types of servers are mounted via NFS onto the server on which the actual app runs. The app could then use Gearman to delegate PDF-processing tasks to the processing servers. A processing server can mount the storage server, read the file stored there, process it, and write its output back to that server. The servers I'm talking about will be Amazon EC2 instances. The web app returns a link to the resulting PDF preview image on the storage server, which can then be used on the front end to show the image to the user. My question: I have zero experience with apps that use multiple servers; is this idea viable, or is there a better way to do it? Is an NFS setup fast and reliable enough for this situation?
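
    On the NFS half of the question, a hedged starting point (hostname and export paths are placeholders): hard mounts avoid silent corruption when a peer blips, and larger read/write sizes suit multi-megabyte PDFs.

        mount -t nfs -o rw,hard,intr,rsize=32768,wsize=32768 \
            storage01:/exports/uploads /mnt/uploads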

    Read the article

  • Windows 8 x64 with VMWare Workstation or inside ESXi

    - by Dommer
    I need to run several virtual machines on a Core i7-920 box with 12 GB of RAM and a 256 GB SSD to host the VMs. It also has a HighPoint RocketRAID 2720SGL RAID controller with a 12 TB RAID 5 array. I want one of my VMs to run Windows 8 x64, to have access to the RAID array as a native disk (not as networked drives; it needs to run at full speed), and to be able to send files quickly across the network. Initially I thought I'd try to do this using ESXi 5, but I have been unable to find any working RAID drivers for the RR2720SGL, and it is not on the HCL for ESXi 5. In light of this, I have installed Windows 8 x64 on the hardware and am thinking of installing VMware Workstation and running my VMs inside there. I guess my questions are these:

    - How does VMware Workstation 9 perform compared to ESXi 5, in the real world I mean? Presumably installing Win 8 as the host OS will give me way better performance for that Win 8 machine than Win 8 running under ESXi?
    - I should stick with Windows 8 x64 as the host OS, right?
    - If I install a domain controller VM inside my Win 8 box and join the Win 8 machine to that domain, am I insane? (I would guess the Win 8 machine wouldn't see the domain controller until it finished starting everything up, but I don't think that matters.)
    - Is it feasible to give a metric like "Win 8 under ESXi runs x% as fast as Win 8 installed bare metal", and if so, what is the likely value of x? 25%? 50%? 75%?

    Read the article

  • Best Practice - SQL 2012 & IIS in VMWare

    - by Dan Ribar
    We are pretty new to VMware and looking for some thoughts on our environment. We have a VMware cluster that has, on one host:

    - VM #1: MS Windows 2008 R2 Enterprise & SQL Server 2012
    - VM #2: MS Windows 2008 R2 Standard & IIS

    The IIS ASP.NET app talks directly to the SQL Server. We had a similar environment on physical servers a few months ago and just recently moved to the virtualized environment. Regarding the setup, we have not tweaked any of the VM resource parameters; all is set as standard, and all is working. What we observe is that the VMs seem to spool down and we get lags in response. Of course this isn't as fast as the old physical environment, but I am wondering:

    - Is it a good idea to run the SQL Server and the IIS server on the same host? They are the only two VMs on it. The host is a new Dell R620 with 192 GB of memory.
    - Does it make sense to change any CPU or memory reservations when it doesn't seem like there is any contention?
    - Is there a way to keep the VMs spooled up to eliminate delays? This is a brand-new, squeaky-clean vanilla install.

    What are your thoughts?

    Read the article

  • Zsh super slow inside my Git repo

    - by Jason Swett
    My Zsh is super slow inside a certain Git repo of mine. When I Google "zsh git slow", I get a bunch of results about Git autocompletion being slow, but autocompletion isn't necessarily my problem; it's everything. I tried removing all plugins and that, strangely, didn't do anything at all when I opened a new shell: Zsh would still do Git stuff inside my Git repo. I found this snippet on this page:

        function git_prompt_info() {
          ref=$(git symbolic-ref HEAD 2> /dev/null) || return
          echo "$ZSH_THEME_GIT_PROMPT_PREFIX${ref#refs/heads/}$ZSH_THEME_GIT_PROMPT_SUFFIX"
        }

    That made everything fast again, but it also gave me a prompt that looks like this:

        ➜ snip git:(master

    Note the missing right parenthesis. That's kind of lame, and the whole thing seems like a hack I shouldn't have to do. There's also a promising-looking SU question, but the links in the accepted answer are dead. How can I get my Zsh not to be slow inside a Git repo?
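
    A hedged first step before patching the theme: time the Git calls a typical prompt runs on every prompt, inside the slow repo, to see which one is the culprit (the path is a placeholder).

        cd /path/to/slow/repo
        time git symbolic-ref HEAD    # cheap branch lookup, used in the snippet above
        time git status --porcelain   # many themes run this for the dirty marker
        time git diff --quiet         # another common dirty check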

    Read the article

  • How to embed/hardcode SRT subtitles into mp4 videos with VLC?

    - by Jens Bannmann
    I'm looking for a way to "burn in" or render/embed/hardcode subtitles (from an SRT file) into an MP4 video with VLC. But no matter what options I use, it never works properly: I get a file that plays video way too fast (audio is normal), or one that plays normally but actually does not have embedded subtitles. Also, with some options (like the ones below) it does not play in QuickTime, only in VLC. So the main question is: how can I make this work in VLC? Secondary questions are:

    - How do I decide which options I should set?
    - Which settings are best if I want to leave the file bitrate etc. as close to the original as possible and only embed subtitles? It seems I cannot leave the fields empty or Video/Audio unchecked, so I guess I would first need to figure out the original audio and video bitrate.
    - What do the "Scale" and "Channels" options mean?

    ... none of which are answered in the VLC documentation. For example, this is one set of options I used in the "Advanced Open File…" dialog:

        Advanced Open File…  myFileName.mp4
        [ ] Treat as a pipe rather than as a file
        [x] Load subtitles file: mySubtitleFileName.srt
        [ ] Play another media synchronously
        [x] Streaming/Saving
        Streaming and Transcoding Options
        [ ] Display the stream locally    (o) File [outputFileName.mp4]
        [ ] Dump raw input
        Encapsulation Method: (MPEG 4)
        Transcoding options
        [x] Video (mp4v)  Bitrate (kb/s) [256]  Scale [1]
        [x] Audio (mp3)   Bitrate (kb/s) [128]  Channels [1]
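
    For reference, a hedged command-line equivalent (VLC 1.x-era syntax; the soverlay flag in the transcode chain is what actually burns the loaded SRT into the video frames, and it has no checkbox in that dialog):

        vlc myFileName.mp4 --sub-file=mySubtitleFileName.srt \
            --sout='#transcode{vcodec=mp4v,vb=1024,soverlay,acodec=mp3,ab=128}:std{access=file,mux=mp4,dst=outputFileName.mp4}' \
            vlc://quit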

    Read the article

  • 802.11g -> wired ethernet bridging not working

    - by Malachi
    Usually people want to go the other direction, but I want to take our relatively fast and stable house 802.11g signal and bridge it to Ethernet. I have tried using an AirPort Express (the b/g flavor) and my i7 MacBook Pro, both to no avail. Word is that this b/g flavor of AirPort Express maxes out at firmware 6.3, which doesn't support this kind of bridging properly. However, I expected my MacBook Pro to do the job with its "Internet Sharing" feature. Alas, although my wired PC does sort of see it, it doesn't work out. Strangely, using DHCP the PC receives the same IP address as my MBP uses on the network. Less strangely, but still surprisingly, the wired Ethernet port on my Mac registers as the IP address of the gateway when queried with ifconfig. It sort of makes sense that the Mac would "pretend" to be the gateway, but the whole thing just isn't working and seems configured wrong; yet all the docs I see say basically "OS X Internet Sharing: click it and go". What do I do? Do I really have to buy more hardware, even though I have plenty of would-be candidates for bridging? Incidentally, the host router originating the 802.11g signal is a Belkin 802.11g router, and it is documented to support WDS.

    Read the article

  • SSH tunnel for socks5 proxy is slow with concurrent load

    - by RawwrBag
    I SSH to a remote AWS server from Ubuntu, using SSH's port-forwarding capabilities. I have tried forwarding a dynamic port (ssh -D) and a single port (ssh -L, with Dante running as a remote SOCKS server); both are equally slow. I also tried different ciphers (ssh -c). Concurrent TCP connections pretty much do not work. For example, I can go to speedtest.net and start a test (which is fairly fast, probably maxing out my line speed), and if I try to do anything else (i.e. load google.com) while the test is still running, all the additional connections seem to hang until the speed test is over. I realize OpenSSH is single-threaded; is this the problem? It doesn't even show up in my top, and the same goes for sshd on the remote server: no processor hit. Is there any way to bump SSH performance, or should I step up to OpenVPN or something better suited for this?
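
    A few hedged knobs that sometimes help (all standard OpenSSH options; user and host are placeholders): a cheap cipher, no compression (it costs CPU on every packet and rarely helps on a fast link), and keepalives so stalled channels get torn down.

        ssh -D 1080 -N -o Compression=no -c aes128-ctr \
            -o ServerAliveInterval=30 user@my-aws-host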

    Read the article

  • How to access previous VHD versions of system backup?

    - by feklee
    Quote from the 31 Oct 2009 TechNet article "Learn more about system image backup":

        During the first backup, the backup engine scans the source drive and copies only blocks that contain data into a .vhd file stored on the target, creating a compact view of the source drive. The next time a system image is created, only new and changed data is written to the .vhd file, and old data on the same block is moved out of the VHD and into the shadow copy storage area. Volume Shadow Copy Service is used to compute the changed data between backups, as well as to handle the process of moving the old data out to the shadow copy area on the target. This approach makes the backup fast (since only changed blocks are backed up) and efficient (since data is stored in a compact manner). When restoring the image, blocks will be restored to their original locations on the source disk. If you want to restore from an older backup, the engine reads from the shadow copy area and restores the appropriate blocks.

    For the last few days, a daily system backup of drive C: to drive E: has been scheduled and run by Windows 7 Backup and Restore. Drive C: currently holds 233 GB of data, which fits comfortably on drive E:, a 1 TB drive with 727 GB of free space remaining. How do I access the previous version of a VHD? I right-clicked on files and folders in E:\WindowsImageBackup and looked for Previous Versions, but the answer is always "There are no previous versions available".
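
    A hedged sketch using built-in tools: per the quoted article, the older states live in the shadow copy area of E:, so listing the shadow copies and exposing one as a directory (the copy number is a placeholder; take it from the vssadmin output) may get at the earlier backup state.

        vssadmin list shadows /for=E:
        rem The trailing backslash on the device path is required.
        mklink /d C:\oldbackup \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy3\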

    Read the article

  • Join multiple filesystems (on multiple computers) into one big volume

    - by jm666
    Scenario: I have 10 computers, each with 12 x 2 TB HDDs (currently) in raidz2 (10+2) configuration, so in each computer I have one volume of approximately 20 TB. Now I need to join those 10 separate computers (separate RAID groups) into one big volume. What is the recommended solution? I'm thinking about FCoE (10 Gb Ethernet), so buying a 10 Gb Ethernet card for each computer; what more is needed on the hardware side? (Probably another computer and an FCoE switch, like a Cisco Nexus?) The main question is: what do I need to install and configure on each computer? Currently they run FreeBSD/raidz2, but it is possible to change to Linux/Solaris if needed. Any helpful resource on how to build big volumes from smaller RAID groups (on the software side) is very welcome: what OS, what filesystem, what software, etc. In short: I want to get approximately 200 TB of storage (in one filesystem) from the existing computers/storage. I don't need fast writes, but I need good read performance (as a big fileserver), and it should work transparently, so that when storing data I don't have to care which computer the data goes to (i.e. not 10 mountpoints, but one big logical filesystem). Thanks.
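
    One hedged software-side option (hostnames and brick paths are placeholders): a distributed filesystem such as GlusterFS can aggregate the ten ~20 TB nodes into a single mountpoint over plain 10 GbE, with no FCoE switch required. It runs best on Linux, which you said is an option.

        # On one node, after peering with the other nine:
        gluster peer probe node2    # ... repeat through node10
        gluster volume create bigvol transport tcp \
            node1:/export/brick node2:/export/brick node3:/export/brick \
            node4:/export/brick node5:/export/brick node6:/export/brick \
            node7:/export/brick node8:/export/brick node9:/export/brick \
            node10:/export/brick
        gluster volume start bigvol

        # On clients (or any node): one big logical filesystem.
        mount -t glusterfs node1:/bigvol /mnt/bigvol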

    Read the article

  • SQL Server becomes slow after restart

    - by Tobi DM
    I already posted this on Stack Overflow, but someone gave me the hint that I might have more luck on Server Fault. We use SQL Server 2005 on Windows Server 2008. The server has 48 GB RAM, and SQL Server is configured to use 40 GB of it. There is only one database hosted (about 70 GB). The only app besides SQL Server is our app server, which connects the clients to the database. Now we encounter the following problem: after a restart of the server, performance is great; the server grabs the 40 GB RAM which it is allowed to, and then runs fast as hell. But after about 4 weeks the system becomes slower and slower, and the execution time of statements (seen in the Profiler) rises slowly. Yet I cannot see that anything is going wrong on the server:

    - CPU usage is at about 20%
    - I/O also seems to be no problem
    - Process Monitor does not show any strange apps or anything like that
    - The event log has no interesting messages
    - No open transactions or blocking to see
    - We do not use cursors in our app

    We have already tried the following things, without effect:

    - Dropped the caches using DBCC FREEPROCCACHE, DBCC FREESYSTEMCACHE('ALL') and DBCC DROPCLEANBUFFERS
    - Restarted the app server we are using
    - Restarted the SQL Server service

    But nothing helped except restarting the whole server. Any ideas?
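
    A hedged diagnostic using DMVs that exist in SQL Server 2005: capture the top waits right after a restart and again once the slowdown sets in; the wait types that grew the most usually name the bottleneck (memory pressure, latches, I/O) more precisely than the counters above.

        rem Run via Task Scheduler or by hand; instance name is a placeholder.
        sqlcmd -S localhost -E -Q "SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count FROM sys.dm_os_wait_stats ORDER BY wait_time_ms DESC" >> C:\waitstats.log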

    Read the article

  • Skip new Windows 7 user selection and go to login prompt

    - by Doltknuckle
    We've begun our migration to Windows 7 and we ran into an interesting issue. When we hit Ctrl+Alt+Del we are brought to the user selection screen. Normally this screen has an icon for every local user of the machine. These machines are domain members with Fast User Switching disabled, so no user names are listed, only the "Other User" option. If you click "Other User" or hit Enter, the system moves on to the normal login screen, where it prompts for user name and password. Here's the issue: we want to find a way to skip over the part where a user selects "Other User". We essentially want the system to always assume "Other User" and go directly to the login screen when a user hits Ctrl+Alt+Del. What I find odd is that "Other User" doesn't show up until more than one domain user has logged in; right after we re-image the machine, the login process goes directly to the user credential prompt. Anyone have any ideas?
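
    A hedged sketch (a standard policy value, but test on one machine first): enabling the "Interactive logon: Do not display last user name" policy makes Windows 7 present the bare user-name/password prompt at Ctrl+Alt+Del instead of the user tiles.

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" ^
            /v dontdisplaylastusername /t REG_DWORD /d 1 /f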

    Read the article

  • Forcing programs to be installed to another drive

    - by zyboxenterprises
    I have an SSD as my main Windows drive, with a 640 GB 2.5" HDD partitioned to store programs and user settings, and also to act as backup (it's the only thing I had lying around at the time of building my PC). The task was to make the PC as fast as possible, while having increased storage capacity available for normal user data, and to assist in my small data-recovery business. The problem is that whenever I install a program, it installs to C:\Program Files [(x86) for the 32-bit programs], although I have changed the environment variables. This wouldn't normally be an issue; however, every installer points its shortcut to my 640 GB HDD. [Screenshot: the root layout of both drives.] To clarify: program files get installed to C:\, while program shortcuts always point to Z:\, my 640 GB HDD. Modifying the relevant environment variables doesn't do anything. I looked at this, but it only talks about modifying the registry and environment variables, which I have already done. I install to the Z:\ drive if the installer lets me change the installation path, but some installers don't let me change it. Is there a way I can force every program to install to the relevant location on Z:\? Perhaps I'm missing something here? Edit: I found this program; would it be appropriate to use in my case? With it I would be able to move the entire Program Files (and its x86 version) to Z:\ without impacting performance.
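
    A hedged sketch of the registry half (these are the default-location values most installers actually read; change at your own risk, since some Microsoft updates assume the stock paths):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v ProgramFilesDir /t REG_SZ /d "Z:\Program Files" /f
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v "ProgramFilesDir (x86)" /t REG_SZ /d "Z:\Program Files (x86)" /f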

    Read the article

  • Very slow internet with Linksys WRT54GL only in wireless mode (wired is OK)

    - by gojira
    I bought a new Cisco Linksys WRT54GL router to connect my laptop (running Windows 7) to the internet, and installed the Tomato 1.28 firmware on it. When I connect the laptop to the router via Ethernet cable, everything is fine and I get extremely fast up- and download speeds. When I connect wirelessly, however, websites load extremely slowly; it takes dozens of seconds to load a page. This is my question: how can I fix the wireless speed issue? Gmail, for example, is unusable this way. I tried speedtest.net, but it always fails in the upload part of the test, so I can't even measure the bandwidth (could the fact that it fails in the upload part, not the download part, be an indication of what the problem is?). I have isolated the problem a bit: I am convinced it has to do either with the router itself, the router settings, or the settings of the wireless connection in Win 7, because previously I was using another router, by Buffalo, and had no problems whatsoever. I have tried to reproduce the settings from the Buffalo router as closely as possible on the Linksys router (same channel, same encryption, etc.). The download speed problem occurs only with the Linksys router, and only in wireless mode! When I swap the Linksys router for the Buffalo router I have here for testing, the wireless speed is back to normal. Also, I had exactly the same problem before I installed the Tomato firmware, so it has nothing to do with the firmware itself. Notes and things I already tried:

    - Changing the channel: does not seem to affect anything; I am also on the same channel (10) I was previously on with the Buffalo router.
    - QoS is off.
    - Ping to the router itself is OK, ~1 ms.

    Some current settings of the Linksys router:

    - WAN / Internet Type: DHCP
    - Wireless Mode: Access Point
    - B/G Mode: Mixed
    - Broadcast: check
    - Channel: 10 - 2.457 GHz
    - Security: WPA2 Personal
    - Encryption: AES

    Read the article

  • Cut (smart edit) .mts (AVCHD Progressive) files in Ubuntu Lucid

    - by pts
    I have a bunch of .mts files containing AVCHD Progressive video recorded by a Panasonic camera, and I need software on Ubuntu Lucid with which I can remove the boring parts and concatenate the interesting parts, all without re-encoding the video stream. It's OK for me to cut at keyframe boundaries. If Avidemux were able to open the files, it would take about 60 hours of work for me to cut them (at least that's what it took last time I tried with similar videos, in a file format supported by Avidemux). So I need a fast, powerful and stable video editor, because I don't want those 60 hours of work to go up to 240 or even 480 hours just because the tool is too slow or unstable or has a terrible UI. What I've tried:

    - Avidemux 2.5.5 and 2.5.6: they crash trying to open such a file, even if I convert the file to .avi first using mencoder -oac copy -ovc copy. mplayer can play the files.
    - Avidemux 2.6.0: can open the file, but cannot jump to the previous or next keyframe etc. (if I make it jump to the next keyframe and then to the previous keyframe, it doesn't end up at the original keyframe, and sometimes displays an error). Also, I'm not sure Avidemux 2.6.x would let me save the result without re-encoding.
    - Kdenlive 0.7.7.1: playback is very choppy, and it cannot play audio at all (complaining that SDL cannot find the device, although many other programs on the system can play audio). It would be a pain to work with.
    - Converting the .mts file to .mkv using ffmpeg -i input.mts -vcodec copy -sameq -acodec copy -f matroska output.mkv: this caused too many visible distortions in the video, in both mplayer and Avidemux.
    - Converting the .mts file with TsRemux.exe: Avidemux 2.5.x still can't open the resulting file.

    Is there another program to cut and concatenate the files? Is there a preprocessor which would create a file (without re-encoding the video) on which Avidemux wouldn't crash?
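
    Not an editor, but a hedged command-line fallback once the cut points are known (timestamps and filenames are placeholders; stream copy snaps to the nearest keyframe, and the concat demuxer needs a newer ffmpeg than Lucid ships):

        # Extract the interesting parts without re-encoding.
        ffmpeg -ss 00:01:00 -i input.mts -t 00:00:30 -c copy part1.ts
        ffmpeg -ss 00:05:00 -i input.mts -t 00:01:00 -c copy part2.ts

        # Concatenate; MPEG-TS segments can also simply be cat'ed together.
        printf "file 'part1.ts'\nfile 'part2.ts'\n" > list.txt
        ffmpeg -f concat -i list.txt -c copy joined.ts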

    Read the article

  • Firefox takes a really long time to load some sites on Ubuntu

    - by Dave
    Hello guys, I have an issue here. Some sites, just a few, take a really long time to load in Firefox. One example is A List Apart (http://www.alistapart.com/), which takes more than 30 minutes (yes, minutes, not seconds). In Opera, or even through a telnet session, the problematic sites load without problems, as fast as expected. I am using Ubuntu 8.04, running Firefox 3.6.3 downloaded from the Mozilla site, with a 10M ADSL connection. I tried many tweaks I found by googling, like disabling IPv6 and changing the HTTP pipelining settings in Firefox's about:config; none worked. I also used Firebug to find which phase of the negotiation is the bottleneck. [Screenshot of the Firebug timings omitted.] Well guys, any idea what the issue is, and how to solve it? I repeat: this only happens with Firefox (3.6.3 and prior versions), for a few sites only (even sites with many more requests, images, JavaScript files and stylesheets work fine), and the HTTP pipelining and IPv6 tweaks in about:config didn't work. Thanks
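
    A hedged way to split the stall outside the browser (these are standard curl timing variables): if the numbers come back small, the delay is inside Firefox (proxy auto-detection, IPv6 fallback, DNS prefetching) rather than on the wire.

        curl -o /dev/null -s -w 'dns=%{time_namelookup}s connect=%{time_connect}s total=%{time_total}s\n' \
            http://www.alistapart.com/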

    Read the article

  • Disk usage on IIS, PHP5, performance problems.

    - by Jacob84
    Hi everybody, I'm quite worried about a performance problem I'm facing on one of our production servers. I work for a hosting company, so you can imagine how heterogeneous the applications running here are. It all started with a call from a client complaining about the speed of loading a Joomla site. The setup is IIS 6 (Windows 2003) with PHP5 and FastCGI, which normally works pretty well. I tested the loading time and indeed, he was right: 7 or 8 seconds to load, when usually this can be accomplished in 2. Seeing these results, I started by checking CPU and RAM: everything normal, 2 GB of RAM free, 3%-8% of CPU activity. That's what I call a relaxed server ;). Unfortunately, digging a little deeper I found the PhysicalDisk counters quite high (above 10), especially the read queues. I used Process Explorer to see which of the processes has the highest deltas, but everything seemed normal. As the problem is specifically related to PHP pages, I checked the relevant IIS counters: actual connections, number of CGI requests, and number of ISAPI requests.

    - CGI: 3 to 7
    - ISAPI: 5 to 9
    - Connections: 90 to 120 (which appears at the top of the graph)

    More than a solution (I know this is hard to find), I would like to know if you have a specific methodology for facing this kind of problem. Thanks a lot, as always.
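
    A hedged first step for that methodology: put timestamps on the disk-queue spikes with the built-in typeperf, then line them up with the slow PHP requests in the IIS logs (these are the standard PhysicalDisk counters; the output path is a placeholder).

        typeperf "\PhysicalDisk(_Total)\Avg. Disk Read Queue Length" ^
            "\PhysicalDisk(_Total)\Avg. Disk Write Queue Length" ^
            -si 5 -o C:\diskqueues.csv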

    Read the article

  • Graphics Artifacts/ Texture Flickering

    - by Cerin
    Hey, I've been having some problems with artifacts in games. Sometimes textures flicker, and artifacts of various shapes and sizes show up, usually after a couple of games of Dota 2. I built my computer almost exactly one month ago and it has been doing this pretty much from the start, except that at first the artifacts just flashed on screen fast enough that I couldn't tell what they were, though I still noticed. In Dota I've seen green triangular artifacts, among other things. I've tried running FurMark for a while, but even though it pushes the GPU much harder than Dota 2, there are still no artifacts. The GPU maxes out at about 60°C in FurMark and stays around 40°C in every game I've tried; CPU and system temperatures don't usually get higher than 40°C either. These are my system specs:

    - Gigabyte Z68 Intel motherboard
    - 16 GB G.Skill Ripjaws DDR3 SDRAM
    - Sapphire Radeon HD 7770 GHz Edition
    - Intel Core i5-2500K (with built-in GPU)
    - Corsair 750 W PSU
    - Windows 7 64-bit

    I have the latest drivers for everything. What should I do about this? Try to RMA my graphics card? Are there other things that could be causing this?

    Read the article

  • Accessing ClearCase view drive from virtual machine is slow

    - by PermanentGuest
    I have a Windows XP virtual machine running under a Windows XP host. On the host, ClearCase 7.1.1.2 is installed and I have a dynamic view mapped to a drive letter. The view has a VOB/directory structure where my application DLLs from the nightly build and the config files are stored. I run my application on the host machine, where it uses the DLLs and config files from the VOB, and everything runs smoothly. Now I want to move this setup to a virtual machine. On the guest: I'm running the guest with VMware Player, and I don't want to install ClearCase on it, as I don't want to expose this machine to the network; the network setting in the guest is 'host-only'. I have mapped the host's ClearCase view drive as a shared folder, and I'm able to access this drive from the virtual machine; the application runs. However, the problem is that access to the ClearCase drive from the virtual machine is very slow. I can see this in Windows Explorer, and because of it, starting my application takes several seconds in the virtual machine while on the host it comes up pretty fast. My question is: is there any way to speed up the performance? I have managed to copy some of the DLLs which don't change frequently into the virtual machine to improve the performance. However, there are still a lot of DLLs which have to be taken from the ClearCase drive, as they change frequently.

    - VMware Player version: VM Player 3.0.1 build-227600
    - Both guest and host: Windows XP Service Pack 3
    - Host ClearCase: ClearCase 7.1.1.2
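
    A hedged workaround rather than a fix: mirror the frequently-changing DLL directory from the view into the guest at start-up, so the app loads everything locally (paths are placeholders; on XP, robocopy comes with the Windows Server 2003 Resource Kit).

        rem /MIR mirrors the source tree; /NFL /NDL keep the log quiet.
        robocopy Z:\myvob\app\dll C:\appcache\dll /MIR /NFL /NDL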

    Read the article

  • Slow manipulation of netfilter rules

    - by Ole Martin Eide
    I have a script maintaining GRE tunnels and firewall rules using the "ip" and "iptables" tools. Setting up hundreds of tunnels and addresses per interface runs just fine, taking less than 0.1 seconds per interface; but when I get around to the firewall rules, everything slows down, spending 0.5 seconds per insertion. Why is it running so slow? What can I do to improve the speed? It seems like I could try ipset instead, but I really feel there is something wrong with the kernel or something. The interesting thing is that the first 10 rules run fast, then it slows down.

        mybox(root) foo# iptables -V
        iptables v1.3.5
        mybox(root) foo# uname -a
        Linux foo 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
        mybox(root) foo# cat test.sh
        #!/bin/sh
        for n in {1..100}
        do
            /sbin/iptables -A OUTPUT -s ${n} -j ACCEPT
            /sbin/iptables -D OUTPUT -s ${n} -j ACCEPT
        done
        mybox(root) foo# time ./test.sh
        real    1m38.839s
        user    0m0.100s
        sys     1m38.724s

    I appreciate any help. Cheers!
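
    Not from the thread, but the usual fix for per-rule cost: every iptables invocation forks a new process and fetches and re-commits the whole table, so batching the rules through iptables-restore (present in iptables 1.3.5) turns hundreds of round-trips into one atomic commit. A hedged rewrite of the test:

        #!/bin/sh
        # Build the rule set in iptables-restore format, load it in one pass.
        {
            echo '*filter'
            for n in {1..100}
            do
                echo "-A OUTPUT -s ${n} -j ACCEPT"
            done
            echo 'COMMIT'
        } | /sbin/iptables-restore --noflush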

    Read the article
