Search Results

Search found 13341 results on 534 pages for 'obiee performance tuning'.

  • What is the effect on LVM snapshot size when a file block is rewritten with its original contents?

    - by NevilleDNZ
    I'm exploring using LVM snapshots to take off-site incremental archives from a snapshotted "master" file system. In essence: simply copy across only the files on the "master" that have changed since the last incremental copy to the "archive", then snapshot the "archive" to retain the incremental. I am a bit puzzled as to the block usage behaviour of the archive's own incremental snapshot. I'm expecting that LVM is not smart enough to know that the "file block" is actually unchanged, and that a new copy will be allocated and written for the fresh "archive" file system. Can anyone confirm this, or point me to a document/page that gives some hints? BTW: the OS disk cache, the hard disk's physical cache and the hard disk itself likewise don't need to do any actual "disk writes", as the "disk block" is also unchanged. Any pointers to discussion of this style of optimisation would also be interesting.
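
    One way to check this empirically (a rough sketch; the volume group, LV names and sizes below are hypothetical, and the percentage field may be called data_percent rather than snap_percent on newer LVM2 releases) would be to snapshot the archive, rewrite a file with byte-identical contents, and see whether the snapshot's allocation grows:

      # Snapshot the archive LV (names and sizes are examples only)
      lvcreate --snapshot --size 1G --name archive_snap /dev/vg0/archive

      # Note how much copy-on-write space the snapshot is using
      lvs -o lv_name,lv_size,snap_percent vg0

      # Rewrite a file on the origin with byte-identical contents
      cp /archive/somefile /tmp/somefile
      cp /tmp/somefile /archive/somefile && sync

      # If the percentage has grown, LVM copied the "unchanged" blocks anyway
      lvs -o lv_name,lv_size,snap_percent vg0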

    Read the article

  • Why such a dramatic difference in wireless router max. simultaneous connections?

    - by Jez
    Recently, I've needed to look into buying a wireless router for a mission-critical system at work that will need to support quite a few simultaneous connections (potentially a few hundred laptops). One thing I've noticed is that there seems to be a dramatic difference between the max. simultaneous connections different routers can support; see this page for example - anything from 32 to 35,000! Why is there this degree of difference? You'd have thought that if we know how to make routers that can handle thousands of connections, we wouldn't be making stuff that's limited to a pathetic 32 anymore. Is it a firmware thing? A hardware thing? Are low-end manufacturers purposely putting low arbitrary connection limits in so people can be "encouraged" to pay more for high-end routers?

    Read the article

  • Linux Read-Ahead Downsides

    - by JPerkSter
    Hi everyone, hope all is well. I have a question regarding read-ahead caching. Are there any downsides to raising the size of the read-ahead cache? On our farm, we're currently running at 256, and upon raising that higher, we are seeing significant throughput gains.   [root@server ~]# hdparm -tT /dev/sda /dev/sda: Timing cached reads: 7352 MB in 2.00 seconds = 3677.62 MB/sec Timing buffered disk reads: 244 MB in 3.10 seconds = 78.68 MB/sec [root@server ~]# blockdev --setra 10240 /dev/sda [root@server ~]# hdparm -tT /dev/sda /dev/sda: Timing cached reads: 11452 MB in 2.00 seconds = 5728.52 MB/sec Timing buffered disk reads: 422 MB in 3.17 seconds = 133.04 MB/sec We are running on 2.6. Thanks!
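
    If the larger value works out, one way to make it persist across reboots (a sketch; the device name, value and rule file name are assumptions) is to re-apply it at boot, either from rc.local or via a udev rule:

      # Check the current read-ahead value (in 512-byte sectors)
      blockdev --getra /dev/sda

      # Simplest option: re-apply it from /etc/rc.local at boot
      echo 'blockdev --setra 10240 /dev/sda' >> /etc/rc.local

      # Alternative: a udev rule re-applies it whenever the device (re)appears
      echo 'ACTION=="add|change", KERNEL=="sda", RUN+="/sbin/blockdev --setra 10240 /dev/sda"' \
          > /etc/udev/rules.d/60-readahead.rules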

    Read the article

  • Optimal disk partitions for database setup (15 Drives)

    - by Jason
    We are setting up a new database system and have 15 drives to play with (+2 on-board for the OS). With a total of 15 drives, would it be better to set up all 14 as one RAID-10 block (+1 hot spare), OR to split them into two RAID-10 sets: one for data (8 disks) and one for logs/backups (6 disks)? My question boils down to the following: is there a specific point where having more drives in a single RAID-10 set will outperform having the drives broken into smaller RAID-10 sets?
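
    One way to answer that for a specific workload (a sketch only; it assumes fio is installed, and the device path and job parameters are placeholders) is to build each candidate layout and benchmark it with a database-like random read/write mix:

      # Benchmark a candidate array with an 8K random 70/30 read/write mix.
      # WARNING: point --filename only at a test array with no data on it.
      fio --name=db-mix --filename=/dev/md0 --direct=1 --ioengine=libaio \
          --rw=randrw --rwmixread=70 --bs=8k --iodepth=32 --numjobs=4 \
          --runtime=120 --time_based --group_reporting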

    Read the article

  • Intel Core 2 Duo E6600 versus AMD Athlon II X2 3GHz

    - by Billy ONeal
    Hello :) I have an Intel Core 2 Duo E6600 (2.4GHz) in my current desktop, and I have a newer machine with an AMD Athlon II X2 at 3.0GHz. I'm wondering how the systems will perform in comparison to one another. I'd like to use the AMD because it's 45nm and uses less power, but I don't want to do so at a loss in performance. Which should perform better? Billy3

    Read the article

  • SQL Server Full Text Search resource consumption

    - by Sam Saffron
    When SQL Server builds a fulltext index, computer resources are consumed (IO/memory/CPU). Similarly, when you perform full text searches, resources are consumed. How can I get a gauge, over a 24 hour period, of the exact amount of CPU and IO (reads/writes) that fulltext is responsible for, in relation to global SQL Server resource usage? Are there any perfmon counters, DMVs or Profiler traces I can use to help answer this question?

    Read the article

  • What is the computer "doing" when it is running slow and task manager is not showing any CPU activity?

    - by Joakim Tall
    A typical example is when shutting down a memory-intensive application. It can take quite a while before the computer gets back up to speed. Is there some inherent cost in releasing memory? Or is it throttled by some kind of hard drive activity, and if so, is there any good way to track that? I usually bring up Task Manager when a computer is running slow, and sorting by CPU activity can usually show which process is causing the problem, but sometimes there is no activity showing. And yes, I "show processes from all users"; I have been wondering this since the days of Win2k :)

    Read the article

  • Windows XP: Slow start menu 'All Programs' response

    - by user17381
    When I click Start and then 'All Programs' (or select a sub-menu of All Programs), I get a grey menu which does not respond for about 5 seconds - after this it is OK. Any idea what is causing the menu to behave sluggishly? What can be done to fix this? Thanks. Info requested: System specs: Core 2 T5500 @ 1.66GHz, 2GB RAM. Windows version: XP Professional SP2. It happens every time I click the menu (not just the first time), and has gradually been getting worse. Nothing too unusual at startup: Comodo Firewall, AVG AV, TrueCrypt (only for a small volume). AV software: AVG.

    Read the article

  • sys.dm_exec_query_stats interaction with recompilation

    - by Sam Saffron
    We use sys.dm_exec_query_stats to track down slow queries and queries that are IO offenders. This works great and we get a lot of very insightful stats. It is clearly not as accurate as running a Profiler trace, as you have no idea when SQL Server will decide to chuck out an execution plan. We have quite a few queries where the wrong execution plan is cached. For example, queries like the following: SELECT TOP 30 a.Id FROM Posts a JOIN Posts q ON q.Id = a.ParentId JOIN PostTags pt ON q.Id = pt.PostId WHERE a.PostTypeId = 2 AND a.DeletionDate IS NULL AND a.CommunityOwnedDate IS NULL AND a.CreationDate > @date AND LEN(a.Body) > 300 AND pt.Tag = @tag AND a.Score > 0 ORDER BY a.Score DESC The problem is that the ideal plan really depends on the date selected (screenshot of ideal plan): However, if the wrong plan is cached, it totally chokes when the date range is big: (notice the big fat lines) To overcome this we were recommended to use either OPTION (OPTIMIZE FOR UNKNOWN) or OPTION (RECOMPILE). OPTIMIZE FOR UNKNOWN results in a slightly better plan, which is far from optimal; executions are tracked in sys.dm_exec_query_stats. RECOMPILE results in the best plan being chosen, however no execution counts or stats are tracked in sys.dm_exec_query_stats. Is there another DMV we could use to track stats on queries with OPTION (RECOMPILE)? Is this behavior by design? Is there another way we can force recompilation while keeping stats tracked in sys.dm_exec_query_stats? Note: the framework will always execute parameterized queries using sp_executesql.

    Read the article

  • How do I remove 1,000,000 WebsiteCache directories?

    - by harper
    I found that more than 1,000,000 subdirectories have been created in a WebsiteCache directory. I want to remove all these directories. My first approach was to use the command line tool: cd WebsiteCache rmdir /Q /S . This will remove all subdirectories except WebsiteCache itself, since it is the current working directory. I noticed after two hours that the directories starting with A-H had been removed. Why does rmdir remove the directories in alphabetical order? It must take additional effort to do this in order. What is a fast way to delete such a number of directories?

    Read the article

  • CPU and HD degradation on source-based Linux distributions

    - by danilo2
    I have been wondering for a long time whether source-based Linux distributions, like Gentoo or Funtoo, "destroy" your system faster than binary ones (like Fedora or Debian). I'm talking about CPU and hard drive degradation. Of course, when you're updating your system it has to compile everything from source, so it takes longer and your CPU is pushed harder (it runs warmer and under heavier load). Such systems compile hundreds of packages weekly, so does it really matter? Does such a system degrade faster than binary-based ones?

    Read the article

  • How to calculate required switch speed based on network usage?

    - by tobefound
    I have a 48 port HP ProCurve Switch 2610 (J9088A) that can handle 13.0 million PPS (packets per second) and features wire-speed switching capacity at 17.6Gbps. First off, what does that REALLY mean? Where do I start when trying to figure out if my office (with 70 employees) will be well served by this switch? How do I calculate throughput based on an average user load of X MB per day? 90% of the folks will only be sending email, accessing random websites, etc... the other 10% will be conducting heavier tasks like moving image files (10 MB) across network shares, constant external FTP streams through the switch to a server, etc... Is this switch good enough?
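
    As a rough back-of-the-envelope illustration (the per-user figures here are assumptions, not measurements): if the 63 light users averaged 0.5 Mbit/s each and the 7 heavy users each saturated a 100 Mbit/s port while copying files, the worst-case aggregate would be about 63 x 0.5 + 7 x 100 ≈ 730 Mbit/s, well under the switch's quoted 17.6 Gbps switching capacity; the 13 million PPS figure is the forwarding rate usually quoted for minimum-size frames, which 70 users of this kind will not come close to generating.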

    Read the article

  • RAID 6 that can read at least 1000 Mbit/s?

    - by Diblo Dk
    I purchased a Dell PERC 6/i which I expected to be able to read at 1000 Mbps. There is not much to do about it now, but there are some things I wanted to know for another time. I have configured it with four 2 TByte drives and RAID 6. It has 256 MByte of RAM and a transfer rate of 300 Mbps. The benchmark test showed: Min read rate: 136.3 Mbps Max read rate: 329.6 Mbps Avg read rate: 242.2 Mbps What could I have done to get at least 1000 Mbps? Is it normal for internal and external RAID controllers to have a lower transfer rate, e.g. 300 Mbps? (I did not notice at the time that it was not 3 Gbps.) How would RAID 10 have performed compared to RAID 6 or 5? Would it have been better to use software RAID (Linux) with the internal 3 Gbps SATA controller? UPDATE: The drives are SATA III 6 Gbps. http://www.seagate.com/files/staticfiles/docs/pdf/datasheet/disc/desktop-hdd-data-sheet-ds1770-1-1212us.pdf (2TB)

    Read the article

  • Sound test works, but no other sounds, and Windows starts very slowly

    - by Kristian Kari
    So suddenly all sounds on my computer stopped working, except that the sound test works; when I go to Control Panel and Audio it says there's no audio device. Audio players and audio on websites don't work either, and Windows also starts very slowly. I'm not sure if this helps, but I'll tell you what happened before this. I downloaded a program from a slightly shady website. I hesitated, and for a good reason I guess, because the moment I finished downloading it my antivirus detected it as an unwanted program and put it in quarantine. I thought everything was fine, but I had some problems with the program (I won't go into detail about it because it's not really related to audio), so I rebooted the computer and was surprised that Windows started really, really slowly. It took like 5 minutes to load the desktop and such. After that I noticed the sounds were missing, and I immediately deleted the program since I thought it was causing the problem. But it didn't change much. I tried to find a solution with no success, and now I'm just scanning my computer again. I'd be thankful if anyone would be able to help me. I might have left a thing or ten things out, so feel free to ask. Oh, and I'm using Windows XP.

    Read the article

  • iTunes and Hulu Playback Choppy and Slow?

    - by Bart Silverstrim
    Specs: Windows XP, latest updates; 1.7 GHz Pentium 4; 1 GB RAM; DirectX 9.0c; NVIDIA GeForce FX 5200 with 256 MB RAM; OpenGL 2.1. The story: Okay, I had an older system lying around that I figured I would try turning into a mini media system to connect to our TV. I put together a lot of older parts, got it into working order, etc. and hooked it up, and voilà... a slower, but usable, system that displayed to the TV. It could run some things decently. I put in iTunes and it played video okay. Not great, but okay. I played Hulu, and since we have a 1Mb download rate, the minimum for their site, there were some choppy moments when watching their shows, but I found that (sadly) changing the resolution to 800x600 seemed to help with the issue when running full screen. I downloaded the application called Boxee and installed it. It wouldn't run; apparently the video card in the system supported OpenGL 1.2, and it needed at least 1.4. I bought a cheap card, the 5200, with four times the memory and support for OpenGL 2.1. Installed it, and everything seemed fine. iTunes seemed to run fine, the video driver (PNY video card) came with OpenGL 2.1, and Boxee finally ran. I then upgraded to the latest drivers for the video card and ran the DirectX updater from MS. After that, the OpenGL Extension Viewer wouldn't run. It just stayed as an icon in the task bar. Also, any and all videos in iTunes stuttered and went out of sync horribly. Unwatchable. I tried watching Hulu video in Boxee, and it displayed video like a series of stills in a very bad PowerPoint. Playing straightforward audio-only came through fine, no stutters, no hiccups. I tried System Restore to roll back to before the DirectX update (that seemed to be what triggered the weird behavior), no joy. I tried uninstalling and reinstalling the video drivers. I installed updated audio drivers (Ensoniq AudioPCI), nothing helped. I finally wiped the drive last night, reinstalled everything and restored my iTunes content via an import from a backup. Fresh install, no updates to the video card drivers or DirectX, and the problem was still there: although I haven't tested Hulu, the iTunes player is still stuttering like crazy if I play video, but is fine if I play audio. I know the processor isn't high in heft, but with one gig of RAM, and the fact that it seemed to do okay before, I thought the problem must be software-related. Has anyone else run into this sort of issue and have a solution other than "buy a new computer"? What specs seem to work with video at the low end for you? Right now the system is of little use other than keeping my music library and iTunes apps synced with my iPod.

    Read the article

  • Microsoft Mouse and Keyboard Center - Slow response for App-specific shortcuts

    - by Darrel Hoffman
    So a few months ago, I bought a new MS mouse, and was surprised that they'd discontinued Intellipoint in favor of this Microsoft Mouse and Keyboard Center. It seems to have the same functionality underneath all the bloat, but there's a very serious drawback - when I set up application-specific functions for the extra buttons on the mouse, they work, but sometimes with a very long delay, like up to a minute or more. For example, I often set up the left side button as an "Undo" in various programs for convenience. But sometimes, when I try to use that Undo button, nothing happens, so I'm forced to use the standard Ctrl-Z or whatever. But then, a minute or so later, it suddenly remembers that I hit that button a while back, and calls the Undo unexpectedly on something entirely different. It's infuriating. No modern computer function should be this slow. It's not the software or the computer itself, because doing an Undo via Ctrl-Z or the menu still works instantly. It's very definitely a side-effect of delayed response to the mouse button. Usually after it delays the first time, it'll work quickly after that, but if you haven't used a given shortcut in several minutes, it "forgets" again and you get another inexplicably long delay. Intellipoint never had this problem, but it's not supported any more, and not compatible with the newer mice. Has anyone else noticed slow-downs with MS M&K C and app-specific shortcuts? Any ideas how to get around this? I use these shortcuts extensively in my workflow and it's just entirely unacceptable to have such a long delay in what should be a pretty basic feature.

    Read the article

  • What is the fastest way to back up a disk image over a LAN?

    - by David Balažic
    Sometimes I boot SystemRescueCd or a similar live Linux on a PC to back up the hard drive over the local network to my server. I have noticed many times that the transfer speed is not optimal (slower than both the HDD and the network speed). Are there any rules of thumb about what to do and what to avoid? What I typically do is something like: dd bs=16M if=/dev/sda | nc ... # on client nc ... | dd bs=16M of=/destination/disk/backup1 # on server I also "throw in" lzop (others are way too slow) and sometimes on-the-fly md5sum calculation (of both the uncompressed and compressed source). I try to add (m)buffer (or other alternatives) to improve throughput (and get a progress indicator). I noticed that even with enough free CPU, adding commands to the pipeline slows things down. Typically the destination is on an NTFS volume (accessed via ntfs-3g, with the big_writes option).
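
    For reference, a concrete version of that pipeline might look like this (hostname, port and buffer sizes are placeholders, and netcat option syntax varies between implementations):

      # On the server: listen, buffer, decompress, write the image
      nc -l -p 9000 | mbuffer -m 512M | lzop -dc | dd of=/destination/disk/backup1 bs=16M

      # On the client booted from the rescue CD: read, compress, buffer, send
      dd if=/dev/sda bs=16M | lzop -c | mbuffer -m 128M | nc server.example 9000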

    Read the article

  • What are possible causes of keyboard lag on my desktop machine?

    - by Jer
    I am running Windows 7 and began experiencing keyboard lag in most applications, and it seems to be getting worse. Certain websites are the worst - on some, I can type a sentence, take my hands off the keyboard, and watch the characters continue to appear on the screen for several seconds. Others are not as bad, but still noticeable and annoying. I just started noticing it in non-browser applications (e.g. Outlook) as well. I've disabled all extensions in Firefox and rebooted my machine, and that did nothing. There is nothing using much memory or CPU cycles, even when the lag is occurring. This is a machine at work with very strict controls over what can be installed, so the chances of any kind of malware are very slim. I don't believe anything has been installed since before the problem started. What could be causing this, and/or what can I do to debug it?

    Read the article

  • How to measure startup time and order of Windows services on boot?

    - by djangofan
    I am not asking how to measure server startup time here. I am wondering if anyone knows of a tool that can measure and show a graph of the startup time and order of all the windows services during system startup. I saw a software program shown on my local Portland news last week that does this but I am unable to remember what it was called or anything else about it. All I remember is that it was a "tech" news story to help computer users with their computers. So, I know the software exists and I am trying to find it.

    Read the article

  • Ultra-lightweight web browser?

    - by zildjohn01
    Are there any good super-lightweight graphical web browsers out there? I'd like to be able to browse the web on an old PC, but the mainstream crop of browsers is just too heavy, and I don't want to resort to something like Lynx. There must be something decent out there that'll fit in 16 or 32MB of RAM comfortably. 100% standards compliance isn't necessary, but I'd like something that supports the most widely used parts of CSS and JavaScript. The goal is to get 98% of sites usable in a nice, graphical format.

    Read the article

  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can provide it nice tables of IP addresses and it will efficiently handle blocking based on them. However, for various reasons (before my time) the decision was made to switch to CentOS. iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious about adding that many rules to an iptables chain. ipt_recent would be awesome for doing this, plus it provides a lot of flexibility for just severely slowing down access, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than the one that comes with CentOS, which, whilst I'm perfectly capable of doing it, I'd rather not do from a patching, security and consistency perspective. Other than those two, it looks like nfblock is a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
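
    For what it's worth, the ipset approach mentioned above looks roughly like this once a new enough kernel and iptables are in place (the set name, sizing and addresses are examples only; older ipset releases use the 'ipset -N name iphash' syntax instead):

      # Create a hash-based set sized for thousands of entries
      ipset create scrapers hash:ip hashsize 65536 maxelem 65536

      # Populate it (in practice this would be driven by the detection scripts)
      ipset add scrapers 203.0.113.10
      ipset add scrapers 198.51.100.42

      # One iptables rule then matches the whole set via a hash lookup
      iptables -I INPUT -m set --match-set scrapers src -j DROP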

    Read the article

  • How to monitor IO svctm every 5 minutes using Nagios?

    - by sabya
    I want to collect samples of iostat's svctm and await every 5 minutes from all of my servers and store them in Nagios. I want the values for what is happening in each 5-minute window (not since boot time; iostat's first output gives values since boot time). How can I do this in Nagios? EDIT: The tps should NOT be calculated as the number of transactions since reboot divided by uptime. What I want is the number of transfers that happened in the last X minutes divided by X*60.
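
    A minimal sketch of the sampling side (the wrapper below is hypothetical; the awk field positions depend on the sysstat version, and newer versions drop svctm entirely): run iostat with an interval and a count of 2 and keep only the second report, which covers just that interval rather than the time since boot:

      #!/bin/bash
      # Report await/svctm for one device over an N-second window, Nagios-style.
      DEVICE=${1:-sda}
      INTERVAL=${2:-300}

      iostat -dx "$DEVICE" "$INTERVAL" 2 | awk -v dev="$DEVICE" '
          $1 == dev { await = $10; svctm = $11 }   # last match wins = 2nd report
          END { printf "DISK OK - await=%sms svctm=%sms | await=%s svctm=%s\n", await, svctm, await, svctm }'

    Since an active check that sits for 300 seconds would likely hit Nagios's plugin timeout, a script like this would more realistically run from cron (or as a passive NSCA submission); the interval-versus-since-boot point is the same either way.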

    Read the article
