Search Results

Search found 13341 results on 534 pages for '1 obiee performance tuning'.


  • Intel Ethernet Bottlenecking Internet?

    - by Donald Darma
    I'm having trouble with my internet speeds. I recently built a PC and everything else is fine. I installed the Intel drivers and connected to the internet. It connects, but I'm only getting half the speed I should be: my normal speed is 20 Mbps, but speedtest.net is only showing 10. It can't be my ISP (which is TWC, if anyone is asking) because my other devices, like my laptop and my smartphone, are showing 20 down. Here's my system:
    CPU: i5 4430
    HSF: stock cooler
    Mobo: Gigabyte Z87MX-D3H
    GPU: 2x MSI R7950-3GD5/OC BE
    RAM: Crucial Ballistix Tactical Tracer 8GB, dual channel
    PSU: Silencer High Performance Power Supply 750 Watt 80+ (it's a subdivision of OCZ)
    HDD: Seagate Barracuda 7200RPM 3TB
    SSD: Samsung 840 Evo 120 GB
    Case: Corsair Obsidian 350D
    Edit: I am using the stock adapter on the motherboard. I know for a fact that the cable is good because I used it on my laptop and it ran fine; it's a Cat5e cable. I also ran iperf and it gives me the same result, 10 Mbps.
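
    A quick, scriptable first check is whether the onboard Intel adapter actually negotiated the expected link speed and duplex, since a 10 Mbit/s or half-duplex link would cap throughput regardless of the ISP. This is only a sketch and assumes Python plus the third-party psutil package are available on the machine:

    ```python
    # Hypothetical check of the negotiated NIC link speed/duplex via psutil
    # (assumes the package is installed: pip install psutil).
    import psutil

    for name, stats in psutil.net_if_stats().items():
        # stats.speed is the negotiated link speed in Mbit/s (0 if unknown);
        # stats.duplex is NIC_DUPLEX_FULL, NIC_DUPLEX_HALF or NIC_DUPLEX_UNKNOWN.
        print(f"{name}: up={stats.isup} speed={stats.speed} Mbit/s duplex={stats.duplex}")
    ```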

    Read the article

  • What is the proper way of debugging a slow Windows installation?

    - by Niklas
    You know the drill: you've been asked to check why your cousin's computer is running slow. I was there yesterday. Having been a Mac user since 2007, I haven't really dug deep into Windows internals in the past five years. Googling for answers reveals many, many different answers: a broken registry, spyware, antivirus programs, a fragmented disk, turning off visual effects, etc. In this particular case I was asked to look at a two-year-old HP laptop with Vista. Windows was running incredibly slowly, and even opening a new Explorer window took almost a minute. I ended up doing all of the above: running CCleaner, defragmenting the disk, turning off visual effects, turning off Norton, and a bunch of other things people believe have an impact on Windows performance. Now I'd like to understand this in depth. Is there a proper, "scientific" if you will, way of debugging and understanding where the problem with a slow-running Windows installation lies? (In my particular case this concerned Windows Vista, but let's try to create a general guide for XP and Windows 7 too.) To me, it seems wrong to just run a bunch of different tools without understanding the underlying cause of the problem.
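
    One way to put numbers behind the guesswork before changing anything is to sample which processes are actually consuming CPU and memory. The following is only a rough sketch and assumes Python with the third-party psutil package can be installed on the slow machine:

    ```python
    # Hypothetical snapshot of the top CPU/memory consumers using psutil
    # (assumes: pip install psutil). Run it before touching any settings.
    import time
    import psutil

    procs = list(psutil.process_iter(['name']))
    for p in procs:
        try:
            p.cpu_percent(None)          # prime the per-process CPU counter
        except psutil.Error:
            pass
    time.sleep(1.0)                      # measure over a one-second window

    rows = []
    for p in procs:
        try:
            rows.append((p.cpu_percent(None), p.memory_info().rss, p.info['name'] or '?'))
        except psutil.Error:
            pass

    for cpu, rss, name in sorted(rows, reverse=True)[:10]:
        print(f"{cpu:5.1f}% CPU  {rss / 2**20:8.1f} MiB  {name}")
    ```

    Built-in tools such as Performance Monitor and Resource Monitor expose the same counters if installing anything on the machine is not an option.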

    Read the article

  • Looking for some IIS redirect help/ideas

    - by CoreyT
    Right now we have a site with a LOT of static asp pages such as www.site.com/123.asp. This is due to how our current site's CMS builds its pages by default. I don't have an exact count, but we have roughly 6000 asp files in the site right now. We are in the middle of a redesign and restructuring of the site, and are looking to migrate to SEO-friendly URLs. The problem we're having is how to redirect the old pages to the new friendly URLs. I know how to do redirects; that is not the issue here. The problems I am coming up with right now are listed below.
    1 - Is there a limit to the number of redirects in IIS?
    2 - Would having even a few thousand redirects affect IIS performance?
    3 - My understanding is that we would not be passing along page rank to the new URLs; is that true? (Not a major question; I can ask on SEO forums if nobody here is sure.)
    4 - Would using something like the IIS URL Rewrite 2 module for IIS 7 help us out? Or would I still need to define several thousand unique redirects in it? (See the sketch below.)
    Our server is currently running Server 2003; however, in the redesign I would be open to migrating to Server 2008 R2 if there is a good case for it (e.g. the URL Rewrite module). Thanks for any guidance or help. I have been looking for a good way to do this for a while now and keep coming up with things that sound problematic and bad (such as having 6000 redirects).
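
    If the URL Rewrite 2 route looks promising, one way to avoid hand-writing thousands of rules is to keep the old-to-new mapping as data and generate a rewrite map from it, which a single redirect rule can then consult. The following is only a rough sketch: the CSV name, its column layout, and the exact XML expected by the module's rewriteMaps feature should be checked against the URL Rewrite documentation.

    ```python
    # Hypothetical generator: turns a CSV of "old_path,new_url" pairs into
    # <rewriteMap> entries for the IIS URL Rewrite module. File and element
    # names here are assumptions for illustration only.
    import csv
    from xml.sax.saxutils import quoteattr

    entries = []
    with open("legacy_urls.csv", newline="") as f:
        for old_path, new_url in csv.reader(f):
            entries.append(f"  <add key={quoteattr(old_path)} value={quoteattr(new_url)} />")

    fragment = '<rewriteMap name="LegacyRedirects">\n' + "\n".join(entries) + "\n</rewriteMap>"

    with open("rewritemap-fragment.xml", "w") as out:
        out.write(fragment)

    print(f"Wrote {len(entries)} map entries; reference the map from one redirect rule.")
    ```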

    Read the article

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. The problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :) Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as the host and CentOS 5.x as a guest OS (and with which system: Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option, just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue. EDIT: The target audience / users of this kind of system would be developers, and each one needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there for building your own pre-built distribution that had, e.g., Ubuntu Desktop as the host and CentOS 5.x as a guest.

    Read the article

  • "iostat" command different in two equal machines

    - by Oz.
    We have several machines on Amazon (EC2) of type c1.xlarge with 8 CPUs, running the Amazon AMI. Details on the machine:
    7 GB of memory
    20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
    1690 GB of instance storage
    64-bit platform
    I/O Performance: High
    API name: c1.xlarge
    One of these machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that the top command is not showing any hint of the cause of the load. CPUs are 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st. Mem is about 1.5GB free. Any idea what it could be, or where else we can check?
    iostat output on one of the machines that behaves properly:
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               8.97    0.03    4.46    0.19    0.14   86.23
    Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read    Blk_wrtn
    xvdap1            1.60         0.69        55.38     587620    47254184
    xvdfp2            2.64         1.10        61.04     934786    52091056
    xvdfp4            0.86         0.19        41.72     163866    35601920
    xvdfp1            4.37        36.59        73.89   31220810    63051504
    xvdfp3            8.03         7.08        94.63    6045402    80749184
    iostat output on the problematic machine:
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               9.29    0.04    5.55    0.26    0.11   84.74
    Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read    Blk_wrtn
    xvdap1            2.13         3.34        68.85     246244     5077888
    xvdfp1            7.60        74.31       104.88    5480362     7734840
    xvdfp3           13.22        73.67       125.00    5433386     9218600
    xvdfp4            1.11         0.76        65.08      55762     4799248
    xvdfp2            4.16         3.31        99.17     243818     7313264
    Many thanks for the help.

    Read the article

  • Wireless keeps shutting off in Windows 7

    - by Nathan Adams
    I have Windows 7 Ultimate 32-bit installed on a Dell Latitude XT Tablet, and for the life of me I can't figure out this really weird problem. The symptom is that the wireless will disconnect from the AP, and if I tell it to scan again, it says there are no APs in the area. I do have another wireless card in the laptop, and if I disable the first one and enable the second, I am able to get onto the wireless; however, if I want to use the first card again I have to restart. I tried enabling/disabling the device; nothing will kick-start the wireless again on the first card without a restart. I even tried different drivers. It seems to be random, but it does occur more often when there is increased network activity (i.e. downloading a large file). The laptop doesn't seem to be overheating. I have tried the following:
    Under "Change Advanced Power Settings" for the current power profile, I set the "Wireless Adapter Settings" to "Maximum Performance".
    Under Device Manager, I went to the card in question, went to the Advanced tab, and set the "Power Saving mode" to "MAX_PSP".
    Both cards I have seem to exhibit the behavior after a while. The two cards are:
    Dell Wireless 1505 Draft 802.11n WLAN Mini-Card
    Gigabyte GN-WS30N 802.11n mini WLAN Card
    Does anyone have any ideas, or has anyone run into this before?

    Read the article

  • Can MySQL use multiple data directories on different physical storage devices

    - by sirlark
    I am running MySQL with its data dir on a 128GB SSD. I am dealing with large datasets (~20GB) that are loaded and processed weekly, each stored in a separate DB for the purposes of time-point comparisons. Putting all the data into a single database is unfeasible because performance on such large databases is already a problem. However, I cannot keep more than 6 datasets on the SSD at a time. Right now I am manually dumping the oldest to a much larger 2TB spinning disk every week, and dropping the database to make space for the new one. But if I need one of the 'archived' databases (a semi-regular occurrence) I have to drop a current one (after dumping it), reload the archived one, do what I need to, then reverse the whole process. Is there a way to configure MySQL to use multiple data directories, say one on the SSD and one on the 2TB spinning disk, and 'merge' them transparently? If I could do this, then archiving would no longer mean "moved out of the database entirely", but instead would mean "moved onto the slow physical device". The time taken to run my queries on a spinning disk would be less than the time taken to completely dump, drop, load, drop, and reload two entire databases, so this is a win. I thought of using something like unionfs, but I can't think of a way to control which database gets stored on which physical drive, because it works by merging at a directory level (from what I understand), so I'm still stuck with using multiple directories. Any help appreciated; thanks in advance.
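
    One workaround that sometimes gets used is to move an 'archived' database's directory to the spinning disk and leave a symlink behind in the datadir, so MySQL still sees it in place. This is only a sketch under assumptions: each database lives in its own directory under the datadir, mysqld is stopped while directories are moved, the paths below are placeholders, and file ownership plus any SELinux/AppArmor policy must also allow the new location. It is worth testing on a throwaway database first.

    ```python
    # Hypothetical helper: relocate one database directory to a slower disk and
    # symlink it back so MySQL keeps finding it under the datadir.
    # Paths and database names are placeholders; stop mysqld before running.
    import os
    import shutil

    DATADIR = "/var/lib/mysql"          # assumed SSD-backed datadir
    ARCHIVE = "/mnt/spinning/mysql"     # assumed mount point of the 2TB disk

    def archive_database(name):
        src = os.path.join(DATADIR, name)
        dst = os.path.join(ARCHIVE, name)
        if os.path.islink(src):
            raise RuntimeError(f"{name} is already archived")
        shutil.move(src, dst)           # copies across filesystems, then removes
        os.symlink(dst, src)            # MySQL still finds the database here

    def restore_database(name):
        src = os.path.join(DATADIR, name)
        if not os.path.islink(src):
            raise RuntimeError(f"{name} is not archived")
        dst = os.readlink(src)
        os.remove(src)
        shutil.move(dst, src)

    if __name__ == "__main__":
        archive_database("dataset_2013_week01")   # hypothetical database name
    ```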

    Read the article

  • Server 2003 and SSL Certificates

    - by Keith Stokes
    I have a Windows 2000 domain with dozens of Windows 2000 servers and a few 2003 servers. Each server runs a custom app that talks to a 3rd party using self-signed certificates. To help with troubleshooting we've created a custom test app. The 2000 servers are able to talk within seconds. The 2003 servers take anywhere from 10-30 seconds using a domain account and much less, usually under 5 seconds, using a local account. The only exception to the local-account performance is a new account, which is slow initially and then faster. If you leave the test app open and reconnect repeatedly, it talks within seconds. If you leave it open for somewhere between 1 and 2 hours, it reverts to the previous 10 seconds, so obviously something is being cached. Installing the destination certificates in the local 2003 server store makes no difference. I've installed the certificates in AD, and that apparently makes domain accounts work in 9-12 seconds, versus the 30 seconds that was typical before. Manually clearing the certificate store on the 2003 server makes no difference. I'm at a loss as to where the certs might be cached, and whether I'm using some sort of domain certificate store that's hidden from me.

    Read the article

  • Windows/IIS Hosting :: How much is too much?

    - by bsisupport
    I have 4 Windows 2003 servers running IIS 6. These servers host a bunch of unique web sites (in that they are all different in build/architecture/etc.). The code behind these sites ranges from straight HTML to classic ASP to 1.1/2.0/3.x flavors of .NET. Some (most) of the sites use a SQL backend, which is hosted on one or two different servers, not the IIS servers themselves. There is no virtualization on these servers and no load balancing for these particular sites. The problem I'm running into is coming up with some baseline metrics, basically a "baseline score", to know when a web server has reached its hosting limit. Today, some basic information about each server is used: how much bandwidth the server pumps out, hard drive space availability, and basic (very basic) RAM and CPU utilization (what it looks like at peak traffic times). I would be grateful if those of you that are 1000x smarter than I am could indulge me with your methods of managing IIS environments, whether that's performance monitoring specifics, the kind of "score" determination I'm trying to do, or the obvious combination of both. Thanks in advance.
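
    For collecting the raw numbers over time, one low-effort option is to log system-wide counters on an interval and derive the baseline from the history. This is only a sketch; it assumes Python and the third-party psutil package are acceptable on the servers, and the interval, drive letter and output file are arbitrary choices:

    ```python
    # Hypothetical baseline logger: appends CPU, memory, disk and network
    # counters to a CSV every 60 seconds (assumes: pip install psutil).
    import csv
    import time
    import psutil

    with open("baseline.csv", "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            net = psutil.net_io_counters()
            writer.writerow([
                int(time.time()),
                psutil.cpu_percent(interval=1),     # % CPU over the last second
                psutil.virtual_memory().percent,    # % RAM in use
                psutil.disk_usage("C:\\").percent,  # % disk space used
                net.bytes_sent,                     # cumulative bytes out
                net.bytes_recv,                     # cumulative bytes in
            ])
            f.flush()
            time.sleep(60)
    ```

    IIS-specific counters from Performance Monitor (requests per second, queue length) would complement these host-level numbers when deciding on a "score".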

    Read the article

  • RAID 10 or RAID 5 for multiple VMs - what is the best choice?

    - by Lars Fastrup
    I have just ordered a new rig for my business. We do a lot of software development for Microsoft SharePoint and need the rig to run several virtual machines for development and test purposes. We will be using the free VMware ESXi for virtualization. For a start, we plan to build and run the following VMs, all with Windows Server 2008 R2 x64:
    Active Directory server
    MS SQL Server 2008 R2
    Automated build server
    SharePoint 2010 server for hosting our public web site and our internal intranet for a few people. The load on this server is going to be quite insignificant.
    2x SharePoint 2007 development server
    2x SharePoint 2010 development server
    Beyond that we will need to build several SharePoint farms for testing purposes. These VMs will only be started when needed. The specs of the new rig are:
    Dell R610 rack server
    2x Intel Xeon E5620
    48GB RAM
    6x 146GB SAS drives
    Dell H700 RAID controller
    We believe the new server is going to make our VMs perform a lot better than our existing setup (2x Intel Xeon, 16GB RAM, 2x 500GB SATA in RAID 1). But we are not sure about the RAID level for the new rig. Should we go for having the 6x 146GB SAS drives in a RAID 10 configuration or a RAID 5 configuration? RAID 10 seems to offer better write performance and a lower risk of RAID failure, but it comes at the cost of less drive space. Do we need RAID 10, or would RAID 5 also be a good choice for us?
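
    As a quick back-of-the-envelope comparison, the standard capacity and write-penalty formulas for the 6 x 146 GB set work out as follows (a sketch only; it ignores controller-specific overhead, hot spares, and the effect of the H700's cache):

    ```python
    # Rough RAID arithmetic for 6 x 146 GB drives using the textbook formulas;
    # real usable space will be slightly lower after formatting and metadata.
    drives, size_gb = 6, 146

    raid10_usable = (drives // 2) * size_gb   # mirrored pairs: half the drives hold copies
    raid5_usable = (drives - 1) * size_gb     # one drive's worth of capacity goes to parity

    print(f"RAID 10: {raid10_usable} GB usable, ~2 disk I/Os per random write")
    print(f"RAID 5 : {raid5_usable} GB usable, ~4 disk I/Os per random write")
    # A 6-drive RAID 10 survives any single drive failure (and some double
    # failures, as long as both halves of a mirror are not lost); RAID 5
    # survives exactly one failed drive.
    ```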

    Read the article

  • How to transform a CSV to combine matching rows?

    - by Christian Wolf
    I have a CSV file with some transaction data: let's say date, volume, price and direction (sell/buy). Additionally there is an ID for each transaction, and each closing transaction (the newer one) carries a reference to the corresponding opening transaction. Classic database referencing. Now I want to do some statistics and draw some plots. This could be done via Octave, LaTeX/TikZ, Gnuplot or whatever. To do this I need both the buy and the sell price in one row. My thought was to preprocess the CSV to get another CSV containing the needed information, and then do the statistics. In the end I'd like a solution based on scripts and not on a spreadsheet, as the data might change often (it is exported from an online DB). My current solution (see http://paste.ubuntu.com/6262822/ ) is a bash script that parses the CSV line by line and checks whether a corresponding transaction exists. If found, a new row is written to the destination CSV. If not, a warning is printed. The bad news: for each row in the source file I have to read the whole file a few times. This leads to long running times of 10 seconds for 300 lines. As the line count might soon rise (to 10k lines), this is not acceptable. I am aware that the script opens many subshells, which might be causing the performance problems. Now my questions: Is bash/awk/sed/... a good way to do this? Should I first import all the data into a "real" local database and use SQL? Is there an easy way to achieve the desired result?
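
    For comparison, a single pass with a dictionary keyed on the transaction ID avoids re-reading the file for every row. The sketch below is illustrative only: the file names, column names, and the assumption that a closing row stores its opener's ID in a "ref" column (and appears later in the file) are all made up and would need to be adapted to the real export format.

    ```python
    # Hypothetical single-pass matcher: index opening transactions by ID, then
    # emit one combined row per closing transaction. Names are placeholders.
    import csv

    opens = {}     # transaction id -> opening row
    pairs = []     # combined rows for the output CSV

    with open("transactions.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row["ref"]:                          # closing row references its opener
                opener = opens.get(row["ref"])
                if opener is None:
                    print(f"warning: no matching transaction for id {row['id']}")
                    continue
                buy, sell = (opener, row) if opener["direction"] == "buy" else (row, opener)
                pairs.append({
                    "open_date": opener["date"], "close_date": row["date"],
                    "volume": opener["volume"],
                    "buy_price": buy["price"], "sell_price": sell["price"],
                })
            else:
                opens[row["id"]] = row              # remember openers for later lookup

    with open("pairs.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["open_date", "close_date", "volume",
                                               "buy_price", "sell_price"])
        writer.writeheader()
        writer.writerows(pairs)
    ```

    The same idea also works in awk with an associative array; the key change relative to the bash script is doing one hash lookup per row instead of re-scanning the whole file.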

    Read the article

  • Please help to find a solution for two-way, real-time synchronization on CentOS 5.5 64-bit

    - by Vipul Limbachiya
    I am in need of real-time, two-way synchronization software for CentOS 5.5 / 64-bit. Here's a little explanation: it needs to perform two-way synchronization, and it must be real-time. By real-time I mean it can be almost real-time; a delay of 1 second, for example, is fine. The folders are on the same server. I am currently using GlusterFS across two webservers. However, it has extremely poor small-file read performance, and it's slowing down my website. There's nothing more that can be done to improve this; I have already tested many configurations. As a solution, I was going to mount a RAM drive (tmpfs) that mirrors the GlusterFS web files and get the webserver to use the RAM drive. The issue is that I need two-way, real-time mirroring or replication between GlusterFS and the RAM drive; I need this because Apache writes files as well. As I said: real-time, two-way synchronization across two folders, which are in fact two different mount points, the RAM (tmpfs) mount point and the GlusterFS mount point. I already know about rsync (which is one-way) and Unison (which is not real-time). Please suggest any solution, free or paid. Thanks in advance.

    Read the article

  • Distributed storage and computing

    - by Tim van Elteren
    Dear Serverfault community, after researching a number of distributed file systems for deployment in a production environment, with the main purpose of performing both batch and real-time distributed computing, I've identified the following list of potential candidates, chosen mainly on maturity, license and support:
    Ceph
    Lustre
    GlusterFS
    HDFS
    FhGFS
    MooseFS
    XtreemFS
    The key properties that our system should exhibit:
    an open source, liberally licensed, yet production-ready solution, i.e. mature, reliable, and community- and commercially supported;
    the ability to run on commodity hardware, preferably designed for it;
    high availability of the data, with the focus on reads;
    high scalability, so operation over multiple data centres, possibly on a global scale;
    removal of single points of failure through replication and distribution of (meta-)data, i.e. fault tolerance.
    The sensitivity points that were identified, and resulted in the following questions, are:
    Transparency to the processing layer / application with respect to data locality, i.e. knowing where data is physically located at the server level, mainly for resource allocation and fast, high-performance processing: how can this be accomplished? Do you know from experience which solutions provide this transparency, and to what extent?
    POSIX compliance, or conformance, is mentioned on the wiki pages of most of the solutions listed above. The main question here is: how relevant is support for the POSIX standard? Hadoop, for example, isn't POSIX compliant by design; what are the pros and cons?
    What about the difference between synchronous and asynchronous operation of a distributed file system? Though a synchronous distributed file system is preferred for reliability, it also imposes certain limitations with respect to scalability. What would be, from your expertise, the way to go on this?
    I'm looking forward to your replies. Thanks in advance! :) With kind regards, Tim van Elteren

    Read the article

  • Can I boot up a virtual machine natively?

    - by Anshul
    My question is: is it possible to run a virtual machine natively on your hardware if you have installed the proper drivers, etc.? In other words, can I use a VHD as a regular hard drive to boot from? The reason I want to do this is that I do both graphics-intensive and audio-intensive work, but my computer is not powerful enough to handle both at the same time, and many times I install a bunch of audio programs that I don't want affecting the stability of my graphics programs. Basically I wanted sandboxing between the two sets of applications. So I tried running the graphics-intensive programs in a VirtualBox VM and doing the audio-intensive work natively (simply because it's a pain to route ASIO audio devices in and out of VirtualBox). This kind of works: the graphics-intensive stuff is tolerable, but still relatively slow, because it's running inside a VM. So my next idea was to just dual-boot and install the graphics and audio programs in separate partitions, but I frequently use them in tandem, so it wouldn't be practical to reboot my machine every time I need to use the other set of programs. But I could live with this scenario: if I need to do more audio-intensive work, I'll boot into the audio partition and run the graphics programs in a VM, and when I'm working heavily on the graphics side, I'll boot the graphics partition as a regular OS directly on the hardware. Is this possible, for example by booting a VHD as a regular hard drive? Or by setting up dual-boot and, every time the audio partition is shut down, synchronizing the graphics VM's VHD with the native graphics partition? Is it practical, given the above scenario? And if it's not possible, barring buying another computer, can anyone suggest a best-of-all-worlds setup (the worlds being performance, sandboxing, and running in parallel) for the above scenario? Thanks in advance.

    Read the article

  • Suggestions for Backup solution

    - by jiewmeng
    I am considering the following options: a Windows Home Server, a simple NAS, or extra HDDs in my desktop. By the way, I will be the main user. I am looking to fulfil the following needs:
    Reliability (I am thinking RAID 1 or 5).
    Not so prone to virus/malware infections (will using a separate NAS or home server help? A Windows Home Server is still a Windows PC, just separated by the network).
    Power efficiency (e.g. spin down when not in use).
    Downloads (e.g. I may want to download big files/torrents overnight, and I may not want to use a full-powered PC for it; is the power usage of a full PC versus a NAS different enough to justify the cost of a new system, especially since I am the only user?).
    Performance (I'd like to write to and access my files fast; on second thought, maybe for backup I can forgo this, perhaps with a WD Green HDD, but how much slower will it be? Plus, since I am the only user, the whole HDD will be mine).

    Read the article

  • Disk Activity Alert on Windows SBS 2003 on Dell PowerEdge 830 with RAID

    - by Ron Whites
    Background: I have a Dell PowerEdge 830 server running Windows Small Business Server 2003. It has 4GB of RAM and an ATA CERC SATA 6CH controller with 3 160GB drives in a RAID 5 configuration.
    The problem: I am seeing Admin "Disk Activity Alert on Server" emails. These often occur when disk backups, defragmentation or other high disk usage is going on. Generally the server isn't overstressed. The Disk Alert emails say, in part: "The following disk has low idle time, which may cause slow response time when reading or writing files to the disk. Disk: 0 C: F: D: Review the Disk Transfers/sec and % Idle Time counters for the PhysicalDisk performance object. If the Disk Transfers/sec counter is consistently below 150 while the % Idle Time counter remains very low (close to 0), there may be a problem with the disk driver or hardware."
    The questions I have: With what utility can I review the Disk Transfers/sec and % Idle Time counters? It appears there is no utility for that on the server! I think I may need to download a very large (two-DVD) Dell "OpenManage" utility to be able to monitor the RAID system and see what the problem is; is that true?

    Read the article

  • Effects of internet connection speeds on server queries

    - by SephMerah
    Can my internet connection significantly affect queries run in phpMyAdmin? I currently have 18 Mbps down and 30 Mbps up. I switched internet connections today and noticed a deep drop in query performance. The query that I am running is SELECT * FROM table. Simple. The table has one row of data. The MySQL server is on the same server as everything else. It is a VPS, hosted by GoDaddy. I don't have any other information. CentOS 6.3, MySQL 5.1, phpMyAdmin 3.4. OK, I used the browser's developer tools to inspect the XHR going out and coming in, and this is what it reported: {"success":true,"message":"<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec )<\/div>","sql_query":"<div id=\"result_query\" align=\"\">\n<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec ) SNIP..................."}. So apparently my server is fine. The strange thing, though, is that the XHR comes back as soon as I execute the query on the page, in less than a second, yet phpMyAdmin does not show the result immediately. I am going to try a re-install.
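
    One way to separate the server-side query time from network and page-rendering overhead is to time the same query from a bare client outside phpMyAdmin. This is a sketch only: it assumes the third-party pymysql package, that remote (or SSH-tunnelled) MySQL access is possible, and the connection details are placeholders.

    ```python
    # Hypothetical timing of the raw query, bypassing phpMyAdmin entirely
    # (assumes: pip install pymysql; host/user/password/database are placeholders).
    import time
    import pymysql

    conn = pymysql.connect(host="example-vps.example.com",
                           user="dbuser", password="secret", database="mydb")
    try:
        with conn.cursor() as cur:
            start = time.perf_counter()
            cur.execute("SELECT * FROM `table`")   # same one-row query as in the post
            rows = cur.fetchall()
            elapsed = time.perf_counter() - start
        print(f"{len(rows)} row(s) in {elapsed * 1000:.1f} ms including the round trip")
    finally:
        conn.close()
    ```

    If this reports only a few milliseconds, the slowness is in the phpMyAdmin page (rendering, extra round trips) or the path to it, not in MySQL itself.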

    Read the article

  • Flash Backed Write Cache (FBWC) without capacitor pack

    - by Martyn
    I bought an HP Smart Array P410 controller, and it is installed and working fine in an HP ProLiant MicroServer with 4 drives in two RAID 1 arrays. I didn't realise, however, that it came without any cache, so it would only work by writing straight to the disk, and the performance was horrible. So I then bought the 512MB Flash Backed Write Cache (FBWC) memory module, as I was under the impression that with FBWC I would not need a battery. I got this idea from a forum post: "What do you guys think of the choice between 'BBWC' (battery backed write cache) and 'FBWC' (flash backed write cache)? The flash-based ones use non-volatile memory so need no battery." After installing the cache module, however, the server pretty much won't boot. The P410 has a flashing amber light on it, and from the manual that doesn't sound good. I've managed to get into the on-board BIOS once, and even managed to boot to the HP Array Configuration Utility (ACU) CD once, but every other time the server continually reboots once it gets to the POST screen and reads ARRAY INITILIZING %%%. The one time I reached the ACU, it reported a problem with the cache module. To me, it seems like the cache module is faulty; however, the supplier tells me "Do you have an FBWC battery pack, p/n 587324-001, because that is required for the cache to work. If you have it, please complete an RMA form and we'll send a replacement / credit." Does this sound right to you? I've been ordering the parts from the US, and I don't want to spend $77 + $40 p&p on a battery and wait a week for shipping only to find the card is faulty, but I also don't want to send back a working card.

    Read the article

  • Toshiba A205-5804 freezes when plugged in

    - by heron
    Well, I have a Toshiba A205-5804, and the problem is that the screen freezes any time I plug the PC into the external power supply. Unlike most computers with this issue, my computer DOES freeze in safe mode as well, and I really can't bear this problem much longer... It's not an overheating problem; the computer is not getting hot or anything like that. I've already tried changing the AC adapter, booting only on AC with no battery, and also all of these suggestions:
    "Try changing the following setting in the BIOS setup, under the 'Advanced' tab: Dynamic CPU Frequency: Mode = Always Low (NOT DYNAMIC)." My laptop then ran on AC power without a problem for 24 hours, including many restarts, but when I went back to the original BIOS setting, the problem returned almost straight away.
    EDIT: Other suggestions I found on the web (from here and here):
    1. Set the power plan to High Performance.
    2. Set the power plan to "Minimal Power Management" (1 and 2 do conflict).
    3. Start - Control Panel - Device Manager - Processor - disable one of the two processors - reboot normally.
    4. Do this: only plug the battery into the laptop; turn on the laptop and start Windows normally; plug the AC adapter into the laptop (the screen will freeze); leave the laptop the way it is for 12-24 hours; after 12-24 hours, turn it off the hard way; once it is turned off, turn it back on. The laptop is working now.
    I have no idea what it could be...

    Read the article

  • System Center 2012 VMM UI is very slow

    - by Grant
    I've recently set up System Center 2012 on a new Server 2008 R2 server which I'm using for virtual machines. Everything seems to be working fine, and the virtual machines are nice and fast. But the Virtual Machine Manager interface is always excruciatingly slow, sometimes taking up to 15 seconds to move between screens. It's very frustrating to use when a task that involves just a couple of clicks ends up taking several minutes. Pages with a lot of form fields seem to take the longest to load, such as the page for changing a virtual machine's hardware settings. Is this just normal performance for VMM? If not, where can I look to find what is slowing it down? Nothing else on the system seems to suffer. I can load and use Hyper-V Manager with no noticeable slowness. Even programs like Event Viewer that are usually rather slow seem to load fairly fast. Only the System Center programs seem slow. The server is a Dell R710 with 2x 16-core Opteron 6274 processors and 96GB RAM. The OS drive is 2x 500GB 7.2k RPM SAS drives in RAID 1 (we opted for the less expensive 7.2k drives since pretty much everything is stored on the SAN). Am I just being impatient? Does anyone else use VMM 2012 and find it slow?

    Read the article

  • Upgrading memory in a laptop

    - by ulidtko
    I'm a bit confused about all the memory types and various bus frequencies of modern consumer PCs, and am requesting expert help on the subject. So far I'm confident that: I have an Asus X51L laptop with an unknown set of configuration options; the CPU in it supports PAE, so I still have a chance to extend the memory beyond 3 GiB, and the upper limit of the system is 8 GiB (?); the laptop has two SODIMM slots, one of which is occupied by a 2 GiB bank and the other of which is empty; and the dmidecode and lshw tools consistently report a 533 MHz frequency for the bank. The last one confuses me the most. I failed to find the characteristics of the northbridge in this laptop, and still can't figure out which DDR2 to look for. Is it DDR2-1066? Or, rather, PC2-8500/PC2-8600? Wouldn't a DDR2-800 bank harm the system's performance? Which kind of module should I look for in stores? Update: I have bought a 2 GiB DDR2-800 SODIMM, and it seems that the system can't handle 4 GiB of memory. When installed by itself in either slot, both the new and the old bank (which, by the way, happens to be marked GDDR2-677) work perfectly; i.e. any configuration resulting in 2 GiB works. When both banks are installed, though (totalling 4 GiB), the memcheck86 tool produces horrible artifacts and crashes, and the system reboots; an Ubuntu system can be started and even logged into a Unity session, but in this case the system also reboots under even a minor RAM load. So it's pretty obvious to me now that this laptop doesn't support 4 GiB of RAM or more.

    Read the article

  • KVM Hosting: How to efficiently replicate guests

    - by javano
    I have three KVM servers, each with 1 guest VM running directly on its local storage (so they are essentially each getting a dedicated box's worth of computing power). In the event of a host failure I would like the guests replicated to at least one of the other hosts, so I can spin them up there until the failing host is fixed. I am curious about KVM cloning. I can clone a VM live or when it's suspended/shut down. Obviously suspended VMs will naturally be quicker to clone, but these three VMs comprise three parts of a single solution, so I don't ever want any one of them shut down. How can I efficiently clone these VMs between servers? I have had a couple of ideas, but are these insane, or is there a better method I have missed for my scenario?
    Set up a DRBD partition between boxes 1 and 2 that VM 1 runs from, so it is replicated between box 1 and box 2; repeat between boxes 2 & 3, and boxes 3 & 1. (This could be insane; I have never used DRBD, only read about it.)
    Just use the standard KVM CLI clone options to perform live clones. (I'm dubious about this because I don't know how long it will take or what the performance impact will be while it runs.)
    Run a copy of each VM on at least one other host, and have the guest on one host export its data to the matching guest on another host, where it can import that data (scripting this on the guests).
    Some other way? Ideas welcome!
    Side note: these servers have 4x 15k SAS drives in a RAID 10, so they aren't rocketing fast, and as I mentioned, each VM runs from the host's local storage, no NAS or SAN etc. That is why I am asking this question about guest replication. Also, this isn't about disaster recovery. Guests will be exporting their data to a NAS over a VPN, so I am looking at how I can have them quickly spun up in a host-failure situation.

    Read the article

  • One 16K random read I/O issues two SCSI I/O requests (16K and 4K) in Linux

    - by hiroyuki
    I noticed a weird issue when benchmarking random read I/O on files in Linux (2.6.18). The benchmarking program is my own, and it simply keeps reading 16KB of a file from a random offset. I traced I/O behavior at the system-call level and at the SCSI level with SystemTap, and I noticed that one 16KB sysread issues two SCSI I/Os, as follows:
    SYSPREAD random(8472) 3, 0x16fc5200, 16384, 128137183232
    SCSI random(8472) 0 1 0 0 start-sector: 226321183 size: 4096 bufflen 4096 FROM_DEVICE 1354354008068009
    SCSI random(8472) 0 1 0 0 start-sector: 226323431 size: 16384 bufflen 16384 FROM_DEVICE 1354354008075927
    SYSPREAD random(8472) 3, 0x16fc5200, 16384, 21807710208
    SCSI random(8472) 0 1 0 0 start-sector: 1889888935 size: 4096 bufflen 4096 FROM_DEVICE 1354354008085128
    SCSI random(8472) 0 1 0 0 start-sector: 1889891823 size: 16384 bufflen 16384 FROM_DEVICE 1354354008097161
    SYSPREAD random(8472) 3, 0x16fc5200, 16384, 139365318656
    SCSI random(8472) 0 1 0 0 start-sector: 254092663 size: 4096 bufflen 4096 FROM_DEVICE 1354354008100633
    SCSI random(8472) 0 1 0 0 start-sector: 254094879 size: 16384 bufflen 16384 FROM_DEVICE 1354354008111723
    SYSPREAD random(8472) 3, 0x16fc5200, 16384, 60304424960
    SCSI random(8472) 0 1 0 0 start-sector: 58119807 size: 4096 bufflen 4096 FROM_DEVICE 1354354008120469
    SCSI random(8472) 0 1 0 0 start-sector: 58125415 size: 16384 bufflen 16384 FROM_DEVICE 1354354008126343
    As shown above, one 16KB pread issues two SCSI I/Os. (I traced SCSI I/O dispatching with the probe scsi.iodispatching; please ignore all values except start-sector and size.) One SCSI I/O is the 16KB I/O requested by the application, and that's fine. The issue is the other 4KB I/O: I don't know why Linux issues it. Of course, I/O performance is degraded by the extra 4KB I/O, and I am having trouble because of it. I also used fio (the well-known I/O benchmark tool) and noticed the same issue, so it's not caused by my application. Does anybody know what is going on? Any comments or advice are appreciated. Thanks.
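
    For reference, the described workload can be approximated in a few lines (a sketch only: the original benchmark is a separate C program, the file path and iteration count here are placeholders, the 16KB alignment is my choice, and nothing is done to bypass the page cache):

    ```python
    # Hypothetical reproduction of the workload: repeatedly pread() 16KB from
    # random offsets of a large file (os.pread needs Python 3.3+).
    import os
    import random

    BLOCK = 16 * 1024
    fd = os.open("/data/testfile", os.O_RDONLY)   # placeholder path
    size = os.fstat(fd).st_size

    for _ in range(100000):
        offset = random.randrange(0, size - BLOCK)
        offset -= offset % BLOCK                  # align to a 16KB boundary
        buf = os.pread(fd, BLOCK, offset)
        assert len(buf) == BLOCK

    os.close(fd)
    ```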

    Read the article

  • All applications quit when printing on Mac OS X 10.5.8

    - by Tamany
    I recently ran a software update. I'm not sure if my problems are associated with this, but I'm pretty sure they are, as I printed successfully before the update. I checked the log at the time of printing:
    03/05/2010 22:03:15 Microsoft Word[697] *** -[NSCFString _getValue:forType:]: unrecognized selector sent to instance 0x17a82b50
    03/05/2010 22:03:15 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
    03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
    03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
    03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
    03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
    03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
    03/05/2010 22:03:17 [0x0-0x51051].com.microsoft.Word[697] Mon May 3 22:03:17 leopards-imac-2.local Word[697] <Error>: The function `CGPDFDocumentGetMediaBox' is obsolete and will be removed in an upcoming update. Unfortunately, this application, or a library it uses, is using this obsolete function, and is thereby contributing to an overall degradation of system performance. Please use `CGPDFPageGetBoxRect' instead.
    03/05/2010 22:22:09 Microsoft Word[697] *** -[NSCFString _getValue:forType:]: unrecognized selector sent to instance 0x1b036500
    Any thoughts on how to fix this?

    Read the article

  • How do I repartition an SDHC card in Windows?

    - by Peter Mortensen
    How do I repartition an SDHC card (4 GB or more)? Do I need third-party tools or Linux (a live CD solution would be OK)? In Windows' Disk Management the Delete Partition option is dimmed out. I can reformat the card as FAT32, copy files to and from the card, and even change the file system to NTFS using the command-line command CONVERT, but not repartition it. The article How to Partition an SD Card in Windows XP talks about using "a Windows enabler program", which sounds rather dubious to me. I have tried changing from "Optimize for quick removal" to "Optimize for performance". The option to format as NTFS appeared, but the Delete Partition option is still dimmed out.
    Platform: Windows XP 64-bit
    SD card reader: USB 2.0 device, LogiLink® CR0005C Cardreader 3,5" USB 2.0 intern 54-in-1 mit USB Front
    Card: Kingston 16 GB SDHC card, speed class 4 (it could be formatted as FAT32 and successfully used in a 4 GB ReadyBoost setup under Windows 7)
    I have also tried on different versions of Windows and with different cards, with the same result:
    Kingston 4 GB SDHC card, speed class 4 (the one shown in the screenshot)
    Transcend 2 GB (not marked as SDHC, but SD)
    Windows 7 32-bit (albeit with a somewhat older card reader) and Windows XP 32-bit on an EliteBook 8730w

    Read the article
