Search Results

Search found 23103 results on 925 pages for 'performance issues and ha'.

Page 529/925

  • Create bootable partition from image

    - by Carson Darling
    I have a MacBook Pro with a dead CD drive, and I don't have easy access to a replacement drive. I'm trying to install Ubuntu, and I can't get the MBP to recognize a USB stick as a bootable medium (I believe MBPs have issues booting from USB in some cases). Is it possible to take a disk image and write it to a partition on the HDD, so that I can boot from that partition and then install the OS onto a third partition? Partition layout:
        150 GB  OS X
        50 GB   Ubuntu destination
        5 GB    Ubuntu install image
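
    For reference, the raw-write step I have in mind would look roughly like this (the ISO name and the disk0s4 identifier are just placeholders; diskutil list shows the real ones), with rEFIt or GRUB still needed afterwards to chain-load that partition:

        # OS X: find the target partition, unmount it, and write the image onto it
        diskutil list
        diskutil unmount /dev/disk0s4
        sudo dd if=ubuntu-desktop-amd64.iso of=/dev/disk0s4 bs=1m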

    Read the article

  • SQL Server 2008 No process is on the other end of the pipe

    - by Matt
    I've recently been having some issues with SQL Server 2008 on our server. We only have one website running at the moment, but the following error seems to occur at random:
        A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)
    I don't understand this error or what's causing it; any help would be appreciated. When this error does occur, all it takes to fix it is to restart the MSSQLSERVER service, but I need to find the underlying problem first.
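
    One diagnostic I have been considering, since the error names the Shared Memory provider, is to compare a connection forced over TCP with one forced over shared memory when the problem is occurring (server name here is hypothetical):

        rem force the TCP provider
        sqlcmd -S tcp:MYSERVER,1433 -E -Q "SELECT @@VERSION"
        rem force the shared memory provider
        sqlcmd -S lpc:MYSERVER -E -Q "SELECT @@VERSION"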

    Read the article

  • SAN cache memory upgrade

    - by Scott Lundberg
    We currently have an IBM DS4300 Dual Controller Fibre SAN. It is a good box, but getting pretty old. It came with 256MB of cache per controller. Recently we replaced the batteries in one of the controllers and noticed that the cache is a DDR PC2100 ECC DIMM. Of course, we are now thinking about how cheap this RAM is and whether there is any good reason we can't upgrade it. IBM used to offer a "Turbo" upgrade for this box that doubled the cache and added a bunch of software features for about 10K USD. Since that product has been end-of-lifed, I don't think we can get that upgrade, and we don't need the software additions (FlashCopy, StorageCopy, etc.). Besides the obvious potential warranty issue, what issues, if any, would we expect to see if we attempted to put two 1GB DIMMs in this unit? Is there anything else I am missing here? EDIT: Memory label: Samsung CN 0433 PC2100U-25331-A1 M381L3223ETM-CB0 256MB DDR PC2100 CL2.5 ECC

    Read the article

  • NTFS disk mounted as fuseblk in Ubuntu 12.10 is very slow and throws a lot of errors during rsync. Is that unusual?

    - by Pablo Marin-Garcia
    I am having problems with an NTFS disk mounted as fuseblk on my Ubuntu 12.10 system over external USB 3.0. When I did a 1.1 TB backup with rsync, the speed was 1-2 MB/s (with an ext4 disk the speed was 70 MB/s before and after trying the NTFS disk). Also, after one hour errors started to appear:
        rsync: write failed on "xxx": No such file or directory
        recv_files: "yyy" is a directory   # but this is a FILE, not a directory ??!!
        ....
    As this is the first time I have mounted NTFS in Linux for heavy usage (the data will be used in Windows afterwards), I would like to know whether this kind of thing is common, or whether something simply became unstable in my system and a restart would probably have solved it. This leads me to these questions: Can I trust FUSE to manage NTFS disks, or are the NTFS tools in Linux not yet totally stable for writing? Are people still suffering from low performance with FUSE NTFS vs ext4 (in the past I have read complaints about this)?
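
    For what it's worth, one thing I plan to try before the next run is remounting with ntfs-3g's large-write option; a sketch, with the device and mount point as placeholders:

        sudo umount /mnt/backup
        # big_writes lets ntfs-3g accept larger write requests; noatime avoids extra metadata writes
        sudo mount -t ntfs-3g -o big_writes,noatime /dev/sdb1 /mnt/backup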

    Read the article

  • Why do browsers have so many possible exploits?

    - by Beau Martínez
    When browsing I am occasionally given warnings about pages that host malware "that could damage my computer". I am seriously perplexed as to why, in 2010, browsers still have exploitable flaws and can be cracked. My question is "Why?". I'm assuming it's because of the rapid, insufficiently tested development that occurred during the browser wars, but I'm unsure. Surely WebKit would have patched all the issues in KHTML, Gecko sorted out the flaws in Netscape's engine, and the IE coders gone through their codebase to eliminate possible flaws? (Somewhat related: http://superuser.com/questions/117770/which-browser-is-the-most-secure-research-and-practically-based.)

    Read the article

  • Windows Experience Index could not be computed

    - by Alexey Ivanov
    I've recently upgraded from Windows Vista to Windows 8. When I try to rate my computer, it assesses DirectX 9 performance, then proceeds to the DirectX 10 tests, and it gets stuck at that point. After 5–10 minutes, it shows an error message. The video card is rather old: Mobile Intel 965 Express Chipset Family. I'm pretty sure it does not support DirectX 10. Why does Windows assess it with DirectX 10, and how can I make it skip the DirectX 10 tests and get the system rating? The driver was installed automatically by Windows 8 from Windows Update. Version: 8.15.10.2697, Date: 10/01/2012
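
    If it matters, I can re-run the assessment from an elevated command prompt to see exactly which sub-test hangs; my understanding is the command would be:

        rem discard previous assessment data and run the full formal assessment
        winsat formal -restart clean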

    Read the article

  • NFSv3 Asynchronous Write Depends on Block Size?

    - by Joe Swanson
    I am trying to figure out if my NFSv3 deployment is performing safe asynchronous writes. I suspect that it is doing strictly synchronous writes, as I am getting poor performance in general. I used Wireshark to look at the 'stable' flag in WRITE calls and to look for COMMIT calls. I noticed that, with especially large block sizes, writes appear to be performed asynchronously:
        dd if=/dev/zero of=/proj/re3/0/zero bs=2097152 count=512
    However, smaller block sizes appear to be written strictly synchronously:
        dd if=/dev/zero of=/proj/re3/0/zero bs=8192 count=655360
    What gives? How does the client decide whether to tell the server to perform writes synchronously or asynchronously? Is there any way I can get smaller block sizes to be written asynchronously?
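
    In case it is relevant, the export is currently mounted with the defaults; one knob I could try is an explicit, larger wsize, in case the negotiated write size is what drives the stable flag (server and export path here are placeholders):

        mount -t nfs -o vers=3,rsize=1048576,wsize=1048576 server:/export/re3 /proj/re3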

    Read the article

  • "Access Denied" error when starting Windows Security Center service

    - by Isxek
    I am working on a laptop with Windows 7 Ultimate (32-bit) which had previous issues with Microsoft Security Essentials. I removed the previous installation of Security Essentials and reinstalled it. There's no problem with the antivirus now, but after a couple of days the laptop was brought back to me because of an error about the Windows Security Center service not being started. I've tried setting it to start Automatically instead of "Delayed Start", but I still keep getting "Error 5: Access is denied." I've searched for other possible solutions, but they mostly amount to either what I did already or "Don't worry about it." Any ideas? Thanks in advance! EDIT: I've scanned the system with both Malwarebytes AM and SUPERAntiSpyware and found no traces of anything. EDIT2: I have also run sfc /scannow to see if system files might be damaged; however, it reported that no integrity violations were found.
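
    In case it helps, I can also dump the service configuration and its security descriptor from an elevated prompt to see where the access denial comes from; the commands I believe apply are:

        rem show startup configuration and the SDDL security descriptor for Security Center
        sc qc wscsvc
        sc sdshow wscsvc
        rem try starting it directly to capture the exact error
        net start wscsvc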

    Read the article

  • Mac friendly file sharing from VirtualBox

    - by kitsched
    I have set up Ruby on Rails on Ubuntu in a VirtualBox instance on my PC, enabled Samba, and I'm connecting to it over the home network from my Mac. All is fine except that I have issues deleting some files from inside applications: e.g. in Sublime Text 2, when I right-click a file in the file browser and select Delete, nothing happens (same in my Git client). To be able to delete files I have to navigate to the folder in Finder (which leaves those nasty .DS_Store files scattered all around) or issue the delete command from the terminal (inconvenient). If you're asking why I'm using VirtualBox for Rails instead of doing the development directly on the Mac, it's because of the ease of portability. So my question is: are there any network sharing options I could use to make the Linux instance play nicer with my Mac?
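
    If it stays Samba, I was thinking of adding something like this to the share definition to at least stop the .DS_Store litter and relax delete handling for the Mac (share name and path are placeholders):

        [rails]
            path = /home/me/rails
            read only = no
            veto files = /._*/.DS_Store/
            delete veto files = yes
            delete readonly = yes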

    Read the article

  • serving static assets via http is really slow compared to sshfs (apache2/nginx)

    - by s1lv3r
    After migrating to a new VPS I had some users complaining about slow-loading images on their sites. After creating some test files with dd, I realized that I can download all files via sshfs at full speed, while downloads over HTTP are painfully slow. The larger the file and the longer the transfer takes, the slower the transfer speed gets. I thought I had a problem with Apache and just spent the whole evening replacing Apache2 with nginx for static file serving, with no effect at all. There are no I/O wait states in top, tons of RAM free, no high CPU utilization, and hdparm shows decent I/O performance at all times. I just have no idea anymore what's happening on this server. This is a link to a demo file: http://master.dealux.de/file.tgz Does anybody have an idea what I can check?
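
    For anyone testing, this is roughly how I have been measuring the HTTP side with the demo file above, so the numbers can be compared against sshfs:

        # report size, average download speed, and total time for the demo file
        curl -o /dev/null -s -w 'size: %{size_download} B  speed: %{speed_download} B/s  time: %{time_total}s\n' http://master.dealux.de/file.tgz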

    Read the article

  • vqadmin "invalid language file" with google chrome

    - by MrStatic
    We have been running our qmail setup for a while with no issues. One of our admins has moved to Google Chrome as his main browser, and we have noticed something odd. No matter what, when he loads the vqadmin page it errors out with simply "invalid language file". Yet if he loads Firefox, Opera, Safari or (shudder) IE8, it works fine. Searching Google only turns up 'Use IE' or 'Set English as your language in the browser'. A: I try to have people avoid IE if possible, and B: there is no plain English option in Chrome.
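
    A hedged guess on my part is that vqadmin trips over Chrome's Accept-Language header (e.g. en-US,en;q=0.9 rather than plain en). One way to test that theory from the command line, assuming the usual vqadmin CGI path on our host:

        # simulate Chrome's header, then a plain 'en' header, and compare the responses
        # (add -u user:pass if the directory is password-protected)
        curl -s -H 'Accept-Language: en-US,en;q=0.9' http://mailhost/cgi-bin/vqadmin/vqadmin.cgi | head
        curl -s -H 'Accept-Language: en' http://mailhost/cgi-bin/vqadmin/vqadmin.cgi | head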

    Read the article

  • size of extent on LVM2

    - by piotrek
    In LVM1 there was a limit of 65k extents, so the extent size had to be chosen carefully as a trade-off between wasted space on partitions (too large an extent) and the maximum possible size of a logical volume (too small an extent). In LVM2 (according to http://docstore.mik.ua/manuals/hp-ux/en/5992-4589/apa.html) the limit is ~16 million extents, so the default size of 4 MB gives ~60 TB of LV size. So is there any point in making the extent larger than 4-16 MB on a desktop? Is there any performance degradation, or other cost, to having a large number of extents?
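
    For context, the only place the extent size would be set is at volume group creation; a sketch of what I would run if a bigger extent turned out to be worth it (VG name and PV are placeholders):

        # create a volume group with 16 MiB physical extents instead of the 4 MiB default
        vgcreate -s 16M vg_data /dev/sdb1
        # verify the PE size and extent count
        vgdisplay vg_data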

    Read the article

  • Unable to Get IIS Response Times with Hyperic Monitoring Tool

    - by jwmajors81
    We have a very large .NET application (vendor-supplied) that I am trying to gather performance metrics for using Hyperic. In general I want to be able to report response times for each of the components within the application, which include:
        Web services
        ASP.NET pages
        MSMQ
    I am currently unable to monitor the response times for IIS on my Windows machines. I have successfully auto-discovered them and I am getting diagnostic information, but I am not getting back the response times. After looking online at what other people are seeing, I found that I am missing the response-time tab that is usually next to the metrics tab. Also, when I look at the configuration screen for IIS I do not see a field that lets me specify where the log files are located. Please note that my logs are located at e:\LogFiles\ instead of the usual location, because our IT staff doesn't allocate much space on the C: drive. Also note that I have the MSMQ monitors up and running great. Any assistance would be greatly appreciated. Thanks, Jeremy

    Read the article

  • Cheap windows shared hosting service

    - by Elangovan
    Hi, I have recently purchased a domain from GoDaddy, and now I am looking for cheap Windows hosting. Following are my requirements:
        Shared Windows hosting
        Should be cheap in price
        Should have at least one SQL Server DB and one MySQL DB
        Should support at least ASP.NET 3.5 and PHP
        Will be good if it has support for ASP.NET MVC (no problem even if it is not available)
        Should be able to install third-party blog software
        Bandwidth, total space and performance are not very important
        Silverlight is also an added advantage (no problem even if it is not available)
        There should be no advertisement or banner added by the hosting company to the site
        Should have support for subdomains

    Read the article

  • Unmounting a zfs pool while it is shared with sharenfs

    - by Ted W.
    I have a Solaris (OpenIndiana) system which is getting poor disk write performance. In order to enable the ZIL in this version of ZFS I need to add a line to /etc/system, and this will not take effect until I've unmounted and remounted the zpool. The catch is that this pool is shared via NFS to about 200 other servers to host users' home directories. I can guarantee that no users will be accessing the disks during this maintenance window, but I would like to avoid having to issue an unmount on 200 systems just to unmount the disk on the Solaris box. My question is: with sharenfs, is it necessary to have all client systems disconnected before unmounting the filesystem on the host? If it's possible, how do you go about it? I've already tried unmounting the normal way, and it reports the disk is busy. There is no lsof in Solaris, and pfiles (I think that's what it was) does not show anything obviously using the mounts.
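
    For reference, the sequence I was attempting is roughly this (pool and filesystem names are hypothetical), and it is the unmount step that reports the dataset as busy:

        # stop NFS sharing for the dataset, then try to unmount it
        zfs unshare tank/home
        zfs unmount tank/home
        # Solaris has no lsof, but fuser -c lists processes using the mount point
        fuser -c /tank/home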

    Read the article

  • rsync error unexplained error (code 255) at io.c

    - by kabeer
    I was using a script to perform rsync from the sudo crontab. The script does a two-way rsync (from serverA to serverB and back). After I rebooted both server machines, the rsync stopped working from the sudo crontab. I also set up a new cron job and it fails too. The error is:
        rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
        rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
    However, when run from a terminal, the rsync script works as expected without issues. Please help.
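
    In case the environment matters, the crontab entry is essentially this shape (paths, schedule and key file are placeholders); I suspect the reboot matters because cron gets a minimal environment with no ssh-agent, so the key would have to be named explicitly:

        # sudo crontab entry: sync serverB into serverA nightly, logging output
        0 2 * * * rsync -az -e "ssh -i /root/.ssh/backup_key -o BatchMode=yes" serverB:/data/ /data/ >> /var/log/rsync-backup.log 2>&1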

    Read the article

  • Server 2008 Hard Faults

    - by claw
    Hey all, please bear with me as I haven't looked at a server in a very long time. The problem I am having is with a Windows 2008 Standard FE Service Pack 2 machine: Intel Xeon X3430 @ 2.40 GHz, 4 GB memory, 64-bit. There seem to be no problems other than physical memory peaking at 91%, always with over 100 hard faults per second. To my understanding, hard faults should be fairly rare on a machine with this much memory. Are there any logs I can show you, or investigate myself? The general performance of the machine is OK; I can access SBS 2008 and change settings fairly smoothly without hangs, etc. However, we connect to the server and do quite a bit of SQL via an application, and a query that retrieves, say, 20 rows can take 20+ seconds. Thanks in advance, Jamie EDIT: What the server is used for: IIS, an ASP web service, SQL 2008, Exchange. (Unable to upload screenshots due to low reputation; why doesn't my SO rep work here? :))
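
    If specific numbers help, I can capture the paging counters that, as far as I know, correspond to Resource Monitor's hard-fault figure:

        rem sample memory paging counters every 5 seconds, 12 samples
        typeperf "\Memory\Pages Input/sec" "\Memory\Page Reads/sec" "\Memory\Available MBytes" -si 5 -sc 12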

    Read the article

  • Setting up Windows 2008 with VPN and NAT

    - by Benson
    I have a Windows 2008 box set up with VPN, and that works quite well. NPS is used to validate the VPN clients, who are able to access the private address of the server once connected. I can't for the life of me get NAT working for the VPN clients, though. I've added NAT as a routing protocol and set the interface in the VPN address pool as private and the other as public, but it still won't NAT connections when I add a route through the VPN server's IP on the client side (route add SomeInternetIp IpOfPrivateInterfaceOnServer). I know I can reach the server's private interface (which happens to be 10.2.2.1) with the Remote Desktop client, so I can't think of any issue with the VPN itself.
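
    For completeness, the client-side route I am adding has this shape (the destination address here is just a documentation example, not a real target):

        rem send traffic for one test address through the VPN server's private interface
        route add 203.0.113.25 mask 255.255.255.255 10.2.2.1
        route print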

    Read the article

  • windows server 2008 vs ubuntu 11 [closed]

    - by user472875
    I am working on implementing a custom server application that should be capable of handling a very large volume of traffic. I am aware that this type of question has been asked a lot, but I haven't been able to find a good answer. What I'm really looking for is, for a server with given specs, which OS will be able to handle more traffic faster and more reliably. I do not care about rights management or any other features. I am fairly comfortable with both platforms, so I would like to pick the OS with better performance on a clean install with nothing else running. Thanks in advance.

    Read the article

  • Best choice for off-site backup: dd vs tar

    - by plok
    I have two 1TB single-partition hard disks configured as RAID1, of which I would like to make an off-site backup on a third disk, which I have yet to buy. The idea is to store the backup at a relative's house, considerably far away from my place, in the hope that all the information will be safe in the case of a global thermonuclear apocalypse. Of course, this backup would be well encrypted. What I still have to decide is whether I am simply going to tar the entire partition or, instead, use dd to create an image of the disks. Is there any non-trivial difference between these two approaches that I could be overlooking? This off-site backup would be updated no more than two or three times a year at best, so performance is not a factor to be pondered at all. What, and why, would you use if you were me: dd, tar, or a third option?
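
    In case it clarifies the choice, the two variants I am weighing look roughly like this (device, mount point and output paths are placeholders), both piped through symmetric GnuPG encryption:

        # file-level: archives the mounted filesystem, restorable file by file
        tar -cpf - /mnt/raid | gpg --symmetric --cipher-algo AES256 -o /mnt/offsite/backup.tar.gpg
        # block-level: images the whole RAID1 device, restorable only as a whole
        dd if=/dev/md0 bs=1M | gpg --symmetric --cipher-algo AES256 -o /mnt/offsite/raid1.img.gpg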

    Read the article

  • write-through RAM disk, or massive caching of file system?

    - by Will
    I have a program that is very heavily hitting the file system, reading and writing randomly to a set of working files. The files total several gigabytes in size, but I can spare the RAM to keep them all mostly in memory. The machines this program runs on are typically Ubuntu Linux boxes. Is there a way to configure the file system to have a very very large cache, and even to cache writes so they hit the disk later? I understand the issues with power loss or such, and am prepared to accept that. Crashing aside, in normal operation the writes should eventually reach the disk! Or is there a way to create a RAM disk that writes-through to real disk?
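
    To make the caching option concrete, the knobs I believe control this are the dirty-page sysctls; a sketch with deliberately aggressive values (illustrative, not a recommendation):

        # /etc/sysctl.conf: keep dirty pages in RAM much longer before writeback (apply with: sysctl -p)
        vm.dirty_background_ratio = 50     # background flushing starts only at 50% of RAM dirty
        vm.dirty_ratio = 80                # processes are throttled only at 80% dirty
        vm.dirty_expire_centisecs = 60000  # dirty pages may stay unwritten for up to 10 minutes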

    Read the article

  • Problem in installing & configuring bugzilla

    - by VIVEK
    Can somebody help me install Bugzilla (listing the exact steps if possible) on Windows Server 2008 with IIS 7.0, MySQL 5.1 and an existing, already-configured email server? I followed the installation steps from many posts and also from your site, and everything goes right, but when I try to create accounts by sending email from the Bugzilla web panel I get this error:
        There was an error sending mail from 'bugzilla-daemon@' to '[email protected]': Can't call method "address" on an undefined value at C:/Program Files (x86)/Bugzilla/perl/perl/site/lib/Email/Send/SMTP.pm line 25.
    I also tried installing through the Windows installer, but had the same issues. Any help would be appreciated; expecting a quick response.

    Read the article

  • internet connection problem related to a recurring registry error

    - by mats
    I have a Windows XP desktop system connected to the Internet via ethernet LAN. Initially, none of my browsers were connecting to the Internet, but I was able to update my antivirus software and all of my connection settings were perfect. I was able to solve the problem by running WinSock XP, which apparently fixes connection problems that have been caused by registry issues. My problem now is that every time I shut down the machine, the problem comes back (Internet browsers won't connect). Does anyone have any ideas/suggestions on this? Thanks
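
    For what it's worth, my understanding is that the built-in commands below perform a similar repair to the tool I have been running (a reboot is needed afterwards), which is what I currently end up doing after each shutdown:

        rem reset the Winsock catalog and the TCP/IP stack
        netsh winsock reset
        netsh int ip reset resetlog.txt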

    Read the article

  • What resources are best for staying current about information security?

    - by dr.pooter
    What types of sites do you visit, on a regular basis, to stay current on information security issues? Some examples from my list include:
        http://isc.sans.org/
        http://www.kaspersky.com/viruswatch3
        http://www.schneier.com/blog/
        http://blog.fireeye.com/research/
    I also follow the security heavyweights on Twitter. I'm curious to hear what resources you recommend for daily monitoring, including anything specific to particular operating systems or other software. Are mailing lists still considered valuable? My goal is to trim the cruft from everything I'm currently subscribed to and focus on the essentials.

    Read the article

  • Worth it to move /var to physical disk vs logical?

    - by Tammer Ibrahim
    Brief question about partition layout. I use an SSD for the /, /boot, /usr and /home partitions. I'd like to move /var to a mechanical disk to minimize writes to the SSD. I'm mainly concerned about maximizing drive life rather than maximizing performance (although I obviously wouldn't want to cripple my server). My mechanical disks consist of two drives sharing LVM and a third used for nightly rsync backups. I also have a bunch of old 2.5in hard disks lying around. My question is: should I simply create a new LVM volume for /var on my primary data store, or would it be worth the increased energy consumption (to spare the lifetime of the LVMed drives) of installing a low-volume 2.5in disk to use just for /var? On a more general level, my question is about the trade-offs of placing OS mounts on the same physical volumes as my data. Thanks for any help!
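
    If I go the LVM route, the steps I have sketched out look like this (VG name, size and filesystem are assumptions on my part):

        # create and format a dedicated /var volume on the existing LVM pair
        lvcreate -L 20G -n var vg_data
        mkfs.ext4 /dev/vg_data/var
        # after copying the current /var across from single-user mode, add to /etc/fstab:
        #   /dev/vg_data/var  /var  ext4  defaults,noatime  0  2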

    Read the article
